Allow the endpoint for telemetry to be passed in via the CLI args, e.g.
```
--telemetry-endpoint "https://somehost/test/"
```
The actual endpoint always appends `v3` to it, so the URL above becomes
"https://somehost/test/v3".
Separate out methods of the Catalog API that are used on the query side into a new trait `DatabaseSchemaProvider`. The new trait includes methods from the Catalog that get the underlying `DatabaseSchema` or interact with names/IDs.
This will allow for a separate implementation of the Catalog for pro that only needs to hold a replicated/combined view in-memory of one or more catalogs without the need to do persistence that a write buffer's catalog needs to do.
While in there I also switched the `QueryExecutorImpl::new` method to take an args struct to avoid the clippy lint.
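A rough sketch of the shape such a query-side trait could take (the method names and signatures here are illustrative, not the exact API):
```rust
use std::sync::Arc;

// Illustrative stand-ins for the real catalog types.
struct DatabaseSchema;
type DbId = u32;

/// Sketch: only the read-only schema/name/ID lookups needed on the query side,
/// with no persistence concerns.
trait DatabaseSchemaProvider {
    fn db_schema(&self, name: &str) -> Option<Arc<DatabaseSchema>>;
    fn db_name_to_id(&self, name: &str) -> Option<DbId>;
    fn db_id_to_name(&self, id: DbId) -> Option<Arc<str>>;
    fn list_db_schema(&self) -> Vec<Arc<DatabaseSchema>>;
}
```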
* feat: Add TableId and ColumnId
* feat: swap over to DbId and TableId everywhere
This commit swaps us over to using the DbId and TableId types everywhere
in our internal systems. Anything that is external facing, such as names
for last cache tables or line protocol parsing, still uses names. In these cases
we have the `Catalog`, which keeps TableIds and DbIds in a
bidirectional mapping for easy lookup, i.e. id <-> name. While in essence
the change itself isn't that complicated, given how much we
depended on names for things the changes end up being quite invasive and
extensive. Luckily it shouldn't be too hard to review. Note this does
not add the column ids; that will be done in a follow-up PR.
Closes #25375, closes #25403, closes #25404, closes #25405, closes #25412, closes #25413
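A minimal sketch of the kind of bidirectional lookup the `Catalog` maintains (the `NameIdMap` type and plain `u32` ids are purely illustrative):
```rust
use std::collections::HashMap;
use std::sync::Arc;

// Illustrative id <-> name lookup; the real Catalog holds far more state and
// uses its own DbId/TableId types.
#[derive(Default)]
struct NameIdMap {
    name_to_id: HashMap<Arc<str>, u32>,
    id_to_name: HashMap<u32, Arc<str>>,
}

impl NameIdMap {
    fn insert(&mut self, id: u32, name: &str) {
        let name: Arc<str> = name.into();
        self.name_to_id.insert(Arc::clone(&name), id);
        self.id_to_name.insert(id, name);
    }

    fn id(&self, name: &str) -> Option<u32> {
        self.name_to_id.get(name).copied()
    }

    fn name(&self, id: u32) -> Option<Arc<str>> {
        self.id_to_name.get(&id).cloned()
    }
}
```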
- added a mechanism within PersistedFile to expose parquet-file-related metrics (see the sketch below). The details are updated when a new snapshot is generated and also when all snapshots are loaded at process startup
- at the point of creating the telemetry payload, these parquet metrics are looked up before the payload is sent to the server.
Closes: https://github.com/influxdata/influxdb/issues/25418
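A hypothetical sketch of the kind of aggregates surfaced for telemetry (field and type names are illustrative):
```rust
// Illustrative shape of the parquet metrics; the real code derives these from
// the persisted-file state kept per database/table.
#[derive(Default)]
struct ParquetMetrics {
    file_count: u64,
    size_mb: f64,
    row_count: u64,
}

struct ParquetFile {
    size_bytes: u64,
    row_count: u64,
}

fn aggregate(files: &[ParquetFile]) -> ParquetMetrics {
    files.iter().fold(ParquetMetrics::default(), |mut m, f| {
        m.file_count += 1;
        m.size_mb += f.size_bytes as f64 / (1024.0 * 1024.0);
        m.row_count += f.row_count;
        m
    })
}
```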
- instrumented code to get read and write measurements
- introduced EventsBucket for collecting reads/writes (see the sketch below)
- the sampler now samples all metrics every minute (including reads/writes)
- other tidy-ups
closes: https://github.com/influxdata/influxdb/issues/25372
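A hypothetical sketch of the per-window read/write collection (names and fields are illustrative; the real EventsBucket tracks more detail per sample):
```rust
// Illustrative only: counts reads/writes within the current sample window.
#[derive(Default)]
struct EventsBucket {
    num_writes: u64,
    num_reads: u64,
}

impl EventsBucket {
    fn add_write_sample(&mut self) {
        self.num_writes += 1;
    }

    fn add_read_sample(&mut self) {
        self.num_reads += 1;
    }

    /// Called by the sampler once a minute: take the counts and reset the window.
    fn reset(&mut self) -> (u64, u64) {
        let counts = (self.num_writes, self.num_reads);
        *self = Self::default();
        counts
    }
}
```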
* test: check parquet cache in the write buffer
Checked that the parquet cache will serve queries when chunks are
requested from the write buffer. The added test also checks for get_range
requests made to the object store, which are typically made by DataFusion
to infer schema for parquet files.
* refactor: make parquet cache optional on write buffer
* test: add test to verify parquet cache function
This makes the parquet cache optional at the write buffer level, and adds
a test that verifies that the cache catches and prevents requests to the
object store in the event of a cache hit.
Closes #25382, closes #25383
This refactors the parquet cache to use less locking by switching from using the `clru` crate to a hand-rolled cache implementation. The new cache still acts as an LRU, but it uses atomics to track hit-time per entry, and handles pruning in a separate process that is decoupled from insertion/gets to the cache.
The `Cache` type uses a [`DashMap`](https://docs.rs/dashmap/latest/dashmap/struct.DashMap.html) internally to store cache entries. This should help reduce lock contention, and also has the added benefit of not requiring mutability to insert into _or_ get from the map.
The cache maps an `object_store::Path` to a `CacheEntry`. On a hit, an entry will have its `hit_time` (an `AtomicI64`) incremented. During a prune operation, entries that have the oldest hit times will be removed from the cache. See the `Cache::prune` method for details.
The cache is set up with a memory _capacity_ and a _prune percent_. The cache tracks memory used when entries are added, based on their _size_, and when a prune is invoked in the background, if the cache has exceeded its capacity, it will prune `prune_percent * cache.len()` entries from the cache.
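A simplified sketch of the hit-time and prune idea, assuming the hit time is stored as a nanosecond timestamp (the real cache keys entries by `object_store::Path` inside the `DashMap`):
```rust
use std::sync::atomic::{AtomicI64, Ordering};

struct CacheEntry {
    size: usize,
    hit_time: AtomicI64,
}

impl CacheEntry {
    /// Record an access without needing `&mut self`.
    fn touch(&self, now_ns: i64) {
        self.hit_time.store(now_ns, Ordering::Relaxed);
    }
}

/// Remove the `n_to_prune` least recently hit entries; returns the bytes freed.
fn prune(entries: &mut Vec<CacheEntry>, n_to_prune: usize) -> usize {
    entries.sort_by_key(|e| e.hit_time.load(Ordering::Relaxed));
    entries
        .drain(..n_to_prune.min(entries.len()))
        .map(|e| e.size)
        .sum()
}
```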
Two tests were added:
* `cache_evicts_lru_when_full` to check LRU behaviour of the cache
* `cache_hit_while_fetching` to check that a cache entry hit while a request is in flight to fetch that entry will not result in extra calls to the underlying object store
Part of #25347
This sets up a new implementation of an in-memory parquet file cache in the `influxdb3_write` crate in the `parquet_cache.rs` module.
This module introduces the following types:
* `MemCachedObjectStore` - a wrapper around an `Arc<dyn ObjectStore>` that can serve GET-style requests to the store from an in-memory cache
* `ParquetCacheOracle` - an interface (trait) that can accept requests to create new cache entries in the cache used by the `MemCachedObjectStore`
* `MemCacheOracle` - implementation of the `ParquetCacheOracle` trait
## `MemCachedObjectStore`
This takes inspiration from the [`MemCacheObjectStore` type](1eaa4ed5ea/object_store_mem_cache/src/store.rs (L205-L213)) in core, but has some different semantics around its implementation of the `ObjectStore` trait, and uses a different cache implementation.
The reason for wrapping the object store is that this ensures that any GET-style request being made for a given object is served by the cache, e.g., metadata requests made by DataFusion.
The internal cache comes from the [`clru` crate](https://crates.io/crates/clru), which provides a least-recently used (LRU) cache implementation that allows for weighted entries. The cache is initialized with a capacity and entries are given a weight on insert to the cache that represents how much of the allotted capacity they will take up. If there isn't enough room for a new entry on insert, then the LRU item will be removed.
### Limitations of `clru`
The `clru` crate conveniently gives us an LRU eviction policy but its API may put some limitations on the system:
* gets to the cache require an `&mut` reference, which means that the cache needs to be behind a `Mutex`. If this slows down requests through the object store, then we may need to explore alternatives.
* we may want more sophisticated eviction policies than a straight LRU, i.e., to favour certain tables over others, or files that represent recent data over those that represent old data.
## `ParquetCacheOracle` / `MemCacheOracle`
The cache oracle is responsible for handling cache requests, i.e., to fetch an item and store it in the cache. In this PR, the oracle runs a background task to handle these requests. I defined this as a trait/struct pair since the implementation may look different in Pro vs. OSS.
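A rough sketch of what that trait/struct pair could look like (the request type and method name are illustrative):
```rust
use object_store::path::Path;

// Illustrative request: ask for the object at `path` to be fetched and cached.
struct CacheRequest {
    path: Path,
}

trait ParquetCacheOracle: Send + Sync {
    /// Register a request to populate the cache; the fetch itself happens on a
    /// background task, so this call does not block the caller.
    fn register(&self, request: CacheRequest);
}
```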
- uses Arc<str> to represent create-once, read-everywhere strings
- updated snapshots for insta assertions, using redaction to hard-code randomly generated UUID strings
- added methods to the catalog to expose instance and host ids
Closes: https://github.com/influxdata/influxdb/issues/25315
This makes some changes to the TestServer E2E framework, which is used
for running integration tests in the influxdb3 crate. These changes are
meant so that we can more easily split the code for pro.
* refactor: add catalog as dep to influxdb3
* refactor: move catalog and last cache initialization out of write buffer
The write buffer used to handle initialization of the catalog and last-n-value
cache. This commit moves that logic out, so that both can be
initialized independently, and injected into the write buffer. This is to
enable downstream changes that will need to make sharing the catalog and
last cache possible.
* refactor: use dyn traits in WriteBufferImpl
This changes the WriteBufferImpl to use a dyn TimeProvider instead of
a generic in its type signature.
The Server type now uses a dyn WriteBuffer instead of using a generic
in its type signature, and the ServerBuilder was updated to accommodate
this accordingly.
These changes were made to make downstream code changes more seamless.
* refactor: make some items pub
This makes functions on the QueryableBuffer and LastCache pub so that they
can be used downstream.
The Persister trait was only implemented by a single type. Because the
underlying ObjectStore interface has several ways of being mocked, we
mock that instead of the Persister interface.
This commit removes the Persister trait, and moves its interface/impl
directly on a single Persister type in the persister module of the
influxdb3_write crate.
deny.toml had some incorrect field names in license.exceptions; those
were fixed from 'crate' to 'name'.
* refactor: Make Level0Duration part of WAL
I noticed this during some testing and cleanup with other PRs. The WAL had its own level_0_duration and the write buffer had a different one, which would cause some weird problems if they weren't the same. This refactors Level0Duration to be in the WAL and fixes up the tests.
As an added bonus, this surfaced a bug where multiple L0 blocks getting persisted in the same snapshot wasn't supported. So now snapshot details can have many files per table.
* fix: have persisted files always return in descending data time order
* fix: sort record batches for test verification
This extends the system tables available with a new `parquet_files` table
which will list the parquet files associated with a given table in a
database.
Queries to system.parquet_files must provide a table_name predicate to
specify the table name of interest.
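For example, a query against the new system table might look like this (the table name is just illustrative):
SELECT * FROM system.parquet_files WHERE table_name = 'mem'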
The files are accessed through the QueryableBuffer.
In addition, a test was added to check success and failure modes of the
new system table query.
Finally, the Persister trait had its associated error type removed. This
was somewhat of a consequence of how I initially implemented this change,
but I felt it cleaned the code up a bit, so I kept it in the commit.
This enforces the use of a host identifier prefix in all object store
paths (currently, for parquet files, catalog files, and snapshot files).
The persister retains the host identifier prefix, and uses it when
constructing paths.
The WalObjectStore also holds the host identifier prefix, so that it can
use it when saving and loading WAL files.
The influxdb3 binary requires a new argument, 'host-id', to be passed, which
is used to specify the prefix.
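An illustrative sketch of prefixing object store paths with the host identifier (the directory layout shown here is hypothetical):
```rust
use object_store::path::Path;

// Illustrative only: the exact layout is defined by the persister, but every
// path starts with the host identifier prefix.
fn parquet_path(host_id: &str, db: &str, table: &str, file_number: u64) -> Path {
    Path::from(format!("{host_id}/dbs/{db}/{table}/{file_number}.parquet"))
}
```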
* fix: query bugs with buffer
This fixes three different bugs with the buffer. First was that aggregations would fail because projection was pushed down to the in-buffer data that de-duplication needs to be called on. The test in influxdb3/tests/server/query.rs catches that.
I also added a test in write_buffer/mod.rs to ensure that data is correctly queryable when combining with different states: only data in buffer, only data in parquet files, and data across both. This showed two bugs: one where the parquet data was being doubled up (parquet chunks were being created in write buffer mod and in queryable buffer); the second was that the timestamp min/max on the table buffer would panic if the buffer was empty.
* refactor: PR feedback
* fix: fix wal replay and buffer snapshot
Fixes two problems uncovered by adding to the write_buffer/mod.rs test. Ensures we can replay wal data and that snapshots work properly with replayed data.
* fix: run cargo update to fix audit
* feat: refactor WAL and WriteBuffer
There is a ton going on here, but here are the high level things. This implements a new WAL, which is backed entirely by object store. It then updates the WriteBuffer to be able to work with how the new WAL works, which also required an update to how the Catalog is modified and persisted.
The concept of Segments has been removed. Previously there was a separate WAL per segment of time. Instead, there is now a single WAL that all writes and updates flow into. Data within the write buffer is organized by Chunk(s) within tables, which is based on the timestamp of the row data. These are known as the Level0 files, which will be persisted as Parquet into object store. The default chunk duration for level 0 files is 10 minutes.
The WAL is written as single files that get created at the configured WAL flush interval (1s by default). After a certain number of files have been created, the server will attempt to snapshot the WAL (default is to snapshot the first 600 files of the WAL after we have 900 total, i.e. snapshot 10 minutes of WAL data).
The design goal with this is to persist 10 minute chunks of data that are no longer receiving writes, while clearing out old WAL files. This works if data is being written in around "now" with no more than 5 minutes of delay. If we continue to have delayed writes, a snapshot of all data will be forced in order to clear out the WAL and free up memory in the buffer.
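A sketch of that snapshot trigger using the default numbers mentioned above (the type and method names are illustrative):
```rust
// Defaults described above: snapshot the first 600 WAL files once 900 exist.
struct SnapshotPolicy {
    snapshot_size: usize, // e.g. 600
    trigger_at: usize,    // e.g. 900
}

impl SnapshotPolicy {
    /// How many of the oldest WAL files to fold into a snapshot, if any.
    fn files_to_snapshot(&self, wal_file_count: usize) -> Option<usize> {
        (wal_file_count >= self.trigger_at).then_some(self.snapshot_size)
    }
}
```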
Overall, this structure of a single wal, with flushes and snapshots and chunks in the queryable buffer led to a simpler setup for the write buffer overall. I was able to clear out quite a bit of code related to the old segment organization.
Fixes #25142 and fixes #25173
* refactor: address PR feedback
* refactor: wal to replay and background flush on new
* chore: remove stray println
This commit updates us to rustc 1.80. There are three significant changes
here:
1. LazyLock and LazyCell have been stabilized, meaning we can replace our
usage of Lazy from the once_cell crate with the std lib versions
2. Lints were added to handle unknown cfg directives. `tokio_unstable`
is affected by this, and while we do have the flags in our
.cargo/config.toml, Cargo still outputs a lint for it, so we suppress
that warning now in our Cargo.toml for the workspace
3. clippy now throws a new warning about priority levels for lints. It's
quite frankly a thing that doesn't make sense to me and should be
something cargo fixes, but here we are.
Besides that it was a painless upgrade and now we're on the latest and
greatest.
* fix: catalog support for last caches that accept new fields
Last cache definitions in the catalog were augmented to either store an
explicit set of column names (including time), or to accept new fields.
This will allow these caches to be loaded properly on server restart such
that all non-key columns are cached.
* refactor: use tagged serialization for last cache values def
This also updated the client code to accept the new structure in
influxdb3_client.
* test: add e2e tests to catch regressions in influxdb3_client
* chore: cargo update for audit
Closes#25169
This PR ensures the last cache configuration is persisted to the catalog when last caches are created, and removed from the catalog when they are deleted. The last cache is initialized on server start from the catalog.
A new trait was added to the write buffer: LastCacheManager, which provides the methods to create and delete last caches (and which is invoked from the HTTP API). Both create/delete methods will update the catalog, but also force persistence of the catalog to object store, vs. waiting for the WAL flush interval / segment persistence process to do it. This should ensure that the catalog is up-to-date with respect to the last cache configuration, in the event that the server is stopped before segment persistence.
A test was added to check this behaviour in influxdb3_write/src/write_buffer/mod.rs.
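A rough sketch of what such a trait could look like (signatures are simplified and omit the cache configuration parameters such as key/value columns, count, and TTL):
```rust
// Illustrative only; the real trait returns richer types and takes more arguments.
trait LastCacheManager {
    type Error;

    /// Create a last cache for the given table, updating and persisting the catalog.
    fn create_last_cache(
        &self,
        db: &str,
        table: &str,
        cache_name: Option<&str>,
    ) -> Result<(), Self::Error>;

    /// Delete a last cache, removing it from the catalog and persisting the change.
    fn delete_last_cache(&self, db: &str, table: &str, cache_name: &str)
        -> Result<(), Self::Error>;
}
```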
* feat: new last-cache CLI
This adds two new CLIs:
influxdb3 last-cache create
influxdb3 last-cache delete
These utilize the new underlying APIs/client methods for the last-n-value
cache feature.
* refactor: switch around the token CLI to new convention
* docs: re-word CLI docs
Added a new system table, system.last_caches, to enable queries that display information about last caches in a database.
You can query the table like so:
SELECT * FROM system.last_caches
Since queries are scoped to a database, this will only show last caches configured for the database being queried.
Results look like so:
+-------+----------------+----------------+---------------+-------+-----+
| table | name | key_columns | value_columns | count | ttl |
+-------+----------------+----------------+---------------+-------+-----+
| mem | mem_last_cache | [host, region] | [time, usage] | 1 | 60 |
+-------+----------------+----------------+---------------+-------+-----+
An end-to-end test was added to verify queries to the system.last_caches table.
Adds an API for deleting last caches.
- The API allows parameters to be passed in either the request URI query string, or in the body as JSON
- Some additional error modes were handled, specifically, for better HTTP status code responses, e.g., invalid content type is now a 415, URL query string parsing errors are now 400
- An end-to-end test was added to check behaviour of the API
Closes#25096
- Adds a new HTTP API that allows the creation of a last cache, see the issue for details
- An E2E test was added to check success/failure behaviour of the API
- Adds the mime crate, for parsing request MIME types, but this is only used in the code I added - we may adopt it in other APIs / parts of the HTTP server in future PRs
* feat: impl datafusion traits on last cache
Created a new module for the DataFusion table function implementations.
The TableProvider impl for LastCache was moved there, and new code that
implements the TableFunctionImpl trait to make the last cache queryable
was also written.
The LastCacheProvider and LastCache were augmented to make this work:
- The provider stores an Arc<LastCache> instead of a LastCache
- The LastCache uses interior mutability via an RwLock, to make the above
possible.
* feat: register last_cache UDTF on query context
* refactor: make server accept listener instead of socket addr
The server used to accept a socket address and bind it directly, returning
an error if the bind fails.
This commit changes that so the ServerBuilder accepts a TcpListener. The
behaviour is essentially the same, but this allows us to bind the address
from tests when instantiating the server, so we can easily assign unused
ports.
Tests in the influxdb3_server were updated to exploit this in order to
use port 0 auto assignment and stop flaky test failures.
A new, failing, test was also added to that module for the last cache.
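For illustration, binding port 0 lets the OS assign a free port; the listener (with its real address) is then handed to the server builder. Shown here with the std listener for brevity:
```rust
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // Port 0 asks the OS for any unused port, avoiding collisions between tests.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    println!("listening on {}", listener.local_addr()?);
    Ok(())
}
```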
* refactor: naive implementation of last cache key columns
Committing here as the last cache is in a working state, but it is naively
implemented as it just stores all key columns again (still with the hierarchy)
* refactor: make the last cache work with the query executor
* chore: fix my own feedback and appease clippy
* refactor: remove lower lock in last cache
* chore: cargo update
* refactor: rename function
* fix: broken doc comment
* feat: base for last cache implementation
Each last cache holds a ring buffer for each column in an index map, which
preserves the insertion order for faster record batch production.
The ring buffer uses a custom type to handle the different supported
data types that we can have in the system.
* feat: implement last cache provider
LastCacheProvider is the API used to create last caches and write
table batches to them. It uses a two-layer RwLock/HashMap: the first layer for
the database, and the second for the table within the database.
This allows for table-level locks when writing in buffered data, and only
gets a database-level lock when creating a cache (and in future, when
removing them as well).
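One possible shape of that nesting, as a sketch (the exact lock placement in the real code may differ):
```rust
use std::collections::HashMap;
use std::sync::RwLock;

struct LastCache; // stand-in for the real cache type

// Outer lock guards the set of databases (taken as a write lock only when
// creating/removing caches); the inner lock guards one database's tables.
type CacheMap = RwLock<HashMap<String, RwLock<HashMap<String, LastCache>>>>;
```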
* test: APIs on write buffer and test for last cache
Added basic APIs on the write buffer to access the last cache and then a
test to the last_cache module to see that it works with a simple example
* docs: add some doc comments to last_cache
* chore: clippy
* chore: one small comment on IndexMap
* chore: clean up some stale comments
* refactor: part of PR feedback
Addressed three parts of PR feedback:
1. Remove double-lock on cache map
2. Re-order the get when writing to the cache to be outside the loop
3. Move the time check into the cache itself
* refactor: nest cache by key columns
This refactors the last cache to use a nested caching structure, where
the key columns for a given cache are used to create a hierarchy of
nested maps, terminating in the actual store for the values in the cache.
Access to the cache is done via a set of predicates which can optionally
specify the key column values at any level in the cache hierarchy to only
gather record batches from children of that node in the cache.
Some todos:
- Need to handle the TTL
- Need to move the TableProvider impl up to the LastCache type
* refactor: TableProvider impl to LastCache
This re-writes the datafusion TableProvider implementation on the correct
type, i.e., the LastCache, and adds conversion from the filter Expr's to
the Predicate type for the cache.
* feat: support TTL in last cache
Last caches will have expired entries walked when writes come in.
* refactor: add panic when unexpected predicate used
* refactor: small naming convention change
* refactor: include keys in query results and no null keys
Changed key columns so that they do not accept null values, i.e., rows
that are pushed that are missing key column values will be ignored.
When producing record batches for a cache, if not all key columns are
used in the predicate, then this change makes it so that the non-predicate
key columns are produced as columns in the outputted record batches.
A test with a few cases showing this was added.
* fix: last cache key column query output
Ensure key columns in the last cache that are not included in the
predicate are emitted in the RecordBatches as a column.
Cleaned up and added comments to the new test.
* chore: clippy and some un-needed code
* fix: clean up some logic errors in last_cache
* test: add tests for non default cache size and TTL
Added two tests, as per commit title. Also moved the eviction process
to a separate function so that it was not being done on every write to
the cache, which could be expensive, and this ensures that entries are
evicted regardless of whether writes are coming in or not.
* test: add invalid predicate test cases to last_cache
* test: last_cache with field key columns
* test: last_cache uses series key for default keys
* test: last_cache uses tag set as default keys
* docs: add doc comments to last_cache
* fix: logic error in last cache creation
CacheAlreadyExists errors were only based on the database and
table names, and did not include the cache names, which was not
correct.
* docs: add some comments to last cache create fn
* feat: support null values in last cache
This also adds explicit support for series key columns to distinguish
them from normal tags in terms of nullability
A test was added to check nulls work
* fix: reset last cache last time when ttl evicts all data
* refactor: use hashbrown with entry_ref api
* refactor: use hashbrown hashmap instead of std hashmap in places that would benefit from the `entry_ref` API
* chore: Cargo update to pass CI
* feat: track buffer memory usage and persist
This is a bit light on the test coverage, but I expect there is going to be some big refactoring coming to segment state and some of these other pieces that track parquet files in the system. However, I wanted to get this in so that we can keep things moving along. Big changes here:
* Create a persister module in the write_buffer
* Check the size of the buffer (all open segments) every 10s and predict its size in 5 minutes based on growth rate (see the sketch after this list)
* If the projected growth rate is over the configured limit, either close segments that haven't received writes in a minute, or persist the largest tables (oldest 90% of their data)
* Added functions to table buffer to split a table based on 90% older timestamp data and 10% newer timestamp data, to persist the old and keep the new in memory
* When persisting, write the information in the WAL
* When replaying from the WAL, clear out the buffer of the persisted data
* Updated the object store path for persisted parquet files in a segment to have a file number since we can now have multiple parquet files per segment
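A sketch of the size-projection check from the bullets above (the 10s sample interval and 5 minute horizon follow the description; the helper functions themselves are hypothetical):
```rust
/// Extrapolate the buffer size 5 minutes (300s) ahead from the growth between
/// two samples taken `sample_secs` apart.
fn projected_size(current_bytes: u64, previous_bytes: u64, sample_secs: u64) -> u64 {
    let growth_per_sec = current_bytes.saturating_sub(previous_bytes) as f64 / sample_secs as f64;
    current_bytes + (growth_per_sec * 300.0) as u64
}

/// True when the projection exceeds the configured limit and segments should be
/// closed or large tables persisted.
fn must_persist(current: u64, previous: u64, limit_bytes: u64) -> bool {
    projected_size(current, previous, 10) > limit_bytes
}
```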
* refactor: PR feedback
Introduce the experimental series key feature to monolith, along with the new `/api/v3/write` API which accepts the new line protocol to write to tables containing a series key.
Series key
* The series key is supported in the `schema::Schema` type by the addition of a metadata entry that stores the series key members in their correct order. Writes that are received to `v3` tables must have the same series key for every single write.
Series key columns are `NOT NULL`
* Nullability of columns is enforced in the core `schema` crate based on a column's membership in the series key. So, when building a `schema::Schema` using `schema::SchemaBuilder`, the arrow `Field`s that are injected into the schema will have `nullable` set to false for columns that are part of the series key, as well as the `time` column.
* The `NOT NULL` _constraint_, if you can call it that, is enforced in the buffer (see [here](https://github.com/influxdata/influxdb/pull/25066/files#diff-d70ef3dece149f3742ff6e164af17f6601c5a7818e31b0e3b27c3f83dcd7f199R102-R119)) by ensuring there are no gaps in data buffered for series key columns.
Series key columns are still tags
* Columns in the series key are annotated as tags in the arrow schema, which for now means that they are stored as Dictionaries. This was done to avoid having to support a new column type for series key columns.
New write API
* This PR introduces the new write API, `/api/v3/write`, which accepts the new `v3` line protocol. Currently, the only part of the new line protocol proposed in https://github.com/influxdata/influxdb/issues/24979 that is supported is the series key. New data types are not yet supported for fields.
Split write paths
* To support the existing write path alongside the new write path, a new module was set up to perform validation in the `influxdb3_write` crate (`write_buffer/validator.rs`). This re-uses the existing write validation logic, and replicates it with needed changes for the new API. I refactored the validation code to use a state machine over a series of nested function calls to help distinguish the fallible validation/update steps from the infallible conversion steps.
* The code in that module could potentially be refactored to reduce code duplication.
The system.queries table is now accessible when queries are initiated
in debug mode. Debug mode is not currently enabled via the HTTP API, so
for now the table is only accessible via the gRPC interface.
The system.queries table lists all queries in the QueryLog on the
QueryExecutorImpl.
Introduction of the `TokioDatafusionConfig` clap block for configuring the DataFusion runtime - this exposes many new `--datafusion-*` options on start, including `--datafusion-num-threads`
To accommodate renaming of `QueryNamespaceProvider` to `QueryDatabase` in `influxdb3_core`, I renamed the `QueryDatabase` type to `Database`.
Fixed tests that broke as a result of sync.
Removed the _series_id column that stored a SHA256 hash of the tag set
for each write.
Updated all test assertions that made reference to it.
Corrected the column limits so that they no longer account for the additional
_series_id column.
* chore: clean up heappy, pprof, and jemalloc
Set up jemalloc as the default allocator using the tikv-jemallocator
crate instead of tikv-jemalloc-sys.
Removed heappy and pprof, and also cleaned up all the mutually exclusive
compiler flags for using heappy as the allocator.
* chore: remove heappy from ci
For releases we need to have Docker images and binary artifacts available for
users to actually run influxdb3. These CI changes will build the binaries and the
Docker image on a release tag, then test, sign, and publish them and make
them available for download.
Co-Authored-By: Brandon Pfeifer <bpfeifer@influxdata.com>