Adds a metric to track total retried catalog operations due to the catalog
being updated elsewhere. Includes a test to check that the counter
increments on basic catalog operations.
* feat: generate persistable admin token
- this commit allows admin token creation using `influxdb3 create token
--admin` and also allows regeneration of the admin token via `influxdb3
create token --admin --regenerate`
- the `influxdb3_authz` crate hosts all low-level token types and behaviour
- catalog log and snapshot types updated to use the token repo
- tests that relied on auth have been updated to use the new token
generation mechanism, and new admin token generation/regeneration tests
have been added
* feat: list admin tokens
- allows listing admin tokens
- uses the _internal db for the token system table
- mostly test fixes due to the _internal db
This creates a CatalogUpdateMessage type that is used to send
CatalogUpdates; this type performs the send on the oneshot Sender so
that the consumer of the message does not need to do so.
Subscribers to the catalog get a CatalogSubscription, which uses the
CatalogUpdateMessage type to ACK the message broadcast from the catalog.
This means that catalog message broadcast can fail, but this commit does
not provide any means of rolling back a catalog update.
A test was added to verify the broadcast and ACK flow.
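A minimal sketch of the idea, with illustrative names and payloads (the
real influxdb3 types differ): the message pairs the update with a oneshot
sender and performs the send itself, so the subscriber only calls `ack`.

```rust
use tokio::sync::{mpsc, oneshot};

#[derive(Debug)]
struct CatalogUpdate(String); // stand-in for the real update payload

struct CatalogUpdateMessage {
    update: CatalogUpdate,
    ack: oneshot::Sender<()>,
}

impl CatalogUpdateMessage {
    /// ACK receipt of the broadcast; consumes the message, so the
    /// subscriber never touches the oneshot Sender directly.
    fn ack(self) {
        let _ = self.ack.send(()); // ignore a dropped broadcaster
    }
}

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<CatalogUpdateMessage>(8);

    // Subscriber: receive the update, then ACK it.
    tokio::spawn(async move {
        while let Some(msg) = rx.recv().await {
            println!("got update: {:?}", msg.update);
            msg.ack();
        }
    });

    // Broadcaster: send the update and wait for the ACK; an Err here
    // is how a catalog broadcast can fail.
    let (ack_tx, ack_rx) = oneshot::channel();
    let msg = CatalogUpdateMessage {
        update: CatalogUpdate("create db".into()),
        ack: ack_tx,
    };
    tx.send(msg).await.expect("subscriber alive");
    ack_rx.await.expect("subscriber ACKed");
}
```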
* refactor: use repository in catalog
The catalog was refactored to use identifiers on everything, and store
everything in a consistent structure. This structure makes use of the
`Repository` type that holds a `SerdeVecMap` of Id to Resource, along
with the next Id, and a bi-map of Id to resource name.
The `Repository` type is used at each level of the catalog where a
resource is stored.
This simplified the repeated logic for snapshotting, inserting, and
updating resources in the catalog, as well as the accessor methods for
getting resources by id or name and mapping names to ids and vice versa.
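A minimal sketch of that shape, using `u64` and `HashMap` as stand-ins
for the typed ids and `SerdeVecMap`:

```rust
use std::collections::HashMap;

#[derive(Default)]
struct Repository<R> {
    repo: HashMap<u64, R>,            // id -> resource
    id_to_name: HashMap<u64, String>, // one half of the bi-map
    name_to_id: HashMap<String, u64>, // the other half
    next_id: u64,                     // next id to hand out
}

impl<R> Repository<R> {
    fn insert(&mut self, name: &str, resource: R) -> u64 {
        let id = self.next_id;
        self.next_id += 1;
        self.repo.insert(id, resource);
        self.id_to_name.insert(id, name.to_string());
        self.name_to_id.insert(name.to_string(), id);
        id
    }

    fn get_by_id(&self, id: u64) -> Option<&R> {
        self.repo.get(&id)
    }

    fn get_by_name(&self, name: &str) -> Option<&R> {
        self.name_to_id.get(name).and_then(|id| self.repo.get(id))
    }
}
```

Keeping the bi-map alongside the id map is what makes both by-id and
by-name lookups cheap at every level of the catalog.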
In addition, the process for catalog batch verification and permit was
altered so that the permit process induces a retry if the catalog was
updated while the catalog batch function was producing the batch, i.e., if
the catalog sequence incremented while the caller was waiting for a permit.
This eliminated the need for verifying the catalog batch after it had been
generated, and allows for a single path to apply a catalog batch after it
has been persisted to object store.
This assumes that the generation of the catalog batch implies validity.
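A sketch of the retry-on-permit flow under those assumptions, modeling
the permit as a mutex and the batch as a plain string:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use tokio::sync::{Mutex, MutexGuard};

/// Hypothetical stand-in for the catalog: a sequence number plus a
/// permit (modeled here as a mutex) that serializes applying batches.
struct Catalog {
    sequence: AtomicU64,
    permit: Mutex<()>,
}

impl Catalog {
    async fn update_with_retry(&self, build_batch: impl Fn() -> String) {
        loop {
            let seen = self.sequence.load(Ordering::SeqCst);
            let batch = build_batch(); // assumed valid by construction
            let _permit: MutexGuard<'_, ()> = self.permit.lock().await;
            if self.sequence.load(Ordering::SeqCst) != seen {
                continue; // catalog advanced while building: retry
            }
            // Holding the permit with an unchanged sequence: persist the
            // batch to object store, then apply it and bump the sequence.
            let _ = batch;
            self.sequence.fetch_add(1, Ordering::SeqCst);
            return;
        }
    }
}
```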
Irrelevant tests were removed.
The Last and Distinct caches now rely more heavily on Ids, though the
processing engine still needs to switch over to using Ids for
starting/stopping triggers.
This commit restores the old behavior we had where new tags can be added
to a schema. To do this we made tags nullable, which brings us in line with
our other products. These changes were made in this PR:
https://github.com/influxdata/influxdb3_core/pull/41.
Changes to accomplish this new behavior were:
- Queries no longer return an empty string for null tags; instead they
are returned as null, or in many formats not at all.
- References to v1 for parsing and validating lines were removed as we
only have one path for doing so these days shared amongst all the
write_lp endpoints.
- We fixed failing tests that expected new tags to be rejected, or that
depended on that behavior indirectly
- Tests had their snapshot files updated to reflect that tags are
nullable by default
- Behavior for making a schema and checking whether a column can be null
were updated in a separate repo and integrated here
- The series_key is updated whenever we get a new tag added to the
schema
- New tests were added to show that you can add a new tag and that the
series key is updated as part of that
With the above changes, users can once again add tags as they would
expect, especially via the v1 and v2 APIs and Telegraf plugins.
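A minimal sketch of the schema-side idea, with hypothetical types (the
real schema and nullability handling live in the integrated core repo):

```rust
/// Hypothetical table schema: columns carry a nullable flag, and the
/// ordered tag columns form the series key.
#[derive(Debug, Default)]
struct TableSchema {
    columns: Vec<(String, bool)>, // (name, nullable)
    series_key: Vec<String>,      // ordered tag columns
}

impl TableSchema {
    fn add_tag(&mut self, name: &str) {
        // New tags are added as nullable columns...
        self.columns.push((name.to_string(), true));
        // ...and the series key is updated to include them.
        self.series_key.push(name.to_string());
    }
}
```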
Fixed a bug in the distinct cache where a projection that skipped columns
in the cache hierarchy caused a panic.
This simplifies the display of the projection in the DistinctCacheExec
in EXPLAIN output to not include the column index, and only the name.
* deduplicate QueryParams->QueryRequest and Format->QueryFormat
* move WriteParams into influxdb3_types crate
* DRY up client HTTP request handling code in *RequestBuilder.send
methods.
* DRY up a bunch of other non-Builder http request handling
* feat: introduce parquet caching in query path
This commit scans the parquet files that will be used in a query to check
whether they can be cached. Three conditions must be satisfied:
- not cached already
- cache has enough space
- file times overlap with the cache policy times
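A minimal sketch of this three-way check, with illustrative names and
types (the real cache state and time representation differ):

```rust
use std::ops::Range;

struct CacheState {
    used_bytes: u64,
    capacity_bytes: u64,
    /// Time window (e.g. nanosecond timestamps) the cache policy covers.
    policy_window: Range<i64>,
}

impl CacheState {
    fn should_cache(
        &self,
        already_cached: bool,
        file_size: u64,
        file_times: Range<i64>,
    ) -> bool {
        // 1. not cached already
        !already_cached
            // 2. cache has enough space
            && self.used_bytes + file_size <= self.capacity_bytes
            // 3. file times overlap with the cache policy times
            && file_times.start < self.policy_window.end
            && self.policy_window.start < file_times.end
    }
}
```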
closes: https://github.com/influxdata/influxdb/issues/25906
* refactor: rename env var
This refactors plugins and triggers so that plugins no longer need to be "created". Since plugins exist in either the configured local directory or on the GitHub repo, a user now only needs to create a trigger and reference the plugin filename.
Closes #25876
* feat: first stab at locally updating parquet cache
closes: https://github.com/influxdata/influxdb/issues/25887
* refactor: use enums to separate out the modes
This commit introduces the `Immediate` and `Eventual` modes for
fulfilling the cache request. In immediate mode, since the data is
readily available to be cached, we can avoid extra requests to the
object store.
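A sketch of the two modes under those assumptions (the real type carries
influxdb3-specific data rather than raw bytes):

```rust
use std::sync::Arc;

enum CacheRequestMode {
    /// Data is already in hand (e.g. it was just written), so it can be
    /// inserted into the cache directly, skipping the object store GET.
    Immediate { data: Arc<Vec<u8>> },
    /// Data must be fetched from object store before it can be cached.
    Eventual,
}

fn fulfill(mode: CacheRequestMode) {
    match mode {
        CacheRequestMode::Immediate { data } => {
            // insert `data` into the cache directly
            let _ = data;
        }
        CacheRequestMode::Eventual => {
            // spawn a fetch from object store, insert on completion
        }
    }
}
```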
part of: https://github.com/influxdata/influxdb/issues/25887
This changes the CLI arg `host-id` to `writer-id` to more accurately
indicate meaning.
This change also goes through the codebase and changes struct fields,
methods, and variables to use the term `writer_id` or `writer_identifier_prefix`
instead of `host_id` etc., to make the meaning clear in the code.
This also changes the catalog serialization to use the field `writer_id`
instead of `host_id`, which is a breaking change.
* feat: snapshot when wal buffer is empty
- This commit changes the functionality to allow snapshots to happen even when
the WAL buffer is empty. For snapshots, WAL periods are still required, but
not the WAL buffer. To allow this, we write a no-op into the WAL file with
the snapshot details. This enables force snapshotting functionality.
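A sketch of the no-op idea, using hypothetical stand-ins for the real WAL
and snapshot types:

```rust
#[derive(Debug)]
struct SnapshotDetails {
    snapshot_sequence: u64,
    last_wal_sequence: u64,
}

#[derive(Debug)]
enum WalContents {
    /// Normal case: the buffered writes for this WAL period.
    Writes(Vec<String>),
    /// Forced snapshot with an empty buffer: persist a no-op entry that
    /// carries the snapshot details, so the WAL period still has a file.
    Noop(SnapshotDetails),
}

fn contents_for_snapshot(buffer: Vec<String>, details: SnapshotDetails) -> WalContents {
    if buffer.is_empty() {
        WalContents::Noop(details)
    } else {
        WalContents::Writes(buffer)
    }
}
```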
closes: https://github.com/influxdata/influxdb/issues/25685
* refactor: address PR feedback
* feat: add startup time to logging output
This change adds a startup time counter to the output when starting up
a server. The main purpose of this is to verify whether changes actually
speed up the loading of the server.
* feat: Significantly decrease startup times for WAL
This commit does a few important things to speed up startup times:
1. We avoid converting an Arc<str> to a String for the series key, as the
From<String> impl will call with_column, which will then turn it back
into an Arc<str>. Instead we can just call `with_column` directly
and pass in the iterator without also collecting into a Vec<String>
2. We switch to using bitcode as the serialization format for the WAL.
This significantly reduces startup time, as this format is much faster
than JSON, which was eating up massive amounts of time.
Part of this change involves not using the tag feature of serde, as
it's currently not supported by bitcode
3. We also parallelize reading and deserializing the WAL files before
we then apply them in order. This reduces time waiting on IO, and we
eagerly evaluate each spawned task in order as much as possible.
This gives us about a 189% speedup over what we were doing before.
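A sketch of point 3 with hypothetical helper names: spawn one task per
WAL file, then consume the handles in file order, so application order is
preserved while the IO and decoding run in parallel.

```rust
use tokio::task::JoinHandle;

async fn load_wal(paths: Vec<std::path::PathBuf>) {
    // Spawn a read+deserialize task per file, all running concurrently.
    let handles: Vec<JoinHandle<Vec<u8>>> = paths
        .into_iter()
        .map(|path| {
            tokio::spawn(async move {
                // read and decode (e.g. via bitcode) off the main task
                tokio::fs::read(&path).await.expect("read wal file")
            })
        })
        .collect();

    // Await in order: tasks that finished early are consumed immediately,
    // so ordering costs little beyond the slowest outstanding file.
    for handle in handles {
        let contents = handle.await.expect("task panicked");
        apply_in_order(contents);
    }
}

fn apply_in_order(_contents: Vec<u8>) {
    // apply the deserialized ops to the buffer in WAL order
}
```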
Closes #25534
* feat: parquet cache metrics
* feat: track parquet cache metrics
Adds metrics to track the following in the in-memory parquet cache:
* cache size in bytes (also includes a fix to the calculation of that size)
* cache size in number of files
* cache hits
* cache misses
* cache misses while the oracle is fetching a file
A test was added to check this functionality.
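An illustrative shape for those five metrics, using bare atomics as
stand-ins for counters/gauges registered with a metric registry:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

#[derive(Default)]
struct ParquetCacheMetrics {
    size_bytes: AtomicU64,
    size_n_files: AtomicU64,
    hits: AtomicU64,
    misses: AtomicU64,
    misses_while_fetching: AtomicU64,
}

impl ParquetCacheMetrics {
    fn record_insert(&self, bytes: u64) {
        self.size_bytes.fetch_add(bytes, Ordering::Relaxed);
        self.size_n_files.fetch_add(1, Ordering::Relaxed);
    }

    fn record_hit(&self) {
        self.hits.fetch_add(1, Ordering::Relaxed);
    }

    fn record_miss(&self, oracle_fetching: bool) {
        // A miss while the oracle is already fetching the file is
        // tracked separately from a plain miss.
        if oracle_fetching {
            self.misses_while_fetching.fetch_add(1, Ordering::Relaxed);
        } else {
            self.misses.fetch_add(1, Ordering::Relaxed);
        }
    }
}
```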
* refactor: clean up logic and fix cache removal tracking error
Some logic and naming was cleaned up, and the boolean to optionally track
metrics on entry removal was removed, as it was incorrect in the first place:
a fetching entry still has a size, which counts toward the size of the
cache. So, this makes it such that anytime an entry is removed, whether
its state is success or fetching, its size will be decremented from
the cache size metrics.
The sizing calculations were corrected, and the cache metrics
test was updated with more thorough assertions.
Moved all of the last cache implementation into the `influxdb3_cache`
crate. This also splits out the implementation into three modules:
- `cache.rs`: the core cache implementation
- `provider.rs`: the cache provider used by the database to hold multiple
caches.
- `table_function.rs`: same as before, holds the DataFusion impls
Tests were preserved and moved to `mod.rs`; however, they were updated to
not rely on the WriteBuffer implementation, and instead use the types in
the `influxdb3_cache::last_cache` module directly. This simplified the
test code while not changing any of the test assertions.
This adds a new system table, "meta_caches", that allows users to view the
state of their metadata caches on a per-db basis.
An integration test was added to verify that it works.
This adds the MetaDataCacheProvider for managing metadata caches in the
influxdb3 instance. This includes APIs to create caches through the WAL
as well as from a catalog on initialization, to write data into the
managed caches, and to query data out of them.
The query side is fairly involved, relying on DataFusion's TableFunctionImpl
and TableProvider traits to make querying the cache using a user-defined
table function (UDTF) possible.
The predicate code was modified to support only two kinds of predicates:
IN and NOT IN. This simplifies the code and maps nicely onto the DataFusion
LiteralGuarantee, which we leverage to derive the predicates from the
incoming queries.
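A sketch of the two predicate kinds, with illustrative types (the real
code derives them from DataFusion's LiteralGuarantee during planning):

```rust
use std::collections::BTreeSet;

enum Predicate {
    In(BTreeSet<String>),
    NotIn(BTreeSet<String>),
}

impl Predicate {
    /// Does a cache node's value satisfy this predicate?
    fn matches(&self, value: &str) -> bool {
        match self {
            Predicate::In(set) => set.contains(value),
            Predicate::NotIn(set) => !set.contains(value),
        }
    }
}
```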
A custom ExecutionPlan implementation was added specifically for the
metadata cache that can report the predicates that are pushed down to
the cache during query planning/execution.
A big set of tests was added to check that queries are working, and
that predicates are being pushed down properly.
* feat: core metadata cache structs with basic tests
Implement the base MetaCache type that holds the hierarchical structure
of the metadata cache, providing methods to create the cache and push rows
from the WAL into it.
Added a prune method, as well as a method for gathering record batches
from a meta cache. A test was added to check the latter for various
predicates and that the former works; however, pruning shows that we need
to modify how record batches are produced so that expired entries are
not emitted.
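A minimal sketch of the hierarchical structure and the push/prune flow,
with hypothetical names: each level corresponds to one cached column, and
pushing a row walks (or creates) one node per column value, recording
when it was last seen so expired entries can be pruned.

```rust
use std::collections::BTreeMap;

struct Node {
    last_seen: i64, // timestamp used for expiry/pruning
    children: BTreeMap<String, Node>,
}

impl Node {
    fn push_row(&mut self, time: i64, values: &[String]) {
        self.last_seen = self.last_seen.max(time);
        if let Some((first, rest)) = values.split_first() {
            self.children
                .entry(first.clone())
                .or_insert_with(|| Node {
                    last_seen: time,
                    children: BTreeMap::new(),
                })
                .push_row(time, rest);
        }
    }

    /// Drop children not seen since `cutoff`.
    fn prune(&mut self, cutoff: i64) {
        self.children.retain(|_, child| child.last_seen >= cutoff);
        for child in self.children.values_mut() {
            child.prune(cutoff);
        }
    }
}
```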
* refactor: filter expired entries and do some clean up in the meta cache