Adds a metric to track total retried catalog operations due to the catalog
being updated elsewhere. Includes a test to check the counter increments
on basic catalog operations.
Catalog update APIs were returning an Option that was not necessary; it
was always Some. This removes the Option from the API to make the intent
clear: if the requested change produces an update, there is a Batch;
whereas if the requested change is erroneous, or would not produce a
change, the response is Err.
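As a rough sketch of the shape change (hypothetical names, not the actual catalog API):
```
/// Hypothetical stand-ins for the catalog types; the real names differ.
struct Change;
struct Batch;
enum CatalogError {
    /// The requested change was invalid.
    Invalid,
    /// The requested change would not alter the catalog.
    NoChange,
}

trait CatalogUpdate {
    // Before: fn apply(&self, change: Change) -> Result<Option<Batch>, CatalogError>,
    // where the Option was always Some on success and so carried no information.
    // After: success always yields a Batch; erroneous or no-op changes are Err.
    fn apply(&self, change: Change) -> Result<Batch, CatalogError>;
}
```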
* feat: enable auth by default
- Removes `--bearer-token` support and starts the server with auth by
default.
- Adds `--without-auth` switch to start the server without any auth
* feat: changes for auth being turned off
When auth is turned off:
- disallow token endpoints (returns 405)
- remove the hash column when querying the tokens system table
* refactor: address PR feedback
This commit allows deletion of tokens by name. Below is an example:
`influxdb3 delete token --token-name _admin --token $CURRENT_ADMIN_TOKEN`
It requires user confirmation before proceeding with the delete.
This commit adds TLS support to influxdb3 and allows users to pass in a
path to a key and cert file with the --tls-key and --tls-cert flags in
the serve command. It also adds the ability for every command to specify
a certificate authority for requests. This is mostly needed when the
cert is self-signed, but there are other use cases for this.
The big thing is that most of our tests now use TLS by default. Self-signed
certs for localhost, along with the CA cert, are included in the commit.
Since these are *only* used for testing, this should be fine to include,
as they are not used in, nor intended to be used in, any production
system. The expiry has been set to 365 days and the file perms are set to
0600 as the original issue mentioned. The tests pass with this
restriction.
I've verified that the API works via curl with the self-signed certs, as I
did *not* need to pass the -k option to bypass certificate validation. The
same goes for our tests: they use the rootCA.pem file to verify the
self-signed cert when connecting, and reject it otherwise.
With this users can be confident that their queries are safely encrypted
during transport.
Note that TLS works for both FlightSQL and our normal APIs.
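For illustration, a hypothetical invocation using the flags described above (paths, port, and endpoint are illustrative):
```
influxdb3 serve --tls-key certs/server.key --tls-cert certs/server.crt [other serve flags]
# no -k needed when the client trusts the test CA:
curl --cacert rootCA.pem "https://localhost:8181/health"
```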
Closes #25774
* feat: generate persistable admin token
- this commit allows admin token creation using `influxdb3 create token
--admin` and also allows regeneration of admin token by `influxdb3
create token --admin --regenerate`
- `influxdb3_authz` crate hosts all low level token types and behaviour
- catalog log and snapshot types updated to use the token repo
- tests that relied on auth have been updated to use the new token
generation mechanism and new admin token generation/regeneration tests
have been added
* feat: list admin tokens
- allows listing admin tokens
- uses _internal db for token system table
- mostly test fixes due to _internal db
* chore: couple of updates to fix cargo audit job
- remove humantime ignore in deny.toml
- update pyo3 to use 0.24.1 (https://rustsec.org/advisories/RUSTSEC-2025-0020.html)
* chore: moved pyo3 version to root cargo.toml
* feat: add influxdb3_shutdown crate
provides basic wait methods for Unix/Windows OSes
* feat: graceful shutdown
* docs: add rust docs and test to influxdb3_shutdown
Added rustdoc comments to types and methods in the influxdb3_shutdown
crate as well as a test that shows the ordering of a shutdown.
This creates a CatalogUpdateMessage type that is used to send
CatalogUpdates; this type performs the send on the oneshot Sender so
that the consumer of the message does not need to do so.
Subscribers to the catalog get a CatalogSubscription, which uses the
CatalogUpdateMessage type to ACK the message broadcast from the catalog.
This means that catalog message broadcast can fail, but this commit does
not provide any means of rolling back a catalog update.
A test was added to check that it works.
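A minimal sketch of the ACK-on-send idea (field and method names assumed, not the actual types):
```
use tokio::sync::oneshot;

struct CatalogUpdate; // payload stand-in

/// The message owns the oneshot sender and ACKs itself, so a subscriber
/// cannot forget to respond.
struct CatalogUpdateMessage {
    update: CatalogUpdate,
    ack: oneshot::Sender<()>,
}

impl CatalogUpdateMessage {
    /// Consume the message, sending the ACK and handing back the update.
    fn ack(self) -> CatalogUpdate {
        // Ignore the error case where the catalog dropped the receiver.
        let _ = self.ack.send(());
        self.update
    }
}
```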
* refactor: use repository in catalog
The catalog was refactored to use identifiers on everything, and store
everything in a consistent structure. This structure makes use of the
`Repository` type that holds a `SerdeVecMap` of Id to Resource, along
with the next Id, and a bi-map of Id to resource name.
The `Repository` type is used at each level of the catalog where a
resource is stored.
This simplified repeated logic for snapshotting, inserting, and updating
resources in the catalog, as well as accessor methods for getting by id
or name, and mapping names to ids and vice versa.
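Conceptually, the `Repository` looks something like this (a sketch with plain `HashMap`s standing in for the `SerdeVecMap` and the bi-map):
```
use std::collections::HashMap;

/// Sketch of the `Repository` shape used at each level of the catalog.
struct Repository<Id, Resource> {
    /// Id -> Resource (a `SerdeVecMap` in the real type)
    repo: HashMap<Id, Resource>,
    /// The next Id to hand out
    next_id: Id,
    /// The two sides of the Id <-> name bi-map
    id_by_name: HashMap<String, Id>,
    name_by_id: HashMap<Id, String>,
}

impl<Id: Eq + std::hash::Hash, Resource> Repository<Id, Resource> {
    fn get_by_id(&self, id: &Id) -> Option<&Resource> {
        self.repo.get(id)
    }

    fn get_by_name(&self, name: &str) -> Option<&Resource> {
        self.id_by_name.get(name).and_then(|id| self.repo.get(id))
    }
}
```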
In addition, the process for catalog batch verification and permit was
altered so that the permit process induces a retry if the catalog was
updated while the catalog batch function was producing the batch, i.e., if
the catalog sequence incremented while the caller was waiting for a permit.
This eliminated the need for verifying the catalog batch after it had been
generated, and allows for a single path to apply a catalog batch after it
has been persisted to object store.
This assumes that the generation of the catalog batch implies validity.
Irrelevant tests were removed.
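A sketch of the retry-on-permit idea just described (names assumed; the real permit/sequence APIs differ):
```
struct Batch;
struct Permit;
struct MockCatalog;

impl MockCatalog {
    fn sequence(&self) -> u64 { 0 }
    fn build_batch(&self) -> Batch { Batch }
    fn acquire_permit(&self) -> Permit { Permit }
}

/// Rebuild the batch whenever the catalog sequence moved while we were
/// waiting for the permit; a returned batch is then valid by construction.
fn update_with_retry(catalog: &MockCatalog) -> Batch {
    loop {
        let seq = catalog.sequence();
        let batch = catalog.build_batch();
        let _permit = catalog.acquire_permit();
        if catalog.sequence() == seq {
            return batch;
        }
        // Sequence advanced: another writer updated the catalog first, so retry.
    }
}
```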
The Last and Distinct caches now rely more heavily on Ids, though the
processing engine still needs to switch over to using Ids for
starting/stopping triggers.
In #25927 we missed that JSON queries were broken despite having some
tests use the format. This fixes JSON queries such that they now
properly contain a comma between RecordBatches. This commit also
includes tests for the formats that now stream data back (CSV, JSON, and
JSON Lines) so that we won't run into this issue again.
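The essence of the comma fix, sketched over an already-serialized stream of batches (assumed shape, not the actual handler code):
```
use futures::stream::{Stream, StreamExt};

/// Interleave a comma between already-serialized RecordBatch chunks so the
/// streamed output concatenates into one valid JSON array.
fn comma_separated(
    chunks: impl Stream<Item = String>,
) -> impl Stream<Item = String> {
    let mut first = true;
    chunks.map(move |body| {
        if first {
            first = false;
            body
        } else {
            format!(",{body}")
        }
    })
}
```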
* deduplicate QueryParams->QueryRequest and Format->QueryFormat
* move WriteParams into influxdb3_types crate
* DRY up client HTTP request handling code in *RequestBuilder.send
methods.
* DRY up a bunch of other non-Builder http request handling
Partially fixes https://github.com/influxdata/influxdb/issues/24672
* move most HTTP req/resp types into `influxdb3_types` crate
* removes the use of locally-scoped request type structs from the `influxdb3_client` crate
* fix plugin dependency/package install bug
* it looks like the `DELETE` HTTP method was being used where `POST` was expected for `/api/v3/configure/plugin_environment/install_packages` and `/api/v3/configure/plugin_environment/install_requirements`
This commit allows us to stream data back for CSV and JSON formatted
queries. Prior to this we would buffer up all of the data in memory
before sending it back. Now we only buffer one RecordBatch at a time,
reducing memory overhead.
Note that due to the way the writer APIs work, and how Body in hyper 0.14
works, we can't use a streaming body that we can write to. This in turn
means we have to use a manually written Future state machine, which works
but is far from ideal.
Note this does not make the pretty and parquet formats streamable. I'm
attempting to get the pretty format to stream, but I don't think it or
parquet are as amenable to being streamed back to the user. In general we
might want to discourage these formats from being used.
This updates trigger creation to load the plugin file before creating the trigger.
Another small change is to make GitHub references use filenames and paths identical to what they would be in the plugin-dir. This makes it a little easier to have the plugins repo local, develop against it, and then reference the same file later with gh: once it's up on the repo.
This refactors plugins and triggers so that plugins no longer need to be "created". Since plugins exist in either the configured local directory or in the GitHub repo, a user now only needs to create a trigger and reference the plugin filename.
Closes #25876
* feat: first stab at locally updating parquet cache
closes: https://github.com/influxdata/influxdb/issues/25887
* refactor: use enums to separate out the modes
This commit introduced the `Immediate` and `Eventual` modes for
fulfilling the cache request. In immediate mode, since the data is
readily available to be cached, we can avoid extra requests to the
object store.
part of: https://github.com/influxdata/influxdb/issues/25887
This commit does a few key things:
- Removes the 72 hour query and write restrictions in Core
- Limits the queries to a default number of parquet files. We chose 432
as this is about 72 hours using default settings for the gen1
timeblock
- The file limit can be increased, but the help text and error message
when exceeded note that query performance will likely be degraded as
a result.
- We warn users to use smaller time ranges, if possible, when they hit
  this query error
With this we eliminate the hard restriction we had in place and instead
create a soft one that users can choose to take the performance hit with.
If they can't take that hit, it's recommended that they upgrade to
Enterprise, which has the compactor built in to make historical queries
performant.
* refactor: reduce catalog locks when getting chunks
The main refactor was to change the ChunkContainer trait to use the
DatabaseSchema and TableDefinition types directly in the signature, vs.
the names, which then required an additional catalog lock and lookups for
both entities. This was already handled upstream in the QueryTable, so
there was no need to do the lookups again.
This required the addition of a test helper in influxdb3_write::test_helpers
that provides convenience methods for getting record batches from the
WriteBuffer. We have been implementing such a method manually in several
places, so it is nice to have it unified. This provides a blanket impl
so that anything implementing WriteBuffer gets the method.
Some other house cleaning was included.
* refactor: clean up test helpers in influxdb3_write
* refactor: pass original df filters forward with ChunkFilter
* chore: clippy
* feat: Add request plugin capability
Adds the request plugin type. Triggers can be bound to an API endpoint at /api/v3/engine/<path>. Requests will get yielded to the plugin with the query parameters, request parameters, and request body.
I didn't implement the test endpoint for this plugin type as it seems much more natural for users to save the file and make a new request. Once #25863 is done it'll make it very easy.
Closes #25862
* chore: fix spelling in error message
Although the `format` in the request is used, the value coming
through the header is parsed earlier. So, when that lookup in
the header fails, an error is returned (`InvalidMimeType`).
In this commit, there are extra checks to allow the default `Accept`
header values that come from the browser, by defaulting them to `json`.
closes: https://github.com/influxdata/influxdb/issues/25874
* feat(processing_engine): Add cron plugins and triggers to the processing engine.
* feat(processing_engine): switch from 'cron plugin' to 'schedule plugin', use TimeProvider.
* feat(processing_engine): add test for the scheduled plugin.
This updates the v1 /query API handler to handle InfluxDB v1's unique
query response structure when GROUP BY clauses are provided.
The distinction is in the addition of a "tags" field to the emitted series
data that contains a map of the GROUP BY tags along with their distinct
values associated with the data in the "values" field.
This required splitting the QueryExecutor into two query paths for InfluxQL
and SQL, as this allowed for handling InfluxQL query parsing in advance
of query planning.
A set of snapshot tests were added to check that it all works.
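For illustration, a grouped series in the v1 response looks roughly like this (field values are made up):
```
use serde_json::json;

fn example_grouped_response() -> serde_json::Value {
    json!({
        "results": [{
            "statement_id": 0,
            "series": [{
                "name": "cpu",
                // the GROUP BY tags and their distinct values for this series
                "tags": { "host": "a" },
                "columns": ["time", "usage"],
                "values": [["2024-01-01T00:00:00Z", 0.5]]
            }]
        }]
    })
}
```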
This changes the CLI arg `host-id` to `writer-id` to more accurately
indicate meaning.
This change also goes through the codebase and changes struct fields,
methods, and variables to use the term `writer_id` or `writer_identifier_prefix`
instead of `host_id` etc., to make the meaning clear in the code.
This also changes the catalog serialization to use the field `writer_id`
instead of `host_id`, which is a breaking change.
This updates the create plugin API and CLI so that it doesn't take the plugin code, but instead takes a file name of a file that must be in the plugin-dir of the server. It returns an error if the plugin-dir is not configured or if the file isn't there.
Also updates the WAL and catalog so that it doesn't store the plugin code directly. The code is read from disk one time when the plugin runs.
Closes #25797
* feat: introduce num wal files to keep
This commit allows a configurable number of WAL files to be left behind
in object store. This is necessary as Enterprise replicas rely on these files.
closes: https://github.com/influxdata/influxdb/issues/25788
* refactor: address PR feedback
* refactor: address PR comment
This allows the user to specify arguments that will be passed to each execution of a wal plugin trigger. The CLI test was updated to check this end to end.
Closes #25655
This updates the WAL so that new file notifiers can be added that will get updated when the WAL flushes. The processing engine now implements the WALNotifier trait.
I've updated the CLI test for creating a trigger to run an end-to-end test that defines a plugin, creates a trigger, and writes data into the database, triggering the plugin, which writes summary statistics back into the database in a different table. The test queries the destination table to confirm that the plugin worked.
* feat: Update WAL plugin for new structure
This ended up being a very large change set. In order to get around circular dependencies, the processing engine had to be moved into its own crate, which I think is ultimately much cleaner.
Unfortunately, this required changing a ton of things. There's more testing and things to add on to this, but I think it's important to get this through and build on it.
Importantly, the processing engine no longer resides inside the write buffer. Instead, it is attached to the HTTP server. It is now able to take a query executor, write buffer, and WAL so that the full range of functionality of the server can be exposed to the plugin API.
There are a bunch of system-py feature flags littered everywhere, which I'm hoping we can remove soon.
* refactor: PR feedback
This ended up being a couple of things rolled into one. In order to add a query API to the Python plugin, I had to pull the QueryExecutor trait out of server into a place where the python crate could use it.
This implements the query API, but also fixes up the WAL plugin test CLI a bit. I've added a test in the CLI section so that it shows end-to-end operation of the WAL plugin test API and exercise of the entire Plugin API.
Closes #25757
* feat: snapshot when wal buffer is empty
- This commit changes the functionality to allow snapshots to happen even when
  the WAL buffer is empty. Snapshots still require WAL periods, but not the
  WAL buffer. To allow this, we write a no-op into the WAL file with the
  snapshot details. This enables force-snapshotting functionality.
closes: https://github.com/influxdata/influxdb/issues/25685
* refactor: address PR feedback
Closes #25749
This changes the `/query` API handler so that the parameters can be passed in either the request URI or in the request body for either a `GET` or `POST` request.
Parameters can be specified in the URI, the body, or both; if they are specified in both places, those in the body take precedence.
Error variants in the HTTP server code related to missing request parameters were updated to return `400` status.
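The precedence rule is essentially a map merge (a sketch, not the handler's actual types):
```
use std::collections::HashMap;

/// Merge URI and body parameters, with body values winning on conflict.
fn merge_params(
    uri: HashMap<String, String>,
    body: HashMap<String, String>,
) -> HashMap<String, String> {
    let mut merged = uri;
    // extend overwrites existing keys, so body takes precedence.
    merged.extend(body);
    merged
}
```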
* fix: bind to correct port for e2e tests
This also fixes up some log messages on server start for naming
* chore: do not pass value in TEST_LOG env var to CI tests
_Follows #25737 (keeping in draft until that merges)_
Closes #25745
This PR provides both a CLI and underlying API for listing databases in the InfluxDB 3 Core server. Details are below.
There was already a method to list databases for the query executor for InfluxQL; this works by exposing that via the `HttpApi` in `influxdb3_server`.
However, one thing that we may address is that the query result for that uses `iox::database` as the column name. If we are removing references to `iox`, then we may want to just have it as `database`. I left it as is, for now, because I wanted to keep code churn down and wasn't sure why we use that prefix in the first place for the `SHOW DATABASES` and `SHOW RETENTION POLICIES` InfluxQL queries.
## Details
### CLI
This PR provides the `influxdb3 show` CLI:
```
influxdb3 show -h
List resources on the InfluxDB 3 Core server
Usage: influxdb3 show <COMMAND>
Commands:
databases List databases
help Print this message or the help of the given subcommand(s)
Options:
-h, --help Print help information
```
with the ability to list databases:
```
influxdb3 show databases -h
List databases
Usage: influxdb3 show databases [OPTIONS]
Options:
-H, --host <HOST_URL> The host URL of the running InfluxDB 3 Core server [env: INFLUXDB3_HOST_URL=] [default: http://127.0.0.1:8181]
--token <AUTH_TOKEN> The token for authentication with the InfluxDB 3 Core server [env: INFLUXDB3_AUTH_TOKEN=]
--show-deleted Include databases that were marked as deleted in the output
--format <OUTPUT_FORMAT> The format in which to output the list of databases [default: pretty] [possible values: pretty, json, json_lines, csv]
-h, --help Print help information
```
Since this uses the query executor, we can pass a `--format` argument to get the output as JSON, CSV, or JSONL, but by default, it uses the `pretty` format:
```
influxdb3 show databases
+---------------+
| iox::database |
+---------------+
| bar |
+---------------+
```
The `--show-deleted` flag will have the `deleted` column displayed as well as any databases that have been marked as deleted:
```
influxdb3 show databases --show-deleted
+---------------------+---------+
| iox::database | deleted |
+---------------------+---------+
| bar | false |
| foo-20250105T202949 | true |
+---------------------+---------+
```
### API
The API to list databases can be invoked via:
```
GET /api/v3/configure/database
```
with optional parameters:
* `format`: `pretty`, `json`, `csv`, `parquet`, or `jsonl`
* `show_deleted`: `bool`, defaults to `false`
Note that `database` is singular in the API endpoint, to be consistent with the other database related create/delete API endpoints. We could change it to be plural `databases` if that is the convention we want to go with.
This makes quite a few major changes to our CLI and how users interact
with it:
1. All commands are now in the form <verb> <noun>. This was to make the
   commands consistent; we had last-cache as a noun, but serve as a verb
   at the top level. Given that we could only create or delete them, all
   noun-based commands have been moved under create and delete
   commands
2. The --host short form is now -H, not -h, which is reassigned to
   -h/--help for shorter help text and is in line with what users would
   expect from a CLI
3. Only the needed items from clap_blocks have been moved into
`influxdb3_clap_blocks` and any IOx specific references were changed
to InfluxDB 3 specific ones
4. References to InfluxDB 3.0 OSS have been changed to InfluxDB 3 Core
in our CLI tools
5. --dbname has been changed to --database to be consistent with --table
   in many commands. The short -d flag still remains. In the create/
   delete commands for the database, however, the name of the database is
   a positional arg,
   e.g. `influxdb3 create database foo` rather than
   `influxdb3 database create --dbname foo`
6. --table has been removed from the delete/create command for tables
and is now a positional arg much like database
7. clap_blocks was removed as dependency to avoid having IOx specific
env vars
8. --cache-name is now an optional positional arg for last_cache and meta_cache
9. last-cache/meta-cache commands are now last_cache and meta_cache respectively
Unfortunately we have quite a few options to run the software and I
couldn't cut down on them, but at least with this, commands and options
will be more discoverable, and we have full control over our CLI options
now.
Closes #25646
* feat: Implement WAL plugin test API
This implements the WAL plugin test API. It also introduces a new API for the Python plugins to be called, get their data, and call back into the database server.
There are some things that I'll want to address in follow on work:
* CLI tests, but will wait on #25737 to land for a refactor of the CLI here
* Would be better to hook the Python logging to call back into the plugin return state like here: https://pyo3.rs/v0.23.3/ecosystem/logging.html#the-python-to-rust-direction
* We should only load the LineBuilder interface once in a module, rather than on every execution of a WAL plugin
* More tests all around
But I want to get this in so that the actual plugin and trigger system can get updated to build around this model.
* refactor: PR feedback
This commit removes the required-fields restriction when using the CLI
or the API to create a new table. As users can't write via the line
protocol without a field, this is fine, and the schema will be updated on
write. This expands the test to check for the correct response code and
makes sure that we can both query the empty table and write new data
to it.
Closes #25735
Prior to this change we would correctly error with a 422 if a limit was
hit. However, we would not send back the correct error when a limit being
hit caused a partial write. This change fixes that by checking the error
messages for failed lines; if one is found that was caused by hitting a
limit, then a 422 is returned rather than a 400, as we would have been
able to process the line otherwise, but the limit was hit instead.
Closes #25208
Store the series key column names on the TableDefinition in the catalog so
looking up the series key by column names is more efficient.
Remove the /api/v3/write API and related code/tests
Added prometheus metrics to track lines written and bytes written per
database. The write buffer does the tracking after validation of incoming
line protocol.
Tests added to verify.
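A minimal sketch of the per-database tracking (hand-rolled counters here; the real code registers prometheus metrics):
```
use std::collections::HashMap;
use std::sync::Mutex;

#[derive(Default)]
struct WriteMetrics {
    /// db name -> (lines written, bytes written)
    per_db: Mutex<HashMap<String, (u64, u64)>>,
}

impl WriteMetrics {
    /// Called by the write buffer after line-protocol validation.
    fn record_write(&self, db: &str, lines: u64, bytes: u64) {
        let mut map = self.per_db.lock().unwrap();
        let entry = map.entry(db.to_string()).or_insert((0, 0));
        entry.0 += lines;
        entry.1 += bytes;
    }
}
```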
* feat: create DB and Tables via REST and CLI
This commit does a few things:
1. It brings the database command naming scheme for types in line with
   the rest of the CLI types
2. It brings the table command naming scheme for types in line with
   the rest of the CLI types
3. Adds tests to check that the num of dbs is not exceeded and that you
cannot create more than one database with a given name.
4. Adds tests to check that you can create a table, put data into it,
   and query it
5. Adds tests for the CLI for both the database and table commands
6. It creates an endpoint to create databases given a JSON blob
7. It creates an endpoint to create tables given a JSON blob
With this users can now create a database or table without first needing
to write to the database via the line protocol!
Closes #25640. Closes #25641.
* Move processing engine invocation to a separate tokio task.
* Support writing back line protocol from python via insert_line_protocol().
* Update structs to work with bincode.
This changes the code to reference InfluxDB 3 OSS rather than Edge, which
had been its original name when we first started the project. With this
we now have the code reflect what we are actually calling it. On top of
this, the long help text has been changed to give advice about how to
actually run the code with the bare minimum set of flags needed, as
`influxdb3 serve` is no longer a viable command on its own.
Closes #25649
* feat: add startup time to logging output
This change adds a startup time counter to the output when starting up
a server. The main purpose of this is to verify whether changes actually
speed up the loading of the server.
* feat: Significantly decrease startup times for WAL
This commit does a few important things to speedup startup times:
1. We avoid changing an Arc<str> to a String with the series key as the
From<String> impl will call with_column which will then turn it into
an Arc<str> again. Instead we can just call `with_column` directly
and pass in the iterator without also collecting into a Vec<String>
2. We switch to using bitcode as the serialization format for the WAL.
   This significantly reduces startup time, as this format is much faster
   than the JSON we used before, which was eating up massive amounts of
   time. Part of this change involves not using the tag feature of serde,
   as it's currently not supported by bincode
3. We also parallelize reading and deserializing the WAL files before
we then apply them in order. This reduces time waiting on IO and we
eagerly evaluate each spawned task in order as much as possible.
This gives us about a 189% speedup over what we were doing before.
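A sketch of the parallel-read idea in item 3 (the apply step and names are assumed):
```
use std::path::PathBuf;

/// Read WAL files concurrently, then apply them strictly in file order.
/// Awaiting the handles in spawn order preserves WAL ordering while the
/// reads themselves overlap.
async fn replay_wal(paths: Vec<PathBuf>) -> std::io::Result<()> {
    let handles: Vec<_> = paths
        .into_iter()
        .map(|path| tokio::spawn(async move { tokio::fs::read(path).await }))
        .collect();
    for handle in handles {
        let bytes = handle.await.expect("read task panicked")?;
        apply_wal_contents(bytes);
    }
    Ok(())
}

fn apply_wal_contents(_bytes: Vec<u8>) {
    // assumed step: deserialize (e.g. with bitcode) and apply in order
}
```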
Closes #25534
* feat: parquet cache metrics
* feat: track parquet cache metrics
Adds metrics to track the following in the in-memory parquet cache:
* cache size in bytes (also included a fix in the calculation of that)
* cache size in n files
* cache hits
* cache misses
* cache misses while the oracle is fetching a file
A test was added to check this functionality
* refactor: clean up logic and fix cache removal tracking error
Some logic and naming was cleaned up, and the boolean to optionally track
metrics on entry removal was removed, as it was incorrect in the first place:
a fetching entry still has a size, which counts toward the size of the
cache. So, this makes it such that anytime an entry is removed, whether
its state is success or fetching, its size will be decremented from
the cache size metrics.
The sizing calculations were made to be correct, and the cache metrics
test was updated with more thorough assertions.
Moved all of the last cache implementation into the `influxdb3_cache`
crate. This also splits out the implementation into three modules:
- `cache.rs`: the core cache implementation
- `provider.rs`: the cache provider used by the database to hold multiple
caches.
- `table_function.rs`: same as before, holds the DataFusion impls
Tests were preserved and moved to `mod.rs`; however, they were updated to
not rely on the WriteBuffer implementation, and instead use the types in
the `influxdb3_cache::last_cache` module directly. This simplified the
test code, while not changing any of the test assertions at all.
This commit does three important major changes:
1. We will deny writes to the v1, v2, and v3 write APIs that add new tags in
   subsequent writes after the first write
2. We make every table have a series key by default now
3. We enforce sort order by the series key, which is the order the keys came in
With these changes we have consistency across the various write APIs and can
make optimizations and future features with the assumption we have a series key.
Closes #25585
This commit introduces a basic store for sys events and the backing ring
buffer. Since the buffer needs to hold arbitrary data, it uses
`Box<dyn Any>`.
closes: https://github.com/influxdata/influxdb/issues/25581
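A sketch of the backing buffer's shape (capacity handling assumed):
```
use std::any::Any;
use std::collections::VecDeque;

/// A fixed-size ring buffer that holds arbitrary event payloads.
struct RingBuffer {
    buf: VecDeque<Box<dyn Any + Send>>,
    capacity: usize,
}

impl RingBuffer {
    fn new(capacity: usize) -> Self {
        Self { buf: VecDeque::with_capacity(capacity), capacity }
    }

    fn push(&mut self, event: Box<dyn Any + Send>) {
        if self.buf.len() == self.capacity {
            // Overwrite oldest: drop the front before pushing the new event.
            self.buf.pop_front();
        }
        self.buf.push_back(event);
    }
}
```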
This adds a new system table "meta_caches" that allows users to view the
state of their metadata caches on a per-db basis
An integration test was added to verify that it works.
* feat: make query executor as trait object
This commit moves `QueryExecutorImpl` behind a `dyn` (trait object) as
we have other impls in core for `QueryExecutor` and this will keep both
pro and OSS traits in sync
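The shape of the change, sketched (trait methods elided and names assumed):
```
use std::sync::Arc;

/// Stand-in trait; the real `QueryExecutor` has query methods.
trait QueryExecutor: Send + Sync {
    fn name(&self) -> &'static str;
}

struct QueryExecutorImpl;

impl QueryExecutor for QueryExecutorImpl {
    fn name(&self) -> &'static str { "core" }
}

// Holding a trait object lets Core and Enterprise (pro) plug in different
// executors behind the same interface.
fn make_server(executor: Arc<dyn QueryExecutor>) {
    println!("query executor: {}", executor.name());
}
```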
* chore: fix cargo audit failures
- address https://rustsec.org/advisories/RUSTSEC-2024-0399.html by
running `cargo update --precise 0.23.18 --package rustls@0.23.14`
- address yanked version of `url` crate (2.5.3) by running
`cargo update -p url`
This commit allows soft deleting a table. For a user, the following
command will soft delete a table (bar) in db (foo):
```
influxdb3 table delete --dbname foo --table bar --host $host
```
- Added `soft_delete_table` to the `DatabaseManager` trait, which already
  hosts the `soft_delete_database` method. The code roughly follows the same
  flow as db delete, although, like the db schema, it does clone-on-write
  because the reference is behind an Arc; `Arc::make_mut` is used in
  this change.
- Moved db delete related cli parser under "manage" module that has both
db and table delete functionality
- Some minor tidy-ups (removing unused methods; renaming a method so that
  the order in the name matches the actual return type, e.g.
  `table_id_and_schema` should return (id, schema) and not (schema, id))
closes: https://github.com/influxdata/influxdb/issues/25561
* feat: drop/delete database
This commit allows soft deletion of a database using the `influxdb3 database
delete <db_name>` command. The write buffer and last value cache are
cleared as well.
closes: https://github.com/influxdata/influxdb/issues/25523
* feat: reuse same code path when deleting database
- In the previous commit, deleting a database immediately triggered
  clearing the last cache and the query buffer. But on restart, the same
  logic had to be repeated to handle deleted databases when starting up.
  This commit removes the immediate deletion by explicitly calling the
  necessary methods and moves the logic to `apply_catalog_batch`, which
  already applies `CatalogOp`s, clearing the cache and buffer in the
  `buffer_ops` method, which has hooks to call into other places.
closes: https://github.com/influxdata/influxdb/issues/25523
* feat: use reqwest query api for query param
Co-authored-by: Trevor Hilton <thilton@influxdata.com>
* feat: include deleted flag in DatabaseSnapshot
- `DatabaseSchema` serialization/deserialization is delegated to
`DatabaseSnapshot`, so the `deleted` flag should be included in
`DatabaseSnapshot` as well.
- insta test snapshots fixed
closes: https://github.com/influxdata/influxdb/issues/25523
* feat: address PR comments + tidy ups
---------
Co-authored-by: Trevor Hilton <thilton@influxdata.com>
* chore: update core deps
- arrow/parquet deps are patched (as in core)
- three specific code changes to cope with changes in core crates
- TransitionPartitionId, use `from_parts` instead of `new`
- arrow buffers can take &[u8] directly without `to_vec()`/`vec!`
(used only in tests)
- `schema` and `influxdb_line_protocol` crates need `v3` feature enabled
* chore: update deny.toml
* chore: formatting and deny toml changes
The Unicode-3.0 license is added to the allowed licenses list; without it
we end up with 19 errors (`zerovec`, `zerovec-derive`, etc.)
* chore: address PR feedback
- move enabling v3 feature to root Cargo.toml
- added the upstream PR for datafusion-common that introduced RUSTSEC-2024-0384
* refactor: make last cache eviction optional
This changes how the last cache is evicted. It will no longer run eviction
on writes to the cache; instead, there is an optional method to create a
last cache provider that will run eviction in a background task on a
specified interval.
Otherwise, when records are produced from the cache, only those that have
not expired will be produced.
This should reduce locks on the cache and hopefully improve performance.
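A sketch of the background eviction task (names assumed, not the actual provider API):
```
use std::sync::Arc;
use std::time::Duration;

struct MockLastCache;

impl MockLastCache {
    fn evict_expired(&self) {
        // assumed step: drop entries older than their TTL
    }
}

/// Evict on a fixed interval in a background task instead of on every
/// write, so writes no longer contend on eviction. Call from within a
/// tokio runtime.
fn spawn_eviction(cache: Arc<MockLastCache>, interval: Duration) {
    tokio::spawn(async move {
        let mut ticker = tokio::time::interval(interval);
        loop {
            ticker.tick().await;
            cache.evict_expired();
        }
    });
}
```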
* feat: configurable last cache eviction interval
* docs: clean up var names, code docs, and comments