- this commit allows admin token creation using `influxdb3 create token
--admin` and regeneration of the admin token using `influxdb3 create
token --admin --regenerate`
- `influxdb3_authz` crate hosts all low-level token types and behaviour
- catalog log and snapshot types updated to use the token repo
- tests that relied on auth have been updated to use the new token
generation mechanism and new admin token generation/regeneration tests
have been added
* refactor: make ShutdownManager Clone
ShutdownManager can be Clone since its underlying tokio types are all
shareable via clone.
* refactor: make ShutdownToken not Clone
Alters the API so that the ShutdownToken is not cloneable. This will help
ensure that the Drop implementation is invoked from the correct place.
* chore: couple of updates to fix cargo audit job
- remove humantime ignore in deny.toml
- update pyo3 to use 0.24.1 (https://rustsec.org/advisories/RUSTSEC-2025-0020.html)
* chore: moved pyo3 version to root cargo.toml
Added an additional check in the serve command to ensure that the frontend
has shut down before exiting, so that we don't close any connections
preemptively.
* feat: trigger shutdown if wal has been overwritten
WAL persist uses PutMode::Create in order to invoke shutdown if another
process writes to the WAL ahead of it.
A test was added to the CLI test suite to check that this works.
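The conditional-write idea above can be sketched with a local-filesystem analogue: `create_new` fails if another writer got there first, which the loser treats as a signal to shut down. This is only an illustration of the create-if-absent semantics; the real code uses the object store API, and the function name here is invented.

```rust
use std::fs::OpenOptions;
use std::io::{ErrorKind, Write};
use std::path::Path;

/// Try to persist a WAL segment with create-only semantics; returns
/// Ok(false) if another process already wrote this path (the
/// "overwritten" case that should trigger shutdown).
fn persist_create_only(path: &Path, bytes: &[u8]) -> std::io::Result<bool> {
    match OpenOptions::new().write(true).create_new(true).open(path) {
        Ok(mut f) => {
            f.write_all(bytes)?;
            Ok(true)
        }
        // The path already exists: another writer won the race.
        Err(e) if e.kind() == ErrorKind::AlreadyExists => Ok(false),
        Err(e) => Err(e),
    }
}

fn main() -> std::io::Result<()> {
    let seg = std::env::temp_dir().join("wal_demo_segment.wal");
    let _ = std::fs::remove_file(&seg);
    // First writer succeeds.
    assert!(persist_create_only(&seg, b"first writer")?);
    // Second writer loses the race and should initiate shutdown.
    assert!(!persist_create_only(&seg, b"second writer")?);
    std::fs::remove_file(&seg)?;
    Ok(())
}
```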
* chore: clippy
This ensures a ShutdownToken will invoke `complete` by calling it from
its `Drop` implementation. This means registered components are not
required to signal completion, but can if needed.
Some comments and other cleanup refactoring was done as well.
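The Drop-based completion described above can be sketched with a std channel standing in for the real signalling machinery. Names are illustrative, not the actual `influxdb3_shutdown` API: the point is that dropping the token signals completion, so holders never need to call `complete` explicitly, but still can.

```rust
use std::sync::mpsc::{channel, Receiver, Sender};

struct ShutdownToken {
    tx: Option<Sender<()>>,
}

impl ShutdownToken {
    /// Explicitly signal completion; also invoked from Drop.
    fn complete(&mut self) {
        if let Some(tx) = self.tx.take() {
            let _ = tx.send(());
        }
    }
}

impl Drop for ShutdownToken {
    fn drop(&mut self) {
        // Guarantees a registered component signals completion even if it
        // never calls `complete` itself.
        self.complete();
    }
}

/// Register a component, handing back its token and a completion receiver.
fn register() -> (ShutdownToken, Receiver<()>) {
    let (tx, rx) = channel();
    (ShutdownToken { tx: Some(tx) }, rx)
}

fn main() {
    let (token, rx) = register();
    drop(token); // dropping the token signals completion
    assert!(rx.recv().is_ok());
}
```

Making the token non-Clone (as in the refactor above) ensures exactly one Drop fires from the right place.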
* feat: add influxdb3_shutdown crate
provides basic wait methods for Unix/Windows OSes
* feat: graceful shutdown
* docs: add rust docs and test to influxdb3_shutdown
Added rustdoc comments to types and methods in the influxdb3_shutdown
crate as well as a test that shows the ordering of a shutdown.
This introduces a new version for the catalog file formats ([snapshot](3b57682214/influxdb3_catalog/src/snapshot/versions/mod.rs (L2)) files and [log](3b57682214/influxdb3_catalog/src/log/versions/mod.rs (L2)) files). The reason for introducing a new version is to change the serialization/deserialization format from [`bitcode`](https://docs.rs/bitcode/latest/bitcode/) to JSON. See #26180.
The approach taken was to copy the existing type definitions for both log and snapshot files into two places: a `v1` module and a `v2` module. Going forward:
* Types defined in `v1` should not be changed. They are only there to enable deserialization of existing bitcode-serialized catalog files.
* Types defined in `v2` can be modified in a backward-compatible manner, and new types can be added to the `v2` modules.
With this PR, old files are not overwritten. The server does not migrate any files on startup. See https://github.com/influxdata/influxdb/pull/26183#issuecomment-2748238191
Closes #26180
* simplify FieldValue types by making load generator functions generic
over RngCore and passing the RNG in to methods rather than depending on
it being available on every type instance that needs it
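The refactor above can be sketched as follows: generator functions take `&mut R` instead of each type owning an RNG. A minimal stand-in trait is defined here so the sketch is self-contained; the real code is generic over rand's `RngCore`, and the names below are illustrative.

```rust
/// Minimal stand-in for rand's RngCore, for illustration only.
trait RngLike {
    fn next_u64(&mut self) -> u64;
}

/// Deterministic xorshift RNG, useful for repeatable load generation.
struct XorShift(u64);

impl RngLike for XorShift {
    fn next_u64(&mut self) -> u64 {
        let mut x = self.0;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.0 = x;
        x
    }
}

/// A generator function generic over the RNG rather than storing one on
/// the value type it generates for.
fn random_field_value<R: RngLike>(rng: &mut R, max: u64) -> u64 {
    rng.next_u64() % max
}

fn main() {
    let mut rng = XorShift(42);
    let v = random_field_value(&mut rng, 100);
    assert!(v < 100);
}
```

Passing the RNG in also lets tests inject a seeded generator for reproducibility.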
* expose influxdb3_load_generator as library crate
* export config, spec, and measurement types publicly to support use in
the antithesis-e2e crate
* fix bug that surfaced whenever the cardinality value was less than the
lines per sample value by forcing LP lines in a set of samples to be
distinct from one another with nanosecond increments
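The deduplication fix above can be sketched as a pass that bumps colliding timestamps forward one nanosecond at a time so every line in a sample set lands on a unique nanosecond. The function name and shape are illustrative, not the actual load-generator code.

```rust
/// Make a set of line-protocol timestamps (nanoseconds) distinct by
/// nudging duplicates forward, preserving relative order.
fn make_distinct(mut ts: Vec<i64>) -> Vec<i64> {
    ts.sort_unstable();
    for i in 1..ts.len() {
        if ts[i] <= ts[i - 1] {
            // Collision: advance this line by one nanosecond past the
            // previous one so the LP lines are no longer identical.
            ts[i] = ts[i - 1] + 1;
        }
    }
    ts
}

fn main() {
    // Cardinality 2 with 4 lines per sample would otherwise collide.
    let distinct = make_distinct(vec![1_000, 1_000, 1_000, 2_000]);
    assert_eq!(distinct, vec![1_000, 1_001, 1_002, 2_000]);
}
```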
This adds a sleep so that the parquet cache has a little time to populate
before we make another request to the query buffer. Sometimes it does not
populate, leaving a race condition where the new request actually goes to
object store. This is fine in practice, since filling the cache takes time
in production as well. I haven't seen the test fail since adding this, but
triggering the race in the first place is hard, and in practice it does not
happen all that often.
When starting up a new cluster in Enterprise we might have multiple
nodes starting at the same time. This can leave us with multiple
catalogs that have different UUIDs in their in-memory representations.
For example:
- Let's say we have node0 and node1
- node0 and node1 start at the same time and both check object storage
to see if there is a catalog to load
- They both see there is no catalog
- They both create a new one by generating a UUID and persisting it to
object storage
- Whichever is written second now has the correct UUID in its in-memory
representation, while the other likely will not have the correct one
until restarted
This isn't an issue in practice today, as Trevor notes in
https://github.com/influxdata/influxdb_pro/issues/600, but it could be
once we start using `--cluster-id` for licensing purposes. To prevent
it, we make the write to object storage use a create-only put mode. If
the catalog already exists, the write fails and the node that lost the
race instead loads the other's catalog.
For example if node1 wins the race then node0 will load the catalog
created by node1 and use that UUID instead.
Because this involves a race condition, it is hard to test reliably, so I
have not included a test; we could never really be sure the race was
exercised, and we rely on the underlying object store to handle the
conditional write for us. It is also unlikely to occur, since it can only
happen when a new cluster is initialized for the first time.
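The race-and-adopt behaviour above can be sketched with an in-memory analogue: both nodes attempt a put-if-absent under the same key, and the loser adopts the winner's UUID. A `HashMap` stands in for object storage here, and all names are illustrative.

```rust
use std::collections::HashMap;

/// Returns the UUID the node should use: its own if it won the race to
/// create the catalog, otherwise the one already persisted by the winner.
fn create_or_load<'a>(
    store: &'a mut HashMap<String, String>,
    key: &str,
    proposed_uuid: String,
) -> &'a String {
    // entry().or_insert is the in-memory analogue of a create-only put:
    // it only writes if nothing exists under the key yet.
    store.entry(key.to_string()).or_insert(proposed_uuid)
}

fn main() {
    let mut store = HashMap::new();
    let node0 = create_or_load(&mut store, "catalog", "uuid-node0".into()).clone();
    let node1 = create_or_load(&mut store, "catalog", "uuid-node1".into()).clone();
    // node0 wrote first, so node1 loads node0's catalog UUID.
    assert_eq!(node0, "uuid-node0");
    assert_eq!(node1, "uuid-node0");
}
```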
This creates a CatalogUpdateMessage type that is used to send
CatalogUpdates; this type performs the send on the oneshot Sender so
that the consumer of the message does not need to do so.
Subscribers to the catalog get a CatalogSubscription, which uses the
CatalogUpdateMessage type to ACK the message broadcast from the catalog.
This means that catalog message broadcast can fail, but this commit does
not provide any means of rolling back a catalog update.
A test was added to check that it works.
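The ACK flow above can be sketched with std channels standing in for tokio's broadcast and oneshot: each message carries its own reply sender, and an `ack` method performs the send so the subscriber never touches the sender directly. Type and field names mirror the description, but the real types differ.

```rust
use std::sync::mpsc::{channel, Receiver, Sender};

/// A catalog update paired with the channel used to ACK it.
struct CatalogUpdateMessage {
    update: String,
    ack_tx: Sender<()>,
}

impl CatalogUpdateMessage {
    /// Consume the message and ACK it back to the catalog. The send can
    /// fail (e.g. the catalog gave up waiting), which is why broadcast
    /// can fail without any rollback being attempted here.
    fn ack(self) {
        let _ = self.ack_tx.send(());
    }
}

fn main() {
    let (ack_tx, ack_rx): (Sender<()>, Receiver<()>) = channel();
    let msg = CatalogUpdateMessage { update: "create db".into(), ack_tx };
    assert_eq!(msg.update, "create db");
    msg.ack();
    // The catalog side observes the ACK.
    assert!(ack_rx.recv().is_ok());
}
```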
* feat(python): update to python-build-standalone 3.13.2
References:
- https://github.com/influxdata/influxdb/issues/26044
* fix: update fetch-python-standalone.bash to properly set 'executable'
* fix: use PYO3_CONFIG_FILE to find PYTHONHOME.
* fix: add comment about PYO3_CONFIG_FILE.
* fix: remove ensure_pyo3().
* fix: add some sleep so catalog is updated.
---------
Co-authored-by: Jackson Newhouse <jnewhouse@influxdata.com>
* refactor: use repository in catalog
The catalog was refactored to use identifiers on everything, and store
everything in a consistent structure. This structure makes use of the
`Repository` type that holds a `SerdeVecMap` of Id to Resource, along
with the next Id, and a bi-map of Id to resource name.
The `Repository` type is used at each level of the catalog where a
resource is stored.
This simplified repeated logic for snapshotting, insert, and update of
resources in the catalog, as well as accessor methods for getting by id
or name and mapping names to ids and vice-versa.
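The shape described above can be sketched as a generic repository holding resources by Id, a name-to-Id index for lookups in both directions, and the next Id tracked alongside. `HashMap` approximates the `SerdeVecMap` and bi-map here; names and signatures are illustrative.

```rust
use std::collections::HashMap;

/// Minimal sketch of a catalog Repository: resources keyed by Id, plus a
/// name index so lookups work by either id or name.
struct Repository<T> {
    next_id: u32,
    by_id: HashMap<u32, T>,
    id_by_name: HashMap<String, u32>,
}

impl<T> Repository<T> {
    fn new() -> Self {
        Repository {
            next_id: 0,
            by_id: HashMap::new(),
            id_by_name: HashMap::new(),
        }
    }

    /// Insert a named resource, allocating the next Id.
    fn insert(&mut self, name: &str, resource: T) -> u32 {
        let id = self.next_id;
        self.next_id += 1;
        self.by_id.insert(id, resource);
        self.id_by_name.insert(name.to_string(), id);
        id
    }

    fn get_by_name(&self, name: &str) -> Option<&T> {
        self.id_by_name.get(name).and_then(|id| self.by_id.get(id))
    }
}

fn main() {
    // The same structure can be reused at each level: databases, tables, etc.
    let mut dbs: Repository<&str> = Repository::new();
    let id = dbs.insert("metrics", "db resource");
    assert_eq!(id, 0);
    assert_eq!(dbs.get_by_name("metrics"), Some(&"db resource"));
}
```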
In addition, the process for catalog batch verification and permit was
altered so that the permit process induces a retry if the catalog was
updated while the catalog batch function was producing the batch, i.e., if
the catalog sequence incremented while the caller was waiting for a permit.
This eliminated the need for verifying the catalog batch after it had been
generated, and allows for a single path to apply a catalog batch after it
has been persisted to object store.
This assumes that the generation of the catalog batch implies validity.
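The permit/retry flow above can be sketched with an atomic sequence standing in for the real permit machinery: the caller records the catalog sequence before building a batch, and rebuilds if the sequence advanced before the permit was acquired. All names are illustrative.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

static CATALOG_SEQUENCE: AtomicU64 = AtomicU64::new(0);

fn build_batch(seq: u64) -> String {
    format!("batch@{seq}")
}

/// Build a batch against the current sequence; if the catalog advanced
/// while we waited for the "permit" (modelled by the CAS), drop the
/// stale batch and rebuild rather than verifying it after the fact.
fn apply_with_retry() -> String {
    loop {
        let seq = CATALOG_SEQUENCE.load(Ordering::SeqCst);
        let batch = build_batch(seq);
        // The permit is granted only if the sequence is unchanged.
        if CATALOG_SEQUENCE
            .compare_exchange(seq, seq + 1, Ordering::SeqCst, Ordering::SeqCst)
            .is_ok()
        {
            return batch;
        }
        // Sequence moved: retry with a freshly built batch.
    }
}

fn main() {
    let batch = apply_with_retry();
    assert!(batch.starts_with("batch@"));
}
```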
Irrelevant tests were removed.
Last and Distinct caches now rely more heavily on Ids, though the
processing engine still needs to switch over to using Ids for
starting/stopping triggers.
* fix(circleci): add librt.so to list of acceptable libraries
* feat(circleci): check for glibc portability
* fix(circleci): remove rpm --nodeps workaround in rpm validate
Now that we have glibc portability in rpm builds, we no longer need the
'rpm --nodeps' workaround and can go back to 'yum localinstall'.
Closes #26011
* chore: update README_processing_engine.md for glibc portability
Continuing our work of creating versioned files before Beta, this commit
adds a PersistedSnapshotVersion which is used at the boundary of
serializing and deserializing so that we can easily upgrade to a newer
version and handle old versions without breaking things for users.
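The version wrapper described above can be sketched as an enum at the (de)serialization boundary that can hold any on-disk version and upgrade old ones to the current in-memory form. Field names and the upgrade rule are invented for illustration; the real snapshot types are richer.

```rust
// Hypothetical old and current snapshot shapes.
struct SnapshotV1 {
    sequence: u64,
}

struct SnapshotV2 {
    sequence: u64,
    node_id: String,
}

/// Wrapper used only at the serialize/deserialize boundary, so the rest
/// of the code works with the current version exclusively.
enum PersistedSnapshotVersion {
    V1(SnapshotV1),
    V2(SnapshotV2),
}

impl PersistedSnapshotVersion {
    /// Upgrade any persisted version to the current in-memory form.
    fn into_current(self) -> SnapshotV2 {
        match self {
            // Old files deserialize as V1 and are upgraded here, so users
            // with existing data keep working.
            PersistedSnapshotVersion::V1(v1) => SnapshotV2 {
                sequence: v1.sequence,
                node_id: String::from("unknown"),
            },
            PersistedSnapshotVersion::V2(v2) => v2,
        }
    }
}

fn main() {
    let old = PersistedSnapshotVersion::V1(SnapshotV1 { sequence: 7 });
    let current = old.into_current();
    assert_eq!(current.sequence, 7);
    assert_eq!(current.node_id, "unknown");
}
```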