Adds a metric to track total retried catalog operations due to the catalog
being updated elsewhere. Includes a test to check that the counter
increments on basic catalog operations.
* feat: enable auth by default
- Removes `--bearer-token` support and starts the server with auth by
default.
- Adds `--without-auth` switch to start the server without any auth
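  For example: `influxdb3 serve --without-auth` (other serve flags elided).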
* feat: changes for auth being turned off
When auth is turned off:
- disallow token endpoints (returns 405)
- remove hash column when querying tokens system table
* refactor: address PR feedback
This commit allows deletion of tokens by name. Below is an example,
`influxdb3 delete token --token-name _admin --token $CURRENT_ADMIN_TOKEN`
It requires user confirmation before proceeding with the delete.
This commit adds TLS support to influxdb3 and allows users to pass in a
path to a key and cert file with the --tls-key and --tls-cert flags in
the serve command. It also adds the ability for every command to specify
a certificate authority for requests. This is mostly needed when the
cert is self-signed, but there are other use cases for this.
The big thing is that most of our tests now use TLS by default. Self-signed
certs for localhost, along with the CA cert, are included in the commit.
Since these are *only* used for testing, this should be fine to include, as
they are not used in, nor intended to be used in, any production system. The
expiry has been set to 365 days and the file perms are set to 0600 as the
original issue mentioned. The tests pass with this restriction.
I've verified that the API works via curl with the self-signed certs, as I
did *not* need to pass the -k option to bypass certificate checks. The same
goes for our tests: they use the rootCA.pem file to verify the self-signed
cert when connecting and reject it otherwise.
With this users can be confident that their queries are safely encrypted
during transport.
Note that TLS works for both FlightSQL and our normal APIs.
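For example (paths illustrative): `influxdb3 serve --tls-key server.key --tls-cert server.crt`, then `curl --cacert rootCA.pem https://localhost:8181/health` succeeds without `-k`. (The CA flag name for client commands is not shown here.)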
Closes #25774
This changes the help messages for the serve command and the influxdb3
top level help output. This is to provide better help output and options
for users compared to the clap defaults. After many iterations, this was
the final design I landed on. In order to make our custom help work we:
1. Roughly parse the args to check for a given command and help message
2. If there is no subcommand we print out our custom help message for
the top level command
3. If there is a subcommand, we currently print a custom message only for serve
4. --help-all prints a more detailed message
This lets us upgrade our help messages in place over time as the
subcommands themselves have subcommands that also need their own help
messages. For now we only include these two.
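A minimal sketch of the pre-parse described above (function and help-text names are hypothetical, not the actual implementation):

```rust
// step 1: roughly parse the args ourselves before clap sees them
fn maybe_print_custom_help() -> bool {
    let args: Vec<String> = std::env::args().skip(1).collect();
    // first non-flag token is treated as the subcommand
    let subcommand = args.iter().find(|a| !a.starts_with('-')).cloned();
    let help_all = args.iter().any(|a| a == "--help-all");
    let help = help_all || args.iter().any(|a| a == "-h" || a == "--help");
    match subcommand.as_deref() {
        // step 2: no subcommand -- show our custom top-level message
        None => {
            println!("{}", if help_all { TOP_HELP_ALL } else { TOP_HELP });
            true
        }
        // step 3: only `serve` gets a custom message for now
        Some("serve") if help => {
            println!("{}", if help_all { SERVE_HELP_ALL } else { SERVE_HELP });
            true
        }
        // anything else falls through to clap's normal handling
        _ => false,
    }
}

// step 4: --help-all selects the more detailed variants
const TOP_HELP: &str = "<custom top-level help>";
const TOP_HELP_ALL: &str = "<detailed top-level help>";
const SERVE_HELP: &str = "<custom serve help>";
const SERVE_HELP_ALL: &str = "<detailed serve help>";
```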
We could not use something like `clap-help`, or derive the output from the
derived clap struct automatically, due to multiple issues; suffice it to say
the main one was that we could not parse the args into the `Config` struct
and then figure out our help message, and `clap`'s message template function
was too underpowered. We also wanted to create a `help-all` flag, which was
quite hard to make work with `clap`'s short and long help.
In the end the best option was to roll our own. While a bit hacky in terms
of maintenance, the overall outcome should be a much nicer user experience
that gives users a much better introduction to the tool. `serve` was quite
egregious, with a lot of output and very few actionable options for most
users.
Closes #26201
* feat: generate persistable admin token
- this commit allows admin token creation using `influxdb3 create token
--admin` and also allows regenerating the admin token via `influxdb3
create token --admin --regenerate`
- `influxdb3_authz` crate hosts all low-level token types and behaviour
- catalog log and snapshot types updated to use the token repo
- tests that relied on auth have been updated to use the new token
generation mechanism and new admin token generation/regeneration tests
have been added
* feat: list admin tokens
- allows listing admin tokens
- uses _internal db for token system table
- mostly test fixes due to _internal db
* refactor: make ShutdownManager Clone
ShutdownManager can be Clone since its underlying tokio types are all
shareable via cloning.
* refactor: make ShutdownToken not Clone
Alters the API so that the ShutdownToken is not cloneable. This will help
ensure that the Drop implementation is invoked from the correct place.
Added an additional check in the serve command to ensure that the frontend
has shut down before exiting so that we don't close any connections
prematurely.
* feat: trigger shutdown if wal has been overwritten
WAL persist uses PutMode::Create in order to invoke shutdown if another
process writes to the WAL ahead of it.
A test was added to the CLI test suite to check that it works.
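A sketch of the mechanism, assuming the `object_store` crate's `PutMode` API; the shutdown hook wiring is hypothetical:

```rust
use object_store::{path::Path, Error, ObjectStore, PutMode, PutPayload};

/// Persist one WAL file; signal shutdown if another writer beat us to it.
async fn persist_wal_file(
    store: &dyn ObjectStore,
    location: &Path,
    payload: PutPayload,
    trigger_shutdown: impl FnOnce(),
) -> Result<(), Error> {
    match store.put_opts(location, payload, PutMode::Create.into()).await {
        Ok(_) => Ok(()),
        Err(e @ Error::AlreadyExists { .. }) => {
            // the next WAL file already exists: another process owns the
            // WAL now, so this one should shut down gracefully
            trigger_shutdown();
            Err(e)
        }
        Err(e) => Err(e),
    }
}
```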
* chore: clippy
This ensures a ShutdownToken will invoke `complete` by calling it from
its `Drop` implementation. This means registered components are not
required to signal completion, but can if needed.
Some comments and other cleanup refactoring was done as well.
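A sketch of the resulting shape (field names hypothetical; the real crate may differ):

```rust
use tokio_util::sync::CancellationToken;

/// Deliberately not Clone, so exactly one owner decides completion.
pub struct ShutdownToken {
    shutdown: CancellationToken, // fires when shutdown is requested
    complete: CancellationToken, // fires when this component is done
}

impl ShutdownToken {
    /// Wait until shutdown has been requested.
    pub async fn wait_for_shutdown(&self) {
        self.shutdown.cancelled().await;
    }

    /// Explicitly signal completion (optional, thanks to Drop).
    pub fn complete(&self) {
        self.complete.cancel();
    }
}

impl Drop for ShutdownToken {
    fn drop(&mut self) {
        // guarantee completion is signalled even if the component
        // never calls complete() itself
        self.complete();
    }
}
```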
* feat: add influxdb3_shutdown crate
provides basic wait methods for Unix/Windows operating systems
* feat: graceful shutdown
* docs: add rust docs and test to influxdb3_shutdown
Added rustdoc comments to types and methods in the influxdb3_shutdown
crate as well as a test that shows the ordering of a shutdown.
This creates a CatalogUpdateMessage type that is used to send
CatalogUpdates; this type performs the send on the oneshot Sender so
that the consumer of the message does not need to do so.
Subscribers to the catalog get a CatalogSubscription, which uses the
CatalogUpdateMessage type to ACK the message broadcast from the catalog.
This means that catalog message broadcast can fail, but this commit does
not provide any means of rolling back a catalog update.
A test was added to check that it works.
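A sketch of the shape this describes (all names hypothetical):

```rust
use std::sync::Arc;
use tokio::sync::{mpsc, oneshot};

struct CatalogUpdate; // stand-in for the real update payload

/// Wraps an update together with the oneshot used to ACK it.
struct CatalogUpdateMessage {
    update: Arc<CatalogUpdate>,
    ack: oneshot::Sender<()>,
}

impl CatalogUpdateMessage {
    /// Consume the message, performing the ACK for the subscriber.
    fn ack(self) {
        let _ = self.ack.send(());
    }
}

/// Broadcast an update to every subscriber and wait for all ACKs.
async fn broadcast_update(
    subscribers: &[mpsc::Sender<CatalogUpdateMessage>],
    update: Arc<CatalogUpdate>,
) -> Result<(), &'static str> {
    let mut pending = Vec::new();
    for sub in subscribers {
        let (tx, rx) = oneshot::channel();
        let msg = CatalogUpdateMessage { update: Arc::clone(&update), ack: tx };
        sub.send(msg).await.map_err(|_| "subscriber dropped")?;
        pending.push(rx);
    }
    for rx in pending {
        // a dropped sender means the broadcast failed; note there is
        // no rollback of the catalog update here
        rx.await.map_err(|_| "subscriber never ACKed")?;
    }
    Ok(())
}
```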
* chore: clean up runtime code for python setup
ccd5d22aab introduced working but temporary code for setting up the
python runtime environment. This cleans that up:
* refactor various find_python() functionality into virtualenv.rs
* refactor PYTHONHOME calculation to virtualenv.rs:find_python_install()
* adjust init_pyo3() to temporarily set PYTHONHOME based on
virtualenv.rs:find_python_install() as this is the only place it is
needed (indeed, venv activation scripts try to remove it)
Importantly, virtualenv.rs:find_python_install() tries to find the
python-build-standalone runtime based on a few heuristics. This function
could be improved in the fullness of time to, e.g., be configured via a
build parameter.
Also, virtualenv.rs:find_python() can be used pre- and post-venv
activation. Before entering the venv, it uses find_python_install(), which
is useful for things like setting up the initial venv. After entering the
venv, it honors VIRTUAL_ENV (as set by virtualenv.rs:initialize_venv()) to
find python, which is important for installing packages with pip and having
them installed into the venv, as sketched below.
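A simplified sketch of that lookup order (paths illustrative; the real heuristics live in virtualenv.rs):

```rust
use std::path::PathBuf;

/// Venv-aware python lookup: honor VIRTUAL_ENV once a venv is active,
/// otherwise fall back to the bundled installation.
fn find_python() -> PathBuf {
    if let Ok(venv) = std::env::var("VIRTUAL_ENV") {
        // set by initialize_venv(); use the venv's own interpreter so
        // pip installs land inside the venv
        #[cfg(windows)]
        return PathBuf::from(&venv).join("Scripts").join("python.exe");
        #[cfg(not(windows))]
        return PathBuf::from(&venv).join("bin").join("python3");
    }
    // pre-activation: locate the python-build-standalone runtime
    find_python_install()
}

fn find_python_install() -> PathBuf {
    // heuristics elided; see virtualenv.rs
    PathBuf::from("python3")
}
```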
* chore: update README_processing_engine.md for default venv
* chore: add bug reference for venv migrations with python minor releases
* chore: add security URLs to README_processing_engine.md
* chore: find_python_home() returns Option<PathBuf>. Thanks Jackson Newhouse
* fix: manually set sys.prefix, exec_prefix and sys.path
When activating a venv in the shell, sys.base_prefix and
sys.base_exec_prefix should be set to the installation location while
sys.prefix and sys.exec_prefix should be set to the venv dir.
Unfortunately, when initialize_venv() and init_pyo3() are called, we
can't use Py_InitializeFromConfig() to set any of these and certain
platforms are unable to find python-build-standalone. For now, we'll
temporarily set PYTHONHOME in init_pyo3() to the installation location
to make python work. By setting it at this point in the code, sys.prefix
and sys.exec_prefix also end up being set to the installation location,
which is fine when not under a venv, but differs from what an activated
venv would have.
To address this, in PYTHON_INIT.call_once() and when VIRTUAL_ENV is set
(which initialize_venv() will have set at this point), manually set
sys.prefix and sys.exec_prefix to what is in VIRTUAL_ENV.
Similarly, when activating a venv in the shell, sys.path is appended with
the venv's site-packages dir. Previously we were setting PYTHONPATH in
initialize_venv(), which ensures that the venv's site-packages dir is in
sys.path, but ends up putting it first in sys.path. To correct this, don't
set PYTHONPATH any more and instead adjust PYTHON_INIT.call_once() to
append the venv's site-packages dir to sys.path when VIRTUAL_ENV is set.
Finally, when exiting init_pyo3(), unconditionally unset PYTHONHOME when
VIRTUAL_ENV is set (like activation scripts do) and restore or unset it
when it isn't.
Prior to these changes, all targets incorrectly had the venv's
site-packages first in sys.path, and OSX and Windows additionally had an
incorrect sys.prefix and sys.exec_prefix. With these initialization changes
in place, the runtime environment for the plugins is much closer to that of
a shell-activated venv.
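A simplified pyo3 sketch of the venv fix-up described above (the python version in the site-packages path is illustrative):

```rust
use pyo3::prelude::*;

/// Run once inside PYTHON_INIT.call_once(): when VIRTUAL_ENV is set,
/// point sys.prefix / sys.exec_prefix at the venv and append its
/// site-packages to sys.path.
fn fixup_venv(py: Python<'_>) -> PyResult<()> {
    if let Ok(venv) = std::env::var("VIRTUAL_ENV") {
        let sys = py.import("sys")?;
        sys.setattr("prefix", &venv)?;
        sys.setattr("exec_prefix", &venv)?;
        // append (not prepend) the venv's site-packages, mirroring
        // what shell activation produces
        let site = format!("{venv}/lib/python3.11/site-packages");
        sys.getattr("path")?.call_method1("append", (site,))?;
    }
    Ok(())
}
```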
- breaking change: replaced `--parquet-mem-cache-size-mb` (and its env
  var) with `--parquet-mem-cache-size`, which now takes a value as a
  percentage or in MB and defaults to 20% of total available memory
  (example below)
- force snapshotting is set at 50%
- datafusion mem pool is set to 20%
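Example usage: `influxdb3 serve --parquet-mem-cache-size 20%`, or an absolute size in MB (value formats per the flag description above).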
closes: https://github.com/influxdata/influxdb/issues/26009
* feat(ci): fetch and configure for python-build-standalone binaries
* fix: make the process engine usable on windows
* feat(ci): build with python-build-standalone (and drop musl)
* fix(ci): set rpath on Linux and libpath on OSX in ci
* fix: set PYTHONHOME everywhere and PYTHONPATH on Windows
* chore(ci): update to use more recent ci-packager-next
* fix(ci): adjust validate to allow certain dynamically linked libraries
* chore: remove install_influxdb.sh (using install_influxdb3.sh instead)
* chore(install_influxdb3.sh): update for processing engine and release builds
* fix: temporarily use rpm --nodeps until we can compile with an older GLIBC
* feat(ci): build docker with python-build-standalone
* chore: add README_processing_engine.md
* chore: add a few more details to README_processing_engine.md
* fix(ci): use patchelf --set-rpath
Not all patchelf versions support --add-rpath for appending to the
RPATH, but --set-rpath can be used with a colon-separated list. Use
--set-rpath first for maximum compatibility.
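For example (paths illustrative): `patchelf --set-rpath '$ORIGIN/../lib:/usr/lib' influxdb3`.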
* chore: update README_processing_engine.md for standalone local builds
* fix(Dockerfile): also use patchelf --set-rpath
* chore: update code comment for accuracy
* chore: typos, grammar and formatting change in README_processing_engine.md
* chore: update README_processing_engine.md for Docker arm64 (thanks Jackson)
* deduplicate QueryParams->QueryRequest and Format->QueryFormat
* move WriteParams into influxdb3_types crate
* DRY up client HTTP request handling code in *RequestBuilder.send
methods.
* DRY up a bunch of other non-Builder http request handling
Partially fixes https://github.com/influxdata/influxdb/issues/24672
* move most HTTP req/resp types into `influxdb3_types` crate
* removes the use of locally-scoped request type structs from the `influxdb3_client` crate
* fix plugin dependency/package install bug
* it looks like the `DELETE` HTTP method was being used where `POST` was expected for `/api/v3/configure/plugin_environment/install_packages` and `/api/v3/configure/plugin_environment/install_requirements`
This commit restructures our tests to look like Enterprise in their
layout. We break cli.rs into its own module, combine the server tests
and cli tests under one lib.rs file, and handle the changes to
visibility and import paths needed to make things work. The packages
tests have been cfg'd out as a module so that they would not need to be
gated on a per-test basis. Note that those tests fail locally for me
currently, but it seems we weren't testing them in CI at the moment
anyway.
There is no issue for this.
* feat: introduce parquet caching in query path
This commit scans the parquet files that will be used in a query to check
if they can be cached. There are three conditions to satisfy (sketched
below):
- not cached already
- cache has enough space
- file times overlap with the cache policy times
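A sketch of that predicate (all names hypothetical):

```rust
/// Metadata for one candidate parquet file.
struct ParquetFileMeta {
    path: String,
    size_bytes: u64,
    min_time: i64,
    max_time: i64,
}

/// The time window the cache policy is willing to hold.
struct CachePolicy {
    min_time: i64,
    max_time: i64,
}

impl CachePolicy {
    fn overlaps(&self, min: i64, max: i64) -> bool {
        min <= self.max_time && max >= self.min_time
    }
}

fn should_cache(
    file: &ParquetFileMeta,
    already_cached: impl Fn(&str) -> bool,
    remaining_capacity: u64,
    policy: &CachePolicy,
) -> bool {
    !already_cached(&file.path)                          // 1. not cached already
        && file.size_bytes <= remaining_capacity         // 2. cache has space
        && policy.overlaps(file.min_time, file.max_time) // 3. times overlap
}
```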
closes: https://github.com/influxdata/influxdb/issues/25906
* refactor: rename env var
This updates trigger creation to load the plugin file before creating the trigger.
Another small change is to make GitHub references use filenames and paths identical to what they would be in the plugin-dir. This makes it a little easier to have the plugins repo local and develop against it, and then be able to reference the same file later with gh: once it's up on the repo.
This refactors plugins and triggers so that plugins no longer need to be "created". Since plugins exist in either the configured local directory or on the GitHub repo, a user now only needs to create a trigger and reference the plugin filename.
Closes #25876
This change allows *both* the write and query commands to accept input
via stdin, a string, or a file. With this change, larger queries are more
feasible to write, as they can now be kept in a file, and smaller writes
via a string are now possible. This also makes the program work more like
people would expect it to, especially on unix-based systems.
This commit also contains three tests to make sure the functionality works
as expected.
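For example (invocation shapes illustrative, not the exact flags): `influxdb3 query 'SELECT 1'`, `influxdb3 query --file q.sql`, or `cat q.sql | influxdb3 query`.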
Closes #25772. Closes #25892.
* feat: first stab at locally updating parquet cache
closes: https://github.com/influxdata/influxdb/issues/25887
* refactor: use enums to separate out the modes
This commit introduces the `Immediate` and `Eventual` modes for
fulfilling a cache request. In immediate mode, since the data is
readily available to be cached, we can avoid extra requests to object
store.
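A sketch of the two modes (shape hypothetical):

```rust
use bytes::Bytes;

/// How a cache request gets fulfilled.
enum CacheRequestMode {
    /// The bytes are already in hand (e.g. just fetched or written),
    /// so cache them directly and skip another object store GET.
    Immediate(Bytes),
    /// Only the location is known; fetch from object store in the
    /// background before populating the cache entry.
    Eventual,
}
```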
part of: https://github.com/influxdata/influxdb/issues/25887
This commit does a few key things:
- Removes the 72 hour query and write restrictions in Core
- Limits the queries to a default number of parquet files. We chose 432
  as this is about 72 hours using default settings for the gen1
  timeblock (assuming the default 10-minute gen1 duration: 72 h × 6
  files/h = 432 files)
- The file limit can be increased, but the help text and error message
when exceeded note that query performance will likely be degraded as
a result.
- We warn users to use smaller time ranges if possible if they hit this
query error
With this we eliminate the hard restriction we had in place and instead
create a soft one that users can choose to exceed, taking the performance
hit. If they can't take that hit, it's recommended that they upgrade to
Enterprise, which has the compactor built in to make historical queries
performant.
This updates plugins so that they will reload the code if the local file is modified. GitHub plugins continue to be loaded only once, when they are initially created or loaded on startup.
This will make iterating on plugin development locally much easier.
Closes #25863
* feat: Add request plugin capability
Adds the request plugin type. Triggers can be bound to an API endpoint at /api/v3/engine/<path>. Requests will get yielded to the plugin with the query parameters, request parameters, and request body.
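For example, a trigger bound at `my_endpoint` would be hit with something like `curl http://localhost:8181/api/v3/engine/my_endpoint -d '{"key": "value"}'` (host and port illustrative).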
I didn't implement the test endpoint for this plugin type, as it seems much more natural for users to save the file and make a new request. Once #25863 is done, iterating will be very easy.
Closes #25862
* chore: fix spelling in error message
Although the `format` in the request is used, the value coming
through the `Accept` header is parsed earlier. So, when that header
lookup fails, an error is returned (`InvalidMimeType`).
In this commit, there are extra checks to allow the default `Accept`
header values that come from the browser by defaulting them to `json`.
* feat(processing_engine): Add cron plugins and triggers to the processing engine.
* feat(processing_engine): switch from 'cron plugin' to 'schedule plugin', use TimeProvider.
* feat(processing_engine): add a test for testing a scheduled plugin.
* feat: improve plugin logging interface
Updates the plugin log functions so they can take any number of Python objects which will be converted into a single log line string.
Closes #25847
* refactor: update on PR feedback
* feat: return better plugin execution errors
This sets up the framework for fleshing out more useful plugin execution errors that get returned to the user during testing. We'll also want to capture these for logging in system tables.
Also fixes a test that was broken in a previous commit on time limits. It didn't show up because of the feature flag.
* fix: compile errors without system-py feature
This updates the v1 /query API handler to handle InfluxDB v1's unique
query response structure when GROUP BY clauses are provided.
The distinction is in the addition of a "tags" field to the emitted series
data that contains a map of the GROUP BY tags along with their distinct
values associated with the data in the "values" field.
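For example (abridged), a `GROUP BY host` query yields one series per distinct tag value, each shaped like `{"name": "cpu", "tags": {"host": "a"}, "columns": [...], "values": [...]}`.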
This required splitting the QueryExecutor into two query paths for InfluxQL
and SQL, as this allowed for handling InfluxQL query parsing in advance
of query planning.
A set of snapshot tests was added to check that it all works.
This commit sets InfluxDB 3 Core to have a 72 hour limit for queries and
writes. What this means is that writes that contain historical data
older than 72 hours will be rejected and queries will filter out data
older than 72 hours. Core is intended as a database for recent timeseries
data, and performance over data older than 72 hours will degrade without a
garbage collector, a core feature of InfluxDB 3 Enterprise. InfluxDB 3
Enterprise does not have this write or query limit in place.
Note that this does *not* mean older data is deleted. Older data is
still accessible in object storage as Parquet files that can still be
used in other services and analyzed with dataframe libraries like pandas
and polars.
This commit does a few things:
- Uses timestamps in the year 2065 for tests as these should not break
for longer than many of us will be working in our lifetimes. This is
only needed for the integration tests as other tests use the
MockProvider for time.
- Filters the buffer and persisted files to only show data newer than
3 days ago
- Fixes the integration tests to work with the fact that writes older
than 3 days are rejected
This changes the CLI arg `host-id` to `writer-id` to more accurately
indicate meaning.
This change also goes through the codebase and changes struct fields,
methods, and variables to use the term `writer_id` or
`writer_identifier_prefix` instead of `host_id` etc., to make the meaning
clear in the code.
This also changes the catalog serialization to use the field `writer_id`
instead of `host_id`, which is a breaking change.
This updates the create plugin API and CLI so that it doesn't take the plugin code, but instead takes the name of a file that must be in the plugin-dir of the server. It returns an error if the plugin-dir is not configured or if the file isn't there.
Also updates the WAL and catalog so that it doesn't store the plugin code directly. The code is read from disk one time when the plugin runs.
Closes #25797
* feat: introduce num wal files to keep
This commit allows a configurable number of wal files to be left behind
in object storage. This is necessary as Enterprise replicas rely on these
files.
closes: https://github.com/influxdata/influxdb/issues/25788
* refactor: address PR feedback
* refactor: address PR comment
This allows the user to specify arguments that will be passed to each execution of a wal plugin trigger. The CLI test was updated to check this end to end.
Closes #25655
This updates the WAL so that it can have new file notifiers added that will get updated when the wal flushes. The processing engine now implements the WALNotifier trait.
I've updated the CLI test for creating a trigger to run an end-to-end test that defines a plugin, creates a trigger, writes data into the database, triggering the plugin, which writes summary statistics back into the database in a different table. The test queries the destination table to confirm that the plugin worked.
* feat: Update WAL plugin for new structure
This ended up being a very large change set. In order to get around circular dependencies, the processing engine had to be moved into its own crate, which I think is ultimately much cleaner.
Unfortunately, this required changing a ton of things. There's more testing and things to add on to this, but I think it's important to get this through and build on it.
Importantly, the processing engine no longer resides inside the write buffer. Instead, it is attached to the HTTP server. It is now able to take a query executor, write buffer, and WAL so that the full range of functionality of the server can be exposed to the plugin API.
There are a bunch of system-py feature flags littered everywhere, which I'm hoping we can remove soon.
* refactor: PR feedback
This ended up being a couple of things rolled into one. In order to add a query API to the Python plugin, I had to pull the QueryExecutor trait out of server into a place where the python crate could use it.
This implements the query API, but also fixes up the WAL plugin test CLI a bit. I've added a test in the CLI section so that it shows end-to-end operation of the WAL plugin test API and exercise of the entire Plugin API.
Closes #25757
This commit allows checking memory in the background and force
snapshotting if the query buffer size is above the memory threshold. This
hooks into the existing function (`force_flush_buffer`) to achieve it.
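A sketch of that background check (interval and type names hypothetical; `force_flush_buffer` is the hook named above):

```rust
use std::{sync::Arc, time::Duration};

// stand-ins for the real write buffer (names hypothetical)
struct WriteBuffer;
impl WriteBuffer {
    fn buffer_size_bytes(&self) -> usize { 0 }
    async fn force_flush_buffer(&self) {}
}

/// Periodically compare buffer size against the threshold and force a
/// snapshot when it is exceeded.
async fn background_mem_check(buffer: Arc<WriteBuffer>, threshold_bytes: usize) {
    let mut ticker = tokio::time::interval(Duration::from_secs(10));
    loop {
        ticker.tick().await;
        if buffer.buffer_size_bytes() > threshold_bytes {
            // reuse the normal snapshot path, just forced early
            buffer.force_flush_buffer().await;
        }
    }
}
```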
closes: https://github.com/influxdata/influxdb/issues/25685
* fix: get create token subcommand to work again
In #25737 token creation was broken: the command would attempt to
construct a client but hit an unreachable branch. The fix was to make that
branch create a Client that won't do anything. We also add a test to make
sure that if there is a regression we'll catch it in the future.
Closes #25764
* fix: make client turn a duration into seconds/u64
This commit fixes the creation of a metadata cache made via the CLI by
actually sending the time in seconds as a u64 and not as a duration. With
that we can now make metadata caches again when the `--max-age` flag is
specified.
This commit also adds a regression test for the CLI so that we can catch
this again in the future.
Closes #25749
This changes the `/query` API handler so that the parameters can be passed in either the request URI or in the request body for either a `GET` or `POST` request.
Parameters can be specified in the URI, the body, or both; if they are specified in both places, those in the body take precedence.
Error variants in the HTTP server code related to missing request parameters were updated to return `400` status.
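For example (assuming the v1-style `db`/`q` parameters): a `GET` to `/query?db=mydb&q=SELECT+1` and a `POST` with `db` and `q` in the body are both accepted; if `q` appears in both places, the body's value wins.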