The distinct cache info for tables was not serialized in the catalog.
This fixes it, but also updates the catalog serialization to use the
snapshot type serialization from the Catalog type all the way down.
The Eq and PartialEq impls were removed from Catalog and InnerCatalog
as they were only used in tests, and were replaced by pure insta snapshot
tests.
A test was added to check that the distinct cache serializes/deserializes correctly.
Fixed a bug in the distinct cache where a projection that skipped
columns in the cache hierarchy caused a panic.
This simplifies the display of the projection in the DistinctCacheExec
in EXPLAIN output to include only the column name, not the column index.
In #25927 we missed that JSON queries were broken despite having some
tests use the format. This fixes JSON queries such that they now
properly contain a comma between RecordBatches. This commit also
includes tests for the formats that now stream data back (CSV, JSON, and
JSON Lines) so that we won't run into this issue again.
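As a minimal sketch (with hypothetical names, not the actual influxdb3 types), the fix amounts to tracking whether a batch has already been written, so the stream can emit `[` before the first batch and `,` before each subsequent one:

```rust
// Separator bookkeeping for a streamed JSON array of RecordBatches.
// Illustrative only: assumes each batch is already encoded as a JSON
// fragment like `{"a":1},{"a":2}`.
struct JsonStreamState {
    wrote_first_batch: bool,
}

impl JsonStreamState {
    fn new() -> Self {
        Self { wrote_first_batch: false }
    }

    /// Prefix a batch fragment with `[` for the first batch and `,` for
    /// every batch after it.
    fn frame_batch(&mut self, fragment: &str) -> String {
        let prefix = if self.wrote_first_batch { "," } else { "[" };
        self.wrote_first_batch = true;
        format!("{prefix}{fragment}")
    }

    /// Close the JSON array once the batch stream is exhausted.
    fn finish(&self) -> &'static str {
        if self.wrote_first_batch { "]" } else { "[]" }
    }
}

fn main() {
    let mut state = JsonStreamState::new();
    let mut out = String::new();
    for batch in [r#"{"a":1}"#, r#"{"a":2}"#] {
        out.push_str(&state.frame_batch(batch));
    }
    out.push_str(state.finish());
    assert_eq!(out, r#"[{"a":1},{"a":2}]"#);
}
```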
* deduplicate QueryParams->QueryRequest and Format->QueryFormat
* move WriteParams into influxdb3_types crate
* DRY up client HTTP request handling code in *RequestBuilder.send
methods.
* DRY up a bunch of other non-Builder http request handling
Partially fixes https://github.com/influxdata/influxdb/issues/24672
* move most HTTP req/resp types into `influxdb3_types` crate
* removes the use of locally-scoped request type structs from the `influxdb3_client` crate
* fix plugin dependency/package install bug
* it looks like the `DELETE` HTTP method was being used where `POST` was expected for `/api/v3/configure/plugin_environment/install_packages` and `/api/v3/configure/plugin_environment/install_requirements`
* feat: clear query buffer incrementally when snapshotting
This commit clears the query buffer incrementally, as soon as a table's
buffered data is written into a parquet file and cached. Previously,
clearing the buffer happened at the end, in the background.
* refactor: only clear buffer after adding to persisted files
* refactor: rename function
This commit allows us to stream data back for CSV and JSON formatted
queries. Prior to this we would buffer up all of the data in memory
before sending it back. Now we only buffer one RecordBatch at a time,
reducing memory overhead.
Note that due to the way the writer APIs work, and how Body in
hyper 0.14 works, we can't use a streaming body that we can write to.
This in turn means we have to use a manually written Future state
machine, which works but is far from ideal.
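As a rough sketch of the approach (not the actual implementation), the state machine is a hand-rolled `Stream` that polls the upstream batches and yields one encoded chunk at a time, which hyper 0.14 can consume via `Body::wrap_stream`; the `BatchStream` alias and the pre-encoded `Vec<u8>` items are stand-ins:

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

use bytes::Bytes;
use futures::{ready, Stream, StreamExt};

/// Placeholder for a stream of RecordBatches already encoded as bytes.
type BatchStream = Pin<Box<dyn Stream<Item = Vec<u8>> + Send>>;

enum State {
    Streaming,
    Done,
}

struct EncodedBody {
    batches: BatchStream,
    state: State,
}

impl Stream for EncodedBody {
    type Item = Result<Bytes, std::convert::Infallible>;

    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        // All fields are Unpin, so it is safe to take a mutable reference.
        let this = self.get_mut();
        loop {
            match this.state {
                State::Done => return Poll::Ready(None),
                State::Streaming => match ready!(this.batches.poll_next_unpin(cx)) {
                    // One encoded batch is ready: hand it over as a body chunk.
                    Some(encoded) => return Poll::Ready(Some(Ok(Bytes::from(encoded)))),
                    // Upstream is exhausted: finish the body.
                    None => this.state = State::Done,
                },
            }
        }
    }
}
```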
Note this does not make the pretty and parquet formats streamable.
I'm attempting to get the pretty format to stream, but I don't think it
or parquet is as amenable to being streamed back to the user. In general
we might want to discourage these formats from being used.
This commit restructures our tests to look like Enterprise in their
layout. We break cli.rs into its own module, combine the server tests
and cli tests under one lib.rs file, and handle the changes to
visibility and import paths needed to make things work. The packages
tests have been cfged out as a module so that they would not need to be
added on a per-test basis. Note that those tests currently fail for me
locally, but it seems like we weren't testing them in CI at the
moment.
There is no issue for this.
* feat: introduce parquet caching in query path
This commit scans the parquet files that will be used in a query to
check if they can be cached. There are three conditions to satisfy:
- not cached already
- cache has enough space
- file times overlap with the cache policy times
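A sketch of that predicate with stand-in types (`ParquetFile`, `CachePolicy`, and `Cache` here are illustrative, not the real influxdb3 structs):

```rust
use std::collections::HashSet;

struct ParquetFile {
    path: String,
    size_bytes: u64,
    min_time_ns: i64,
    max_time_ns: i64,
}

struct CachePolicy {
    // Only files overlapping [from_ns, to_ns] are cache-eligible.
    from_ns: i64,
    to_ns: i64,
}

struct Cache {
    capacity_bytes: u64,
    used_bytes: u64,
    entries: HashSet<String>,
}

impl Cache {
    fn should_cache(&self, file: &ParquetFile, policy: &CachePolicy) -> bool {
        // 1. not cached already
        !self.entries.contains(&file.path)
            // 2. cache has enough space
            && self.used_bytes + file.size_bytes <= self.capacity_bytes
            // 3. file times overlap with the cache policy times
            && file.max_time_ns >= policy.from_ns
            && file.min_time_ns <= policy.to_ns
    }
}
```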
closes: https://github.com/influxdata/influxdb/issues/25906
* refactor: rename env var
This updates trigger creation to load the plugin file before creating the trigger.
Another small change makes GitHub references use filenames and paths identical to what they would be in the plugin-dir. This makes it a little easier to have the plugins repo local, develop against it, and then reference the same file later with gh: once it's up on the repo.
This speeds up snapshot persistence by taking all of the persist jobs
and running them concurrently on a JoinSet, rather than waiting for each
file to persist before the next one can be persisted. All of the
persisting now runs at the same time on the tokio runtime.
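A minimal sketch of the pattern, where `persist` stands in for the real parquet persist job:

```rust
use tokio::task::JoinSet;

// Stand-in for persisting one snapshot file and returning its id.
async fn persist(job: u64) -> Result<u64, String> {
    Ok(job)
}

async fn persist_all(jobs: Vec<u64>) -> Result<(), String> {
    let mut set = JoinSet::new();
    for job in jobs {
        // Spawn every persist job up front so they run concurrently on
        // the tokio runtime instead of one after another.
        set.spawn(persist(job));
    }
    // Drain the set, surfacing the first join or persist error.
    while let Some(joined) = set.join_next().await {
        joined.map_err(|e| e.to_string())??;
    }
    Ok(())
}
```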
Closes #24658
This refactors plugins and triggers so that plugins no longer need to be "created". Since plugins exist in either the configured local directory or on the Github repo, a user now only needs to create a trigger and reference the plugin filename.
Closes #25876
This change allows *both* the write and query commands to accept input
via stdin, a string, or a file. With this change, larger queries are
more feasible to write, as they can now be kept in a file, and smaller
writes via a string are now possible. This also makes the program work
more like people would expect it to, especially on unix-based systems.
This commit also contains three tests to make sure the functionality works
as expected.
Closes #25772
Closes #25892
* feat: first stab at locally updating parquet cache
closes: https://github.com/influxdata/influxdb/issues/25887
* refactor: use enums to separate out the modes
This commit introduces the `Immediate` and `Eventual` modes for
fulfilling the cache request. In immediate mode, since the data is
readily available to be cached, we can avoid extra requests to the
object store.
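A hypothetical shape for the two modes (not the exact influxdb3 types):

```rust
use std::sync::Arc;

enum CacheRequest {
    // The data is already in hand (e.g. it was just written), so it can
    // be cached directly without another object store GET.
    Immediate { path: String, data: Arc<[u8]> },
    // The data must first be fetched from object store before caching.
    Eventual { path: String },
}
```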
part of: https://github.com/influxdata/influxdb/issues/25887
This commit does a few key things:
- Removes the 72 hour query and write restrictions in Core
- Limits the queries to a default number of parquet files. We chose 432
as this is about 72 hours using default settings for the gen1
timeblock
- The file limit can be increased, but the help text and error message
when exceeded note that query performance will likely be degraded as
a result.
- We warn users who hit this query error to use smaller time ranges if
possible
With this we eliminate the hard restriction we had in place and instead
create a soft one that users can choose to take the performance hit
with. If they can't take that hit, it's recommended that they upgrade to
Enterprise, which has the compactor built in to make historical queries
performant.
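As a sketch, the soft limit boils down to a check like the following (the names and error wording are illustrative):

```rust
// 432 gen1 files at default settings covers roughly 72 hours of data.
const DEFAULT_PARQUET_FILE_LIMIT: usize = 432;

fn check_file_limit(num_files: usize, limit: usize) -> Result<(), String> {
    if num_files > limit {
        return Err(format!(
            "query would scan {num_files} parquet files, exceeding the \
             limit of {limit}; try a smaller time range, or raise the \
             limit at the cost of query performance"
        ));
    }
    Ok(())
}
```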
* refactor: reduce catalog locks when getting chunks
The main refactor was to change the ChunkContainer trait to use the
DatabaseSchema and TableDefinition types directly in the signature,
instead of the database and table names, which required an additional
catalog lock and lookups for both entities. The lookups were already
handled upstream in the QueryTable, so there was no need to do them
again.
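A rough before/after of the trait change, with heavily simplified stand-in types and signatures:

```rust
use std::sync::Arc;

struct DatabaseSchema;
struct TableDefinition;
struct QueryChunk;

trait ChunkContainer {
    // Before: name-based lookups forced another catalog lock plus a
    // lookup for each entity.
    // fn get_chunks(&self, db: &str, table: &str) -> Vec<QueryChunk>;

    // After: callers pass the schema and definition they already
    // resolved upstream in the QueryTable, so no second lookup is needed.
    fn get_chunks(
        &self,
        db_schema: Arc<DatabaseSchema>,
        table_def: Arc<TableDefinition>,
    ) -> Vec<QueryChunk>;
}
```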
The refactor required adding a test helper in influxdb3_write::test_helpers
that provides convenience methods for getting record batches from the
WriteBuffer. We had been implementing such a method manually in several
places, so it is nice to have it unified. The helper is a blanket impl,
so anything implementing WriteBuffer gets the method.
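A sketch of the blanket impl idea with illustrative names, where any type implementing WriteBuffer picks up the helper for free:

```rust
struct RecordBatch;

trait WriteBuffer {
    fn chunks(&self) -> Vec<RecordBatch>;
}

// Test-helper trait with a blanket impl over every WriteBuffer.
trait WriteBufferTester {
    fn get_record_batches(&self) -> Vec<RecordBatch>;
}

impl<T: WriteBuffer> WriteBufferTester for T {
    fn get_record_batches(&self) -> Vec<RecordBatch> {
        // Simplified: the real helper runs a query against the buffer
        // and collects the resulting record batches.
        self.chunks()
    }
}
```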
Some other house cleaning was included.
* refactor: clean up test helpers in influxdb3_write
* refactor: pass original df filters forward with ChunkFilter
* chore: clippy
This updates plugins so that they will reload the code if the local file is modified. GitHub plugins continue to be loaded only once, when they are initially created or loaded on startup.
This will make iterating on plugin development locally much easier.
Closes #25863
* feat: Add request plugin capability
Adds the request plugin type. Triggers can be bound to an API endpoint at /api/v3/engine/<path>. Requests will get yielded to the plugin with the query parameters, request parameters, and request body.
I didn't implement the test endpoint for this plugin type, as it seems much more natural for users to save the file and make a new request. Once #25863 is done, that will be very easy.
Closes #25862
* chore: fix spelling in error message
Although the `format` in the request is used, the value coming
through the `Accept` header is parsed earlier. So, when that header
lookup fails, an `InvalidMimeType` error is returned.
This commit adds extra checks to allow the default `Accept` header
values that come from a browser, by defaulting them to `json`.
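A sketch of the fallback logic with hypothetical names, mapping browser-style `Accept` values to JSON instead of failing:

```rust
enum QueryFormat {
    Json,
    Csv,
}

fn format_from_accept(accept: Option<&str>) -> Result<QueryFormat, String> {
    match accept {
        // Missing header or wildcard: default to JSON.
        None | Some("*/*") => Ok(QueryFormat::Json),
        // Typical browser default, e.g. `text/html,application/xhtml+xml,...`.
        Some(v) if v.starts_with("text/html") => Ok(QueryFormat::Json),
        Some("application/json") => Ok(QueryFormat::Json),
        Some("text/csv") => Ok(QueryFormat::Csv),
        Some(other) => Err(format!("invalid mime type: {other}")),
    }
}
```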
closes: https://github.com/influxdata/influxdb/issues/25874