The API user still CAN use dynamic dispatch but doesn't have to. This
also simplifies the generics a bit.
This is similar to #5520.
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
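A minimal sketch of the API shape described above, using a hypothetical `Catalog` trait (not the actual IOx type): the function stays generic, so callers get static dispatch by default but can still opt into a trait object.

```rust
// Hypothetical `Catalog` trait, purely to illustrate the dispatch choice.
trait Catalog {
    fn name(&self) -> &str;
}

struct MemCatalog;

impl Catalog for MemCatalog {
    fn name(&self) -> &str {
        "mem"
    }
}

// Generic over any `Catalog`; `?Sized` keeps `dyn Catalog` usable too.
fn describe<C: Catalog + ?Sized>(catalog: &C) -> String {
    format!("catalog: {}", catalog.name())
}

fn main() {
    // Static dispatch: no boxing, no vtable.
    println!("{}", describe(&MemCatalog));

    // Dynamic dispatch is still available when the caller wants it.
    let boxed: Box<dyn Catalog> = Box::new(MemCatalog);
    println!("{}", describe(boxed.as_ref()));
}
```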
* feat: Parse various InfluxQL literals
* feat: Parse regex, refactor single and double quoted string parsing
* chore: Literals do not include sign; those are unary expressions
* chore: Add docs
* chore: Integer literals are unsigned
Add more tests for max values
* chore: Impl Display for Literal; add macro to write escaped strings
Also added a `Duration` type for InfluxQL durations, so they can be properly
formatted when displayed.
The macro uses match to efficiently map a small number of characters
to their escaped equivalent. It also removes a bit of boilerplate.
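A rough sketch of what such an escape macro might look like (the macro and wrapper type names are illustrative, not the actual IOx code): a `match` maps the handful of special characters to their escape sequences and writes everything else through unchanged.

```rust
use std::fmt::{self, Display, Formatter, Write};

/// Hypothetical helper: writes `$s` to `$f`, escaping a small, fixed set of
/// characters via `match` and passing everything else through unchanged.
macro_rules! write_escaped {
    ($f:expr, $s:expr, $($ch:literal => $esc:literal),+ $(,)?) => {
        for c in $s.chars() {
            match c {
                $($ch => $f.write_str($esc)?,)+
                _ => $f.write_char(c)?,
            }
        }
    };
}

/// Illustrative wrapper that displays a single-quoted string literal.
struct SingleQuoted<'a>(&'a str);

impl Display for SingleQuoted<'_> {
    fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
        f.write_char('\'')?;
        write_escaped!(f, self.0, '\'' => "\\'", '\\' => "\\\\", '\n' => "\\n");
        f.write_char('\'')
    }
}

fn main() {
    // Prints: 'it\'s a\ntest'
    println!("{}", SingleQuoted("it's a\ntest"));
}
```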
* chore: Don't tie lifetime of AST elements to source `str`
* feat: Impl From trait for Literal, Regex and Duration
* chore: Derive Copy for Duration
* chore: PR Feedback, use unwrap_err for better output when API fails
* chore: Drive-by cleanup using unwrap_err
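For context, a tiny example of the `unwrap_err` pattern (the parse function here is illustrative): when the API unexpectedly succeeds, the panic message shows the surprising `Ok` value instead of a bare assertion failure.

```rust
// Illustrative only: a fallible parse function.
fn parse_number(s: &str) -> Result<i64, std::num::ParseIntError> {
    s.parse()
}

#[test]
fn invalid_input_is_rejected() {
    // If parsing unexpectedly succeeds, `unwrap_err` panics and prints the
    // `Ok` value, which is more useful than `assert!(res.is_err())`.
    let err = parse_number("not a number").unwrap_err();
    assert!(err.to_string().contains("invalid digit"));
}
```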
And only allow setting this when no record batch or line protocol is
specified, so that there is no way to create a parquet file whose data has
a mismatched row count.
* fix: loop forever in compact_hot_partition_candidates
* chore: cleanup
* fix: avoid `continue`s that would cause bugs in corner cases
* fix: Pass the compaction fn as a closure instead, to allow collecting the groups in tests (see the sketch below)
* fix: Add Send bound as suggested by clippy
* fix: fix the test to return data of round 3 instead of round 2
Co-authored-by: Carol (Nichols || Goulding) <carol.nichols@gmail.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
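A sketch of the closure-based approach (type and function names are made up for illustration): the compaction step becomes a parameter, so a test can pass a closure that merely records the groups, and the `Send` bounds allow the work to run on a multi-threaded executor.

```rust
use std::future::Future;

// Illustrative stand-in for the real compactor types.
#[derive(Debug)]
struct PartitionGroup {
    ids: Vec<i64>,
}

/// Takes the per-group compaction step as a closure so tests can swap in a
/// recording closure instead of real compaction; the `Send` bounds are needed
/// for the future to run on a multi-threaded (e.g. tokio) executor.
async fn compact_groups<C, Fut>(groups: Vec<PartitionGroup>, compact_fn: C)
where
    C: Fn(PartitionGroup) -> Fut + Send + Sync,
    Fut: Future<Output = ()> + Send,
{
    for group in groups {
        compact_fn(group).await;
    }
}

#[tokio::test]
async fn collects_groups_instead_of_compacting() {
    use std::sync::{Arc, Mutex};

    let seen = Arc::new(Mutex::new(Vec::new()));
    let groups = vec![
        PartitionGroup { ids: vec![1, 2] },
        PartitionGroup { ids: vec![3] },
    ];

    let seen_captured = Arc::clone(&seen);
    compact_groups(groups, move |g| {
        let seen = Arc::clone(&seen_captured);
        async move {
            seen.lock().unwrap().push(g);
        }
    })
    .await;

    assert_eq!(seen.lock().unwrap().len(), 2);
    assert_eq!(seen.lock().unwrap()[1].ids, vec![3_i64]);
}
```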
This removes some `Box<dyn ...>` indirection when the user doesn't want
it (you still can use it, but don't have to) and makes the type handling
as a whole easier to understand.
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
This limit restricts a single partition to containing at most N rows
before it is marked for persistence (note: being marked for persistence
does not currently prevent further ingest for that partition).
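A toy sketch of the behaviour described above (struct and method names are hypothetical, not the actual ingester code):

```rust
// Hypothetical sketch of the row-count check; names are illustrative.
struct PartitionBuffer {
    rows_written: usize,
    marked_for_persistence: bool,
}

impl PartitionBuffer {
    /// Buffer `new_rows` more rows and mark the partition for persistence
    /// once it holds at least `max_rows`. Being marked does not block
    /// further writes; ingest continues while persistence is pending.
    fn buffer_rows(&mut self, new_rows: usize, max_rows: usize) {
        self.rows_written += new_rows;
        if self.rows_written >= max_rows {
            self.marked_for_persistence = true;
        }
    }
}

fn main() {
    let mut p = PartitionBuffer { rows_written: 0, marked_for_persistence: false };
    p.buffer_rows(600, 1000);
    assert!(!p.marked_for_persistence);
    p.buffer_rows(600, 1000); // crosses the limit
    assert!(p.marked_for_persistence);
    p.buffer_rows(10, 1000); // further ingest is still accepted
    assert_eq!(p.rows_written, 1210);
}
```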
* feat: initial implementation of memory estimation for a compaction
* feat: estimate the size of files and take the right actions for the needed budget
* feat: run candidates in parallel
* fix: have the right name for the column field of the output struct
* feat: add metrics for estimated budgets
* chore: cleanup
* chore: Apply suggestions from code review
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
* fix: fix syntax after applying review's suggestions
* refactor: Convert a Vec to a VecDeque to better fit the push/pop access pattern (see the sketch below)
* chore: remove max_concurrent_size_bytes and input_size_threshold_bytes
* chore: remove input_file_count_threshold
* test: tests for estimate_arrow_bytes_for_file
Co-authored-by: Carol (Nichols || Goulding) <193874+carols10cents@users.noreply.github.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
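For reference, a small example of why a `VecDeque` fits this access pattern better than a plain `Vec` (the candidate names are made up):

```rust
use std::collections::VecDeque;

fn main() {
    // A Vec only pops efficiently from the back; a VecDeque supports O(1)
    // pushes and pops at both ends, which fits a work queue of candidates.
    let mut candidates: VecDeque<&str> = VecDeque::from(vec!["p1", "p2", "p3"]);

    // Take work from the front...
    while let Some(candidate) = candidates.pop_front() {
        println!("compacting {candidate}");
        // ...and re-queue follow-up work at the back if needed.
        if candidate == "p2" {
            candidates.push_back("p2-retry");
        }
    }
}
```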
Changes the compactor code to tolerate a SplitExec yielding an empty
partition (with no rows).
This raises a WARN as the situation in which this is acceptable is very
rare, and is more likely indicative of an opportunity to improve the
SplitExec usage (i.e. pruning out unnecessary split points).
Previously, when attempting to serialise a stream of one or more
RecordBatch containing no rows (resulting in an empty file), the parquet
serialisation code would panic.
This changes the code path to raise an error instead, to support the
compactor making multiple splits at once, which may overlap a single
chunk:
────────────── Time ────────────▶
│ │
┌█████──────────────────────█████┐
│█████ │ Chunk 1 │ █████│
└█████──────────────────────█████┘
│ │
│ │
Split T1 Split T2
In the example above, the chunk has an unusual distribution of write
timestamps over the time range it covers, with all data having a
timestamp before T1, or after T2. When running a SplitExec to slice
this chunk at T1 and T2, the middle of the resulting 3 subsets will
contain no rows. Because we store only the min/max timestamps in the
chunk statistics, it is unfortunately impossible to prune one of these
split points from the plan ahead of time.
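A simplified sketch of the panic-to-error change (the error type and function are illustrative, not the real parquet serialisation code):

```rust
use std::fmt;

#[derive(Debug)]
struct SerialiseError(&'static str);

impl fmt::Display for SerialiseError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "parquet serialisation error: {}", self.0)
    }
}

impl std::error::Error for SerialiseError {}

/// Instead of panicking on an all-empty input (which a SplitExec split
/// point falling in a rowless region can legitimately produce), surface
/// an error the caller can handle.
fn serialise(total_rows: usize) -> Result<Vec<u8>, SerialiseError> {
    if total_rows == 0 {
        return Err(SerialiseError("no rows to serialise"));
    }
    // ... real encoding would happen here; return a placeholder buffer.
    Ok(vec![0u8; total_rows])
}

fn main() {
    assert!(serialise(0).is_err());
    assert!(serialise(42).is_ok());
}
```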
1. Cache the converted schema instead of the catalog schema. This saves a
   bunch of memory copies during conversion.
2. Simplify creation of new chunks: we now only need a `CachedTable`
   instead of a namespace and a table schema.
In an artificial benchmark, this removed around 10ms from the query
(although that was prior to #5467, which moved schema conversion one
level up). Still, I think it is the cleaner cache design.
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
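A rough sketch of the caching idea, with made-up types standing in for the real catalog and Arrow schemas: the conversion happens once, and chunk creation only clones an `Arc`.

```rust
use std::sync::Arc;

// Illustrative shapes only; the real IOx types differ.
struct CatalogTableSchema {
    columns: Vec<(String, String)>, // (name, influx type)
}

struct ConvertedSchema {
    // Stand-in for the query-ready (e.g. Arrow) schema.
    fields: Vec<String>,
}

/// Cache the already-converted schema so creating a new chunk is just an
/// `Arc` clone instead of re-running the catalog-to-query-schema conversion
/// (and its memory copies) every time.
struct CachedTable {
    schema: Arc<ConvertedSchema>,
}

impl CachedTable {
    fn new(catalog: &CatalogTableSchema) -> Self {
        let fields = catalog.columns.iter().map(|(name, _)| name.clone()).collect();
        Self {
            schema: Arc::new(ConvertedSchema { fields }),
        }
    }

    /// Chunk creation only needs the cached table, not a namespace plus a
    /// table schema: cloning the `Arc` is cheap.
    fn schema_for_new_chunk(&self) -> Arc<ConvertedSchema> {
        Arc::clone(&self.schema)
    }
}

fn main() {
    let catalog = CatalogTableSchema {
        columns: vec![("time".to_string(), "timestamp".to_string())],
    };
    let cached = CachedTable::new(&catalog);
    let a = cached.schema_for_new_chunk();
    let b = cached.schema_for_new_chunk();
    assert!(Arc::ptr_eq(&a, &b)); // same converted schema, no copies
    assert_eq!(a.fields, vec!["time"]);
}
```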
* refactor: use a single timestamp in policy backend
Prior to this PR we had at least one `TimeProvider::now` call per GET
request (for caches that only used LRU) and up to three calls (caches with
LRU + refresh + TTL). Let's instead use a single timestamp that is
created by the policy backend itself (instead of the policies). This has
the following consequences:
- **efficiency:** `SystemProvider::now` is not free. Even though under Linux
  it doesn't result in a syscall, it still goes through the stdlib time
  system, which also checks for monotonicity.
- **consistency:** All changes for a single trigger (e.g. a
  GET cache call) now use a single timestamp instead of slightly
  increasing ones. I argue this is the better semantics: simpler to
  understand and easier to debug.
In a (slightly artificial) local performance experiment, this shaves off
around 2ms per single-table SQL query. However, I expect the savings to be
larger in more degenerate cases (e.g. multi-table SQL queries or some
InfluxRPC requests that hit multiple tables).
The majority of this patch is moving the `TimeProvider` from the
policies into the policy backend (see the sketch below).
* docs: explain `now` parameter
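A minimal sketch of the pattern, with illustrative types rather than the actual cache backend: the backend captures `now` once per trigger and passes the same timestamp to every policy.

```rust
use std::time::{Duration, Instant};

// Illustrative policy trait: the backend hands each policy the `now` it
// captured once per trigger, instead of each policy reading the clock.
trait Policy {
    fn on_get(&mut self, key: &str, now: Instant);
}

struct TtlPolicy {
    last_seen: Option<Instant>,
    ttl: Duration,
}

impl Policy for TtlPolicy {
    fn on_get(&mut self, key: &str, now: Instant) {
        if let Some(last) = self.last_seen {
            if now.duration_since(last) > self.ttl {
                println!("{key}: expired, would evict");
            }
        }
        self.last_seen = Some(now);
    }
}

struct PolicyBackend {
    policies: Vec<Box<dyn Policy>>,
}

impl PolicyBackend {
    fn get(&mut self, key: &str) {
        // One timestamp per trigger: every policy sees the same `now`,
        // and the clock is read only once.
        let now = Instant::now();
        for policy in &mut self.policies {
            policy.on_get(key, now);
        }
    }
}

fn main() {
    let mut backend = PolicyBackend {
        policies: vec![Box::new(TtlPolicy {
            last_seen: None,
            ttl: Duration::from_secs(60),
        })],
    };
    backend.get("table_schema");
    backend.get("table_schema");
}
```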
* chore: Update datafusion pin
* chore: Update now that user is a reserved word
* chore: Update cargo.lock
* fix: update query for user function
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>