* fix(compactor): prevent sort order mismatches from creating overlapping regions
* chore: test additions for incorrectly created regions
* fix(compactor): more sort order mismatch fixes
* chore: insta updates
* chore: insta updates after merge
Rather than always having to request all of a namespace's schema and
then filter it down to the table you want. This will make it more
consistent with upserting schema by namespace+table.
Fixes #4997.
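As a rough sketch of the difference, with stand-in types and method names rather than the real catalog API:

```rust
use std::collections::BTreeMap;

/// Illustrative stand-ins only; not the real iox catalog types.
struct TableSchema {
    columns: Vec<String>,
}
struct NamespaceSchema {
    tables: BTreeMap<String, TableSchema>,
}
struct Catalog {
    namespaces: BTreeMap<String, NamespaceSchema>,
}

impl Catalog {
    /// Old shape: callers fetch the whole namespace schema and filter it
    /// down to the table they actually want.
    fn schema_by_namespace(&self, ns: &str) -> Option<&NamespaceSchema> {
        self.namespaces.get(ns)
    }

    /// New shape: look the table schema up directly by namespace + table,
    /// mirroring how schema is upserted.
    fn schema_by_namespace_and_table(&self, ns: &str, table: &str) -> Option<&TableSchema> {
        self.namespaces.get(ns)?.tables.get(table)
    }
}
```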
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* fix(influxql): FILL(linear) for selectors
Ensure that selector functions such as FIRST, LAST, MIN and MAX can
use LINEAR filling in the same way as InfluxDB 1.8.
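For reference, LINEAR filling computes plain linear interpolation between the two surrounding points; a minimal illustration of the formula (not the project's implementation):

```rust
/// Linearly interpolate a missing value at time `t` between the known
/// points (t0, v0) and (t1, v1): the standard FILL(linear) formula.
fn fill_linear(t: f64, (t0, v0): (f64, f64), (t1, v1): (f64, f64)) -> f64 {
    v0 + (v1 - v0) * (t - t0) / (t1 - t0)
}

// A gap at t=20 between (10.0, 1.0) and (40.0, 4.0) fills as 2.0.
```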
* chore: review suggestions
Apply suggestions from the review. This adds more tests and support
for interpolation in SQL.
* fix: lint
* fix: lint
* chore: buffered input for struct arrays
Ensure that, for linear interpolation, buffering of a struct field's
input only stops when there is a non-null struct containing a non-null
value.
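A minimal sketch of that stop condition using Arrow's `StructArray` (the helper itself is made up for illustration):

```rust
use arrow::array::{Array, StructArray};

/// Buffering may only stop at `row` if the selector struct is non-null
/// *and* the value column inside it is non-null at that row.
fn can_stop_buffering(selector: &StructArray, value_col: usize, row: usize) -> bool {
    selector.is_valid(row) && selector.column(value_col).is_valid(row)
}
```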
* fix: integration test
* fix(iox_query): make clippy happy
---------
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
The layer that serializes our requests. This also contains the logic to
leave out non-serializable filters, like the V1 version does (same tests,
just arranged slightly differently).
For #8349.
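A rough sketch of the filtering idea, with a placeholder predicate type instead of the real filter representation:

```rust
/// Placeholder for the real filter expressions.
enum Predicate {
    /// Can be encoded into the request.
    Serializable(String),
    /// Has no wire representation and is simply left out of the request.
    NonSerializable,
}

/// Keep only the filters that can be sent over the wire, as the V1
/// client does.
fn serializable_filters(preds: Vec<Predicate>) -> Vec<String> {
    preds
        .into_iter()
        .filter_map(|p| match p {
            Predicate::Serializable(s) => Some(s),
            Predicate::NonSerializable => None,
        })
        .collect()
}
```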
* feat: more `TestResponse` constructors
* feat: "logging" layer for i->q V2 client
Logging layer for #8349. This mostly logs at debug level but emits errors
to the log. Simple implementation that can be extended later.
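A minimal sketch of that behaviour using `tracing` (the wrapper is hypothetical, not the actual layer):

```rust
use tracing::{debug, error};

/// Log successful calls at debug level and failures at error level.
async fn logged_call<T, E, F>(name: &str, fut: F) -> Result<T, E>
where
    F: std::future::Future<Output = Result<T, E>>,
    E: std::fmt::Display,
{
    debug!(%name, "issuing request");
    match fut.await {
        Ok(resp) => {
            debug!(%name, "request succeeded");
            Ok(resp)
        }
        Err(e) => {
            error!(%name, %e, "request failed");
            Err(e)
        }
    }
}
```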
---------
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
For #8350 we want to be able to stream record batches from the ingester
instead of waiting to buffer them fully before the query starts. Hence
we can no longer inspect the batches in the "display" implementation of
the plan.
This change mostly contains the display change, not the actual streaming
part. I'll do that in a follow-up.
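As a rough illustration of the display change (the type and field below are invented for the example):

```rust
use std::fmt;

/// Once batches are streamed instead of fully buffered, the plan node can
/// only describe metadata that is known up front.
struct IngesterScanDisplay {
    partition_count: usize,
}

impl fmt::Display for IngesterScanDisplay {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Previously this could list details of each buffered batch; with
        // streaming only summary information is available at plan time.
        write!(f, "IngesterScan: partitions={}", self.partition_count)
    }
}
```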
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* feat: have ingester's SortKeyState include sort_key_ids
* fix: test failures
* chore: address review comments
* chore: address review comments by adding asserts to catch any bugs
* chore: fix typo
* test: get column IDs for the tests
* refactor: reuse function
* chore: address review comments
This will enable some subsystems to trivially respect any `IngestStateError`
set while ignoring specific errors which they may be responsible for
resolving (such as WAL replay needing to ingest from disk when `DiskFull`
is set).
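A sketch of the idea, with an illustrative subset of variants and a made-up helper:

```rust
/// Illustrative subset of error variants.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum IngestStateError {
    DiskFull,
    ShuttingDown,
}

/// Fail on any current ingest-state error, except those the caller has
/// declared it can make progress despite (e.g. WAL replay and `DiskFull`).
fn check_ingest_state(
    current: Option<IngestStateError>,
    exceptions: &[IngestStateError],
) -> Result<(), IngestStateError> {
    match current {
        Some(e) if !exceptions.contains(&e) => Err(e),
        _ => Ok(()),
    }
}
```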
Optionally initialise the gossip subsystem in the compactor.
This will cause the compactor to perform PEX and join the cluster, but
as it registers no topic interests, it will not receive any
application-level payloads.
No messages are currently sent (in fact, gossip shuts down immediately).
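A sketch of the optional wiring; the types and functions below are stand-ins, not the real gossip API:

```rust
/// Stand-in configuration and handle types.
struct GossipConfig {
    seed_peers: Vec<String>,
}
struct GossipHandle;

/// Placeholder for the real cluster-join logic (peer exchange / PEX).
async fn join_cluster(_seeds: Vec<String>, _topic_interests: &[&str]) -> GossipHandle {
    GossipHandle
}

/// With no topic interests registered, the compactor joins the cluster but
/// never receives application-level payloads.
async fn maybe_start_gossip(config: Option<GossipConfig>) -> Option<GossipHandle> {
    match config {
        Some(cfg) => Some(join_cluster(cfg.seed_peers, &[]).await),
        None => None,
    }
}
```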
Prior to this commit, the NamespaceCache was only implemented for
Arc<MemoryNamespaceCache> instead of the cache type itself.
In the vast majority of cases, this Arc wrapper is completely
unnecessary - it adds both runtime overhead and code/type complexity.
This commit impls NamespaceCache for any Arc-wrapped NamespaceCache, and
removes all unnecessary Arc wrapping of the MemoryNamespaceCache.
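The pattern is a standard blanket impl for `Arc`-wrapped values; a simplified sketch (the trait here is reduced to one method):

```rust
use std::sync::Arc;

/// Greatly simplified stand-in for the real trait.
trait NamespaceCache {
    fn get_schema(&self, namespace: &str) -> Option<String>;
}

/// Any `Arc<T>` where `T` is a cache is itself a cache, delegating through
/// the pointer. Callers holding plain values no longer need to wrap them.
impl<T> NamespaceCache for Arc<T>
where
    T: NamespaceCache,
{
    fn get_schema(&self, namespace: &str) -> Option<String> {
        (**self).get_schema(namespace)
    }
}
```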
When we come across an `UnableToReadNextOps` error during Ingester
start-up, it means that one of two things has happened:
1. Ingester crashed mid-flush to WAL, so the write entry is incomplete
and the user received an error.
2. The WAL segment file has been corrupted in a way which cannot be
recovered from. The format does not allow for recovery of a bad
entry.
In either case we can safely drop the WAL file after all good entries
have been replayed and persisted so as not to block recovery.
See https://github.com/influxdata/influxdb_iox/issues/8613 for more
details.
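A sketch of that policy (error and disposition types are simplified placeholders):

```rust
/// Simplified placeholder for the WAL replay error.
#[derive(Debug)]
enum WalReplayError {
    UnableToReadNextOps,
}

/// What to do with the segment file once replay has finished.
enum SegmentDisposition {
    /// Safe to delete: everything readable has been replayed and persisted.
    Drop,
}

fn after_replay(result: Result<(), WalReplayError>) -> SegmentDisposition {
    match result {
        // Clean end of segment.
        Ok(()) => SegmentDisposition::Drop,
        // Torn final write or unrecoverable corruption: the good entries
        // before this point were already replayed and persisted, so the
        // file can be dropped rather than blocking start-up.
        Err(WalReplayError::UnableToReadNextOps) => SegmentDisposition::Drop,
    }
}
```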
This decouples the implementation of the `ClosedSegmentFileReader` from
the ability to replay batches of `Vec<SequencedWalOp>`s, making it easy
to test.
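A sketch of the decoupling: replay is written against any source of op batches, so tests can supply them in memory (types simplified):

```rust
/// Stand-in for the real op type.
struct SequencedWalOp;

/// Replay consumes any iterator of op batches rather than the concrete
/// `ClosedSegmentFileReader`, which keeps the replay logic easy to test.
fn replay_batches<E>(
    batches: impl Iterator<Item = Result<Vec<SequencedWalOp>, E>>,
) -> Result<usize, E> {
    let mut replayed = 0;
    for batch in batches {
        replayed += batch?.len();
    }
    Ok(replayed)
}
```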
Nothing currently relies on this public error variant, but for automatic
recovery we need the WAL to provide a contract that this error is
returned only when the next operation in the WAL is unsalvageable.