This commit is part of a two-part change to add the table &
namespace IDs to the write buffer wire format. This commit forms the
first half: changing the producer to send the IDs.
In this commit the new ID values are never read on the consumer side,
ensuring there is no consumer dependency on them. This keeps the
consumers operational during a rollout, where the consumer may be
updated to the latest code (dependent on the IDs) before the producer is
updated to send them. It also gives us a window of time in which the
consumers can be rolled back after being updated, and still handle
replaying messages in Kafka.
Changes the DmlWrite type to require a PartitionKey be specified,
instead of accepting an Option.
This requirement was already in place: the write buffer upheld an
invariant that all writes contained a partition key value (i.e. it was
not "None"), panicking at runtime when attempting to enqueue a write
without one.
It is now possible to encode this invariant in the type system, which is
what this change does.
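A minimal sketch of the shape of that change (field and constructor names are assumed here, not the actual `DmlWrite` definition): moving from `Option<PartitionKey>` to `PartitionKey` turns the old runtime panic into a compile-time requirement.

```rust
#[derive(Debug, Clone)]
pub struct PartitionKey(String);

// Before: the key was optional and the write buffer panicked on `None`
// when enqueueing. After: constructing a write without a key no longer
// type-checks.
pub struct DmlWrite {
    partition_key: PartitionKey, // was: Option<PartitionKey>
    // ...table data, sequence metadata, etc.
}

impl DmlWrite {
    pub fn new(partition_key: PartitionKey) -> Self {
        Self { partition_key }
    }

    pub fn partition_key(&self) -> &PartitionKey {
        &self.partition_key
    }
}
```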
We basically assume everywhere that a column falls into one of the three
known categories (time, tag, field), so let's encode this in our type
system instead of defining "unknown" as "undefined behavior, may or may
not crash".
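As a sketch of the idea (the real type lives in IOx's schema code; this is an assumed, simplified shape): an exhaustive enum makes the "unknown" case unrepresentable, and every match over it is checked by the compiler.

```rust
/// Every column is exactly one of these; there is no "unknown" variant,
/// so code cannot forget to handle it.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ColumnCategory {
    Time,
    Tag,
    Field,
}

fn describe(category: ColumnCategory) -> &'static str {
    // Exhaustive match: adding a variant later is a compile error here,
    // not undefined behavior at runtime.
    match category {
        ColumnCategory::Time => "timestamp column",
        ColumnCategory::Tag => "dimension/tag column",
        ColumnCategory::Field => "value/field column",
    }
}
```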
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* refactor: `IOxReadFilterNode` can always accumulate statistics
`IOxReadFilterNode` used to not emit statistics if one chunk had
duplicates or delete predicates. This is wrong (or at least overly
conservative), because the node itself (or the chunks themselves) do NOT
perform dedup or delete predicate filtering. Instead, this is done by
parent nodes (`DeduplicateExec` and `FilterExec`), and it is their job
to propagate statistics correctly.
Helps w/ #5897.
* test: explain setup
Co-authored-by: Andrew Lamb <alamb@influxdata.com>
Use the proper top-level DataFusion context and register the object
store there.
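A minimal sketch of that wiring, assuming a recent DataFusion/object_store API (the exact `register_object_store` signature has shifted across versions):

```rust
use std::sync::Arc;

use datafusion::prelude::SessionContext;
use object_store::memory::InMemory;
use url::Url;

// Build the one top-level context and register the object store with its
// runtime environment, so any plan node resolving paths under this base
// URL finds the store.
fn build_ctx() -> SessionContext {
    let ctx = SessionContext::new();
    let store = Arc::new(InMemory::new()); // stand-in for the real store
    let base = Url::parse("s3://my-bucket").expect("valid URL");
    ctx.runtime_env().register_object_store(&base, store);
    ctx
}
```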
Note that we still hide the `ParquetExec` behind an opaque record batch
stream. Fixing that is next on my list.
Helps with #5897.
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* feat: query only from parquet
* Revert "feat: query only from parquet"
This reverts commit 5ce3c3449c0b9c90154c8c6ece4a40a9c083b7ba.
* Revert "revert: disable read buffer usage in querier (#5579) (#5603)"
This reverts commit df5ef875b4.
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* fix: only emit ttbr metric for applied ops
* fix: move DmlApplyAction to s/w accessible
* chore: test for skipped ingest; comments and log improvements
* fix: fixed ingester test re skipping write
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
This commit removes tombstone support from the ingester, and deletes
associated code/helpers/tests. This commit does NOT remove tombstone
support from any other service, but MAY include removing overlapping
test coverage.
This also removes the tombstone support from the Ingester -> Querier RPC
response message.
This has the nice side effect of removing a whole lot of thread spawning
in the ingester tests for the Executor, speeding everything up!
* feat: initial step to identify where the projection should be provided
* feat: start getting columns of all expressions
* chore: format
* test: test for the table_chunk_stream
* fix: fix a compile error. Thanks @alamb
* test: full tests for table_chunk_stream
* chore: cleanup
* fix: do not cut any columns in case all fields are needed
* test: add one more test case of reading all columns
* refactor: move code that identifies columns to push down into a function. Add the use of field_columns
* chore: cleanup
* refactor: make stream_from_batch support empty batches
* chore: cleanup
* chore: fix clippy after auto merge
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
This commit changes the persist() call so that it passes through all
relevant IDs so that the impl can locate the partition in the buffer
tree - this will enable elimination of many queries against the catalog
in the future.
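Sketched below with assumed ID newtypes (not the exact IOx signature), the widened call passes everything needed to walk the buffer tree directly:

```rust
// Assumed ID newtypes mirroring the ones used throughout the ingester.
#[derive(Debug, Clone, Copy)]
pub struct ShardId(i64);
#[derive(Debug, Clone, Copy)]
pub struct NamespaceId(i64);
#[derive(Debug, Clone, Copy)]
pub struct TableId(i64);
#[derive(Debug, Clone, Copy)]
pub struct PartitionId(i64);

// The persist call now receives every ID needed to locate the partition
// in the shard -> namespace -> table -> partition buffer tree, instead
// of querying the catalog for them.
pub async fn persist(
    shard_id: ShardId,
    namespace_id: NamespaceId,
    table_id: TableId,
    partition_id: PartitionId,
) {
    let _ = (shard_id, namespace_id, table_id, partition_id);
    // ...locate the partition in the buffer tree and persist it...
}
```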
This commit also cleans up the persist() impl, deferring queries until
the result will be used to avoid unnecessary load, improves logging &
error handling, and documents a TOCTOU bug in code:
https://github.com/influxdata/influxdb_iox/issues/5777
Changes the NoopLifecycleHandle to MockLifecycleCall, and adds code
causing it to log all calls made to the log_write() method.
This will allow tests to assert calls and their values in DML buffering
tests.
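A minimal sketch of that pattern (the call-record shape is assumed here, not the real `LifecycleHandle` signature): the mock appends each `log_write()` invocation to a shared vector that tests can inspect afterwards.

```rust
use std::sync::Mutex;

/// Hypothetical shape of a recorded call; the real method takes the
/// ingester's own ID and byte-count types.
#[derive(Debug, Clone, PartialEq)]
struct LogWriteCall {
    partition_id: i64,
    bytes_written: usize,
}

#[derive(Debug, Default)]
struct MockLifecycleCall {
    calls: Mutex<Vec<LogWriteCall>>,
}

impl MockLifecycleCall {
    /// Record the call so tests can assert on it later.
    fn log_write(&self, partition_id: i64, bytes_written: usize) -> bool {
        self.calls
            .lock()
            .unwrap()
            .push(LogWriteCall { partition_id, bytes_written });
        false // never request a pause, like the no-op impl
    }

    /// Snapshot of all calls made so far, for test assertions.
    fn calls(&self) -> Vec<LogWriteCall> {
        self.calls.lock().unwrap().clone()
    }
}
```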
Changes the lifecycle handle to also track the namespace + table ID in
addition to the existing shard ID.
Adds asserts to ensure the values never vary for a given partition.
Cache the 10,000 most recent partitions at startup, and share them
across all shards.
At the time of this commit, there are approximately 8,000 partitions per
day, per ingester, so this should cache all of the partitions seen so
far in a given day at startup.
Move the initialisation of ShardData (an internal ingester data
structure) into the ingester itself.
Previously callers would initialise the ingester state, and pass it into
the IngesterData constructor.
* refactor: do not run de-dup in ingester for querier requests
This removes the entire de-dup logic from the ingester for querier
requests. Furthermore, it even removes the entire DataFusion execution
from the querier and just dumps the in-memory record batches as quickly
as possible. No filters are applied. Note that even prior to this PR,
we've never applied projections (tracked by #5624).
**Pros:**
- speed up query planning within the querier (since we need the ingester
response for state reconciling)
- lowered ingester CPU load
**Cons:**
- more querier<>ingester network traffic
Closes #5602.
* test: extend query test case
* fix: ingester tests
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* refactor: retry querier->ingester requests
Especially for InfluxRPC requests that scan multiple tables, it may be
that a single ingester request fails. We shall retry that request
instead of failing the entire query.
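The retry itself can be as simple as the following sketch (the attempt bound and error handling are assumptions, not the actual querier code):

```rust
// Re-issue a failed ingester request a few times before giving up on
// the whole query.
async fn request_with_retry<F, Fut, T, E>(max_attempts: usize, mut request: F) -> Result<T, E>
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Result<T, E>>,
{
    let mut attempt = 0;
    loop {
        match request().await {
            Ok(response) => return Ok(response),
            Err(e) => {
                attempt += 1;
                if attempt >= max_attempts {
                    // retries exhausted: only now fail the query
                    return Err(e);
                }
                // transient failure: issue the request again
            }
        }
    }
}
```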
* refactor: improve docs
Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>
* fix: less foo
* docs: remove outdated TODO
* test: assert that panic happened
Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>
* refactor: concurrent table scan in "field columns"
Similar to #5647 and #5649.
* docs: improve
Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
Instead of passing the ShardId into each function for child nodes of the
Shard, store it. This avoids the possibility of mistakenly passing the
wrong value.
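A sketch of the pattern with assumed type names: the child node captures the `ShardId` at construction, so later calls have no ID parameter left to get wrong.

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct ShardId(i64);

// A child node of the Shard: the ID is stored once, at construction.
struct NamespaceData {
    shard_id: ShardId,
}

impl NamespaceData {
    fn new(shard_id: ShardId) -> Self {
        Self { shard_id }
    }

    // No ShardId argument to mistakenly pass; the stored value is used.
    fn buffer_operation(&self) {
        let _shard_id = self.shard_id;
        // ...apply the operation under this shard...
    }
}
```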
* refactor: concurrent table planning in InfluxRPC
Some InfluxRPC requests can scan multiple tables. Prior to this PR we
were always scanning the tables in sequence, adding up potential
latencies (catalog, ingester, object store). There is no reason we need
to do this; "ordinary" SQL queries would not serialize this way either.
So let's scan tables concurrently. This adds concurrency to:
- read filter
- read group
- read window aggregate
There are other query types that could benefit from a similar treatment.
They will be changed in a follow-up.
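A minimal sketch of the bounded-concurrency pattern using the futures crate (`plan_table`, `TablePlan`, and the error type are stand-ins, not the real planner):

```rust
use futures::stream::{self, StreamExt, TryStreamExt};

/// Upper bound on tables planned at the same time.
const CONCURRENT_TABLE_JOBS: usize = 10;

#[derive(Debug)]
struct TablePlan; // placeholder for the per-table physical plan

// Stand-in for the real per-table planning (catalog, ingester, object
// store I/O).
async fn plan_table(_table: &str) -> Result<TablePlan, String> {
    Ok(TablePlan)
}

// Plan all tables with bounded concurrency instead of strictly in
// sequence, so per-table latencies overlap rather than add up.
async fn plan_tables(tables: Vec<String>) -> Result<Vec<TablePlan>, String> {
    stream::iter(tables)
        .map(|table| async move { plan_table(&table).await })
        .buffered(CONCURRENT_TABLE_JOBS)
        .try_collect()
        .await
}
```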
* docs: improve
Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>
* test: explain `Send` assertion
* refactor: change `CONCURRENT_TABLE_JOBS` to 10
Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>