Removes reliance on string name identifiers for namespaces in the
ingester buffer tree, reducing the memory usage of the namespace index
and associated overhead.
The namespace name is required (though unused by IOx) in the IoxMetadata
embedded within a parquet file, and therefore the name is necessary at
persist time. For this reason, a DeferredLoad is used to query the
catalog (by ID) for the name after a uniformly random delay following
initialisation of the NamespaceData, up to a maximum of 1 minute
later. This ensures the query remains off the hot ingest path, and the
jitter prevents spikes in catalog load during replay/ingester startup.
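A minimal sketch of the jittered load delay (illustrative, not the
exact DeferredLoad internals):
```rust
use std::time::Duration;
use rand::Rng;

/// Pick a uniformly random delay in [0, max_wait) before issuing the
/// catalog query, so namespaces initialised together (e.g. during
/// replay) do not all hit the catalog at once.
fn background_load_delay(max_wait: Duration) -> Duration {
    let millis = rand::thread_rng().gen_range(0..max_wait.as_millis() as u64);
    Duration::from_millis(millis)
}

// e.g. tokio::time::sleep(background_load_delay(Duration::from_secs(60))).await;
```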
As an additional, easy optimisation, the persist code triggers a
pre-fetch of the name in the background while compacting, hiding the
query latency should the name not already have been resolved.
To keep the ingester buffer & catalog decoupled / easily testable, this
commit uses a provider/factory trait (NamespaceNameProvider) and a
corresponding implementation (NamespaceNameResolver) in a similar
fashion to the PartitionResolver. This allows easy mocking for tests
and composition for prod code, leaving room for future optimisations
such as pre-fetching / caching the "hot" namespace names at startup.
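A sketch of that indirection; the trait and type names follow the
commit message, but the signatures and the mock are illustrative:
```rust
use async_trait::async_trait;

/// Provider of namespace names, keyed by catalog ID.
#[async_trait]
trait NamespaceNameProvider: Send + Sync {
    async fn namespace_name(&self, namespace_id: i64) -> String;
}

/// Test double: returns a fixed name without touching the catalog.
struct MockNamespaceNameProvider(String);

#[async_trait]
impl NamespaceNameProvider for MockNamespaceNameProvider {
    async fn namespace_name(&self, _namespace_id: i64) -> String {
        self.0.clone()
    }
}

// The production NamespaceNameResolver would implement the same trait
// by querying the catalog by ID, and is the place where pre-fetching /
// caching of "hot" names could later be layered in.
```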
Internal string identifier removal is a pre-requisite for removing
string identifiers from the write wire format (#4880).
Changes the ingester's buffer tree to use the deferred loading primitive
to resolve the namespace name for NamespaceData.
Note that the loader is initialised with the name in the first place -
this commit just introduces the use of the deferred loading primitive,
and doesn't change where the name is sourced from.
This lets deferred loads be used in place of a non-deferred T, such as
log context fields.
If the value has not been resolved, the display impl returns
"<unresolved>".
Allow a caller to signal to the DeferredLoad that the value it may or
may not have to materialise will be used imminently, optimistically
hiding the latency of resolving the value (typically a catalog query).
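The intended call pattern looks roughly like this (a self-contained
stand-in; the type, method names and catalog query are illustrative,
not the real API):
```rust
use std::sync::Arc;
use tokio::sync::OnceCell;

/// Minimal stand-in for the deferred value: prefetch_now() starts
/// resolution in the background and returns immediately, get() awaits it.
#[derive(Clone)]
struct DeferredName(Arc<OnceCell<String>>);

impl DeferredName {
    fn prefetch_now(&self) {
        let cell = Arc::clone(&self.0);
        // Resolution happens in a spawned task; the caller is not blocked.
        tokio::spawn(async move {
            cell.get_or_init(fetch_name_from_catalog).await;
        });
    }

    async fn get(&self) -> String {
        self.0.get_or_init(fetch_name_from_catalog).await.clone()
    }
}

/// Placeholder for the catalog query.
async fn fetch_name_from_catalog() -> String {
    "example_namespace".to_string()
}

/// Persist-time call pattern: hide the catalog query behind compaction.
async fn persist(name: DeferredName) {
    name.prefetch_now();
    // ... compact the buffered data here ...
    let _namespace_name = name.get().await; // likely already resolved
}
```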
* refactor: generic deferred loader helper
Splits the DeferredSortKey loader introduced in #5807 into two parts - a
generic helper type that implements deferred/background loading of
values, and SortKey specific logic for use with it.
As this will be more widely used, this implementation features improved
behaviour of the deferred loader under concurrent demand requests
(multiple calls to get() do not attempt to concurrently resolve the
value), as well as complete cancellation safety (cancelling the get()
doesn't affect the liveness of the background task).
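A condensed sketch of that behaviour (illustrative; the real helper
also applies the jittered delay and more careful error handling). The
load always runs in a single spawned task, so concurrent get() callers
coalesce onto one in-flight resolution, and dropping a get() future
cannot cancel the load itself:
```rust
use std::{future::Future, sync::Arc, time::Duration};
use tokio::sync::{watch, Notify};

struct DeferredLoad<T> {
    rx: watch::Receiver<Option<T>>,
    demand: Arc<Notify>,
}

impl<T: Clone + Send + Sync + 'static> DeferredLoad<T> {
    fn new<F, Fut>(max_wait: Duration, loader: F) -> Self
    where
        F: FnOnce() -> Fut + Send + 'static,
        Fut: Future<Output = T> + Send + 'static,
    {
        let (tx, rx) = watch::channel(None);
        let demand = Arc::new(Notify::new());
        let wake = Arc::clone(&demand);

        // The single background task that owns the load.
        tokio::spawn(async move {
            // Load when demanded, or after the background delay (jittered
            // in the real helper), whichever comes first.
            tokio::select! {
                _ = wake.notified() => {}
                _ = tokio::time::sleep(max_wait) => {}
            }
            let _ = tx.send(Some(loader().await));
        });

        Self { rx, demand }
    }

    /// Resolve the value, demanding an immediate load if it hasn't
    /// completed. Dropping this future leaves the background task untouched.
    async fn get(&self) -> T {
        self.demand.notify_one();
        let mut rx = self.rx.clone();
        loop {
            if let Some(v) = rx.borrow_and_update().clone() {
                return v;
            }
            // If the task has already exited, the final value is still
            // observable via borrow().
            if rx.changed().await.is_err() {
                return rx.borrow().clone().expect("no value produced");
            }
        }
    }
}
```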
* docs: doc-link & minor comment amendments
Fixes naming, adds missing doc-links, and expands some code comments.
* test: bound wait times to avoid hangs
Adds timeouts to all .await of the code under test, ensuring tests don't
hang if something goes wrong.
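For example (illustrative test name and bound):
```rust
use std::time::Duration;
use tokio::time::timeout;

#[tokio::test]
async fn does_not_hang() {
    // Bound the await on the code under test so a regression fails fast
    // instead of wedging CI.
    timeout(Duration::from_secs(5), async {
        // ... drive the code under test here ...
    })
    .await
    .expect("timed out waiting on code under test");
}
```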
Currently we see some prod panics:
```
'assertion failed: len <= std::u32::MAX as usize', tonic/src/codec/encode.rs:127:5
```
This is due to an upstream bug in tonic:
https://github.com/hyperium/tonic/issues/1141
However, that fix will only turn the panic into an error. We should
instead NOT return such overlarge results in the first place,
especially because InfluxRPC supports streaming.
While we currently don't perform streaming conversion (like streaming
the data out of the query stack into the gRPC layer), the 4GB size limit
can easily be triggered (in prod) with enough RAM. So let's re-chunk our
in-memory responses so that they stream nicely to the client.
We may later implement proper streaming conversion, see #4445 and #503.
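The shape of the re-chunking (illustrative; the real code works with
IOx-specific response types and splits by encoded size rather than by
element count):
```rust
use futures::stream::{self, Stream};

/// Split one large in-memory response into several smaller messages and
/// yield them as a stream, keeping each message well under tonic's
/// per-message limit.
fn rechunk<T>(values: Vec<T>, max_per_message: usize) -> impl Stream<Item = Vec<T>> {
    let mut batches = Vec::new();
    let mut current = Vec::new();
    for v in values {
        current.push(v);
        if current.len() >= max_per_message {
            batches.push(std::mem::take(&mut current));
        }
    }
    if !current.is_empty() {
        batches.push(current);
    }
    stream::iter(batches)
}
```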
I tracked down the source of the size difference to the difference in
`mem::size_of::<mutable_batch::column::ColumnData>`. I believe this enum
is now able to take advantage of this niche-filling optimization:
<https://github.com/rust-lang/rust/pull/94075/>
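A toy demonstration of that class of layout optimisation (not the
ColumnData enum itself):
```rust
use std::mem::size_of;

// The compiler can pack an enum discriminant into an unused bit pattern
// (a "niche") of a variant's payload, so the enum needs no separate tag.
enum Toy {
    A(Box<u64>), // Box is never null, leaving a niche for the discriminant
    B,
}

fn main() {
    // On a 64-bit target both of these print 8.
    println!("Box<u64>: {}", size_of::<Box<u64>>());
    println!("Toy:      {}", size_of::<Toy>());
}
```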
* feat: flag partition for delete
* fix: compare the right date and time
* chore: Run cargo hakari tasks
* chore: cleanup
* fix: typos
* chore: rust style tidy ups in catalog
Co-authored-by: CircleCI[bot] <circleci@influxdata.com>
Co-authored-by: Luke Bond <luke.n.bond@gmail.com>
* refactor: NS+table ID (instead of name) in querier<>ingester
* feat(ingester): use IDs for query API
Changes the ingester to use the ID fields (instead of names) sent in
the query wire message wrapped within the Flight API.
BREAKING: this changes the "query-ingester" CLI command arguments which
now expects the namespace & table IDs, rather than their names.
* refactor(ingester): add more query logging context
Updates the log messages during query execution to include more context
fields.
* style: remove unused import
Co-authored-by: Marco Neumann <marco@crepererum.net>
* test: Ensure router's HTTP error messages are stable
If you change the text of an error, the tests will fail.
If you add a new error variant to the `Error` enum but don't add it to
the test, test compilation will fail with a "non-exhaustive patterns"
message.
If you remove an error variant, test compilation will fail with a "no
variant named `RemovedError`" message.
You can get the list of error variants and their current text via
`cargo test -p router -- print_out_error_text --nocapture`.
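The mechanism, boiled down (the variant names and texts below are made
up; the real macro lives in the router crate):
```rust
use std::fmt;

#[derive(Debug)]
enum Error {
    NamespaceNotFound,
    LineProtocol,
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Self::NamespaceNotFound => write!(f, "namespace not found"),
            Self::LineProtocol => write!(f, "error parsing line protocol"),
        }
    }
}

#[test]
fn error_text_is_stable() {
    // Each assert pins the user-visible text; changing it fails the test.
    assert_eq!(Error::NamespaceNotFound.to_string(), "namespace not found");
    assert_eq!(Error::LineProtocol.to_string(), "error parsing line protocol");

    // This match is exhaustive on purpose: adding a variant without also
    // adding an assert fails to compile with "non-exhaustive patterns".
    fn covered(e: Error) {
        match e {
            Error::NamespaceNotFound | Error::LineProtocol => {}
        }
    }
    covered(Error::NamespaceNotFound);
}
```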
A step towards accomplishing #5863
Co-authored-by: Carol (Nichols || Goulding) <carol.nichols@gmail.com>
Co-authored-by: Jake Goulding <jake.goulding@integer32.com>
* fix: Remove optional commas and document macro arguments
* docs: Clarify the purpose of the tests the check_errors macro generates
* fix: Add tests for inner mutable batch LP error variants
Co-authored-by: Carol (Nichols || Goulding) <carol.nichols@gmail.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* test: use `EXPLAIN ANALYZE` for SQL metric tests
Needs a bit more infra (due to normalization), but this seems to be
worth it so we can easily hook up more metrics in the future.
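The normalisation boils down to regex-rewriting the non-deterministic
parts of the plan output, e.g. (illustrative pattern, not the exact one
used):
```rust
use regex::Regex;

/// Replace measured timings in EXPLAIN ANALYZE output with a stable
/// placeholder so plans can be compared against expected snapshots.
fn normalize_timings(plan: &str) -> String {
    let timings = Regex::new(r"=\d+(\.\d+)?(ns|µs|ms|s)\b").expect("valid regex");
    timings.replace_all(plan, "=NOT_ZERO").to_string()
}
```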
* docs: explain regexes
* feat: Added single and inline comment combinators
* chore: Add tests for ws0 function
* feat: Add ws1 combinator
* feat: Use ws0 and ws1 combinators to properly handle comments (see the sketch below)
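Roughly, in nom-style combinators (a sketch; the real InfluxQL parser
implementations may differ in detail):
```rust
use nom::{
    branch::alt,
    bytes::complete::{tag, take_until},
    character::complete::{multispace1, not_line_ending},
    combinator::value,
    multi::{many0_count, many1_count},
    sequence::tuple,
    IResult,
};

/// A `-- ...` comment running to the end of the line.
fn single_line_comment(i: &str) -> IResult<&str, ()> {
    value((), tuple((tag("--"), not_line_ending)))(i)
}

/// An inline `/* ... */` comment.
fn inline_comment(i: &str) -> IResult<&str, ()> {
    value((), tuple((tag("/*"), take_until("*/"), tag("*/"))))(i)
}

/// Zero or more runs of whitespace or comments (optional separator).
fn ws0(i: &str) -> IResult<&str, ()> {
    value(
        (),
        many0_count(alt((value((), multispace1), single_line_comment, inline_comment))),
    )(i)
}

/// One or more runs of whitespace or comments (required separator).
fn ws1(i: &str) -> IResult<&str, ()> {
    value(
        (),
        many1_count(alt((value((), multispace1), single_line_comment, inline_comment))),
    )(i)
}
```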
* feat: deletion flagging in GC based on retention policy
* chore: typo in comment
* fix: only soft delete parquet files that aren't yet soft deleted
* fix: guard against flakiness in catalog test
* chore: some better tests for parquet file delete flagging
Co-authored-by: Nga Tran <nga-tran@live.com>