* fix: Split overlapped files based on the order of sequence numbers and only group non-overlapped contiguous small files
* test: add one more test for grouping contiguous files
* refactor: address review comments
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* chore: Update datafusion pin
* fix: Update for api
* fix: Explicitly set coalesce batch size
* fix: Update batch size as well
* fix: update tests for new explain plan, and improved coercion
Went through and replaced all `lazy_static` uses with `once_cell` (while waiting for the project to compile). There are still dependencies using `lazy_static`, so it is still in the crate graph, but at least there is no longer an explicit dependency on it (and it will be easier to migrate to `std::lazy::Lazy` once that is stable).
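For illustration, a minimal sketch of the swap (the static and the regex below are made-up examples, not code from this repo):

```rust
use once_cell::sync::Lazy;
use regex::Regex;

// Before (lazy_static):
//
// lazy_static! {
//     static ref UUID_RE: Regex = Regex::new("[0-9a-f-]{36}").unwrap();
// }

// After (once_cell): the same lazily-initialized static, no macro required.
static UUID_RE: Lazy<Regex> = Lazy::new(|| Regex::new("[0-9a-f-]{36}").unwrap());

fn main() {
    // The regex is compiled on first access, exactly as with lazy_static.
    assert!(UUID_RE.is_match("00000000-0000-0000-0000-000000000000"));
}
```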
* refactor: change level 1 to level 2 preparing for next design changes
* fix: make level-2 consistent everywhere
* chore: remove unused comments
* refactor: rename everything from level_1 to level_2, completely replacing 1 with 2 to make everything consistent
* chore: add corresponding constants for the compaction levels in the comments
Co-authored-by: Dom <dom@itsallbroken.com>
* chore(arrow_util): readability improvements
Signed-off-by: Ryan Russell <git@ryanrussell.org>
* chore(tracker): readability improvements
Signed-off-by: Ryan Russell <git@ryanrussell.org>
* chore(cache_system): improve readability
Signed-off-by: Ryan Russell <git@ryanrussell.org>
* refactor(lru test): rename `test_panic_id_collision`
Signed-off-by: Ryan Russell <git@ryanrussell.org>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
Currently the querier fetches RB chunks serially, which is probably
not good since each cache miss takes between 10ms and 250ms.
Let's try fetching 2 in parallel and, if that works well, make this a
proper config option.
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
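A minimal sketch of the idea using `futures::StreamExt::buffered` (the fetch function and the concurrency limit of 2 are illustrative; the real querier code differs):

```rust
use futures::{stream, StreamExt};

// Hypothetical fetch standing in for the querier's RB-chunk cache lookup.
async fn fetch_chunk(id: u64) -> String {
    format!("chunk-{id}")
}

#[tokio::main]
async fn main() {
    let ids = vec![1, 2, 3, 4];

    // Run up to 2 fetches concurrently; `buffered` preserves input order
    // while overlapping the 10ms-250ms cache-miss latencies.
    let chunks: Vec<String> = stream::iter(ids)
        .map(fetch_chunk)
        .buffered(2)
        .collect()
        .await;

    assert_eq!(chunks.len(), 4);
}
```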
The Influx deployment pipeline was changed so that an image push is used
as the signal for deployment (instead of the magic script that was used
before). So we need to adapt our CI to only push images when all tests
pass.
Old workflow:
- build release: builds docker images and push commit-based tags to
registry
- deploy release: pulls built images from registry, adds+pushes branch
tags, calls magic deploy script
New workflow:
- build release: builds docker images, saves them to disk
- deploy release: loads the image files, tags them, pushes the tags
You may wonder why there are two steps when we could just use a single
one. The reason is time-to-deploy: we can already build the images while
we are waiting for the tests. If the tests fail, the images will simply
not be published.
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* fix: `Tombstone::size` must include serialized predicate
* fix: `CachedPartition::size` must include `Arc` heap allocation
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
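A rough sketch of the accounting fix, with hypothetical trimmed-down types (the real structs carry more fields):

```rust
use std::{mem, sync::Arc};

struct Tombstone {
    serialized_predicate: String,
}

impl Tombstone {
    /// Memory usage in bytes, including the heap allocation backing the
    /// serialized predicate, not just the struct itself.
    fn size(&self) -> usize {
        mem::size_of_val(self) + self.serialized_predicate.capacity()
    }
}

struct CachedPartition {
    sort_key: Arc<str>,
}

impl CachedPartition {
    /// Memory usage in bytes. `mem::size_of_val(self)` only counts the
    /// `Arc` pointer itself; the pointed-to heap allocation must be added.
    fn size(&self) -> usize {
        mem::size_of_val(self) + self.sort_key.len()
    }
}
```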
* refactor: remove `DecodedParquetFile` from `iox_tests`
* refactor: remove `DecodedParquetFile` from querier
Also pull out all the chunk schema and sort key handling into a function
so that RB chunks and parquet chunks mostly use the same code path.
* refactor: remove `DecodedParquetFile`
* refactor: remove `ParquetFileWithMetadata` usage
* fix: test data consistency
Previously the column data type was exposed as an internal i32 value.
This commit changes the Schema API to use a self-descriptive proto enum
for the column data type.
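An illustrative sketch of the change, assuming names that are not the actual IOx proto definitions:

```rust
/// A self-descriptive enum such as prost would generate from a proto
/// file, replacing the bare `i32` previously exposed by the Schema API.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
#[repr(i32)]
pub enum ColumnDataType {
    Unspecified = 0,
    I64 = 1,
    F64 = 2,
    String = 3,
    Tag = 4,
    Time = 5,
}

impl ColumnDataType {
    /// Decode the wire value, rejecting unknown tags instead of making
    /// callers interpret magic integers themselves.
    pub fn from_i32(value: i32) -> Option<Self> {
        match value {
            0 => Some(Self::Unspecified),
            1 => Some(Self::I64),
            2 => Some(Self::F64),
            3 => Some(Self::String),
            4 => Some(Self::Tag),
            5 => Some(Self::Time),
            _ => None,
        }
    }
}
```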
* refactor: simplify sort key calculation
* refactor: use schema from catalog instead from file
* refactor: do not request parquet file MD in compactor
* test: ensure that `QueryableParquetChunk` works correctly
* refactor: avoid feeding sort key from struct into same struct
* feat: allow namespace schema query by ID
* refactor: do not use binary parquet file MD in compactor tests
* refactor: do not use in-parquet IOx metadata
* refactor: reduce number of catalog queries
* fix: avoid using min_time, which can be negative, for ChunkId; use the object store ID, which is a UUID, instead
* chore: Apply suggestions from code review
Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>
* chore: run fmt
Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* feat: split times of compacting results based on the max file size
* feat: consider max file size while computing split time
* test: tests for compute_split_time
* feat: first step toward teaching the function split_the_stream how to split data into n streams using n-1 input PhysicalExprs
* feat: make StreamSplitNode support a list of expressions
* docs: explain how StreamSplitNode works
* feat: Teach compute_split_time to split a time range into many contiguous ranges and split the compacted result into multiple non-overlapped files based on the config compaction_max_size_bytes (sketched after this list)
* chore: cleanup
* chore: clean up doc
* chore: address review comments
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
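A simplified sketch of the split-time computation under a uniform-data assumption (the real compactor logic is more involved; the name and signature are approximations):

```rust
/// Divide `[min_time, max_time]` into contiguous ranges so that each
/// output file stays under the configured maximum size, assuming data
/// is spread roughly uniformly over time.
fn compute_split_time(
    min_time: i64,
    max_time: i64,
    total_size: u64,
    max_desired_file_size: u64,
) -> Vec<i64> {
    // Everything fits in one file: no interior split points needed.
    if total_size <= max_desired_file_size {
        return vec![max_time];
    }

    // One slice per expected output file (ceiling division).
    let num_files = (total_size + max_desired_file_size - 1) / max_desired_file_size;
    let step = (max_time - min_time) / num_files as i64;

    let mut split_times: Vec<i64> = (1..num_files)
        .map(|i| min_time + step * i as i64)
        .collect();
    split_times.push(max_time);
    split_times
}

// e.g. a 300 MB result with a 100 MB cap over [0, 300] splits at 100, 200, 300.
```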
The ingester maintains a rough "total memory in use" counter that it
uses to try to limit the amount of memory the ingester is using overall.
When a partition is persisted, this total memory usage value is adjusted
to account for releasing the partition memory. Prior to this commit, the
ordering was:
* Writes increase the memory counter
* maybe_persist() is called to trigger persistence
* A partition is identified for persistence
* Partition memory usage is released back to the total memory counter
* Persistence starts
This meant that the partitions in the process of being persisted were
not accounted for in the ingester's total memory counter, and therefore
we could significantly overrun the configured memory limit.
After this commit, the ordering is:
* Writes increase the memory counter
* maybe_persist() is called to trigger persistence
* A partition is identified for persistence
* Persistence starts
* Persistence completes
* Partition memory usage is released back to the total memory counter
This ensures persisting partitions are still tracked in the total memory
counter, causing pauses to fire correctly.
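A simplified, synchronous sketch of the corrected ordering (the real ingester code is async and more involved):

```rust
struct MemoryTracker {
    total_bytes: usize,
    limit_bytes: usize,
}

impl MemoryTracker {
    fn maybe_persist(&mut self, partition_bytes: usize, persist: impl FnOnce()) {
        if self.total_bytes < self.limit_bytes {
            return;
        }

        // Persistence runs while the partition is STILL counted, so
        // concurrent writes pause against an accurate total.
        persist();

        // Only after persistence completes is the partition's memory
        // released back to the counter.
        self.total_bytes -= partition_bytes;
    }
}
```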
* feat: conversion from `ParquetFile` to `ParquetFilePath`
* refactor: slim down parquet chunk
- ensure it works without binary parquet metadata
- timestamp range is no longer optional (ensured by the NG type system)
- remove table summary: this is only needed for SOME API users. The
compactor can perfectly work without statistics since it has the
timestamp range, which is sufficient for the current overlap check (we
don't use any other primary key stats at the moment). The querier
currently does NOT use parquet chunks (they were replaced by the read
buffer) but if it does again in some future it will likely need to find
a way to fetch and cache the statistics.
- the schema is now provided by the API user since it can be
reconstructed using the NG catalog only (and "wrong" column orders are
tolerated as of #4921)
Ref #4124
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
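A sketch of what such a conversion can look like; the struct fields below are hypothetical and trimmed down, not the actual catalog types:

```rust
use uuid::Uuid;

struct ParquetFile {
    namespace_id: i64,
    table_id: i64,
    partition_id: i64,
    object_store_id: Uuid,
}

struct ParquetFilePath {
    namespace_id: i64,
    table_id: i64,
    partition_id: i64,
    object_store_id: Uuid,
}

impl From<&ParquetFile> for ParquetFilePath {
    // The object-store location is fully determined by catalog data, so
    // no parquet metadata needs to be decoded to derive the path.
    fn from(f: &ParquetFile) -> Self {
        Self {
            namespace_id: f.namespace_id,
            table_id: f.table_id,
            partition_id: f.partition_id,
            object_store_id: f.object_store_id,
        }
    }
}
```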
* fix: use proper sort key in tests
* feat: do not rely on encoded parquet metadata for RB chunks
Ref #4124.
* refactor: allocate fewer strings
* refactor: use upstream PK calculation
* fix: cache expiration w/o a good reason
* refactor: make namespace cache safer to use
* refactor: make partition cache safer to use
* fix: column handling when reading parquet files
This improves/fixes/tests a few aspects when reading parquet files:
- fix usage of `Selection::Some(...)`. This had been broken since #4912
but apparently no test caught it.
- ensure that the order of `Selection::Some(...)` is preserved
- ensure that schema metadata is attached to output batches
- ignore parquet columns that we don't care about (i.e. that we do not select)
- allow the parquet file to have a different column order than our internal
bookkeeping; this makes it way simpler to read parquet files w/o
scanning the metadata first (see the sketch below)
- extend the test coverage
Ref #4124.
* test: even more tests for parquet reader
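A sketch of the selection rule using the `arrow` API (the function is illustrative, not the actual IOx reader code):

```rust
use std::sync::Arc;

use arrow::datatypes::{Schema, SchemaRef};

/// Project the requested columns in the ORDER they were selected,
/// silently skipping names the file does not contain, regardless of the
/// file's own column order.
fn project_schema(file_schema: &Schema, selection: &[&str]) -> SchemaRef {
    let fields = selection
        .iter()
        .copied()
        .filter_map(|name| file_schema.field_with_name(name).ok().cloned())
        .collect::<Vec<_>>();
    Arc::new(Schema::new(fields))
}
```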
Query internals are not meant to be used by other crates. Only a
handful of selected interfaces should be used by IOxD and the query
tests. The compactor only used a very small subset just to read parquet
files back into memory. It should rather use the official
`parquet_file` interface instead.
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
Fixes interaction of `maybe_skip_kafka_integration!` and `should_panic`
by ensuring that `maybe_skip_kafka_integration!` panics to skip
`should_panic` tests.
Without that it is not possible to just run `cargo test -p write_buffer`.
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
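An illustrative reconstruction of the macro's two skip paths (not the exact IOx macro; the env var name and the arms are assumptions): in a `#[should_panic]` test, returning early would make the test fail because no panic ever happens, so the skip path must panic instead.

```rust
macro_rules! maybe_skip_kafka_integration {
    // Normal tests: skip by returning early from the test function.
    () => {{
        match std::env::var("KAFKA_CONNECT") {
            Ok(addr) => addr,
            Err(_) => {
                eprintln!("skipping Kafka integration test");
                return;
            }
        }
    }};
    // `#[should_panic(expected = ...)]` tests: skip by panicking with
    // the message the test already expects.
    (panic_msg = $msg:expr) => {{
        match std::env::var("KAFKA_CONNECT") {
            Ok(addr) => addr,
            Err(_) => panic!("{}", $msg),
        }
    }};
}
```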