* chore: Update DataFusion pin
* chore: Update for new API
* fix: fix test
* fix: only check error messages
---------
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* chore: Update DataFusion pin
* chore: Update for new API
* fix: Update for API
* fix: update compactor test
* fix: Update to patched version of arrow 46.0.0
* fix: map `DataFusionError::Configuration` to an internal error
* fix: do not use deprecated API
---------
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
Adds a crate that layers compaction-specific gossip types and
abstractions over the underlying gossip transport for a nicer (and
decoupled!) internal API.
Adds the basic structure for #8349. This will be filled in via separate
PRs for easier review.
The layer structure was chosen to simplify testing and to allow composition
of features (like retries, circuit breaking, metrics, etc.). In contrast
to the V1 client (`querier::ingester`), a client here addresses exactly one
ingester rather than multiple ingesters selected via an `addr` parameter.
The tracking of multiple states in the V1 version is awkward and overly
complicated.
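A minimal sketch of that layer composition, using hypothetical names (`IngesterLayer`, `NetworkClient`, `RetryLayer`) rather than the real querier types:

```rust
/// A client that talks to exactly one ingester.
trait IngesterLayer {
    fn query(&self, namespace: &str) -> Result<Vec<String>, String>;
}

/// Innermost layer: the actual network client for a single ingester.
struct NetworkClient {
    addr: String,
}

impl IngesterLayer for NetworkClient {
    fn query(&self, namespace: &str) -> Result<Vec<String>, String> {
        // A real implementation would issue a request to `self.addr`.
        Ok(vec![format!("{namespace}@{}", self.addr)])
    }
}

/// A composable layer that retries the wrapped layer a fixed number of times.
struct RetryLayer<T> {
    inner: T,
    max_retries: usize,
}

impl<T: IngesterLayer> IngesterLayer for RetryLayer<T> {
    fn query(&self, namespace: &str) -> Result<Vec<String>, String> {
        let mut last_err = None;
        for _ in 0..=self.max_retries {
            match self.inner.query(namespace) {
                Ok(rows) => return Ok(rows),
                Err(e) => last_err = Some(e),
            }
        }
        Err(last_err.unwrap_or_else(|| "no attempts made".to_string()))
    }
}

fn main() {
    // Layers compose from the inside out; metrics or circuit breaking would
    // simply be additional wrappers around `RetryLayer`.
    let client = RetryLayer {
        inner: NetworkClient {
            addr: "ingester-0:8082".to_string(),
        },
        max_retries: 2,
    };
    println!("{:?}", client.query("my_namespace"));
}
```

Because each layer wraps the next behind the same trait, individual features can be stacked independently and tested in isolation.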
Adds a reusable "gossip_parquet_file" crate that provides a use-case
specific wrapper over the underlying gossip transport.
This crate deals with the encoding and decoding of parquet gossip
messages, handing them off to the application, and decoupling handler
latency from the gossip reactor.
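A self-contained sketch of that decoupling, assuming a hypothetical `ParquetFileHandler` trait and a plain `std` channel in place of the crate's real async types:

```rust
use std::sync::mpsc::{sync_channel, Receiver};
use std::thread;

/// Application-provided callback for decoded parquet-file gossip events.
trait ParquetFileHandler: Send + 'static {
    fn handle(&self, decoded: String);
}

struct PrintHandler;

impl ParquetFileHandler for PrintHandler {
    fn handle(&self, decoded: String) {
        println!("new parquet file: {decoded}");
    }
}

/// Worker that drains the channel, decodes payloads, and calls the handler,
/// so a slow handler never blocks the gossip reactor directly.
fn spawn_dispatcher<H: ParquetFileHandler>(
    rx: Receiver<Vec<u8>>,
    handler: H,
) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        for raw in rx {
            // Stand-in for decoding the gossip payload.
            if let Ok(decoded) = String::from_utf8(raw) {
                handler.handle(decoded);
            }
        }
    })
}

fn main() {
    // Bounded channel between the reactor and the handler worker.
    let (tx, rx) = sync_channel::<Vec<u8>>(128);
    let worker = spawn_dispatcher(rx, PrintHandler);

    // The gossip-reactor side only pays for a cheap channel send.
    tx.send(b"ns/table/partition/uuid.parquet".to_vec()).unwrap();
    drop(tx); // close the channel so the worker drains and exits
    worker.join().unwrap();
}
```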
Adds a new gossip_schema crate that provides a high-level interface to
schema change notifications.
This crate layers schema-specific interfaces over the existing low-level
gossip crate. Users can receive best-effort schema change notifications
by implementing a `SchemaEventHandler` delegate and passing it to a
`SchemaRx`, or efficiently dispatch schema change notifications to
listening peers using a `SchemaTx`.
Schema notifications are sent over the `Topic::SchemaChanges` topic
(ID=1); receiving gossip nodes must register an interest in this topic.
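A simplified, self-contained analogue of these interfaces (the real `SchemaEventHandler`, `SchemaRx`, and `SchemaTx` use async methods and protobuf event types; the structs and signatures below are illustrative only):

```rust
/// Gossip topics; receivers must register interest in `SchemaChanges`.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Topic {
    SchemaChanges = 1,
}

/// A simplified schema-change event.
#[derive(Debug, Clone)]
struct SchemaEvent {
    namespace: String,
    table: String,
    new_columns: Vec<String>,
}

/// Application delegate invoked for each best-effort notification.
trait SchemaEventHandler {
    fn handle(&self, event: SchemaEvent);
}

/// Receiving side: filters gossip frames for `Topic::SchemaChanges` and
/// forwards decoded events to the delegate.
struct SchemaRx<H: SchemaEventHandler> {
    handler: H,
}

impl<H: SchemaEventHandler> SchemaRx<H> {
    fn on_gossip_frame(&self, topic: Topic, event: SchemaEvent) {
        if topic == Topic::SchemaChanges {
            self.handler.handle(event);
        }
    }
}

/// Sending side: broadcasts schema changes to listening peers.
struct SchemaTx;

impl SchemaTx {
    fn broadcast(&self, event: &SchemaEvent) {
        // A real implementation serialises the event and hands it to the
        // underlying gossip transport on Topic::SchemaChanges.
        println!("broadcast on {:?}: {event:?}", Topic::SchemaChanges);
    }
}

struct LogHandler;

impl SchemaEventHandler for LogHandler {
    fn handle(&self, event: SchemaEvent) {
        println!("schema changed: {event:?}");
    }
}

fn main() {
    let rx = SchemaRx { handler: LogHandler };
    let event = SchemaEvent {
        namespace: "ns".into(),
        table: "cpu".into(),
        new_columns: vec!["usage_user".into()],
    };
    SchemaTx.broadcast(&event);
    rx.on_gossip_frame(Topic::SchemaChanges, event);
}
```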
* chore: Update datafusion pin
* fix: Update for change in API
* chore: Update plan
---------
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* chore: Update datafusion to get new grouping
* chore: Update for new API
* chore: update tests
* fix: new API
* fix: state type
---------
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* chore: Update datafusion + arrow/arrow-flight/parquet to version `42.0.0`
* chore: Update for new APIs
---------
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* refactor: make compactor_scheduler crate
* refactor: move PartitionsSource into the compactor_scheduler
The compactor currently uses `PartitionsSource` in two ways:
* for the preparation of `PartitionId`s prior to the compactor pipeline, and
* as the abstraction that utilizes those `PartitionId`s during the IO pipeline.
This commit is a refactoring that lets us delineate between these two uses.
The former (preparation) use will now be done in the compactor_scheduler.
Since the compactor depends on the compactor_scheduler, it made sense to move the trait into the scheduler.
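A simplified, synchronous sketch of the `PartitionsSource` split, with a stand-in `PartitionId` type and an illustrative `FixedPartitionsSource` implementation (the real trait's signature may differ):

```rust
/// Identifier of a partition to compact (stand-in for the catalog type).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct PartitionId(i64);

/// Source of partitions to compact. The *preparation* of these IDs now
/// happens inside the compactor_scheduler; the compactor only consumes them.
trait PartitionsSource {
    fn fetch(&self) -> Vec<PartitionId>;
}

/// Example scheduler-side implementation: a fixed list, e.g. from a
/// catalog query or a CLI override.
struct FixedPartitionsSource {
    ids: Vec<PartitionId>,
}

impl PartitionsSource for FixedPartitionsSource {
    fn fetch(&self) -> Vec<PartitionId> {
        self.ids.clone()
    }
}

fn main() {
    // Compactor side: take whatever the scheduler hands out and feed it
    // into the IO pipeline.
    let source = FixedPartitionsSource {
        ids: vec![PartitionId(1), PartitionId(2)],
    };
    for id in source.fetch() {
        println!("compacting partition {id:?}");
    }
}
```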
* chore: Update DataFusion pin
* chore: Update API changes
* chore: Don't use deprecated API
* chore: Run cargo hakari tasks
* chore: Update tests due to changes in logical plan nodes from DF update
* chore: Fix broken links in docs
* chore: Adjust changes to expected output
---------
Co-authored-by: CircleCI[bot] <circleci@influxdata.com>
Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>
* chore: Update DataFusion pin
* chore: Update cargo
* fix: update for API changes
* fix: Update plans
* chore: Update for new api
* fix: Update plans
* chore: Update for API changes more
---------
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
* chore: update DataFusion and arrow/parquet/arrow-flight to 39.0.0
* chore: update DataFusion and arrow/parquet/arrow-flight to 39.0.0 in workspace-hack/Cargo.toml
* chore: Run cargo hakari tasks
* chore: fix CI test and lint
* chore: update csv schema
* refactor: remove type-annotate for `Arc`
---------
Co-authored-by: CircleCI[bot] <circleci@influxdata.com>
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>