Temporary fix: there's no official 1.66.1 Docker image (yet?), so use
1.66 for now.
I kept the toolchain version pinned and switched only the CI version so
that we are all using the fixed cargo. I will revert this commit when
there's a 1.66.1 image.
* feat: InfluxQL learns how to plan some queries
Also added a means to test the planner and execution
* chore: Update module docs
* chore: Document the planner functions
* chore: Update end_to_end_cases crate
* chore: Clarify why `SLIMIT` and `SOFFSET` return `NotImplemented`
* chore: Address lint issues
* chore: Fix rustdoc link issue
* chore: Remove InfluxQL tests from query_tests crate
Will follow conventions established by @carols10cents when
new query_tests crate is merged.
* chore: `now` field
`now` is a DataFusion built-in scalar function
* chore: remove unused code
* chore: Add additional arithmetic expression tests
* chore: Establish pattern for identifying and tracking InfluxQL issues
* chore: Add tests for case sensitivity issues
* chore: group tests into modules and functions
This avoids mass rewriting of insta snapshots as new tests are added
to each function. When tests are added in the middle, the existing
snapshots are renamed (-N+1, -N+2, etc.), forcing a review of numerous
otherwise-unchanged snapshots; grouping keeps new snapshots local to
their function, as sketched below.
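A minimal sketch of the grouping pattern, assuming insta-style snapshot
assertions with insta available as a dev-dependency; the module names,
test functions, query strings, and the `plan` helper are hypothetical,
not the actual planner tests:

```rust
#[cfg(test)]
mod tests {
    // Unnamed insta snapshots are numbered per test function, so appending
    // a case to `arithmetic` only adds a new `arithmetic-N` snapshot; it
    // never renames the snapshots owned by `case_sensitivity` or any other
    // function.
    mod select {
        #[test]
        fn arithmetic() {
            insta::assert_snapshot!(plan("SELECT 1 + 2"));
            insta::assert_snapshot!(plan("SELECT 3 * 4"));
            // New cases are appended here.
        }

        #[test]
        fn case_sensitivity() {
            insta::assert_snapshot!(plan("SELECT \"FieldA\" FROM cpu"));
        }

        // Hypothetical helper standing in for the real planner entry point.
        fn plan(q: &str) -> String {
            format!("plan for: {q}")
        }
    }
}
```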
* feat: cold
* chore: debug info
* feat: only compact qualified cold partition candidates
* fix: catalog test
* chore: cleanup
* chore: add new config flag for cold partition candidates
* chore: implement display for CompactionType and add tests for max num partitions
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
Prior to this commit, initialising the persist system returned a
PersistState instance, used to communicate the saturation status of the
persistence system. The RPC write path used this information to accept
or deny write requests accordingly.
This was unfortunate in that it tightly coupled the ingest handler to
the persist system - in order to initialise the RPC handler, you had to
provide a PersistState; this required us to initialise a persist system
when testing only the RPC handler (which had nothing to do with
persisting). This smells!
This commit inverts the dependency and decouples the subsystems via a
shared type (IngestState). Instead of the persist system telling ingest
to stop, the ingest system provides a means to be told to stop - this
subtle difference decouples the ingest handler from all components that
need to block ingest. This allows a fast O(1) error-state read for N
components and prevents us from having to start N components to test an
RPC handler.
Additionally, this commit introduces an unused ingest error state
(GracefulStop) as part of figuring out the API (to be used shortly).
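A rough sketch of the inverted dependency, assuming IngestState is
(roughly) a shared atomic error flag that any subsystem can set and the
RPC write path reads per request; the variant names other than
GracefulStop, the method names, and the single-flag representation are
assumptions, not the real ingester2 API:

```rust
use std::sync::atomic::{AtomicU8, Ordering};
use std::sync::Arc;

/// Hypothetical error states; only persist saturation and GracefulStop are
/// taken from the commit message above.
#[derive(Debug, Clone, Copy)]
enum IngestStateError {
    Ok,
    PersistSaturated,
    GracefulStop,
}

/// Shared state: components that need to block ingest set it, the RPC write
/// path reads it. One atomic load per request, O(1) regardless of how many
/// components can block ingest.
#[derive(Default)]
struct IngestState(AtomicU8);

impl IngestState {
    fn set(&self, e: IngestStateError) {
        self.0.store(e as u8, Ordering::Relaxed);
    }
    fn read(&self) -> IngestStateError {
        match self.0.load(Ordering::Relaxed) {
            1 => IngestStateError::PersistSaturated,
            2 => IngestStateError::GracefulStop,
            _ => IngestStateError::Ok,
        }
    }
}

fn main() {
    let state = Arc::new(IngestState::default());

    // The persist system (or any other component) flips the flag when it
    // becomes saturated; it never needs a handle to the RPC handler.
    state.set(IngestStateError::PersistSaturated);

    // The RPC write path depends only on IngestState, so it can be tested
    // without constructing a persist system at all.
    match state.read() {
        IngestStateError::Ok => println!("accept write"),
        other => println!("reject write: {other:?}"),
    }
}
```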
* refactor: invalidate querier cache if ingester is gone
For #6549, but I think this is the right thing to do even without the
plan illustrated there.
Also changes the cache system to use flat, sorted vectors instead of
costly hash maps (see the sketch after this list).
* refactor: simplify code
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
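A toy illustration of the flat, sorted-vector idea rather than the
actual querier cache code: entries are kept sorted by key and looked up
with a binary search, trading per-entry hashing overhead for a compact,
contiguous layout. The `FlatMap` name, keys, and types are assumptions.

```rust
/// Toy flat map keyed by an ingester identifier; illustrative only.
struct FlatMap<K: Ord, V> {
    entries: Vec<(K, V)>, // kept sorted by K
}

impl<K: Ord, V> FlatMap<K, V> {
    /// Build from arbitrary input by sorting once up front.
    fn new(mut entries: Vec<(K, V)>) -> Self {
        entries.sort_by(|a, b| a.0.cmp(&b.0));
        Self { entries }
    }

    /// O(log n) lookup via binary search instead of hashing.
    fn get(&self, key: &K) -> Option<&V> {
        self.entries
            .binary_search_by(|(k, _)| k.cmp(key))
            .ok()
            .map(|idx| &self.entries[idx].1)
    }
}

fn main() {
    let cache = FlatMap::new(vec![
        ("ingester-b".to_string(), 2u64),
        ("ingester-a".to_string(), 1u64),
    ]);
    assert_eq!(cache.get(&"ingester-a".to_string()), Some(&1));
    assert_eq!(cache.get(&"ingester-c".to_string()), None);
}
```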
* feat: Update replication.proto
* Remove the PartitionId from the replicate request, as a single replicate request can carry the data for many partitions.
* Add namespace_id and table_id to the persist complete request to make the data easier to look up in the buffer (see the sketch after this list).
* feat: Initial ingest_replica skeleton
A bunch of copy pasta here from ingester2, but this takes out a ton of stuff that isn't used in replicas.
Also lays the groundwork for the simpler buffer structure that will keep the data, and for a basic cache of the catalog information that will be required.
* feat: update replication.proto GetPartitionBufferResponse
* chore: PR cleanup
* chore: PR cleanup
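A hypothetical sketch of how the extra IDs make the replica's buffer
lookup a direct keyed access; the message shape, field names, and the
`ReplicaBuffer` type are hand-written stand-ins, not the actual
replication.proto definitions or ingest_replica internals.

```rust
use std::collections::BTreeMap;

/// Hand-written mirror of the persist-complete message described above;
/// field names and types are assumptions.
struct PersistCompleteRequest {
    namespace_id: i64,
    table_id: i64,
    partition_id: i64,
}

/// Toy replica buffer keyed by (namespace, table) so a persist-complete
/// notification can locate its data directly instead of scanning.
#[derive(Default)]
struct ReplicaBuffer {
    // (namespace_id, table_id) -> buffered partition ids
    partitions: BTreeMap<(i64, i64), Vec<i64>>,
}

impl ReplicaBuffer {
    fn mark_persisted(&mut self, req: &PersistCompleteRequest) {
        // Direct keyed lookup thanks to the IDs carried in the request.
        if let Some(parts) = self.partitions.get_mut(&(req.namespace_id, req.table_id)) {
            parts.retain(|&p| p != req.partition_id);
        }
    }
}

fn main() {
    let mut buf = ReplicaBuffer::default();
    buf.partitions.insert((1, 10), vec![100, 101]);
    buf.mark_persisted(&PersistCompleteRequest {
        namespace_id: 1,
        table_id: 10,
        partition_id: 100,
    });
    assert_eq!(buf.partitions[&(1, 10)], vec![101]);
}
```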
Include the number of DML operations applied to the persisted buffer
in the "persisted partition" message.
Partly because I'm intrigued / it's useful information, and partly to
ensure LLVM doesn't get snazzy and dead-code the sequence number
tracking on the grounds that it is never read.
Changes the ingester2 buffer FSM to track the sequence numbers that have
been applied to it.
This is a prerequisite for replication & correct WAL segment dropping.
Previously, the ingester(1) required writes to be applied in order; that
requirement has been relaxed, and the corresponding asserts were removed
in ingester2.
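A minimal sketch of the tracking idea, assuming the FSM records applied
sequence numbers in a set and reports the count at persist time (the
number surfaced in the "persisted partition" message); the type and
method names are illustrative, not the ingester2 internals.

```rust
use std::collections::BTreeSet;

/// Toy stand-in for the buffer FSM; it records every sequence number applied
/// to it, with no requirement that writes arrive in order.
#[derive(Default)]
struct PartitionBuffer {
    applied: BTreeSet<u64>,
}

impl PartitionBuffer {
    fn apply(&mut self, sequence_number: u64) {
        // Out-of-order application is fine; the set just records membership.
        self.applied.insert(sequence_number);
    }

    /// Consume the tracked numbers at persist time, returning how many DML
    /// operations went into the persisted data. Reading the count also means
    /// the tracking is observably used and cannot be optimised away.
    fn persist(&mut self) -> usize {
        let count = self.applied.len();
        self.applied.clear();
        count
    }
}

fn main() {
    let mut buffer = PartitionBuffer::default();
    for seq in [3, 1, 2] {
        buffer.apply(seq); // applied out of order, no ordering assert needed
    }
    println!("persisted partition: {} DML ops applied", buffer.persist());
}
```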