* feat: flags for pushing down new aggregates
* refactor: grouped aggregate rewrite rules
The ReadGroup storage operation aggregates per series on the storage
side. The planner rewrites grouped aggregate queries to call ReadGroup,
which performs a partial aggregation, followed by another operation
that completes the aggregation on the compute side.
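As a rough sketch of the idea (not the actual planner code), the
storage side hands back one partial result per series and the compute
side folds those partials into a final value per group; the aggregate
here is assumed to be a sum:
```
package main

import "fmt"

// partial is what a ReadGroup-style call would hand back: a group key
// and the value already aggregated over one series on the storage side.
type partial struct {
	group string
	sum   float64
}

// finalize is the compute-side phase: it combines the per-series
// partials into one result per group.
func finalize(partials []partial) map[string]float64 {
	out := make(map[string]float64)
	for _, p := range partials {
		out[p.group] += p.sum
	}
	return out
}

func main() {
	// Two series in group "east" and one in "west".
	fmt.Println(finalize([]partial{
		{"east", 10}, {"east", 5}, {"west", 7},
	}))
	// Output: map[east:15 west:7]
}
```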
* feat: storage capabilities for grouped aggregates
* fix: changes from review
* feat: group read operation name should include aggregate
This modifies the read window aggregate interfaces to future-proof them
if and when we add additional capabilities to the method. Previously,
the interface was all or nothing: if we modified the RPC call itself,
we would have to create a new interface to reflect the change in the Go
code.
This changes the interface so that a `WindowAggregateCapability` struct
now exists, which we can extend with fields like:
```
type WindowAggregateCapability struct {
	WindowPeriodCapability  bool
	MeanAggregateCapability bool
}
```
With this struct we can learn whether the RPC call itself supports a
specific option. If the first iteration doesn't support a mean
aggregate, or the mean aggregate is only supported by single-server
implementations, the window aggregate can tell the caller that it won't
be able to compute the mean aggregate.
Since the capabilities are filled into a struct, new values can be
introduced safely. If a downstream consumer wants to take advantage of
that functionality, then all interfaces in the chain have to be updated
to consume the upstream capabilities.
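As an illustration (the package and function names here are assumed
for the sketch), a downstream consumer might gate the mean pushdown on
the reported capability like this:
```
package pushdown

// WindowAggregateCapability mirrors the struct above.
type WindowAggregateCapability struct {
	WindowPeriodCapability  bool
	MeanAggregateCapability bool
}

// canPushDownMean reports whether a mean aggregate may be pushed down
// to storage; a nil or false capability keeps the mean on the compute
// side.
func canPushDownMean(caps *WindowAggregateCapability) bool {
	return caps != nil && caps.MeanAggregateCapability
}
```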
Added an interface for an additional storage capability. The interface
provides a method for checking whether the reader supports the window
aggregate call and another method for invoking the call if it does.
This is implemented as a single interface. If the reader implements the
interface, it indicates that the client is capable of reading the
response. The `HasXXX` method checks whether the store supports the
operation. It also takes a context because answering may require a
remote call, or may need to wait for one.
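A minimal sketch of what such an interface could look like, using
placeholder types and assumed names rather than the exact ones in the
codebase:
```
package storage

import "context"

// ReadWindowAggregateSpec is a placeholder for the parameters of a
// window aggregate read (time bounds, window period, aggregate kind).
type ReadWindowAggregateSpec struct {
	WindowEvery int64
	Aggregate   string
}

// ResultSet is a placeholder for whatever the read returns.
type ResultSet interface{}

// WindowAggregateReader sketches the additional storage capability.
type WindowAggregateReader interface {
	// HasWindowAggregateCapability reports whether the store supports
	// the window aggregate call. It takes a context because the answer
	// may require a remote call or waiting on one.
	HasWindowAggregateCapability(ctx context.Context) bool

	// ReadWindowAggregate invokes the call when the capability is
	// present.
	ReadWindowAggregate(ctx context.Context, spec ReadWindowAggregateSpec) (ResultSet, error)
}
```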
* Extend storage service protobuf with TagKeys and TagValues
Co-authored-by: Michael Desa <mjdesa@gmail.com>
Co-authored-by: Jacob Marble <jacobmarble@influxdata.com>
* Extend the reads.Store interface with new TagKeys and TagValues APIs
* Extend readservice.store to implement refactored reads.Store interface
* Implement a StringIterator gRPC writer / serializer
* Implement a StringIterator gRPC reader / deserializer
* Implement a StringIterator merger
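The rough shape of the new TagKeys and TagValues additions to
reads.Store, sketched with placeholder request types (the real
signatures may differ):
```
package reads

import "context"

// StringIterator is a placeholder for the iterator of tag keys or
// values that the writer, reader, and merger operate on.
type StringIterator interface {
	Next() bool
	Value() string
}

// TagKeysRequest and TagValuesRequest stand in for the request
// messages added to the storage service protobuf.
type TagKeysRequest struct{}
type TagValuesRequest struct{ TagKey string }

// Store sketches the two methods added to the reads.Store interface.
type Store interface {
	// TagKeys returns the distinct tag keys matching the request.
	TagKeys(ctx context.Context, req *TagKeysRequest) (StringIterator, error)
	// TagValues returns the distinct values of a single tag key.
	TagValues(ctx context.Context, req *TagValuesRequest) (StringIterator, error)
}
```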
Also rename the root package from platform to influxdb. I did this with
a dumb editor macro, so some comments changed too.
In the interest of minimizing risk, every file importing the root
package now aliases it to "platform" so that no changes beyond imports
were necessary in those files.
Lastly, replace the old platform module with the local path /dev/null
so that nobody can accidentally reintroduce a platform dependency while
migrating platform code to influxdb.
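In go.mod terms this amounts to a replace directive along these lines
(shown as an illustration):
```
replace github.com/influxdata/platform => /dev/null
```
Because /dev/null is not a module directory, any reintroduced import of
the old module fails to resolve.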
This pulls the code that allows doing reads with Flux into the platform
repo, and removes extra.go.
The reusable portion is under storage/reads, while the concrete
implementation for one of the platform's engines is in
storage/readservice.
In order to make this more reusable, the cursors had to move into their
own package, decoupling them from all of the other code in the tsdb
package. tsdb/cursors is this new package, and type/function aliases
have been added to the tsdb package to point at it.
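For instance (the alias names here are illustrative, not an exhaustive
list), the forwarding in the tsdb package takes roughly this form:
```
package tsdb

import "github.com/influxdata/influxdb/tsdb/cursors"

// Aliases keep existing references such as tsdb.Cursor compiling while
// the canonical definitions now live in tsdb/cursors.
type (
	Cursor             = cursors.Cursor
	IntegerArrayCursor = cursors.IntegerArrayCursor
)
```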
The models package is already very light on transitive dependencies,
so the cursors package was allowed to depend on it in a concrete way.
Finally, the protobuf definitions for issuing GRPC reads have been
moved into their own package for two reasons:
1. It's a clean separation, and helps keep it that way.
2. Many/most consumers will not be using GRPC. We just
use the datatypes to express the API, which helps make
building a GRPC server easier.
It is left up to future refactorings (specifically ones that involve
GRPC) to determine if these types should remain, or if there is a
cleaner way.
There are still some dependencies on both github.com/influxdata/influxql
and github.com/influxdata/influxdb/logger that we can hopefully remove
in future refactorings.