The storage table reader will now work correctly when there are multiple
outputs. The table interface now implements the new table and column
reader interfaces and works properly with `execute.CopyTable`. The
source uses `execute.CopyTable` to buffer the table in memory when there
are multiple output transformations.
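A rough sketch of that multiple-output path, assuming `execute.CopyTable` returns an in-memory table that each consumer can read independently (the `fanOut` helper is hypothetical and the exact `CopyTable` signature has varied across flux versions):

```go
package example

import (
	"github.com/influxdata/flux"
	"github.com/influxdata/flux/execute"
)

// fanOut is illustrative: it streams the table straight through when there is
// a single consumer and buffers it in memory with execute.CopyTable when more
// than one transformation needs to read it.
func fanOut(tbl flux.Table, consumers []func(flux.Table) error) error {
	if len(consumers) == 1 {
		// One output: the table can be streamed without copying.
		return consumers[0](tbl)
	}
	// Multiple outputs: buffer the table once so every consumer gets a
	// readable copy. Assumed signature; check the flux version in use.
	buffered, err := execute.CopyTable(tbl)
	if err != nil {
		return err
	}
	for _, consume := range consumers {
		if err := consume(buffered.Copy()); err != nil {
			return err
		}
	}
	return nil
}
```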
The copy was unnecessary since it was just going to be copied
immediately afterwards into an Arrow buffer. In the future, we will want
to have storage send the Arrow buffer directly, but right now we are
putting it in an array and copying it anyway.
Even when we send an Arrow buffer, the underlying sequence of bytes will
probably be different and we will rely on the allocator to reuse bytes,
so let's remove the extra copy.
I did this with a dumb editor macro, so some comments changed too.
Also rename the root package from platform to influxdb.
In the interest of minimizing risk, every file importing the root package
now aliases it to "platform" so that no changes beyond imports were
necessary in those files.
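For example, a file that previously imported the root package keeps compiling unchanged apart from its import block, along these lines (the referenced symbol is just an example):

```go
package example

// Only the import line changes: the renamed root package is aliased back to
// "platform" so the rest of the file is untouched.
import (
	platform "github.com/influxdata/influxdb"
)

// platform.ID is used purely as an example symbol from the root package.
var _ platform.ID
```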
Lastly, replace the old platform module with the local path /dev/null so
that nobody can accidentally reintroduce a platform dependency while
migrating platform code to influxdb.
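Assuming the repo is managed with Go modules, the directive would look roughly like this in go.mod (illustrative):

```
replace github.com/influxdata/platform => /dev/null
```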
The table interface was modified to expose the Arrow buffers. The
storage table has now been converted to use this interface with the same
fixes so that it exposes Arrow buffers.
The influxql package has also been updated to use the `DoArrow` method
from the `flux.Table` interface.
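A minimal sketch of consuming a table through that method; the reader type name `flux.ArrowColReader` is an assumption about the flux API of this era, and a real consumer would pull the Arrow-backed column arrays off the reader rather than just counting buffers:

```go
package example

import "github.com/influxdata/flux"

// countBuffers drains a table via DoArrow, counting the Arrow-backed column
// buffers it is handed. Illustrative only.
func countBuffers(tbl flux.Table) (int, error) {
	n := 0
	err := tbl.DoArrow(func(cr flux.ArrowColReader) error {
		n++ // a real consumer would read the Arrow column arrays from cr here
		return nil
	})
	return n, err
}
```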
tl;dr
Previously, `Close` was being called concurrently by multiple
goroutines, resulting in a race condition. This commit resolves those
issues.
Background
The `Close` method was performing multiple duties: closing resources
and signaling that the table reading by the `Do` method was done.
Additionally, state tracking whether more records existed and whether
the table was empty had been ported from the more complicated gRPC
implementation. This logic has been simplified.
This new behavior (see the sketch following this list):
* `table#Do` is responsible for signaling that it is done by closing the
done channel
* The creator of the `table` is responsible for releasing the resources
by calling the `table#Close` method
* The `table#Do` reading can be cancelled by calling the `Cancel`
function, which is safe for concurrent use.
* The `Do` and `Close` methods are guarded by a mutex to protect storage
resources, such as cursors.
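A condensed sketch of that ownership model; the type and field names are illustrative, not the actual storage implementation:

```go
package example

import "sync"

type table struct {
	mu         sync.Mutex    // guards storage resources (e.g. cursors) shared by Do and Close
	done       chan struct{} // closed by Do when reading is finished
	cancel     chan struct{} // closed by Cancel to stop an in-progress read
	cancelOnce sync.Once
	closed     bool
}

func newTable() *table {
	return &table{
		done:   make(chan struct{}),
		cancel: make(chan struct{}),
	}
}

// Do reads the table and signals completion by closing the done channel.
func (t *table) Do(f func(row int) error) error {
	defer close(t.done)
	t.mu.Lock()
	defer t.mu.Unlock()
	for row := 0; row < 100; row++ {
		select {
		case <-t.cancel:
			return nil // reading was cancelled
		default:
		}
		if err := f(row); err != nil {
			return err
		}
	}
	return nil
}

// Close releases storage resources. It is called by the table's creator,
// never by Do, so the two duties stay separate.
func (t *table) Close() {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.closed {
		return
	}
	t.closed = true
	// release cursors and other storage resources here
}

// Cancel stops an in-progress Do and is safe to call from multiple goroutines.
func (t *table) Cancel() {
	t.cancelOnce.Do(func() { close(t.cancel) })
}

// Done exposes the channel closed by Do, so the creator can wait for the
// read to finish before calling Close.
func (t *table) Done() <-chan struct{} {
	return t.done
}
```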
This commit removes the remaining bits of the fields index. In doing
so, the buildCursor method on the engine needed to be updated.
It turns out that code was statically dead, so it was deleted along with
anything that depended on it. Additionally, everything reported by the
unused tool in the tsdb package was deleted.
This pulls the code that allows doing reads with Flux into the
platform repo, and removes extra.go.
The reusable portion is under storage/reads, while the concrete
implementation for one of the platform's engines is in
storage/readservice.
In order to make this more reusable, the cursors had to move into
their own package, decoupling them from all of the other code in the
tsdb package. tsdb/cursors is this new package, and type/function
aliases have been added to the tsdb package to point at it.
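The aliases look roughly like this in the tsdb package; the concrete type and function names below are illustrative:

```go
// Package tsdb re-exports the relocated cursor types via aliases so existing
// callers keep compiling.
package tsdb

import "github.com/influxdata/influxdb/tsdb/cursors"

type (
	Cursor             = cursors.Cursor
	IntegerArrayCursor = cursors.IntegerArrayCursor
)

// Go has no direct function alias syntax, so function "aliases" can be
// package-level variables or thin wrappers.
var NewCursorIterator = cursors.NewCursorIterator
```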
The models package is already very light on transitive dependencies,
so the cursors package was allowed to depend on it in a concrete way.
Finally, the protobuf definitions for issuing gRPC reads have been
moved into their own package for two reasons:
1. It's a clean separation, and helps keep it that way.
2. Many/most consumers will not be using gRPC. We just
use the datatypes to express the API, which makes building
a gRPC server easier.
It is left up to future refactorings (specifically ones that involve
gRPC) to determine if these types should remain, or if there is a
cleaner way.
There are still some dependencies on both github.com/influxdata/influxql
and github.com/influxdata/influxdb/logger that we can hopefully remove
in future refactorings.