This commit
* adds new request and response data types for schema gRPC calls
* adds a fmt.Stringer implementation to cursors.FieldType (sketched below)
* adds APIs to sort a slice of MeasurementField values
* upgrades the gogo protobuf package to v1.3.1, which
includes improvements to serialization.
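A minimal sketch of the new Stringer, assuming cursors.FieldType is an
integer enum; the variant names here are illustrative:

```go
// FieldType is a stand-in for the cursors.FieldType enum.
type FieldType int

const (
	Float FieldType = iota
	Integer
	Unsigned
	Boolean
	String
	Undefined
)

// String implements fmt.Stringer so field types print readably.
func (t FieldType) String() string {
	switch t {
	case Float:
		return "float"
	case Integer:
		return "integer"
	case Unsigned:
		return "unsigned"
	case Boolean:
		return "boolean"
	case String:
		return "string"
	default:
		return "undefined"
	}
}
```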
Filter cursors buffer points between calls to Next() when the number
of points read exceeds 1000. Previously, this buffer was cleared
before it was iterated over, which caused queries to return a result
set whose row count was always divisible by 1000.
This change defers clearing the buffer until after its points have
been read. It affects any query that reads more than 1000 points from
a single series and has a filter that can be successfully applied to
at least one of those points.
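A minimal sketch of the corrected flow, with hypothetical cursor and
point types standing in for the real filter cursors:

```go
type point struct {
	ts int64
	v  float64
}

type filterCursor struct {
	buf  []point
	i    int
	fill func(buf []point) []point // reads up to 1000 points, applies the filter
}

func (c *filterCursor) Next() (int64, float64, bool) {
	for {
		// Drain points buffered by the previous fill first.
		if c.i < len(c.buf) {
			p := c.buf[c.i]
			c.i++
			return p.ts, p.v, true
		}
		// Only once the buffer is exhausted is it cleared and
		// refilled; clearing it up front dropped the trailing points.
		c.buf = c.fill(c.buf[:0])
		c.i = 0
		if len(c.buf) == 0 {
			return 0, 0, false
		}
	}
}
```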
* refactor(storage): move type ByTagKey to the only package that uses it
* refactor(tsdb): use types in tsdb/cursors
* refactor(tsdb): remove unused type SeriesIDElems
* refactor(tsdb): inline only use of tsdb.ReadAllSeriesIDIterator
* refactor(tsdb): move series file to its own package
* refactor(storage): remove platform->influxdb aliases
* refactor(storage): add readSource field accessors
* refactor(storage): remove unused limitSeriesCursor
* refactor(storage): export IndexSeriesCursor
This allows IDPE to use the same implementation rather than duplicate
code. The unit tests were also copied from IDPE.
* chore: go fmt
* refactor(storage): move unused code to repo that needs it
Turns out that a bunch of code is only needed in IDPE. This change
removes that code, and another PR adds it to IDPE.
* refactor(storage): export KeyMerger
* refactor(storage): export NilSortHi and NilSortLo
* refactor(storage): move StringIterator & friends to IDPE
* refactor(storage): unexport a few test helper funcs
This adds an LRU cache for the columns that are produced as tags. When
producing the columns that are part of the group key, the reader
generates each column and keeps it in an LRU cache for reuse by future
tables. The start and stop columns are effectively cached for every
table because they are special and identical across all tables.
For the tags, the cache retains the most recently used columns since
they may be needed by a future table; that way most of the columns end
up shared. When the sizes differ, a re-sliced view is used so the
underlying data is still shared at the smaller length (see the sketch
below).
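A minimal sketch of such a cache; the key and column types are
illustrative stand-ins for the real column data:

```go
import "container/list"

type cachedColumn struct {
	key string
	col []string // materialized column, shared between tables
}

type columnCache struct {
	order *list.List               // front = most recently used
	items map[string]*list.Element // tag key/value → cached column
	max   int
}

// get returns the cached column for key, building and caching it on a
// miss and evicting the least recently used entry when over capacity.
func (c *columnCache) get(key string, build func() []string) []string {
	if e, ok := c.items[key]; ok {
		c.order.MoveToFront(e) // mark as most recently used
		return e.Value.(*cachedColumn).col
	}
	col := build()
	c.items[key] = c.order.PushFront(&cachedColumn{key: key, col: col})
	if c.order.Len() > c.max {
		last := c.order.Back()
		c.order.Remove(last)
		delete(c.items, last.Value.(*cachedColumn).key)
	}
	return col
}
```

When a table is shorter than the cached column, a re-slice such as
col[:n] yields the smaller length while sharing the same backing array.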
This removes the duplicate filter used by the reader. The storage
engine shouldn't be sending us duplicate tables in the first place,
and the filter hurts performance in high-cardinality queries because
of the memory it uses to track every key that has been seen.
The ResponseWriter would truncate the last series when the byte size
of the point frames exceeded the writeSize constant, causing a Flush
to occur and the cumulative ResponseWriter.sz to reset to zero.
Because ResponseWriter.sz was not incremented for each frame, it
remained at zero, which caused the final Flush to short-circuit.
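A minimal sketch of the corrected accounting; the type and field names
are hypothetical:

```go
const writeSize = 64 * 1024 // hypothetical flush threshold

type frame interface{ Size() int }

type ResponseWriter struct {
	frames []frame
	sz     int // cumulative byte size of buffered frames
}

func (w *ResponseWriter) append(f frame) {
	w.frames = append(w.frames, f)
	w.sz += f.Size() // the missing increment: without it sz stayed at zero
	if w.sz >= writeSize {
		w.flush()
	}
}

func (w *ResponseWriter) flush() {
	// ... write w.frames to the stream ...
	w.frames = w.frames[:0]
	w.sz = 0
}
```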
This commit implements the Size method for the cursors.Array types,
which is used to estimate the size of a frame. It replaces calls to
the generated Protocol Buffer `Size` function, which can be very
expensive.
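A minimal sketch of such a Size method, assuming an array shaped like
cursors.FloatArray (int64 timestamps plus float64 values, so roughly
16 bytes per point):

```go
type FloatArray struct {
	Timestamps []int64
	Values     []float64
}

// Size estimates the encoded size of the array without calling the
// generated Protocol Buffer Size function.
func (a *FloatArray) Size() int {
	return len(a.Timestamps)*8 + len(a.Values)*8
}
```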
If the reader produces more than one table with the same group key,
the later ones are discarded, because the stream should never deliver
more than one table per group key. Duplicates are an error and
indicate the server sent a bad set of data. This change makes the
client tolerant of such data by discarding it when it appears.
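A minimal sketch of the tolerance logic, with a hypothetical Table
type whose Key method returns a canonical encoding of the group key:

```go
type Table interface {
	Key() string // canonical encoding of the group key
}

// dedupe processes each group key once, silently discarding any
// duplicate tables the server sends.
func dedupe(tables []Table, process func(Table) error) error {
	seen := make(map[string]struct{})
	for _, tbl := range tables {
		if _, ok := seen[tbl.Key()]; ok {
			continue // server bug: discard the duplicate group key
		}
		seen[tbl.Key()] = struct{}{}
		if err := process(tbl); err != nil {
			return err
		}
	}
	return nil
}
```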
When a buffered column reader was reused, its length was not reset to
the requested buffer length, so the reported length could be longer
than the actual columns.
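A minimal sketch of the fix, with a hypothetical reader shape:

```go
type bufferedColReader struct {
	cols [][]float64
	l    int // logical row count
}

func (b *bufferedColReader) reset(n int) {
	// Re-slice every column to the requested length; previously the
	// old length was kept, so len(col) could exceed the valid rows.
	for i := range b.cols {
		b.cols[i] = b.cols[i][:n]
	}
	b.l = n
}
```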
The storage table reader will now work correctly when there are multiple
outputs. The table interface now implements the new table and column
reader interfaces and works properly with `execute.CopyTable`. The
source uses `execute.CopyTable` to buffer the table in memory when there
are multiple output transformations.
The copy was unnecessary since the data was immediately copied again
into an Arrow buffer. In the future we will want storage to send the
Arrow buffer directly, but right now we put the data in an array and
copy it anyway.
Even when we send an Arrow buffer, the underlying sequence of bytes
will probably differ, and we will rely on the allocator to reuse
bytes, so the extra copy can be removed.
This manifested as an incorrect sort ordering when the keys were
serialized via RPC, resulting in an `invalid partition key order`
error. This fix introduces a delimiter to ensure distinct sort keys
cannot collide.
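To see why the delimiter is needed: with plain concatenation, the
distinct keys {"a", "bc"} and {"ab", "c"} both serialize to "abc" and
compare as equal. A minimal sketch, with an illustrative delimiter
choice:

```go
import "strings"

// sortKey serializes the key parts with a delimiter so that distinct
// part boundaries can no longer produce identical serializations.
func sortKey(parts []string) string {
	return strings.Join(parts, ",")
}
```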
These tables were previously used to perform meta queries.
Meta queries are now answered using a specific API, and as
a result, these tables can go away.
Translate the measurement and field tag key names to their
non-storage names and add the `_start` and `_stop` tag keys to the
output, since they aren't real tags but are added by `range`.
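A minimal sketch of the translation in both directions, assuming the
reserved storage tag keys used by InfluxDB 2.x (0x00 for the
measurement, 0xff for the field); the helper names are hypothetical:

```go
// toStorageTagKey maps a user-facing tag key to its storage form.
func toStorageTagKey(key string) string {
	switch key {
	case "_measurement":
		return "\x00"
	case "_field":
		return "\xff"
	default:
		return key
	}
}

// fromStorageTagKey maps a storage tag key back to its user-facing form.
func fromStorageTagKey(key string) string {
	switch key {
	case "\x00":
		return "_measurement"
	case "\xff":
		return "_field"
	default:
		return key
	}
}
```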
The RPC call should translate `_measurement` and `_field` to their
shortened storage byte strings when requesting the tag values.
This also fixes the planner rewrites to return the root node even
when no rewrite happened, as the planner requires this (a sketch
follows the next paragraph).
If a pattern matching the `v1.tagValues(...)` call is seen, it is
replaced with a direct RPC call to read the tag values for the
selected tag key, which should be better optimized than reading from
the storage engine's TSM1 files.
If a pattern that reads the tag keys is seen, it is likewise replaced
with a direct RPC call to read the tag keys.
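A minimal sketch of the rewrite contract, with a hypothetical rule
interface modeled loosely on the Flux planner; returning the original
node with ok=false is the fix described above:

```go
// Node and Rule are hypothetical stand-ins for the planner's types.
type Node interface{}

type Rule interface {
	// Rewrite returns the (possibly replaced) root node, whether a
	// change was made, and any error.
	Rewrite(node Node) (Node, bool, error)
}

type tagValuesRule struct{}

func (tagValuesRule) Rewrite(node Node) (Node, bool, error) {
	if rpcNode, ok := matchTagValuesPattern(node); ok {
		// Replace the v1.tagValues(...) shape with a direct RPC read.
		return rpcNode, true, nil
	}
	// The fix: return the original root node, not nil, when nothing
	// was rewritten; the planner requires a valid root either way.
	return node, false, nil
}

// matchTagValuesPattern is a hypothetical helper that detects the
// v1.tagValues(...) pattern and builds the RPC read node.
func matchTagValuesPattern(node Node) (Node, bool) {
	return nil, false // detection elided
}
```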