* test(storage): ensure multiple engines can run concurrently
* feat(storage): expose control over retention run
Fixes #15134.
This commit adds the ability to inject a functional option into a
storage.Engine for controlling when the retention enforcer can run.
Previously, retention enforcers ran on an interval; if you ran multiple
storage engines (as we do in some environments) then it was not possible
to coordinate when engines ran retention. Often they would synchronise
because they started at the same time.
This change will let you specify a blocking function to control when the
retention enforcer can run.
A simple function for serialising retention enforcement across multiple
storage engines could look like:
```go
var mu sync.Mutex

// f blocks until the retention enforcer may run and returns a function
// that must be called when the run has finished.
func f() (done func()) {
    mu.Lock()
    return func() { mu.Unlock() }
}
```
The ResponseWriter would truncate the last series if the byte size of
the points frames exceeded the writeSize constant, causing a Flush to
occur and the cumulative ResponseWriter.sz to reset to zero. Because
ResponseWriter.sz was not incremented for each frame, it remained at
zero, which resulted in the final Flush short-circuiting.
This commit implements the Size method for the cursors.Array types,
which is used to estimate the size of a frame. This is in place of calling
the Protocol Buffer `Size` function, which can be very expensive.
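As an illustration, a minimal sketch of the cheap estimate, using a hypothetical integer array type rather than the real cursors.Array layout:
```go
package main

import "fmt"

// IntegerArray is a hypothetical stand-in for a cursors.Array type
// holding timestamps and integer values.
type IntegerArray struct {
    Timestamps []int64
    Values     []int64
}

// Size cheaply estimates the encoded size of the array by assuming a
// fixed width per timestamp and value, avoiding the cost of the
// Protocol Buffer Size function.
func (a *IntegerArray) Size() int {
    // 8 bytes per timestamp + 8 bytes per value.
    return len(a.Timestamps)*8 + len(a.Values)*8
}

func main() {
    a := &IntegerArray{
        Timestamps: []int64{1, 2, 3},
        Values:     []int64{10, 20, 30},
    }
    fmt.Println(a.Size()) // 48
}
```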
If the reader produces more than one table with the same group key, we
discard the later ones, since the stream should never contain duplicate
group keys. Such duplicates are an error and indicate the server sent us
a bad set of data. This change makes the client tolerant of that data
and discards it if it exists.
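A rough sketch of the tolerance logic, with a simplified stand-in for the decoded table type and its group key:
```go
package main

import "fmt"

// table is a simplified stand-in for a decoded result table.
type table struct {
    groupKey string
    rows     int
}

// dedupeTables keeps only the first table seen for each group key and
// silently discards later duplicates, which indicate bad server data.
func dedupeTables(in []table) []table {
    seen := make(map[string]bool)
    out := in[:0]
    for _, t := range in {
        if seen[t.groupKey] {
            continue // duplicate group key: discard
        }
        seen[t.groupKey] = true
        out = append(out, t)
    }
    return out
}

func main() {
    tables := []table{{"a", 1}, {"b", 2}, {"a", 3}}
    fmt.Println(dedupeTables(tables)) // [{a 1} {b 2}]
}
```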
Adds the ability to set the current generation to use when compacting
the cache only. Previously, we used the current generation for all
files, but this causes issues; we should only use the current generation
for level 1 compaction.
When a buffered column reader was used, the length was not reset to the
requested buffer length, so it was possible for the length to be longer
than the actual columns.
The storage table reader will now work correctly when there are multiple
outputs. The table interface now implements the new table and column
reader interfaces and works properly with `execute.CopyTable`. The
source uses `execute.CopyTable` to buffer the table in memory when there
are multiple output transformations.
I don't see anywhere obvious that an engine would be closed twice, but
if it were, the RLock would have been held permanently, such that a Lock
could not be taken later.
Running `go test ./storage/...` did not trigger a double-close.
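For illustration, a hedged sketch of one common way to make Close idempotent with a closed flag; the fields and locking shown are illustrative, not the engine's actual implementation:
```go
package main

import "sync"

// engine is an illustrative stand-in; the real storage.Engine differs.
type engine struct {
    mu     sync.RWMutex
    closed bool
}

// Close is safe to call more than once. Without the closed check, a
// second call that returned early while still holding a lock would
// leave it held forever, blocking any later Lock.
func (e *engine) Close() error {
    e.mu.Lock()
    defer e.mu.Unlock()
    if e.closed {
        return nil // already closed; nothing to do
    }
    e.closed = true
    // ... release engine resources here ...
    return nil
}
```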
The controller implementation is primarily used by influxdb so it
shouldn't be part of the flux repository. This copies the code from flux
to influxdb so it can be removed from the next flux release.
The copy was unnecessary since it was just going to be copied
immediately afterwards into an Arrow buffer. In the future, we will want
to have storage send the Arrow buffer directly, but right now we are
putting it in an array and copying it anyway.
Even when we send an Arrow buffer, the underlying sequence of bytes is
probably going to be different, and we will rely on the allocator to
reuse bytes, so let's remove the extra copy.
This manifested as incorrect sort ordering when serialized via RPC,
resulting in an `invalid partition key order` error.
This fix introduces a delimiter to ensure sort keys cannot collide.
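To illustrate the collision, a small sketch showing how two different key tuples serialize to the same string when naively concatenated, and how a delimiter keeps them distinct (the delimiter byte here is only an example):
```go
package main

import (
    "fmt"
    "strings"
)

// joinNaive concatenates key parts directly: ("ab","c") and ("a","bc")
// both serialize to "abc", so distinct keys collide and the relative
// ordering of nearby keys can flip after serialization.
func joinNaive(parts ...string) string {
    return strings.Join(parts, "")
}

// joinDelimited inserts a delimiter that cannot appear in the parts,
// so distinct tuples always serialize to distinct keys.
func joinDelimited(parts ...string) string {
    return strings.Join(parts, "\x00")
}

func main() {
    fmt.Println(joinNaive("ab", "c") == joinNaive("a", "bc"))         // true: collision
    fmt.Println(joinDelimited("ab", "c") == joinDelimited("a", "bc")) // false
}
```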
These tables were previously used to perform meta queries.
Meta queries are now answered using a specific API, and as
a result, these tables can go away.
Translate the measurement and field tag key names to their non-storage
names and add the `_start` and `_stop` tag keys to the output since
they aren't real tags, but ones that are added by range.
The RPC call should translate `_measurement` and `_field` to their
proper shortened byte strings when requesting the tag values.
This also fixes the planner rewrites to return the root node even when
no rewrite happened, as this is required by the planner.
The TagValues API will perform a linear scan if there is no predicate;
otherwise, it will use the index to find a list of candidate series
keys.
TagKeys expects the predicate to be transformed such that
`_measurement` and `_field` are remapped to `\x00` and `\xff`
respectively.
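A minimal sketch of that remapping; the constants match the byte strings above, but the helper function itself is hypothetical:
```go
package main

import "fmt"

const (
    measurementKey = "\x00" // storage name for _measurement
    fieldKey       = "\xff" // storage name for _field
)

// remapTagKey translates the user-facing tag key names to the shortened
// byte strings the storage engine actually indexes.
func remapTagKey(key string) string {
    switch key {
    case "_measurement":
        return measurementKey
    case "_field":
        return fieldKey
    default:
        return key
    }
}

func main() {
    fmt.Printf("%q\n", remapTagKey("_measurement")) // "\x00"
    fmt.Printf("%q\n", remapTagKey("_field"))       // "\xff"
    fmt.Printf("%q\n", remapTagKey("host"))         // "host"
}
```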
There is one TODO marked to analyze the predicate for a
`\x00 = '<measurement>'` pattern. If found, the predicate can be
eliminated, falling back to a linear prefix scan by combining the org,
bucket and measurement. This is tracked by issue #13497.
If a pattern is seen that matches the `v1.tagValues(...)` call, then it
will be replaced with a direct RPC call to read the tag values for the
selected tag key, which should be better optimized than reading from the
storage engine tsm1 files.
If a pattern is seen that matches reading the tag keys, it will be
replaced with a direct RPC call to read the tag keys, which should be
better optimized than reading from the storage engine tsm1 files.
* Extend storage service protobuf with TagKeys and TagValues
Co-authored-by: Michael Desa <mjdesa@gmail.com>
Co-authored-by: Jacob Marble <jacobmarble@influxdata.com>
* Extend the reads.Store interface with new TagKeys and TagValues APIs
* Extend readservice.store to implement refactored reads.Store interface
* Implement a StringIterator gRPC writer / serializer
* Implement a StringIterator gRPC reader / deserializer
* Implement a StringIterator merger
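As a sketch of the last item, a simplified merger over a stand-in iterator interface; the real cursors.StringIterator and a production k-way merge would differ:
```go
package main

import (
    "fmt"
    "sort"
)

// stringIterator is a simplified stand-in for a string iterator that
// yields values one at a time.
type stringIterator interface {
    Next() bool
    Value() string
}

type sliceIterator struct {
    values []string
    i      int
}

func (s *sliceIterator) Next() bool {
    if s.i >= len(s.values) {
        return false
    }
    s.i++
    return true
}

func (s *sliceIterator) Value() string { return s.values[s.i-1] }

// mergeStrings drains all iterators and returns a sorted, de-duplicated
// slice of their values. A production merger would stream k-way merged
// results instead of materializing everything.
func mergeStrings(its ...stringIterator) []string {
    seen := make(map[string]bool)
    var out []string
    for _, it := range its {
        for it.Next() {
            v := it.Value()
            if !seen[v] {
                seen[v] = true
                out = append(out, v)
            }
        }
    }
    sort.Strings(out)
    return out
}

func main() {
    a := &sliceIterator{values: []string{"cpu", "mem"}}
    b := &sliceIterator{values: []string{"cpu", "disk"}}
    fmt.Println(mergeStrings(a, b)) // [cpu disk mem]
}
```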
Storage converts references to _value in filter predicates to $.
It then considers any predicate that does not reference $ to be
a tag predicate. Tag predicates are used to construct series index
cursors.
This commit fixes a bug where field comparisons were being included
in tag predicates because the HasFieldValueKey function searched
for comparison expressions referencing _value instead of $, even though
all references to _value had already been replaced. As a result, an
expression of the form `$ < 3000` would be considered a tag expression
and therefore mistakenly included as a tag predicate.
Fixes #13159.
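A hedged sketch of the corrected check over a toy expression tree (the real code walks the influxql AST): after the rewrite, field comparisons reference `$`, so that is the name the walk must look for.
```go
package main

import "fmt"

// expr, binaryExpr, varRef and intLit form a tiny stand-in expression
// tree; the real code operates on the influxql AST.
type expr interface{}

type binaryExpr struct {
    lhs, rhs expr
    op       string
}

type varRef struct{ name string }

type intLit struct{ val int64 }

// hasFieldValueKey reports whether the expression references the field
// value. After the predicate rewrite, the field value is named "$",
// not "_value", so "$" is the reference we must search for.
func hasFieldValueKey(e expr) bool {
    switch e := e.(type) {
    case *varRef:
        return e.name == "$"
    case *binaryExpr:
        return hasFieldValueKey(e.lhs) || hasFieldValueKey(e.rhs)
    default:
        return false
    }
}

func main() {
    // `$ < 3000` references the field value, so it must not be treated
    // as a tag predicate.
    e := &binaryExpr{lhs: &varRef{name: "$"}, op: "<", rhs: &intLit{val: 3000}}
    fmt.Println(hasFieldValueKey(e)) // true
}
```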
A cursor can be in-process (local) or remote. Remote cursors have
already applied the `hasPoints` check to reduce network traffic.
Testing whether the cursor has points again here:
https://github.com/influxdata/influxdb/blob/master/storage/reads/reader.go#L221
will always return `false` for a remote cursor that has applied the
NoPoints optimization.
This temporary fix allows the `hasPoints` function to differentiate
a streamCursor and always return true in that case.
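A minimal sketch of the workaround, with illustrative stand-ins for the cursor types:
```go
package main

import "fmt"

// cursor and streamCursor are illustrative stand-ins for the local and
// remote (gRPC stream backed) cursor types.
type cursor interface{ hasLocalPoints() bool }

type localCursor struct{ n int }

func (c localCursor) hasLocalPoints() bool { return c.n > 0 }

type streamCursor struct{}

func (streamCursor) hasLocalPoints() bool { return false }

// hasPoints assumes a remote stream cursor already filtered out empty
// series on the other side, so it always reports true for it rather
// than re-checking locally (which would always come back false).
func hasPoints(c cursor) bool {
    if _, ok := c.(streamCursor); ok {
        return true
    }
    return c.hasLocalPoints()
}

func main() {
    fmt.Println(hasPoints(streamCursor{}))     // true
    fmt.Println(hasPoints(localCursor{n: 0}))  // false
    fmt.Println(hasPoints(localCursor{n: 10})) // true
}
```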
The storage engine will now drop any points that contain invalid tag
data. Special tag keys for the measurement and field key will be
excepted from this validation.
We will want to validate that all tag key/value data is valid Unicode.
This commit changes the validation helper to only validate provided
tags, since measurements are currently very likely to contain invalid
UTF-8 characters.
There are two exceptions to the tag validation: the special tag keys for
the measurement and the field key.
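A rough sketch of the validation rule; the special key constants follow the `\x00`/`\xff` convention mentioned elsewhere in this log, but the helper and its signature are hypothetical:
```go
package main

import (
    "fmt"
    "unicode/utf8"
)

const (
    measurementTagKey = "\x00" // special key carrying the measurement name
    fieldTagKey       = "\xff" // special key carrying the field key
)

// validTags reports whether every tag key and value is valid UTF-8.
// The special measurement and field tag keys are exempt, because
// measurement names are currently very likely to contain invalid UTF-8.
func validTags(tags map[string]string) bool {
    for k, v := range tags {
        if k == measurementTagKey || k == fieldTagKey {
            continue // exempt from validation
        }
        if !utf8.ValidString(k) || !utf8.ValidString(v) {
            return false
        }
    }
    return true
}

func main() {
    fmt.Println(validTags(map[string]string{"host": "server-1"}))               // true
    fmt.Println(validTags(map[string]string{"host": string([]byte{0xff})}))     // false
    fmt.Println(validTags(map[string]string{measurementTagKey: "\xff\xfe"}))    // true: exempt
}
```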
When the WAL was moved up, the validation that happened at the cache
was skipped. This moves the field type validation for a batch of
points up ahead of the WAL again.
It is possible a StreamReader (gRPC) may return an empty response. This
change adds retry and bail-out support. When a bail-out occurs,
reads.ErrStreamNoData is returned.
StorageReadClient adapts a gRPC Storage_ReadClient to provide
cursors.CursorStats by reading the trailer after receiving the final
message from the stream.
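A hedged sketch of reading trailer metadata once the stream has terminated; the trailer keys and the stats struct here are hypothetical, though `Trailer()` on a gRPC client stream really is only populated after the final message has been received:
```go
package main

import (
    "fmt"
    "strconv"

    "google.golang.org/grpc/metadata"
)

// trailerStream is the minimal surface we need from a gRPC client
// stream: Trailer() is only valid after the stream has terminated.
type trailerStream interface {
    Trailer() metadata.MD
}

// cursorStats is a hypothetical stand-in for cursors.CursorStats.
type cursorStats struct {
    ScannedValues int
    ScannedBytes  int
}

// statsFromTrailer extracts cursor statistics from hypothetical trailer
// keys once the stream has been fully drained.
func statsFromTrailer(s trailerStream) cursorStats {
    var st cursorStats
    md := s.Trailer()
    if v := md.Get("scanned-values"); len(v) > 0 {
        st.ScannedValues, _ = strconv.Atoi(v[0])
    }
    if v := md.Get("scanned-bytes"); len(v) > 0 {
        st.ScannedBytes, _ = strconv.Atoi(v[0])
    }
    return st
}

// fakeStream simulates a drained stream for demonstration.
type fakeStream struct{ md metadata.MD }

func (f fakeStream) Trailer() metadata.MD { return f.md }

func main() {
    s := fakeStream{md: metadata.Pairs("scanned-values", "42", "scanned-bytes", "336")}
    fmt.Printf("%+v\n", statsFromTrailer(s)) // {ScannedValues:42 ScannedBytes:336}
}
```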
This commit adds the pkg/lifecycle.Resource to help manage opening,
closing, and leasing out references to some resource. A resource
cannot be closed until all acquired references have been released.
If the debug_ref tag is enabled, all resource acquisitions keep
track of the stack trace that created them and have a finalizer
associated with them to print on stderr if they are leaked. It also
registers a handler on SIGUSR2 to dump all of the currently live
resources.
Having resources tracked in a uniform way with a data type allows us
to do more sophisticated tracking with the debug_ref tag, as well.
For example, we could panic the process if a resource cannot be
closed within a certain time frame, or attempt to figure out the
DAG of resource ownership dynamically.
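A hedged sketch of the acquire/release idea; the API is illustrative and, unlike the real resource (which waits for references to be released before closing), this sketch simply refuses to close while references are held:
```go
package main

import (
    "errors"
    "fmt"
    "sync"
)

// resource is an illustrative reference-counted resource.
type resource struct {
    mu     sync.Mutex
    refs   int
    closed bool
}

// Acquire hands out a reference; the returned release function must be
// called exactly once when the caller is done.
func (r *resource) Acquire() (release func(), err error) {
    r.mu.Lock()
    defer r.mu.Unlock()
    if r.closed {
        return nil, errors.New("resource closed")
    }
    r.refs++
    var once sync.Once
    return func() {
        once.Do(func() {
            r.mu.Lock()
            r.refs--
            r.mu.Unlock()
        })
    }, nil
}

// Close refuses to close while references are still live.
func (r *resource) Close() error {
    r.mu.Lock()
    defer r.mu.Unlock()
    if r.refs > 0 {
        return fmt.Errorf("%d references still held", r.refs)
    }
    r.closed = true
    return nil
}

func main() {
    var r resource
    release, _ := r.Acquire()
    fmt.Println(r.Close()) // 1 references still held
    release()
    fmt.Println(r.Close()) // <nil>
}
```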
This commit also fixes many issues around resources, correctness
during error scenarios, reporting of errors, idempotency of
close, tracking of memory for some data structures, resource leaks
in tests, and out of order dependency closes in tests.
It turns out that LastModified and DiskSize are unused, so it was easy
to change them to not care about the WAL.
This hooks up metrics and starts the WAL again.
At the cost of some nil checks, we don't have to have an interface or an
empty implementation, and we don't have to defend against subtle bugs
with nils in non-nil interfaces.
Also, the tsm1 engine is losing the WAL anyway.
Because the WAL relies on the tsm1.Value type, we move that into its own
tsm1/value package and set up some aliases forwarding them into tsm1. This
also required adding some methods and changing consumers to avoid the
unexported fields. I imagine this step will be useful one day when we make
the write path more efficient with respect to consuming points.
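The forwarding uses ordinary Go type and function aliases; a sketch of the pattern, with the import path and exported names being illustrative:
```go
// Package tsm1 (sketch) re-exports the moved Value type via aliases so
// existing callers keep compiling unchanged.
package tsm1

import "github.com/influxdata/influxdb/tsdb/tsm1/value"

// Value is a type alias, not a new type, so values flow between the
// packages with no conversion and no runtime cost.
type Value = value.Value

// NewValue forwards the constructor for the same reason.
var NewValue = value.NewValue
```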
This commit additionally fixes some issues with generation. The iterator.tmpldata
and generation for array_cursor_* were removed accidentally when removing
iterators, making those generated files stale. Restore that and regenerate.
No change in functionality.
This commit improves the performance of a mass delete on the TSI index
by deleting at the measurement level instead of deleting each series
individually.
I did this with a dumb editor macro, so some comments changed too.
Also rename root package from platform to influxdb.
In the interest of minimizing risk, every file importing the root
package now aliases it to "platform" so that no changes beyond imports
were necessary in those files.
Lastly, replace the old platform module with the local path /dev/null so
that nobody can accidentally reintroduce a platform dependency while
migrating platform code to influxdb.
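For example, an importing file keeps the old identifier by aliasing the import; a sketch of the pattern (the file and the referenced identifier are illustrative):
```go
// A file that previously imported the root "platform" package keeps the
// identifier by aliasing the renamed import; nothing else in the file
// needs to change.
package example

import platform "github.com/influxdata/influxdb"

// ID is whichever root-package identifier the file already referenced;
// the alias keeps references such as platform.ID intact.
type ID = platform.ID
```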
The storage bucket service wraps another bucket service, and invokes
actions on a storage engine based upon the actions taken upon buckets.
Currently, the storage bucket service will delete bucket data from the
storage engine when the bucket is deleted via the bucket service.
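A rough sketch of the wrapping pattern, with simplified stand-ins for the bucket service and engine interfaces:
```go
package main

import (
    "context"
    "fmt"
)

// bucketService and engine are simplified stand-ins for the platform
// bucket service and storage engine interfaces.
type bucketService interface {
    DeleteBucket(ctx context.Context, id string) error
}

type engine interface {
    DeleteBucket(ctx context.Context, id string) error
}

// storageBucketService wraps another bucket service and mirrors bucket
// deletions into the storage engine so the bucket data is removed too.
type storageBucketService struct {
    inner  bucketService
    engine engine
}

func (s *storageBucketService) DeleteBucket(ctx context.Context, id string) error {
    if err := s.inner.DeleteBucket(ctx, id); err != nil {
        return err
    }
    // Only drop the data once the bucket itself was deleted successfully.
    return s.engine.DeleteBucket(ctx, id)
}

type noop struct{ name string }

func (n noop) DeleteBucket(ctx context.Context, id string) error {
    fmt.Println(n.name, "delete", id)
    return nil
}

func main() {
    svc := &storageBucketService{inner: noop{"service"}, engine: noop{"engine"}}
    _ = svc.DeleteBucket(context.Background(), "bucket-1")
}
```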
A standard Makefile is used now in all subdirs that run go generate.
Make will only generate the file if its source files changed.
The checkgenerate target runs clean to ensure all targets are generated
fresh.
The table interface was modified to expose the arrow buffers. The
storage table has now been converted to use this interface with the same
fixes so that it exposes arrow buffers.
The influxql package has also been updated to use the `DoArrow` method
from the `flux.Table` interface.
Moves the check to determine if a series has data for the current time
range into the groupBySort function, which is consistent with the
groupNoneSort function.
The flux query controller was updated to include a Shutdown method a
while ago. Explicitly handle query controller creation and shutdown
where applicable.
In influxd, this ensures that outstanding queries are handled before the
process dies. In tests, this ensures that query controller goroutines
aren't leaked, which drastically simplifies reading full stack traces.
This change also registers query controller metrics with the prometheus
registry in influxd.
We were passing a non-nil tsm1.Log containing a nil *tsm1.WAL which
would cause a panic when it was attempted to be used. Instead, always
pass a non-nil WAL.
We change the storage engine code to not pass in a nil WAL, and
additionally add a defensive check to change any nil WALs into a
NopWAL.
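A hedged sketch of the defensive check, including the typed-nil-in-interface trap; the Log interface and types here are simplified stand-ins:
```go
package main

import "fmt"

// Log is a simplified stand-in for the tsm1 WAL interface.
type Log interface {
    WriteMulti(values map[string][]float64) error
}

// WAL is a simplified stand-in for the concrete *tsm1.WAL.
type WAL struct{}

func (w *WAL) WriteMulti(map[string][]float64) error { return nil }

// NopWAL satisfies Log but does nothing, so callers never need nil checks.
type NopWAL struct{}

func (NopWAL) WriteMulti(map[string][]float64) error { return nil }

// ensureWAL converts a nil Log, or a non-nil Log interface that wraps a
// typed nil *WAL, into a NopWAL so calls on it are always safe.
func ensureWAL(w Log) Log {
    if w == nil {
        return NopWAL{}
    }
    if wal, ok := w.(*WAL); ok && wal == nil {
        return NopWAL{}
    }
    return w
}

func main() {
    var wal *WAL          // typed nil
    var w Log = wal       // non-nil interface wrapping a nil pointer
    fmt.Println(w != nil) // true: the subtle trap
    w = ensureWAL(w)
    fmt.Println(w.WriteMulti(nil)) // <nil>, no panic
}
```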
- Add some documentation.
- Move compaction planner to an option instead of config.
The latter fits with the general theme of having config be things
that can be specified in a TOML file, and everything else being an
option.
tl;dr
Previously, `Close` was being called concurrently by multiple
goroutines, resulting in a race condition. This commit resolves those
issues.
Background
The `Close` method was performing multiple duties: closing resources
and signalling that the table reading by the `Do` method was done.
Additionally, state tracking whether more records existed and whether
the table was empty was ported from the more complicated gRPC
implementation. This logic has been simplified.
The new behavior:
* `table#Do` is responsible for signalling that it is done, by closing the
done channel
* The creator of the `table` is responsible for releasing the resources
by calling the `table#Close` method
* The `table#Do` reading can be cancelled by calling the `Cancel`
function, which is safe for concurrent use.
* The Do and Close methods are protected by a mutex to protect storage
resources, such as cursors.
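A minimal sketch of that division of responsibilities; the fields and methods are illustrative, not the actual table implementation:
```go
package main

import (
    "fmt"
    "sync"
)

// table sketches the split responsibilities: Do closes done when
// reading finishes, Close releases resources, and Cancel may be called
// concurrently to stop the read.
type table struct {
    mu        sync.Mutex // guards storage resources such as cursors
    done      chan struct{}
    cancelled chan struct{}
    once      sync.Once
}

func newTable() *table {
    return &table{done: make(chan struct{}), cancelled: make(chan struct{})}
}

// Do reads rows and signals completion by closing the done channel.
func (t *table) Do(f func(row int)) {
    t.mu.Lock()
    defer t.mu.Unlock()
    defer close(t.done)
    for i := 0; i < 3; i++ {
        select {
        case <-t.cancelled:
            return
        default:
            f(i)
        }
    }
}

// Cancel is safe for concurrent use and stops an in-progress Do.
func (t *table) Cancel() { t.once.Do(func() { close(t.cancelled) }) }

// Close releases the table's resources; the creator calls it after Do.
func (t *table) Close() {
    t.mu.Lock()
    defer t.mu.Unlock()
    // ... release cursors and other storage resources here ...
}

func main() {
    t := newTable()
    t.Do(func(row int) { fmt.Println("row", row) })
    <-t.done // Do has signalled completion
    t.Close()
}
```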
These are the log messages that get printed immediately when starting
the application for the first time. This fixes the messages to conform
to the logging style guide.
Both of these options would dereference a nil pointer when applied.
Instead, set the field to be the address of an integer containing the
right value.
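A tiny sketch of the fix, taking the address of a local copy instead of writing through a possibly nil pointer (the option and field names are illustrative):
```go
package main

import "fmt"

type config struct {
    // MaxConcurrentCompactions is optional; nil means "use the default".
    MaxConcurrentCompactions *int
}

// WithMaxConcurrentCompactions returns an option that sets the field to
// the address of a fresh integer rather than dereferencing a pointer
// that may be nil.
func WithMaxConcurrentCompactions(n int) func(*config) {
    return func(c *config) {
        v := n
        c.MaxConcurrentCompactions = &v // never dereferences a nil pointer
    }
}

func main() {
    var c config
    WithMaxConcurrentCompactions(4)(&c)
    fmt.Println(*c.MaxConcurrentCompactions) // 4
}
```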
This commit removes the remaining bits of the fields index. In doing
so, the buildCursor method on the engine would need to be updated.
It turns out that code was statically dead, so delete it and anything
that depended on it. Additionally, delete anything reported by the
unused tool in the tsdb package.
The generate commands have been modified to take advantage of the new
functionality in Go 1.11 that allows `go run` to execute a package
instead of individual files.
This functionality combined with Go modules allows us to execute a
package directly out of our pinned dependencies rather than accidentally
picking up another binary outside of the build environment.
This also simplifies the Makefiles because they no longer have to be
responsible for installing the correct tooling, since the Go command
takes care of that logic. It also makes it so that the Makefiles with
file generation can now be invoked from their appropriate subdirectories,
so they are contained within the directory itself rather than relying on
values in the top-level Makefile.
It is now possible to generate all files within this project by using:
go generate ./...
Or the Makefile can continue to be used.
This commit also copies over the special copy of `tmpl` that the storage
engine uses within the influxdb repository. It was never copied over so
using `go generate` on these packages did not work.
We are moving the necessary code for 2.0 from the influxdb 1.X
repository to the platform 2.0 repository. The logger is an unnecessary
dependency on the old influxdb that is making life more complicated.
This pulls in the code that allows doing reads with flux into the
platform repo, and removes extra.go.
The reusable portion is under storage/reads, while the concrete
implementation for one of the platform's engines is in
storage/readservice.
In order to make this more reusable, the cursors had to move into
their own package, decoupling them from all of the other code in the
tsdb package. tsdb/cursors is this new package, and type/function
aliases have been added to the tsdb package to point at it.
The models package already is very light on transitive dependencies
and so it was allowed to be depended on in a concrete way in the
cursors package.
Finally, the protobuf definitions for issuing GRPC reads have been
moved into their own package for two reasons:
1. It's a clean separation, and helps keep it that way.
2. Many/most consumers will not be using GRPC. We just
use the datatypes to express the API, which makes building
a GRPC server easier.
It is left up to future refactorings (specifically ones that involve
GRPC) to determine if these types should remain, or if there is a
cleaner way.
There's still some dependencies on both github.com/influxdata/influxql
and github.com/influxdata/influxdb/logger that we can hopefully remove
in future refactorings.
It may become a uint64 in the future, for example. This does mean
that we have to call Decode on some data that we just Encoded, but
we can fix that later.