If a drop statement failed to remove state on disk, the meta store
would still be updated, so the delete could not be retried and
orphaned data was left behind on disk.
This reverses the logic so the data must be removed before the meta
store is updated.
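A minimal sketch of the reordered flow, assuming hypothetical `dataStore` and `metaStore` interfaces standing in for the real components:

```go
package tsdb

// Hypothetical stand-ins for the on-disk store and the meta store.
type dataStore interface {
	DeleteDatabase(name string) error
}

type metaStore interface {
	DropDatabase(name string) error
}

// dropDatabase removes on-disk state before touching the meta store, so a
// failed delete can simply be retried instead of leaving orphaned data.
func dropDatabase(data dataStore, meta metaStore, name string) error {
	if err := data.DeleteDatabase(name); err != nil {
		return err // meta store untouched; the drop statement can be re-run
	}
	return meta.DropDatabase(name)
}
```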
The TSDBStore interface needs to also allow for a remote TSDBStore, but the
DatabaseIndex is only available for a local TSDB instance. Moved the optimized
SHOW TAG VALUES path to perform a type assertion to the LocalTSDBStore struct
instead of always attempting to use the optimized version.
If the TSDBStore is not local and does not have the DatabaseIndex, it
will default to using the distributed query instead.
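A rough sketch of that dispatch, with illustrative method and field names (the real interfaces are larger):

```go
package coordinator

// TSDBStore is the storage interface, which may be backed by a remote store.
type TSDBStore interface {
	Databases() []string
}

// LocalTSDBStore is a concrete local store that also carries the in-memory
// index needed by the optimized SHOW TAG VALUES path.
type LocalTSDBStore struct {
	tagValues map[string][]string // hypothetical stand-in for the DatabaseIndex
}

func (s *LocalTSDBStore) Databases() []string { return nil }

// showTagValues takes the optimized path only when the store is local;
// otherwise it falls back to the distributed query.
func showTagValues(store TSDBStore, db string) []string {
	if local, ok := store.(*LocalTSDBStore); ok {
		return local.tagValues[db] // fast in-memory lookup
	}
	return distributedTagValues(store, db)
}

// distributedTagValues is a placeholder for the distributed query path.
func distributedTagValues(store TSDBStore, db string) []string {
	return nil
}
```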
This commit optimizes `SHOW TAG VALUES` so that it avoids the
`SELECT` query engine execution and iterator creation. There
are also optimizations to reduce individual memory allocations
and to reduce in-memory heap size by only operating on one
measurement at a time.
Execution time has been reduced to approximately 900ms for
500,000 rows. This is about 2µs per row. Of this time,
approximately 1µs is spent retrieving and sorting the row
and 1µs is spent encoding into JSON and writing to the
response body.
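A hedged sketch of the measurement-at-a-time shape of the code; the `valuesFor` callback is a hypothetical stand-in for the index lookup:

```go
package tsdb

import (
	"encoding/json"
	"io"
	"sort"
)

// writeTagValues retrieves, sorts, encodes, and writes the tag values for one
// measurement before moving on to the next, so only a single measurement's
// values are held on the heap at a time and the SELECT engine is bypassed.
func writeTagValues(w io.Writer, measurements []string, valuesFor func(string) []string) error {
	enc := json.NewEncoder(w)
	for _, name := range measurements {
		values := valuesFor(name)
		sort.Strings(values)
		if err := enc.Encode(map[string]interface{}{
			"measurement": name,
			"values":      values,
		}); err != nil {
			return err
		}
	}
	return nil
}
```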
For restoring a shard, we need to be able to have the shard open,
but disabled. It was racy to open it and then disable it separately
since writes/queries could occur in between that time.
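A small sketch of the idea, with hypothetical names; the key point is that the shard is never visible in an enabled state during a restore:

```go
package tsdb

// ShardOptions carries the open-time configuration (names are illustrative).
type ShardOptions struct {
	Disabled bool // open the shard with writes and queries rejected
}

// Shard is a minimal stand-in for the real shard type.
type Shard struct {
	enabled bool
}

// OpenShard sets the enabled flag before the shard is returned, so there is
// no window between Open() and Disable() where writes or queries can land.
func OpenShard(opt ShardOptions) *Shard {
	return &Shard{enabled: !opt.Disabled}
}
```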
This allows us to add additional options to ExecuteQuery without
creating parameter bloat.
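The options-struct pattern in question, sketched with illustrative field names rather than the exact set in this change:

```go
package influxql

// ExecutionOptions groups the optional inputs to ExecuteQuery so new options
// can be added later without growing the parameter list.
type ExecutionOptions struct {
	Database  string
	ChunkSize int
	ReadOnly  bool
}

// ExecuteQuery accepts the options as a single struct.
func ExecuteQuery(query string, opt ExecutionOptions) error {
	// ... execute the query using opt.Database, opt.ChunkSize, opt.ReadOnly ...
	return nil
}
```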
Removing the unused Series structs. Their necessity was removed by a
previous commit, but the structs were not removed yet.
Add another type of interrupt iterator that monitors the interrupt
channel and calls `Close()` on the iterator when the interrupt happens.
It will primarily be used for asynchronously closing the ReaderIterator,
but it will only close the read side of the connection properly. More
work needs to be done to allow closing the write side efficiently.
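A minimal sketch of such an iterator, assuming a pared-down `Iterator` interface; the real type would also guard against double-close:

```go
package influxql

// Iterator is a minimal stand-in for the engine's iterator interface.
type Iterator interface {
	Close() error
}

// closeInterruptIterator closes its input as soon as the interrupt channel
// fires, which is how a blocked ReaderIterator can be unblocked from another
// goroutine.
type closeInterruptIterator struct {
	input Iterator
	done  chan struct{}
}

func newCloseInterruptIterator(input Iterator, interrupt <-chan struct{}) Iterator {
	itr := &closeInterruptIterator{input: input, done: make(chan struct{})}
	go func() {
		select {
		case <-interrupt:
			itr.input.Close() // close the read side to unblock any pending read
		case <-itr.done:
		}
	}()
	return itr
}

func (itr *closeInterruptIterator) Close() error {
	close(itr.done) // stop the monitoring goroutine
	return itr.input.Close()
}
```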
If all points in a series are timestamped in the future, the SHOW
queries will not return anything from these series.
This commit changes the end time used when querying shards for the SHOW
queries to the maximum time, in order to ensure future points are
considered in the results for these queries.
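Illustrated below as a sketch; the constant name and exact value are assumptions, not the codebase's identifiers:

```go
package influxql

import (
	"math"
	"time"
)

// MaxTime mirrors the idea of "the maximum time" used as the upper bound for
// the SHOW meta queries.
var MaxTime = time.Unix(0, math.MaxInt64)

// showQueryTimeRange returns the range used when querying shards for SHOW
// statements: the upper bound is MaxTime rather than time.Now(), so series
// whose points are all in the future are still included.
func showQueryTimeRange(min time.Time) (time.Time, time.Time) {
	return min, MaxTime
}
```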
Fixes #6599.
Casting syntax is done with the PostgreSQL syntax `field1::float` to
specify which type should be used when selecting a field. You can also
do `field1::field` or `tag1::tag` to specify that a field or tag should
be selected.
This makes it possible to select a tag when a field key and a tag key
conflict with each other in a measurement. It also means it's possible
to choose a field with a specific type if multiple shards disagree. If
no types are given, the same ordering for how a type is chosen is used
to determine which type to return.
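For example, assuming a measurement `cpu` with a field `value` and a key `host` that exists as both a field and a tag:

```sql
-- Read the field as a float even if shards disagree on its type.
SELECT value::float FROM cpu

-- Pick the tag (not the field) when "host" exists as both.
SELECT host::tag, value::field FROM cpu
```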
The FieldDimensions method has been updated to return the data type for
the fields that get returned. The SeriesKeys function has also been
removed since it is no longer needed. SeriesKeys was originally used for
the fill iterator, but then expanded to be used by auxiliary iterators
for determining the channel iterator types. The fill iterator doesn't
need it anymore and the auxiliary types are better served by
FieldDimensions implementing that functionality, so SeriesKeys is no
longer needed.
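A sketch of the updated contract; the exact signature and type names in the codebase may differ:

```go
package influxql

// DataType is a stand-in for the engine's field data type enum.
type DataType int

const (
	Float DataType = iota
	Integer
	String
	Boolean
)

// FieldMapper sketches the updated FieldDimensions contract: fields now come
// back with their data types, which is what the auxiliary iterators used to
// derive from SeriesKeys.
type FieldMapper interface {
	FieldDimensions(sources []string) (fields map[string]DataType, dimensions map[string]struct{}, err error)
}
```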
Fixes #6519.
Background of the bug: Prior to this patch we actually tried writing
points that were older than the retention period of the shard. This
caused a race condition when it came to writing points to a shard that's
being dropped, which will happen frequently if the user is loading old
data (by accident). This is demonstrated in the test in this commit. This
bug was previously addressed in #985. It turns out the fix for #985 wasn't
enough. A user reported in #1078 that some shards are left behind and
not deleted.
It turns out that while the shard is being dropped more write
requests could come in and end up on line `cluster/shard.go:195` which
will cause the datastore to create a shard on disk that isn't tracked
anywhere in the metadata. This shard will live forever and never get
deleted. This fix addresses the issue by not writing old points in. There
are still some edge cases in the current implementation, but they are at
least not as bad as current master.
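A hypothetical sketch of the idea: old points are filtered out before they reach the shard layer, so a late write can no longer recreate a shard that is concurrently being dropped.

```go
package cluster

import "time"

// Point is a minimal stand-in for a point being written.
type Point struct {
	Time time.Time
}

// filterOldPoints drops points that fall outside the retention period.
func filterOldPoints(points []Point, retention time.Duration, now time.Time) []Point {
	cutoff := now.Add(-retention)
	kept := points[:0]
	for _, p := range points {
		if p.Time.After(cutoff) {
			kept = append(kept, p)
		}
	}
	return kept
}
```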
Closes #1078.
Fixes #853. Closes #854. Previously, there was an unprotected endpoint in
raft to return the cluster config that would include user hashes. This
endpoint is useful for debugging purposes so I restructured it and moved
it to the API. It ensures the requesting user is a cluster admin.
Cluster config will now return all of the cluster state including
servers, CQs, shards, etc.
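A sketch of the relocated handler shape; the type and method names are illustrative, not the exact API code:

```go
package api

import "net/http"

// User is a minimal stand-in for the authenticated requester.
type User interface {
	IsClusterAdmin() bool
}

// HttpServer stands in for the API server type.
type HttpServer struct{}

// clusterConfigHandler rejects any requester who is not a cluster admin
// before returning the full cluster state.
func (s *HttpServer) clusterConfigHandler(w http.ResponseWriter, r *http.Request, u User) {
	if !u.IsClusterAdmin() {
		http.Error(w, "forbidden", http.StatusForbidden)
		return
	}
	// ... write servers, continuous queries, shards, etc. to w ...
}
```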
Fixes #867. Closes #927. Updated the lexer and parser, and added code to the
coordinator to include shard spaces if requested. Now the user can request the
shard spaces with `list series include spaces`.
This will help users recover from #886. It's dangerous functionality because it only changes the metadata. Will document and tell people to use with caution.