Fixes #10052
This commit fixes an issue where field keys would reappear in results
when querying previously dropped measurements.
The issue manifests itself when duplicates of a new series are inserted
into the `inmem` index. In this case, a map that tracks the number of
series belonging to a measurement was incorrectly incremented once for
each duplication of the series. Then, when it came time to drop the
measurement, the index assumed there were several series belonging to
the measurement left in the index (because the counter was higher than
it should be). As a result, the `fields.idx` file (which stores the
mapping between measurements and field keys) was not truncated and
rebuilt, leaving stale field keys in that file that were then returned
by subsequent queries over all field keys.
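The shape of the fix can be sketched as follows; this is a minimal illustration of the counter guard, not the actual `inmem` index code, and the type and function names are hypothetical:

```go
// Minimal sketch of the idea behind the fix, not the actual inmem index code:
// the per-measurement series counter is only bumped when the series key is new.
package main

import "fmt"

type index struct {
	series            map[string]struct{} // series key -> exists
	measurementSeries map[string]int      // measurement name -> series count
}

func newIndex() *index {
	return &index{
		series:            make(map[string]struct{}),
		measurementSeries: make(map[string]int),
	}
}

func (idx *index) createSeriesIfNotExists(measurement, seriesKey string) {
	if _, ok := idx.series[seriesKey]; ok {
		// Duplicate insert of an existing series: do NOT increment the
		// counter again, otherwise dropping the measurement later believes
		// series remain and skips truncating and rebuilding fields.idx.
		return
	}
	idx.series[seriesKey] = struct{}{}
	idx.measurementSeries[measurement]++
}

func main() {
	idx := newIndex()
	idx.createSeriesIfNotExists("cpu", "cpu,host=a")
	idx.createSeriesIfNotExists("cpu", "cpu,host=a") // duplicate of the same series
	fmt.Println(idx.measurementSeries["cpu"])        // prints 1, not 2
}
```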
Flux in InfluxDB has been upgraded to v0.33.2. Many of the storage
engine interfaces changed in that release, so code had to change to
accommodate the new interfaces and remove the old ones.
Included in this commit is a patch file for the changes that were made.
A patch was generated for the following packages:
* `flux/stdlib/influxdata/influxdb`
* `storage/reads`
* `tsdb/cursors`
These are the three packages shared with version 2 of the database; the
first of them contains the specific implementations used for version 1.
It is very possible that the patch will not apply cleanly to the next
upgrade, just as it would not have applied cleanly to this one. The
patch is mostly meant to document exactly what changed during the
copy-over, to help ensure nothing is forgotten when adapting the
interfaces.
Add a patch file to hopefully make this easier in the future
StringArrayEncodeAll will panic if the total length of strings
contained in the src slice is > 0xffffffff. This change adds a unit
test to replicate the issue and an associated fix to return an error.
This also raises an issue: compactions will be unable to make progress
under the following conditions:
* multiple string blocks are to be merged to a single block and
* the total length of all strings exceeds the maximum block size that
snappy will encode (0xffffffff)
The observable effect of this is errors in the logs indicating a
compaction failure.
Fixes #13687
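A simplified sketch of the guard, using an illustrative encoder rather than the real tsm1 `StringArrayEncodeAll`: the total length of the input strings is checked up front and an error is returned instead of panicking.

```go
// Illustrative sketch of the guard; names here are hypothetical, not the
// actual tsm1 string encoder.
package main

import (
	"errors"
	"fmt"
)

// maxStringBlockSize mirrors the largest input snappy will encode (0xffffffff).
const maxStringBlockSize = 0xffffffff

var errStringBlockTooLarge = errors.New("string block too large to encode")

// encodeStringBlock returns an error instead of panicking when the combined
// length of the strings in src exceeds what snappy can handle.
func encodeStringBlock(src []string) ([]byte, error) {
	var total uint64
	for _, s := range src {
		total += uint64(len(s))
	}
	if total > maxStringBlockSize {
		return nil, errStringBlockTooLarge
	}
	// ... the actual snappy encoding would happen here ...
	return []byte{}, nil
}

func main() {
	if _, err := encodeStringBlock([]string{"a", "b", "c"}); err != nil {
		fmt.Println("encode failed:", err)
	}
}
```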
There is no need to print such messages in the CLI output of the version
flag. Besides being unnecessary, it makes it harder to automate tests
for those installing InfluxDB CE themselves and/or automating
Dockerfiles.
Fix influxdata/influxdb/issues/10451
The series index consults a set of tombstones when querying the id for
a given key, but it does not consult them when asking for the offset for
a given id, even if that id is deleted.
Update the verify tooling to check that the index agrees with the
deleted status of the id, but skip the extra checks when the id is
deleted.
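A hedged sketch of what the check looks like conceptually; the function and parameter names are assumptions for illustration, not the real verify tooling:

```go
// Conceptual sketch only: the index's view of an id must agree with the
// tombstone set, and the extra offset checks are skipped for deleted ids.
package main

import "fmt"

func verifySeriesID(id uint64, tombstones map[uint64]struct{}, indexSaysDeleted bool, offsetOf func(uint64) int64) error {
	_, deleted := tombstones[id]
	if deleted != indexSaysDeleted {
		return fmt.Errorf("index disagrees with tombstones for id %d", id)
	}
	if deleted {
		// Deleted id: skip the extra offset checks.
		return nil
	}
	if offsetOf(id) <= 0 {
		return fmt.Errorf("invalid offset for id %d", id)
	}
	return nil
}

func main() {
	tombstones := map[uint64]struct{}{42: {}}
	offsets := func(id uint64) int64 { return 128 }
	fmt.Println(verifySeriesID(42, tombstones, true, offsets)) // <nil>: deleted, extra checks skipped
	fmt.Println(verifySeriesID(7, tombstones, false, offsets)) // <nil>: live id with valid offset
}
```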
The reduce iterators would read in the points for a window, which
matched the grouping of the outermost query, and then sort them by time
before emitting the points.
When there were multiple series, this would sometimes cause a conflict
because it changed the sorting of the inner query output when selectors
were used within a subquery. These emitted points would then be output
in the wrong order and would not join correctly when multiple cursors
were used.
This change makes the sorting happen per series grouping rather than
across all of the points together, so the points retain their tag order,
which is the correct sort order.
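The difference can be illustrated with a small sketch, assuming points arrive grouped by series; this is not the engine's iterator code, just the sorting idea:

```go
// Illustrative sketch: sort by time within each series grouping, keeping the
// order of the groups themselves (and therefore the tag order) intact.
package main

import (
	"fmt"
	"sort"
)

type point struct {
	series string
	time   int64
	value  float64
}

// sortWindow sorts points by time within each contiguous series grouping
// instead of sorting the entire window at once.
func sortWindow(points []point) {
	start := 0
	for i := 1; i <= len(points); i++ {
		if i == len(points) || points[i].series != points[start].series {
			group := points[start:i]
			sort.Slice(group, func(a, b int) bool { return group[a].time < group[b].time })
			start = i
		}
	}
}

func main() {
	pts := []point{
		{"cpu,host=b", 20, 1}, {"cpu,host=b", 10, 2},
		{"cpu,host=a", 20, 3}, {"cpu,host=a", 10, 4},
	}
	sortWindow(pts)
	fmt.Println(pts) // host=b points first (sorted by time), then host=a
}
```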
When a Flux predicate is transformed into a Store.Read / GroupRead request,
the `_measurement` and `_field` keys are remapped to match 2.x internal
tag keys.
This change does not modify the 2.x behavior, but rather updates the
1.x mapping, so merging future updates from the 2.x storage/reads
package should have fewer conflicts.
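As a rough sketch, assuming the 2.x convention of storing measurement and field under reserved internal tag keys (the constant values and function name below are illustrative stand-ins, not copied from `storage/reads`):

```go
// Hedged sketch of the key remapping applied while building the read request;
// the internal key values here are assumptions for illustration.
package main

import "fmt"

const (
	measurementKey = "_measurement"
	fieldKey       = "_field"

	// Illustrative stand-ins for the engine's reserved internal tag keys.
	measurementTagKey = "\x00"
	fieldTagKey       = "\xff"
)

// remapPredicateKey rewrites a user-facing key from a Flux predicate to the
// internal tag key expected by the storage engine.
func remapPredicateKey(key string) string {
	switch key {
	case measurementKey:
		return measurementTagKey
	case fieldKey:
		return fieldTagKey
	default:
		return key
	}
}

func main() {
	fmt.Printf("%q\n", remapPredicateKey("_measurement")) // "\x00"
	fmt.Printf("%q\n", remapPredicateKey("host"))         // "host"
}
```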