Add FutureWriteLimit and PastWriteLimit to retention
policies. Points with timestamps later than
now() + FutureWriteLimit
or earlier than
now() - PastWriteLimit
will be rejected on write with a PartialWriteError.
closes https://github.com/influxdata/influxdb/issues/25424
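A minimal sketch of the intended timestamp check, in Go; the names below (checkWriteLimits, ErrPartialWrite) are illustrative and not the actual influxdb types:
```go
package retention

import (
	"errors"
	"time"
)

// ErrPartialWrite stands in for the PartialWriteError mentioned above.
var ErrPartialWrite = errors.New("partial write: point timestamp outside of write limits")

// checkWriteLimits rejects a point whose timestamp falls outside
// [now - pastWriteLimit, now + futureWriteLimit]. A zero limit means the
// corresponding bound is not enforced.
func checkWriteLimits(ts, now time.Time, futureWriteLimit, pastWriteLimit time.Duration) error {
	if futureWriteLimit > 0 && ts.After(now.Add(futureWriteLimit)) {
		return ErrPartialWrite
	}
	if pastWriteLimit > 0 && ts.Before(now.Add(-pastWriteLimit)) {
		return ErrPartialWrite
	}
	return nil
}
```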
* feat: upgrade flux to 0.171.0
Tests failing, safety commit
First step in https://github.com/influxdata/influxdb/issues/23815
* fix: remove "org" parameter" from writeOptSource
I attempted to implement the "orgOpt" argument in a similar fashion
to f6669f7512. However, it looks like Flux doesn't accept "org" as
a parameter to "load". It responds with:
Error calling function \"load\" @113:16-113:30: error calling function \"to\" @6:19-6:47: unused arguments [org]
This brings us from 194 passing to 570 passing.
* fix: temporarily disable broken flux tests
These tests expect rows to be stored in a certain order. However,
nothing is specifying the sort order. This has been fixed in a
later update to flux (see 3d6f47ded).
Temporarily disable these tests until we include a fixed
version of the flux tests.
* chore: add tests from a492993012
This fixes "test-flux.sh" so it runs tests within the "flux/"
directory. This uncovered some other issues with the tests
located within "flux/". These also needed to be updated
to match the newer flux API.
* feat: upgrade flux to 0.172.0
This includes changes made in "cbbf4b27da". Since "test.go" in 2.x
diverged from 1.x, some modifications were required to make this
compatible.
* feat: upgrade flux to 0.173.0
* feat: upgrade flux to v0.174.0
* fix: Update the condition when resetting the cursor (#23522)
Filters that contain `or` may change between cursor resets so we must remember to update the condition in the read cursor.
```flux
|> filter(fn: (r) => ((r["_field"] == "field1" and r["_value"]==true) or (r["_field"] == "field2" and r["_value"] == false)))
```
Closes https://github.com/influxdata/flux/issues/4804
* feat: upgrade flux to 0.174.1
* feat: upgrade flux to 0.175.0
* chore: remove end-to-end tests
These were removed in a492993 for 2.x. These tests prevent "go test ./..."
from completing. As stated in the original commit, these tests should now be
handled by the "fluxtest" harness.
* feat: upgrade flux to 0.176.0
Some tests needed to be disabled within the flux harness. This is a
result of enabling "Optimize Aggregate Window" in flux@05a1065f.
These tests are not present in 2.x. Therefore, I am unsure if
the breakage is resolved in a later commit.
* feat: upgrade flux to 0.177.0
* feat: upgrade flux to 0.178.0
* feat: upgrade flux to v0.179.0
This removes all invocations of "flux.RegisterOpSpec". According
to flux@e39096d5, "flux.RegisterOpSpec" does nothing in the
current version of flux and was removed.
* chore: update fluxtest skip list (#23633)
* chore: manually backport 785a465e9a
This removes the reference to "flux.Spec".
* build(flux): update flux to v0.181.0 (#23682)
* build(flux): update flux to v0.184.2
* chore: skip more Flux acceptance tests
There are issues for each skip detailed in test-flux.sh.
* feat: upgrade flux to v0.185.0
This adds "FluxTesting" to the "HTTPD" configuration. This option is
hidden and disabled by default. When "FluxTesting" is set, it
enables the default testing flags for "Flux".
These flags allow the vectorized float tests, as well as tests requiring
the "removeRedundantSortNodes" and "labelPolymorphism" flags, to run.
These changes are based off of d8553c002e.
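A rough sketch, assuming the hidden option simply gates a set of Flux feature flags; the field and flag names below are illustrative guesses based on the description above, not the actual 1.x configuration code:
```go
package httpd

// Config holds the HTTPD options; FluxTesting mirrors the hidden,
// disabled-by-default option described above (illustrative only).
type Config struct {
	FluxTesting bool
}

// fluxTestingFlags returns the Flux feature flags to enable when the hidden
// option is set. The flag names are taken from the commit message; how they
// are wired into Flux is assumed here.
func fluxTestingFlags(c Config) map[string]bool {
	if !c.FluxTesting {
		return nil // default: no testing flags
	}
	return map[string]bool{
		"removeRedundantSortNodes": true,
		"labelPolymorphism":        true,
		"vectorizedFloat":          true, // assumed name for the vectorized float tests
	}
}
```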
flux@3d6f47ded is included within this version of Flux. Therefore
we can now include the "group_*" tests.
* feat: upgrade flux to 0.186.0
* feat: upgrade flux to 0.187.0
* feat: upgrade flux to 0.188.0
* fix: re-run ./generate.sh with updated protoc
* fix: restrict cores to match CircleCI documentation
Co-authored-by: davidby-influx <dbyrne@influxdata.com>
Co-authored-by: Markus Westerlind <marwes91@gmail.com>
Co-authored-by: Sean Brickley <sean@wabr.io>
Co-authored-by: Jonathan A. Sternberg <jonathan@influxdata.com>
Co-authored-by: Christopher M. Wolff <chris.wolff@influxdata.com>
Ensure that the Sources field of the ShowTagValuesStatement is
filled in. Then use the sources to limit the retention policies,
and thus the shards from which tag values are collected.
This fix only works on TSI databases; INMEM shards share
indices, so restricting the shards consulted does not restrict the
tag values returned.
This will not permit multiple retention policies to be specified in
a query; either a single RP or all RPs are permitted.
Closes https://github.com/influxdata/influxdb/issues/21981
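A simplified sketch of the approach, filtering shard groups down to the single retention policy named by the statement's sources; the types and helper below are stand-ins, not the actual meta/query code:
```go
package meta

import "fmt"

// Measurement is a minimal stand-in for an InfluxQL measurement source.
type Measurement struct {
	Database        string
	RetentionPolicy string
}

// ShardGroup is a minimal stand-in for a shard group owned by an RP.
type ShardGroup struct {
	RetentionPolicy string
	ID              uint64
}

// shardGroupsForSources keeps only the shard groups belonging to the single
// retention policy named by the sources; an empty RP means "all RPs".
func shardGroupsForSources(groups []ShardGroup, sources []Measurement) ([]ShardGroup, error) {
	rp := ""
	for _, src := range sources {
		if src.RetentionPolicy == "" {
			continue
		}
		if rp != "" && rp != src.RetentionPolicy {
			return nil, fmt.Errorf("only one retention policy may be specified")
		}
		rp = src.RetentionPolicy
	}
	if rp == "" {
		return groups, nil // no RP named: collect tag values from all RPs
	}
	var out []ShardGroup
	for _, g := range groups {
		if g.RetentionPolicy == rp {
			out = append(out, g)
		}
	}
	return out, nil
}
```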
* fix: Revert performance improvement for sorted merge iterator
This reverts commit af8e66cd25.
* test: add end to end regression test for broken group-by
* chore: update changelog
* chore: pull in controller from 2.x
* chore: fix up 2.x controller to work with 1.x
* feat: Default query limits in flux code
Partial fix of https://github.com/influxdata/influxdb/issues/17212
* chore: update changelog
* chore: refactor to remove panic and reformat code
* test: add script to run flux tests
* feat(flux): enable test capabilities in Flux controller
* feat(flux): add MergeFiltersRule
* build: bump existing Dockerfiles to go 1.15
* build: add flux tests to CI
* refactor: allow for overriding tcp.Mux logger
* build: upgrade to Flux v0.111.0
* chore: Update flux to 0.67
* chore: Builds against 0.68 flux
* chore: Builds against 0.80.0
* chore: Builds against 0.90.0
* chore: Everything builds on latest flux
* chore: goimports fixed
* chore: fix tests locally
* chore: fix CI dockerfiles
* chore: clean up some unused code
* chore: remove flux repl and Spec in flux query json
* chore: port flux end to end tests from 2.x
* chore: fix up goimports
* chore: remove 32 bit build support
* fix: Change from RewriteExpr to PartitionExpr
Also remove some dead code
* feat: WITH KEY implementation
* feat: query rewriting for WITH KEY in SHOW TAG KEYS
This commit quiets staticcheck's warnings about "unnecessary use of
fmt.Sprintf" and "unnecessary use of fmt.Sprint".
Prior to this commit we were wrapping constant strings that contain
no formatting verbs in fmt.Sprintf().
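An illustrative (not verbatim) example of the kind of change involved:
```go
package main

import "fmt"

func main() {
	// Before: a constant string with no formatting verbs, wrapped in
	// fmt.Sprintf — staticcheck flags this as an unnecessary use of fmt.Sprintf.
	before := fmt.Sprintf("query must be a SELECT statement")

	// After: the constant string is used directly.
	after := "query must be a SELECT statement"

	fmt.Println(before == after) // true; the behaviour is identical
}
```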
If a series was split by the encoder because of chunking and it was
reconstructed by the http handler, it would not reset the partial
indicator for the series to indicate if the series was still partial or
not. That meant that a result returning more than 10,000 values
in a single series with chunking disabled would report that the series
was partial when it was not.
This fixes it so the handler now correctly sets the partial attribute of
the series to indicate if the series is still partial or not. This was
done when merging results, but was not done with series.
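A sketch of the shape of the fix, assuming a simplified row type with a Partial flag like the one the handler merges; names are simplified for illustration:
```go
package httpd

// Row is a simplified stand-in for a chunked series in a query result.
type Row struct {
	Name    string
	Values  [][]interface{}
	Partial bool
}

// mergeRow appends the values from the next chunk onto an existing series
// and, crucially, takes the Partial flag from the newest chunk, so the final
// series is only marked partial if its last chunk was.
func mergeRow(dst, next *Row) {
	dst.Values = append(dst.Values, next.Values...)
	dst.Partial = next.Partial // previously the stale value was kept
}
```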
This commit allows users to filter on the `value` field in the
`SHOW TAG VALUES` command:
SHOW TAG VALUES WITH KEY = "mytag" WHERE "value" = 'myvalue'
Previously this command would return all values.
This change simplifies the math engine so it doesn't use a complicated
set of nested iterators. That way, the math has to be changed in one
fewer place.
It also greatly simplifies the query engine as now we can create the
necessary iterators, join them by time, name, and tags, and then use the
cursor interface to read them and use eval to compute the result. It
makes it so the auxiliary iterators and all of their complexity can be
removed.
This also makes use of the new eval functionality that was recently
added to the influxql package.
No math functions have been added, but the scaffolding has been included
so things like trigonometry functions are just a single commit away.
This also introduces a small breaking change. Because of the call
optimization, a query that uses the same selector multiple times is now
still treated as a selector. So if you do this:
SELECT max(value) * 2, max(value) / 2 FROM cpu
This will now return the timestamp of the max value rather than zero
since this query is considered to have only a single selector rather
than multiple separate selectors. If any aspect of the selector is
different, such as different selector functions or different arguments,
it will consider the selectors to be aggregates like the old behavior.
This commit adds a randomised deletion test, which aims to test the
correctness of deletions and drops in the index and series file.
The test works by maintaining its own index of points, series and
measurements, tracking the presence or absence of series.
There are four commands that can be randomly executed:
- insert a series at a random time
- drop an entire measurement
- drop an entire series
- delete series across a time range
After every invocation of one of these commands, `SHOW SERIES` is
executed on the server, and the results are compared to what the
test's series tracker believes should be present. If the results
differ then the test fails.
On failure, the test reports the last 100 queries executed, the series
it expected, and the series returned by `SHOW SERIES`.
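A condensed sketch of the test loop described above; the tracker and server interfaces are stand-ins, not the actual test code:
```go
package deletiontest

import "math/rand"

// command enumerates the four randomised operations described above.
type command int

const (
	insertSeries command = iota
	dropMeasurement
	dropSeries
	deleteSeriesRange
)

// tracker is the test's own record of which series should exist.
type tracker interface {
	apply(cmd command)
	expectedSeries() []string
}

// server abstracts the InfluxDB instance under test.
type server interface {
	apply(cmd command)
	showSeries() []string
}

// runRandomisedDeletions applies n random commands, checking SHOW SERIES
// against the tracker after each one and returning false on a mismatch.
func runRandomisedDeletions(rng *rand.Rand, t tracker, s server, n int) bool {
	for i := 0; i < n; i++ {
		cmd := command(rng.Intn(4))
		t.apply(cmd)
		s.apply(cmd)
		if !equal(t.expectedSeries(), s.showSeries()) {
			return false
		}
	}
	return true
}

func equal(a, b []string) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}
```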
* Live Restore + Enterprise data format compatibility
* Extended ImportData to import all DBs if no db name is given
* Added a new enterprise data test, and the backup command now prints the backup file paths at conclusion
* Added whole-system backup test
* Update to use protobuf in all enterprise data cases
* Update test to do cross-testing with enterprise version
* incremental enterprise backup format support