When querying data before 1970-01-01 (UNIX time 0),
validateArgs would set start to -int64 max and end to int64 max.
closes https://github.com/influxdata/influxdb/issues/24669
Co-authored-by: Paul Hegenberg <paul.hegenberg@gmail.com>
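As a minimal Go sketch of the corrected behavior (the function name and wiring are illustrative, not the actual validateArgs code): a pre-epoch start time should survive as a negative nanosecond timestamp instead of collapsing the query to the full int64 range.

```go
package main

import (
	"fmt"
	"math"
	"time"
)

// clampTimeRange is a hypothetical stand-in for the fixed logic:
// only a genuinely unset bound falls back to the int64 extremes;
// a pre-1970 start keeps its (negative) nanosecond timestamp.
func clampTimeRange(start, end time.Time) (int64, int64) {
	startNano, endNano := int64(math.MinInt64), int64(math.MaxInt64)
	if !start.IsZero() {
		startNano = start.UnixNano()
	}
	if !end.IsZero() {
		endNano = end.UnixNano()
	}
	return startNano, endNano
}

func main() {
	start := time.Date(1960, 1, 1, 0, 0, 0, 0, time.UTC) // before UNIX time 0
	end := time.Date(1971, 1, 1, 0, 0, 0, 0, time.UTC)
	s, e := clampTimeRange(start, end)
	fmt.Println(s, e) // negative start, finite end; not the full int64 range
}
```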
* fix: prevent retention service from hanging (#25055)
Fix issue that can cause the retention service to hang waiting on a
`Shard.Close` call. When this occurs, no other shards will be deleted
by the retention service. This is usually noticed as an increase in
disk usage because old shards are not cleaned up.
The fix adds two new methods to `Store`, `SetShardNewReadersBlocked`
and `InUse`. `InUse` can be used to poll if a shard has active readers,
which the retention service uses to skip over in-use shards to prevent
the service from hanging. `SetShardNewReadersBlocked` controls whether
new read access may be granted to a shard. This is required to prevent
race conditions around the use of `InUse` and the deletion of shards.
If the retention service skips over a shard because it is in-use, the
shard will be checked again the next time the retention service is run.
It can be deleted on a subsequent check once it is no longer in-use.
If a shard is stuck in-use, the retention service will not be able to
delete it, which can be observed in the logs for manual
intervention. Other shards can still be deleted by the retention service
even if a shard is stuck with readers.
This is a port of ad68ec8 from master-1.x to main-2.x.
closes: #25076
(cherry picked from commit b4bd607eef)
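As a hedged Go sketch of how a retention pass can use the two new `Store` methods (the method names come from the description above; the surrounding control flow and exact signatures are assumptions):

```go
package retention

import (
	"github.com/influxdata/influxdb/v2/tsdb"
	"go.uber.org/zap"
)

// tryDeleteShard is illustrative, not the retention service's code.
// Assumed signatures: SetShardNewReadersBlocked(id uint64, blocked bool) error
// and InUse(id uint64) (bool, error).
func tryDeleteShard(store *tsdb.Store, log *zap.Logger, id uint64) error {
	// Block new readers first so the InUse check cannot race with deletion.
	if err := store.SetShardNewReadersBlocked(id, true); err != nil {
		return err
	}
	inUse, err := store.InUse(id)
	if err != nil {
		_ = store.SetShardNewReadersBlocked(id, false)
		return err
	}
	if inUse {
		// Skip for now; this shard is re-checked on the next retention run.
		log.Info("shard in use, skipping deletion", zap.Uint64("shard_id", id))
		return store.SetShardNewReadersBlocked(id, false)
	}
	return store.DeleteShard(id)
}
```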
This adds locking to the load method and renames it to Reload(). This
method replaces the cached data from the underlying kv.Store and needs a
write lock. The restore API uses it, and concurrent writes into the
cached data during a restore may previously have caused issues.
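A minimal sketch of that locking pattern, with hypothetical type and field names:

```go
package sketch

import (
	"context"
	"sync"
)

// cache is a hypothetical stand-in for the service that mirrors the
// underlying kv.Store in memory.
type cache struct {
	mu     sync.RWMutex
	data   map[string][]byte
	loadFn func(ctx context.Context) (map[string][]byte, error)
}

// Reload replaces the cached data wholesale, so it holds the write
// lock for the duration of the re-read and swap.
func (c *cache) Reload(ctx context.Context) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	fresh, err := c.loadFn(ctx)
	if err != nil {
		return err
	}
	c.data = fresh
	return nil
}

// Get only needs the read lock, so concurrent readers are unaffected.
func (c *cache) Get(key string) ([]byte, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.data[key]
	return v, ok
}
```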
* fixes #24895
Under certain circumstances, the retention service can fail to delete shards from
the store in a timely manner. When the shard groups are pruned based on age, this
leaves orphaned shard files on the disk. The retention service will then not attempt
to remove the obsolete shard files because the meta store does not know about them.
This can cause excessive disk space usage for some users.
This corrects that by requiring shard files to be deleted before they can
be removed from the meta store.
fixes: #24529
(cherry picked from commit 7bd3f89d18)
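In sketch form (illustrative names, with a minimal interface standing in for the real meta client), the corrected ordering is files first, meta store second:

```go
package retention

import "github.com/influxdata/influxdb/v2/tsdb"

// metaClient is a minimal illustrative interface for this sketch.
type metaClient interface {
	DeleteShardGroup(database, policy string, id uint64) error
}

type shardGroup struct {
	ID       uint64
	ShardIDs []uint64
}

// deleteShardGroup deletes shard files from disk before pruning the
// meta store entry. If a file deletion fails, the meta store still
// knows about the shard, so a later retention pass retries instead of
// orphaning files on disk. Illustrative, not the actual retention code.
func deleteShardGroup(store *tsdb.Store, mc metaClient, db, rp string, g shardGroup) error {
	for _, id := range g.ShardIDs {
		if err := store.DeleteShard(id); err != nil {
			return err // keep the meta entry so deletion is retried
		}
	}
	return mc.DeleteShardGroup(db, rp, g.ID)
}
```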
closes https://github.com/influxdata/influxdb/issues/24545
Co-authored-by: Geoffrey Wossum <gwossum@influxdata.com>
HTTP 5XX errors were being incorrectly returned for
BoltDB errors that were actually bad requests, e.g.,
names that were too long for buckets, users, and
organizations. Map BoltDB errors to the correct Influx
errors and return 4XX errors where appropriate. Also
add op codes to more errors.
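An illustrative mapping in that spirit (the bbolt sentinel and the platform error codes are real; the helper itself is a sketch, not the actual bolt layer code):

```go
package sketch

import (
	"errors"

	platform "github.com/influxdata/influxdb/v2/kit/platform/errors"
	bolt "go.etcd.io/bbolt"
)

// mapBoltError translates BoltDB errors that are really caller
// mistakes into EInvalid (HTTP 4XX) platform errors with an op code,
// instead of letting them surface as EInternal (HTTP 5XX).
func mapBoltError(op string, err error) error {
	if err == nil {
		return nil
	}
	if errors.Is(err, bolt.ErrKeyTooLarge) {
		return &platform.Error{
			Code: platform.EInvalid, // renders as a 4XX response
			Op:   op,
			Msg:  "name too long",
			Err:  err,
		}
	}
	return &platform.Error{Code: platform.EInternal, Op: op, Err: err}
}
```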
* fix: fixes an error querying virtual dbrps
When the virtual pointer was set to false, the mappings were being ignored.
* fix: missed part in a rebase
* test: add test for shard mapping virtual dbrps
* fix: do not create virtual mappings for equivalent physical mappings
* fix: fix find dbrps, make bucket default if not overridden
* fix: allow filtering of virtual DBRPs, filter in shard mapper (sketched below)
* fix: update tests to mock for virtual filter for shards and update server test
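A hedged sketch of the filtering rules those commits describe (types and helper are invented for illustration): a virtual mapping is suppressed when the caller excludes virtual DBRPs, and never emitted when an equivalent physical mapping exists.

```go
type dbrpMapping struct {
	Database        string
	RetentionPolicy string
	Virtual         bool
}

// filterDBRPs is illustrative only. It drops a virtual mapping when
// the caller excludes virtual DBRPs, or when a physical mapping for
// the same database/retention-policy pair already exists.
func filterDBRPs(in []dbrpMapping, includeVirtual bool) []dbrpMapping {
	physical := make(map[string]bool)
	for _, m := range in {
		if !m.Virtual {
			physical[m.Database+"/"+m.RetentionPolicy] = true
		}
	}
	var out []dbrpMapping
	for _, m := range in {
		if m.Virtual && (!includeVirtual || physical[m.Database+"/"+m.RetentionPolicy]) {
			continue
		}
		out = append(out, m)
	}
	return out
}
```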
We previously allowed read tokens access to all of v1 query, including
InfluxQL queries that made state changes to the DB, specifically,
'DELETE' and 'DROP MEASUREMENT'. This allowed tokens with only read
permissions to delete points via the legacy /query endpoint.
/api/v2/query was unaffected.
This adjusts the behavior to verify that the token has write permissions
when specifying 'DELETE' and 'DROP MEASUREMENT' InfluxQL queries. We
follow the same pattern as other existing v1 failure scenarios and
instead of failing hard with 401, we use ectx.Send() to send an error to
the user (with 200 status):
{"results":[{"statement_id":0,"error":"insufficient permissions"}]}
Returning in this manner is consistent with Cloud 2, which also returns
200 with "insufficient permissions" for these two InfluxQL queries.
To facilitate authorization unit tests, we add MustNewPermission() to
testing/util.go.
Closes: #22799
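A sketch of the added gate (the statement types are from the influxql package; the helper name and surrounding wiring are illustrative):

```go
package sketch

import "github.com/influxdata/influxql"

// requiresWritePermission reports whether a v1 statement mutates data
// and therefore needs a write-scoped token. Illustrative helper; the
// real check lives in the v1 query authorization path.
func requiresWritePermission(stmt influxql.Statement) bool {
	switch stmt.(type) {
	case *influxql.DeleteStatement,
		*influxql.DeleteSeriesStatement,
		*influxql.DropMeasurementStatement:
		return true
	default:
		return false
	}
}
```

When this check fails for a read-only token, the executor sends the "insufficient permissions" error for that statement via ectx.Send() and keeps the 200 status, as described above.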
* fix: remove nats for scraper processing
Scrapers now use Go channels instead of NATS for interprocess communication.
This should fix #23085.
Additionally, this work found and fixed #23106.
* chore: fix formatting
* chore: fix static check and go.mod
* test: fix some flaky tests
* fix: mark NATS arguments as deprecated
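A minimal sketch of the channel-based handoff that replaces the NATS hop (the types and buffer size are illustrative):

```go
package sketch

import "context"

// scrapeRequest is an illustrative stand-in for a scraper target that
// is due for collection.
type scrapeRequest struct {
	TargetURL string
}

// startScraperQueue replaces a NATS subject with a plain buffered
// channel: producers enqueue requests, worker goroutines consume them.
func startScraperQueue(ctx context.Context, workers int, handle func(scrapeRequest)) chan<- scrapeRequest {
	queue := make(chan scrapeRequest, 64)
	for i := 0; i < workers; i++ {
		go func() {
			for {
				select {
				case <-ctx.Done():
					return
				case req := <-queue:
					handle(req)
				}
			}
		}()
	}
	return queue
}
```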
* feat: show measurement database and retention policy wildcards (see the sketch after this list)
Closes #22390
* chore: formatting
* test: this commit fails tests with empty database
* fix: show measurements with one empty database
* feat: works with custom iterator
* feat: works with existing iterators
* chore: cleanup
* test: consistent assertions for tests
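A hedged sketch of the ON-clause matching this feature implies, e.g. `SHOW MEASUREMENTS ON *.*` (the helper and its signature are illustrative):

```go
// matchesOnClause is illustrative: either part of the ON <db>.<rp>
// clause may be a wildcard, matching every database or retention policy.
func matchesOnClause(dbPattern, rpPattern, db, rp string) bool {
	matches := func(pattern, name string) bool {
		return pattern == "*" || pattern == name
	}
	return matches(dbPattern, db) && matches(rpPattern, rp)
}
```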
* fix: better log message if trying to filter on the value of a field key
* fix: comment for handling boolean literal; handle false boolean as well
* fix: make time range checking inclusive
* Move tenant.Service unit tests into its package
* Delete the top-level TenantService interface now that it's not used.
* Move helper funcs for setting up test stores into testing pkg
* Delete duplicate implementations scattered through the codebase
* Move error assertions into store-creation helpers
* fix: field metaqueries take fast path if predicate is only on `_measurement` (sketch below)
* chore: update CHANGELOG
* test: add test for fields with measurement predicate
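A sketch of the fast-path test (illustrative; the real check inspects the pushed-down predicate expression):

```go
// onlyMeasurementPredicate is illustrative: if every tag referenced by
// the predicate is _measurement, the field-keys metaquery can take the
// fast path instead of scanning series.
func onlyMeasurementPredicate(referencedTags []string) bool {
	if len(referencedTags) == 0 {
		return false
	}
	for _, tag := range referencedTags {
		if tag != "_measurement" {
			return false
		}
	}
	return true
}
```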
* Regenerate protos using gogo 1.3.2
* Add protos to generate, add checkgenerate to CI
* Address proto warning
* Add generator tooling to Makefile
* Delete recursive Makefiles, simplify generation run by goreleaser
* Use env bash for fetch-ui-assets
* Add static-data to clean target
* feat: new metadata backup endpoint
* feat: added restore/sql API endpoint
* fix: content-type is multipart/mixed, part names are kv and sql
* fix: changed multipart manifest to buckets and made it .json
* feat: added lock for backing up sqlite and bolt dbs
* fix: use read lock instead of write lock on kv during backup
* fix: use filepath.Join for temp dirs
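A sketch of the multipart/mixed response shape described above, with the `kv` and `sql` part names (the handler wiring and reader arguments are illustrative, not the actual endpoint code):

```go
package sketch

import (
	"fmt"
	"io"
	"mime/multipart"
	"net/http"
	"net/textproto"
)

// writeBackup streams the metadata backup as multipart/mixed with one
// part per store.
func writeBackup(w http.ResponseWriter, kv, sql io.Reader) error {
	mw := multipart.NewWriter(w)
	w.Header().Set("Content-Type", "multipart/mixed; boundary="+mw.Boundary())
	for _, part := range []struct {
		name string
		r    io.Reader
	}{{"kv", kv}, {"sql", sql}} {
		hdr := make(textproto.MIMEHeader)
		hdr.Set("Content-Disposition", fmt.Sprintf("attachment; name=%q", part.name))
		pw, err := mw.CreatePart(hdr)
		if err != nil {
			return err
		}
		if _, err := io.Copy(pw, part.r); err != nil {
			return err
		}
	}
	return mw.Close()
}
```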
* feat(query): enable min/max pushdown
* fix(query): fix the group last pushdown to use descending cursors
* test(storage): add read group test with no agg
Co-authored-by: Jonathan A. Sternberg <jonathan@influxdata.com>
* fix(storage): Detect need for descending cursor in WindowAggregate (example below)
* chore: Format + comments
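In sketch form, the detection amounts to: `last()` wants the newest point per window first, so it needs a descending cursor, while the other window aggregates scan ascending (names illustrative, not the planner's code):

```go
// cursorAscending is illustrative of the WindowAggregate fix: last()
// reads the newest value in each window, so the storage layer should
// open a descending (newest-first) cursor for it.
func cursorAscending(aggregateName string) bool {
	return aggregateName != "last"
}
```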
* chore: PR cosmetic feedback (#21141)
Co-authored-by: Phil Bracikowski <pbracikowski+git@influxdata.com>
* chore: rename testcase and fix comments
Co-authored-by: Phil Bracikowski <pbracikowski+git@influxdata.com>