* Revert "fix(kv): Don't stop when key not found from index."
This reverts commit bd9167d383.
* Revert "fix(kv): push down org ID to skip in delete URM (#16841)"
This reverts commit a5f508de77.
* Revert "fix(kv): delete authorization from correct index bucket (#16835)"
This reverts commit 7349216e94.
* Revert "feat(kv): Index Authorizations by User ID (#16818)"
This reverts commit df36fe957b.
* Revert "feat: add indexes to urm for user lookups (#16789)"
This reverts commit 9561d0a4f4.
* fix(kv): push down org ID to skip in delete URM
* fix(kv): use database key rather than resource id
We are trying to skip deletes that would remove keys that have already been deleted (see the sketch after this entry). This is a rather extreme approach, and I believe we should think about how to fix user-resource-mappings.
Co-authored-by: Lyon Hill <lyondhill@gmail.com>
Signed-off-by: Chris Goller <goller@gmail.com>
Co-authored-by: George <me@georgemac.com>
Co-authored-by: Lyon Hill <lyondhill@gmail.com>
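A minimal sketch of the skip described above, assuming a simple bucket interface; `Bucket`, `ErrKeyNotFound`, and `deleteIfPresent` are illustrative names, not InfluxDB's actual kv API:

```go
package sketch

import "errors"

// ErrKeyNotFound mirrors the not-found sentinel a kv store returns (assumed).
var ErrKeyNotFound = errors.New("key not found")

// Bucket is a minimal stand-in for a kv bucket.
type Bucket interface {
	Get(key []byte) ([]byte, error)
	Delete(key []byte) error
}

// deleteIfPresent compares the raw database key rather than a decoded
// resource ID, and skips the delete when the key is already gone.
func deleteIfPresent(b Bucket, key []byte) error {
	if _, err := b.Get(key); errors.Is(err, ErrKeyNotFound) {
		return nil // already deleted: skip
	} else if err != nil {
		return err
	}
	return b.Delete(key)
}
```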
* fix(kv): delete authorization from correct index bucket
* fix(kv): return not found code when user resource mapping is indexed but not in source
* chore(kv): define failing test for URM on delete
* feat(kv): add user id index on authorizations
* chore(auths): test FindAuthorizations both with and without a populated index
* chore(kv): cleanup index skipping flag in auths service
* fix(kv): bad flag around auth by user index population
* fix(kv): auth by user index lookup use correct buckets
* chore(kv): ensure indexer is called as expected when auth user index missing
* chore(kv): add benchmarks around authorization lookup
* feat(kv): Create an indexer to allow the addition of indexes
This allows index population to be incremental, so a rolling update to the indexes can be handled cleanly.
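A minimal sketch of the idea, under assumed names (`Store`, `Indexer`, and `IndexFn` are illustrative, not the actual kv package API): a background walk of the source bucket writes one index entry per record, so the index can fill in during a rolling update:

```go
package sketch

// IndexFn derives an index key from a source key/value pair.
type IndexFn func(key, value []byte) ([]byte, error)

// Store is a minimal kv abstraction.
type Store interface {
	Put(bucket, key, value []byte) error
	ForEach(bucket []byte, fn func(k, v []byte) error) error
}

// Indexer incrementally populates an index bucket from a source bucket.
type Indexer struct {
	store         Store
	source, index []byte
	fn            IndexFn
}

// Populate walks the source bucket and writes one index entry per record,
// storing the source key as the index value so lookups can resolve records.
func (i *Indexer) Populate() error {
	return i.store.ForEach(i.source, func(k, v []byte) error {
		ik, err := i.fn(k, v)
		if err != nil {
			return err
		}
		return i.store.Put(i.index, ik, k)
	})
}
```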
* Begin implementing retrieval of telegraf plugin stats
* Implement storing/deletion of telegraf plugin stats
* Test plugin stats
* Initialize plugins bucket for tests
* Add comment
* Shorten time and frequency in bolt when providing telegraf plugin metrics
* Simplify ticker loop
* Leak underlying ticker while still satisfying linter
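The two commits above likely refer to the staticcheck warning on `time.Tick` (SA1015), which flags the leaked ticker; ranging over an inline `time.NewTicker` leaks the ticker just the same but is not flagged. A sketch of that pattern (`pollStats` and `emit` are illustrative names):

```go
package sketch

import "time"

// pollStats emits plugin metrics on a fixed interval. staticcheck (SA1015)
// flags time.Tick because its ticker can never be stopped; ranging over an
// inline NewTicker leaks the ticker just the same, but quietly, which is
// acceptable for a loop that lives as long as the process.
func pollStats(interval time.Duration, emit func()) {
	for range time.NewTicker(interval).C {
		emit()
	}
}
```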
* fix(kv): Update scrapers to use new forward cursor
I also moved a db lookup outside of a for loop as a minor optimization (sketched below).
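A sketch of that hoisting pattern with illustrative names (`Cursor`, `scanWithHoistedLookup`, and the callbacks are not the actual scraper service API): the lookup result is invariant across rows, so it is fetched once before the scan:

```go
package sketch

// Cursor is a minimal forward cursor; a nil key signals exhaustion.
type Cursor interface {
	Next() (key, value []byte)
}

// scanWithHoistedLookup fetches the invariant record once, before the scan,
// instead of re-reading it on every row of the cursor walk.
func scanWithHoistedLookup(cur Cursor, lookup func(id []byte) ([]byte, error), orgID []byte, visit func(org, k, v []byte)) error {
	org, err := lookup(orgID) // hoisted: previously ran once per row
	if err != nil {
		return err
	}
	for k, v := cur.Next(); k != nil; k, v = cur.Next() {
		visit(org, k, v)
	}
	return nil
}
```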
* fix(inmem): fix a potential race condition
* feat(backup): `influx backup` creates data backup
* feat(backup): initial restore work
* feat(restore): initial restore impl
Adds a restore tool which does offline restore of data and metadata.
* fix(restore): pr cleanup
* fix(restore): fix data dir creation
* fix(restore): pr cleanup
* chore: amend CHANGELOG
* fix: restore to empty dir fails differently
* feat(backup): backup and restore credentials
Saves the credentials file to backups and restores it from backups.
Additionally adds some logging for errors when fetching backup files.
* fix(restore): add missed commit
* fix(restore): pr cleanup
* fix(restore): fix default credentials restore path
* fix(backup): actually copy the credentials file for the backup
* fix: dirs get 0777, files get 0666
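These wide modes work because the process umask still applies. A sketch of the effect, assuming a typical umask of 022 (`writeRestored` is an illustrative name, not the restore tool's actual helper):

```go
package sketch

import (
	"os"
	"path/filepath"
)

// writeRestored creates parent directories with 0777 and files with 0666;
// the process umask (commonly 022) narrows these to the conventional
// 0755/0644, rather than hard-coding tighter modes in the restore path.
func writeRestored(path string, data []byte) error {
	if err := os.MkdirAll(filepath.Dir(path), 0777); err != nil {
		return err
	}
	return os.WriteFile(path, data, 0666)
}
```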
* fix: small review feedback
Co-authored-by: tmgordeeva <tanya@influxdata.com>
* feat(checks): Add custom check type
* feat(checks): Remove alert builder from custom check
* feat(checks): Add AlertBuilderAction to list of possible actions
* feat(checks): Query visualization does not make sense for custom check
* feat(check): check editor should only reexecute queries if view query changes
* Update ui/src/timeMachine/components/TimeMachineFluxEditor.tsx
Co-Authored-By: Bucky Schwarz <hoorayimhelping@users.noreply.github.com>
* Address PR review
Co-authored-by: Bucky Schwarz <hoorayimhelping@users.noreply.github.com>
The issue here is that the unique-by-name index for variables was implemented with the same functionality as this orgs index. The duplicative orgs index was nuked, but the migration to hydrate the org/name index never happened. This is a stopgap until that migration is in place.
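For illustration, a unique org/name index key of the kind described might be composed like this (`orgNameKey` and the `/` separator are assumptions; the real encoding lives in the kv package):

```go
package sketch

// orgNameKey sketches the composite key a unique org/name index would use:
// the owning org ID followed by the variable name.
func orgNameKey(orgID, name []byte) []byte {
	key := make([]byte, 0, len(orgID)+1+len(name))
	key = append(key, orgID...)
	key = append(key, '/')
	return append(key, name...)
}
```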
This is work moving us towards more reusable components that add some rigidity around handling indexes and the entity bucket. The behavior is very common across much of the kv pkg, so this can be reused throughout.
This adds some easy wins for tracing (and eventually metrics) that enable more insight than what is currently possible, and it normalizes these concerns across the kv store.
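A sketch of the normalization pattern, with a timed log line standing in for a real tracing span (`traceKVOp` is an illustrative helper, not the actual kv package API):

```go
package sketch

import (
	"context"
	"log"
	"time"
)

// traceKVOp shows the wrapping pattern: every kv operation funnels through
// one helper so spans (approximated here by a timed log line) are reported
// uniformly, instead of each call site instrumenting itself.
func traceKVOp(ctx context.Context, op string, fn func(context.Context) error) error {
	start := time.Now()
	err := fn(ctx)
	log.Printf("kv op=%s took=%s err=%v", op, time.Since(start), err)
	return err
}
```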
This work is to support pkger, but it was also possible to add back the skipped tests. We were seeing failures upstream and didn't catch them in influxdb because the tests were being skipped.
Closes: #14799
In the event that findTaskByIDWithAuth cannot find the task ID contained in the bucket, the outer loop would never terminate. This change ensures that we call Next() alongside any call to continue while using a Cursor.
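A sketch of the fixed loop shape, with an illustrative `Cursor` interface: advancing in the for post-statement guarantees `Next()` runs even on iterations that `continue`:

```go
package sketch

// Cursor is a minimal forward cursor; a nil key signals exhaustion.
type Cursor interface {
	Next() (key, value []byte)
}

// eachTask advances the cursor in the for post-statement, so Next() runs on
// every iteration. Calling Next() only at the bottom of the loop body and
// then hitting `continue` on a missing task is exactly how the loop
// described above failed to terminate.
func eachTask(cur Cursor, visit func(k, v []byte) bool) {
	for k, v := cur.Next(); k != nil; k, v = cur.Next() {
		if !visit(k, v) {
			continue // safe: the post-statement still advances the cursor
		}
	}
}
```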