Compare commits


62 Commits

Author SHA1 Message Date
Brandon Pfeifer b3b982d746
chore: update MacOS executor to M1 (#24372) 2023-09-20 14:30:21 -04:00
Brandon Pfeifer 75a8bcfae2
chore: replace "package builder" shell implementation with python (#24306) 2023-06-30 10:45:03 -04:00
Jeffrey Smith II a6bb6a85d6
fix: update ui to remove new data explorer (#24220) 2023-06-29 15:34:23 -04:00
alespour cc62221501
chore: upgrade to Go 1.20.5 (#24299) 2023-06-20 09:32:44 -05:00
cui fliter 46ec649b9c
chore: fix function name in comment (#24281) 2023-06-14 11:18:13 -04:00
Brandon Pfeifer 3dabfcdd08
fix: correct CHANGELOG.md upload destination (#24287)
This updates the job logic so that the workflow condition is evaluated
by CircleCI rather than the shell. It also uses the "aws-s3" orb
for uploading to S3 (rather than awscli).
2023-06-13 20:00:45 -04:00
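In orb form, the upload step becomes declarative rather than a hand-rolled awscli invocation. A minimal sketch, assuming the `circleci/aws-s3` orb is registered under the alias `aws-s3` (the destination path here is illustrative):

```yaml
# Illustrative CircleCI step: upload the changelog with the aws-s3 orb
# instead of shelling out to awscli.
- aws-s3/copy:
    aws-region: AWS_S3_REGION
    aws-access-key-id: AWS_ACCESS_KEY_ID
    aws-secret-access-key: AWS_SECRET_ACCESS_KEY
    from: /tmp/workspace/CHANGELOG.md
    to: "s3://dl.influxdata.com/influxdb/releases/CHANGELOG.<< pipeline.git.tag >>.md"
```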
Brandon Pfeifer b352a179a2
fix: update terraform with newer version (#24284)
The terraform shipped with snap (in the older version of Ubuntu)
only supported public key encryption with ssh-rsa. New versions
of Linux started deprecating ssh-rsa, so this version bump
is required.
2023-06-13 16:55:37 -04:00
Brandon Pfeifer 2f3733c2cb
chore: generate "influxdb2.${CIRCLE_TAG}.digests" for each release (#24276) 2023-06-13 11:37:58 -04:00
Jeffrey Smith II 6219f98e1c
chore: Update the go version required (#24217) 2023-05-30 18:00:08 -04:00
Christopher M. Wolff 4acc733019
build(flux): update flux to v0.194.3 (#24252) 2023-05-30 14:21:20 -07:00
Brandon Pfeifer ba7f1a7ab6
chore: upgrade changelogger to latest version (#24244)
This allows changelogs to be built from "non-release" tags. These
changelogs use "UNRELEASED" as the first section header. Commits
from these sections are eventually rolled into a proper "release"
tag (e.g. v2.7.0).
2023-05-26 12:09:44 -04:00
dependabot[bot] 398660438f
build(deps): bump github.com/docker/distribution (#24230)
Bumps [github.com/docker/distribution](https://github.com/docker/distribution) from 2.8.1+incompatible to 2.8.2+incompatible.
- [Release notes](https://github.com/docker/distribution/releases)
- [Commits](https://github.com/docker/distribution/compare/v2.8.1...v2.8.2)

---
updated-dependencies:
- dependency-name: github.com/docker/distribution
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-15 15:03:30 -05:00
Christopher M. Wolff 3eb091d2e7
build(flux): update flux to v0.194.1 (#24203) 2023-04-17 15:09:43 -07:00
Brandon Pfeifer 49c7a7407a
feat: implement remote package signing (#24194) 2023-04-12 11:41:41 -04:00
Martin Hilton e237d01fc8
build(flux): update flux to v0.194.0 (#24186) 2023-04-11 10:56:17 +01:00
Brandon Pfeifer 1bb310e606
fix: use Amazon EC2 Image instead of CentOS EC2 Image (#24181) 2023-04-05 17:41:58 -04:00
Jeffrey Smith II 7616ca642b
chore: update to go 1.20.3 (#24180) 2023-04-05 11:29:13 -04:00
dependabot[bot] 85f725f8b9
build(deps): bump github.com/docker/docker (#24179)
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 23.0.0+incompatible to 23.0.3+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v23.0.0...v23.0.3)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-05 10:03:06 -04:00
Jeffrey Smith II 01dda5d9f2
chore: update UI to latest version (#24171) 2023-04-04 12:13:29 -04:00
Jeffrey Smith II c854e53c2b
fix: chmod'ing the manifest is unnecessary (#24165) 2023-04-03 13:09:01 -04:00
dependabot[bot] eac0ee0acc
build(deps): bump github.com/opencontainers/runc from 1.1.3 to 1.1.5 (#24163)
Bumps [github.com/opencontainers/runc](https://github.com/opencontainers/runc) from 1.1.3 to 1.1.5.
- [Release notes](https://github.com/opencontainers/runc/releases)
- [Changelog](https://github.com/opencontainers/runc/blob/v1.1.5/CHANGELOG.md)
- [Commits](https://github.com/opencontainers/runc/compare/v1.1.3...v1.1.5)

---
updated-dependencies:
- dependency-name: github.com/opencontainers/runc
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-29 14:36:44 -04:00
Eng Zer Jun 903d30d658
test: use `T.TempDir` to create temporary test directory (#23258)
* test: use `T.TempDir` to create temporary test directory

This commit replaces `os.MkdirTemp` with `t.TempDir` in tests. The
directory created by `t.TempDir` is automatically removed when the test
and all its subtests complete.

Prior to this commit, a temporary directory created using `os.MkdirTemp`
needs to be removed manually by calling `os.RemoveAll`, which is omitted
in some tests. The error handling boilerplate, e.g.
	defer func() {
		if err := os.RemoveAll(dir); err != nil {
			t.Fatal(err)
		}
	}
is also tedious, but `t.TempDir` handles this for us nicely.

Reference: https://pkg.go.dev/testing#T.TempDir
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>

* test: fix failing TestSendWrite on Windows

=== FAIL: replications/internal TestSendWrite (0.29s)
    logger.go:130: 2022-06-23T13:00:54.290Z	DEBUG	Created new durable queue for replication stream	{"id": "0000000000000001", "path": "C:\\Users\\circleci\\AppData\\Local\\Temp\\TestSendWrite1627281409\\001\\replicationq\\0000000000000001"}
    logger.go:130: 2022-06-23T13:00:54.457Z	ERROR	Error in replication stream	{"replication_id": "0000000000000001", "error": "remote timeout", "retries": 1}
    testing.go:1090: TempDir RemoveAll cleanup: remove C:\Users\circleci\AppData\Local\Temp\TestSendWrite1627281409\001\replicationq\0000000000000001\1: The process cannot access the file because it is being used by another process.

Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>

* test: fix failing TestStore_BadShard on Windows

=== FAIL: tsdb TestStore_BadShard (0.09s)
    logger.go:130: 2022-06-23T12:18:21.827Z	INFO	Using data dir	{"service": "store", "path": "C:\\Users\\circleci\\AppData\\Local\\Temp\\TestStore_BadShard1363295568\\001"}
    logger.go:130: 2022-06-23T12:18:21.827Z	INFO	Compaction settings	{"service": "store", "max_concurrent_compactions": 2, "throughput_bytes_per_second": 50331648, "throughput_bytes_per_second_burst": 50331648}
    logger.go:130: 2022-06-23T12:18:21.828Z	INFO	Open store (start)	{"service": "store", "op_name": "tsdb_open", "op_event": "start"}
    logger.go:130: 2022-06-23T12:18:21.828Z	INFO	Open store (end)	{"service": "store", "op_name": "tsdb_open", "op_event": "end", "op_elapsed": "77.3µs"}
    testing.go:1090: TempDir RemoveAll cleanup: remove C:\Users\circleci\AppData\Local\Temp\TestStore_BadShard1363295568\002\data\db0\rp0\1\index\0\L0-00000001.tsl: The process cannot access the file because it is being used by another process.

Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>

* test: fix failing TestPartition_PrependLogFile_Write_Fail and TestPartition_Compact_Write_Fail on Windows

=== FAIL: tsdb/index/tsi1 TestPartition_PrependLogFile_Write_Fail/write_MANIFEST (0.06s)
    testing.go:1090: TempDir RemoveAll cleanup: remove C:\Users\circleci\AppData\Local\Temp\TestPartition_PrependLogFile_Write_Failwrite_MANIFEST656030081\002\0\L0-00000003.tsl: The process cannot access the file because it is being used by another process.
    --- FAIL: TestPartition_PrependLogFile_Write_Fail/write_MANIFEST (0.06s)

=== FAIL: tsdb/index/tsi1 TestPartition_Compact_Write_Fail/write_MANIFEST (0.08s)
    testing.go:1090: TempDir RemoveAll cleanup: remove C:\Users\circleci\AppData\Local\Temp\TestPartition_Compact_Write_Failwrite_MANIFEST3398667527\002\0\L0-00000003.tsl: The process cannot access the file because it is being used by another process.
    --- FAIL: TestPartition_Compact_Write_Fail/write_MANIFEST (0.08s)

We must close the open file descriptor otherwise the temporary file
cannot be cleaned up on Windows.

Fixes: 619eb1cae6 ("fix: restore in-memory Manifest on write error")
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>

* test: fix failing TestReplicationStartMissingQueue on Windows

=== FAIL: TestReplicationStartMissingQueue (1.60s)
    logger.go:130: 2023-03-17T10:42:07.269Z	DEBUG	Created new durable queue for replication stream	{"id": "0000000000000001", "path": "C:\\Users\\circleci\\AppData\\Local\\Temp\\TestReplicationStartMissingQueue76668607\\001\\replicationq\\0000000000000001"}
    logger.go:130: 2023-03-17T10:42:07.305Z	INFO	Opened replication stream	{"id": "0000000000000001", "path": "C:\\Users\\circleci\\AppData\\Local\\Temp\\TestReplicationStartMissingQueue76668607\\001\\replicationq\\0000000000000001"}
    testing.go:1206: TempDir RemoveAll cleanup: remove C:\Users\circleci\AppData\Local\Temp\TestReplicationStartMissingQueue76668607\001\replicationq\0000000000000001\1: The process cannot access the file because it is being used by another process.

Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>

* test: update TestWAL_DiskSize

Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>

* test: fix failing TestWAL_DiskSize on Windows

=== FAIL: tsdb/engine/tsm1 TestWAL_DiskSize (2.65s)
    testing.go:1206: TempDir RemoveAll cleanup: remove C:\Users\circleci\AppData\Local\Temp\TestWAL_DiskSize2736073801\001\_00006.wal: The process cannot access the file because it is being used by another process.

Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>

---------

Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2023-03-21 16:22:11 -04:00
anon8675309 96d6dc3d82
fix: Removed timeout which is hit with large databases or slow servers #22803 (#23400)
Co-authored-by: anon8675309 <m7gy7uav@duck.com>
2023-03-21 10:47:55 -04:00
L1Cafe 5a7ce078f5
fix: scraping failed when Content-Type header is not set (#24135)
Co-authored-by: L1Cafe <L1Cafe@donotemail.me>
2023-03-14 10:13:28 -04:00
Jeffrey Smith II e1d0102a6f
fix: add error message when attempting to delete by field (#24131)
* fix: add error message when attempting to delete by field

* test: add test for delete by field
2023-03-10 09:13:24 -05:00
Jeffrey Smith II b819edf095
fix: rename replication fields for better clarity (#24126)
* fix: rename replication fields for better clarity

* fix: dont rename, only add new field
2023-03-09 13:11:43 -05:00
Jeffrey Smith II 77fd64a975
fix: handle replication missing queue (#24123)
* fix: replications should startup after backup/restore

* chore: refactor

* test: improve logging and handle test better
2023-03-09 13:10:53 -05:00
Ikko Eltociear Ashimine 387d9007a7
chore: fix typo in functions.go (#24133)
intial -> initial
2023-03-09 12:40:17 -05:00
fuyou 22d698bd7e
fix(sec): upgrade containerd to 1.6.18 (#24129) 2023-03-09 12:39:30 -05:00
Jamie Strandboge 569e84d4a7
chore: use go 1.20.1 (#24114) 2023-03-01 15:49:27 -06:00
dependabot[bot] 23446cc371
build(deps): bump golang.org/x/net from 0.5.0 to 0.7.0 (#24112)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.5.0 to 0.7.0.
- [Release notes](https://github.com/golang/net/releases)
- [Commits](https://github.com/golang/net/compare/v0.5.0...v0.7.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-28 12:30:15 -05:00
Christopher M. Wolff a035bcfb6e
build(flux): update flux to v0.193.0 (#24103) 2023-02-23 10:18:46 -08:00
wiedld 4fc73ea221
feat(monitor-ci/415): get oss-e2es working locally in UI repo (#24098)
* feat(monitor-ci/415): get oss-e2es working locally in UI repo

* chore: remove old, unused artifact directories

* fix: handle import cycle caused by trying to use the onboarding client

---------

Co-authored-by: Jeffrey Smith II <jsmith@influxdata.com>
2023-02-22 09:28:46 -05:00
Manuel de la Peña 260d88b45d
chore: bump testcontainers-go to 0.18.0 (#24097) 2023-02-21 10:07:17 -05:00
Jeffrey Smith II f74c69c5e4
chore: update to go 1.20 (#24088)
* build: upgrade to go 1.19

* chore: bump go.mod

* chore: `gofmt` changes for doc comments

https://tip.golang.org/doc/comment

* test: update tests for new sort order

* chore: make generate-sources

* chore: make generate-sources

* chore: go 1.20

* chore: handle rand.Seed deprecation

* chore: handle rand.Seed deprecation in tests

---------

Co-authored-by: DStrand1 <dstrandboge@influxdata.com>
2023-02-09 14:14:35 -05:00
Jeffrey Smith II 8ad6e17265
chore: add additional error logging when deleting shard (#24038)
* chore: add additional error logging when deleting shard

* chore: better logging message
2023-02-09 09:10:25 -05:00
Jeffrey Smith II 06a59020d0
fix: prevent unauthorized writes in flux "to" function (#24077)
* fix: prevent unauthorized writes in flux "to" function

* test: add test for "to" permissions fix
2023-02-06 10:07:18 -05:00
suitableZebraCaller ec7fdd3a58
fix: Show Replication Queue size and Replication TCP Errors (#23960)
* feat: Show remaining replication queue size

* fix: Show non-http related error messages

* fix: Show non-http related error messages with backoff

* fix: Updates for replication tests

* chore: formatting

* chore: formatting

* chore: formatting

* chore: formatting

* chore: lowercase json field

---------

Co-authored-by: Geoffrey <suitableZebraCaller@users.noreply.github.com>
Co-authored-by: Jeffrey Smith II <jeffreyssmith2nd@gmail.com>
2023-02-02 09:47:45 -05:00
dependabot[bot] e2f835bb0f
build(deps): bump github.com/aws/aws-sdk-go from 1.30.12 to 1.33.0 (#24070)
Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.30.12 to 1.33.0.
- [Release notes](https://github.com/aws/aws-sdk-go/releases)
- [Changelog](https://github.com/aws/aws-sdk-go/blob/v1.33.0/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go/compare/v1.30.12...v1.33.0)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go
  dependency-type: indirect
...

closes https://github.com/influxdata/edge/issues/371

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-30 15:08:33 -08:00
Joshua Powers 109bc88512
chore: update package repo GPG key (#24061) 2023-01-26 16:10:09 -07:00
Chunchun Ye 8ed55e72b8
build(flux): update flux to v0.192.0 (#24028) 2023-01-13 10:22:57 -06:00
Jeffrey Smith II 6b60728843
fix: Update UI to resolve Dashboard crash and All Access Token creati… (#24017)
* fix: Update UI to resolve Dashboard crash and All Access Token creation (#24014)

* chore: update docs around ui releases
2023-01-04 09:49:21 -05:00
Jeffrey Smith II 24a2b621ea
fix: Pin UI to older version to address Dashboard issues (#23980) 2022-12-15 11:49:43 -05:00
Jeffrey Smith II ffd069a8a5
fix: handle NaN in scraper (#23944)
* fix: handle NaN values in scraper

* chore: a converted int will never be NaN
2022-12-13 15:58:50 -05:00
Brandon Pfeifer ade21ad9a1
fix: restrict file permissions by default (#23959)
Most of these changes can be overridden by the system
maintainer with environment variables or systemd
override snippets.
2022-12-13 11:00:50 -05:00
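The systemd override mechanism mentioned above works through drop-in snippets. A minimal sketch, assuming the unit is named `influxdb.service` (the directive shown is generic; the exact settings the package restricts are not listed here):

```ini
# /etc/systemd/system/influxdb.service.d/override.conf (hypothetical drop-in)
[Service]
# Example maintainer override; apply with:
#   systemctl daemon-reload && systemctl restart influxdb
UMask=0022
```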
Jamie Strandboge cee487fe21
chore: update Go to 1.18.9 (#23973) 2022-12-07 15:25:06 -06:00
Brandon Pfeifer 853d6157e3
feat: perform basic package validation (#23863)
* chore: remove unused build/ci scripts

* feat: validate packages during build

* chore: test CentOS aarch64 package

* fix: remove x86_64 from parameterized workflow

* fix: don't upgrade packages

Since some unrelated packages break during upgrade, this
no longer upgrades the system before installing
influxdb.
2022-12-01 10:58:33 -05:00
Manuel de la Peña 26daa86648
chore: bump testcontainers to latest released version (#23858)
* chore: bump testcontainers to v0.15.0

* chore: run go mod tidy

* chore: update test to latest version of testcontainers

* chore: update package

* fix: use collectors.NewGoCollector instead

SA1019 detected by staticcheck
2022-11-23 13:18:10 -05:00
Jeffrey Smith II ef098ac65f
chore: cleanup codeowners file (#23940) 2022-11-22 10:28:40 -05:00
Jeffrey Smith II c2eac86131
feat: port report-db command from 1.x (#23922)
* feat: port report-db command from 1.x

* chore: fix linting

* chore: rename db to bucket

* chore: fix linting
2022-11-21 11:23:13 -05:00
Jeffrey Smith II 77081081b5
feat: port check-schema and merge-schema from 1.x (#23921)
* feat: add check-schema and merge-schema commands to influx inspect

* chore: fix linting

* fix: add warning if fields.idxl is encountered
2022-11-21 10:39:50 -05:00
Jeffrey Smith II 9bf8840a63
fix: update me and users routes to match cloud/documentation (#23837)
* fix: update me and users routes to match cloud/documentation

* fix: handle errors in user routes properly
2022-11-21 10:39:30 -05:00
Jeffrey Smith II f026d7bdaf
fix: Fixes migrating when a remote already exists (#23912)
* fix: handle migrating with already defined remotes

* test: add test to verify migrating already defined remotes

* fix: properly handle Up
2022-11-17 14:23:10 -05:00
Ole Kristian (Zee) 666cabb1f4
fix: fix wrong max age transformation from seconds (#23684)
* fix: fix wrong max age transformation from seconds

* refactor: clarify max age intent

* refactor: remove unnecessary duration
2022-11-16 16:18:43 -05:00
davidby-influx 7ad8fbad22
chore: fix trace message text (#23918) 2022-11-16 08:40:26 -05:00
Nathaniel Cook 07e6ef2839
build(flux): update flux to v0.191.0 (#23913) 2022-11-15 12:57:16 -07:00
Sam Arnold 4de89afd37
refactor: remove dead iterator code (#23887)
* fix: codegen without needing goimports

* refactor: remove dead code
2022-11-09 19:26:12 -05:00
Jeffrey Smith II 46464f409c
fix: Optimize SHOW FIELD KEY CARDINALITY (#23886)
Use the _fieldKeys system iterator
2022-11-09 14:51:41 -05:00
Jeffrey Smith II 8f936e9e6a
Revert "fix: set limited permissions on package installs (#23683)" (#23855)
This reverts commit 8d5f0b52f3.
2022-11-04 15:56:56 -04:00
Jeffrey Smith II 61870e5202
chore: update 2.5 changelog (#23848)
* chore: update 2.5 changelog

* chore: release 2.5.1 influxdb
2022-11-04 15:56:07 -04:00
Christopher M. Wolff 86207fe46a
build(flux): update flux to v0.189.0 (#23853) 2022-11-03 11:03:33 -07:00
Jamie Strandboge e62c8abaa9
chore: upgrade to Go 1.18.8 (#23852) 2022-11-02 17:18:42 -05:00
198 changed files with 4701 additions and 3946 deletions

@@ -7,7 +7,7 @@ orbs:
parameters:
cross-container-tag:
type: string
default: go1.18.7-f2a580ca8029f26f2c8a2002d6851967808bf96d
default: go1.20.5-latest
workflow:
type: string
@@ -22,16 +22,16 @@ executors:
resource_class: large
linux-amd64:
machine:
image: ubuntu-2004:202107-02
image: ubuntu-2204:current
resource_class: large
linux-arm64:
machine:
image: ubuntu-2004:202101-01
image: ubuntu-2204:current
resource_class: arm.large
darwin:
resource_class: macos.m1.medium.gen1
macos:
xcode: 12.5.1
resource_class: medium
xcode: 15.0.0
shell: /bin/bash -eo pipefail
windows:
machine:
@@ -65,6 +65,7 @@ nofork_filter: &nofork_filter
branches:
ignore: /pull\/[0-9]+/
workflows:
version: 2
build:
@@ -121,18 +122,28 @@ workflows:
exclude:
- { os: darwin, arch: arm64 }
- { os: windows, arch: arm64 }
- build-package:
- build-packages:
<<: *any_filter
name: build-package-<< matrix.os >>-<< matrix.arch >>
requires:
- build-<< matrix.os >>-<< matrix.arch >>
- build-linux-amd64
- build-linux-arm64
- build-darwin-amd64
- build-windows-amd64
- check_package_deb_amd64:
requires:
- build-packages
- check_package_deb_arm64:
requires:
- build-packages
- check_package_rpm:
<<: *nofork_filter
name:
check_package_rpm-<< matrix.arch >>
matrix:
parameters:
os: [ linux, darwin, windows ]
arch: [ amd64, arm64 ]
exclude:
- { os: darwin, arch: arm64 }
- { os: windows, arch: arm64 }
arch: [ x86_64, aarch64 ]
requires:
- build-packages
- test-downgrade:
<<: *any_filter
requires:
@@ -144,29 +155,29 @@ workflows:
- test-linux-packages:
<<: *nofork_filter
requires:
- build-package-linux-amd64
- changelog:
<<: *any_filter
- build-packages
- sign-packages:
<<: *release_filter
requires:
- build-packages
- s3-publish-packages:
<<: *release_filter
requires:
- test-linux-packages
- build-package-darwin-amd64
- build-package-linux-amd64
- build-package-linux-arm64
- build-package-windows-amd64
- s3-publish-changelog:
- build-packages
- sign-packages
- changelog:
<<: *release_filter
publish-type: release
workflow: release
- changelog-upload:
<<: *release_filter
workflow: release
requires:
- changelog
- perf-test:
record_results: true
requires:
- build-package-darwin-amd64
- build-package-linux-amd64
- build-package-linux-arm64
- build-package-windows-amd64
- build-packages
filters:
branches:
only:
@@ -218,9 +229,10 @@ workflows:
- equal: [ << pipeline.trigger_source >>, scheduled_pipeline ]
- equal: [ << pipeline.parameters.workflow >>, nightly ]
jobs:
- changelog
- s3-publish-changelog:
publish-type: nightly
- changelog:
workflow: nightly
- changelog-upload:
workflow: nightly
requires:
- changelog
- test-race
@@ -282,18 +294,12 @@ workflows:
requires:
- build-docker-nightly-amd64
- build-docker-nightly-arm64
- build-package:
name: build-package-<< matrix.os >>-<< matrix.arch >>
- build-packages:
requires:
- build-nightly-<< matrix.os >>-<< matrix.arch >>
- changelog
matrix:
parameters:
os: [ linux, darwin, windows ]
arch: [ amd64, arm64 ]
exclude:
- { os: darwin, arch: arm64 }
- { os: windows, arch: arm64 }
- build-nightly-linux-amd64
- build-nightly-linux-arm64
- build-nightly-darwin-amd64
- build-nightly-windows-amd64
- litmus-full-test:
requires:
- build-nightly-linux-amd64
@@ -389,6 +395,9 @@ jobs:
- checkout
- attach_workspace:
at: .
- run:
name: Install Rosetta
command: .circleci/scripts/install-rosetta
- run:
name: Run tests
command: ./scripts/ci/run-prebuilt-tests.sh $(pwd)/test-bin $(pwd)/test-results
@@ -461,46 +470,80 @@ jobs:
paths:
- bin
build-package:
executor: linux-amd64
parameters:
os:
type: string
arch:
type: string
build-packages:
machine:
image: ubuntu-2204:current
steps:
- checkout
- attach_workspace:
at: .
- run:
name: Install Package Dependencies
command: |
export DEBIAN_FRONTEND=noninteractive
sudo apt-get update
sudo apt-get install --yes \
build-essential \
git \
rpm \
ruby-dev
at: /tmp/workspace
- checkout
- run: |
export DEBIAN_FRONTEND=noninteractive
sudo -E apt-get update
sudo -E apt-get install --no-install-recommends --yes \
asciidoc \
build-essential \
git \
python3 \
rpm \
ruby-dev \
xmlto
gem install fpm
- run:
name: Get InfluxDB Version
command: |
PREFIX=2.x .circleci/scripts/get-version
- run:
name: Build Package
command: |
export PLAT=<< parameters.os >>
export ARCH=<< parameters.arch >>
.circleci/scripts/build-package
sudo gem install fpm
python3 -m pip install -r .circleci/scripts/package/requirements.txt
# Unfortunately, this must be executed as root. This is so permission
# modifying commands (chown, chmod, etc.) succeed.
sudo --preserve-env=CIRCLE_TAG,CIRCLE_SHA1 .circleci/scripts/package/build.py
- store_artifacts:
path: artifacts/
- persist_to_workspace:
root: /
root: .
paths:
- artifacts
sign-packages:
circleci_ip_ranges: true
docker:
- image: quay.io/influxdb/rsign:latest
auth:
username: $QUAY_RSIGN_USERNAME
password: $QUAY_RSIGN_PASSWORD
steps:
- add_ssh_keys:
fingerprints:
- fc:7b:6e:a6:38:7c:63:5a:13:be:cb:bb:fa:33:b3:3c
- attach_workspace:
at: /tmp/workspace
- run: |
for target in /tmp/workspace/artifacts/*
do
case "${target}"
in
# rsign is shipped on Alpine Linux which uses "busybox ash" instead
# of bash. ash is somewhat more posix compliant and is missing some
# extensions and niceties from bash.
*.deb|*.rpm|*.tar.gz|*.zip)
rsign "${target}"
;;
esac
if [ -f "${target}" ]
then
# Since all artifacts are present, sign them here. This saves Circle
# credits over spinning up another instance just to separate out the
# checksum job. Individual checksums are written by the
# "build_packages" script.
sha256sum "${target}" >> "/tmp/workspace/artifacts/influxdb2.${CIRCLE_TAG}.digests"
fi
done
- persist_to_workspace:
root: /tmp/workspace
paths:
- artifacts
- store_artifacts:
path: /artifacts
destination: artifacts
path: /tmp/workspace/artifacts
s3-publish-packages:
docker:
@@ -530,38 +573,6 @@ jobs:
aws s3 sync . 's3://dl.influxdata.com/influxdb/releases'
s3-publish-changelog:
parameters:
publish-type:
type: string
docker:
- image: ubuntu:latest
steps:
- attach_workspace:
at: /tmp/workspace
- checkout
- run:
name: Publish Changelog to S3
command: |
export DEBIAN_FRONTEND=noninteractive
apt-get update
apt-get install --yes awscli git
PREFIX=2.x .circleci/scripts/get-version
source "${BASH_ENV}"
pushd /tmp/workspace/changelog_artifacts
case "<< parameters.publish-type >>"
in
release)
aws s3 cp CHANGELOG.md "s3://dl.influxdata.com/influxdb/releases/CHANGELOG.${VERSION}.md"
;;
nightly)
aws s3 cp CHANGELOG.md "s3://dl.influxdata.com/platform/nightlies/<< pipeline.git.branch >>/CHANGELOG.md"
;;
esac
build-docker-nightly:
parameters:
resource_class:
@@ -626,14 +637,14 @@ jobs:
- checkout
- add_ssh_keys:
fingerprints:
- "91:0a:5b:a7:f9:46:77:f3:5d:4a:cf:d2:44:c8:2c:5a"
- 3a:d1:7a:b7:57:d7:85:0b:76:79:85:51:38:f3:e4:67
- terraform/validate:
path: scripts/ci/
- run:
name: Terraform apply
command: |
set -x
export DEBNAME="$(find /tmp/workspace/artifacts/influxdb2-*-amd64.deb)"
export DEBNAME="$(find /tmp/workspace/artifacts/influxdb2*amd64.deb)"
terraform -chdir=scripts/ci init -input=false
AWS_ACCESS_KEY_ID=$TEST_AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$TEST_AWS_SECRET_ACCESS_KEY terraform \
-chdir=scripts/ci \
@@ -682,7 +693,7 @@ jobs:
# To ssh into aws without failing host key checks
- add_ssh_keys:
fingerprints:
- "91:0a:5b:a7:f9:46:77:f3:5d:4a:cf:d2:44:c8:2c:5a"
- 3a:d1:7a:b7:57:d7:85:0b:76:79:85:51:38:f3:e4:67
- run:
name: Set up AWS CLI
command: |
@@ -720,7 +731,7 @@ jobs:
- checkout
- add_ssh_keys:
fingerprints:
- "91:0a:5b:a7:f9:46:77:f3:5d:4a:cf:d2:44:c8:2c:5a"
- 3a:d1:7a:b7:57:d7:85:0b:76:79:85:51:38:f3:e4:67
- run:
name: Destroy AWS instances with datestring more than a day old
no_output_timeout: 20m
@@ -858,25 +869,108 @@ jobs:
docker push quay.io/influxdb/oss-acceptance:latest
changelog:
parameters:
workflow:
type: string
docker:
- image: quay.io/influxdb/changelogger:d7093c409adedd8837ef51fa84be0d0f8319177a
- image: quay.io/influxdb/changelogger:latest
steps:
- checkout
- run:
name: Generate changelog
command: |
PREFIX=2.x .circleci/scripts/get-version
source "${BASH_ENV}"
- when:
condition:
or:
- equal: [ << parameters.workflow >>, nightly ]
- equal: [ << parameters.workflow >>, snapshot ]
steps:
- run: changelogger --product OSS
- when:
condition:
equal: [ << parameters.workflow >>, release ]
steps:
- run: |
export DESCRIPTION="In addition to the list of changes below, please also see the [official release \
notes](https://docs.influxdata.com/influxdb/${CIRCLE_BRANCH}/reference/release-notes/influxdb/) for \
other important information about this release."
if [[ "${RELEASE:-}" ]]
then
export DESCRIPTION="In addition to the list of changes below, please also see the [official release notes](https://docs.influxdata.com/influxdb/${VERSION}/reference/release-notes/influxdb/) for other important information about this release."
fi
PRODUCT="OSS" changelogger
changelogger --product OSS --release "<< pipeline.git.tag >>" --description "${DESCRIPTION}"
- store_artifacts:
path: changelog_artifacts/
- persist_to_workspace:
root: .
root: changelog_artifacts/
paths:
- changelog_artifacts
- .
changelog-upload:
parameters:
workflow:
type: string
docker:
- image: cimg/python:3.6
steps:
- attach_workspace:
at: /tmp/workspace
- when:
condition:
equal: [ << parameters.workflow >>, release ]
steps:
- aws-s3/copy:
aws-region: AWS_S3_REGION
aws-access-key-id: AWS_ACCESS_KEY_ID
aws-secret-access-key: AWS_SECRET_ACCESS_KEY
to: "s3://dl.influxdata.com/influxdb/releases/CHANGELOG.<< pipeline.git.tag >>.md"
from: /tmp/workspace/CHANGELOG.md
- when:
condition:
equal: [ << parameters.workflow >>, nightly ]
steps:
- aws-s3/copy:
aws-region: AWS_S3_REGION
aws-access-key-id: AWS_ACCESS_KEY_ID
aws-secret-access-key: AWS_SECRET_ACCESS_KEY
to: "s3://dl.influxdata.com/platform/nightlies/<< pipeline.git.branch >>/CHANGELOG.md"
from: /tmp/workspace/CHANGELOG.md
check_package_deb_amd64:
machine:
image: ubuntu-2204:current
resource_class: medium
steps:
- attach_workspace:
at: /tmp/workspace
- checkout
- run:
name: Validate Debian Package (AMD64)
command: |
sudo .circleci/scripts/package-validation/debian \
/tmp/workspace/artifacts/influxdb2*amd64.deb
check_package_deb_arm64:
machine:
image: ubuntu-2204:current
resource_class: arm.medium
steps:
- attach_workspace:
at: /tmp/workspace
- checkout
- run:
name: Validate Debian Package (ARM64)
command: |
sudo .circleci/scripts/package-validation/debian \
/tmp/workspace/artifacts/influxdb2*arm64.deb
check_package_rpm:
executor: linux-amd64
parameters:
arch:
type: string
steps:
- attach_workspace:
at: /tmp/workspace
- add_ssh_keys:
fingerprints:
- 3a:d1:7a:b7:57:d7:85:0b:76:79:85:51:38:f3:e4:67
- checkout
- run: |
AWS_ACCESS_KEY_ID=$TEST_AWS_ACCESS_KEY_ID \
AWS_SECRET_ACCESS_KEY=$TEST_AWS_SECRET_ACCESS_KEY \
.circleci/scripts/package-validation/redhat << parameters.arch >> /tmp/workspace/artifacts/influxdb2*.<< parameters.arch >>.rpm

@@ -1,170 +0,0 @@
#!/bin/bash
set -o errexit \
-o nounset \
-o pipefail
REGEX_RELEASE_VERSION='[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+'
if [[ ${RELEASE:-} ]]
then
# This ensures that release packages are built with valid versions.
# Unfortunately, `fpm` is fairly permissive with what version tags
# it accepts. This becomes a problem when `apt` or `dpkg` is used
# to install the package (both have strict version requirements).
if ! [[ ${VERSION} =~ ^${REGEX_RELEASE_VERSION}$ ]]
then
printf 'Release version is invalid!\n' >&2 && exit 1
fi
fi
function run_fpm()
{
if [[ ${1} == rpm ]]
then
case ${ARCH} in
arm64)
ARCH=aarch64
;;
amd64)
ARCH=x86_64
;;
esac
fi
pushd "${workspace}"
fpm \
--log error \
`# package description` \
--name influxdb2 \
--vendor InfluxData \
--description 'Distributed time-series database.' \
--url https://influxdata.com \
--maintainer support@influxdb.com \
--license MIT \
`# package configuration` \
--input-type dir \
--output-type "${1}" \
--architecture "${ARCH}" \
--version "${VERSION}" \
--iteration 1 \
`# package relationships` \
--deb-recommends influxdb2-cli \
--conflicts influxdb \
--depends curl \
`# package scripts` \
--before-install control/preinst \
--after-install control/postinst \
--after-remove control/postrm \
`# package files` \
--chdir fs/ \
--package /artifacts \
--directories /var/lib/influxdb \
--rpm-defattrdir 750 \
--rpm-defattrfile 750
popd
# `goreleaser` stripped off the package revision and replaced '_' with
# '-'. Since the dockerfiles expect the previous naming convention,
# this rewrites the package names to match. Version information is
# also stored as metadata within the package.
case ${1} in
deb)
mv "/artifacts/influxdb2_${VERSION}-1_${ARCH}.deb" \
"/artifacts/influxdb2-${VERSION}-${ARCH}.deb"
;;
rpm)
mv "/artifacts/influxdb2-${VERSION//-/_}-1.${ARCH}.rpm" \
"/artifacts/influxdb2-${VERSION//-/_}.${ARCH}.rpm"
;;
esac
}
sudo bash <<'EOF'
mkdir /artifacts && chown -R circleci: /artifacts
EOF
build_archive()
{
workspace="$(mktemp -d)"
mkdir "${workspace}/influxdb2_${PLAT}_${ARCH}"
# `failglob` is required because `bin/influxd_${PLAT}_${ARCH}/*` may
# not expand. This will prevent the package from being built without
# the included binary files. This will also display as an error
# in the CircleCI interface.
shopt -s failglob
cp -p LICENSE README.md "bin/influxd_${PLAT}_${ARCH}/"* \
"${workspace}/influxdb2_${PLAT}_${ARCH}/"
pushd "${workspace}"
if [[ ${PLAT} != windows ]]
then
# Using `find .. -type f` to supply a list of files to `tar` serves two
# purposes. The first being that `tar` won't construct a '.' directory
# in the root of the tarfile. The second being that this excludes
# empty directories from the tarfile.
find "influxdb2_${PLAT}_${ARCH}/" -type f \
| tar -czf "/artifacts/influxdb2-${VERSION}-${PLAT}-${ARCH}.tar.gz" -T -
else
# windows uses zip
find "influxdb2_${PLAT}_${ARCH}/" -type f \
| zip -r "/artifacts/influxdb2-${VERSION}-${PLAT}-${ARCH}.zip" -@
fi
popd
}
build_package_linux()
{
if [[ ${PLAT} != linux ]]
then
return 0
fi
workspace="$(mktemp -d)"
mkdir -p "${workspace}/fs/usr/bin"
# (see reasoning above)
shopt -s failglob
cp -rp .circleci/package/. "${workspace}/"
cp -p "bin/influxd_${PLAT}_${ARCH}/"* "${workspace}/fs/usr/bin"
run_fpm deb
run_fpm rpm
}
sign_artifacts()
{
# If this is not a release version, don't sign the artifacts. This
# prevents unauthorized PRs and branches from being signed with our
# signing key.
if [[ ! ${RELEASE:-} ]]
then
return 0
fi
# CircleCI mangles environment variables with newlines. This key contains
# escaped newlines. For `gpg` to import the key, it requires `echo -e` to
# expand the escape sequences.
gpg --batch --import <<<"$(echo -e "${GPG_PRIVATE_KEY}")"
# TODO(bnpfeife): replace with code signing server
for target in /artifacts/*
do
gpg \
--batch \
--pinentry-mode=loopback \
--passphrase "${PASSPHRASE}" \
--detach-sign \
--armor "${target}"
done
}
build_archive
build_package_linux
sign_artifacts
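The package rename in `run_fpm` relies on bash parameter expansion to convert dashes to underscores in RPM version fields. A minimal sketch of the idiom (the version string is illustrative):

```shell
# ${VERSION//-/_} replaces every '-' with '_'; RPM version fields may
# not contain dashes, while the upstream tag may (e.g. rc tags).
VERSION="2.7.0-rc1"
echo "influxdb2-${VERSION//-/_}.x86_64.rpm"  # → influxdb2-2.7.0_rc1.x86_64.rpm
```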


@ -0,0 +1,7 @@
#!/bin/bash
set -euo pipefail
if [[ "${MACHTYPE}" == "arm64-apple-darwin"* ]]
then
/usr/sbin/softwareupdate --install-rosetta --agree-to-license
fi


@ -0,0 +1,8 @@
#!/bin/bash
set -o errexit \
-o nounset \
-o pipefail
path="$(dirname "$(realpath "${BASH_SOURCE[0]}")")"
"${path}/validate" deb "${1}"


@ -0,0 +1,97 @@
#!/bin/bash
set -o errexit \
-o nounset \
-o pipefail
# $1 -> architecture
# $2 -> package path
case ${1} in
x86_64) arch=x86_64 ;;
aarch64) arch=arm64 ;;
esac
package="$(realpath "${2}")"
path="$(dirname "$(realpath "${BASH_SOURCE[0]}")")"
terraform_init() {
pushd "${path}/tf" &>/dev/null
# Unfortunately, CircleCI doesn't offer any RPM-based machine images.
# This is required to test the functionality of the systemd services
# (systemd doesn't run within docker containers). This will spawn an
# Amazon Linux instance in AWS.
terraform init
terraform apply \
-auto-approve \
-var "architecture=${1}" \
-var "package_path=${2}" \
-var "identifier=${CIRCLE_JOB}"
popd &>/dev/null
}
terraform_free() {
pushd "${path}/tf" &>/dev/null
terraform destroy \
-auto-approve \
-var "architecture=${1}" \
-var "package_path=${2}" \
-var "identifier=${CIRCLE_JOB}"
popd &>/dev/null
}
terraform_ip() {
pushd "${path}/tf" &>/dev/null
terraform output -raw node_ssh
popd &>/dev/null
}
# This ensures that the associated resources within AWS are released
# upon exit or when encountering an error. This is setup before the
# call to "terraform apply" so even partially initialized resources
# are released.
# shellcheck disable=SC2064
trap "terraform_free \"${arch}\" \"${package}\"" \
SIGINT \
SIGTERM \
ERR \
EXIT
function terraform_setup()
{
# TODO(bnpfeife): remove this once the executor is updated.
#
# Unfortunately, the terraform provided by the CircleCI executor is *terribly*
# out of date. Most Linux distributions are disabling the "ssh-rsa" public
# key algorithm, which this script uses to remote into the EC2 instance.
# This installs the latest version of terraform.
#
# Addendum: the "terraform_version" CircleCI option is broken!
sudo tee /etc/apt/sources.list.d/hashicorp.list <<EOF >/dev/null || true
deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main
EOF
curl -fL https://apt.releases.hashicorp.com/gpg | gpg --dearmor | \
sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg >/dev/null
export DEBIAN_FRONTEND=noninteractive
sudo -E apt-get update
sudo -E apt-get install --yes terraform
}
terraform_setup
terraform_init "${arch}" "${package}"
printf 'Setup complete! Testing %s... (this takes several minutes!)\n' "${1}"
# Since terraform *just* created this instance, the host key is not
# known. Therefore, we'll disable StrictHostKeyChecking so ssh does
# not wait for user input.
ssh -o 'StrictHostKeyChecking=no' "ec2-user@$(terraform_ip)" 'sudo ./validate rpm ./influxdb2.rpm'
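The `trap`-based teardown in this script is a general shell pattern worth noting: registering the handler for ERR and EXIT before `terraform apply` guarantees that even partially created resources are destroyed. A minimal sketch of the mechanism, with a stand-in cleanup command:

```shell
# The EXIT trap fires on success, failure, or interrupt, so cleanup
# runs exactly once regardless of how the body finishes.
output="$(bash -c 'trap "echo cleaned" EXIT; echo working')"
printf '%s\n' "${output}"
```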


@ -0,0 +1,114 @@
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 2.70"
}
}
}
variable "architecture" {
type = string
}
variable "identifier" {
type = string
}
variable "package_path" {
type = string
}
provider "aws" {
region = "us-east-1"
}
data "aws_ami" "test_ami" {
most_recent = true
filter {
name = "name"
values = ["al20*-ami-20*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
filter {
name = "architecture"
values = [var.architecture]
}
owners = ["137112412989"]
}
resource "aws_security_group" "influxdb_test_package_sg" {
ingress {
description = "Allow ssh connection"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
description = "Allow all outgoing"
from_port = 0
to_port = 0
protocol = "all"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_instance" "test_instance" {
count = 1
ami = data.aws_ami.test_ami.id
instance_type = var.architecture == "x86_64" ? "t2.micro" : "c6g.medium"
key_name = "circleci-oss-test"
vpc_security_group_ids = [aws_security_group.influxdb_test_package_sg.id]
tags = {
Name = format("circleci_%s_test_%s", var.identifier, var.architecture)
}
provisioner "file" {
source = var.package_path
destination = "/home/ec2-user/influxdb2.rpm"
connection {
type = "ssh"
user = "ec2-user"
host = self.public_dns
agent = true
}
}
provisioner "file" {
source = "../validate"
destination = "/home/ec2-user/validate"
connection {
type = "ssh"
user = "ec2-user"
host = self.public_dns
agent = true
}
}
provisioner "remote-exec" {
inline = [
"chmod +x /home/ec2-user/validate",
]
connection {
type = "ssh"
user = "ec2-user"
host = self.public_dns
agent = true
}
}
}
output "node_ssh" {
value = aws_instance.test_instance.0.public_dns
}


@ -0,0 +1,116 @@
#!/bin/bash
set -o errexit \
-o nounset \
-o pipefail
usage() {
cat <<'EOF'
usage: validate [type] [path]
Program:
This application performs sanity checks on the provided InfluxDB
package. InfluxDB should *not* be installed on the system before
running this application. This validates new installations and
performs specific checks relevant only to InfluxDB.
Options:
type Must be "deb" or "rpm". This option instructs the
application to use the package manager associated
with "type".
path Path to InfluxDB package to validate.
EOF
}
if [[ ! "${1:-}" ]] || [[ ! "${2:-}" ]]
then
(usage) && exit 1
fi
PACKAGE_TYPE="${1}"
PACKAGE_PATH="${2}"
install_deb() {
# When installing the package, ensure that the latest repository listings
# are available. This might be required so that all dependencies resolve.
# Since this needs to be run by CI, we supply "noninteractive" and "-y"
# so no prompts stall the pipeline.
export DEBIAN_FRONTEND=noninteractive
apt-get update
# "apt-get install" should be used instead of "dpkg -i", because "dpkg"
# does not resolve dependencies. "apt-get" requires that the package
# path looks like a path (either a full path or prefixed with "./").
apt-get install -y binutils "$(realpath "${PACKAGE_PATH}")"
}
install_rpm() {
# see "install_deb" for "update"
yum update -y
yum install -y binutils
yum localinstall -y "$(realpath "${PACKAGE_PATH}")"
}
case ${PACKAGE_TYPE}
in
deb)
(install_deb)
;;
rpm)
(install_rpm)
;;
esac
if ! which influxd &>/dev/null
then
printf 'ERROR: Failed to locate influxd executable!\n' >&2
exit 2
fi
# Test the assignment's status directly: "pipefail" propagates a readelf
# failure here even though "grep || true" masks grep's own status.
if ! NEEDED="$(readelf -d "$(which influxd)" | (grep 'NEEDED' || true))"
then
cat <<'EOF'
ERROR: readelf could not analyze the influxd executable! This
might be the consequence of installing a package built
for another platform OR invalid compiler/linker flags.
EOF
exit 2
fi
if [[ "${NEEDED:-}" ]]
then
cat <<'EOF'
ERROR: influxd is not statically linked! Some platforms may not
be able to run influxd without installing separate
dependencies.
EOF
exit 2
fi
PIE="$(readelf -d "$(which influxd)" | (grep 'Flags: PIE' || true))"
if [[ ! "${PIE:-}" ]]
then
printf 'ERROR: influxd not linked with "-fPIE"!\n' >&2
exit 2
fi
if ! systemctl is-active influxdb &>/dev/null
then
systemctl start influxdb
fi
for i in {0..2}
do
if ! systemctl is-active influxdb &>/dev/null
then
printf 'ERROR: influxdb service failed to start!\n'
exit 2
fi
# Sometimes the service fails several seconds or minutes after
# starting. This failure may not propagate to the original
# "systemctl start <influxdb>" command. Therefore, we'll
# poll the service several times before exiting.
sleep 30
done
printf 'Finished validating influxdb!\n'
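The retry loop above is intended to poll the service several times; in bash, `{0..2}` brace-expands to three words, while a bare `0..2` is a single literal word and would run the body only once. A minimal sketch of the distinction:

```shell
# Brace expansion yields one loop iteration per element.
count=0
for i in {0..2}
do
  count=$((count + 1))
done
echo "${count}"  # → 3
```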


@ -0,0 +1,383 @@
#!/usr/bin/env python3
import os
import re
import shutil
import subprocess
import tempfile
import yaml
def build_linux_archive(source, package, version):
"""
Builds a Linux Archive.
This archive contains the binary artifacts, configuration, and scripts
installed by the DEB and RPM packages. This mimics the file-system. So,
binaries are installed into "/usr/bin", configuration into "/etc", and
scripts into their relevant directories. Permissions match those of
the DEB and RPM packages.
"""
with tempfile.TemporaryDirectory() as workspace:
# fmt: off
shutil.copytree(os.path.join(package["source"], "fs"),
workspace, dirs_exist_ok=True, ignore=shutil.ignore_patterns(".keepdir"))
# fmt: on
for extra in package["extras"]:
# fmt: off
shutil.copy(extra["source"],
os.path.join(workspace, extra["target"]))
# fmt: on
for binary in package["binaries"]:
target = os.path.join(source["binary"], binary)
if os.path.exists(target):
# fmt: off
shutil.copy(target,
os.path.join(workspace, "usr/bin", os.path.basename(target)))
# fmt: on
# After the package contents are copied into the working directory,
# the permissions must be updated. Since the CI executor may change
# occasionally (images/ORBs deprecated over time), the umask may
# not be what we expect. This allows this packaging script to be
# agnostic to umask/system configuration.
for root, dirs, files in os.walk(workspace):
for target in [os.path.join(root, f) for f in files]:
# files in "usr/bin" are executable
if os.path.relpath(root, workspace) == "usr/bin":
os.chmod(target, 0o0755)
else:
# standard file permissions
os.chmod(target, 0o0644)
# fmt: off
shutil.chown(
target,
user = "root",
group = "root")
# fmt: on
for target in [os.path.join(root, d) for d in dirs]:
# standard directory permissions
os.chmod(target, 0o0755)
# fmt: off
shutil.chown(
target,
user = "root",
group = "root")
# fmt: on
for override in package["perm_overrides"]:
target = os.path.join(workspace, override["target"])
os.chmod(target, override["perms"])
# "owner" and "group" should be a system account and group with
# a well-defined UID and GID. Otherwise, the UID/GID might vary
# between systems. When the archive is extracted/package is
# installed, things may not behave as we would expect.
# fmt: off
shutil.chown(
target,
user = override["owner"],
group = override["group"])
# fmt: on
os.makedirs(source["target"], exist_ok=True)
# fmt: off
subprocess.check_call([
"tar", "-czf",
os.path.join(
source["target"],
"{:s}-{:s}_{:s}_{:s}.tar.gz".format(
package["name"],
version,
source["plat"],
source["arch"]
)
),
# ".keepdir" allows Git to track otherwise empty directories. The presence
# of the directories allows `package["extras"]` and `package["binaries"]`
# to be copied into the archive without requiring "mkdir". These
# directories are excluded from the final archive.
"--exclude", ".keepdir",
# This re-parents the contents of the archive with `package["name"]-version`.
# It is undocumented, however, when matching, "--transform" always removes
# the trailing slash. This regex must handle "./" and "./<more components>".
"--transform",
r"s#^.\(/\|$\)#{:s}-{:s}/#".format(
package["name"],
version
),
# compress everything within `workspace`
"-C", workspace, '.'
])
# fmt: on
def build_archive(source, package, version):
"""
Builds Archive for other (not-Linux) Platforms.
This archive contains binary artifacts and configuration. Unlike the
linux archive, which contains the configuration and matches the file-
system of the DEB and RPM packages, everything is located within the
root of the archive. However, permissions do match those of the DEB
and RPM packages.
"""
with tempfile.TemporaryDirectory() as workspace:
for extra in package["extras"]:
# fmt: off
target = os.path.join(workspace,
os.path.basename(extra["target"]))
# fmt: on
shutil.copy(extra["source"], target)
os.chmod(target, 0o0644)
# fmt: off
shutil.chown(
target,
user = "root",
group = "root")
# fmt: on
for binary in package["binaries"]:
origin = os.path.join(source["binary"], binary)
if os.path.exists(origin):
# Apply permissions to the copy inside the workspace rather than
# to the original binary.
target = os.path.join(workspace, os.path.basename(origin))
shutil.copy(origin, target)
os.chmod(target, 0o0755)
# fmt: off
shutil.chown(
target,
user = "root",
group = "root")
# fmt: on
os.makedirs(source["target"], exist_ok=True)
if source["plat"] == "darwin":
# fmt: off
subprocess.check_call([
"tar", "-czf",
os.path.join(
source["target"],
"{:s}-{:s}_{:s}_{:s}.tar.gz".format(
package["name"],
version,
source["plat"],
source["arch"]
)
),
# This re-parents the contents of the archive with `package["name"]-version`.
# It is undocumented, however, when matching, "--transform" always removes
# the trailing slash. This regex must handle "./" and "./<more components>".
"--transform",
r"s#^.\(/\|$\)#{:s}-{:s}/#".format(
package["name"],
version
),
# compress everything within `workspace`
"-C", workspace, '.'
])
# fmt: on
if source["plat"] == "windows":
# preserve current working directory
current = os.getcwd()
for root, dirs, files in os.walk(workspace):
for file in files:
# Unfortunately, it looks like "-r" cannot be combined with
# "-j" (which strips the path of input files). This changes
# directory to the current input file and *then* appends it
# to the archive.
os.chdir(os.path.join(workspace, root))
# fmt: off
subprocess.check_call([
"zip", "-r",
os.path.join(
os.path.join(current, source["target"]),
"{:s}-{:s}_{:s}_{:s}.zip".format(
package["name"],
version,
source["plat"],
source["arch"]
)
),
file
])
# fmt: on
# restore current working directory
os.chdir(current)
def build_linux_package(source, package, version):
"""
Constructs a DEB or RPM Package.
"""
with tempfile.TemporaryDirectory() as workspace:
# fmt: off
shutil.copytree(package["source"], workspace,
dirs_exist_ok=True, ignore=shutil.ignore_patterns(".keepdir"))
# fmt: on
for extra in package["extras"]:
# fmt: off
shutil.copy(extra["source"],
os.path.join(workspace, "fs", extra["target"]))
# fmt: on
for binary in package["binaries"]:
target = os.path.join(source["binary"], binary)
if os.path.exists(target):
# fmt: off
shutil.copy(target,
os.path.join(workspace, "fs/usr/bin", os.path.basename(target)))
# fmt: on
# After the package contents are copied into the working directory,
# the permissions must be updated. Since the CI executor may change
# occasionally (images/ORBs deprecated over time), the umask may
# not be what we expect. This allows this packaging script to be
# agnostic to umask/system configuration.
for root, dirs, files in os.walk(workspace):
for target in [os.path.join(root, f) for f in files]:
# files in "fs/usr/bin" are executable
if os.path.relpath(root, workspace) == "fs/usr/bin":
os.chmod(target, 0o0755)
else:
# standard file permissions
os.chmod(target, 0o0644)
# fmt: off
shutil.chown(
target,
user = "root",
group = "root")
# fmt: on
for target in [os.path.join(root, d) for d in dirs]:
# standard directory permissions
os.chmod(target, 0o0755)
# fmt: off
shutil.chown(
target,
user = "root",
group = "root")
# fmt: on
for override in package["perm_overrides"]:
target = os.path.join(workspace, "fs", override["target"])
os.chmod(target, override["perms"])
# "owner" and "group" should be a system account and group with
# a well-defined UID and GID. Otherwise, the UID/GID might vary
# between systems. When the archive is extracted/package is
# installed, things may not behave as we would expect.
# fmt: off
shutil.chown(
target,
user = override["owner"],
group = override["group"])
# fmt: on
os.makedirs(source["target"], exist_ok=True)
fpm_wrapper(source, package, version, workspace, "rpm")
fpm_wrapper(source, package, version, workspace, "deb")
def fpm_wrapper(source, package, version, workspace, package_type):
"""
Constructs either a DEB or an RPM package.
This wraps some configuration settings that are *only* relevant
to `fpm`.
"""
conffiles = []
for root, dirs, files in os.walk(os.path.join(workspace, "fs/etc")):
for file in files:
# fmt: off
conffiles.extend([
"--config-files", os.path.join("/", os.path.relpath(root, os.path.join(workspace, "fs")), file)
])
# fmt: on
# `source["arch"]` matches DEB architecture names. When building RPMs, it must
# be converted into RPM architecture names.
architecture = source["arch"]
if package_type == "rpm":
if architecture == "amd64":
architecture = "x86_64"
if architecture == "arm64":
architecture = "aarch64"
# fmt: off
subprocess.check_call([
"fpm",
"--log", "error",
# package description
"--name", package["name"],
"--vendor", "InfluxData",
"--description", "Distributed time-series database.",
"--url", "https://influxdata.com",
"--maintainer", "support@influxdb.com",
"--license", "MIT",
# package configuration
"--input-type", "dir",
"--output-type", package_type,
"--architecture", architecture,
"--version", version,
"--iteration", "1",
# maintainer scripts
"--after-install", os.path.join(workspace, "control/postinst"),
"--after-remove", os.path.join(workspace, "control/postrm"),
"--before-install", os.path.join(workspace, "control/preinst"),
# package relationships
"--deb-recommends", "influxdb2-cli",
"--conflicts", "influxdb",
"--depends", "curl",
# package conffiles
*conffiles,
# package options
"--chdir", os.path.join(workspace, "fs/"),
"--package", source["target"]
])
# fmt: on
circle_tag = os.getenv("CIRCLE_TAG", default="")
circle_sha = os.getenv("CIRCLE_SHA1", default="DEADBEEF")
# Determine if `circle_tag` matches the semantic version regex. Otherwise,
# assume that `circle_tag` is not intended to tag a release. The regex is
# permissive of what occurs after the semantic version. This allows for
# alphas, betas, and release candidates.
if re.match(r"^v[0-9]+\.[0-9]+\.[0-9]+", circle_tag):
version = circle_tag[1:]
else:
# When `circle_tag` cannot be used to construct the package version,
# use `circle_sha`. Since `circle_sha` can start with an alphabetic
# (non-numeric) character, prefix it with "2.x-".
version = "2.x-" + circle_sha[:8]
with open(".circleci/scripts/package/config.yaml") as file:
document = yaml.load(file, Loader=yaml.SafeLoader)
# fmt: off
for s, p in [
(s, p)
for s in document["sources" ]
for p in document["packages"]
]:
# fmt: on
if s["plat"] == "linux":
build_linux_archive(s, p, version)
build_linux_package(s, p, version)
if s["plat"] == "darwin" or s["plat"] == "windows":
build_archive(s, p, version)
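The version-selection logic above (use a semantic-version tag when present, otherwise fall back to a sha-based "2.x-" version) can be sketched in shell as well; `version_for` is a hypothetical helper, not part of the build script:

```shell
version_for() {
  local tag="${1}" sha="${2}"
  # Escaped dots so only real semantic versions (e.g. v2.7.0) match.
  if [[ "${tag}" =~ ^v[0-9]+\.[0-9]+\.[0-9]+ ]]
  then
    echo "${tag:1}"
  else
    # non-release builds fall back to a sha-based version
    echo "2.x-${sha:0:8}"
  fi
}
version_for "v2.7.0-rc1" "DEADBEEF"      # → 2.7.0-rc1
version_for "nightly" "0123456789abcdef" # → 2.x-01234567
```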


@ -0,0 +1,49 @@
---
sources:
- binary: /tmp/workspace/bin/influxd_linux_amd64/
target: artifacts/
arch: amd64
plat: linux
- binary: /tmp/workspace/bin/influxd_linux_arm64/
target: artifacts/
arch: arm64
plat: linux
- binary: /tmp/workspace/bin/influxd_darwin_amd64/
target: artifacts/
arch: amd64
plat: darwin
- binary: /tmp/workspace/bin/influxd_windows_amd64/
target: artifacts/
arch: amd64
plat: windows
packages:
- name: influxdb2
binaries:
- influxd
- influxd.exe
extras:
- source: LICENSE
target: usr/share/influxdb/LICENSE
- source: README.md
target: usr/share/influxdb/README.md
perm_overrides:
- owner: root
group: root
perms: 0755
target: usr/share/influxdb/influxdb2-upgrade.sh
- owner: root
group: root
perms: 0755
target: usr/lib/influxdb/scripts/init.sh
- owner: root
group: root
perms: 0755
target: usr/lib/influxdb/scripts/influxd-systemd-start.sh
source: .circleci/scripts/package/influxdb2


@ -111,8 +111,8 @@ elif [[ -f /etc/debian_version ]]; then
# Moving these lines out of this if statement would make `rpm -V` fail after installation.
chown -R -L influxdb:influxdb $LOG_DIR
chown -R -L influxdb:influxdb $DATA_DIR
chmod 750 $LOG_DIR
chmod 750 $DATA_DIR
chmod 755 $LOG_DIR
chmod 755 $DATA_DIR
# Debian/Ubuntu logic
if command -v systemctl &>/dev/null; then


@ -0,0 +1 @@
This prevents Git from removing this directory.


@ -15,7 +15,12 @@ KillMode=control-group
Restart=on-failure
Type=forking
PIDFile=/var/lib/influxdb/influxd.pid
StateDirectory=influxdb
StateDirectoryMode=0750
LogsDirectory=influxdb
LogsDirectoryMode=0750
UMask=0027
TimeoutStartSec=0
[Install]
WantedBy=multi-user.target


@ -24,6 +24,13 @@ NAME=influxdb
USER=influxdb
GROUP=influxdb
if [ -n "${INFLUXD_SERVICE_UMASK:-}" ]
then
umask "${INFLUXD_SERVICE_UMASK}"
else
umask 0027
fi
# Check for sudo or root privileges before continuing
if [ "$UID" != "0" ]; then
echo "You must be root to run this script"
@ -40,10 +47,11 @@ fi
# PID file for the daemon
PIDFILE=/var/run/influxdb/influxd.pid
PIDDIR=`dirname $PIDFILE`
if [ ! -d "$PIDDIR" ]; then
mkdir -p $PIDDIR
chown $USER:$GROUP $PIDDIR
piddir="$(dirname "${PIDFILE}")"
if [ ! -d "${piddir}" ]; then
mkdir -p "${piddir}"
chown "${USER}:${GROUP}" "${piddir}"
chmod 0750 "${piddir}"
fi
# Max open files
@ -58,16 +66,20 @@ if [ -z "$STDOUT" ]; then
STDOUT=/var/log/influxdb/influxd.log
fi
if [ ! -f "$STDOUT" ]; then
mkdir -p $(dirname $STDOUT)
outdir="$(dirname "${STDOUT}")"
if [ ! -d "${outdir}" ]; then
mkdir -p "${outdir}"
chmod 0750 "${outdir}"
fi
if [ -z "$STDERR" ]; then
STDERR=/var/log/influxdb/influxd.log
fi
if [ ! -f "$STDERR" ]; then
mkdir -p $(dirname $STDERR)
errdir="$(dirname "${STDERR}")"
if [ ! -d "${errdir}" ]; then
mkdir -p "${errdir}"
chmod 0750 "${errdir}"
fi
# Override init script variables with DEFAULT values


@ -0,0 +1,2 @@
PyYAML==6.0
regex==2023.6.3

.github/CODEOWNERS vendored

@ -3,14 +3,3 @@
#
# Here is information about how to configure this file:
# https://help.github.com/en/articles/about-code-owners
# monitoring team will help to review swagger.yml changes.
#
http/swagger.yml @influxdata/monitoring-team
# dev tools
/pkger/ @influxdata/tools-team
# Storage code
#/storage/ @influxdata/storage-team
#/tsdb/ @influxdata/storage-team


@ -108,7 +108,7 @@ Before you contribute to InfluxDB, please sign our [Individual Contributor Licen
### Install Go
InfluxDB requires Go 1.18.
InfluxDB requires Go 1.20.
At InfluxData we find `gvm`, a Go version manager, useful for installing Go.
For instructions on how to install it see [the gvm page on github](https://github.com/moovweb/gvm).
@ -116,8 +116,8 @@ For instructions on how to install it see [the gvm page on github](https://githu
After installing `gvm` you can install and set the default Go version by running the following:
```bash
$ gvm install go1.18
$ gvm use go1.18 --default
$ gvm install go1.20
$ gvm use go1.20 --default
```
InfluxDB requires Go module support. Set `GO111MODULE=on` or build the project outside of your `GOPATH` for it to succeed. For information about modules, please refer to the [wiki](https://github.com/golang/go/wiki/Modules).


@ -288,7 +288,7 @@ func (s *Service) DeleteAnnotations(ctx context.Context, orgID platform.ID, dele
return nil
}
// DeleteAnnoation deletes a single annotation by ID
// DeleteAnnotation deletes a single annotation by ID
func (s *Service) DeleteAnnotation(ctx context.Context, id platform.ID) error {
s.store.Mu.Lock()
defer s.store.Mu.Unlock()


@ -134,8 +134,7 @@ func TestAnnotationsCRUD(t *testing.T) {
}
t.Run("create annotations", func(t *testing.T) {
svc, clean := newTestService(t)
defer clean(t)
svc := newTestService(t)
tests := []struct {
name string
@ -169,9 +168,7 @@ func TestAnnotationsCRUD(t *testing.T) {
})
t.Run("select with filters", func(t *testing.T) {
svc, clean := newTestService(t)
defer clean(t)
svc := newTestService(t)
populateAnnotationsData(t, svc)
tests := []struct {
@ -335,8 +332,7 @@ func TestAnnotationsCRUD(t *testing.T) {
})
t.Run("get by id", func(t *testing.T) {
svc, clean := newTestService(t)
defer clean(t)
svc := newTestService(t)
anns := populateAnnotationsData(t, svc)
tests := []struct {
@ -383,8 +379,7 @@ func TestAnnotationsCRUD(t *testing.T) {
t.Run("delete multiple with a filter", func(t *testing.T) {
t.Run("delete by stream id", func(t *testing.T) {
svc, clean := newTestService(t)
defer clean(t)
svc := newTestService(t)
populateAnnotationsData(t, svc)
ctx := context.Background()
@ -485,8 +480,7 @@ func TestAnnotationsCRUD(t *testing.T) {
})
t.Run("delete with non-id filters", func(t *testing.T) {
svc, clean := newTestService(t)
defer clean(t)
svc := newTestService(t)
populateAnnotationsData(t, svc)
tests := []struct {
@ -590,8 +584,7 @@ func TestAnnotationsCRUD(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
svc, clean := newTestService(t)
defer clean(t)
svc := newTestService(t)
populateAnnotationsData(t, svc)
err := svc.DeleteAnnotations(ctx, tt.deleteOrgID, tt.filter)
@ -608,8 +601,7 @@ func TestAnnotationsCRUD(t *testing.T) {
})
t.Run("delete a single annotation by id", func(t *testing.T) {
svc, clean := newTestService(t)
defer clean(t)
svc := newTestService(t)
ans := populateAnnotationsData(t, svc)
tests := []struct {
@ -652,8 +644,7 @@ func TestAnnotationsCRUD(t *testing.T) {
})
t.Run("update a single annotation by id", func(t *testing.T) {
svc, clean := newTestService(t)
defer clean(t)
svc := newTestService(t)
ans := populateAnnotationsData(t, svc)
updatedTime := time.Time{}.Add(time.Minute)
@ -728,8 +719,7 @@ func TestAnnotationsCRUD(t *testing.T) {
})
t.Run("deleted streams cascade to deleted annotations", func(t *testing.T) {
svc, clean := newTestService(t)
defer clean(t)
svc := newTestService(t)
ctx := context.Background()
ans := populateAnnotationsData(t, svc)
@ -762,8 +752,7 @@ func TestAnnotationsCRUD(t *testing.T) {
})
t.Run("renamed streams are reflected in subsequent annotation queries", func(t *testing.T) {
svc, clean := newTestService(t)
defer clean(t)
svc := newTestService(t)
ctx := context.Background()
populateAnnotationsData(t, svc)
@ -817,8 +806,7 @@ func TestAnnotationsCRUD(t *testing.T) {
func TestStreamsCRUDSingle(t *testing.T) {
t.Parallel()
svc, clean := newTestService(t)
defer clean(t)
svc := newTestService(t)
ctx := context.Background()
orgID := *influxdbtesting.IDPtr(1)
@ -907,8 +895,7 @@ func TestStreamsCRUDSingle(t *testing.T) {
func TestStreamsCRUDMany(t *testing.T) {
t.Parallel()
svc, clean := newTestService(t)
defer clean(t)
svc := newTestService(t)
ctx := context.Background()
@ -1038,10 +1025,10 @@ func assertStreamNames(t *testing.T, want []string, got []influxdb.StoredStream)
require.ElementsMatch(t, want, storedNames)
}
func newTestService(t *testing.T) (*Service, func(t *testing.T)) {
func newTestService(t *testing.T) *Service {
t.Helper()
store, clean := sqlite.NewTestStore(t)
store := sqlite.NewTestStore(t)
ctx := context.Background()
sqliteMigrator := sqlite.NewMigrator(store, zap.NewNop())
@ -1050,5 +1037,5 @@ func newTestService(t *testing.T) (*Service, func(t *testing.T)) {
svc := NewService(store)
return svc, clean
return svc
}


@ -26,12 +26,12 @@ func (s *tenantService) FindUser(ctx context.Context, filter influxdb.UserFilter
return s.FindUserFn(ctx, filter)
}
//FindOrganizationByID calls FindOrganizationByIDF.
// FindOrganizationByID calls FindOrganizationByIDF.
func (s *tenantService) FindOrganizationByID(ctx context.Context, id platform.ID) (*influxdb.Organization, error) {
return s.FindOrganizationByIDF(ctx, id)
}
//FindOrganization calls FindOrganizationF.
// FindOrganization calls FindOrganizationF.
func (s *tenantService) FindOrganization(ctx context.Context, filter influxdb.OrganizationFilter) (*influxdb.Organization, error) {
return s.FindOrganizationF(ctx, filter)
}


@ -47,7 +47,7 @@ func IsAllowed(ctx context.Context, p influxdb.Permission) error {
return IsAllowedAll(ctx, []influxdb.Permission{p})
}
// IsAllowedAll checks to see if an action is authorized by ALL permissions.
// IsAllowedAny checks to see if an action is authorized by ANY permissions.
// Also see IsAllowed.
func IsAllowedAny(ctx context.Context, permissions []influxdb.Permission) error {
a, err := icontext.GetAuthorizer(ctx)
@ -97,9 +97,12 @@ func authorizeReadSystemBucket(ctx context.Context, bid, oid platform.ID) (influ
// AuthorizeReadBucket exists because buckets are a special case and should use this method.
// I.e., instead of:
// AuthorizeRead(ctx, influxdb.BucketsResourceType, b.ID, b.OrgID)
//
// AuthorizeRead(ctx, influxdb.BucketsResourceType, b.ID, b.OrgID)
//
// use:
// AuthorizeReadBucket(ctx, b.Type, b.ID, b.OrgID)
//
// AuthorizeReadBucket(ctx, b.Type, b.ID, b.OrgID)
func AuthorizeReadBucket(ctx context.Context, bt influxdb.BucketType, bid, oid platform.ID) (influxdb.Authorizer, influxdb.Permission, error) {
switch bt {
case influxdb.BucketTypeSystem:


@ -449,7 +449,7 @@ func MemberPermissions(orgID platform.ID) []Permission {
return ps
}
// MemberPermissions are the default permissions for those who can see a resource.
// MemberBucketPermission are the default permissions for those who can see a resource.
func MemberBucketPermission(bucketID platform.ID) Permission {
return Permission{Action: ReadAction, Resource: Resource{Type: BucketsResourceType, ID: &bucketID}}
}


@ -43,16 +43,7 @@ func newTestClient(t *testing.T) (*bolt.Client, func(), error) {
}
func TestClientOpen(t *testing.T) {
tempDir, err := os.MkdirTemp("", "")
if err != nil {
t.Fatalf("unable to create temporary test directory %v", err)
}
defer func() {
if err := os.RemoveAll(tempDir); err != nil {
t.Fatalf("unable to delete temporary test directory %s: %v", tempDir, err)
}
}()
tempDir := t.TempDir()
boltFile := filepath.Join(tempDir, "test", "bolt.db")


@ -55,8 +55,7 @@ func Test_BuildTSI_ShardID_Without_BucketID(t *testing.T) {
}
func Test_BuildTSI_Invalid_Index_Already_Exists(t *testing.T) {
tempDir := newTempDirectory(t, "", "build-tsi")
defer os.RemoveAll(tempDir)
tempDir := t.TempDir()
os.MkdirAll(filepath.Join(tempDir, "data", "12345", "autogen", "1", "index"), 0777)
os.MkdirAll(filepath.Join(tempDir, "wal", "12345", "autogen", "1"), 0777)
@ -75,8 +74,7 @@ func Test_BuildTSI_Invalid_Index_Already_Exists(t *testing.T) {
}
func Test_BuildTSI_Valid(t *testing.T) {
tempDir := newTempDirectory(t, "", "build-tsi")
defer os.RemoveAll(tempDir)
tempDir := t.TempDir()
os.MkdirAll(filepath.Join(tempDir, "data", "12345", "autogen", "1"), 0777)
os.MkdirAll(filepath.Join(tempDir, "wal", "12345", "autogen", "1"), 0777)
@ -119,8 +117,7 @@ func Test_BuildTSI_Valid(t *testing.T) {
}
func Test_BuildTSI_Valid_Batch_Size_Exceeded(t *testing.T) {
tempDir := newTempDirectory(t, "", "build-tsi")
defer os.RemoveAll(tempDir)
tempDir := t.TempDir()
os.MkdirAll(filepath.Join(tempDir, "data", "12345", "autogen", "1"), 0777)
os.MkdirAll(filepath.Join(tempDir, "wal", "12345", "autogen", "1"), 0777)
@ -164,8 +161,7 @@ func Test_BuildTSI_Valid_Batch_Size_Exceeded(t *testing.T) {
func Test_BuildTSI_Valid_Verbose(t *testing.T) {
// Set up temp directory structure
tempDir := newTempDirectory(t, "", "build-tsi")
defer os.RemoveAll(tempDir)
tempDir := t.TempDir()
os.MkdirAll(filepath.Join(tempDir, "data", "12345", "autogen", "1"), 0777)
os.MkdirAll(filepath.Join(tempDir, "wal", "12345", "autogen", "1"), 0777)
@ -231,8 +227,7 @@ func Test_BuildTSI_Valid_Compact_Series(t *testing.T) {
t.Skip("mmap implementation on Windows prevents series-file from shrinking during compaction")
}
tempDir := newTempDirectory(t, "", "build-tsi")
defer os.RemoveAll(tempDir)
tempDir := t.TempDir()
os.MkdirAll(filepath.Join(tempDir, "data", "12345", "_series"), 0777)
@ -395,15 +390,6 @@ func runCommand(t *testing.T, params cmdParams, outs cmdOuts) {
}
}
func newTempDirectory(t *testing.T, parentDir string, dirName string) string {
t.Helper()
dir, err := os.MkdirTemp(parentDir, dirName)
require.NoError(t, err)
return dir
}
func newTempTsmFile(t *testing.T, path string, values []tsm1.Value) {
t.Helper()


@@ -12,8 +12,7 @@ import (
)
func Test_DeleteTSM_EmptyFile(t *testing.T) {
dir, file := createTSMFile(t, tsmParams{})
defer os.RemoveAll(dir)
_, file := createTSMFile(t, tsmParams{})
runCommand(t, testParams{
file: file,
@@ -23,10 +22,9 @@ func Test_DeleteTSM_EmptyFile(t *testing.T) {
}
func Test_DeleteTSM_WrongExt(t *testing.T) {
dir, file := createTSMFile(t, tsmParams{
_, file := createTSMFile(t, tsmParams{
improperExt: true,
})
defer os.RemoveAll(dir)
runCommand(t, testParams{
file: file,
@@ -37,7 +35,6 @@ func Test_DeleteTSM_WrongExt(t *testing.T) {
func Test_DeleteTSM_NotFile(t *testing.T) {
dir, _ := createTSMFile(t, tsmParams{})
defer os.RemoveAll(dir)
runCommand(t, testParams{
file: dir,
@@ -47,10 +44,9 @@ func Test_DeleteTSM_NotFile(t *testing.T) {
}
func Test_DeleteTSM_SingleEntry_Valid(t *testing.T) {
dir, file := createTSMFile(t, tsmParams{
_, file := createTSMFile(t, tsmParams{
keys: []string{"cpu"},
})
defer os.RemoveAll(dir)
runCommand(t, testParams{
file: file,
@@ -60,11 +56,10 @@ func Test_DeleteTSM_SingleEntry_Valid(t *testing.T) {
}
func Test_DeleteTSM_SingleEntry_Invalid(t *testing.T) {
dir, file := createTSMFile(t, tsmParams{
_, file := createTSMFile(t, tsmParams{
invalid: true,
keys: []string{"cpu"},
})
defer os.RemoveAll(dir)
runCommand(t, testParams{
file: file,
@@ -74,10 +69,9 @@ func Test_DeleteTSM_SingleEntry_Invalid(t *testing.T) {
}
func Test_DeleteTSM_ManyEntries_Valid(t *testing.T) {
dir, file := createTSMFile(t, tsmParams{
_, file := createTSMFile(t, tsmParams{
keys: []string{"cpu", "foobar", "mem"},
})
defer os.RemoveAll(dir)
runCommand(t, testParams{
file: file,
@@ -86,11 +80,10 @@ func Test_DeleteTSM_ManyEntries_Valid(t *testing.T) {
}
func Test_DeleteTSM_ManyEntries_Invalid(t *testing.T) {
dir, file := createTSMFile(t, tsmParams{
_, file := createTSMFile(t, tsmParams{
invalid: true,
keys: []string{"cpu", "foobar", "mem"},
})
defer os.RemoveAll(dir)
runCommand(t, testParams{
file: file,
@@ -154,10 +147,10 @@ type tsmParams struct {
func createTSMFile(t *testing.T, params tsmParams) (string, string) {
t.Helper()
dir, err := os.MkdirTemp("", "deletetsm")
require.NoError(t, err)
dir := t.TempDir()
var file *os.File
var err error
if !params.improperExt {
file, err = os.CreateTemp(dir, "*."+tsm1.TSMFileExtension)
} else {


@@ -19,9 +19,7 @@ func Test_DumpTSI_NoError(t *testing.T) {
cmd.SetOut(b)
// Create the temp-dir for our un-tared files to live in
dir, err := os.MkdirTemp("", "dumptsitest-")
require.NoError(t, err)
defer os.RemoveAll(dir)
dir := t.TempDir()
// Untar the test data
file, err := os.Open("../tsi-test-data.tar.gz")


@@ -21,8 +21,7 @@ func Test_DumpTSM_NoFile(t *testing.T) {
}
func Test_DumpTSM_EmptyFile(t *testing.T) {
dir, file := makeTSMFile(t, tsmParams{})
defer os.RemoveAll(dir)
_, file := makeTSMFile(t, tsmParams{})
runCommand(t, cmdParams{
file: file,
@@ -32,10 +31,9 @@ func Test_DumpTSM_EmptyFile(t *testing.T) {
}
func Test_DumpTSM_WrongExt(t *testing.T) {
dir, file := makeTSMFile(t, tsmParams{
_, file := makeTSMFile(t, tsmParams{
wrongExt: true,
})
defer os.RemoveAll(dir)
runCommand(t, cmdParams{
file: file,
@@ -46,7 +44,6 @@ func Test_DumpTSM_WrongExt(t *testing.T) {
func Test_DumpTSM_NotFile(t *testing.T) {
dir, _ := makeTSMFile(t, tsmParams{})
defer os.RemoveAll(dir)
runCommand(t, cmdParams{
file: dir,
@@ -56,10 +53,9 @@ func Test_DumpTSM_NotFile(t *testing.T) {
}
func Test_DumpTSM_Valid(t *testing.T) {
dir, file := makeTSMFile(t, tsmParams{
_, file := makeTSMFile(t, tsmParams{
keys: []string{"cpu"},
})
defer os.RemoveAll(dir)
runCommand(t, cmdParams{
file: file,
@@ -72,11 +68,10 @@ func Test_DumpTSM_Valid(t *testing.T) {
}
func Test_DumpTSM_Invalid(t *testing.T) {
dir, file := makeTSMFile(t, tsmParams{
_, file := makeTSMFile(t, tsmParams{
invalid: true,
keys: []string{"cpu"},
})
defer os.RemoveAll(dir)
runCommand(t, cmdParams{
file: file,
@@ -86,10 +81,9 @@ func Test_DumpTSM_Invalid(t *testing.T) {
}
func Test_DumpTSM_ManyKeys(t *testing.T) {
dir, file := makeTSMFile(t, tsmParams{
_, file := makeTSMFile(t, tsmParams{
keys: []string{"cpu", "foobar", "mem"},
})
defer os.RemoveAll(dir)
runCommand(t, cmdParams{
file: file,
@@ -103,10 +97,9 @@ func Test_DumpTSM_ManyKeys(t *testing.T) {
}
func Test_DumpTSM_FilterKey(t *testing.T) {
dir, file := makeTSMFile(t, tsmParams{
_, file := makeTSMFile(t, tsmParams{
keys: []string{"cpu", "foobar", "mem"},
})
defer os.RemoveAll(dir)
runCommand(t, cmdParams{
file: file,
@@ -187,8 +180,7 @@ type tsmParams struct {
func makeTSMFile(t *testing.T, params tsmParams) (string, string) {
t.Helper()
dir, err := os.MkdirTemp("", "dumptsm")
require.NoError(t, err)
dir := t.TempDir()
ext := tsm1.TSMFileExtension
if params.wrongExt {


@@ -36,8 +36,7 @@ func Test_DumpWal_Bad_Path(t *testing.T) {
func Test_DumpWal_Wrong_File_Type(t *testing.T) {
// Creates a temporary .txt file (wrong extension)
dir, file := newTempWal(t, false, false)
defer os.RemoveAll(dir)
file := newTempWal(t, false, false)
params := cmdParams{
walPaths: []string{file},
@@ -48,8 +47,7 @@ func Test_DumpWal_Wrong_File_Type(t *testing.T) {
}
func Test_DumpWal_File_Valid(t *testing.T) {
dir, file := newTempWal(t, true, false)
defer os.RemoveAll(dir)
file := newTempWal(t, true, false)
params := cmdParams{
walPaths: []string{file},
@@ -67,8 +65,7 @@ func Test_DumpWal_File_Valid(t *testing.T) {
}
func Test_DumpWal_Find_Duplicates_None(t *testing.T) {
dir, file := newTempWal(t, true, false)
defer os.RemoveAll(dir)
file := newTempWal(t, true, false)
params := cmdParams{
findDuplicates: true,
@@ -80,8 +77,7 @@ func Test_DumpWal_Find_Duplicates_None(t *testing.T) {
}
func Test_DumpWal_Find_Duplicates_Present(t *testing.T) {
dir, file := newTempWal(t, true, true)
defer os.RemoveAll(dir)
file := newTempWal(t, true, true)
params := cmdParams{
findDuplicates: true,
@@ -92,21 +88,25 @@ func Test_DumpWal_Find_Duplicates_Present(t *testing.T) {
runCommand(t, params)
}
func newTempWal(t *testing.T, validExt bool, withDuplicate bool) (string, string) {
func newTempWal(t *testing.T, validExt bool, withDuplicate bool) string {
t.Helper()
dir, err := os.MkdirTemp("", "dump-wal")
require.NoError(t, err)
var file *os.File
dir := t.TempDir()
if !validExt {
file, err := os.CreateTemp(dir, "dumpwaltest*.txt")
require.NoError(t, err)
return dir, file.Name()
t.Cleanup(func() {
file.Close()
})
return file.Name()
}
file, err = os.CreateTemp(dir, "dumpwaltest*"+"."+tsm1.WALFileExtension)
file, err := os.CreateTemp(dir, "dumpwaltest*"+"."+tsm1.WALFileExtension)
require.NoError(t, err)
t.Cleanup(func() {
file.Close()
})
p1 := tsm1.NewValue(10, 1.1)
p2 := tsm1.NewValue(1, int64(1))
@@ -132,7 +132,7 @@ func newTempWal(t *testing.T, validExt bool, withDuplicate bool) (string, string
// Write to WAL File
writeWalFile(t, file, values)
return dir, file.Name()
return file.Name()
}
func writeWalFile(t *testing.T, file *os.File, vals map[string][]tsm1.Value) {


@@ -8,8 +8,10 @@ import (
"github.com/influxdata/influxdb/v2/cmd/influxd/inspect/dump_wal"
"github.com/influxdata/influxdb/v2/cmd/influxd/inspect/export_index"
"github.com/influxdata/influxdb/v2/cmd/influxd/inspect/export_lp"
"github.com/influxdata/influxdb/v2/cmd/influxd/inspect/report_db"
"github.com/influxdata/influxdb/v2/cmd/influxd/inspect/report_tsi"
"github.com/influxdata/influxdb/v2/cmd/influxd/inspect/report_tsm"
typecheck "github.com/influxdata/influxdb/v2/cmd/influxd/inspect/type_conflicts"
"github.com/influxdata/influxdb/v2/cmd/influxd/inspect/verify_seriesfile"
"github.com/influxdata/influxdb/v2/cmd/influxd/inspect/verify_tombstone"
"github.com/influxdata/influxdb/v2/cmd/influxd/inspect/verify_tsm"
@@ -33,6 +35,22 @@ func NewCommand(v *viper.Viper) (*cobra.Command, error) {
if err != nil {
return nil, err
}
reportDB, err := report_db.NewReportDBCommand(v)
if err != nil {
return nil, err
}
checkSchema, err := typecheck.NewCheckSchemaCommand(v)
if err != nil {
return nil, err
}
mergeSchema, err := typecheck.NewMergeSchemaCommand(v)
if err != nil {
return nil, err
}
base.AddCommand(exportLp)
base.AddCommand(report_tsi.NewReportTSICommand())
base.AddCommand(export_index.NewExportIndexCommand())
@@ -46,6 +64,9 @@ func NewCommand(v *viper.Viper) (*cobra.Command, error) {
base.AddCommand(verify_wal.NewVerifyWALCommand())
base.AddCommand(report_tsm.NewReportTSMCommand())
base.AddCommand(build_tsi.NewBuildTSICommand())
base.AddCommand(reportDB)
base.AddCommand(checkSchema)
base.AddCommand(mergeSchema)
return base, nil
}


@@ -0,0 +1,242 @@
package aggregators
import (
"fmt"
"strings"
"sync"
"text/tabwriter"
report "github.com/influxdata/influxdb/v2/cmd/influxd/inspect/report_tsm"
"github.com/influxdata/influxdb/v2/models"
)
type rollupNodeMap map[string]RollupNode
type RollupNode interface {
sync.Locker
report.Counter
Children() rollupNodeMap
RecordSeries(bucket, rp, ms string, key, field []byte, tags models.Tags)
Print(tw *tabwriter.Writer, printTags bool, bucket, rp, ms string) error
isLeaf() bool
child(key string, isLeaf bool) NodeWrapper
}
type NodeWrapper struct {
RollupNode
}
var detailedHeader = []string{"bucket", "retention policy", "measurement", "series", "fields", "tag total", "tags"}
var simpleHeader = []string{"bucket", "retention policy", "measurement", "series"}
type RollupNodeFactory struct {
header []string
EstTitle string
NewNode func(isLeaf bool) NodeWrapper
counter func() report.Counter
}
var nodeFactory *RollupNodeFactory
func CreateNodeFactory(detailed, exact bool) *RollupNodeFactory {
estTitle := " (est.)"
newCounterFn := report.NewHLLCounter
if exact {
newCounterFn = report.NewExactCounter
estTitle = ""
}
if detailed {
nodeFactory = newDetailedNodeFactory(newCounterFn, estTitle)
} else {
nodeFactory = newSimpleNodeFactory(newCounterFn, estTitle)
}
return nodeFactory
}
func (f *RollupNodeFactory) PrintHeader(tw *tabwriter.Writer) error {
_, err := fmt.Fprintln(tw, strings.Join(f.header, "\t"))
return err
}
func (f *RollupNodeFactory) PrintDivider(tw *tabwriter.Writer) error {
divLine := f.makeTabDivider()
_, err := fmt.Fprintln(tw, divLine)
return err
}
func (f *RollupNodeFactory) makeTabDivider() string {
div := make([]string, 0, len(f.header))
for _, s := range f.header {
div = append(div, strings.Repeat("-", len(s)))
}
return strings.Join(div, "\t")
}
func newSimpleNodeFactory(newCounterFn func() report.Counter, est string) *RollupNodeFactory {
return &RollupNodeFactory{
header: simpleHeader,
EstTitle: est,
NewNode: func(isLeaf bool) NodeWrapper { return NodeWrapper{newSimpleNode(isLeaf, newCounterFn)} },
counter: newCounterFn,
}
}
func newDetailedNodeFactory(newCounterFn func() report.Counter, est string) *RollupNodeFactory {
return &RollupNodeFactory{
header: detailedHeader,
EstTitle: est,
NewNode: func(isLeaf bool) NodeWrapper { return NodeWrapper{newDetailedNode(isLeaf, newCounterFn)} },
counter: newCounterFn,
}
}
type simpleNode struct {
sync.Mutex
report.Counter
rollupNodeMap
}
func (s *simpleNode) Children() rollupNodeMap {
return s.rollupNodeMap
}
func (s *simpleNode) child(key string, isLeaf bool) NodeWrapper {
if s.isLeaf() {
panic("Trying to get the child to a leaf node")
}
s.Lock()
defer s.Unlock()
c, ok := s.Children()[key]
if !ok {
c = nodeFactory.NewNode(isLeaf)
s.Children()[key] = c
}
return NodeWrapper{c}
}
func (s *simpleNode) isLeaf() bool {
return s.Children() == nil
}
func newSimpleNode(isLeaf bool, fn func() report.Counter) *simpleNode {
s := &simpleNode{Counter: fn()}
if !isLeaf {
s.rollupNodeMap = make(rollupNodeMap)
} else {
s.rollupNodeMap = nil
}
return s
}
func (s *simpleNode) RecordSeries(bucket, rp, _ string, key, _ []byte, _ models.Tags) {
s.Lock()
defer s.Unlock()
s.recordSeriesNoLock(bucket, rp, key)
}
func (s *simpleNode) recordSeriesNoLock(bucket, rp string, key []byte) {
s.Add([]byte(fmt.Sprintf("%s.%s.%s", bucket, rp, key)))
}
func (s *simpleNode) Print(tw *tabwriter.Writer, _ bool, bucket, rp, ms string) error {
_, err := fmt.Fprintf(tw, "%s\t%s\t%s\t%d\n",
bucket,
rp,
ms,
s.Count())
return err
}
type detailedNode struct {
simpleNode
fields report.Counter
tags map[string]report.Counter
}
func newDetailedNode(isLeaf bool, fn func() report.Counter) *detailedNode {
d := &detailedNode{
simpleNode: simpleNode{
Counter: fn(),
},
fields: fn(),
tags: make(map[string]report.Counter),
}
if !isLeaf {
d.simpleNode.rollupNodeMap = make(rollupNodeMap)
} else {
d.simpleNode.rollupNodeMap = nil
}
return d
}
func (d *detailedNode) RecordSeries(bucket, rp, ms string, key, field []byte, tags models.Tags) {
d.Lock()
defer d.Unlock()
d.simpleNode.recordSeriesNoLock(bucket, rp, key)
d.fields.Add([]byte(fmt.Sprintf("%s.%s.%s.%s", bucket, rp, ms, field)))
for _, t := range tags {
// Add database, retention policy, and measurement
// to correctly aggregate in inner (non-leaf) nodes
canonTag := fmt.Sprintf("%s.%s.%s.%s", bucket, rp, ms, t.Key)
tc, ok := d.tags[canonTag]
if !ok {
tc = nodeFactory.counter()
d.tags[canonTag] = tc
}
tc.Add(t.Value)
}
}
func (d *detailedNode) Print(tw *tabwriter.Writer, printTags bool, bucket, rp, ms string) error {
seriesN := d.Count()
fieldsN := d.fields.Count()
var tagKeys []string
tagN := uint64(0)
if printTags {
tagKeys = make([]string, 0, len(d.tags))
}
for k, v := range d.tags {
c := v.Count()
tagN += c
if printTags {
tagKeys = append(tagKeys, fmt.Sprintf("%q: %d", k[strings.LastIndex(k, ".")+1:], c))
}
}
_, err := fmt.Fprintf(tw, "%s\t%s\t%s\t%d\t%d\t%d\t%s\n",
bucket,
rp,
ms,
seriesN,
fieldsN,
tagN,
strings.Join(tagKeys, ", "))
return err
}
func (r *NodeWrapper) Record(depth, totalDepth int, bucket, rp, measurement string, key []byte, field []byte, tags models.Tags) {
r.RecordSeries(bucket, rp, measurement, key, field, tags)
switch depth {
case 2:
if depth < totalDepth {
// Create measurement level in tree
c := r.child(measurement, true)
c.RecordSeries(bucket, rp, measurement, key, field, tags)
}
case 1:
if depth < totalDepth {
// Create retention policy level in tree
c := r.child(rp, (depth+1) == totalDepth)
c.Record(depth+1, totalDepth, bucket, rp, measurement, key, field, tags)
}
case 0:
if depth < totalDepth {
// Create database level in tree
c := r.child(bucket, (depth+1) == totalDepth)
c.Record(depth+1, totalDepth, bucket, rp, measurement, key, field, tags)
}
default:
}
}


@@ -0,0 +1,330 @@
package aggregators
import (
"bytes"
"sync"
"testing"
"github.com/influxdata/influxdb/v2/models"
"github.com/stretchr/testify/require"
)
type result struct {
fields uint64
tags uint64
series uint64
}
type test struct {
db string
rp string
key []byte
}
// Ensure that tags and fields and series which differ only in database, retention policy, or measurement
// are correctly counted.
func Test_canonicalize(t *testing.T) {
totalDepth := 3
// measurement,tag1=tag1_value1,tag2=tag2_value1#!~#field1
tests := []test{
{
db: "db1",
rp: "rp1",
key: []byte("m1,t1=t1_v1,t2=t2_v1#!~#f1"),
},
{
db: "db1",
rp: "rp1",
key: []byte("m1,t1=t1_v2,t2=t2_v1#!~#f1"),
},
{
db: "db1",
rp: "rp1",
key: []byte("m1,t1=t1_v1,t2=t2_v2#!~#f1"),
},
{
db: "db1",
rp: "rp1",
key: []byte("m1,t1=t1_v2,t2=t2_v2#!~#f1"),
},
{
db: "db1",
rp: "rp1",
key: []byte("m1,t1=t1_v2,t2=t2_v2#!~#f2"),
},
{
db: "db1",
rp: "rp2",
key: []byte("m1,t1=t1_v1,t2=t2_v1#!~#f1"),
},
{
db: "db1",
rp: "rp2",
key: []byte("m1,t1=t1_v2,t2=t2_v1#!~#f1"),
},
{
db: "db1",
rp: "rp2",
key: []byte("m1,t1=t1_v1,t2=t2_v2#!~#f1"),
},
{
db: "db1",
rp: "rp2",
key: []byte("m1,t1=t1_v2,t2=t2_v2#!~#f3"),
},
{
db: "db1",
rp: "rp2",
key: []byte("m1,t1=t1_v2,t2=t2_v2#!~#f2"),
},
{
db: "db1",
rp: "rp1",
key: []byte("m2,t1=t1_v1,t2=t2_v1#!~#f1"),
},
{
db: "db1",
rp: "rp1",
key: []byte("m2,t1=t1_v2,t2=t2_v1#!~#f1"),
},
{
db: "db1",
rp: "rp1",
key: []byte("m2,t1=t1_v1,t2=t2_v2#!~#f1"),
},
{
db: "db1",
rp: "rp1",
key: []byte("m2,t1=t1_v2,t2=t2_v2#!~#f1"),
},
{
db: "db1",
rp: "rp1",
key: []byte("m2,t1=t1_v2,t2=t2_v2#!~#f2"),
},
{
db: "db1",
rp: "rp2",
key: []byte("m2,t1=t1_v1,t2=t2_v1#!~#f1"),
},
{
db: "db1",
rp: "rp2",
key: []byte("m2,t1=t1_v2,t2=t2_v1#!~#f1"),
},
{
db: "db1",
rp: "rp2",
key: []byte("m2,t1=t1_v1,t2=t2_v2#!~#f1"),
},
{
db: "db1",
rp: "rp2",
key: []byte("m2,t1=t1_v2,t2=t2_v2#!~#f1"),
},
{
db: "db1",
rp: "rp2",
key: []byte("m2,t1=t1_v2,t2=t2_v2#!~#f2"),
},
{
db: "db2",
rp: "rp1",
key: []byte("m1,t1=t1_v1,t2=t2_v1#!~#f1"),
},
{
db: "db2",
rp: "rp1",
key: []byte("m1,t1=t1_v2,t2=t2_v1#!~#f1"),
},
{
db: "db2",
rp: "rp1",
key: []byte("m1,t1=t1_v1,t2=t2_v2#!~#f1"),
},
{
db: "db2",
rp: "rp1",
key: []byte("m1,t1=t1_v2,t2=t2_v2#!~#f1"),
},
{
db: "db2",
rp: "rp1",
key: []byte("m1,t1=t1_v2,t2=t2_v2#!~#f2"),
},
{
db: "db2",
rp: "rp2",
key: []byte("m1,t1=t1_v1,t2=t2_v1#!~#f1"),
},
{
db: "db2",
rp: "rp2",
key: []byte("m1,t1=t1_v2,t2=t2_v1#!~#f1"),
},
{
db: "db2",
rp: "rp2",
key: []byte("m1,t1=t1_v1,t2=t2_v2#!~#f1"),
},
{
db: "db2",
rp: "rp2",
key: []byte("m1,t1=t1_v2,t2=t2_v2#!~#f1"),
},
{
db: "db2",
rp: "rp2",
key: []byte("m1,t1=t1_v2,t2=t2_v2#!~#f2"),
},
{
db: "db2",
rp: "rp1",
key: []byte("m2,t1=t1_v1,t2=t2_v1#!~#f1"),
},
{
db: "db2",
rp: "rp1",
key: []byte("m2,t1=t1_v2,t2=t2_v1#!~#f1"),
},
{
db: "db2",
rp: "rp1",
key: []byte("m2,t1=t1_v1,t2=t2_v2#!~#f1"),
},
{
db: "db2",
rp: "rp1",
key: []byte("m2,t1=t1_v2,t2=t2_v2#!~#f1"),
},
{
db: "db2",
rp: "rp1",
key: []byte("m2,t1=t1_v2,t2=t2_v2#!~#f2"),
},
{
db: "db2",
rp: "rp2",
key: []byte("m2,t1=t1_v1,t2=t2_v1#!~#f1"),
},
{
db: "db2",
rp: "rp2",
key: []byte("m2,t1=t1_v2,t2=t2_v1#!~#f1"),
},
{
db: "db2",
rp: "rp2",
key: []byte("m2,t1=t1_v1,t2=t2_v2#!~#f1"),
},
{
db: "db2",
rp: "rp2",
key: []byte("m2,t1=t1_v2,t2=t2_v2#!~#f1"),
},
{
db: "db2",
rp: "rp2",
key: []byte("m2,t1=t1_v2,t2=t2_v2#!~#f2"),
},
}
results := map[string]map[string]map[string]*result{
"db1": {
"rp1": {
"m1": {2, 4, 5},
"m2": {2, 4, 5},
"": {4, 8, 10},
},
"rp2": {
"m1": {3, 4, 5},
"m2": {2, 4, 5},
"": {5, 8, 10},
},
"": {
"": {9, 16, 20},
},
},
"db2": {
"rp1": {
"m1": {2, 4, 5},
"m2": {2, 4, 5},
"": {4, 8, 10},
},
"rp2": {
"m1": {2, 4, 5},
"m2": {2, 4, 5},
"": {4, 8, 10},
},
"": {
"": {8, 16, 20},
},
},
"": {
"": {
"": {17, 32, 40},
},
},
}
testLoop(t, false, true, totalDepth, tests, results)
testLoop(t, true, true, totalDepth, tests, results)
testLoop(t, false, false, totalDepth, tests, results)
testLoop(t, true, false, totalDepth, tests, results)
}
func testLoop(t *testing.T, detailed bool, exact bool, totalDepth int, tests []test, results map[string]map[string]map[string]*result) {
factory := CreateNodeFactory(detailed, exact)
tree := factory.NewNode(totalDepth == 0)
wg := sync.WaitGroup{}
tf := func() {
for i := range tests {
seriesKey, field, _ := bytes.Cut(tests[i].key, []byte("#!~#"))
measurement, tags := models.ParseKey(seriesKey)
tree.Record(0, totalDepth, tests[i].db, tests[i].rp, measurement, tests[i].key, field, tags)
}
wg.Done()
}
const concurrency = 5
wg.Add(concurrency)
for j := 0; j < concurrency; j++ {
go tf()
}
wg.Wait()
for d, db := range tree.Children() {
for r, rp := range db.Children() {
for m, measure := range rp.Children() {
checkNode(t, measure, results[d][r][m], d, r, m)
}
checkNode(t, rp, results[d][r][""], d, r, "")
}
checkNode(t, db, results[d][""][""], d, "", "")
}
checkNode(t, tree, results[""][""][""], "", "", "")
}
func checkNode(t *testing.T, measure RollupNode, results *result, d string, r string, m string) {
mr, ok := measure.(NodeWrapper)
if !ok {
t.Fatalf("internal error: expected a NodeWrapper type")
}
switch node := mr.RollupNode.(type) {
case *detailedNode:
require.Equalf(t, results.series, node.Count(), "series count wrong. db: %q, rp: %q, ms: %q", d, r, m)
require.Equalf(t, results.fields, node.fields.Count(), "field count wrong. db: %q, rp: %q, ms: %q", d, r, m)
tagSum := uint64(0)
for _, t := range node.tags {
tagSum += t.Count()
}
require.Equalf(t, results.tags, tagSum, "tag value count wrong. db: %q, rp: %q, ms: %q", d, r, m)
case *simpleNode:
require.Equalf(t, results.series, node.Count(), "series count wrong. db: %q, rp: %q, ms: %q", d, r, m)
default:
t.Fatalf("internal error: unknown node type")
}
}


@@ -0,0 +1,189 @@
package report_db
import (
"bytes"
"context"
"errors"
"fmt"
"io"
"os"
"text/tabwriter"
"github.com/influxdata/influxdb/v2/cmd/influxd/inspect/report_db/aggregators"
"github.com/influxdata/influxdb/v2/kit/cli"
"github.com/influxdata/influxdb/v2/models"
"github.com/influxdata/influxdb/v2/pkg/reporthelper"
"github.com/influxdata/influxdb/v2/tsdb/engine/tsm1"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"golang.org/x/sync/errgroup"
)
// ReportDB represents the program execution for "influxd report-db".
type ReportDB struct {
// Standard input/output, overridden for testing.
Stderr io.Writer
Stdout io.Writer
dbPath string
exact bool
detailed bool
// How many goroutines to dedicate to calculating cardinality.
concurrency int
// t, d, r, m for Total, Database, Retention Policy, Measurement
rollup string
}
func NewReportDBCommand(v *viper.Viper) (*cobra.Command, error) {
flags := &ReportDB{
Stderr: os.Stderr,
Stdout: os.Stdout,
}
cmd := &cobra.Command{
Use: "report-db",
Short: "Estimates cloud 2 cardinality for a database",
Args: cobra.NoArgs,
RunE: func(cmd *cobra.Command, _ []string) error {
return reportDBRunE(cmd, flags)
},
}
opts := []cli.Opt{
{
DestP: &flags.dbPath,
Flag: "db-path",
Desc: "path to database",
Required: true,
},
{
DestP: &flags.concurrency,
Flag: "c",
Desc: "set worker concurrency, defaults to one",
Default: 1,
},
{
DestP: &flags.detailed,
Flag: "detailed",
Desc: "include counts for fields, tags",
Default: false,
},
{
DestP: &flags.exact,
Flag: "exact",
Desc: "report exact counts",
Default: false,
},
{
DestP: &flags.rollup,
Flag: "rollup",
Desc: "rollup level - t: total, b: bucket, r: retention policy, m: measurement",
Default: "m",
},
}
if err := cli.BindOptions(v, cmd, opts); err != nil {
return nil, err
}
return cmd, nil
}
func reportDBRunE(_ *cobra.Command, reportdb *ReportDB) error {
var legalRollups = map[string]int{"m": 3, "r": 2, "b": 1, "t": 0}
if reportdb.dbPath == "" {
return errors.New("path to database must be provided")
}
totalDepth, ok := legalRollups[reportdb.rollup]
if !ok {
return fmt.Errorf("invalid rollup specified: %q", reportdb.rollup)
}
factory := aggregators.CreateNodeFactory(reportdb.detailed, reportdb.exact)
totalsTree := factory.NewNode(totalDepth == 0)
g, ctx := errgroup.WithContext(context.Background())
g.SetLimit(reportdb.concurrency)
processTSM := func(bucket, rp, id, path string) error {
file, err := os.OpenFile(path, os.O_RDONLY, 0600)
if err != nil {
_, _ = fmt.Fprintf(reportdb.Stderr, "error: %s: %v. Skipping.\n", path, err)
return nil
}
reader, err := tsm1.NewTSMReader(file)
if err != nil {
_, _ = fmt.Fprintf(reportdb.Stderr, "error: %s: %v. Skipping.\n", file.Name(), err)
// NewTSMReader won't close the file handle on failure, so do it here.
_ = file.Close()
return nil
}
defer func() {
// The TSMReader will close the underlying file handle here.
if err := reader.Close(); err != nil {
_, _ = fmt.Fprintf(reportdb.Stderr, "error closing: %s: %v.\n", file.Name(), err)
}
}()
seriesCount := reader.KeyCount()
for i := 0; i < seriesCount; i++ {
func() {
key, _ := reader.KeyAt(i)
seriesKey, field, _ := bytes.Cut(key, []byte("#!~#"))
measurement, tags := models.ParseKey(seriesKey)
totalsTree.Record(0, totalDepth, bucket, rp, measurement, key, field, tags)
}()
}
return nil
}
done := ctx.Done()
err := reporthelper.WalkShardDirs(reportdb.dbPath, func(bucket, rp, id, path string) error {
select {
case <-done:
return nil
default:
g.Go(func() error {
return processTSM(bucket, rp, id, path)
})
return nil
}
})
if err != nil {
_, _ = fmt.Fprintf(reportdb.Stderr, "%s: %v\n", reportdb.dbPath, err)
return err
}
err = g.Wait()
if err != nil {
_, _ = fmt.Fprintf(reportdb.Stderr, "%s: %v\n", reportdb.dbPath, err)
return err
}
tw := tabwriter.NewWriter(reportdb.Stdout, 8, 2, 1, ' ', 0)
if err = factory.PrintHeader(tw); err != nil {
return err
}
if err = factory.PrintDivider(tw); err != nil {
return err
}
for d, bucket := range totalsTree.Children() {
for r, rp := range bucket.Children() {
for m, measure := range rp.Children() {
err = measure.Print(tw, true, fmt.Sprintf("%q", d), fmt.Sprintf("%q", r), fmt.Sprintf("%q", m))
if err != nil {
return err
}
}
if err = rp.Print(tw, false, fmt.Sprintf("%q", d), fmt.Sprintf("%q", r), ""); err != nil {
return err
}
}
if err = bucket.Print(tw, false, fmt.Sprintf("%q", d), "", ""); err != nil {
return err
}
}
if err = totalsTree.Print(tw, false, "Total"+factory.EstTitle, "", ""); err != nil {
return err
}
return tw.Flush()
}


@@ -34,10 +34,7 @@ const (
func Test_ReportTSI_GeneratedData(t *testing.T) {
shardlessPath := newTempDirectories(t, false)
defer os.RemoveAll(shardlessPath)
shardPath := newTempDirectories(t, true)
defer os.RemoveAll(shardPath)
tests := []cmdParams{
{
@@ -69,9 +66,7 @@ func Test_ReportTSI_GeneratedData(t *testing.T) {
func Test_ReportTSI_TestData(t *testing.T) {
// Create temp directory for extracted test data
path, err := os.MkdirTemp("", "report-tsi-test-")
require.NoError(t, err)
defer os.RemoveAll(path)
path := t.TempDir()
// Extract test data
file, err := os.Open("../tsi-test-data.tar.gz")
@@ -125,10 +120,9 @@ func Test_ReportTSI_TestData(t *testing.T) {
func newTempDirectories(t *testing.T, withShards bool) string {
t.Helper()
dataDir, err := os.MkdirTemp("", "reporttsi")
require.NoError(t, err)
dataDir := t.TempDir()
err = os.MkdirAll(filepath.Join(dataDir, bucketID, "autogen"), 0777)
err := os.MkdirAll(filepath.Join(dataDir, bucketID, "autogen"), 0777)
require.NoError(t, err)
if withShards {


@@ -91,20 +91,20 @@ func (a *args) isShardDir(dir string) error {
}
func (a *args) Run(cmd *cobra.Command) error {
// Create the cardinality counter
newCounterFn := newHLLCounter
// Create the cardinality Counter
newCounterFn := NewHLLCounter
estTitle := " (est)"
if a.exact {
estTitle = ""
newCounterFn = newExactCounter
newCounterFn = NewExactCounter
}
totalSeries := newCounterFn()
tagCardinalities := map[string]counter{}
measCardinalities := map[string]counter{}
fieldCardinalities := map[string]counter{}
tagCardinalities := map[string]Counter{}
measCardinalities := map[string]Counter{}
fieldCardinalities := map[string]Counter{}
dbCardinalities := map[string]counter{}
dbCardinalities := map[string]Counter{}
start := time.Now()
@@ -233,13 +233,13 @@ type printArgs struct {
fileCount int
minTime, maxTime int64
estTitle string
totalSeries counter
totalSeries Counter
detailed bool
tagCardinalities map[string]counter
measCardinalities map[string]counter
fieldCardinalities map[string]counter
dbCardinalities map[string]counter
tagCardinalities map[string]Counter
measCardinalities map[string]Counter
fieldCardinalities map[string]Counter
dbCardinalities map[string]Counter
}
func printSummary(cmd *cobra.Command, p printArgs) {
@@ -277,7 +277,7 @@ func printSummary(cmd *cobra.Command, p printArgs) {
}
// sortKeys is a quick helper to return the sorted set of a map's keys
func sortKeys(vals map[string]counter) (keys []string) {
func sortKeys(vals map[string]Counter) (keys []string) {
for k := range vals {
keys = append(keys, k)
}
@@ -335,14 +335,14 @@ func (a *args) walkShardDirs(root string, fn func(db, rp, id, path string) error
return nil
}
// counter abstracts a method of counting keys.
type counter interface {
// Counter abstracts a method of counting keys.
type Counter interface {
Add(key []byte)
Count() uint64
}
// newHLLCounter returns an approximate counter using HyperLogLogs for cardinality estimation.
func newHLLCounter() counter {
// NewHLLCounter returns an approximate Counter using HyperLogLogs for cardinality estimation.
func NewHLLCounter() Counter {
return hllpp.New()
}
@@ -359,7 +359,7 @@ func (c *exactCounter) Count() uint64 {
return uint64(len(c.m))
}
func newExactCounter() counter {
func NewExactCounter() Counter {
return &exactCounter{
m: make(map[string]struct{}),
}


@@ -13,16 +13,15 @@ import (
)
func Test_Invalid_NotDir(t *testing.T) {
dir, err := os.MkdirTemp("", "")
require.NoError(t, err)
dir := t.TempDir()
file, err := os.CreateTemp(dir, "")
require.NoError(t, err)
defer os.RemoveAll(dir)
runCommand(t, testInfo{
dir: file.Name(),
expectOut: []string{"Files: 0"},
})
require.NoError(t, file.Close())
}
func Test_Invalid_EmptyDir(t *testing.T) {


@@ -0,0 +1,155 @@
package typecheck
import (
"errors"
"fmt"
"io"
"io/fs"
"os"
"path"
"path/filepath"
"strings"
"github.com/influxdata/influxdb/v2/kit/cli"
"github.com/influxdata/influxdb/v2/tsdb"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
)
type TypeConflictChecker struct {
Path string
SchemaFile string
ConflictsFile string
Logger *zap.Logger
logLevel zapcore.Level
}
func NewCheckSchemaCommand(v *viper.Viper) (*cobra.Command, error) {
flags := TypeConflictChecker{}
cmd := &cobra.Command{
Use: "check-schema",
Short: "Check for conflicts in the types between shards",
Args: cobra.NoArgs,
RunE: func(cmd *cobra.Command, _ []string) error {
return checkSchemaRunE(cmd, flags)
},
}
opts := []cli.Opt{
{
DestP: &flags.Path,
Flag: "path",
Desc: "Path under which fields.idx files are located",
Default: ".",
},
{
DestP: &flags.SchemaFile,
Flag: "schema-file",
Desc: "Filename schema data should be written to",
Default: "schema.json",
},
{
DestP: &flags.ConflictsFile,
Flag: "conflicts-file",
Desc: "Filename conflicts data should be written to",
Default: "conflicts.json",
},
{
DestP: &flags.logLevel,
Flag: "log-level",
Desc: "The level of logging used throughout the command",
Default: zap.InfoLevel,
},
}
if err := cli.BindOptions(v, cmd, opts); err != nil {
return nil, err
}
return cmd, nil
}
func checkSchemaRunE(_ *cobra.Command, tc TypeConflictChecker) error {
logconf := zap.NewProductionConfig()
logconf.Level = zap.NewAtomicLevelAt(tc.logLevel)
logger, err := logconf.Build()
if err != nil {
return err
}
tc.Logger = logger
// Get a set of every measurement/field/type tuple present.
var schema Schema
schema, err = tc.readFields()
if err != nil {
return err
}
if err := schema.WriteSchemaFile(tc.SchemaFile); err != nil {
return err
}
if err := schema.WriteConflictsFile(tc.ConflictsFile); err != nil {
return err
}
return nil
}
func (tc *TypeConflictChecker) readFields() (Schema, error) {
schema := NewSchema()
var root string
fi, err := os.Stat(tc.Path)
if err != nil {
return nil, err
}
if fi.IsDir() {
root = tc.Path
} else {
root = path.Dir(tc.Path)
}
fileSystem := os.DirFS(".")
err = fs.WalkDir(fileSystem, root, func(path string, d fs.DirEntry, err error) error {
if err != nil {
return fmt.Errorf("error walking file: %w", err)
}
if filepath.Base(path) == tsdb.FieldsChangeFile {
fmt.Printf("WARN: A %s file was encountered at %s. The database was not shutdown properly, results of this command may be incomplete\n",
tsdb.FieldsChangeFile,
path,
)
return nil
}
if filepath.Base(path) != "fields.idx" {
return nil
}
dirs := strings.Split(path, string(os.PathSeparator))
bucket := dirs[len(dirs)-4]
rp := dirs[len(dirs)-3]
fmt.Printf("Processing %s\n", path)
mfs, err := tsdb.NewMeasurementFieldSet(path, tc.Logger)
if err != nil {
if errors.Is(err, io.EOF) {
return nil
}
return fmt.Errorf("unable to open file %q: %w", path, err)
}
defer mfs.Close()
measurements := mfs.MeasurementNames()
for _, m := range measurements {
for f, typ := range mfs.FieldsByString(m).FieldSet() {
schema.AddField(bucket, rp, m, f, typ.String())
}
}
return nil
})
return schema, err
}


@@ -0,0 +1,76 @@
package typecheck
import (
"errors"
"github.com/influxdata/influxdb/v2/kit/cli"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
type MergeFilesCommand struct {
OutputFile string
ConflictsFile string
}
func NewMergeSchemaCommand(v *viper.Viper) (*cobra.Command, error) {
flags := MergeFilesCommand{}
cmd := &cobra.Command{
Use: "merge-schema",
Short: "Merge a set of schema files from the check-schema command",
Args: cobra.MinimumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
return mergeSchemaRunE(cmd, args, flags)
},
}
opts := []cli.Opt{
{
DestP: &flags.OutputFile,
Flag: "schema-file",
Desc: "Filename for the output file",
Default: "schema.json",
},
{
DestP: &flags.ConflictsFile,
Flag: "conflicts-file",
Desc: "Filename that conflicts data should be written to",
Default: "conflicts.json",
},
}
if err := cli.BindOptions(v, cmd, opts); err != nil {
return nil, err
}
return cmd, nil
}
func mergeSchemaRunE(_ *cobra.Command, args []string, mf MergeFilesCommand) error {
return mf.mergeFiles(args)
}
func (rc *MergeFilesCommand) mergeFiles(filenames []string) error {
if len(filenames) < 1 {
return errors.New("at least 1 file must be specified")
}
schema, err := SchemaFromFile(filenames[0])
if err != nil {
return err
}
for _, filename := range filenames[1:] {
other, err := SchemaFromFile(filename)
if err != nil {
return err
}
schema.Merge(other)
}
if err := schema.WriteConflictsFile(rc.ConflictsFile); err != nil {
return err
}
return schema.WriteSchemaFile(rc.OutputFile)
}


@ -0,0 +1,149 @@
package typecheck
import (
"encoding/json"
"fmt"
"io"
"os"
"strings"
errors2 "github.com/influxdata/influxdb/v2/pkg/errors"
)
type UniqueField struct {
Database string `json:"database"`
Retention string `json:"retention"`
Measurement string `json:"measurement"`
Field string `json:"field"`
}
type FieldTypes map[string]struct{}
type Schema map[string]FieldTypes
func (ft FieldTypes) MarshalText() (text []byte, err error) {
s := make([]string, 0, len(ft))
for f := range ft {
s = append(s, f)
}
return []byte(strings.Join(s, ",")), nil
}
func (ft *FieldTypes) UnmarshalText(text []byte) error {
if *ft == nil {
*ft = make(FieldTypes)
}
for _, ty := range strings.Split(string(text), ",") {
(*ft)[ty] = struct{}{}
}
return nil
}
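The `MarshalText`/`UnmarshalText` pair above encodes a set of type names as a single comma-joined string, and duplicate names collapse on decode. A self-contained sketch of the same round trip (the lower-case names here are illustrative stand-ins; the sort is added only so the output is deterministic, which the original does not guarantee):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// fieldTypes is a set of type names, like FieldTypes above.
type fieldTypes map[string]struct{}

// marshalText joins the set members with commas.
func (ft fieldTypes) marshalText() string {
	s := make([]string, 0, len(ft))
	for f := range ft {
		s = append(s, f)
	}
	sort.Strings(s) // deterministic order for display only
	return strings.Join(s, ",")
}

// unmarshalText splits on commas; repeated names dedupe via the map.
func unmarshalText(text string) fieldTypes {
	ft := make(fieldTypes)
	for _, ty := range strings.Split(text, ",") {
		ft[ty] = struct{}{}
	}
	return ft
}

func main() {
	fmt.Println(unmarshalText("float,integer,float").marshalText()) // float,integer
}
```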
func NewSchema() Schema {
return make(Schema)
}
func SchemaFromFile(filename string) (Schema, error) {
f, err := os.Open(filename)
if err != nil {
return nil, fmt.Errorf("unable to open schema file %q: %w", filename, err)
}
s := NewSchema()
if err := s.Decode(f); err != nil {
return nil, fmt.Errorf("unable to decode schema file %q: %w", filename, err)
}
return s, nil
}
func (uf *UniqueField) String() string {
return fmt.Sprintf("%q.%q.%q.%q", uf.Database, uf.Retention, uf.Measurement, uf.Field)
}
func (s Schema) AddField(database, retention, measurement, field, dataType string) {
uf := UniqueField{
Database: database,
Retention: retention,
Measurement: measurement,
Field: field,
}
s.AddFormattedField(uf.String(), dataType)
}
func (s Schema) AddFormattedField(field string, dataType string) {
if _, ok := s[field]; !ok {
s[field] = make(map[string]struct{})
}
s[field][dataType] = struct{}{}
}
func (s Schema) Merge(schema Schema) {
for field, types := range schema {
for t := range types {
s.AddFormattedField(field, t)
}
}
}
func (s Schema) Conflicts() Schema {
cs := NewSchema()
for field, t := range s {
if len(t) > 1 {
for ty := range t {
cs.AddFormattedField(field, ty)
}
}
}
return cs
}
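`Merge` and `Conflicts` reduce to a set union followed by a filter for keys that accumulated more than one type. A standalone sketch under that reading (the `schema` type here is a simplified stand-in for the real `Schema`):

```go
package main

import "fmt"

// schema maps a formatted field key to the set of types seen for it.
type schema map[string]map[string]struct{}

func (s schema) add(field, typ string) {
	if s[field] == nil {
		s[field] = map[string]struct{}{}
	}
	s[field][typ] = struct{}{}
}

// merge unions every (field, type) pair from o into s.
func (s schema) merge(o schema) {
	for f, types := range o {
		for t := range types {
			s.add(f, t)
		}
	}
}

// conflicts keeps only fields observed with more than one type.
func (s schema) conflicts() schema {
	c := schema{}
	for f, types := range s {
		if len(types) > 1 {
			for t := range types {
				c.add(f, t)
			}
		}
	}
	return c
}

func main() {
	a, b := schema{}, schema{}
	a.add(`"db"."rp"."m"."f"`, "float")
	b.add(`"db"."rp"."m"."f"`, "integer")
	a.merge(b)
	fmt.Println(len(a.conflicts())) // 1
}
```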
func (s Schema) WriteSchemaFile(filename string) error {
if len(s) == 0 {
fmt.Println("No schema file generated: no valid measurements/fields found")
return nil
}
if err := s.encodeSchema(filename); err != nil {
return fmt.Errorf("unable to write schema file to %q: %w", filename, err)
}
fmt.Printf("Schema file written successfully to: %q\n", filename)
return nil
}
func (s Schema) WriteConflictsFile(filename string) error {
conflicts := s.Conflicts()
if len(conflicts) == 0 {
fmt.Println("No conflicts file generated: no conflicts found")
return nil
}
if err := conflicts.encodeSchema(filename); err != nil {
return fmt.Errorf("unable to write conflicts file to %q: %w", filename, err)
}
fmt.Printf("Conflicts file written successfully to: %q\n", filename)
return nil
}
func (s Schema) encodeSchema(filename string) (rErr error) {
schemaFile, err := os.Create(filename)
if err != nil {
return fmt.Errorf("unable to create schema file: %w", err)
}
defer errors2.Capture(&rErr, schemaFile.Close)
return s.Encode(schemaFile)
}
func (s Schema) Encode(w io.Writer) error {
enc := json.NewEncoder(w)
enc.SetIndent("", " ")
if err := enc.Encode(s); err != nil {
return fmt.Errorf("unable to encode schema: %w", err)
}
return nil
}
func (s Schema) Decode(r io.Reader) error {
if err := json.NewDecoder(r).Decode(&s); err != nil {
return fmt.Errorf("unable to decode schema: %w", err)
}
return nil
}


@ -0,0 +1,78 @@
package typecheck_test
import (
"bytes"
"testing"
typecheck "github.com/influxdata/influxdb/v2/cmd/influxd/inspect/type_conflicts"
"github.com/stretchr/testify/assert"
)
func TestSchema_Encoding(t *testing.T) {
s := typecheck.NewSchema()
b := bytes.Buffer{}
s.AddField("db1", "rp1", "foo", "v2", "float")
s.AddField("db1", "rp1", "foo", "v2", "bool")
s.AddField("db1", "rp1", "bZ", "v1", "int")
err := s.Encode(&b)
assert.NoError(t, err, "encode failed unexpectedly")
s2 := typecheck.NewSchema()
err = s2.Decode(&b)
assert.NoError(t, err, "decode failed unexpectedly")
assert.Len(t, s2, 2, "wrong number of fields - expected %d, got %d", 2, len(s))
for f1, fields1 := range s {
assert.Len(t,
s2[f1],
len(fields1),
"differing number of types for a conflicted field %s: expected %d, got %d",
f1,
len(fields1),
len(s2[f1]))
}
}
type filler struct {
typecheck.UniqueField
typ string
}
func TestSchema_Merge(t *testing.T) {
const expectedConflicts = 2
s1Fill := []filler{
{typecheck.UniqueField{"db1", "rp1", "m1", "f1"}, "integer"},
{typecheck.UniqueField{"db2", "rp1", "m1", "f1"}, "float"},
{typecheck.UniqueField{"db1", "rp2", "m1", "f1"}, "string"},
{typecheck.UniqueField{"db1", "rp1", "m2", "f1"}, "string"},
{typecheck.UniqueField{"db1", "rp1", "m1", "f2"}, "float"},
{typecheck.UniqueField{"db2", "rp2", "m2", "f2"}, "integer"},
}
s2Fill := []filler{
{typecheck.UniqueField{"db1", "rp1", "m1", "f1"}, "integer"},
{typecheck.UniqueField{"db2", "rp1", "m1", "f1"}, "string"},
{typecheck.UniqueField{"db2", "rp2", "m2", "f2"}, "float"},
{typecheck.UniqueField{"db1", "rp2", "m1", "f1"}, "string"},
{typecheck.UniqueField{"db1", "rp1", "m2", "f1"}, "string"},
{typecheck.UniqueField{"db1", "rp1", "m1", "f2"}, "float"},
{typecheck.UniqueField{"db2", "rp2", "m2", "f2"}, "integer"},
}
s1 := typecheck.NewSchema()
s2 := typecheck.NewSchema()
fillSchema(s1, s1Fill)
fillSchema(s2, s2Fill)
s1.Merge(s2)
conflicts := s1.Conflicts()
assert.Len(t, conflicts, expectedConflicts, "wrong number of type conflicts detected: expected %d, got %d", expectedConflicts, len(conflicts))
}
func fillSchema(s typecheck.Schema, fill []filler) {
for _, f := range fill {
s.AddFormattedField(f.String(), f.typ)
}
}


@ -76,11 +76,10 @@ type Test struct {
func NewTest(t *testing.T) *Test {
t.Helper()
dir, err := os.MkdirTemp("", "verify-seriesfile-")
require.NoError(t, err)
dir := t.TempDir()
// create a series file in the directory
err = func() error {
err := func() error {
seriesFile := tsdb.NewSeriesFile(dir)
if err := seriesFile.Open(); err != nil {
return err
@ -128,7 +127,6 @@ func NewTest(t *testing.T) *Test {
return seriesFile.Close()
}()
if err != nil {
os.RemoveAll(dir)
t.Fatal(err)
}


@ -21,12 +21,11 @@ const (
// Run tests on a directory with no Tombstone files
func TestVerifies_InvalidFileType(t *testing.T) {
path, err := os.MkdirTemp("", "verify-tombstone")
require.NoError(t, err)
path := t.TempDir()
_, err = os.CreateTemp(path, "verifytombstonetest*"+".txt")
f, err := os.CreateTemp(path, "verifytombstonetest*"+".txt")
require.NoError(t, err)
defer os.RemoveAll(path)
require.NoError(t, f.Close())
verify := NewVerifyTombstoneCommand()
verify.SetArgs([]string{"--engine-path", path})
@ -43,7 +42,6 @@ func TestVerifies_InvalidFileType(t *testing.T) {
// Run tests on an empty Tombstone file (treated as v1)
func TestVerifies_InvalidEmptyFile(t *testing.T) {
path, _ := NewTempTombstone(t)
defer os.RemoveAll(path)
verify := NewVerifyTombstoneCommand()
verify.SetArgs([]string{"--engine-path", path})
@ -60,7 +58,6 @@ func TestVerifies_InvalidEmptyFile(t *testing.T) {
// Runs tests on an invalid V2 Tombstone File
func TestVerifies_InvalidV2(t *testing.T) {
path, file := NewTempTombstone(t)
defer os.RemoveAll(path)
WriteTombstoneHeader(t, file, v2header)
WriteBadData(t, file)
@ -74,7 +71,6 @@ func TestVerifies_InvalidV2(t *testing.T) {
func TestVerifies_ValidTS(t *testing.T) {
path, file := NewTempTombstone(t)
defer os.RemoveAll(path)
ts := tsm1.NewTombstoner(file.Name(), nil)
require.NoError(t, ts.Add([][]byte{[]byte("foobar")}))
@ -90,7 +86,6 @@ func TestVerifies_ValidTS(t *testing.T) {
// Runs tests on an invalid V3 Tombstone File
func TestVerifies_InvalidV3(t *testing.T) {
path, file := NewTempTombstone(t)
defer os.RemoveAll(path)
WriteTombstoneHeader(t, file, v3header)
WriteBadData(t, file)
@ -105,7 +100,6 @@ func TestVerifies_InvalidV3(t *testing.T) {
// Runs tests on an invalid V4 Tombstone File
func TestVerifies_InvalidV4(t *testing.T) {
path, file := NewTempTombstone(t)
defer os.RemoveAll(path)
WriteTombstoneHeader(t, file, v4header)
WriteBadData(t, file)
@ -121,7 +115,6 @@ func TestVerifies_InvalidV4(t *testing.T) {
is not needed, but was part of the old command.
func TestTombstone_VeryVeryVerbose(t *testing.T) {
path, file := NewTempTombstone(t)
defer os.RemoveAll(path)
WriteTombstoneHeader(t, file, v4header)
WriteBadData(t, file)
@ -136,8 +129,7 @@ func TestTombstone_VeryVeryVerbose(t *testing.T) {
func NewTempTombstone(t *testing.T) (string, *os.File) {
t.Helper()
dir, err := os.MkdirTemp("", "verify-tombstone")
require.NoError(t, err)
dir := t.TempDir()
file, err := os.CreateTemp(dir, "verifytombstonetest*"+"."+tsm1.TombstoneFileExtension)
require.NoError(t, err)


@ -69,8 +69,7 @@ func TestValidUTF8(t *testing.T) {
func newUTFTest(t *testing.T, withError bool) string {
t.Helper()
dir, err := os.MkdirTemp("", "verify-tsm")
require.NoError(t, err)
dir := t.TempDir()
f, err := os.CreateTemp(dir, "verifytsmtest*"+"."+tsm1.TSMFileExtension)
require.NoError(t, err)
@ -94,8 +93,7 @@ func newUTFTest(t *testing.T, withError bool) string {
func newChecksumTest(t *testing.T, withError bool) string {
t.Helper()
dir, err := os.MkdirTemp("", "verify-tsm")
require.NoError(t, err)
dir := t.TempDir()
f, err := os.CreateTemp(dir, "verifytsmtest*"+"."+tsm1.TSMFileExtension)
require.NoError(t, err)


@ -109,6 +109,8 @@ func (a args) Run(cmd *cobra.Command) error {
}
totalEntriesScanned += entriesScanned
_ = tw.Flush()
_ = reader.Close()
}
// Print Summary


@ -21,12 +21,11 @@ type testInfo struct {
}
func TestVerifies_InvalidFileType(t *testing.T) {
path, err := os.MkdirTemp("", "verify-wal")
require.NoError(t, err)
path := t.TempDir()
_, err = os.CreateTemp(path, "verifywaltest*"+".txt")
f, err := os.CreateTemp(path, "verifywaltest*"+".txt")
require.NoError(t, err)
defer os.RemoveAll(path)
require.NoError(t, f.Close())
runCommand(testInfo{
t: t,
@ -37,8 +36,7 @@ func TestVerifies_InvalidFileType(t *testing.T) {
}
func TestVerifies_InvalidNotDir(t *testing.T) {
path, file := newTempWALInvalid(t, true)
defer os.RemoveAll(path)
_, file := newTempWALInvalid(t, true)
runCommand(testInfo{
t: t,
@ -50,7 +48,6 @@ func TestVerifies_InvalidNotDir(t *testing.T) {
func TestVerifies_InvalidEmptyFile(t *testing.T) {
path, _ := newTempWALInvalid(t, true)
defer os.RemoveAll(path)
runCommand(testInfo{
t: t,
@ -62,7 +59,6 @@ func TestVerifies_InvalidEmptyFile(t *testing.T) {
func TestVerifies_Invalid(t *testing.T) {
path, _ := newTempWALInvalid(t, false)
defer os.RemoveAll(path)
runCommand(testInfo{
t: t,
@ -74,7 +70,6 @@ func TestVerifies_Invalid(t *testing.T) {
func TestVerifies_Valid(t *testing.T) {
path := newTempWALValid(t)
defer os.RemoveAll(path)
runCommand(testInfo{
t: t,
@ -108,12 +103,13 @@ func runCommand(args testInfo) {
func newTempWALValid(t *testing.T) string {
t.Helper()
dir, err := os.MkdirTemp("", "verify-wal")
require.NoError(t, err)
dir := t.TempDir()
w := tsm1.NewWAL(dir, 0, 0, tsdb.EngineTags{})
defer w.Close()
require.NoError(t, w.Open())
t.Cleanup(func() {
require.NoError(t, w.Close())
})
p1 := tsm1.NewValue(1, 1.1)
p2 := tsm1.NewValue(1, int64(1))
@ -129,7 +125,7 @@ func newTempWALValid(t *testing.T) string {
"cpu,host=A#!~#unsigned": {p5},
}
_, err = w.WriteMulti(context.Background(), values)
_, err := w.WriteMulti(context.Background(), values)
require.NoError(t, err)
return dir
@ -138,18 +134,14 @@ func newTempWALValid(t *testing.T) string {
func newTempWALInvalid(t *testing.T, empty bool) (string, *os.File) {
t.Helper()
dir, err := os.MkdirTemp("", "verify-wal")
require.NoError(t, err)
dir := t.TempDir()
file, err := os.CreateTemp(dir, "verifywaltest*."+tsm1.WALFileExtension)
require.NoError(t, err)
t.Cleanup(func() { file.Close() })
if !empty {
writer, err := os.OpenFile(file.Name(), os.O_APPEND|os.O_WRONLY, 0644)
require.NoError(t, err)
defer writer.Close()
written, err := writer.Write([]byte("foobar"))
written, err := file.Write([]byte("foobar"))
require.NoError(t, err)
require.Equal(t, 6, written)
}


@ -2,7 +2,6 @@ package launcher_test
import (
"context"
"os"
"testing"
"github.com/influxdata/influx-cli/v2/clients/backup"
@ -18,9 +17,7 @@ func TestBackupRestore_Full(t *testing.T) {
t.Parallel()
ctx := context.Background()
backupDir, err := os.MkdirTemp("", "")
require.NoError(t, err)
defer os.RemoveAll(backupDir)
backupDir := t.TempDir()
// Boot a server, write some data, and take a backup.
l1 := launcher.RunAndSetupNewLauncherOrFail(ctx, t, func(o *launcher.InfluxdOpts) {
@ -83,7 +80,7 @@ func TestBackupRestore_Full(t *testing.T) {
l2.ResetHTTPCLient()
// Check that orgs and buckets were reset to match the original server's metadata.
_, err = l2.OrgService(t).FindOrganizationByID(ctx, l2.Org.ID)
_, err := l2.OrgService(t).FindOrganizationByID(ctx, l2.Org.ID)
require.Equal(t, errors.ENotFound, errors.ErrorCode(err))
rbkt1, err := l2.BucketService(t).FindBucket(ctx, influxdb.BucketFilter{OrganizationID: &l1.Org.ID, ID: &l1.Bucket.ID})
require.NoError(t, err)
@ -116,9 +113,7 @@ func TestBackupRestore_Partial(t *testing.T) {
t.Parallel()
ctx := context.Background()
backupDir, err := os.MkdirTemp("", "")
require.NoError(t, err)
defer os.RemoveAll(backupDir)
backupDir := t.TempDir()
// Boot a server, write some data, and take a backup.
l1 := launcher.RunAndSetupNewLauncherOrFail(ctx, t, func(o *launcher.InfluxdOpts) {


@ -77,6 +77,7 @@ import (
telegrafservice "github.com/influxdata/influxdb/v2/telegraf/service"
"github.com/influxdata/influxdb/v2/telemetry"
"github.com/influxdata/influxdb/v2/tenant"
"github.com/prometheus/client_golang/prometheus/collectors"
// needed for tsm1
_ "github.com/influxdata/influxdb/v2/tsdb/engine/tsm1"
@ -90,7 +91,6 @@ import (
"github.com/influxdata/influxdb/v2/vault"
pzap "github.com/influxdata/influxdb/v2/zap"
"github.com/opentracing/opentracing-go"
"github.com/prometheus/client_golang/prometheus"
jaegerconfig "github.com/uber/jaeger-client-go/config"
"go.uber.org/zap"
)
@ -249,7 +249,7 @@ func (m *Launcher) run(ctx context.Context, opts *InfluxdOpts) (err error) {
}
m.reg = prom.NewRegistry(m.log.With(zap.String("service", "prom_registry")))
m.reg.MustRegister(prometheus.NewGoCollector())
m.reg.MustRegister(collectors.NewGoCollector())
// Open KV and SQL stores.
procID, err := m.openMetaStores(ctx, opts)
@ -787,6 +787,7 @@ func (m *Launcher) run(ctx context.Context, opts *InfluxdOpts) (err error) {
}
userHTTPServer := ts.NewUserHTTPHandler(m.log)
meHTTPServer := ts.NewMeHTTPHandler(m.log)
onboardHTTPServer := tenant.NewHTTPOnboardHandler(m.log, onboardSvc)
// feature flagging for new labels service
@ -897,8 +898,8 @@ func (m *Launcher) run(ctx context.Context, opts *InfluxdOpts) (err error) {
http.WithResourceHandler(labelHandler),
http.WithResourceHandler(sessionHTTPServer.SignInResourceHandler()),
http.WithResourceHandler(sessionHTTPServer.SignOutResourceHandler()),
http.WithResourceHandler(userHTTPServer.MeResourceHandler()),
http.WithResourceHandler(userHTTPServer.UserResourceHandler()),
http.WithResourceHandler(userHTTPServer),
http.WithResourceHandler(meHTTPServer),
http.WithResourceHandler(orgHTTPServer),
http.WithResourceHandler(bucketHTTPServer),
http.WithResourceHandler(v1AuthHTTPServer),
@ -924,7 +925,7 @@ func (m *Launcher) run(ctx context.Context, opts *InfluxdOpts) (err error) {
}
// If we are in testing mode we allow all data to be flushed and removed.
if opts.Testing {
httpHandler = http.DebugFlush(ctx, httpHandler, m.flushers)
httpHandler = http.Debug(ctx, httpHandler, m.flushers, onboardSvc)
}
if !opts.ReportingDisabled {


@ -12,13 +12,9 @@ import (
)
func TestCopyDirAndDirSize(t *testing.T) {
tmpdir, err := os.MkdirTemp("", "tcd")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tmpdir)
tmpdir := t.TempDir()
err = os.MkdirAll(filepath.Join(tmpdir, "1", "1", "1"), 0700)
err := os.MkdirAll(filepath.Join(tmpdir, "1", "1", "1"), 0700)
if err != nil {
t.Fatal(err)
}
@ -49,11 +45,7 @@ func TestCopyDirAndDirSize(t *testing.T) {
}
assert.Equal(t, uint64(1600), size)
targetDir, err := os.MkdirTemp("", "tcd")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(targetDir)
targetDir := t.TempDir()
targetDir = filepath.Join(targetDir, "x")
err = CopyDir(tmpdir, targetDir, nil, func(path string) bool {
base := filepath.Base(path)


@ -29,10 +29,7 @@ import (
)
func TestPathValidations(t *testing.T) {
tmpdir, err := os.MkdirTemp("", "")
require.Nil(t, err)
defer os.RemoveAll(tmpdir)
tmpdir := t.TempDir()
v1Dir := filepath.Join(tmpdir, "v1db")
v2Dir := filepath.Join(tmpdir, "v2db")
@ -41,7 +38,7 @@ func TestPathValidations(t *testing.T) {
configsPath := filepath.Join(v2Dir, "configs")
enginePath := filepath.Join(v2Dir, "engine")
err = os.MkdirAll(filepath.Join(enginePath, "db"), 0777)
err := os.MkdirAll(filepath.Join(enginePath, "db"), 0777)
require.Nil(t, err)
sourceOpts := &optionsV1{
@ -89,10 +86,7 @@ func TestPathValidations(t *testing.T) {
}
func TestClearTargetPaths(t *testing.T) {
tmpdir, err := os.MkdirTemp("", "")
require.NoError(t, err)
defer os.RemoveAll(tmpdir)
tmpdir := t.TempDir()
v2Dir := filepath.Join(tmpdir, "v2db")
boltPath := filepath.Join(v2Dir, bolt.DefaultFilename)
@ -101,7 +95,7 @@ func TestClearTargetPaths(t *testing.T) {
cqPath := filepath.Join(v2Dir, "cqs")
configPath := filepath.Join(v2Dir, "config")
err = os.MkdirAll(filepath.Join(enginePath, "db"), 0777)
err := os.MkdirAll(filepath.Join(enginePath, "db"), 0777)
require.NoError(t, err)
err = os.WriteFile(boltPath, []byte{1}, 0777)
require.NoError(t, err)
@ -176,11 +170,9 @@ func TestDbURL(t *testing.T) {
func TestUpgradeRealDB(t *testing.T) {
ctx := context.Background()
tmpdir, err := os.MkdirTemp("", "")
require.NoError(t, err)
tmpdir := t.TempDir()
defer os.RemoveAll(tmpdir)
err = testutil.Unzip(filepath.Join("testdata", "v1db.zip"), tmpdir)
err := testutil.Unzip(filepath.Join("testdata", "v1db.zip"), tmpdir)
require.NoError(t, err)
v1ConfigPath := filepath.Join(tmpdir, "v1.conf")


@ -55,7 +55,9 @@ func (p *prometheusScraper) parse(r io.Reader, header http.Header, target influx
now := time.Now()
mediatype, params, err := mime.ParseMediaType(header.Get("Content-Type"))
if err != nil {
if err != nil && err.Error() == "mime: no media type" {
mediatype = "text/plain"
} else if err != nil {
return collected, err
}
// Prepare output
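The fix in the hunk above treats a missing Content-Type header as `text/plain` rather than aborting the scrape, keying off the exact `"mime: no media type"` error string that `mime.ParseMediaType` returns for an empty value. A self-contained sketch (`mediaTypeOrDefault` is an illustrative wrapper, not the scraper's API):

```go
package main

import (
	"fmt"
	"mime"
)

// mediaTypeOrDefault falls back to text/plain when no media type is
// present, and propagates any other parse error unchanged.
func mediaTypeOrDefault(contentType string) (string, error) {
	mediatype, _, err := mime.ParseMediaType(contentType)
	if err != nil && err.Error() == "mime: no media type" {
		return "text/plain", nil
	}
	return mediatype, err
}

func main() {
	mt, err := mediaTypeOrDefault("")
	fmt.Println(mt, err == nil) // text/plain true
}
```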
@ -93,12 +95,20 @@ func (p *prometheusScraper) parse(r io.Reader, header http.Header, target influx
// summary metric
fields = makeQuantiles(m)
fields["count"] = float64(m.GetSummary().GetSampleCount())
fields["sum"] = float64(m.GetSummary().GetSampleSum())
ss := float64(m.GetSummary().GetSampleSum())
if !math.IsNaN(ss) {
fields["sum"] = ss
}
case dto.MetricType_HISTOGRAM:
// histogram metric
fields = makeBuckets(m)
fields["count"] = float64(m.GetHistogram().GetSampleCount())
fields["sum"] = float64(m.GetHistogram().GetSampleSum())
ss := float64(m.GetHistogram().GetSampleSum())
if !math.IsNaN(ss) {
fields["sum"] = ss
}
default:
// standard metric
fields = getNameAndValue(m)

go.mod

@ -1,9 +1,9 @@
module github.com/influxdata/influxdb/v2
go 1.18
go 1.20
require (
github.com/BurntSushi/toml v0.4.1
github.com/BurntSushi/toml v1.2.1
github.com/Masterminds/squirrel v1.5.0
github.com/NYTimes/gziphandler v1.0.1
github.com/RoaringBitmap/roaring v0.4.16
@ -22,14 +22,14 @@ require (
github.com/go-stack/stack v1.8.0
github.com/golang-jwt/jwt v3.2.1+incompatible
github.com/golang/gddo v0.0.0-20181116215533-9bd4a3295021
github.com/golang/mock v1.5.0
github.com/golang/mock v1.6.0
github.com/golang/snappy v0.0.4
github.com/google/btree v1.0.1
github.com/google/go-cmp v0.5.7
github.com/google/go-cmp v0.5.9
github.com/google/go-jsonnet v0.17.0
github.com/hashicorp/vault/api v1.0.2
github.com/influxdata/cron v0.0.0-20201006132531-4bb0a200dcbe
github.com/influxdata/flux v0.188.1
github.com/influxdata/flux v0.194.3
github.com/influxdata/httprouter v1.3.1-0.20191122104820-ee83e2772f69
github.com/influxdata/influx-cli/v2 v2.2.1-0.20221028161653-3285a03e9e28
github.com/influxdata/influxql v1.1.1-0.20211004132434-7e7d61973256
@ -39,23 +39,23 @@ require (
github.com/jsternberg/zap-logfmt v1.2.0
github.com/jwilder/encoding v0.0.0-20170811194829-b4e1701a28ef
github.com/kevinburke/go-bindata v3.22.0+incompatible
github.com/mattn/go-isatty v0.0.14
github.com/mattn/go-isatty v0.0.16
github.com/mattn/go-sqlite3 v1.14.7
github.com/matttproud/golang_protobuf_extensions v1.0.1
github.com/matttproud/golang_protobuf_extensions v1.0.4
github.com/mileusna/useragent v0.0.0-20190129205925-3e331f0949a5
github.com/mna/pigeon v1.0.1-0.20180808201053-bb0192cfc2ae
github.com/opentracing/opentracing-go v1.2.0
github.com/pkg/errors v0.9.1
github.com/prometheus/client_golang v1.5.1
github.com/prometheus/client_golang v1.11.1
github.com/prometheus/client_model v0.2.0
github.com/prometheus/common v0.9.1
github.com/prometheus/common v0.30.0
github.com/retailnext/hllpp v1.0.1-0.20180308014038-101a6d2f8b52
github.com/spf13/cast v1.3.0
github.com/spf13/cobra v1.0.0
github.com/spf13/pflag v1.0.5
github.com/spf13/viper v1.6.1
github.com/stretchr/testify v1.8.0
github.com/testcontainers/testcontainers-go v0.0.0-20190108154635-47c0da630f72
github.com/stretchr/testify v1.8.1
github.com/testcontainers/testcontainers-go v0.18.0
github.com/tinylib/msgp v1.1.0
github.com/uber/jaeger-client-go v2.28.0+incompatible
github.com/xlab/treeprint v1.0.0
@ -63,16 +63,16 @@ require (
go.etcd.io/bbolt v1.3.6
go.uber.org/multierr v1.6.0
go.uber.org/zap v1.16.0
golang.org/x/crypto v0.0.0-20220331220935-ae2d96664a29
golang.org/x/sync v0.0.0-20220513210516-0976fa681c29
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a
golang.org/x/text v0.3.7
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba
golang.org/x/tools v0.1.11-0.20220316014157-77aa08bb151a
golang.org/x/crypto v0.1.0
golang.org/x/sync v0.1.0
golang.org/x/sys v0.5.0
golang.org/x/text v0.7.0
golang.org/x/time v0.0.0-20220210224613-90d013bbcef8
golang.org/x/tools v0.5.0
google.golang.org/protobuf v1.28.1
gopkg.in/yaml.v2 v2.3.0
gopkg.in/yaml.v2 v2.4.0
gopkg.in/yaml.v3 v3.0.1
honnef.co/go/tools v0.3.0
honnef.co/go/tools v0.4.0
)
require (
@ -82,6 +82,7 @@ require (
github.com/AlecAivazis/survey/v2 v2.3.4 // indirect
github.com/Azure/azure-pipeline-go v0.2.3 // indirect
github.com/Azure/azure-storage-blob-go v0.14.0 // indirect
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
github.com/Azure/go-autorest v14.2.0+incompatible // indirect
github.com/Azure/go-autorest/autorest v0.11.9 // indirect
github.com/Azure/go-autorest/autorest/adal v0.9.13 // indirect
@ -93,11 +94,11 @@ require (
github.com/DATA-DOG/go-sqlmock v1.4.1 // indirect
github.com/Masterminds/semver v1.4.2 // indirect
github.com/Masterminds/sprig v2.16.0+incompatible // indirect
github.com/Microsoft/go-winio v0.4.11 // indirect
github.com/Microsoft/go-winio v0.5.2 // indirect
github.com/SAP/go-hdb v0.14.1 // indirect
github.com/aokoli/goutils v1.0.1 // indirect
github.com/apache/arrow/go/arrow v0.0.0-20211112161151-bc219186db40 // indirect
github.com/aws/aws-sdk-go v1.30.12 // indirect
github.com/aws/aws-sdk-go v1.34.0 // indirect
github.com/aws/aws-sdk-go-v2 v1.11.0 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.0.0 // indirect
github.com/aws/aws-sdk-go-v2/credentials v1.6.1 // indirect
@ -112,43 +113,45 @@ require (
github.com/benbjohnson/immutable v0.3.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/bonitoo-io/go-sql-bigquery v0.3.4-1.4.0 // indirect
github.com/cespare/xxhash/v2 v2.1.1 // indirect
github.com/cenkalti/backoff/v4 v4.2.0 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/containerd/containerd v1.6.18 // indirect
github.com/deepmap/oapi-codegen v1.6.0 // indirect
github.com/denisenkom/go-mssqldb v0.10.0 // indirect
github.com/dimchansky/utfbom v1.1.0 // indirect
github.com/docker/distribution v2.7.0+incompatible // indirect
github.com/docker/docker v1.13.1 // indirect
github.com/docker/distribution v2.8.2+incompatible // indirect
github.com/docker/docker v23.0.3+incompatible // indirect
github.com/docker/go-connections v0.4.0 // indirect
github.com/docker/go-units v0.3.3 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/eclipse/paho.mqtt.golang v1.2.0 // indirect
github.com/editorconfig/editorconfig-core-go/v2 v2.1.1 // indirect
github.com/fatih/color v1.13.0 // indirect
github.com/form3tech-oss/jwt-go v3.2.5+incompatible // indirect
github.com/fsnotify/fsnotify v1.4.7 // indirect
github.com/fsnotify/fsnotify v1.5.4 // indirect
github.com/gabriel-vasile/mimetype v1.4.0 // indirect
github.com/glycerine/go-unsnap-stream v0.0.0-20181221182339-f9677308dec2 // indirect
github.com/glycerine/goconvey v0.0.0-20180728074245-46e3a41ad493 // indirect
github.com/go-sql-driver/mysql v1.5.0 // indirect
github.com/go-sql-driver/mysql v1.6.0 // indirect
github.com/goccy/go-json v0.9.6 // indirect
github.com/gofrs/uuid v3.3.0+incompatible // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-sql/civil v0.0.0-20190719163853-cb61b32ac6fe // indirect
github.com/golang/geo v0.0.0-20190916061304-5b978397cfec // indirect
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/flatbuffers v22.9.30-0.20221019131441-5792623df42e+incompatible // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/googleapis/gax-go/v2 v2.0.5 // indirect
github.com/hashicorp/errwrap v1.0.0 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-cleanhttp v0.5.1 // indirect
github.com/hashicorp/go-multierror v1.0.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/hashicorp/go-retryablehttp v0.6.4 // indirect
github.com/hashicorp/go-rootcerts v1.0.0 // indirect
github.com/hashicorp/go-sockaddr v1.0.2 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/hashicorp/vault/sdk v0.1.8 // indirect
github.com/huandu/xstrings v1.0.0 // indirect
github.com/imdario/mergo v0.3.9 // indirect
github.com/imdario/mergo v0.3.12 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/influxdata/gosnowflake v1.6.9 // indirect
github.com/influxdata/influxdb-client-go/v2 v2.3.1-0.20210518120617-5d1fff431040 // indirect
@ -162,56 +165,61 @@ require (
github.com/lann/builder v0.0.0-20180802200727-47ae307949d0 // indirect
github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0 // indirect
github.com/lib/pq v1.2.0 // indirect
github.com/magiconair/properties v1.8.1 // indirect
github.com/magiconair/properties v1.8.7 // indirect
github.com/mattn/go-colorable v0.1.12 // indirect
github.com/mattn/go-ieproxy v0.0.1 // indirect
github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b // indirect
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/mitchellh/mapstructure v1.1.2 // indirect
github.com/moby/patternmatcher v0.5.0 // indirect
github.com/moby/sys/sequential v0.5.0 // indirect
github.com/moby/term v0.0.0-20221128092401-c43b287e0e0f // indirect
github.com/morikuni/aec v1.0.0 // indirect
github.com/mschoch/smat v0.0.0-20160514031455-90eadee771ae // indirect
github.com/onsi/ginkgo v1.11.0 // indirect
github.com/onsi/gomega v1.8.1 // indirect
github.com/opencontainers/go-digest v1.0.0-rc1 // indirect
github.com/pelletier/go-toml v1.2.0 // indirect
github.com/onsi/ginkgo v1.12.1 // indirect
github.com/onsi/gomega v1.10.3 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.1.0-rc2 // indirect
github.com/opencontainers/runc v1.1.5 // indirect
github.com/pelletier/go-toml v1.9.5 // indirect
github.com/philhofer/fwd v1.0.0 // indirect
github.com/pierrec/lz4 v2.0.5+incompatible // indirect
github.com/pierrec/lz4/v4 v4.1.12 // indirect
github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/procfs v0.0.11 // indirect
github.com/prometheus/procfs v0.7.3 // indirect
github.com/ryanuber/go-glob v1.0.0 // indirect
github.com/satori/go.uuid v1.2.1-0.20181028125025-b2ce2384e17b // indirect
github.com/segmentio/kafka-go v0.2.0 // indirect
github.com/sergi/go-diff v1.1.0 // indirect
github.com/sirupsen/logrus v1.8.1 // indirect
github.com/spf13/afero v1.1.2 // indirect
github.com/sirupsen/logrus v1.9.0 // indirect
github.com/spf13/afero v1.2.2 // indirect
github.com/spf13/jwalterweatherman v1.0.0 // indirect
github.com/stretchr/objx v0.4.0 // indirect
github.com/stretchr/objx v0.5.0 // indirect
github.com/subosito/gotenv v1.2.0 // indirect
github.com/uber-go/tally v3.3.15+incompatible // indirect
github.com/uber/athenadriver v1.1.4 // indirect
github.com/uber/jaeger-lib v2.4.1+incompatible // indirect
github.com/vertica/vertica-sql-go v1.1.1 // indirect
github.com/willf/bitset v1.1.9 // indirect
github.com/willf/bitset v1.1.11 // indirect
github.com/yudai/golcs v0.0.0-20170316035057-ecda9a501e82 // indirect
github.com/yudai/pp v2.0.1+incompatible // indirect
go.opencensus.io v0.23.0 // indirect
go.uber.org/atomic v1.7.0 // indirect
golang.org/x/exp v0.0.0-20211216164055-b2b84827b756 // indirect
golang.org/x/exp/typeparams v0.0.0-20220218215828-6cf2b201936e // indirect
golang.org/x/exp/typeparams v0.0.0-20221208152030-732eee02a75a // indirect
golang.org/x/lint v0.0.0-20210508222113-6edffad5e616 // indirect
golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3 // indirect
golang.org/x/net v0.0.0-20220401154927-543a649e0bdd // indirect
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c // indirect
golang.org/x/term v0.0.0-20220526004731-065cf7ba2467 // indirect
golang.org/x/mod v0.7.0 // indirect
golang.org/x/net v0.7.0 // indirect
golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f // indirect
golang.org/x/term v0.5.0 // indirect
golang.org/x/xerrors v0.0.0-20220411194840-2f41105eb62f // indirect
gonum.org/v1/gonum v0.11.0 // indirect
google.golang.org/api v0.47.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20220126215142-9970aeb2e350 // indirect
google.golang.org/grpc v1.44.0 // indirect
google.golang.org/genproto v0.0.0-20220617124728-180714bec0ad // indirect
google.golang.org/grpc v1.47.0 // indirect
gopkg.in/ini.v1 v1.51.0 // indirect
gopkg.in/square/go-jose.v2 v2.3.1 // indirect
gopkg.in/square/go-jose.v2 v2.5.1 // indirect
)
replace github.com/nats-io/nats-streaming-server v0.11.2 => github.com/influxdata/nats-streaming-server v0.11.3-0.20201112040610-c277f7560803

go.sum

@ -50,7 +50,8 @@ github.com/Azure/azure-pipeline-go v0.2.3 h1:7U9HBg1JFK3jHl5qmo4CTZKFTVgMwdFHMVt
github.com/Azure/azure-pipeline-go v0.2.3/go.mod h1:x841ezTBIMG6O3lAcl8ATHnsOPVl2bqk7S3ta6S6u4k=
github.com/Azure/azure-storage-blob-go v0.14.0 h1:1BCg74AmVdYwO3dlKwtFU1V0wU2PZdREkXvAmZJRUlM=
github.com/Azure/azure-storage-blob-go v0.14.0/go.mod h1:SMqIBi+SuiQH32bvyjngEewEeXoPfKMgWlBDaYf6fck=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 h1:UQHMgLO+TxOElx5B5HZ4hJQsoJ/PvUvKRhJHDQXO8P8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/Azure/go-autorest v14.2.0+incompatible h1:V5VMDjClD3GiElqLWO7mz2MxNAK/vTfRHdAubSIPRgs=
github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest/autorest v0.11.9 h1:P0ZF0dEYoUPUVDQo3mA1CvH5b8mKev7DDcmTwauuNME=
@@ -72,8 +73,8 @@ github.com/Azure/go-autorest/logger v0.2.1/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZ
github.com/Azure/go-autorest/tracing v0.6.0 h1:TYi4+3m5t6K48TGI9AUdb+IzbnSxvnvUMfuitfgcfuo=
github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/toml v0.4.1 h1:GaI7EiDXDRfa8VshkTj7Fym7ha+y8/XxIgD2okUIjLw=
github.com/BurntSushi/toml v0.4.1/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
github.com/BurntSushi/toml v1.2.1 h1:9F2/+DoOYIOksmaJFPw1tGFy1eDnIJXg+UHjuD8lTak=
github.com/BurntSushi/toml v1.2.1/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/DATA-DOG/go-sqlmock v1.4.1 h1:ThlnYciV1iM/V0OSF/dtkqWb6xo5qITT1TJBG1MRDJM=
github.com/DATA-DOG/go-sqlmock v1.4.1/go.mod h1:f/Ixk793poVmq4qj/V1dPUg2JEAKC73Q5eFN3EC/SaM=
@@ -86,13 +87,13 @@ github.com/Masterminds/sprig v2.16.0+incompatible h1:QZbMUPxRQ50EKAq3LFMnxddMu88
github.com/Masterminds/sprig v2.16.0+incompatible/go.mod h1:y6hNFY5UBTIWBxnzTeuNhlNS5hqE0NB0E6fgfo2Br3o=
github.com/Masterminds/squirrel v1.5.0 h1:JukIZisrUXadA9pl3rMkjhiamxiB0cXiu+HGp/Y8cY8=
github.com/Masterminds/squirrel v1.5.0/go.mod h1:NNaOrjSoIDfDA40n7sr2tPNZRfjzjA400rg+riTZj10=
github.com/Microsoft/go-winio v0.4.11 h1:zoIOcVf0xPN1tnMVbTtEdI+P8OofVk3NObnwOQ6nK2Q=
github.com/Microsoft/go-winio v0.4.11/go.mod h1:VhR8bwka0BXejwEJY73c50VrPtXAaKcyvVC4A4RozmA=
github.com/Microsoft/go-winio v0.5.2 h1:a9IhgEQBCUEk6QCdml9CiJGhAws+YwffDHEMp1VMrpA=
github.com/Microsoft/go-winio v0.5.2/go.mod h1:WpS1mjBmmwHBEWmogvA2mj8546UReBk4v8QkMxJ6pZY=
github.com/Microsoft/hcsshim v0.9.6 h1:VwnDOgLeoi2du6dAznfmspNqTiwczvjv4K7NxuY9jsY=
github.com/NYTimes/gziphandler v1.0.1 h1:iLrQrdwjDd52kHDA5op2UBJFjmOb9g+7scBan4RN8F0=
github.com/NYTimes/gziphandler v1.0.1/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2 h1:+vx7roKuyA63nhn5WAunQHLTznkw5W8b1Xc0dNjp83s=
github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2/go.mod h1:HBCaDeC1lPdgDeDbhX8XFpy1jqjK0IBG8W5K+xYqA0w=
github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5/go.mod h1:lmUJ/7eu/Q8D7ML55dXQrVaamCz2vxCfdQBasLZfHKk=
github.com/OneOfOne/xxhash v1.2.2 h1:KMrpdQIwFcEqXDklaen+P1axHaj9BSKzvpUUfnHldSE=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/RoaringBitmap/roaring v0.4.16 h1:NholfewybRLOwACgfqfzn/N5xa6keKNs4fP00t0cwLo=
@@ -108,6 +109,7 @@ github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuy
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
github.com/andreyvit/diff v0.0.0-20170406064948-c7f18ee00883 h1:bvNMNQO63//z+xNgfBlViaCIJKLlCJ6/fmUseuG0wVQ=
github.com/andreyvit/diff v0.0.0-20170406064948-c7f18ee00883/go.mod h1:rCTlJbsFo29Kk6CurOXKm700vrz8f0KW0JNfpkRJY/8=
github.com/andybalholm/brotli v1.0.3/go.mod h1:fO7iG3H7G2nSZ7m0zPUDn85XEX2GTukHGRSepvi9Eig=
@@ -129,8 +131,8 @@ github.com/aryann/difflib v0.0.0-20170710044230-e206f873d14a/go.mod h1:DAHtR1m6l
github.com/aws/aws-lambda-go v1.13.3/go.mod h1:4UKl9IzQMoD+QF79YdCuzCwp8VbmG4VAQwij/eHl5CU=
github.com/aws/aws-sdk-go v1.27.0/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-sdk-go v1.29.16/go.mod h1:1KvfttTE3SPKMpo8g2c6jL3ZKfXtFvKscTgahTma5Xg=
github.com/aws/aws-sdk-go v1.30.12 h1:KrjyosZvkpJjcwMk0RNxMZewQ47v7+ZkbQDXjWsJMs8=
github.com/aws/aws-sdk-go v1.30.12/go.mod h1:5zCpMtNQVjRREroY7sYe8lOMRSxkhG6MZveU8YkpAk0=
github.com/aws/aws-sdk-go v1.34.0 h1:brux2dRrlwCF5JhTL7MUT3WUwo9zfDHZZp3+g3Mvlmo=
github.com/aws/aws-sdk-go v1.34.0/go.mod h1:5zCpMtNQVjRREroY7sYe8lOMRSxkhG6MZveU8YkpAk0=
github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g=
github.com/aws/aws-sdk-go-v2 v1.11.0 h1:HxyD62DyNhCfiFGUHqJ/xITD6rAjJ7Dm/2nLxLmO4Ag=
github.com/aws/aws-sdk-go-v2 v1.11.0/go.mod h1:SQfA+m2ltnu1cA0soUkj4dRSsmITiVQUJvBIZjzfPyQ=
@@ -184,14 +186,19 @@ github.com/c-bata/go-prompt v0.2.2 h1:uyKRz6Z6DUyj49QVijyM339UJV9yhbr70gESwbNU3e
github.com/cactus/go-statsd-client/statsd v0.0.0-20191106001114-12b4e2b38748/go.mod h1:l/bIBLeOl9eX+wxJAzxS4TveKRtAqlyDpHjhkfO0MEI=
github.com/casbin/casbin/v2 v2.1.2/go.mod h1:YcPU1XXisHhLzuxH9coDNf2FbKpjGlbCg3n9yuLkIJQ=
github.com/cenkalti/backoff v2.2.1+incompatible/go.mod h1:90ReRw6GdpyfrHakVjL/QHaoyV4aDUVVkXQJJJ3NXXM=
github.com/cenkalti/backoff/v4 v4.2.0 h1:HN5dHm3WBOgndBH6E8V0q2jIYIR3s9yglV8k/+MN3u4=
github.com/cenkalti/backoff/v4 v4.2.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.1.2 h1:YRXhKfTDauu4ajMg1TPgFO5jnlC2HCbmLXMcTG5cbYE=
github.com/cespare/xxhash/v2 v2.1.2/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/checkpoint-restore/go-criu/v5 v5.3.0/go.mod h1:E/eQpaFtUKGOOSEBZgmKAcn+zUUwWxqcaKZlF54wK8E=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/cilium/ebpf v0.7.0/go.mod h1:/oI2+1shJiTGAMgl6/RgJr36Eo1jzrRcAWbcXO2usCA=
github.com/clbanning/x2j v0.0.0-20191024224557-825249438eec/go.mod h1:jMjuTZXRI4dUb/I5gc9Hdhagfvm9+RyrPryS/auMzxE=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
@@ -201,14 +208,20 @@ github.com/cncf/udpa/go v0.0.0-20210930031921-04548b0d99d4/go.mod h1:6pvJx4me5XP
github.com/cncf/xds/go v0.0.0-20210312221358-fbca930ec8ed/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20210922020428-25de7278fc84/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20211001041855-01bcc9b48dfe/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8=
github.com/codahale/hdrhistogram v0.0.0-20161010025455-3a0bb77429bd/go.mod h1:sE/e/2PUdi/liOCUjSTXgM1o87ZssimdTWN964YiIeI=
github.com/containerd/console v1.0.3/go.mod h1:7LqA/THxQ86k76b8c/EMSiaJ3h1eZkMkXar0TQ1gf3U=
github.com/containerd/containerd v1.6.18 h1:qZbsLvmyu+Vlty0/Ex5xc0z2YtKpIsb5n45mAMI+2Ns=
github.com/containerd/containerd v1.6.18/go.mod h1:1RdCUu95+gc2v9t3IL+zIlpClSmew7/0YS8O5eQZrOw=
github.com/containerd/continuity v0.3.0 h1:nisirsYROK15TAMVukJOUyGJjz4BNQJBVsNvAXZJ/eg=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
@@ -218,6 +231,7 @@ github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ3
github.com/creack/pty v1.1.17 h1:QeVUsEDNrLBW4tMgZHvxy18sKtr6VI492kBhUfhDJNI=
github.com/creack/pty v1.1.17/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr4O4=
github.com/cyberdelia/templates v0.0.0-20141128023046-ca7fffd4298c/go.mod h1:GyV+0YP4qX0UQ7r2MoYZ+AvYDp12OF5yg4q8rGnyNh4=
github.com/cyphar/filepath-securejoin v0.2.3/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@@ -231,15 +245,15 @@ github.com/dgryski/go-bitstream v0.0.0-20180413035011-3522498ce2c8/go.mod h1:VMa
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/dimchansky/utfbom v1.1.0 h1:FcM3g+nofKgUteL8dm/UpdRXNC9KmADgTpLKsu0TRo4=
github.com/dimchansky/utfbom v1.1.0/go.mod h1:rO41eb7gLfo8SF1jd9F8HplJm1Fewwi4mQvIirEdv+8=
github.com/docker/distribution v2.7.0+incompatible h1:neUDAlf3wX6Ml4HdqTrbcOHXtfRN0TFIwt6YFL7N9RU=
github.com/docker/distribution v2.7.0+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v0.7.3-0.20180815000130-e05b657120a6/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker v1.13.1 h1:IkZjBSIc8hBjLpqeAbeE5mca5mNgeatLHBy3GO78BWo=
github.com/docker/docker v1.13.1/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/distribution v2.8.2+incompatible h1:T3de5rq0dB1j30rp0sA2rER+m322EBzniBPB6ZIzuh8=
github.com/docker/distribution v2.8.2+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v23.0.3+incompatible h1:9GhVsShNWz1hO//9BNg/dpMnZW25KydO4wtVxWAIbho=
github.com/docker/docker v23.0.3+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
github.com/docker/go-units v0.3.3 h1:Xk8S3Xj5sLGlG5g67hJmYMmUgXv5N4PhkjJHHqrwnTk=
github.com/docker/go-units v0.3.3/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo=
@@ -265,6 +279,7 @@ github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.m
github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210512163311-63b5d3c536b0/go.mod h1:hliV/p42l8fGbc6Y9bQ70uLwIvmJyVE5k4iMKlh8wCQ=
github.com/envoyproxy/go-control-plane v0.9.10-0.20210907150352-cf90f659a021/go.mod h1:AFq3mo9L8Lqqiid3OhADV3RfLJnjiw63cSpi+fDTRC0=
github.com/envoyproxy/go-control-plane v0.10.2-0.20220325020618-49ff273808a1/go.mod h1:KJwIaB5Mv44NWtYuAOFCVOjcI94vtpEz2JU/D2v6IjE=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
@@ -281,10 +296,12 @@ github.com/franela/goblin v0.0.0-20200105215937-c9ffbefa60db/go.mod h1:7dvUGVsVB
github.com/franela/goreq v0.0.0-20171204163338-bcd34c9993f8/go.mod h1:ZhphrRTfi2rbfLwlschooIH4+wKKDR4Pdxhh+TRoA20=
github.com/frankban/quicktest v1.11.0/go.mod h1:K+q6oSqb0W0Ininfk863uOk1lMy69l/P6txr3mVT54s=
github.com/frankban/quicktest v1.11.2/go.mod h1:K+q6oSqb0W0Ininfk863uOk1lMy69l/P6txr3mVT54s=
github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k=
github.com/frankban/quicktest v1.13.0 h1:yNZif1OkDfNoDfb9zZa9aXIpejNR4F23Wely0c+Qdqk=
github.com/frankban/quicktest v1.13.0/go.mod h1:qLE0fzW0VuyUAJgPU19zByoIr0HtCHN/r/VLSOOIySU=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.5.4 h1:jRbGcIw6P2Meqdwuo0H1p6JVLbL5DHKAKlYndzMwVZI=
github.com/fsnotify/fsnotify v1.5.4/go.mod h1:OVB6XrOHzAwXMpEM7uPOzcehqUV2UqJxmVXmkdnm1bU=
github.com/gabriel-vasile/mimetype v1.4.0 h1:Cn9dkdYsMIu56tGho+fqzh7XmvY2YyGU0FnbhiOsEro=
github.com/gabriel-vasile/mimetype v1.4.0/go.mod h1:fA8fi6KUiG7MgQQ+mEWotXoEOvmxRtOJlERCzSmRvr8=
github.com/getkin/kin-openapi v0.53.0/go.mod h1:7Yn5whZr5kJi6t+kShccXS8ae1APpYTW6yheSwk8Yi4=
@@ -306,6 +323,7 @@ github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.10.0/go.mod h1:xUsJbQ/Fp4kEt7AFgCuvyX4a71u8h9jB8tj/ORgOZ7o=
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
github.com/go-latex/latex v0.0.0-20210118124228-b3d85cf34e07/go.mod h1:CO1AlKB2CSIqUrmQPqA0gdRIlnLEY0gK5JGjh37zN5U=
github.com/go-ldap/ldap v3.0.2+incompatible/go.mod h1:qfd9rJvER9Q0/D/Sqn1DfHRoBp40uXYvFoEVrNEPqRc=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
@@ -315,14 +333,17 @@ github.com/go-logr/logr v0.4.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTg
github.com/go-openapi/jsonpointer v0.19.5/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-sql-driver/mysql v1.4.0/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
github.com/go-sql-driver/mysql v1.5.0 h1:ozyZYNQW3x3HtqT1jira07DN2PArx2v7/mN66gGcHOs=
github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
github.com/go-sql-driver/mysql v1.6.0 h1:BCTh4TKNUYmOmMUcQ3IipzF5prigylS7XXjEkfCHuOE=
github.com/go-sql-driver/mysql v1.6.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
github.com/go-stack/stack v1.8.0 h1:5SgMzNM5HxrEjV0ww2lTmX6E2Izsfxas4+YHWRs3Lsk=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-test/deep v1.0.1/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA=
github.com/goccy/go-json v0.7.10/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
github.com/goccy/go-json v0.9.6 h1:5/4CtRQdtsX0sal8fdVhTaiMN01Ri8BExZZ8iRmHQ6E=
github.com/goccy/go-json v0.9.6/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.0.6/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gofrs/uuid v3.3.0+incompatible h1:8K4tyRfvU1CYPgJsveYFQMhpFd/wXNM7iK6rR7UHz84=
github.com/gofrs/uuid v3.3.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
github.com/gogo/googleapis v1.1.0/go.mod h1:gf4bu3Q80BeJ6H1S1vYPm8/ELATdvryBaNFGgqEef3s=
@@ -346,8 +367,9 @@ github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4er
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e h1:1r7pUrabqp18hOBcwBwiTsbnFeTZHV9eER/QT5JVZxY=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
@@ -355,8 +377,9 @@ github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt
github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4=
github.com/golang/mock v1.5.0 h1:jlYHihg//f7RRwuPfptm04yp4s7O6Kw8EZiVYIGcH0g=
github.com/golang/mock v1.5.0/go.mod h1:CWnOUgYIOo4TcNZ0wHX3YZCqsaM1I1Jvs6v3mP3KVu8=
github.com/golang/mock v1.6.0 h1:ErTB+efbowRARo13NNdxyJji2egdxLGQhRaY+DUumQc=
github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
@@ -400,8 +423,8 @@ github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.7 h1:81/ik6ipDQS2aGcBfIN5dHDB36BwrStyeAQquSYCV4o=
github.com/google/go-cmp v0.5.7/go.mod h1:n+brtR0CgQNWTVd5ZUFpTBC8YFBDLK/h/bpaJ8/DtOE=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-jsonnet v0.17.0 h1:/9NIEfhK1NQRKl3sP2536b2+x5HnZMdql7x3yK/l8JY=
github.com/google/go-jsonnet v0.17.0/go.mod h1:sOcuej3UW1vpPTZOr8L7RQimqai1a57bt5j22LzGZCw=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
@@ -448,8 +471,9 @@ github.com/grpc-ecosystem/grpc-gateway v1.9.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
github.com/hashicorp/consul/api v1.3.0/go.mod h1:MmDNSzIMUjNpY/mQ398R4bk2FnqQLoPndWW5VkKPlCE=
github.com/hashicorp/consul/sdk v0.3.0/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-cleanhttp v0.5.1 h1:dH3aiDG9Jvb5r5+bYHsikaOUIpcM0xvgMXVoDkXMzJM=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
@@ -459,8 +483,9 @@ github.com/hashicorp/go-hclog v0.9.2 h1:CG6TE5H9/JXsFWJCfoIVpKFIkFe6ysEuHirp4DxC
github.com/hashicorp/go-hclog v0.9.2/go.mod h1:5CU+agLiy3J7N7QjHK5d05KxGsuXiQLrjA0H7acj2lQ=
github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
github.com/hashicorp/go-multierror v1.0.0 h1:iVjPR7a6H0tWELX5NxNe7bYopibicUzc7uPribsnS6o=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hashicorp/go-plugin v1.0.0/go.mod h1:++UyYGoz3o5w9ZzAdZxtQKrWWP+iqPBn3cQptSMzBuY=
github.com/hashicorp/go-retryablehttp v0.5.3/go.mod h1:9B5zBasrRhHXnJnui7y6sL7es7NDiJgTc6Er0maI1Xs=
github.com/hashicorp/go-retryablehttp v0.6.4 h1:BbgctKO892xEyOXnGiaAwIoSq1QZ/SS4AhjoAh9DnfY=
@@ -492,7 +517,6 @@ github.com/hashicorp/yamux v0.0.0-20180604194846-3520598351bb/go.mod h1:+NfK9FKe
github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM=
github.com/hinshun/vt10x v0.0.0-20220119200601-820417d04eec h1:qv2VnGeEQHchGaZ/u7lxST/RaJw+cv273q79D81Xbog=
github.com/hinshun/vt10x v0.0.0-20220119200601-820417d04eec/go.mod h1:Q48J4R4DvxnHolD5P8pOtXigYlRuPLGl6moFx3ulM68=
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/huandu/xstrings v1.0.0 h1:pO2K/gKgKaat5LdpAhxhluX2GPQMaI3W5FUz/I/UnWk=
github.com/huandu/xstrings v1.0.0/go.mod h1:4qWG/gcEcfX4z/mBDHJ++3ReCw9ibxbsNJbcucJdbSo=
@@ -500,14 +524,14 @@ github.com/hudl/fargo v1.3.0/go.mod h1:y3CKSmjA+wD2gak7sUSXTAoopbhU08POFhmITJgmK
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/imdario/mergo v0.3.4/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.9 h1:UauaLniWCFHWd+Jp9oCEkTBj8VO/9DKg3PV3VCNMDIg=
github.com/imdario/mergo v0.3.9/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.12 h1:b6R2BslTbIEToALKP7LxUvijTsNI9TAe80pLWN2g/HU=
github.com/imdario/mergo v0.3.12/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/influxdata/cron v0.0.0-20201006132531-4bb0a200dcbe h1:7j4SdN/BvQwN6WoUq7mv0kg5U9NhnFBxPGMafYRKym0=
github.com/influxdata/cron v0.0.0-20201006132531-4bb0a200dcbe/go.mod h1:XabtPPW2qsCg0tl+kjaPU+cFS+CjQXEXbT1VJvHT4og=
github.com/influxdata/flux v0.188.1 h1:mwduZYGUOKewKs8Smhp64hGUpPEuQWrCXS/Xuh+1OIo=
github.com/influxdata/flux v0.188.1/go.mod h1:HdQg0JxHSQhJhEProUY/7QRi9eqnM0HP5L1fH3EtS/c=
github.com/influxdata/flux v0.194.3 h1:3PKCi41NrUfFSz3Dp2Rt2Rs+bREP9VPRgrq8H14Ymag=
github.com/influxdata/flux v0.194.3/go.mod h1:hAo8pb/Rxp6afj8/roEzxANO5PNVObAdXtv2dBp1E6U=
github.com/influxdata/gosnowflake v1.6.9 h1:BhE39Mmh8bC+Rvd4QQsP2gHypfeYIH1wqW1AjGWxxrE=
github.com/influxdata/gosnowflake v1.6.9/go.mod h1:9W/BvCXOKx2gJtQ+jdi1Vudev9t9/UDOEHnlJZ/y1nU=
github.com/influxdata/httprouter v1.3.1-0.20191122104820-ee83e2772f69 h1:WQsmW0fXO4ZE/lFGIE84G6rIV5SJN3P3sjIXAP1a8eU=
@@ -543,10 +567,12 @@ github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfC
github.com/jmoiron/sqlx v1.3.4 h1:wv+0IJZfL5z0uZoUjlpKgHkgaFSYD+r9CfrXjEXsO7w=
github.com/jmoiron/sqlx v1.3.4/go.mod h1:2BljVx/86SuTyjE+aPYlHCTNvZrnJXghYGpNiXLBMCQ=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.8/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jstemmer/go-junit-report v0.9.1 h1:6QPYqodiu3GuPL+7mfx+NwDdp2eTkp9IfEUpgAwUN0o=
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
@@ -555,6 +581,7 @@ github.com/jsternberg/zap-logfmt v1.2.0/go.mod h1:kz+1CUmCutPWABnNkOu9hOHKdT2q3T
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/jung-kurt/gofpdf v1.0.0/go.mod h1:7Id9E/uU8ce6rXgefFLlgrJj/GYY22cpxn+r32jIOes=
github.com/jung-kurt/gofpdf v1.0.3-0.20190309125859-24315acbbda5/go.mod h1:7Id9E/uU8ce6rXgefFLlgrJj/GYY22cpxn+r32jIOes=
github.com/jwilder/encoding v0.0.0-20170811194829-b4e1701a28ef h1:2jNeR4YUziVtswNP9sEFAI913cVrzH85T+8Q6LpYbT0=
@@ -574,10 +601,11 @@ github.com/klauspost/compress v1.14.2 h1:S0OHlFk/Gbon/yauFJ4FfJJF5V0fc5HbBTJazi2
github.com/klauspost/compress v1.14.2/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.1 h1:Fmg33tUaq4/8ym9TJN1x7sLJnHVwhP33CNkpYV/7rwI=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.0 h1:WgNl7dwNpEZ6jJ9k1snq4pZsg7DOEN8hP9Xw0Tsjwk0=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
@@ -594,8 +622,9 @@ github.com/lightstep/lightstep-tracer-common/golang/gogo v0.0.0-20190605223551-b
github.com/lightstep/lightstep-tracer-go v0.18.1/go.mod h1:jlF1pusYV4pidLvZ+XD0UBX0ZE6WURAspgAczcDHrL4=
github.com/lyft/protoc-gen-validate v0.0.13/go.mod h1:XbGvPuh87YZc5TdIa2/I4pLk0QoUACkjt2znoq26NVQ=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.1 h1:ZC2Vc7/ZFkGmsVC9KvOjumD+G5lXy2RtTKyzRKO2BQ4=
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.7 h1:IeQXZAiQcpL9mgcAe1Nu6cX9LLw6ExEHKjN0VQdvPDY=
github.com/magiconair/properties v1.8.7/go.mod h1:Dhd985XPs7jluiymwWYZ0G4Z61jb3vdS329zhj2hYo0=
github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/matryer/moq v0.0.0-20190312154309-6cfb0558e1bd/go.mod h1:9ELz6aaclSIGnZBoaSLZ3NAl1VTufbOrXBPvtcy6WiQ=
@@ -615,20 +644,22 @@ github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hd
github.com/mattn/go-isatty v0.0.9/go.mod h1:YNRxwqDuOph6SZLI9vUUz6OYw3QyUt7WiY2yME+cCiQ=
github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-isatty v0.0.14 h1:yVuAays6BHfxijgZPzw+3Zlu5yQgKGP2/hcQbHb7S9Y=
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
github.com/mattn/go-isatty v0.0.16 h1:bq3VjFmv/sOjHtdEhmkEV4x1AJtvUvOJ2PFAZ5+peKQ=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/mattn/go-runewidth v0.0.13 h1:lTGmDsbAYt5DmK6OnoV7EuIF1wEIFAcxld6ypU4OSgU=
github.com/mattn/go-sqlite3 v1.14.6/go.mod h1:NyWgC/yNuGj7Q9rpYnZvas74GogHl5/Z4A/KQRfk6bU=
github.com/mattn/go-sqlite3 v1.14.7 h1:fxWBnXkxfM6sRiuH3bqJ4CfzZojMOLVc0UTsTglEghA=
github.com/mattn/go-sqlite3 v1.14.7/go.mod h1:NyWgC/yNuGj7Q9rpYnZvas74GogHl5/Z4A/KQRfk6bU=
github.com/mattn/go-tty v0.0.4 h1:NVikla9X8MN0SQAqCYzpGyXv0jY7MNl3HOWD2dkle7E=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/matttproud/golang_protobuf_extensions v1.0.4 h1:mmDVorXM7PCGKw94cs5zkfA9PSy5pEvNWRP0ET0TIVo=
github.com/matttproud/golang_protobuf_extensions v1.0.4/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b h1:j7+1HpAFS1zy5+Q4qx1fWh90gTKwiN4QCGoY9TWyyO4=
github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/miekg/dns v1.1.22 h1:Jm64b3bO9kP43ddLjL2EY3Io6bmy1qGb9Xxz6TqS6rc=
github.com/miekg/dns v1.1.25 h1:dFwPR6SfLtrSwgDcIq2bcU/gVutB4sNApq2HBdqcakg=
github.com/mileusna/useragent v0.0.0-20190129205925-3e331f0949a5 h1:pXqZHmHOz6LN+zbbUgqyGgAWRnnZEI40IzG3tMsXcSI=
github.com/mileusna/useragent v0.0.0-20190129205925-3e331f0949a5/go.mod h1:JWhYAp2EXqUtsxTKdeGlY8Wp44M7VxThC9FEoNGi2IE=
github.com/minio/asm2plan9s v0.0.0-20200509001527-cdd76441f9d8/go.mod h1:mC1jAcsrzbxHt8iiaC+zU4b1ylILSosueou12R++wfY=
@@ -649,13 +680,24 @@ github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh
github.com/mitchellh/reflectwalk v1.0.0/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
github.com/mna/pigeon v1.0.1-0.20180808201053-bb0192cfc2ae h1:mQO+oxi0kpii/TX+ltfTCFuYkOjEn53JhaOObiMuvnk=
github.com/mna/pigeon v1.0.1-0.20180808201053-bb0192cfc2ae/go.mod h1:Iym28+kJVnC1hfQvv5MUtI6AiFFzvQjHcvI4RFTG/04=
github.com/moby/patternmatcher v0.5.0 h1:YCZgJOeULcxLw1Q+sVR636pmS7sPEn1Qo2iAN6M7DBo=
github.com/moby/patternmatcher v0.5.0/go.mod h1:hDPoyOpDY7OrrMDLaYoY3hf52gNCR/YOUYxkhApJIxc=
github.com/moby/sys/mountinfo v0.5.0/go.mod h1:3bMD3Rg+zkqx8MRYPi7Pyb0Ie97QEBmdxbhnCLlSvSU=
github.com/moby/sys/sequential v0.5.0 h1:OPvI35Lzn9K04PBbCLW0g4LcFAJgHsvXsRyewg5lXtc=
github.com/moby/sys/sequential v0.5.0/go.mod h1:tH2cOOs5V9MlPiXcQzRC+eEyab644PWKGRYaaV5ZZlo=
github.com/moby/term v0.0.0-20221128092401-c43b287e0e0f h1:J/7hjLaHLD7epG0m6TBMGmp4NQ+ibBYLfeyJWdAIFLA=
github.com/moby/term v0.0.0-20221128092401-c43b287e0e0f/go.mod h1:15ce4BGCFxt7I5NQKT+HV0yEDxmf6fSysfEDiVo3zFM=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A=
github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
github.com/mrunalp/fileutils v0.5.0/go.mod h1:M1WthSahJixYnrXQl/DFQuteStB1weuxD2QJNHXfbSQ=
github.com/mschoch/smat v0.0.0-20160514031455-90eadee771ae h1:VeRdUYdCw49yizlSbMEn2SZ+gT+3IUKx8BqxyQdz+BY=
github.com/mschoch/smat v0.0.0-20160514031455-90eadee771ae/go.mod h1:qAyveg+e4CE+eKJXWVjKXM4ck2QobLqTDytGJbLLhJg=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/nats-io/jwt v0.3.0/go.mod h1:fRYCDE99xlTsqUzISS1Bi75UBJ6ljOJQOAAu5VglpSg=
github.com/nats-io/jwt v0.3.2/go.mod h1:/euKqTS1ZD+zzjYrY7pseZrTtWQSjujC7xjPc8wL6eU=
github.com/nats-io/nats-server/v2 v2.1.2/go.mod h1:Afk+wRZqkMQs/p45uXdrVLuab3gwv3Z8C4HTBu8GD/k=
@@ -664,21 +706,29 @@ github.com/nats-io/nkeys v0.1.0/go.mod h1:xpnFELMwJABBLVhffcfd1MZx6VsNRFpEugbxzi
github.com/nats-io/nkeys v0.1.3/go.mod h1:xpnFELMwJABBLVhffcfd1MZx6VsNRFpEugbxziKVo7w=
github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/nxadm/tail v1.4.4 h1:DQuhQpB1tVlglWS2hLQ5OV6B5r8aGxSrPc5Qo6uTN78=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/oklog/oklog v0.3.2/go.mod h1:FCV+B7mhrz4o+ueLpx+KqkyXRGMWOYEvfiXtdGtbWGs=
github.com/oklog/run v1.0.0/go.mod h1:dlhp/R75TPv97u0XWUtDeV/lRKWPKSdTuV0TZvrmrQA=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/olekukonko/tablewriter v0.0.0-20170122224234-a0225b3f23b5/go.mod h1:vsDQFd/mU46D+Z4whnwzcISnGGzXWMclvtLoiIKAKIo=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.11.0 h1:JAKSXpt1YjtLA7YpPiqO9ss6sNXEsPfSGdwN0UHqzrw=
github.com/onsi/ginkgo v1.11.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1 h1:mFwc4LvZ0xpSvDZ3E+k8Yte0hLOMxXUlP+yXtJqkYfQ=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.8.1 h1:C5Dqfs/LeauYDX0jJXIe2SWmwCbGzx9yF8C8xy3Lh34=
github.com/onsi/gomega v1.8.1/go.mod h1:Ho0h+IUsWyvy1OpqCwxlQ/21gkhVunqlU8fDGcoTdcA=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.3 h1:gph6h/qe9GSUw1NhH1gp+qb+h8rXD8Cy60Z32Qw3ELA=
github.com/onsi/gomega v1.10.3/go.mod h1:V9xEwhxec5O8UDM77eCW8vLymOMltsqPVYWrpDsH8xc=
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7/go.mod h1:HzydrMdWErDVzsI23lYNej1Htcns9BCg93Dk0bBINWk=
github.com/opencontainers/go-digest v1.0.0-rc1 h1:WzifXhOVOEOuFYOJAW6aQqW0TooG2iki3E3Ii+WN7gQ=
github.com/opencontainers/go-digest v1.0.0-rc1/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
github.com/opencontainers/image-spec v1.0.1/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.0-rc2 h1:2zx/Stx4Wc5pIPDvIxHXvXtQFW/7XWJGmnM7r3wg034=
github.com/opencontainers/image-spec v1.1.0-rc2/go.mod h1:3OVijpioIKYWTqjiG0zfF6wvoJ4fAXGbjdZuI2NgsRQ=
github.com/opencontainers/runc v1.1.5 h1:L44KXEpKmfWDcS02aeGm8QNTFXTo2D+8MYGDIJ/GDEs=
github.com/opencontainers/runc v1.1.5/go.mod h1:1J5XiS+vdZ3wCyZybsuxXZWGrgSr8fFJHLXuG2PsnNg=
github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/selinux v1.10.0/go.mod h1:2i0OySw99QjzBBQByd1Gr9gSjvuho1lHsJxIJ3gGbJI=
github.com/opentracing-contrib/go-observer v0.0.0-20170622124052-a52f23424492/go.mod h1:Ngi6UdF0k5OKD5t5wlmGhe/EDKPoUM3BXZSSfIuJbis=
github.com/opentracing/basictracer-go v1.0.0/go.mod h1:QfBfYuafItcjQuMwinw9GhYKwFXS9KnPs5lxoYwgW74=
github.com/opentracing/opentracing-go v1.0.2/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
@@ -693,8 +743,9 @@ github.com/pact-foundation/pact-go v1.0.4/go.mod h1:uExwJY4kCzNPcHRj+hCR/HBbOOIw
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pascaldekloe/goe v0.1.0/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pborman/uuid v1.2.0/go.mod h1:X/NO0urCmaxf9VXbdlT7C2Yzkj2IKimNn4k+gtPdI/k=
github.com/pelletier/go-toml v1.2.0 h1:T5zMGML61Wp+FlcbWjRDT7yAxhJNAiPPLOFECq181zc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.9.5 h1:4yBQzkHv+7BHq2PQUZF3Mx0IYxG7LsP222s7Agd3ve8=
github.com/pelletier/go-toml v1.9.5/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
github.com/performancecopilot/speed v3.0.0+incompatible/go.mod h1:/CLtqpZ5gBg1M9iaPbIdPPGyKcA8hKdoy6hAWba7Yac=
github.com/philhofer/fwd v1.0.0 h1:UbZqGr5Y38ApvM/V/jEljVxwocdweyH+vmYvRPBnbqQ=
github.com/philhofer/fwd v1.0.0/go.mod h1:gk3iGcWd9+svBvR0sR+KPcfE+RNWozjowpeBVG3ZVNU=
@@ -723,8 +774,10 @@ github.com/prometheus/client_golang v0.9.3-0.20190127221311-3c4408c8b829/go.mod
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.3.0/go.mod h1:hJaj2vgQTGQmVCsAACORcieXFeDPbaTKGT+JTgUa3og=
github.com/prometheus/client_golang v1.5.1 h1:bdHYieyGlH+6OLEk2YQha8THib30KP0/yD0YH9m6xcA=
github.com/prometheus/client_golang v1.5.1/go.mod h1:e9GMxYsXl05ICDXkRhurwBS4Q3OK1iX/F2sw+iXX5zU=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_golang v1.11.1 h1:+4eQaD7vAZ6DsfsxB15hbE0odUjGI5ARs9yskGu1v4s=
github.com/prometheus/client_golang v1.11.1/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
@@ -737,15 +790,19 @@ github.com/prometheus/common v0.2.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y8
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.7.0/go.mod h1:DjGbpBbp5NYNiECxcL/VnbXCCaQpKd3tt26CguLLsqA=
github.com/prometheus/common v0.9.1 h1:KOMtN28tlbam3/7ZKEYKHhKoJZYYj3gMH4uc62x7X7U=
github.com/prometheus/common v0.9.1/go.mod h1:yhUN8i9wzaXS3w1O07YhxHEBxD+W35wd8bs7vj7HSQ4=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
github.com/prometheus/common v0.30.0 h1:JEkYlQnpzrzQFxi6gnukFPdQ+ac82oRhzMcIduJu/Ug=
github.com/prometheus/common v0.30.0/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A=
github.com/prometheus/procfs v0.0.11 h1:DhHlBtkHWPYi8O2y31JkK0TF+DGM+51OopZjH/Ia5qI=
github.com/prometheus/procfs v0.0.11/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/procfs v0.7.3 h1:4jVXhlkAyzOScmCkXBTOLRLTz8EeU+eyjrwB/EPq0VU=
github.com/prometheus/procfs v0.7.3/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/retailnext/hllpp v1.0.1-0.20180308014038-101a6d2f8b52 h1:RnWNS9Hlm8BIkjr6wx8li5abe0fr73jljLycdfemTp0=
@@ -754,6 +811,7 @@ github.com/rivo/uniseg v0.2.0 h1:S1pD9weZBuJdFmowNwbpi7BJ8TNftyUImj/0WQi72jY=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.8.1 h1:geMPLpDpQOgVyCg5z5GoRwLHepNdb71NXb67XFkP+Eg=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/rs/zerolog v1.21.0/go.mod h1:ZPhntP/xmq1nnND05hhpAh2QMhSsA4UN3MGZ6O2J3hM=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
@@ -763,10 +821,8 @@ github.com/ryanuber/columnize v2.1.0+incompatible/go.mod h1:sm1tb6uqfes/u+d4ooFo
github.com/ryanuber/go-glob v1.0.0 h1:iQh3xXAumdQ+4Ufa5b25cRpC5TYKlno6hsv6Cb3pkBk=
github.com/ryanuber/go-glob v1.0.0/go.mod h1:807d1WSdnB0XRJzKNil9Om6lcp/3a0v4qIHxIXzX/Yc=
github.com/samuel/go-zookeeper v0.0.0-20190923202752-2cc03de413da/go.mod h1:gi+0XIa01GRL2eRQVjQkKGqKF3SF9vZR/HnPullcV2E=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/satori/go.uuid v1.2.1-0.20181028125025-b2ce2384e17b h1:gQZ0qzfKHQIybLANtM3mBXNUtOfsCFXeTsnBqCsx1KM=
github.com/satori/go.uuid v1.2.1-0.20181028125025-b2ce2384e17b/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/seccomp/libseccomp-golang v0.9.2-0.20220502022130-f33da4d89646/go.mod h1:JA8cRccbGaA1s33RQf7Y1+q9gHmZX1yB/z9WDN1C6fg=
github.com/segmentio/kafka-go v0.2.0 h1:HtCSf6B4gN/87yc5qTl7WsxPKQIIGXLPPM1bMCPOsoY=
github.com/segmentio/kafka-go v0.2.0/go.mod h1:X6itGqS9L4jDletMsxZ7Dz+JFWxM6JHfPOCvTvk+EJo=
github.com/sergi/go-diff v1.1.0 h1:we8PVUC3FE2uYfodKH/nBHMSetSfHDR6scGdBi+erh0=
@@ -774,8 +830,11 @@ github.com/sergi/go-diff v1.1.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNX
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.8.1 h1:dJKuHgqk1NNQlqoA6BTlM1Wf9DOH3NBjQyu0h9+AZZE=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/sirupsen/logrus v1.9.0 h1:trlNQbNUG3OdDrDil03MCb1H2o9nJ1x4/5LYw7byDE0=
github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/assertions v1.0.1 h1:voD4ITNjPL5jjBfgR/r8fPIIBrliWrWHeiJApdr3r4w=
github.com/smartystreets/assertions v1.0.1/go.mod h1:kHHU4qYBaI3q23Pp3VPrmWhuIUrLW/7eUrw0BU5VaoM=
@@ -787,8 +846,9 @@ github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4k
github.com/sony/gobreaker v0.4.1/go.mod h1:ZKptC7FHNvhBz7dN2LGjPVBz2sZJmc0/PkyDJOjmxWY=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72 h1:qLC7fQah7D6K1B0ujays3HV9gkFtllcxhzImRR7ArPQ=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2 h1:m8/z1t7/fwjysjQRYbP0RD+bUIF/8tJwPdEZsI83ACI=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.2 h1:5jhuqJyZCZf2JRofRvN/nIFgIWNzPa3/Vz8mYylgbWc=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/cast v1.3.0 h1:oget//CVOEoFewqQxwr0Ej5yjygnqGkvggSE/gB35Q8=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
@@ -808,8 +868,9 @@ github.com/streadway/amqp v0.0.0-20190827072141-edfb9018d271/go.mod h1:AZpEONHx3
github.com/streadway/handy v0.0.0-20190108123426-d5acb3125c2a/go.mod h1:qNTQ5P5JnDBl6z3cMAg/SywNDC5ABu5ApDIw6lUbRmI=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0 h1:M2gUjqZET1qApGOWNSnZ49BAIMX4F/1plDv3+l31EJ4=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
@@ -818,12 +879,14 @@ github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.2/go.mod h1:R6va5+xMeoiuVRoj+gSkQ7d3FALtqAAGI1FQKckRals=
github.com/stretchr/testify v1.8.0 h1:pSgiaMZlXftHpm5L7V1+rVB+AZJydKsMxsQBIJw4PKk=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/testcontainers/testcontainers-go v0.0.0-20190108154635-47c0da630f72 h1:3dsrMloqeog2f5ZoQCWJbTPR/tKIDFePkB0zg3GLjY8=
github.com/testcontainers/testcontainers-go v0.0.0-20190108154635-47c0da630f72/go.mod h1:wt/nMz68+kIO4RoguOZzsdv1B3kTYw+SuIKyJYRQpgE=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/testcontainers/testcontainers-go v0.18.0 h1:8RXrcIQv5xX/uBOSmZd297gzvA7F0yuRA37/918o7Yg=
github.com/testcontainers/testcontainers-go v0.18.0/go.mod h1:rLC7hR2SWRjJZZNrUYiTKvUXCziNxzZiYtz9icTWYNQ=
github.com/tinylib/msgp v1.1.0 h1:9fQd+ICuRIu/ue4vxJZu6/LzxN0HwMds2nq/0cFvxHU=
github.com/tinylib/msgp v1.1.0/go.mod h1:+d+yLhGm8mzTaHzB+wgMYrodPfmZrzkirds8fDWklFE=
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
@@ -844,8 +907,10 @@ github.com/valyala/fasttemplate v1.0.1/go.mod h1:UQGH1tvbgY+Nz5t2n7tXsz52dQxojPU
github.com/valyala/fasttemplate v1.2.1/go.mod h1:KHLXt3tVN2HBp8eijSv/kGJopbvo7S+qRAEEKiv+SiQ=
github.com/vertica/vertica-sql-go v1.1.1 h1:sZYijzBbvdAbJcl4cYlKjR+Eh/X1hGKzukWuhh8PjvI=
github.com/vertica/vertica-sql-go v1.1.1/go.mod h1:fGr44VWdEvL+f+Qt5LkKLOT7GoxaWdoUCnPBU9h6t04=
github.com/willf/bitset v1.1.9 h1:GBtFynGY9ZWZmEC9sWuu41/7VBXPFCOAbCbqTflOg9c=
github.com/willf/bitset v1.1.9/go.mod h1:RjeCKbqT1RxIR/KWY6phxZiaY1IyutSBfGjNPySAYV4=
github.com/vishvananda/netlink v1.1.0/go.mod h1:cTgwzPIzzgDAYoQrMm0EdrjRUBkTqKYppBueQtXaqoE=
github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df/go.mod h1:JP3t17pCcGlemwknint6hfoeCVQrEMVwxRLRjXpq+BU=
github.com/willf/bitset v1.1.11 h1:N7Z7E9UvjW+sGsEl7k/SJrvY2reP1A07MrGuCjIOjRE=
github.com/willf/bitset v1.1.11/go.mod h1:83CECat5yLh5zVOf4P1ErAgKA5UDvKtgyUABdr3+MjI=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xlab/treeprint v1.0.0 h1:J0TkWtiuYgtdlrkkrDLISYBQ92M+X5m4LrIIMKrbDTs=
github.com/xlab/treeprint v1.0.0/go.mod h1:IoImgRak9i3zJyuxOKUP1v4UZd1tMoKkq/Cimt1uhCg=
@@ -919,8 +984,8 @@ golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPh
golang.org/x/crypto v0.0.0-20201221181555-eec23a3978ad/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20211117183948-ae814b36b871/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20220331220935-ae2d96664a29 h1:tkVvjkPTB7pnW3jnid7kNyAMPVWllTNOf/qKDze4p9o=
golang.org/x/crypto v0.0.0-20220331220935-ae2d96664a29/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.1.0 h1:MDRAIl0xIo9Io2xV565hzXHw3zVseKrJKodhohM5CjU=
golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw=
golang.org/x/exp v0.0.0-20180321215751-8460e604b9de/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20180807140117-3d87b88a115f/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
@@ -938,8 +1003,8 @@ golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EH
golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
golang.org/x/exp v0.0.0-20211216164055-b2b84827b756 h1:/5Bs7sWi0i3rOVO5KnM55OwugpsD4bRW1zywKoZjbkI=
golang.org/x/exp v0.0.0-20211216164055-b2b84827b756/go.mod h1:b9TAUYHmRtqA6klRHApnXMnj+OyLce4yF5cZCUbk2ps=
golang.org/x/exp/typeparams v0.0.0-20220218215828-6cf2b201936e h1:qyrTQ++p1afMkO4DPEeLGq/3oTsdlvdH4vqZUBWzUKM=
golang.org/x/exp/typeparams v0.0.0-20220218215828-6cf2b201936e/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk=
golang.org/x/exp/typeparams v0.0.0-20221208152030-732eee02a75a h1:Jw5wfR+h9mnIYH+OtGT2im5wV1YGGDora5vTv/aa5bE=
golang.org/x/exp/typeparams v0.0.0-20221208152030-732eee02a75a/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk=
golang.org/x/image v0.0.0-20180708004352-c73c2afc3b81/go.mod h1:ux5Hcp/YLpHSI86hEcLt0YII63i6oz57MZXIpbrjZUs=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
@@ -977,8 +1042,9 @@ golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.5.1/go.mod h1:5OXOZSfqPIIbmVBIIKWRFfZjPR0E5r58TLhUjH0a2Ro=
golang.org/x/mod v0.6.0-dev.0.20211013180041-c96bc1413d57/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY=
golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3 h1:kQgndtyPBW/JIYERgdxfwMYh3AVStj88WQTlNDi2a+o=
golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY=
golang.org/x/mod v0.7.0 h1:LapD9S96VoQRhi/GrNTqeBJFrUjs5UHCAtTlgwA5oZA=
golang.org/x/mod v0.7.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -1015,23 +1081,26 @@ golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/
golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20201006153459-a7d1128ccaa0/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210505024714-0287a6fb4125/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210614182718-04defd469f4e/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211118161319-6a13c67c3ce4/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220401154927-543a649e0bdd h1:zYlwaUHTmxuf6H7hwO2dgwqozQmH7zf4x+/qql4oVWc=
golang.org/x/net v0.0.0-20220401154927-543a649e0bdd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.7.0 h1:rJrUqqhjsgNp7KqAIc25s9pZnjU7TUcSY7HcVZjdn1g=
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -1044,8 +1113,9 @@ golang.org/x/oauth2 v0.0.0-20210218202405-ba52d332ba99/go.mod h1:KelEdhl1UZF7XfJ
golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210427180440-81ed05c6b58c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c h1:pkQiBZBvdos9qq4wBAHqlzuZHEXo07pqV06ef90u1WI=
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f h1:Qmd2pbz05z7z6lm0DrgQVVPuBm92jqujBKMHMOlOQEw=
golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -1057,8 +1127,8 @@ golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220513210516-0976fa681c29 h1:w8s32wxx3sY+OjLlv9qltkLU5yvJzxjjgiHWLjdIcw4=
golang.org/x/sync v0.0.0-20220513210516-0976fa681c29/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -1067,7 +1137,6 @@ golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5h
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181122145206-62eef0e2fa9b/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181228144115-9a3f9b0469bb/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190129075346-302c3dd5f1cc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -1078,13 +1147,17 @@ golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606203320-7fc4e5ec1444/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191112214154-59a1497f0cea/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191115151921-52ab43148777/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191220142924-d4481acd189f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -1102,6 +1175,8 @@ golang.org/x/sys v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200826173525-f9321e4c35a6/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200828194041-157a740278f4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -1124,22 +1199,29 @@ golang.org/x/sys v0.0.0-20210503080704-8803ae5d1324/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210601080250-7ecdf8ef093b/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210616045830-e2b7044e8c71/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210906170528-6f6e22806c34/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211025201205-69cdffdb9359/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211116061358-0a5406a5449c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211117180635-dee7805ff2e1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a h1:dGzPydgVsqGcTRVwiLJ1jVbufYwmzD3LfVPLKsKg+0k=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0 h1:MUK/U/4lj1t1oPg0HfuXDN/Z1wv31ZJ/YcPiGccS4DU=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210503060354-a79de5458b56/go.mod h1:tfny5GFUkzUvx4ps4ajbZsCe5lw1metzhBm9T3x7oIY=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.0.0-20220526004731-065cf7ba2467 h1:CBpWXWQpIRjzmkkA+M7q9Fqnwd2mZr3AFqexg8YTfoM=
golang.org/x/term v0.0.0-20220526004731-065cf7ba2467/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0 h1:n2a8QNdAb0sZNpU9R1ALUXBbY+w51fCQDN+7EdxNBsY=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@ -1149,15 +1231,17 @@ golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0 h1:4BRB4x83lYWy72KwLD/qYDuTu7q9PjSagHvijDw7cLo=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20201208040808-7e3f01d25324/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba h1:O8mE0/t419eoIwhTFpKVkHiTs/Igowgfkj25AcZrtiE=
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20220210224613-90d013bbcef8 h1:vVKdlvoWBphwdxWKrFZEuM0kGgGLxUOYcY4U/2Vjg44=
golang.org/x/time v0.0.0-20220210224613-90d013bbcef8/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180525024113-a5b4c53f6e8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@ -1222,8 +1306,8 @@ golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.4/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.8-0.20211029000441-d6a9af8af023/go.mod h1:nABZi5QlRsZVlzPpHl034qft6wpY4eDcsTt5AaioBiU=
golang.org/x/tools v0.1.10/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E=
golang.org/x/tools v0.1.11-0.20220316014157-77aa08bb151a h1:ofrrl6c6NG5/IOSx/R1cyiQxxjqlur0h/TvbUhkH0II=
golang.org/x/tools v0.1.11-0.20220316014157-77aa08bb151a/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E=
golang.org/x/tools v0.5.0 h1:+bSpV5HIeWkuvgaMfI3UmKRThoTA5ODJTUd8T17NO+4=
golang.org/x/tools v0.5.0/go.mod h1:N+Kgy78s5I24c24dU8OfWNEotWjutIs8SnJvn5IDq+k=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@ -1321,8 +1405,8 @@ google.golang.org/genproto v0.0.0-20210513213006-bf773b8c8384/go.mod h1:P3QM42oQ
google.golang.org/genproto v0.0.0-20210517163617-5e0236093d7a/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A=
google.golang.org/genproto v0.0.0-20210601144548-a796c710e9b6/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A=
google.golang.org/genproto v0.0.0-20210630183607-d20f26d13c79/go.mod h1:yiaVoXHpRzHGyxV3o4DktVWY4mSUErTKaeEOq6C3t3U=
google.golang.org/genproto v0.0.0-20220126215142-9970aeb2e350 h1:YxHp5zqIcAShDEvRr5/0rVESVS+njYF68PSdazrNLJo=
google.golang.org/genproto v0.0.0-20220126215142-9970aeb2e350/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20220617124728-180714bec0ad h1:kqrS+lhvaMHCxul6sKQvKJ8nAAhlVItmZV822hYFH/U=
google.golang.org/genproto v0.0.0-20220617124728-180714bec0ad/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
google.golang.org/grpc v1.14.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
@ -1353,10 +1437,9 @@ google.golang.org/grpc v1.37.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQ
google.golang.org/grpc v1.37.1/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.39.0/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE=
google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
google.golang.org/grpc v1.41.0/go.mod h1:U3l9uK9J0sini8mHphKoXyaqDA/8VyGnDee1zzIUK6k=
google.golang.org/grpc v1.44.0 h1:weqSxi/TMs1SqFRMHCtBgXRs8k3X39QIDEZ0pRcttUg=
google.golang.org/grpc v1.44.0/go.mod h1:k+4IHHFw41K8+bbowsex27ge2rCb65oeWqe4jJ590SU=
google.golang.org/grpc v1.47.0 h1:9n77onPX5F3qfFCqjy9dhn8PbNQsIKeVU04J9G7umt8=
google.golang.org/grpc v1.47.0/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@ -1370,6 +1453,7 @@ google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlba
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.28.1 h1:d0NfwRgPtno5B1Wa6L2DAG+KivqkdutMf1UhdNx175w=
google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
@ -1382,7 +1466,6 @@ gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntN
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/gcfg.v1 v1.2.3/go.mod h1:yesOnuUOFQAhST5vPY4nbZsb/huCgGGXlipJsBn0b3o=
gopkg.in/ini.v1 v1.42.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
@ -1390,8 +1473,9 @@ gopkg.in/ini.v1 v1.46.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/ini.v1 v1.51.0 h1:AQvPpx3LzTDM0AjnIRlVFwFFGC+npRopjZxLJj6gdno=
gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/square/go-jose.v2 v2.3.1 h1:SK5KegNXmKmqE342YYN2qPHEnUYeoMiXXl1poUlI+o4=
gopkg.in/square/go-jose.v2 v2.3.1/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/square/go-jose.v2 v2.5.1 h1:7odma5RETjNHWJnR32wx8t+Io4djHE1PqxCFx3iiZ2w=
gopkg.in/square/go-jose.v2 v2.5.1/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/warnings.v0 v0.1.2/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRNI=
@ -1402,13 +1486,14 @@ gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0 h1:clyUAQHOM3G0M3f5vQj7LuJrETvjVot3Z5el9nffUtU=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
gotest.tools/v3 v3.0.3 h1:4AuOwCGf4lLR9u3YOe2awrHygurzhO/HeQ6laiA6Sx0=
honnef.co/go/tools v0.0.0-20180728063816-88497007e858/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
@ -1418,8 +1503,8 @@ honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
honnef.co/go/tools v0.1.3/go.mod h1:NgwopIslSNH47DimFoV78dnkksY2EFtX0ajyb3K/las=
honnef.co/go/tools v0.3.0 h1:2LdYUZ7CIxnYgskbUZfY7FPggmqnh6shBqfWa8Tn3XU=
honnef.co/go/tools v0.3.0/go.mod h1:vlRD9XErLMGT+mDuofSr0mMMquscM/1nQqtRSsh6m70=
honnef.co/go/tools v0.4.0 h1:lyXVV1c8wUBJRKqI8JgIpT8TW1VDagfYYaxbKa/HoL8=
honnef.co/go/tools v0.4.0/go.mod h1:36ZgoUOrqOk1GxwHhyryEkq8FQWkUO2xGuSMhUCcdvA=
rsc.io/binaryregexp v0.2.0 h1:HfqmD5MEmC0zvwBuF187nq9mdnXjXsSivRiXN7SmRkE=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=


@ -841,7 +841,8 @@ func (s *CheckService) DeleteCheck(ctx context.Context, id platform.ID) error {
}
// TODO(gavincabbage): These structures should be in a common place, like other models,
// but the common influxdb.Check is an interface that is not appropriate for an API client.
//
// but the common influxdb.Check is an interface that is not appropriate for an API client.
type Checks struct {
Checks []*Check `json:"checks"`
Links *influxdb.PagingLinks `json:"links"`


@ -2,7 +2,10 @@ package http
import (
"context"
"encoding/json"
"net/http"
"github.com/influxdata/influxdb/v2"
)
// Flusher flushes data from a store to reset; used for testing.
@ -10,15 +13,39 @@ type Flusher interface {
Flush(ctx context.Context)
}
// DebugFlush clears all services for testing.
func DebugFlush(ctx context.Context, next http.Handler, f Flusher) http.HandlerFunc {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
func Debug(ctx context.Context, next http.Handler, f Flusher, service influxdb.OnboardingService) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path == "/debug/flush" {
// DebugFlush clears all services for testing.
f.Flush(ctx)
w.Header().Set("Content-Type", "text/html; charset=utf-8")
w.WriteHeader(http.StatusOK)
return
}
if r.URL.Path == "/debug/provision" {
data := &influxdb.OnboardingRequest{
User: "dev_user",
Password: "password",
Org: "InfluxData",
Bucket: "project",
}
res, err := service.OnboardInitialUser(ctx, data)
if err != nil {
w.WriteHeader(http.StatusBadRequest)
w.Write([]byte(err.Error()))
return
}
body, err := json.Marshal(res)
if err != nil {
w.WriteHeader(http.StatusBadRequest)
w.Write([]byte(err.Error()))
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
w.Write(body)
return
}
next.ServeHTTP(w, r)
})
}
}


@ -197,6 +197,24 @@ func decodeDeleteRequest(ctx context.Context, r *http.Request, orgSvc influxdb.O
}
return false, nil
})
var walkError error
influxql.WalkFunc(expr, func(e influxql.Node) {
if v, ok := e.(*influxql.BinaryExpr); ok {
if vv, ok := v.LHS.(*influxql.VarRef); ok && v.Op == influxql.EQ {
if vv.Val == "_field" {
walkError = &errors.Error{
Code: errors.ENotImplemented,
Msg: "",
Err: fmt.Errorf("delete by field is not supported"),
}
}
}
}
})
if walkError != nil {
return nil, nil, walkError
}
if err != nil {
return nil, nil, &errors.Error{
Code: errors.EInvalid,


@ -379,6 +379,60 @@ func TestDelete(t *testing.T) {
}`,
},
},
{
name: "unsupported delete by field",
args: args{
queryParams: map[string][]string{
"org": {"org1"},
"bucket": {"buck1"},
},
body: []byte(`{
"start":"2009-01-01T23:00:00Z",
"stop":"2019-11-10T01:00:00Z",
"predicate": "_field=\"cpu\""
}`),
authorizer: &influxdb.Authorization{
UserID: user1ID,
Status: influxdb.Active,
Permissions: []influxdb.Permission{
{
Action: influxdb.WriteAction,
Resource: influxdb.Resource{
Type: influxdb.BucketsResourceType,
ID: influxtesting.IDPtr(platform.ID(2)),
OrgID: influxtesting.IDPtr(platform.ID(1)),
},
},
},
},
},
fields: fields{
DeleteService: mock.NewDeleteService(),
BucketService: &mock.BucketService{
FindBucketFn: func(ctx context.Context, f influxdb.BucketFilter) (*influxdb.Bucket, error) {
return &influxdb.Bucket{
ID: platform.ID(2),
Name: "bucket1",
}, nil
},
},
OrganizationService: &mock.OrganizationService{
FindOrganizationF: func(ctx context.Context, f influxdb.OrganizationFilter) (*influxdb.Organization, error) {
return &influxdb.Organization{
ID: platform.ID(1),
Name: "org1",
}, nil
},
},
},
wants: wants{
statusCode: http.StatusNotImplemented,
body: `{
"code": "not implemented",
"message": "delete by field is not supported"
}`,
},
},
{
name: "complex delete",
args: args{


@ -611,7 +611,8 @@ func (s *NotificationEndpointService) FindNotificationEndpoints(ctx context.Cont
// CreateNotificationEndpoint creates a new notification endpoint and sets b.ID with the new identifier.
// TODO(@jsteenb2): this is unsatisfactory, we have no way of grabbing the new notification endpoint without
// serious hacky hackertoning. Put it on the list...
//
// serious hacky hackertoning. Put it on the list...
func (s *NotificationEndpointService) CreateNotificationEndpoint(ctx context.Context, ne influxdb.NotificationEndpoint, userID platform.ID) error {
var resp notificationEndpointDecoder
err := s.Client.
@ -667,9 +668,10 @@ func (s *NotificationEndpointService) PatchNotificationEndpoint(ctx context.Cont
// DeleteNotificationEndpoint removes a notification endpoint by ID, returns secret fields, orgID for further deletion.
// TODO: axe this delete design, makes little sense in how its currently being done. Right now, as an http client,
// I am forced to know how the store handles this and then figure out what the server does in between me and that store,
// then see what falls out :flushed... for now returning nothing for secrets, orgID, and only returning an error. This makes
// the code/design smell super obvious imo
//
// I am forced to know how the store handles this and then figure out what the server does in between me and that store,
// then see what falls out :flushed... for now returning nothing for secrets, orgID, and only returning an error. This makes
// the code/design smell super obvious imo
func (s *NotificationEndpointService) DeleteNotificationEndpoint(ctx context.Context, id platform.ID) ([]influxdb.SecretField, platform.ID, error) {
if !id.Valid() {
return nil, 0, fmt.Errorf("invalid ID: please provide a valid ID")


@ -1212,8 +1212,8 @@ func (r *UnsignedCumulativeSumReducer) Emit() []UnsignedPoint {
// FloatHoltWintersReducer forecasts a series into the future.
// This is done using the Holt-Winters damped method.
// 1. Using the series the initial values are calculated using a SSE.
// 2. The series is forecasted into the future using the iterative relations.
// 1. Using the series the initial values are calculated using a SSE.
// 2. The series is forecasted into the future using the iterative relations.
type FloatHoltWintersReducer struct {
// Season period
m int
@ -1240,7 +1240,7 @@ type FloatHoltWintersReducer struct {
}
const (
// Arbitrary weight for initializing some intial guesses.
// Arbitrary weight for initializing some initial guesses.
// This should be in the range [0,1]
hwWeight = 0.5
// Epsilon value for the minimization process


@ -404,11 +404,10 @@ func (itr *floatSortedMergeIterator) pop() (*FloatPoint, error) {
// floatSortedMergeHeap represents a heap of floatSortedMergeHeapItems.
// Items are sorted with the following priority:
// - By their measurement name;
// - By their tag keys/values;
// - By time; or
// - By their Aux field values.
//
// - By their measurement name;
// - By their tag keys/values;
// - By time; or
// - By their Aux field values.
type floatSortedMergeHeap struct {
opt IteratorOptions
items []*floatSortedMergeHeapItem
@ -3068,11 +3067,10 @@ func (itr *integerSortedMergeIterator) pop() (*IntegerPoint, error) {
// integerSortedMergeHeap represents a heap of integerSortedMergeHeapItems.
// Items are sorted with the following priority:
// - By their measurement name;
// - By their tag keys/values;
// - By time; or
// - By their Aux field values.
//
// - By their measurement name;
// - By their tag keys/values;
// - By time; or
// - By their Aux field values.
type integerSortedMergeHeap struct {
opt IteratorOptions
items []*integerSortedMergeHeapItem
@ -5732,11 +5730,10 @@ func (itr *unsignedSortedMergeIterator) pop() (*UnsignedPoint, error) {
// unsignedSortedMergeHeap represents a heap of unsignedSortedMergeHeapItems.
// Items are sorted with the following priority:
// - By their measurement name;
// - By their tag keys/values;
// - By time; or
// - By their Aux field values.
//
// - By their measurement name;
// - By their tag keys/values;
// - By time; or
// - By their Aux field values.
type unsignedSortedMergeHeap struct {
opt IteratorOptions
items []*unsignedSortedMergeHeapItem
@ -8396,11 +8393,10 @@ func (itr *stringSortedMergeIterator) pop() (*StringPoint, error) {
// stringSortedMergeHeap represents a heap of stringSortedMergeHeapItems.
// Items are sorted with the following priority:
// - By their measurement name;
// - By their tag keys/values;
// - By time; or
// - By their Aux field values.
//
// - By their measurement name;
// - By their tag keys/values;
// - By time; or
// - By their Aux field values.
type stringSortedMergeHeap struct {
opt IteratorOptions
items []*stringSortedMergeHeapItem
@ -11046,11 +11042,10 @@ func (itr *booleanSortedMergeIterator) pop() (*BooleanPoint, error) {
// booleanSortedMergeHeap represents a heap of booleanSortedMergeHeapItems.
// Items are sorted with the following priority:
// - By their measurement name;
// - By their tag keys/values;
// - By time; or
// - By their Aux field values.
//
// - By their measurement name;
// - By their tag keys/values;
// - By time; or
// - By their Aux field values.
type booleanSortedMergeHeap struct {
opt IteratorOptions
items []*booleanSortedMergeHeapItem


@ -75,19 +75,33 @@ func rewriteShowFieldKeyCardinalityStatement(stmt *influxql.ShowFieldKeyCardinal
Args: []influxql.Expr{
&influxql.Call{
Name: "distinct",
Args: []influxql.Expr{&influxql.VarRef{Val: "_fieldKey"}},
Args: []influxql.Expr{&influxql.VarRef{Val: "fieldKey"}},
},
},
},
Alias: "count",
},
},
Sources: rewriteSources2(stmt.Sources, stmt.Database),
Condition: stmt.Condition,
Dimensions: stmt.Dimensions,
Offset: stmt.Offset,
Limit: stmt.Limit,
OmitTime: true,
Sources: influxql.Sources{
&influxql.SubQuery{
Statement: &influxql.SelectStatement{
Fields: []*influxql.Field{
{Expr: &influxql.VarRef{Val: "fieldKey"}},
{Expr: &influxql.VarRef{Val: "fieldType"}},
},
Sources: rewriteSources(stmt.Sources, "_fieldKeys", stmt.Database),
Condition: rewriteSourcesCondition(stmt.Sources, nil),
OmitTime: true,
Dedupe: true,
IsRawQuery: true,
},
},
},
}, nil
}


@ -52,6 +52,22 @@ func TestRewriteStatement(t *testing.T) {
stmt: `SHOW FIELD KEYS ON db0 FROM mydb.myrp2./c.*/`,
s: `SELECT fieldKey, fieldType FROM mydb.myrp2._fieldKeys WHERE _name =~ /c.*/`,
},
{
stmt: "SHOW FIELD KEY CARDINALITY",
s: "SELECT count(distinct(fieldKey)) AS count FROM (SELECT fieldKey, fieldType FROM _fieldKeys WHERE _name =~ /.+/)",
},
{
stmt: "SHOW FIELD KEY CARDINALITY ON db0",
s: "SELECT count(distinct(fieldKey)) AS count FROM (SELECT fieldKey, fieldType FROM db0.._fieldKeys WHERE _name =~ /.+/)",
},
{
stmt: "SHOW FIELD KEY CARDINALITY ON db0 FROM /tsm1.*/",
s: "SELECT count(distinct(fieldKey)) AS count FROM (SELECT fieldKey, fieldType FROM db0.._fieldKeys WHERE _name =~ /tsm1.*/)",
},
{
stmt: "SHOW FIELD KEY CARDINALITY ON db0 FROM /tsm1.*/ WHERE 1 = 1",
s: "SELECT count(distinct(fieldKey)) AS count FROM (SELECT fieldKey, fieldType FROM db0.._fieldKeys WHERE _name =~ /tsm1.*/) WHERE 1 = 1",
},
{
stmt: `SHOW SERIES`,
s: `SELECT "key" FROM _series`,


@ -0,0 +1,33 @@
package rand
import (
"math/rand"
"sync"
)
// LockedSource is taken from the Go "math/rand" package.
// The default rand functions use a similar type under the hood, this does not introduce any additional
// locking than using the default functions.
type LockedSource struct {
lk sync.Mutex
src rand.Source
}
func NewLockedSourceFromSeed(seed int64) *LockedSource {
return &LockedSource{
src: rand.NewSource(seed),
}
}
func (r *LockedSource) Int63() (n int64) {
r.lk.Lock()
n = r.src.Int63()
r.lk.Unlock()
return
}
func (r *LockedSource) Seed(seed int64) {
r.lk.Lock()
r.src.Seed(seed)
r.lk.Unlock()
}


@ -3,46 +3,45 @@
// This is a small simplification over viper to move most of the boilerplate
// into one place.
//
//
// In this example the flags can be set with MYPROGRAM_MONITOR_HOST and
// MYPROGRAM_NUMBER or with the flags --monitor-host and --number
//
// var flags struct {
// monitorHost string
// number int
// }
// var flags struct {
// monitorHost string
// number int
// }
//
// func main() {
// cmd := cli.NewCommand(&cli.Program{
// Run: run,
// Name: "myprogram",
// Opts: []cli.Opt{
// {
// DestP: &flags.monitorHost,
// Flag: "monitor-host",
// Default: "http://localhost:8086",
// Desc: "host to send influxdb metrics",
// },
// {
// DestP: &flags.number,
// Flag: "number",
// Default: 2,
// Desc: "number of times to loop",
// func main() {
// cmd := cli.NewCommand(&cli.Program{
// Run: run,
// Name: "myprogram",
// Opts: []cli.Opt{
// {
// DestP: &flags.monitorHost,
// Flag: "monitor-host",
// Default: "http://localhost:8086",
// Desc: "host to send influxdb metrics",
// },
// {
// DestP: &flags.number,
// Flag: "number",
// Default: 2,
// Desc: "number of times to loop",
//
// },
// },
// })
// },
// },
// })
//
// if err := cmd.Execute(); err != nil {
// fmt.Fprintln(os.Stderr, err)
// os.Exit(1)
// }
// }
// if err := cmd.Execute(); err != nil {
// fmt.Fprintln(os.Stderr, err)
// os.Exit(1)
// }
// }
//
// func run() error {
// for i := 0; i < number; i++ {
// fmt.Printf("%d\n", i)
// return nil
// }
// }
// func run() error {
// for i := 0; i < number; i++ {
// fmt.Printf("%d\n", i)
// return nil
// }
// }
package cli


@ -184,9 +184,7 @@ func Test_NewProgram(t *testing.T) {
for _, tt := range tests {
for _, writer := range configWriters {
fn := func(t *testing.T) {
testDir, err := os.MkdirTemp("", "")
require.NoError(t, err)
defer os.RemoveAll(testDir)
testDir := t.TempDir()
confFile, err := writer.writeFn(testDir, config)
require.NoError(t, err)
@ -286,9 +284,12 @@ func writeTomlConfig(dir string, config interface{}) (string, error) {
if err != nil {
return "", err
}
defer w.Close()
if err := toml.NewEncoder(w).Encode(config); err != nil {
return "", err
}
return confFile, nil
}
@ -304,9 +305,12 @@ func yamlConfigWriter(shortExt bool) configWriter {
if err != nil {
return "", err
}
defer w.Close()
if err := yaml.NewEncoder(w).Encode(config); err != nil {
return "", err
}
return confFile, nil
}
}
@ -382,9 +386,7 @@ func Test_ConfigPrecedence(t *testing.T) {
for _, tt := range tests {
fn := func(t *testing.T) {
testDir, err := os.MkdirTemp("", "")
require.NoError(t, err)
defer os.RemoveAll(testDir)
testDir := t.TempDir()
defer setEnvVar("TEST_CONFIG_PATH", testDir)()
if tt.writeJson {
@ -429,9 +431,7 @@ func Test_ConfigPrecedence(t *testing.T) {
}
func Test_ConfigPathDotDirectory(t *testing.T) {
testDir, err := os.MkdirTemp("", "")
require.NoError(t, err)
defer os.RemoveAll(testDir)
testDir := t.TempDir()
tests := []struct {
name string
@ -460,7 +460,7 @@ func Test_ConfigPathDotDirectory(t *testing.T) {
configDir := filepath.Join(testDir, tc.dir)
require.NoError(t, os.Mkdir(configDir, 0700))
_, err = writeTomlConfig(configDir, config)
_, err := writeTomlConfig(configDir, config)
require.NoError(t, err)
defer setEnvVar("TEST_CONFIG_PATH", configDir)()
@ -487,9 +487,7 @@ func Test_ConfigPathDotDirectory(t *testing.T) {
}
func Test_LoadConfigCwd(t *testing.T) {
testDir, err := os.MkdirTemp("", "")
require.NoError(t, err)
defer os.RemoveAll(testDir)
testDir := t.TempDir()
pwd, err := os.Getwd()
require.NoError(t, err)


@ -32,33 +32,39 @@
// First, I add an entry to `flags.yml`.
//
// ```yaml
// - name: My Feature
// description: My feature is awesome
// key: myFeature
// default: false
// expose: true
// contact: My Name
// - name: My Feature
// description: My feature is awesome
// key: myFeature
// default: false
// expose: true
// contact: My Name
//
// ```
//
// My flag type is inferred to be boolean by my default of `false` when I run
// `make flags` and the `feature` package now includes `func MyFeature() BoolFlag`.
//
// I use this to control my backend code with
// # I use this to control my backend code with
//
// ```go
// if feature.MyFeature.Enabled(ctx) {
// // new code...
// } else {
// // new code...
// }
//
// if feature.MyFeature.Enabled(ctx) {
// // new code...
// } else {
//
// // new code...
// }
//
// ```
//
// and the `/api/v2/flags` response provides the same information to the frontend.
//
// ```json
// {
// "myFeature": false
// }
//
// {
// "myFeature": false
// }
//
// ```
//
// While `false` by default, I can turn on my experimental feature by starting
@ -71,5 +77,4 @@
// ```
// influxd --feature-flags flag1=value1,flag2=value2
// ```
//
package feature


@ -78,8 +78,9 @@ func ExposedFlagsFromContext(ctx context.Context, byKey ByKeyFn) map[string]inte
// to be removed, e.g. enabling debug tracing for an organization.
//
// TODO(gavincabbage): This may become a stale date, which can then
// be used to trigger a notification to the contact when the flag
// has become stale, to encourage flag cleanup.
//
// be used to trigger a notification to the contact when the flag
// has become stale, to encourage flag cleanup.
type Lifetime int
const (


@ -39,24 +39,31 @@ const (
// further help operators.
//
// To create a simple error,
// &Error{
// Code:ENotFound,
// }
//
// &Error{
// Code:ENotFound,
// }
//
// To show where the error happens, add Op.
// &Error{
// Code: ENotFound,
// Op: "bolt.FindUserByID"
// }
//
// &Error{
// Code: ENotFound,
// Op: "bolt.FindUserByID"
// }
//
// To show an error with a unpredictable value, add the value in Msg.
// &Error{
// Code: EConflict,
// Message: fmt.Sprintf("organization with name %s already exist", aName),
// }
//
// &Error{
// Code: EConflict,
// Message: fmt.Sprintf("organization with name %s already exist", aName),
// }
//
// To show an error wrapped with another error.
// &Error{
// Code:EInternal,
// Err: err,
// }.
//
// &Error{
// Code:EInternal,
// Err: err,
// }.
type Error struct {
Code string
Msg string


@ -19,7 +19,8 @@ import (
// LogError adds a span log for an error.
// Returns unchanged error, so useful to wrap as in:
// return 0, tracing.LogError(err)
//
// return 0, tracing.LogError(err)
func LogError(span opentracing.Span, err error) error {
if err == nil {
return nil
@ -115,24 +116,25 @@ func (s *Span) Finish() {
// Context without parent span reference triggers root span construction.
// This function never returns nil values.
//
// Performance
// # Performance
//
// This function incurs a small performance penalty, roughly 1000 ns/op, 376 B/op, 6 allocs/op.
// Jaeger timestamp and duration precision is only µs, so this is pretty negligible.
//
// Alternatives
// # Alternatives
//
// If this performance penalty is too much, try these, which are also demonstrated in benchmark tests:
// // Create a root span
// span := opentracing.StartSpan("operation name")
// ctx := opentracing.ContextWithSpan(context.Background(), span)
//
// // Create a child span
// span := opentracing.StartSpan("operation name", opentracing.ChildOf(sc))
// ctx := opentracing.ContextWithSpan(context.Background(), span)
// // Create a root span
// span := opentracing.StartSpan("operation name")
// ctx := opentracing.ContextWithSpan(context.Background(), span)
//
// // Sugar to create a child span
// span, ctx := opentracing.StartSpanFromContext(ctx, "operation name")
// // Create a child span
// span := opentracing.StartSpan("operation name", opentracing.ChildOf(sc))
// ctx := opentracing.ContextWithSpan(context.Background(), span)
//
// // Sugar to create a child span
// span, ctx := opentracing.StartSpanFromContext(ctx, "operation name")
func StartSpanFromContext(ctx context.Context, opts ...opentracing.StartSpanOption) (opentracing.Span, context.Context) {
if ctx == nil {
panic("StartSpanFromContext called with nil context")


@ -17,37 +17,37 @@ import (
//
// The following is an illustration of its use:
//
// byUserID := func(v []byte) ([]byte, error) {
// auth := &influxdb.Authorization{}
// byUserID := func(v []byte) ([]byte, error) {
// auth := &influxdb.Authorization{}
//
// if err := json.Unmarshal(v, auth); err != nil {
// return err
// }
// if err := json.Unmarshal(v, auth); err != nil {
// return err
// }
//
// return auth.UserID.Encode()
// }
// return auth.UserID.Encode()
// }
//
// // configure a write only index
// indexByUser := NewIndex(NewSource([]byte(`authorizationsbyuserv1/), byUserID))
// // configure a write only index
// indexByUser := NewIndex(NewSource([]byte(`authorizationsbyuserv1/), byUserID))
//
// indexByUser.Insert(tx, someUserID, someAuthID)
// indexByUser.Insert(tx, someUserID, someAuthID)
//
// indexByUser.Delete(tx, someUserID, someAuthID)
// indexByUser.Delete(tx, someUserID, someAuthID)
//
// indexByUser.Walk(tx, someUserID, func(k, v []byte) error {
// auth := &influxdb.Authorization{}
// if err := json.Unmarshal(v, auth); err != nil {
// return err
// }
// indexByUser.Walk(tx, someUserID, func(k, v []byte) error {
// auth := &influxdb.Authorization{}
// if err := json.Unmarshal(v, auth); err != nil {
// return err
// }
//
// // do something with auth
// // do something with auth
//
// return nil
// })
// return nil
// })
//
// // verify the current index against the source and return the differences
// // found in each
// diff, err := indexByUser.Verify(ctx, tx)
// // verify the current index against the source and return the differences
// // found in each
// diff, err := indexByUser.Verify(ctx, tx)
type Index struct {
IndexMapping
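The Insert/Delete/Walk usage illustrated in the comment above can be sketched with an in-memory map in place of the kv store (all names here are hypothetical; the real Index is transactional and persists to bolt, which this sketch deliberately omits):

```go
package main

import "fmt"

// index is an in-memory sketch of the write-optimised index documented
// above: it maps a derived foreign key (e.g. a user ID) to the set of
// primary keys indexed under it.
type index struct {
	m map[string]map[string]struct{}
}

func newIndex() *index { return &index{m: map[string]map[string]struct{}{}} }

// Insert records that primary key pk is indexed under foreign key fk.
func (i *index) Insert(fk, pk string) {
	if i.m[fk] == nil {
		i.m[fk] = map[string]struct{}{}
	}
	i.m[fk][pk] = struct{}{}
}

// Delete removes the (fk, pk) association.
func (i *index) Delete(fk, pk string) { delete(i.m[fk], pk) }

// Walk visits every primary key indexed under fk, stopping on error.
func (i *index) Walk(fk string, fn func(pk string) error) error {
	for pk := range i.m[fk] {
		if err := fn(pk); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	idx := newIndex()
	idx.Insert("user-1", "auth-1")
	idx.Insert("user-1", "auth-2")
	idx.Delete("user-1", "auth-2")
	n := 0
	idx.Walk("user-1", func(string) error { n++; return nil })
	fmt.Println(n)
}
```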


@ -9,11 +9,11 @@
//
// This package is arranged like so:
//
// doc.go - this piece of documentation.
// all.go - definition of Migration array referencing each of the name migrations in number migration files (below).
// migration.go - an implementation of migration.Spec for convenience.
// 000X_migration_name.go (example) - N files contains the specific implementations of each migration enumerated in `all.go`.
// ...
// doc.go - this piece of documentation.
// all.go - definition of Migration array referencing each of the name migrations in number migration files (below).
// migration.go - an implementation of migration.Spec for convenience.
// 000X_migration_name.go (example) - N files contains the specific implementations of each migration enumerated in `all.go`.
// ...
//
// Managing this list of files and all.go can be fiddly.
// There is a buildable cli utility called `kvmigrate` in the `internal/cmd/kvmigrate` package.


@ -39,17 +39,17 @@ func NewOrganizationService() *OrganizationService {
}
}
//FindOrganizationByID calls FindOrganizationByIDF.
// FindOrganizationByID calls FindOrganizationByIDF.
func (s *OrganizationService) FindOrganizationByID(ctx context.Context, id platform2.ID) (*platform.Organization, error) {
return s.FindOrganizationByIDF(ctx, id)
}
//FindOrganization calls FindOrganizationF.
// FindOrganization calls FindOrganizationF.
func (s *OrganizationService) FindOrganization(ctx context.Context, filter platform.OrganizationFilter) (*platform.Organization, error) {
return s.FindOrganizationF(ctx, filter)
}
//FindOrganizations calls FindOrganizationsF.
// FindOrganizations calls FindOrganizationsF.
func (s *OrganizationService) FindOrganizations(ctx context.Context, filter platform.OrganizationFilter, opt ...platform.FindOptions) ([]*platform.Organization, int, error) {
return s.FindOrganizationsF(ctx, filter, opt...)
}


@ -230,7 +230,7 @@ func BenchmarkTagKeysSet_UnionBytes(b *testing.B) {
bytes.Split([]byte("tag04,tag05"), commaB),
}
rand.Seed(20040409)
seededRand := rand.New(rand.NewSource(20040409))
tests := []int{
10,
@ -245,7 +245,7 @@ func BenchmarkTagKeysSet_UnionBytes(b *testing.B) {
var km models.TagKeysSet
for i := 0; i < b.N; i++ {
for j := 0; j < n; j++ {
km.UnionBytes(keys[rand.Int()%len(keys)])
km.UnionBytes(keys[seededRand.Int()%len(keys)])
}
km.Clear()
}


@ -20,8 +20,7 @@ var (
func TestCreateAndGetNotebook(t *testing.T) {
t.Parallel()
svc, clean := newTestService(t)
defer clean(t)
svc := newTestService(t)
ctx := context.Background()
// getting an invalid id should return an error
@ -59,8 +58,7 @@ func TestCreateAndGetNotebook(t *testing.T) {
func TestUpdate(t *testing.T) {
t.Parallel()
svc, clean := newTestService(t)
defer clean(t)
svc := newTestService(t)
ctx := context.Background()
testCreate := &influxdb.NotebookReqBody{
@ -108,8 +106,7 @@ func TestUpdate(t *testing.T) {
func TestDelete(t *testing.T) {
t.Parallel()
svc, clean := newTestService(t)
defer clean(t)
svc := newTestService(t)
ctx := context.Background()
// attempting to delete a non-existant notebook should return an error
@ -145,8 +142,7 @@ func TestDelete(t *testing.T) {
func TestList(t *testing.T) {
t.Parallel()
svc, clean := newTestService(t)
defer clean(t)
svc := newTestService(t)
ctx := context.Background()
orgID := idGen.ID()
@ -195,8 +191,8 @@ func TestList(t *testing.T) {
}
}
func newTestService(t *testing.T) (*Service, func(t *testing.T)) {
store, clean := sqlite.NewTestStore(t)
func newTestService(t *testing.T) *Service {
store := sqlite.NewTestStore(t)
ctx := context.Background()
sqliteMigrator := sqlite.NewMigrator(store, zap.NewNop())
@ -205,5 +201,5 @@ func newTestService(t *testing.T) (*Service, func(t *testing.T)) {
svc := NewService(store)
return svc, clean
return svc
}
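The notebooks change above converts a `(svc, cleanup)` return into a helper that owns its own teardown, which is what `t.Cleanup` enables. A runnable sketch of the semantics (a tiny stand-in registry replaces `*testing.T`, since the real thing only exists inside `go test`; cleanups run in last-in-first-out order, matching the testing package):

```go
package main

import "fmt"

// cleaner mimics t.Cleanup: registered functions run in LIFO order when the
// test finishes, so helpers like newTestService no longer need to return a
// clean(t) func for the caller to defer.
type cleaner struct{ fns []func() }

func (c *cleaner) Cleanup(fn func()) { c.fns = append(c.fns, fn) }

// run executes the registered cleanups in reverse registration order.
func (c *cleaner) run() {
	for i := len(c.fns) - 1; i >= 0; i-- {
		c.fns[i]()
	}
}

// newTestStore registers its own teardown instead of returning it.
func newTestStore(c *cleaner) string {
	c.Cleanup(func() { fmt.Println("store closed") })
	return "store"
}

func main() {
	c := &cleaner{}
	_ = newTestStore(c) // caller no longer receives a cleanup func
	fmt.Println("test body")
	c.run()
}
```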


@ -47,20 +47,20 @@ const MaxWritesPending = 1024
// queues can have a max size configured such that when the size of all
// segments on disk exceeds the size, write will fail.
//
// ┌─────┐
// │Head │
// ├─────┘
//
//
// ┌─────────────────┐ ┌─────────────────┐┌─────────────────┐
// │Segment 1 - 10MB │ │Segment 2 - 10MB ││Segment 3 - 10MB │
// └─────────────────┘ └─────────────────┘└─────────────────┘
//
//
//
// ┌─────┐
// │Tail │
// └─────┘
// ┌─────┐
// │Head │
// ├─────┘
//
//
// ┌─────────────────┐ ┌─────────────────┐┌─────────────────┐
// │Segment 1 - 10MB │ │Segment 2 - 10MB ││Segment 3 - 10MB │
// └─────────────────┘ └─────────────────┘└─────────────────┘
//
//
//
// ┌─────┐
// │Tail │
// └─────┘
type Queue struct {
mu sync.RWMutex
@ -609,13 +609,13 @@ func (l *Queue) trimHead(force bool) error {
// lengths + block with a single footer point to the position in the segment of the
// current Head block.
//
// ┌──────────────────────────┐ ┌──────────────────────────┐ ┌────────────┐
// │ Block 1 │ │ Block 2 │ │ Footer │
// └──────────────────────────┘ └──────────────────────────┘ └────────────┘
// ┌────────────┐┌────────────┐ ┌────────────┐┌────────────┐ ┌────────────┐
// │Block 1 Len ││Block 1 Body│ │Block 2 Len ││Block 2 Body│ │Head Offset │
// │ 8 bytes ││ N bytes │ │ 8 bytes ││ N bytes │ │ 8 bytes │
// └────────────┘└────────────┘ └────────────┘└────────────┘ └────────────┘
// ┌──────────────────────────┐ ┌──────────────────────────┐ ┌────────────┐
// │ Block 1 │ │ Block 2 │ │ Footer │
// └──────────────────────────┘ └──────────────────────────┘ └────────────┘
// ┌────────────┐┌────────────┐ ┌────────────┐┌────────────┐ ┌────────────┐
// │Block 1 Len ││Block 1 Body│ │Block 2 Len ││Block 2 Body│ │Head Offset │
// │ 8 bytes ││ N bytes │ │ 8 bytes ││ N bytes │ │ 8 bytes │
// └────────────┘└────────────┘ └────────────┘└────────────┘ └────────────┘
//
// The footer holds the pointer to the Head entry at the end of the segment to allow writes
// to seek to the end and write sequentially (vs having to seek back to the beginning of


@ -396,11 +396,11 @@ func TestQueue_TotalBytes(t *testing.T) {
// This test verifies the queue will advance in the following scenario:
//
// * There is one segment
// * The segment is not full
// * The segment record size entry is corrupted, resulting in
// currentRecordSize + pos > fileSize and
// therefore the Advance would fail.
// - There is one segment
// - The segment is not full
// - The segment record size entry is corrupted, resulting in
// currentRecordSize + pos > fileSize and
// therefore the Advance would fail.
func TestQueue_AdvanceSingleCorruptSegment(t *testing.T) {
q, dir := newTestQueue(t, withVerify(func([]byte) error { return nil }))
defer os.RemoveAll(dir)
@ -605,11 +605,7 @@ func ReadSegment(segment *segment) string {
}
func TestSegment_repair(t *testing.T) {
dir, err := os.MkdirTemp("", "hh_queue")
if err != nil {
t.Fatalf("failed to create temp dir: %v", err)
}
defer os.RemoveAll(dir)
dir := t.TempDir()
examples := []struct {
In *TestSegment
@ -703,6 +699,9 @@ func TestSegment_repair(t *testing.T) {
example.VerifyFn = func([]byte) error { return nil }
}
segment := mustCreateSegment(example.In, dir, example.VerifyFn)
t.Cleanup(func() {
segment.close()
})
if got, exp := ReadSegment(segment), example.Expected.String(); got != exp {
t.Errorf("[example %d]\ngot: %s\nexp: %s\n\n", i+1, got, exp)
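The test changes above replace manual `os.MkdirTemp` plus `defer os.RemoveAll` with `t.TempDir`. A sketch of what that convenience does under the hood (the cleanup slice stands in for the testing framework's registry, since `*testing.T` is only available inside `go test`):

```go
package main

import (
	"fmt"
	"os"
)

// tempDir sketches what t.TempDir gives the tests above for free: a fresh
// directory plus automatic removal, replacing the os.MkdirTemp +
// defer os.RemoveAll boilerplate the diff deletes.
func tempDir(cleanups *[]func()) (string, error) {
	dir, err := os.MkdirTemp("", "hh_queue")
	if err != nil {
		return "", err
	}
	*cleanups = append(*cleanups, func() { os.RemoveAll(dir) })
	return dir, nil
}

// dirExists reports whether dir is currently present on disk.
func dirExists(dir string) bool {
	_, err := os.Stat(dir)
	return err == nil
}

func main() {
	var cleanups []func()
	dir, err := tempDir(&cleanups)
	if err != nil {
		panic(err)
	}
	fmt.Println(dirExists(dir))
	for _, fn := range cleanups {
		fn() // the "test" is over; teardown runs automatically
	}
	fmt.Println(dirExists(dir))
}
```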


@ -468,8 +468,9 @@ func Decode(dst *[240]uint64, v uint64) (n int, err error) {
// Decode writes the uncompressed values from src to dst. It returns the number
// of values written or an error.
//go:nocheckptr
// nocheckptr while the underlying struct layout doesn't change
//
//go:nocheckptr
func DecodeAll(dst, src []uint64) (value int, err error) {
j := 0
for _, v := range src {
@ -482,8 +483,9 @@ func DecodeAll(dst, src []uint64) (value int, err error) {
// DecodeBytesBigEndian writes the compressed, big-endian values from src to dst. It returns the number
// of values written or an error.
//go:nocheckptr
// nocheckptr while the underlying struct layout doesn't change
//
//go:nocheckptr
func DecodeBytesBigEndian(dst []uint64, src []byte) (value int, err error) {
if len(src)&7 != 0 {
return 0, errors.New("src length is not multiple of 8")


@ -76,6 +76,8 @@ func combine(fns ...func() []uint64) func() []uint64 {
// TestEncodeAll ensures 100% test coverage of simple8b.EncodeAll and
// verifies all output by comparing the original input with the output of simple8b.DecodeAll
func TestEncodeAll(t *testing.T) {
//lint:ignore SA1019 This function was deprecated for good reasons that aren't important to us since its just used for testing.
// Ignoring seems better than all the effort to address the underlying concern. https://github.com/golang/go/issues/56319
rand.Seed(0)
tests := []struct {


@ -2,11 +2,12 @@ package errors
// Capture is a wrapper function which can be used to capture errors from closing via a defer.
// An example:
// func Example() (err error) {
// f, _ := os.Open(...)
// defer errors.Capture(&err, f.Close)()
// ...
// return
//
// func Example() (err error) {
// f, _ := os.Open(...)
// defer errors.Capture(&err, f.Close)()
// ...
// return
//
// Doing this will result in the error from the f.Close() call being
// put in the error via a ptr, if the error is not nil
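The `Capture` pattern documented above can be sketched in a few lines; this is a simplified reimplementation for illustration (the real helper may differ in how it combines a pre-existing error with the close error):

```go
package main

import (
	"errors"
	"fmt"
)

// capture is a sketch of the Capture helper documented above: it returns a
// func suitable for defer that runs fn and, if the named return error is
// still nil, stores fn's error through the pointer.
func capture(errp *error, fn func() error) func() {
	return func() {
		if err := fn(); *errp == nil {
			*errp = err
		}
	}
}

// example mirrors the doc comment: the Close error surfaces through the
// named return even though the function body returns nil.
func example() (err error) {
	closer := func() error { return errors.New("close failed") }
	defer capture(&err, closer)()
	return nil
}

func main() {
	fmt.Println(example())
}
```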


@ -4,10 +4,10 @@
//
// The differences are that the implementation in this package:
//
// * uses an AMD64 optimised xxhash algorithm instead of murmur;
// * uses some AMD64 optimisations for things like clz;
// * works with []byte rather than a Hash64 interface, to reduce allocations;
// * implements encoding.BinaryMarshaler and encoding.BinaryUnmarshaler
// - uses an AMD64 optimised xxhash algorithm instead of murmur;
// - uses some AMD64 optimisations for things like clz;
// - works with []byte rather than a Hash64 interface, to reduce allocations;
// - implements encoding.BinaryMarshaler and encoding.BinaryUnmarshaler
//
// Based on some rough benchmarking, this implementation of HyperLogLog++ is
// around twice as fast as the github.com/clarkduvall/hyperloglog implementation.


@ -0,0 +1,71 @@
// Package reporthelper reports statistics about TSM files.
package reporthelper
import (
"fmt"
"os"
"path/filepath"
"sort"
"strconv"
"strings"
"github.com/influxdata/influxdb/v2/tsdb/engine/tsm1"
)
func IsShardDir(dir string) error {
name := filepath.Base(dir)
if id, err := strconv.Atoi(name); err != nil || id < 1 {
return fmt.Errorf("not a valid shard dir: %v", dir)
}
return nil
}
func WalkShardDirs(root string, fn func(db, rp, id, path string) error) error {
type location struct {
db, rp, id, path string
}
var dirs []location
if err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if info.IsDir() {
return nil
}
if filepath.Ext(info.Name()) == "."+tsm1.TSMFileExtension {
shardDir := filepath.Dir(path)
if err := IsShardDir(shardDir); err != nil {
return err
}
absPath, err := filepath.Abs(path)
if err != nil {
return err
}
parts := strings.Split(absPath, string(filepath.Separator))
db, rp, id := parts[len(parts)-4], parts[len(parts)-3], parts[len(parts)-2]
dirs = append(dirs, location{db: db, rp: rp, id: id, path: path})
return nil
}
return nil
}); err != nil {
return err
}
sort.Slice(dirs, func(i, j int) bool {
a, _ := strconv.Atoi(dirs[i].id)
b, _ := strconv.Atoi(dirs[j].id)
return a < b
})
for _, shard := range dirs {
if err := fn(shard.db, shard.rp, shard.id, shard.path); err != nil {
return err
}
}
return nil
}


@ -3,7 +3,7 @@ Package tracing provides a way for capturing hierarchical traces.
To start a new trace with a root span named select
trace, span := tracing.NewTrace("select")
trace, span := tracing.NewTrace("select")
It is recommended that a span be forwarded to callees using the
context package. Firstly, create a new context with the span associated
@ -21,6 +21,5 @@ Once the trace is complete, it may be converted to a graph with the Tree method.
The tree is intended to be used with the Walk function in order to generate
different presentations. The default Tree#String method returns a tree.
*/
package tracing


@ -50,7 +50,7 @@ func Bool(key string, val bool) Field {
}
}
/// Int64 adds an int64-valued key:value pair to a Span.LogFields() record
// / Int64 adds an int64-valued key:value pair to a Span.LogFields() record
func Int64(key string, val int64) Field {
return Field{
key: key,


@ -1,4 +1,4 @@
//Package wire is used to serialize a trace.
// Package wire is used to serialize a trace.
package wire
//go:generate protoc --go_out=. binary.proto


@ -276,6 +276,7 @@ type Field struct {
Key string `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"`
FieldType FieldType `protobuf:"varint,2,opt,name=FieldType,proto3,enum=wire.FieldType" json:"FieldType,omitempty"`
// Types that are assignable to Value:
//
// *Field_NumericVal
// *Field_StringVal
Value isField_Value `protobuf_oneof:"value"`


@ -4,10 +4,15 @@ import (
"fmt"
"math/rand"
"time"
rand2 "github.com/influxdata/influxdb/v2/internal/rand"
)
var seededRand *rand.Rand
func init() {
rand.Seed(time.Now().UnixNano())
lockedSource := rand2.NewLockedSourceFromSeed(time.Now().UnixNano())
seededRand = rand.New(lockedSource)
}
var (
@ -878,5 +883,5 @@ var (
// formatted as "adjective_surname". For example 'focused_turing'. If retry is non-zero, a random
// integer between 0 and 10 will be added to the end of the name, e.g `focused_turing3`
func GetRandomName() string {
return fmt.Sprintf("%s-%s", left[rand.Intn(len(left))], right[rand.Intn(len(right))])
return fmt.Sprintf("%s-%s", left[seededRand.Intn(len(left))], right[seededRand.Intn(len(right))])
}


@ -1182,8 +1182,8 @@ type color struct {
}
// TODO:
// - verify templates are desired
// - template colors so references can be shared
// - verify templates are desired
// - template colors so references can be shared
type colors []*color
func (c colors) influxViewColors() []influxdb.ViewColor {
@ -1218,8 +1218,9 @@ func (c colors) strings() []string {
}
// TODO: looks like much of these are actually getting defaults in
// the UI. looking at system charts, seeing lots of failures for missing
// color types or no colors at all.
//
// the UI. looking at system charts, seeing lots of failures for missing
// color types or no colors at all.
func (c colors) hasTypes(types ...string) []validationErr {
tMap := make(map[string]bool)
for _, cc := range c {


@ -27,13 +27,13 @@ func SetGlobalProfiling(enabled bool) {
}
// collectAllProfiles generates a tarball containing:
// - goroutine profile
// - blocking profile
// - mutex profile
// - heap profile
// - allocations profile
// - (optionally) trace profile
// - (optionally) CPU profile
// - goroutine profile
// - blocking profile
// - mutex profile
// - heap profile
// - allocations profile
// - (optionally) trace profile
// - (optionally) CPU profile
//
// All information is added to a tar archive and then compressed, before being
// returned to the requester as an archive file. Where profiles support debug
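The archive flow described above (profiles written as tar entries, then gzip-compressed before being returned) can be sketched with the standard library; the entry names and modes here are assumptions for illustration:

```go
package main

import (
	"archive/tar"
	"bytes"
	"compress/gzip"
	"fmt"
)

// buildArchive sketches the collectAllProfiles flow documented above: each
// profile becomes a tar entry, and the whole archive is gzip-compressed.
func buildArchive(profiles map[string][]byte) ([]byte, error) {
	var buf bytes.Buffer
	gz := gzip.NewWriter(&buf)
	tw := tar.NewWriter(gz)
	for name, body := range profiles {
		hdr := &tar.Header{Name: name, Mode: 0o600, Size: int64(len(body))}
		if err := tw.WriteHeader(hdr); err != nil {
			return nil, err
		}
		if _, err := tw.Write(body); err != nil {
			return nil, err
		}
	}
	// Close inner writer first so the gzip stream sees the tar footer.
	if err := tw.Close(); err != nil {
		return nil, err
	}
	if err := gz.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	out, err := buildArchive(map[string][]byte{"goroutine": []byte("profile data")})
	fmt.Println(err == nil, len(out) > 0)
}
```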


@ -20,7 +20,7 @@ type EventRecorder struct {
// descriptive of the type of metric being recorded. Possible values may include write, query,
// task, dashboard, etc.
//
// The general structure of the metrics produced from the metric recorder should be
// # The general structure of the metrics produced from the metric recorder should be
//
// http_<subsystem>_request_count{org_id=<org_id>, status=<status>, endpoint=<endpoint>} ...
// http_<subsystem>_request_bytes{org_id=<org_id>, status=<status>, endpoint=<endpoint>} ...


@ -72,9 +72,11 @@ func (e *NoContentEncoder) Encode(w io.Writer, results flux.ResultIterator) (int
// Otherwise one can decode the response body to get the error. For example:
// ```
// _, err = csv.NewResultDecoder(csv.ResultDecoderConfig{}).Decode(bytes.NewReader(res))
// if err != nil {
// // we got some runtime error
// }
//
// if err != nil {
// // we got some runtime error
// }
//
// ```
type NoContentWithErrorDialect struct {
csv.ResultEncoderConfig


@ -12,6 +12,8 @@ import (
"github.com/influxdata/flux/execute"
"github.com/influxdata/flux/memory"
"github.com/influxdata/flux/values"
influxdb2 "github.com/influxdata/influxdb/v2"
"github.com/influxdata/influxdb/v2/authorizer"
"github.com/influxdata/influxdb/v2/kit/platform"
"github.com/influxdata/influxdb/v2/kit/platform/errors"
"github.com/influxdata/influxdb/v2/models"
@ -120,6 +122,15 @@ func (p Provider) WriterFor(ctx context.Context, conf influxdb.Config) (influxdb
return nil, err
}
// err will be set if we are not authorized, we don't care about the other return values.
_, _, err = authorizer.AuthorizeWrite(ctx, influxdb2.BucketsResourceType, bucketID, reqOrgID)
if err != nil {
return nil, &errors.Error{
Code: errors.EForbidden,
Msg: "user not authorized to write",
}
}
return &localPointsWriter{
ctx: ctx,
buf: make([]models.Point, 1<<14),
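The guard added above returns an `EForbidden` error when the caller lacks a write permission, instead of handing back a writer. A stand-alone sketch of that shape (the permission model here is a string list purely for illustration; the real check delegates to `authorizer.AuthorizeWrite` with typed permissions):

```go
package main

import "fmt"

// EForbidden mirrors the error code used in the diff above.
const EForbidden = "forbidden"

// Error is a trimmed stand-in for the platform error type.
type Error struct {
	Code string
	Msg  string
}

func (e *Error) Error() string { return e.Code + ": " + e.Msg }

// authorizeWrite sketches the new guard in WriterFor: without a matching
// write permission, the caller receives an EForbidden error.
func authorizeWrite(perms []string) error {
	for _, p := range perms {
		if p == "write" {
			return nil
		}
	}
	return &Error{Code: EForbidden, Msg: "user not authorized to write"}
}

func main() {
	fmt.Println(authorizeWrite([]string{"read"}))
	fmt.Println(authorizeWrite([]string{"read", "write"}))
}
```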


@ -10,6 +10,8 @@ import (
"github.com/influxdata/flux/execute"
"github.com/influxdata/flux/execute/table"
"github.com/influxdata/flux/execute/table/static"
influxdb2 "github.com/influxdata/influxdb/v2"
context2 "github.com/influxdata/influxdb/v2/context"
"github.com/influxdata/influxdb/v2/kit/platform"
"github.com/influxdata/influxdb/v2/kit/platform/errors"
"github.com/influxdata/influxdb/v2/mock"
@ -220,3 +222,84 @@ func TestProvider_SeriesCardinalityReader_MissingRequestContext(t *testing.T) {
require.Equal(t, wantErr, gotErr)
}
func TestWriterFor(t *testing.T) {
t.Parallel()
auth := influxdb2.Authorization{
Status: influxdb2.Active,
Permissions: []influxdb2.Permission{
{
Action: influxdb2.WriteAction,
Resource: influxdb2.Resource{
Type: influxdb2.BucketsResourceType,
},
},
},
}
provider := influxdb.Provider{
Reader: storageflux.NewReader(&mock.ReadsStore{}),
BucketLookup: mock.BucketLookup{},
}
conf := influxdb.Config{
Bucket: influxdb.NameOrID{
Name: "my-bucket",
},
}
ctx := context.Background()
req := query.Request{
OrganizationID: platform.ID(2),
}
ctx = query.ContextWithRequest(ctx, &req)
ctx = context2.SetAuthorizer(ctx, &auth)
_, gotErr := provider.WriterFor(ctx, conf)
require.Nil(t, gotErr)
}
func TestWriterFor_Error(t *testing.T) {
t.Parallel()
auth := influxdb2.Authorization{
Status: influxdb2.Active,
Permissions: []influxdb2.Permission{
{
Action: influxdb2.ReadAction,
Resource: influxdb2.Resource{
Type: influxdb2.BucketsResourceType,
},
},
},
}
provider := influxdb.Provider{
Reader: storageflux.NewReader(&mock.ReadsStore{}),
BucketLookup: mock.BucketLookup{},
}
conf := influxdb.Config{
Bucket: influxdb.NameOrID{
Name: "my-bucket",
},
}
ctx := context.Background()
req := query.Request{
OrganizationID: platform.ID(2),
}
ctx = query.ContextWithRequest(ctx, &req)
ctx = context2.SetAuthorizer(ctx, &auth)
_, gotErr := provider.WriterFor(ctx, conf)
wantErr := &errors.Error{
Code: errors.EForbidden,
Msg: "user not authorized to write",
}
require.Equal(t, wantErr, gotErr)
}


@ -656,10 +656,8 @@ func (SortedPivotRule) Rewrite(ctx context.Context, pn plan.Node) (plan.Node, bo
return pn, false, nil
}
//
// Push Down of window aggregates.
// ReadRangePhys |> window |> { min, max, mean, count, sum }
//
type PushDownWindowAggregateRule struct{}
func (PushDownWindowAggregateRule) Name() string {
@ -1040,10 +1038,8 @@ func (p GroupWindowAggregateTransposeRule) Rewrite(ctx context.Context, pn plan.
return fnNode, true, nil
}
//
// Push Down of group aggregates.
// ReadGroupPhys |> { count }
//
type PushDownGroupAggregateRule struct{}
func (PushDownGroupAggregateRule) Name() string {


@ -57,8 +57,7 @@ var (
func TestCreateAndGetConnection(t *testing.T) {
t.Parallel()
svc, clean := newTestService(t)
defer clean(t)
svc := newTestService(t)
// Getting an invalid ID should return an error.
got, err := svc.GetRemoteConnection(ctx, initID)
@ -79,8 +78,7 @@ func TestCreateAndGetConnection(t *testing.T) {
func TestUpdateAndGetConnection(t *testing.T) {
t.Parallel()
svc, clean := newTestService(t)
defer clean(t)
svc := newTestService(t)
// Updating a nonexistent ID fails.
updated, err := svc.UpdateRemoteConnection(ctx, initID, updateReq)
@ -106,8 +104,7 @@ func TestUpdateAndGetConnection(t *testing.T) {
func TestUpdateNoop(t *testing.T) {
t.Parallel()
svc, clean := newTestService(t)
defer clean(t)
svc := newTestService(t)
// Create a connection.
created, err := svc.CreateRemoteConnection(ctx, createReq)
@ -128,8 +125,7 @@ func TestUpdateNoop(t *testing.T) {
func TestDeleteConnection(t *testing.T) {
t.Parallel()
svc, clean := newTestService(t)
defer clean(t)
svc := newTestService(t)
// Deleting a nonexistent ID should return an error.
require.Equal(t, errRemoteNotFound, svc.DeleteRemoteConnection(ctx, initID))
@ -167,8 +163,7 @@ func TestListConnections(t *testing.T) {
t.Run("list all", func(t *testing.T) {
t.Parallel()
svc, clean := newTestService(t)
defer clean(t)
svc := newTestService(t)
allConns := setup(t, svc)
listed, err := svc.ListRemoteConnections(ctx, influxdb.RemoteConnectionListFilter{OrgID: connection.OrgID})
@ -179,8 +174,7 @@ func TestListConnections(t *testing.T) {
t.Run("list by name", func(t *testing.T) {
t.Parallel()
svc, clean := newTestService(t)
defer clean(t)
svc := newTestService(t)
allConns := setup(t, svc)
listed, err := svc.ListRemoteConnections(ctx, influxdb.RemoteConnectionListFilter{
@ -194,8 +188,7 @@ func TestListConnections(t *testing.T) {
t.Run("list by URL", func(t *testing.T) {
t.Parallel()
svc, clean := newTestService(t)
defer clean(t)
svc := newTestService(t)
allConns := setup(t, svc)
listed, err := svc.ListRemoteConnections(ctx, influxdb.RemoteConnectionListFilter{
@ -209,8 +202,7 @@ func TestListConnections(t *testing.T) {
t.Run("list by other org ID", func(t *testing.T) {
t.Parallel()
svc, clean := newTestService(t)
defer clean(t)
svc := newTestService(t)
setup(t, svc)
listed, err := svc.ListRemoteConnections(ctx, influxdb.RemoteConnectionListFilter{OrgID: platform.ID(1000)})
@ -219,8 +211,8 @@ func TestListConnections(t *testing.T) {
})
}
func newTestService(t *testing.T) (*service, func(t *testing.T)) {
store, clean := sqlite.NewTestStore(t)
func newTestService(t *testing.T) *service {
store := sqlite.NewTestStore(t)
logger := zaptest.NewLogger(t)
sqliteMigrator := sqlite.NewMigrator(store, logger)
require.NoError(t, sqliteMigrator.Up(ctx, migrations.AllUp))
@ -230,5 +222,5 @@ func newTestService(t *testing.T) (*service, func(t *testing.T)) {
idGenerator: mock.NewIncrementingIDGenerator(initID),
}
return &svc, clean
return &svc
}


@ -20,20 +20,21 @@ var ErrMaxQueueSizeTooSmall = errors.Error{
// Replication contains all info about a replication that should be returned to users.
type Replication struct {
ID platform.ID `json:"id" db:"id"`
OrgID platform.ID `json:"orgID" db:"org_id"`
Name string `json:"name" db:"name"`
Description *string `json:"description,omitempty" db:"description"`
RemoteID platform.ID `json:"remoteID" db:"remote_id"`
LocalBucketID platform.ID `json:"localBucketID" db:"local_bucket_id"`
RemoteBucketID *platform.ID `json:"remoteBucketID" db:"remote_bucket_id"`
RemoteBucketName string `json:"RemoteBucketName" db:"remote_bucket_name"`
MaxQueueSizeBytes int64 `json:"maxQueueSizeBytes" db:"max_queue_size_bytes"`
CurrentQueueSizeBytes int64 `json:"currentQueueSizeBytes" db:"current_queue_size_bytes"`
LatestResponseCode *int32 `json:"latestResponseCode,omitempty" db:"latest_response_code"`
LatestErrorMessage *string `json:"latestErrorMessage,omitempty" db:"latest_error_message"`
DropNonRetryableData bool `json:"dropNonRetryableData" db:"drop_non_retryable_data"`
MaxAgeSeconds int64 `json:"maxAgeSeconds" db:"max_age_seconds"`
ID platform.ID `json:"id" db:"id"`
OrgID platform.ID `json:"orgID" db:"org_id"`
Name string `json:"name" db:"name"`
Description *string `json:"description,omitempty" db:"description"`
RemoteID platform.ID `json:"remoteID" db:"remote_id"`
LocalBucketID platform.ID `json:"localBucketID" db:"local_bucket_id"`
RemoteBucketID *platform.ID `json:"remoteBucketID" db:"remote_bucket_id"`
RemoteBucketName string `json:"RemoteBucketName" db:"remote_bucket_name"`
MaxQueueSizeBytes int64 `json:"maxQueueSizeBytes" db:"max_queue_size_bytes"`
CurrentQueueSizeBytes int64 `json:"currentQueueSizeBytes"`
RemainingBytesToBeSynced int64 `json:"remainingBytesToBeSynced"`
LatestResponseCode *int32 `json:"latestResponseCode,omitempty" db:"latest_response_code"`
LatestErrorMessage *string `json:"latestErrorMessage,omitempty" db:"latest_error_message"`
DropNonRetryableData bool `json:"dropNonRetryableData" db:"drop_non_retryable_data"`
MaxAgeSeconds int64 `json:"maxAgeSeconds" db:"max_age_seconds"`
}
// ReplicationListFilter is a selection filter for listing replications.


@ -4,6 +4,7 @@ import (
"errors"
"fmt"
"io"
"io/fs"
"math"
"os"
"path/filepath"
@ -21,7 +22,7 @@ import (
const (
scannerAdvanceInterval = 10 * time.Second
purgeInterval = 60 * time.Second
defaultMaxAge = 168 * time.Hour / time.Second
defaultMaxAge = 7 * 24 * time.Hour // 1 week
)
type remoteWriter interface {
@ -72,7 +73,7 @@ func NewDurableQueueManager(log *zap.Logger, queuePath string, metrics *metrics.
}
// InitializeQueue creates and opens a new durable queue which is associated with a replication stream.
func (qm *durableQueueManager) InitializeQueue(replicationID platform.ID, maxQueueSizeBytes int64, orgID platform.ID, localBucketID platform.ID, maxAge int64) error {
func (qm *durableQueueManager) InitializeQueue(replicationID platform.ID, maxQueueSizeBytes int64, orgID platform.ID, localBucketID platform.ID, maxAgeSeconds int64) error {
qm.mutex.Lock()
defer qm.mutex.Unlock()
@ -112,7 +113,7 @@ func (qm *durableQueueManager) InitializeQueue(replicationID platform.ID, maxQue
}
// Map new durable queue and scanner to its corresponding replication stream via replication ID
rq := qm.newReplicationQueue(replicationID, orgID, localBucketID, newQueue, maxAge)
rq := qm.newReplicationQueue(replicationID, orgID, localBucketID, newQueue, maxAgeSeconds)
qm.replicationQueues[replicationID] = rq
rq.Open()
@ -315,6 +316,23 @@ func (qm *durableQueueManager) CurrentQueueSizes(ids []platform.ID) (map[platfor
return sizes, nil
}
// Returns the remaining number of bytes in Queue to be read:
func (qm *durableQueueManager) RemainingQueueSizes(ids []platform.ID) (map[platform.ID]int64, error) {
qm.mutex.RLock()
defer qm.mutex.RUnlock()
sizes := make(map[platform.ID]int64, len(ids))
for _, id := range ids {
if _, exist := qm.replicationQueues[id]; !exist {
return nil, fmt.Errorf("durable queue not found for replication ID %q", id)
}
sizes[id] = qm.replicationQueues[id].queue.TotalBytes()
}
return sizes, nil
}
// StartReplicationQueues updates the durableQueueManager.replicationQueues map, fully removing any partially deleted
// queues (present on disk, but not tracked in sqlite), opening all current queues, and logging info for each.
func (qm *durableQueueManager) StartReplicationQueues(trackedReplications map[platform.ID]*influxdb.TrackedReplication) error {
@ -341,9 +359,31 @@ func (qm *durableQueueManager) StartReplicationQueues(trackedReplications map[pl
// Open and map the queue struct to its replication ID
if err := queue.Open(); err != nil {
qm.logger.Error("failed to open replication stream durable queue", zap.Error(err), zap.String("id", id.String()))
errOccurred = true
continue
// This could have errored after a backup/restore (we do not persist the replicationq).
// Check if the dir exists, create if it doesn't, then open and carry on
if pErr, ok := err.(*fs.PathError); ok {
path := pErr.Path
if _, err := os.Stat(path); err != nil && os.IsNotExist(err) {
if err := os.MkdirAll(path, 0777); err != nil {
qm.logger.Error("error attempting to recreate missing replication queue", zap.Error(err), zap.String("id", id.String()), zap.String("path", path))
errOccurred = true
continue
}
if err := queue.Open(); err != nil {
qm.logger.Error("error attempting to open replication queue", zap.Error(err), zap.String("id", id.String()), zap.String("path", path))
errOccurred = true
continue
}
qm.replicationQueues[id] = qm.newReplicationQueue(id, repl.OrgID, repl.LocalBucketID, queue, repl.MaxAgeSeconds)
qm.replicationQueues[id].Open()
qm.logger.Info("Opened replication stream", zap.String("id", id.String()), zap.String("path", queue.Dir()))
}
} else {
qm.logger.Error("failed to open replication stream durable queue", zap.Error(err), zap.String("id", id.String()), zap.String("path", queue.Dir()))
errOccurred = true
}
} else {
qm.replicationQueues[id] = qm.newReplicationQueue(id, repl.OrgID, repl.LocalBucketID, queue, repl.MaxAgeSeconds)
qm.replicationQueues[id].Open()
@ -439,15 +479,15 @@ func (qm *durableQueueManager) EnqueueData(replicationID platform.ID, data []byt
return nil
}
func (qm *durableQueueManager) newReplicationQueue(id platform.ID, orgID platform.ID, localBucketID platform.ID, queue *durablequeue.Queue, maxAge int64) *replicationQueue {
func (qm *durableQueueManager) newReplicationQueue(id platform.ID, orgID platform.ID, localBucketID platform.ID, queue *durablequeue.Queue, maxAgeSeconds int64) *replicationQueue {
logger := qm.logger.With(zap.String("replication_id", id.String()))
done := make(chan struct{})
// check for max age minimum
var maxAgeTime time.Duration
if maxAge < 0 {
if maxAgeSeconds < 0 {
maxAgeTime = defaultMaxAge
} else {
maxAgeTime = time.Duration(maxAge)
maxAgeTime = time.Duration(maxAgeSeconds) * time.Second
}
return &replicationQueue{


@ -35,12 +35,13 @@ func TestCreateNewQueueDirExists(t *testing.T) {
t.Parallel()
queuePath, qm := initQueueManager(t)
defer os.RemoveAll(filepath.Dir(queuePath))
err := qm.InitializeQueue(id1, maxQueueSizeBytes, orgID1, localBucketID1, 0)
require.NoError(t, err)
require.DirExists(t, filepath.Join(queuePath, id1.String()))
shutdown(t, qm)
}
func TestEnqueueScan(t *testing.T) {
@ -78,9 +79,10 @@ func TestEnqueueScan(t *testing.T) {
}
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
queuePath, qm := initQueueManager(t)
defer os.RemoveAll(filepath.Dir(queuePath))
_, qm := initQueueManager(t)
// Create new queue
err := qm.InitializeQueue(id1, maxQueueSizeBytes, orgID1, localBucketID1, 0)
@@ -97,6 +99,9 @@ func TestEnqueueScan(t *testing.T) {
 			// Check queue position
 			closeRq(rq)
 			scan, err := rq.queue.NewScanner()
+			t.Cleanup(func() {
+				require.NoError(t, rq.queue.Close())
+			})
 			if tt.writeFuncReturn == nil {
 				require.ErrorIs(t, err, io.EOF)
@@ -115,8 +120,7 @@ func TestEnqueueScan(t *testing.T) {
 func TestCreateNewQueueDuplicateID(t *testing.T) {
 	t.Parallel()
-	queuePath, qm := initQueueManager(t)
-	defer os.RemoveAll(filepath.Dir(queuePath))
+	_, qm := initQueueManager(t)

 	// Create a valid new queue
 	err := qm.InitializeQueue(id1, maxQueueSizeBytes, orgID1, localBucketID1, 0)
@@ -125,13 +129,14 @@ func TestCreateNewQueueDuplicateID(t *testing.T) {
 	// Try to initialize another queue with the same replication ID
 	err = qm.InitializeQueue(id1, maxQueueSizeBytes, orgID1, localBucketID1, 0)
 	require.EqualError(t, err, "durable queue already exists for replication ID \"0000000000000001\"")
+
+	shutdown(t, qm)
 }

 func TestDeleteQueueDirRemoved(t *testing.T) {
 	t.Parallel()
 	queuePath, qm := initQueueManager(t)
-	defer os.RemoveAll(filepath.Dir(queuePath))

 	// Create a valid new queue
 	err := qm.InitializeQueue(id1, maxQueueSizeBytes, orgID1, localBucketID1, 0)
@@ -147,8 +152,7 @@ func TestDeleteQueueDirRemoved(t *testing.T) {
 func TestDeleteQueueNonexistentID(t *testing.T) {
 	t.Parallel()
-	queuePath, qm := initQueueManager(t)
-	defer os.RemoveAll(filepath.Dir(queuePath))
+	_, qm := initQueueManager(t)

 	// Delete nonexistent queue
 	err := qm.DeleteQueue(id1)
@@ -158,8 +162,7 @@ func TestDeleteQueueNonexistentID(t *testing.T) {
 func TestUpdateMaxQueueSizeNonexistentID(t *testing.T) {
 	t.Parallel()
-	queuePath, qm := initQueueManager(t)
-	defer os.RemoveAll(filepath.Dir(queuePath))
+	_, qm := initQueueManager(t)

 	// Update nonexistent queue
 	err := qm.UpdateMaxQueueSize(id1, influxdb.DefaultReplicationMaxQueueSizeBytes)
@@ -170,7 +173,6 @@ func TestStartReplicationQueue(t *testing.T) {
 	t.Parallel()
 	queuePath, qm := initQueueManager(t)
-	defer os.RemoveAll(filepath.Dir(queuePath))

 	// Create new queue
 	err := qm.InitializeQueue(id1, maxQueueSizeBytes, orgID1, localBucketID1, 0)
@@ -199,13 +201,14 @@ func TestStartReplicationQueue(t *testing.T) {
 	// Ensure queue is open by trying to remove, will error if open
 	err = qm.replicationQueues[id1].queue.Remove()
 	require.Errorf(t, err, "queue is open")
+	require.NoError(t, qm.replicationQueues[id1].queue.Close())
 }

 func TestStartReplicationQueuePartialDelete(t *testing.T) {
 	t.Parallel()
 	queuePath, qm := initQueueManager(t)
-	defer os.RemoveAll(filepath.Dir(queuePath))

 	// Create new queue
 	err := qm.InitializeQueue(id1, maxQueueSizeBytes, orgID1, localBucketID1, 0)
@@ -233,7 +236,6 @@ func TestStartReplicationQueuesMultiple(t *testing.T) {
 	t.Parallel()
 	queuePath, qm := initQueueManager(t)
-	defer os.RemoveAll(filepath.Dir(queuePath))

 	// Create queue1
 	err := qm.InitializeQueue(id1, maxQueueSizeBytes, orgID1, localBucketID1, 0)
@@ -280,13 +282,15 @@ func TestStartReplicationQueuesMultiple(t *testing.T) {
 	require.Errorf(t, err, "queue is open")
 	err = qm.replicationQueues[id2].queue.Remove()
 	require.Errorf(t, err, "queue is open")
+	require.NoError(t, qm.replicationQueues[id1].queue.Close())
+	require.NoError(t, qm.replicationQueues[id2].queue.Close())
 }

 func TestStartReplicationQueuesMultipleWithPartialDelete(t *testing.T) {
 	t.Parallel()
 	queuePath, qm := initQueueManager(t)
-	defer os.RemoveAll(filepath.Dir(queuePath))

 	// Create queue1
 	err := qm.InitializeQueue(id1, maxQueueSizeBytes, orgID1, localBucketID1, 0)
@@ -325,14 +329,14 @@ func TestStartReplicationQueuesMultipleWithPartialDelete(t *testing.T) {
 	// Ensure queue1 is open by trying to remove, will error if open
 	err = qm.replicationQueues[id1].queue.Remove()
 	require.Errorf(t, err, "queue is open")
+	require.NoError(t, qm.replicationQueues[id1].queue.Close())
 }

 func initQueueManager(t *testing.T) (string, *durableQueueManager) {
 	t.Helper()
-	enginePath, err := os.MkdirTemp("", "engine")
-	require.NoError(t, err)
-	queuePath := filepath.Join(enginePath, "replicationq")
+	queuePath := filepath.Join(t.TempDir(), "replicationq")
 	logger := zaptest.NewLogger(t)
 	qm := NewDurableQueueManager(logger, queuePath, metrics.NewReplicationsMetrics(), replicationsMock.NewMockHttpConfigStore(nil))
@@ -403,9 +407,7 @@ func getTestRemoteWriter(t *testing.T, expected string) remoteWriter {
 func TestEnqueueData(t *testing.T) {
 	t.Parallel()
-	queuePath, err := os.MkdirTemp("", "testqueue")
-	require.NoError(t, err)
-	defer os.RemoveAll(queuePath)
+	queuePath := t.TempDir()
 	logger := zaptest.NewLogger(t)
 	qm := NewDurableQueueManager(logger, queuePath, metrics.NewReplicationsMetrics(), replicationsMock.NewMockHttpConfigStore(nil))
@@ -417,6 +419,11 @@ func TestEnqueueData(t *testing.T) {
 	require.NoError(t, err)
 	// Empty queues are 8 bytes for the footer.
 	require.Equal(t, map[platform.ID]int64{id1: 8}, sizes)
+	// Remaining queue should initially be empty:
+	rsizes, err := qm.RemainingQueueSizes([]platform.ID{id1})
+	require.NoError(t, err)
+	// Empty queue = 0 bytes:
+	require.Equal(t, map[platform.ID]int64{id1: 0}, rsizes)
 	data := "some fake data"
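The new assertions rest on simple size accounting: `CurrentQueueSizes` reports on-disk bytes including an 8-byte segment footer, while `RemainingQueueSizes` reports only the unsent data. A sketch of that arithmetic (the footer constant and the relationship are taken from the assertions in this diff, not from the queue implementation itself):

```go
package main

import "fmt"

func main() {
	// Empty queues are asserted to be exactly 8 bytes on disk.
	const footerBytes = int64(8)

	data := "some fake data"
	diskSize := footerBytes + int64(len(data)) // what CurrentQueueSizes reports
	remaining := diskSize - footerBytes        // what RemainingQueueSizes reports

	fmt.Println(diskSize, remaining) // 22 14
}
```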
@@ -424,12 +431,20 @@ func TestEnqueueData(t *testing.T) {
 	rq, ok := qm.replicationQueues[id1]
 	require.True(t, ok)
 	closeRq(rq)
+	t.Cleanup(func() {
+		require.NoError(t, rq.queue.Close())
+	})
 	go func() { <-rq.receive }() // absorb the receive to avoid testcase deadlock
 	require.NoError(t, qm.EnqueueData(id1, []byte(data), 1))
 	sizes, err = qm.CurrentQueueSizes([]platform.ID{id1})
 	require.NoError(t, err)
 	require.Greater(t, sizes[id1], int64(8))
+	rsizes, err = qm.RemainingQueueSizes([]platform.ID{id1})
+	require.NoError(t, err)
+	require.Greater(t, rsizes[id1], int64(0))
+	// Difference between disk size and queue should only be footer size
+	require.Equal(t, sizes[id1]-rsizes[id1], int64(8))
 	written, err := qm.replicationQueues[id1].queue.Current()
 	require.NoError(t, err)
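The recurring `go func() { <-rq.receive }()` lines guard against a deadlock: with the queue's delivery goroutine stopped by `closeRq`, the send that `EnqueueData` performs on the unbuffered `receive` channel would block forever unless the test drains it. A stripped-down illustration of the pattern (channel name borrowed from the test; no queue involved):

```go
package main

import "fmt"

func main() {
	receive := make(chan struct{})

	// Absorb the notification so the sender below does not block forever.
	go func() { <-receive }()

	// In the real test, EnqueueData performs this send to signal new data;
	// with no reader on an unbuffered channel, it would deadlock.
	receive <- struct{}{}

	fmt.Println("enqueue signal delivered without deadlock")
}
```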
@@ -450,7 +465,6 @@ func TestSendWrite(t *testing.T) {
 	}
 	path, qm := initQueueManager(t)
-	defer os.RemoveAll(path)

 	require.NoError(t, qm.InitializeQueue(id1, maxQueueSizeBytes, orgID1, localBucketID1, 0))
 	require.DirExists(t, filepath.Join(path, id1.String()))
@@ -458,6 +472,9 @@ func TestSendWrite(t *testing.T) {
 	rq, ok := qm.replicationQueues[id1]
 	require.True(t, ok)
 	closeRq(rq)
+	t.Cleanup(func() {
+		require.NoError(t, rq.queue.Close())
+	})
 	go func() { <-rq.receive }() // absorb the receive to avoid testcase deadlock

 	// Create custom remote writer that does some expected behavior
@@ -481,8 +498,17 @@ func TestSendWrite(t *testing.T) {
 	require.True(t, scan.Next())
 	require.Equal(t, []byte(points[pointIndex]), scan.Bytes())
 	require.NoError(t, scan.Err())
+	// Initial Queue size should be size of data + footer
+	rsizesI, err := qm.RemainingQueueSizes([]platform.ID{id1})
+	require.NoError(t, err)
+	require.Equal(t, rsizesI[id1], int64(8+len(points[pointIndex])))
 	// Send the write to the "remote" with a success
 	rq.SendWrite()
+	// Queue becomes empty after write:
+	rsizesJ, err := qm.RemainingQueueSizes([]platform.ID{id1})
+	require.NoError(t, err)
+	require.Equal(t, rsizesJ[id1], int64(0))
 	// Make sure the data is no longer in the queue
 	_, err = rq.queue.NewScanner()
 	require.Equal(t, io.EOF, err)
@@ -496,9 +522,15 @@ func TestSendWrite(t *testing.T) {
 	require.True(t, scan.Next())
 	require.Equal(t, []byte(points[pointIndex]), scan.Bytes())
 	require.NoError(t, scan.Err())
+	rsizesI, err = qm.RemainingQueueSizes([]platform.ID{id1})
+	require.NoError(t, err)
 	// Send the write to the "remote" with a FAILURE
 	shouldFailThisWrite = true
 	rq.SendWrite()
+	// Queue size should not have decreased if write has failed:
+	rsizesJ, err = qm.RemainingQueueSizes([]platform.ID{id1})
+	require.NoError(t, err)
+	require.Equal(t, rsizesJ[id1], rsizesI[id1])
 	// Make sure the data is still in the queue
 	scan, err = rq.queue.NewScanner()
 	require.NoError(t, err)
@@ -508,6 +540,11 @@ func TestSendWrite(t *testing.T) {
 	// Send the write to the "remote" again, with a SUCCESS
 	shouldFailThisWrite = false
 	rq.SendWrite()
+	// Queue Becomes empty after a successful write
+	rsizesJ, err = qm.RemainingQueueSizes([]platform.ID{id1})
+	require.NoError(t, err)
+	require.Equal(t, rsizesJ[id1], int64(0))
 	// Make sure the data is no longer in the queue
 	_, err = rq.queue.NewScanner()
 	require.Equal(t, io.EOF, err)
@@ -517,7 +554,6 @@ func TestEnqueueData_WithMetrics(t *testing.T) {
 	t.Parallel()
 	path, qm := initQueueManager(t)
-	defer os.RemoveAll(path)

 	require.NoError(t, qm.InitializeQueue(id1, maxQueueSizeBytes, orgID1, localBucketID1, 0))
 	require.DirExists(t, filepath.Join(path, id1.String()))
@@ -525,6 +561,9 @@ func TestEnqueueData_WithMetrics(t *testing.T) {
 	rq, ok := qm.replicationQueues[id1]
 	require.True(t, ok)
 	closeRq(rq)
+	t.Cleanup(func() {
+		require.NoError(t, rq.queue.Close())
+	})
 	reg := prom.NewRegistry(zaptest.NewLogger(t))
 	reg.MustRegister(qm.metrics.PrometheusCollectors()...)
@@ -559,7 +598,6 @@ func TestEnqueueData_EnqueueFailure(t *testing.T) {
 	t.Parallel()
 	path, qm := initQueueManager(t)
-	defer os.RemoveAll(path)

 	require.NoError(t, qm.InitializeQueue(id1, maxQueueSizeBytes, orgID1, localBucketID1, 0))
 	require.DirExists(t, filepath.Join(path, id1.String()))
@@ -592,7 +630,6 @@ func TestGoroutineReceives(t *testing.T) {
 	t.Parallel()
 	path, qm := initQueueManager(t)
-	defer os.RemoveAll(path)

 	require.NoError(t, qm.InitializeQueue(id1, maxQueueSizeBytes, orgID1, localBucketID1, 0))
 	require.DirExists(t, filepath.Join(path, id1.String()))
@@ -600,6 +637,9 @@ func TestGoroutineReceives(t *testing.T) {
 	require.True(t, ok)
 	require.NotNil(t, rq)
 	closeRq(rq) // atypical from normal behavior, but lets us receive channels to test
+	t.Cleanup(func() {
+		require.NoError(t, rq.queue.Close())
+	})
 	go func() { require.NoError(t, qm.EnqueueData(id1, []byte("1234"), 1)) }()

 	select {
@@ -615,7 +655,6 @@ func TestGoroutineCloses(t *testing.T) {
 	t.Parallel()
 	path, qm := initQueueManager(t)
-	defer os.RemoveAll(path)

 	require.NoError(t, qm.InitializeQueue(id1, maxQueueSizeBytes, orgID1, localBucketID1, 0))
 	require.DirExists(t, filepath.Join(path, id1.String()))
@@ -640,7 +679,9 @@ func TestGetReplications(t *testing.T) {
 	t.Parallel()
 	path, qm := initQueueManager(t)
-	defer os.RemoveAll(path)
+	t.Cleanup(func() {
+		shutdown(t, qm)
+	})

 	// Initialize 3 queues (2nd and 3rd share the same orgID and localBucket)
 	require.NoError(t, qm.InitializeQueue(id1, maxQueueSizeBytes, orgID1, localBucketID1, 0))
@@ -665,3 +706,44 @@ func TestGetReplications(t *testing.T) {
 	repls = qm.GetReplications(orgID2, localBucketID2)
 	require.ElementsMatch(t, expectedRepls, repls)
 }
+
+func TestReplicationStartMissingQueue(t *testing.T) {
+	t.Parallel()
+	queuePath, qm := initQueueManager(t)
+
+	// Create new queue
+	err := qm.InitializeQueue(id1, maxQueueSizeBytes, orgID1, localBucketID1, 0)
+	require.NoError(t, err)
+	require.DirExists(t, filepath.Join(queuePath, id1.String()))
+
+	// Represents the replications tracked in sqlite, this one is tracked
+	trackedReplications := make(map[platform.ID]*influxdb.TrackedReplication)
+	trackedReplications[id1] = &influxdb.TrackedReplication{
+		MaxQueueSizeBytes: maxQueueSizeBytes,
+		MaxAgeSeconds:     0,
+		OrgID:             orgID1,
+		LocalBucketID:     localBucketID1,
+	}
+
+	// Simulate server shutdown by closing all queues and clearing replicationQueues map
+	shutdown(t, qm)
+
+	// Delete the queue to simulate restoring from a backup
+	err = os.RemoveAll(filepath.Join(queuePath))
+	require.NoError(t, err)
+
+	// Call startup function
+	err = qm.StartReplicationQueues(trackedReplications)
+	require.NoError(t, err)
+	t.Cleanup(func() {
+		shutdown(t, qm)
+	})
+
+	// Make sure queue is stored in map
+	require.NotNil(t, qm.replicationQueues[id1])
+
+	// Ensure queue is open by trying to remove, will error if open
+	err = qm.replicationQueues[id1].queue.Remove()
+	require.Errorf(t, err, "queue is open")
+}
