notes on this commit. This commit was grueling ;-(. The task API is not a friendly
API to consume. There are a lot of non-obvious things going on and almost every
one of them tripped me up. Things of note:
* the http.TaskService does not satisfy the influxdb.TaskService,
making it impossible to use as a dependency if the tasks service gets
split out
* the APIs for create and update do not share common types. For example:
creating a task takes every field as a string, but in the update the same
field is taken as an options.Duration type. A step further and you'll notice
that create does not need an option to be provided, but the update does. It's
jarring trying to understand the indirection here, and I struggled mightily
trying to make sense of it all with the indirection and differing types. It
made for a very difficult task (no pun intended) when it should have been
trivial. There is an opportunity here to fix these up, make this API more
uniform, and remove unnecessary complexity like the options type (see the
sketch after these notes).
* Nested IDs that get marshaled are no bueno when you want to marshal a task
that does not have an ID in it, whether for the user, org, or self ID. It's a
challenge just to do that.
* Lots of logs in the kv.Task portion where we hit errors and log, and others
where we just return, so it isn't clear what is happening. The kv
implementation is also very procedural, and I found myself bouncing around
like a ping pong ball trying to make heads or tails of it.
* There is auth buried deep inside the kv.Task implementation that kept
throwing me off b/c it kept throwing errors instead of warns. I assume (not
sure if I'm correct on this) that the stuff being logged is considered
inconsequential to the task working. I had lots of errors from the auth buried
in there and hadn't a clue what to make of them....
leaving these notes here as a look back at why working with tasks is so
difficult. This API can improve dramatically. I spent 5x as much time trying
to figure out how to use the task API, in procedural calls, as I did writing
the business logic to consume it.... that's a scary realization ;-(
references: #17434
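To make the type mismatch concrete, here is a minimal sketch; the type and
field names are hypothetical stand-ins for illustration only, not the actual
influxdb task types:

    package example

    import "time"

    // taskCreate mirrors a create request where every field arrives as a
    // plain string.
    type taskCreate struct {
        Flux  string
        Every string // e.g. "1h", left as a raw string
    }

    // duration stands in for an options.Duration-style wrapper type.
    type duration struct{ D time.Duration }

    // taskUpdate mirrors an update request where the same field must be the
    // wrapper type, so callers end up converting between representations.
    type taskUpdate struct {
        Every *duration
    }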
also drops a skipped test that has been skipped for over a year. Tried
unskipping it, but now it fails for all sorts of reasons, even without the
race flag enabled.
Renaming Generate in anticipation of a new method that will onboard
users other than the initial user. The intent is to simplify multi-user
setups.
Co-authored-by: Chris Goller <goller@gmail.com>
this work is the first step toward making ALL resources unique by
metadata.name. The displayName is a means to rename an existing resource. This
is all to support pkger idempotency. The metadata.name field will be the
unique identifier within a pkg.
* refactor(storage): move type ByTagKey to the only package that uses it
* refactor(tsdb): use types in tsdb/cursors
* refactor(tsdb): remove unused type SeriesIDElems
* refactor(tsdb): inline only use of tsdb.ReadAllSeriesIDIterator
* refactor(tsdb): move series file to its own package
* refactor(storage): remove platform->influxdb aliases
* fix: allow authorized label service to be called indirectly
#17071 exists because pkger loads all service resources as authorized on
start, resulting in them all being authorized when referenced indirectly
(not hit directly via api by consumer). Rather than restructure pkger to
only authorize direct services, this allows proper indirect auth to
labels (the cause of #17071).
* Add orgService to tests
* Add resource types to find orgID from
* refactor(storage): add readSource field accessors
* refactor(storage): remove unused limitSeriesCursor
* refactor(storage): export IndexSeriesCursor
This allows IDPE to use the same implementation, rather than duplicate
code. Also copied unit tests from IDPE.
* chore: go fmt
The tasks subsystem will now use the flux language service to parse and
evaluate flux instead of directly interacting with the parser or
runtime. This helps break the dependency on the libflux parser for the
base influxdb package.
This includes the task notification packages which were changed at the
same time.
* fix(storage): simplify storage/seriesCursor
storage/seriesCursor releases series file and TSI references sooner.
Remove unhelpful request object, inherited from 1.x
* chore(storage): replace SeriesCursor interface with sole implementation
* fix(backup): handle backup with no credentials file
Backups and restores should work whether or not the original installation uses
a credentials file and whether or not the backup contains a credentials file.
Prior to this change, InfluxQL requests were sent to the same back end as Flux
queries, but this MAY not always be the case. Now InfluxQL queries are
specifically routed to the InfluxQLService. In the case of this OSS build the
FluxService and InfluxQLService are the same.
this also extends dry run to provide env refs to it. the refactoring was
to enable that bit. Having the ability to dry run with the env ref entries
means we can dry run the pkg with the env ref values to see the impact before
the pkg is actually applied.
this is the last step for pkger to follow the service definition pattern
that is in the works. Some bits from http were moved into kit/transport/http
for reusability. The end result is to hopefully axe the http pkg in favor of
reusable types in kit. Long ways off still...
* feat(backup): `influx backup` creates data backup
* feat(backup): initial restore work
* feat(restore): initial restore impl
Adds a restore tool which does offline restore of data and metadata.
* fix(restore): pr cleanup
* fix(restore): fix data dir creation
* fix(restore): pr cleanup
* chore: amend CHANGELOG
* fix: restore to empty dir fails differently
* feat(backup): backup and restore credentials
Saves the credentials file to backups and restores it from backups.
Additionally adds some logging for errors when fetching backup files.
* fix(restore): add missed commit
* fix(restore): pr cleanup
* fix(restore): fix default credentials restore path
* fix(backup): actually copy the credentials file for the backup
* fix: dirs get 0777, files get 0666
* fix: small review feedback
Co-authored-by: tmgordeeva <tanya@influxdata.com>
this work is to support pkger, but I was able to add back in the skipped
tests. We were seeing failures upstream and didn't catch them in influxdb b/c
the tests were being skipped.
closes: #14799
this is a blocker for anyone who hits the endpoint services internally. They
had to know that they also need to know of the secret service and then do all
that put/delete alongside the operation. This makes that unified inside the
store tx. One other thing this does is make obvious the dependencies that the
notification services have; in this case it is the secret service they depend
on.
this is a step towards providing a shared http client that manages connection
pooling and timeouts, and reduces GC pressure by not creating/GCing a client
for each request. Bring on the red!
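A minimal sketch of that direction, using only the standard library (this is
not the eventual influxdb helper, just the shape of it):

    package httpc

    import (
        "net"
        "net/http"
        "time"
    )

    // SharedClient is built once and reused; the transport keeps idle
    // connections pooled so repeated requests avoid new dials and extra
    // garbage.
    var SharedClient = &http.Client{
        Timeout: 10 * time.Second,
        Transport: &http.Transport{
            DialContext:         (&net.Dialer{Timeout: 5 * time.Second}).DialContext,
            MaxIdleConns:        100,
            MaxIdleConnsPerHost: 10,
            IdleConnTimeout:     90 * time.Second,
        },
    }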
tests were failing from a data race caused in the test setup. an incrementing
counter needs something to serialize it (atomic in this case) to remove that
data race. This touches that up.
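Roughly the shape of the fix, assuming the counter lives in test setup (names
are illustrative):

    package setup

    import "sync/atomic"

    var nextID uint64

    // newID hands out unique IDs to tests; atomic.AddUint64 serializes the
    // increment so parallel tests no longer race on it.
    func newID() uint64 {
        return atomic.AddUint64(&nextID, 1)
    }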
* chore: Remove several instances of WithLogger
* chore: unexport Logger fields
* chore: unexport some more Logger fields
* chore: go fmt
chore: fix test
chore: s/logger/log
chore: fix test
chore: revert http.Handler.Handler constructor initialization
* refactor: integrate review feedback, fix all test nop loggers
* refactor: capitalize all log messages
* refactor: rename two logger to log
Signed-off-by: Lorenzo Affetti <lorenzo.affetti@gmail.com>
Signed-off-by: Julius Volz <julius.volz@gmail.com>
move to internal
update flux to v0.50
Revert "move to internal"
This reverts commit bcd4caffbd44135f1dbeac4163cb2a22a751f45a.
promtests/internal --> internal/promtests
the flags have to match the env vars, with the exception of the flags being
lower case and all `_` changed to `-` in the flag. This is a result of using
the cobra flag to env var mapping with `-` replaced by `_`.
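The mapping is the standard cobra + viper wiring, roughly like the following
(the `secret-store` flag name is only an example):

    package main

    import (
        "strings"

        "github.com/spf13/cobra"
        "github.com/spf13/viper"
    )

    // bindFlags makes a flag such as --secret-store addressable through the
    // env var SECRET_STORE, because "-" is replaced with "_".
    func bindFlags(cmd *cobra.Command) error {
        viper.SetEnvKeyReplacer(strings.NewReplacer("-", "_"))
        viper.AutomaticEnv()
        return viper.BindPFlags(cmd.Flags())
    }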
The secret service is tested by creating a secret and then attempting to
use it in a flux query. There is one test where accessing the secret
should work and one where it should return that the action is forbidden.
To have checks and notifications happen transactionally we need to be
able to alert the task system when a new task was created using the checks and notifications systems.
These two new middlewares allow us to inform the task system of an update
to a task that was created through the check or notification systems.
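A hypothetical sketch of what such a middleware can look like; the interfaces
and names below are illustrative stand-ins, not the real check or task
coordinator types:

    package middleware

    import "context"

    type check struct{ ID, TaskID uint64 }

    // checkService is a stand-in for the check service being wrapped.
    type checkService interface {
        UpdateCheck(ctx context.Context, id uint64, c check) (check, error)
    }

    // taskCoordinator is whatever lets us tell the task system a task changed.
    type taskCoordinator interface {
        TaskUpdated(ctx context.Context, taskID uint64) error
    }

    // coordinatingCheckService forwards to the wrapped service, then informs
    // the task coordinator which task was touched.
    type coordinatingCheckService struct {
        checkService
        coord taskCoordinator
    }

    func (s coordinatingCheckService) UpdateCheck(ctx context.Context, id uint64, c check) (check, error) {
        updated, err := s.checkService.UpdateCheck(ctx, id, c)
        if err != nil {
            return updated, err
        }
        return updated, s.coord.TaskUpdated(ctx, updated.TaskID)
    }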
This command performs verification of TSM blocks:
* expected and actual CRC-32 checksums match (see the checksum sketch below)
* expected and actual min and max timestamps match the decoded data
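A minimal sketch of the checksum half of that check, assuming a block layout
of a 4-byte big-endian CRC-32 followed by the block body (the real TSM reader
owns the actual layout and the timestamp comparison):

    package verify

    import (
        "encoding/binary"
        "hash/crc32"
    )

    // blockChecksumOK reports whether the stored CRC-32 matches the checksum
    // recomputed over the block body.
    func blockChecksumOK(block []byte) bool {
        if len(block) < 4 {
            return false
        }
        expected := binary.BigEndian.Uint32(block[:4])
        return crc32.ChecksumIEEE(block[4:]) == expected
    }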
Adds export tooling to `influxd inspect export-blocks` so that we
can dump out block data in SQL format for better analysis during
the debugging process.
The controller implementation is primarily used by influxdb so it
shouldn't be part of the flux repository. This copies the code from flux
to influxdb so it can be removed from the next flux release.
Co-authored-by: Jade McGough <jade@influxdata.com>
* Add session renew option to launcher and use in middleware
* pass session options to services
* Update SessionAutoRenew to SessionRenewDisabled
* Add test for service constructor defaults
* Update changelog
Until https://github.com/influxdata/flux/issues/1215 is fixed, the query
writer must make sure to always place a range directly after a from
operation. Otherwise the query will fail planning.
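As a sketch of that constraint (hypothetical helper, with the Flux embedded as
a Go string), the writer always emits range() immediately after from():

    package querywriter

    import "fmt"

    // buildQuery keeps range() directly after from() so planning succeeds;
    // anything between them trips the issue linked above.
    func buildQuery(bucket, start string) string {
        return fmt.Sprintf(`from(bucket: %q) |> range(start: %s)`, bucket, start)
    }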
feat(http): add prometheus counters for tracking write/query usage
feat(http/metric): add metric recoder for recording http metrics
feat(prometheus): implement metric.Recorder for prometheus metrics
fix(prometheus): remove erroneous fmt.Printlns
feat(http): add prometheus registry to API backend
This was done because exposing prometheus metrics to a higher level was quite
difficult. It was much simpler to simply pass the registry down to anything
that needs it (see the sketch after this group of commit notes).
feat(cmd/influxd/launcher): pass prom registry in on api backend
feat(http): collect metrics for write and query endpoints
This was much messier than I would have preferred. Future work is
outlined in TODOs.
review(influxdb): rename metric.Metric to metric.Event
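A rough sketch of the pass-the-registry-down pattern mentioned above (this is
not the actual APIBackend wiring; the metric names are illustrative):

    package metric

    import "github.com/prometheus/client_golang/prometheus"

    // WriteMetrics registers its counters on whatever registry it is given,
    // so the caller decides where metrics are exposed.
    type WriteMetrics struct {
        Requests *prometheus.CounterVec
    }

    func NewWriteMetrics(reg prometheus.Registerer) *WriteMetrics {
        m := &WriteMetrics{
            Requests: prometheus.NewCounterVec(prometheus.CounterOpts{
                Namespace: "http",
                Name:      "write_requests_total",
                Help:      "Count of write requests by status.",
            }, []string{"status"}),
        }
        reg.MustRegister(m.Requests)
        return m
    }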
This replaces usages of the spec compiler with the ast compiler and it
removes the error message referencing the spec compiler as an available
input.
It does not remove any of the code using the spec compiler that is
involved in proxying requests, and it does not remove it from the API.
The synchronous executor was missing a call to ResultIterator.Release.
The asynchronous executor wasn't even calling Query.Statistics.
Also add a test that the scheduler records the statistics to the run
log, and that the statistics are visible from the launcher test. The
launcher test is the most likely place to catch if something goes wrong
in the full stack.
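A hedged sketch of the ordering being fixed; the interfaces below are
stand-ins rather than the real flux types:

    package executor

    // resultIterator and query stand in for the flux result iterator and
    // query types, just to show the cleanup and recording order.
    type resultIterator interface{ Release() }

    type statistics struct{ TotalDuration int64 }

    type query interface {
        Results() resultIterator
        Statistics() statistics
    }

    // consumeAndRecord releases the iterator once results are drained, then
    // records the query statistics so they end up in the run log.
    func consumeAndRecord(q query, record func(statistics)) {
        it := q.Results()
        // ... drain results here ...
        it.Release()
        record(q.Statistics())
    }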
This commit consists of several improvements or changes:
* migrate the influxd binary to cobra.Command
* introduce a default run sub-command to start the server (see the sketch below)
* register the run sub-command flags with viper
to maintain compatibility with the existing behavior of automatic
binding of flags to environment variables.
Closes #12602
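A hedged sketch of that command layout (helper names are hypothetical; the
viper env binding is omitted here):

    package main

    import "github.com/spf13/cobra"

    // newRootCommand builds an "influxd" root command whose default action
    // starts the server, plus an explicit "run" sub-command that does the same.
    func newRootCommand(runServer func() error) *cobra.Command {
        run := &cobra.Command{
            Use:   "run",
            Short: "Start the influxd server",
            RunE:  func(*cobra.Command, []string) error { return runServer() },
        }
        root := &cobra.Command{
            Use:  "influxd",
            RunE: run.RunE,
        }
        root.AddCommand(run)
        return root
    }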
feat(influxdb): add generic store for documents
feat(influxdb): support authorizations in document store
feat(influxdb): support orgs in user resource mapping
feat(influxdb): add read-only included field on documents
feat(influxdb): add labels support to documents service
fix(influxdb): rename data field to content on documents
feat(influxdb): add with org id options for document store
feat(http): add templates swagger
feat(influxdb): add documentation to document options
doc(kv): add documentation for kv document store
test(kv): pull document tests in to the testing package
fix(http): fix swagger specification of templates endpoints
Previously the APIBackend understood only a ProxyQueryService, but it needs to
understand that there are two implementations of the ProxyQueryService: one for
handling InfluxQL queries and one for handling Flux queries. The names of the
fields have been updated to make this clear, and the FluxBackend is now
initialized using the FluxService explicitly.
* feat(kv:inmem:bolt): implement user service in a kv
* refactor(kv): use consistent func receiver name
* feat(kv): add initial basic auth service
* refactor(passwords): move auth interface into own file
* refactor(passwords): rename basic auth files to passwords
* refactor(passwords): rename from BasicAuth to Passwords
* refactor(kv): copy bolt user test into kv
Co-authored-by: Michael Desa <mjdesa@gmail.com>
* feat(kv): add inmem testing to kv store
* fix(kv): remove extra user index initialization
* feat(kv): attempt at making errors nice
* fix(http): return not found error if filter is invalid
* fix(http): s/platform/influxdb/ for user service
* fix(http): s/platform/influxdb/ for user service
* feat(kv): initial port of telegraf configs to kv
* feat(kv): first pass at migrating bolt org service to kv
* feat(kv): first pass at bucket service
* feat(kv): first pass at migrating kvlog to kv package
* feat(kv): add resource op logs
* feat(kv): first pass at user resource mapping migration
* feat(kv): add urm usage to bucket and org services
* feat(kv): first pass at kv authz service
* feat(kv): add cascading auth delete for users
* feat(kv): first pass at authorizer.OrganizationService in kv
* feat(cmd/influxd/launcher): use kv services where appropriate
* fix(kv): initialize authorizations
* fix(influxdb): use same buckets while slowly migrating stuff
* fix(kv): make staticcheck pass
* feat(kv): add dashboards to kv
review: make suggestions from pr review
fix: use common bucket names for bolt/kv stores
* test(kv): add complete password test coverage
* chore(kv): fixes for staticcheck
* feat(kv): implement labels generically on kv
* feat(kv): implement macro service
* feat(kv): add source service
* feat(kv): add session service
* feat(kv): add kv secret service
* refactor(kv): update telegraf and urm with error messages
* feat(kv): add lookup service
* feat(kv): add kv onboarding service
* refactor(kv): update telegraf to avoid repetition
* feat(cmd/influxd): use kv lookup service
* feat(kv): add telegraf to lookup service
* feat(cmd/influxd): use kv telegraf service
* feat(kv): initial port of scrapers in bolt to kv
* feat(kv): update scraper error messaging
* feat(cmd/influxd): add kv scraper
* feat(kv): add inmem backend tests
* refactor(kv): copy paste errors
* refactor(kv): add code to password errors
* fix(testing): update error messages for incorrect passwords
* feat(kv:inmem:bolt): implement user service in a kv
* refactor(kv): use consistent func receiver name
* refactor(kv): copy bolt user test into kv
Co-authored-by: Michael Desa <mjdesa@gmail.com>
* feat(kv): add inmem testing to kv store
* fix(kv): remove extra user index initialization
* feat(kv): attempt at making errors nice
* fix(http): return not found error if filter is invalid
* fix(http): s/platform/influxdb/ for user service
* feat(kv): first pass at migrating bolt org service to kv
* feat(kv): first pass at bucket service
* feat(kv): first pass at migrating kvlog to kv package
* feat(kv): add resource op logs
* feat(kv): first pass at user resource mapping migration
* feat(kv): add urm usage to bucket and org services
* feat(kv): first pass at kv authz service
* feat(kv): add cascading auth delete for users
* feat(kv): first pass at authorizer.OrganizationService in kv
* feat(cmd/influxd/launcher): use kv services where appropriate
* feat(kv): add initial basic auth service
* refactor(passwords): move auth interface into own file
* refactor(passwords): rename basic auth files to passwords
* fix(http): s/platform/influxdb/ for user service
* fix(kv): initialize authorizations
* fix(influxdb): use same buckets while slowly migrating stuff
* fix(kv): make staticcheck pass
* feat(kv): add dashboards to kv
review: make suggestions from pr review
fix: use common bucket names for bolt/kv stores
* feat(kv): implement labels generically on kv
* refactor(passwords): rename from BasicAuth to Passwords
* test(kv): add complete password test coverage
* chore(kv): fixes for staticcheck
* feat(kv): implement macro service
* feat(kv): add source service
* feat(kv): add session service
* feat(kv): initial port of telegraf configs to kv
* feat(kv): initial port of scrapers in bolt to kv
* feat(kv): add kv secret service
* refactor(kv): update telegraf and urm with error messages
* feat(kv): add lookup service
* feat(kv): add kv onboarding service
* refactor(kv): update telegraf to avoid repetition
* feat(cmd/influxd): use kv lookup service
* feat(kv): add telegraf to lookup service
* feat(cmd/influxd): use kv telegraf service
* feat(kv): update scraper error messaging
* feat(cmd/influxd): add kv scraper
* feat(kv): add inmem backend tests
* refactor(kv): copy paste errors
* refactor(kv): add code to password errors
* fix(testing): update error messages for incorrect passwords
* feat(http): initial support for flushing all key/values from kv store
* feat(kv): rename macro to variable
* feat(cmd/influxd/launcher): use kv services where appropriate
* refactor(passwords): rename from BasicAuth to Passwords
* feat(kv): implement macro service
* test(ui): introduce cypress
* test(ui): introduce first typescript test
* test(ui/e2e): add ci job
* chore: update gitignore to ignore test outputs
* feat(inmem): in memory influxdb
* test(e2e): adding pinger that checks if influxdb is alive
* hackathon
* hack
* hack
* hack
* hack
* Revert "feat(inmem): in memory influxdb"
This reverts commit 30ddf032003e704643b07ce80df61c3299ea7295.
* hack
* hack
* hack
* hack
* hack
* hack
* hack
* hack
* hack
* hack
* hack
* hack
* hack
* chore: lint ignore node_modules
* hack
* hack
* hack
* add user and flush
* hack
* remove unused vars
* hack
* hack
* ci(circle): prefix e2e artifacts
* change test to testid
* update cypress
* moar testid
* fix npm warnings
* remove absolute path
* chore(ci): remove /home/circleci proto mkdir hack
* wip: crud resources e2e
* fix(inmem): use inmem kv store services
* test(dashboard): add first dashboard crud tests
* hack
* undo hack
* fix: use response from setup for orgID
* chore: wip
* add convenience getByTitle function
* test(e2e): ui can create orgs
* test(e2e): add test for org deletion and update
* test(e2e): introduce task creation test
* test(e2e): create and update of buckets on org view
* chore: move types to declaration file
* chore: use route fixture in dashboard tests
* chore(ci): hack back
* test(ui): update snapshots
* chore: package-lock
* chore: remove macros
* fix: launcher rebase issues
* fix: compile errors
* fix: compile errors
* feat(cmd/influxdb): add explicit testing, asset-path, and store flags
Co-authored-by: Andrew Watkins <watts@influxdb.com>
* fix(cmd/influxd): set default HTTP handler and flags
Co-authored-by: Andrew Watkins <watts@influxdb.com>
* build(Makefile): add run-e2e and PHONY
* feat(kv:inmem:bolt): implement user service in a kv
* refactor(kv): use consistent func receiver name
* feat(kv): add initial basic auth service
* refactor(passwords): move auth interface into own file
* refactor(passwords): rename basic auth files to passwords
* refactor(passwords): rename from BasicAuth to Passwords
* refactor(kv): copy bolt user test into kv
Co-authored-by: Michael Desa <mjdesa@gmail.com>
* feat(kv): add inmem testing to kv store
* fix(kv): remove extra user index initialization
* feat(kv): attempt at making errors nice
* fix(http): return not found error if filter is invalid
* fix(http): s/platform/influxdb/ for user service
* fix(http): s/platform/influxdb/ for user service
* feat(kv): initial port of telegraf configs to kv
* feat(kv): initial port of scrapers in bolt to kv
* feat(kv): first pass at migrating bolt org service to kv
* feat(kv): first pass at bucket service
* feat(kv): first pass at migrating kvlog to kv package
* feat(kv): add resource op logs
* feat(kv): first pass at user resource mapping migration
* feat(kv): add urm usage to bucket and org services
* feat(kv): first pass at kv authz service
* feat(kv): add cascading auth delete for users
* feat(kv): first pass at authorizer.OrganizationService in kv
* feat(cmd/influxd/launcher): use kv services where appropriate
* fix(kv): initialize authorizations
* fix(influxdb): use same buckets while slowly migrating stuff
* fix(kv): make staticcheck pass
* feat(kv): add dashboards to kv
review: make suggestions from pr review
fix: use common bucket names for bolt/kv stores
* test(kv): add complete password test coverage
* chore(kv): fixes for staticcheck
* feat(kv): implement labels generically on kv
* feat(kv): implement macro service
* feat(kv): add source service
* feat(kv): add session service
* feat(kv): add kv secret service
* refactor(kv): update telegraf and urm with error messages
* feat(kv): add lookup service
* feat(kv): add kv onboarding service
* refactor(kv): update telegraf to avoid repetition
* feat(cmd/influxd): use kv lookup service
* feat(kv): add telegraf to lookup service
* feat(cmd/influxd): use kv telegraf service
* feat(kv): update scraper error messaging
* feat(cmd/influxd): add kv scraper
* feat(kv): add inmem backend tests
* refactor(kv): copy paste errors
* refactor(kv): add code to password errors
* fix(testing): update error messages for incorrect passwords
* feat(kv): rename macro to variable
* refactor(kv): auth/bucket/org/user unique checks return errors now
* feat(inmem): add way to get all bucket names from store
* feat(inmem): Buckets to return slice of bytes rather than strings
* feat(inmem): add locks around Buckets to avoid races
* feat(cmd/influx): check for unauthorized error in wrapCheckSetup
* chore(e2e): add video and screenshot artifacts to gitignore
* docs(ci): add build instructions for e2e tests
* feat(kv): add id lookup for authorized resources
Task ID is now a required value on run and log filters. It was
effectively required by all implementations before anyway, so now those
types reflect that requirement.
Organization ID was removed from those same filters. The TaskService
looks up the organization ID via the task in cases where we need it at a
lower layer.
Immediately before the executor calls out to the query service, the
executor loads the authorizer associated with the task, and associates
that authorizer with the context used to execute the query.
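A sketch of the idea (hypothetical helper, not the executor's real code):
attach the task's authorizer to the context right before calling the query
service, so downstream permission checks see the task's own credentials:

    package executor

    import "context"

    // authorizer is a stand-in for the platform Authorizer interface.
    type authorizer interface{ Kind() string }

    type authorizerKey struct{}

    // withAuthorizer attaches the task's authorizer to the context that will
    // be handed to the query service.
    func withAuthorizer(ctx context.Context, a authorizer) context.Context {
        return context.WithValue(ctx, authorizerKey{}, a)
    }

    // authorizerFromContext pulls it back out on the query side.
    func authorizerFromContext(ctx context.Context) (authorizer, bool) {
        a, ok := ctx.Value(authorizerKey{}).(authorizer)
        return a, ok
    }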
Accept token when creating or updating a task, but only report back the
authorization ID.
This means the executor and the platform adapter are now both aware of
an Authorization Service.
With the ongoing authorization work, creation arguments will differ from
what's returned on reads. More specifically, creation will accept a
token, but reads will report back a token ID.
This refactor facilitates that authorization work, and also brings the
code closer to the swagger definition, for the TaskCreateRequest type in
particular.
Previously, scrapers would scrape the target 10 times. This was
because each scraper subscriber was not put into a queue group.
I've added the queue group "metrics" and now the subscribers
will only scrape the target once.
Additionally, I moved to using the nats memory store instead of the nats file
store. We don't need durability for scraper runs across restarts.
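The queue group mechanics look roughly like this with the nats Go client (the
subject name and handler are illustrative):

    package scraper

    import "github.com/nats-io/nats.go"

    // subscribe joins the "metrics" queue group; NATS delivers each message to
    // only one member of the group, so a target is scraped once rather than
    // once per subscriber.
    func subscribe(nc *nats.Conn, handle func(*nats.Msg)) (*nats.Subscription, error) {
        return nc.QueueSubscribe("scraper.requests", "metrics", handle)
    }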
We used go-bindata to embed all JSON files in proto to be shipped
with our released influxdb.
We are committing the bin_gen.go so that it simplifies building
via make and go build.
If you would like to add another proto JSON file, run make
to generate a new bin_gen.go.
Co-Authored-By: Deniz Kusefoglu <deniz@influxdata.com>
feat(bolt): add function to find a resources organization id
rename platform to influxdb
Co-authored-by: Leonardo Di Donato <leodidonato@gmail.com>
Co-authored-by: Michael Desa <mjdesa@gmail.com>
fix(bolt): rename FindResoureOrganization to FindResoureOrganizationID
feat(authorizer): add authorized user resource mapping service
Co-authored-by: Leonardo Di Donato <leodidonato@gmail.com>
Co-authored-by: Michael Desa <mjdesa@gmail.com>
feat(influxdb): wire up authorized user resource mapping
Co-authored-by: Leonardo Di Donato <leodidonato@gmail.com>
Co-authored-by: Michael Desa <mjdesa@gmail.com>
fix(authorizer): remove unused field from tests
Co-authored-by: Leonardo Di Donato <leodidonato@gmail.com>
Co-authored-by: Michael Desa <mjdesa@gmail.com>
I did this with a dumb editor macro, so some comments changed too.
Also rename root package from platform to influxdb.
In the interest of minimizing risk, anyone importing the root package has
now aliased it to "platform" so that no changes beyond imports were
necessary in those files.
Lastly, replace the old platform module with the local path /dev/null so that
nobody can accidentally reintroduce a platform dependency while
migrating platform code to influxdb.
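In go.mod terms the replace directive looks roughly like this (the exact line
is an assumption based on the description above):

    module github.com/influxdata/influxdb

    replace github.com/influxdata/platform => /dev/null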
feat(http): add http handler for proto service
feat(mock): add mock proto service
test(http): add proto handler tests
fix(platform): add view as option when adding a cell
feat(platform): add dashboard to proto struct
feat(fs): add filesystem implementation of proto
feat(http): add protos endpoints to api handler
feat(cmd/influxd/launcher): add protos path to server
doc(http): add protos to swagger
test(cmd/influxd/launcher): add --protos-path to launcher tests
fix(fs): remove unused args from test
fix(http): use platform.Error where appropriate
feat(platform): add functional options for platform errors
fix(testing): set dashboard ids properly in dashboard tests
feat(bolt): add dashboard specific views
fix(bolt): delete view when cell is removed or dashboard is deleted
all tests use a unique bucket based on the test file name. copied all tests over from flux repo
the tests are currently disabled due to engine consistency issues: https://github.com/influxdata/flux/issues/613
fix(http): add members/secrets/labels links on org response
fix(http:cmd/influxd): use secret service in api backend
fix(bolt): return empty list if there are no secrets for an org
chore(vault): add description of vault usage
* refactor(cmd/influxd): move driver code for influxd main package to sub-package so it can be reused.
* chore(query/influxql): moved query_test.go and requisite files to the influxql dir to be closer to the code it tests. (#2013)
This commit moves the `Main.cancel()` execution to the `main_test.go`
file so it's only executed for tests. This was interfering with the
shutdown process on the regular `influxd` binary.
Previously, the WithTicker option would call TickScheduler.Tick every
time the underlying time.Ticker sent a time on its channel. We used a 1s
period, which meant that in the worst case we would see a tick about
999ms after the second rollover.
This change increases the underlying time.Ticker frequency, but only
calls TickScheduler.Tick after a second rolls over. Since we now use a
tick frequency of 100ms, during normal operation, TickScheduler.Tick
will be called within 0.1s after the second rolls over.
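A sketch of the rollover check (hypothetical scheduler hook, not the exact
implementation): tick the underlying ticker every 100ms, but only call Tick
when the wall clock has crossed into a new second:

    package scheduler

    import "time"

    // runTicker fires tick at most once per second, within ~0.1s of the
    // second rolling over, by polling a 100ms ticker.
    func runTicker(done <-chan struct{}, tick func(now time.Time)) {
        t := time.NewTicker(100 * time.Millisecond)
        defer t.Stop()

        last := time.Now().Unix()
        for {
            select {
            case <-done:
                return
            case now := <-t.C:
                if s := now.Unix(); s > last {
                    last = s
                    tick(now)
                }
            }
        }
    }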