* feat(storage): first array cursor
* feat: add first and last to rpc messages
* test(launcher): push down group first and group last
* feat(storage): window first array cursor
* test(launcher): push down bare first and bare last
* feat(storage): add capabilities for group first and group last
* refactor: rename first to limit
* refactor: make zero value for every period meaningful
* refactor: standardize launcher pushdown tests
This enables a new rule that pushes down the full `aggregateWindow`
query, including the `duplicate` and `window(every: inf)` calls that
recombine the tables. When the full rule is used, the output is not
split into one table per window; it remains a single table. The
start or stop column is renamed to `_time`, and `_start` and `_stop`
hold the boundaries of the query.
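For illustration, here is a minimal Go sketch (stdlib only; the host, bucket, org, and token are placeholders) that sends the expanded form of `aggregateWindow` to the standard `/api/v2/query` endpoint. This is the shape, ending in `duplicate` and `window(every: inf)`, that the new rule can push down:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// The expanded form of aggregateWindow(every: 1m, fn: min). The trailing
	// duplicate + window(every: inf) is what recombines the windowed tables
	// into a single table.
	flux := `from(bucket: "my-bucket")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "cpu")
  |> window(every: 1m)
  |> min()
  |> duplicate(column: "_stop", as: "_time")
  |> window(every: inf)`

	req, err := http.NewRequest("POST",
		"http://localhost:9999/api/v2/query?org=my-org", strings.NewReader(flux))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Token my-token")
	req.Header.Set("Content-Type", "application/vnd.flux")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```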
This adds a launcher test for the read window aggregate pushdown to
verify that the pushdown happens when a query with the appropriate
pattern is sent, that the output is correct, and that the metric
signaling the pushdown is incremented.
This bug surfaces when you do not provide a stack ID to the apply call:
the new stack is created, but the resources created are not associated
with it. This remedies that issue.
This exports all resources associated with a stack with the same
metadata.name fields that the original application used. It can be used
to snapshot the current state of the stack, for source control or other
purposes.
closes: #18271
Switch to the new user handler. We have been using the tenant backend for some
time now and just need to switch over to using tenant front to back.
We have reached the stage where the new tenant service is being used and
is stable, but we want to get it into more hands and make it the default service.
One thing to note here: we are deleting the default value on the host
flag when it is registered. The config is the fallback and has the default
value set. If the host flag had a default, determining whether the user
set it would be ambiguous. We can't have that.
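As a minimal sketch of that principle, assuming the stdlib `flag` package rather than the actual influxd flag wiring: `flag.Visit` walks only the flags the user actually set, so leaving the flag's own default empty keeps the config fallback unambiguous.

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// No default on the flag itself; the config value is the fallback.
	host := flag.String("host", "", "HTTP address of influxd")
	flag.Parse()

	// flag.Visit walks only flags that were set on the command line, so an
	// empty default keeps "unset" distinguishable from "set to the default".
	userSet := false
	flag.Visit(func(f *flag.Flag) {
		if f.Name == "host" {
			userSet = true
		}
	})

	if !userSet {
		*host = "http://localhost:9999" // fallback from config (illustrative)
	}
	fmt.Println("host:", *host)
}
```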
closes: #17812
Notes on this commit: it was grueling ;-(. The task API is not a friendly
API to consume. There are a lot of non-obvious things going on, and almost
every one of them tripped me up. Things of note:
* the http.TaskService does not satisfy the influxdb.TaskService,
  making it impossible to use as a dependency if the tasks service gets
  split out
* the APIs for create and update do not share common types. For example:
  creating a task takes every field as a string, but the update takes an
  options.Duration type. A step further and you'll notice that create does
  not need an option to be provided, but the update does (see the sketch
  after this list). It's jarring trying to understand the indirection here.
  I struggled mightily trying to make sense of it all with the indirection
  and differing types. It made for a very difficult task (no pun intended)
  when it should have been trivial. There is an opportunity here to fix
  these up, make this API more uniform, and remove unnecessary complexity
  like the options type.
* Nested IDs that get marshaled are no bueno when you want to marshal a task
  that does not have an ID in it, for the user, org, or self IDs. It's a
  challenge just to do that.
* There are lots of places in the kv.Task portion where we hit errors and
  log them, and others where we return. It isn't clear what is happening.
  The kv implementation is also very procedural, and I found myself bouncing
  around like a ping-pong ball trying to make heads or tails of it.
* There is auth buried deep inside the kv.Task implementation that kept
  throwing me off because it logged errors instead of warnings. I assume
  (not sure if I'm correct on this) that what is being logged is considered
  inconsequential to the task working. I hit lots of errors from the auth
  buried in there and hadn't a clue what to make of them...
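To make the create/update asymmetry above concrete, here is a hypothetical sketch; the type and field names are illustrative stand-ins, not the actual influxdb task types:

```go
package task

// Duration stands in for influxdb's options.Duration, which is a parsed
// value rather than a raw string.
type Duration string

// Create takes every field as a plain string and needs no options value...
type TaskCreate struct {
	Flux  string
	Every string // e.g. "1h", parsed server side
}

// ...while Update takes the parsed option type and requires the options to
// be provided, so the two halves of the same API share no common types.
type TaskUpdate struct {
	Flux  *string
	Every Duration
}
```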
Leaving these notes here as a look back at why working with tasks is so
difficult. This API can improve dramatically. I spent 5x as long figuring
out how to use the task API, in procedural calls, as I did writing the
business logic that consumes it... that's a scary realization ;-(
references: #17434
Also drops a test that has been skipped for over a year. I tried
unskipping it, but it now fails for all sorts of reasons, even without the
race flag enabled.
The pkger.ValidSkipParseError option allows our server to be the one to validate
that the pkg is accurate. If a user has an older version of the UI and our cloud
gets updated with new validation rules, they'll get immediate access to that
change without having to roll their CLI build.
This also fixes an issue where we swallow initial errors when the check setup
middleware fails.
Renaming Generate in anticipation of a new method that will onboard
users other than the initial user. The intent is to simplify multi-user
setups.
Co-authored-by: Chris Goller <goller@gmail.com>
This is applied to all the resources that have had the spec.name field applied;
all resources that have not will work the same way as before this commit.
This work is the first step toward making ALL resources unique by metadata.name.
The displayName is a means to rename an existing resource. This is all in
support of pkger idempotency: the metadata.name field will be the unique
identifier within a pkg.
* refactor(storage): move type ByTagKey to the only package that uses it
* refactor(tsdb): use types in tsdb/cursors
* refactor(tsdb): remove unused type SeriesIDElems
* refactor(tsdb): inline only use of tsdb.ReadAllSeriesIDIterator
* refactor(tsdb): move series file to its own package
* refactor(storage): remove platform->influxdb aliases
* fix: allow authorized label service to be called indirectly
17071 exists because pkger loads all service resources as authorized on
start, resulting in them all being authorized when referenced indirectly
(not hit directly via the API by a consumer). Rather than restructure pkger to
only authorize direct services, this allows proper indirect auth to
labels (the cause of 17071).
* Add orgService to tests
* Add resource types to find orgID from
* refactor(storage): add readSource field accessors
* refactor(storage): remove unused limitSeriesCursor
* refactor(storage): export IndexSeriesCursor
This allows IDPE to use the same implementation, rather than duplicate
code. Also copied unit tests from IDPE.
* chore: go fmt
The tasks subsystem will now use the flux language service to parse and
evaluate flux instead of directly interacting with the parser or
runtime. This helps break the dependency on the libflux parser for the
base influxdb package.
This includes the task notification packages, which were changed at the
same time.
This updates the repl to support the new influxdb source and use it by
default. It automatically sets some default variables for the influxdb
source to make the CLI easier to use: the organization, the token, and
the host. The organization is set to the one specified in the repl
command, the token is filled in with the user-installed one, and the
host defaults to localhost but changes to whichever one was specified
on the CLI.
In addition, this replaces the http client with one that sets
insecure skip verify if the `--skip-verify` flag is used.
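Here is a minimal sketch of the `--skip-verify` behavior, not the actual repl wiring; the host and port are placeholders:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

// newHTTPClient mirrors the --skip-verify behavior: when set, the client's
// TLS transport skips certificate verification (e.g. for self-signed certs).
func newHTTPClient(skipVerify bool) *http.Client {
	if !skipVerify {
		return http.DefaultClient
	}
	return &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
}

func main() {
	c := newHTTPClient(true)
	resp, err := c.Get("https://localhost:9999/health") // /health is the standard probe
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```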
* fix(storage): simplify storage/seriesCursor
storage/seriesCursor releases series file and TSI references sooner.
Remove unhelpful request object, inherited from 1.x
* chore(storage): replace SeriesCursor interface with sole implementation
The repl no longer takes in a querier; it runs everything locally.
The spec interface is no longer used and will be removed from the http
endpoint at some point.
* fix(backup): handle backup with no credentials file
Backups and restores should work whether or not the original installation uses
a credentials file and whether or not the backup contains a credentials file.
Prior to this change, InfluxQL requests were sent to the same backend as Flux queries.
That MAY not always be the case. Now InfluxQL queries are specifically routed to the
InfluxQLService. In this OSS build, the FluxService and InfluxQLService are the same.
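A rough sketch of that routing, with hypothetical upstream hosts (in this OSS build both would point at the same process):

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical upstreams; in OSS both are the same backend today, but
	// routing them separately lets them diverge.
	fluxURL, _ := url.Parse("http://flux-backend:9999")
	influxqlURL, _ := url.Parse("http://influxql-backend:9999")

	mux := http.NewServeMux()
	mux.Handle("/api/v2/query", httputil.NewSingleHostReverseProxy(fluxURL))
	// 1.x compatibility endpoint: InfluxQL goes to the InfluxQLService.
	mux.Handle("/query", httputil.NewSingleHostReverseProxy(influxqlURL))

	log.Fatal(http.ListenAndServe(":8086", mux))
}
```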
There was an issue where you could call `influx pkg summarize`
and the influx CLI would actually attribute that to the `influx pkg` cmd
and pass summarize as an arg. This removes that ambiguity.
This also provides an interface to mix and match inputs: you can
now provide `-f` flags for files or directories, `-u` flags for URLs, and
use `|` to pipe in a pkg, all at the same time.
This also extends dry run to accept env refs; the refactoring was
to enable that. Being able to dry run with the env ref entries
means we can see the impact of the env ref values before the
application takes place.
This is the last step for pkger to follow the service definition pattern
that is in the works. Some bits from http were moved into kit/transport/http
for reusability. The end result is to hopefully axe the http pkg in favor of
reusable types in kit. Long ways off still...
* feat(backup): `influx backup` creates data backup
* feat(backup): initial restore work
* feat(restore): initial restore impl
Adds a restore tool which does offline restore of data and metadata.
* fix(restore): pr cleanup
* fix(restore): fix data dir creation
* fix(restore): pr cleanup
* chore: amend CHANGELOG
* fix: restore to empty dir fails differently
* feat(backup): backup and restore credentials
Saves the credentials file to backups and restores it from backups.
Additionally adds some logging for errors when fetching backup files.
* fix(restore): add missed commit
* fix(restore): pr cleanup
* fix(restore): fix default credentials restore path
* fix(backup): actually copy the credentials file for the backup
* fix: dirs get 0777, files get 0666
* fix: small review feedback
Co-authored-by: tmgordeeva <tanya@influxdata.com>
This also makes the yaml decoder the default. Too often we end up with
application/octet-stream, which is the default for many different mime types.
This provides a mechanism around that: when the automagical detection fails,
the user can provide the encoding via the CLI.
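A small sketch of the fallback logic; the function and parameter names are illustrative, not the actual pkger code:

```go
package pkger

// encodingFor picks a decoder from the detected mime type. YAML is the
// default, covering application/octet-stream and other catch-all types,
// and an explicit CLI choice always wins over detection.
func encodingFor(detected, userChoice string) string {
	if userChoice != "" {
		return userChoice
	}
	switch detected {
	case "application/json":
		return "json"
	default: // including application/octet-stream
		return "yaml"
	}
}
```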
feat(pkger): export dashboard and variable *ToResource methods
fix(pkger): add empty selected _measurement to builder config
feat(chronograf): add note & note visibility to dashboard cell
The 1.x DashboardCell struct has changed since we brought the code into
the InfluxDB codebase. This allows us to migrate cells that were created
since then.
feat(cmd/chronograf-migrator): add 1.x chronograf migrator tool
feat(chronograf-migrator): add function to transpile queries
fix: update spelling of todo comment pkger/models.go
Co-Authored-By: Deniz Kusefoglu <deniz@influxdata.com>
fix(chronograf): add type to DashboardQuery
The type has evolved since this code was moved over from chronograf.
Previously, we did not have access to flux as a type of query.
feat(chronograf-migrator): transpile influxql query to flux if possible
fix(chronograf): omit fields when empty on old chronograf structs
fix: make linter not mad at me
feat(chronograf-migrator): lowercase variable names
fix(pkger): add empty selected measurement to builder config
chore(chronograf-migrator): add basic readme
chore(pkger): export Variable and Dashboard ToResource methods
fix(chronograf-migrator): move flags out of init call