Merge branch 'master' into feat/use-algo-w
commit 0d6e4e310b
@@ -98,7 +98,7 @@ jobs:
           destination: screenshots
   selenium_accept:
     docker:
-      - image: circleci/node:13.0.1-stretch-browsers
+      - image: circleci/node:lts-stretch-browsers
       - image: quay.io/influxdb/influx:nightly
         command: [--e2e-testing=true]
     steps:
@@ -150,7 +150,7 @@ jobs:
           command: |
             set +e
             cd ui
-            yarn install --frozen-lockfile
+            yarn install
             yarn prettier
           name: "Install Dependencies"
       - run: make ui_client
@@ -186,7 +186,7 @@ jobs:
           command: |
             set +e
             cd ui
-            yarn install --frozen-lockfile
+            yarn install
           name: "Install Dependencies"
       - run: make ui_client
       - run:
@@ -427,6 +427,16 @@ workflows:
   e2e:
     jobs:
      - e2e
+  hourly-e2e:
+    triggers:
+      - schedule:
+          cron: '0 * * * *'
+          filters:
+            branches:
+              only:
+                - master
+    jobs:
+      - e2e
   nightly:
     triggers:
       - schedule:
@@ -4,6 +4,13 @@
 # Here is information about how to configure this file:
 # https://help.github.com/en/articles/about-code-owners

-# API and monitoring team will help to review swagger.yml changes.
+# monitoring team will help to review swagger.yml changes.
 #
-http/swagger.yml @influxdata/api @influxdata/monitoring-team
+http/swagger.yml @influxdata/monitoring-team

+# dev tools
+/pkger/ @influxdata/tools-team

 # Storage code
 /storage/ @influxdata/storage-team-assigner
 /tsdb/ @influxdata/storage-team-assigner
CHANGELOG.md (66 changes)
@@ -1,8 +1,72 @@
-## Undecided
+## v2.0.0-beta.6 [unreleased]
+
+### Features
+
+1. [17085](https://github.com/influxdata/influxdb/pull/17085): Clicking on bucket name takes user to Data Explorer with bucket selected
+1. [17095](https://github.com/influxdata/influxdb/pull/17095): Extend pkger dashboards with table view support
+1. [17114](https://github.com/influxdata/influxdb/pull/17114): Allow for retention to be provided to influx setup command as a duration
+
+### Bug Fixes
+
+1. [17039](https://github.com/influxdata/influxdb/pull/17039): Fixed issue where tasks are exported for notification rules
+1. [17042](https://github.com/influxdata/influxdb/pull/17042): Fixed issue where tasks are not exported when exporting by org id
+1. [17070](https://github.com/influxdata/influxdb/pull/17070): Fixed issue where tasks with imports in query break in pkger
+1. [17028](https://github.com/influxdata/influxdb/pull/17028): Fixed issue where selecting an aggregate function in the script editor was not adding the function to a new line
+1. [17072](https://github.com/influxdata/influxdb/pull/17072): Fixed issue where creating a variable of type map was piping the incorrect value when map variables were used in queries
+1. [17050](https://github.com/influxdata/influxdb/pull/17050): Added missing user names to auth CLI commands
+1. [17091](https://github.com/influxdata/influxdb/pull/17091): Require Content-Type for query endpoint
+1. [17113](https://github.com/influxdata/influxdb/pull/17113): Disabled group functionality for check query builder
+1. [17120](https://github.com/influxdata/influxdb/pull/17120): Fixed cell configuration error that was popping up when users create a dashboard and accessed the disk usage cell for the first time
+
 ## v2.0.0-beta.5 [2020-02-27]

 ### Features

 1. [16991](https://github.com/influxdata/influxdb/pull/16991): Update Flux functions list for v0.61
 1. [16574](https://github.com/influxdata/influxdb/pull/16574): Add secure flag to session cookie

 ### Bug Fixes

 1. [16919](https://github.com/influxdata/influxdb/pull/16919): Sort dashboards on homepage alphabetically
 1. [16934](https://github.com/influxdata/influxdb/pull/16934): Tokens page now sorts by status
 1. [16931](https://github.com/influxdata/influxdb/pull/16931): Set the default value of tags in a Check
 1. [16935](https://github.com/influxdata/influxdb/pull/16935): Fix sort by variable type
 1. [16973](https://github.com/influxdata/influxdb/pull/16973): Calculate correct stacked line cumulative when lines are different lengths
 1. [17010](https://github.com/influxdata/influxdb/pull/17010): Fixed scrollbar issue where resource cards would overflow the parent container rather than be hidden and scrollable
 1. [16992](https://github.com/influxdata/influxdb/pull/16992): Query Builder now groups on column values, not tag values
 1. [17013](https://github.com/influxdata/influxdb/pull/17013): Scatterplots can once again render the tooltip correctly
 1. [17027](https://github.com/influxdata/influxdb/pull/17027): Drop pkger gauge chart requirement for color threshold type
 1. [17040](https://github.com/influxdata/influxdb/pull/17040): Fixed bug that was preventing the interval status on the dashboard header from refreshing on selections
 1. [16961](https://github.com/influxdata/influxdb/pull/16961): Remove cli confirmation of secret, add an optional parameter of secret value

 ## v2.0.0-beta.4 [2020-02-14]

 ### Features

 1. [16855](https://github.com/influxdata/influxdb/pull/16855): Added labels to buckets in UI
 1. [16842](https://github.com/influxdata/influxdb/pull/16842): Connect monaco editor to Flux LSP server
 1. [16856](https://github.com/influxdata/influxdb/pull/16856): Update Flux to v0.59.6

 ### Bug Fixes

 1. [16852](https://github.com/influxdata/influxdb/pull/16852): Revert for bad indexing of UserResourceMappings and Authorizations
 1. [15911](https://github.com/influxdata/influxdb/pull/15911): Gauge no longer allowed to become too small
 1. [16878](https://github.com/influxdata/influxdb/pull/16878): Fix issue with INFLUX_TOKEN env vars being overridden by default token

 ## v2.0.0-beta.3 [2020-02-11]

 ### Features

 1. [16765](https://github.com/influxdata/influxdb/pull/16765): Extend influx cli pkg command with ability to take multiple files and directories
 1. [16767](https://github.com/influxdata/influxdb/pull/16767): Extend influx cli pkg command with ability to take multiple urls, files, directories, and stdin at the same time
 1. [16786](https://github.com/influxdata/influxdb/pull/16786): influx cli can manage secrets.

 ### Bug Fixes

 1. [16733](https://github.com/influxdata/influxdb/pull/16733): Fix notification rule renaming panics from UI
 1. [16769](https://github.com/influxdata/influxdb/pull/16769): Fix the tooltip for stacked line graphs
 1. [16825](https://github.com/influxdata/influxdb/pull/16825): Fixed false success notification for read-only users creating dashboards
 1. [16822](https://github.com/influxdata/influxdb/pull/16822): Fix issue with pkger/http stack crashing on dupe content type

 ## v2.0.0-beta.2 [2020-01-24]
@@ -148,6 +148,7 @@ To run tests for just the Javascript component use:

 ```bash
 $ make test-js
 ```

 To run tests for just the Go/Rust components use:
@@ -7,7 +7,6 @@ import (
 	"github.com/influxdata/influxdb"
 	platcontext "github.com/influxdata/influxdb/context"
 	"github.com/influxdata/influxdb/kit/tracing"
-	"github.com/influxdata/influxdb/task/backend"
 	"go.uber.org/zap"
 )

@@ -273,7 +272,7 @@ func (ts *taskServiceValidator) RetryRun(ctx context.Context, taskID, runID infl
 		return nil, err
 	}

-	if task.Status != string(backend.TaskActive) {
+	if task.Status != string(influxdb.TaskActive) {
 		return nil, ErrInactiveTask
 	}

@@ -301,7 +300,7 @@ func (ts *taskServiceValidator) ForceRun(ctx context.Context, taskID influxdb.ID
 		return nil, err
 	}

-	if task.Status != string(backend.TaskActive) {
+	if task.Status != string(influxdb.TaskActive) {
 		return nil, ErrInactiveTask
 	}
@@ -13,7 +13,6 @@ import (
 	"github.com/influxdata/influxdb/kv"
 	"github.com/influxdata/influxdb/mock"
 	_ "github.com/influxdata/influxdb/query/builtin"
-	"github.com/influxdata/influxdb/task/backend"
 	"github.com/pkg/errors"
 	"go.uber.org/zap/zaptest"
 )

@@ -54,7 +53,7 @@ func mockTaskService(orgID, taskID, runID influxdb.ID) influxdb.TaskService {
 		ID:             taskID,
 		OrganizationID: orgID,
 		Name:           "cows",
-		Status:         string(backend.TaskActive),
+		Status:         string(influxdb.TaskActive),
 		Flux: `option task = {
 			name: "my_task",
 			every: 1s,
check.go (2 changes)
@@ -60,7 +60,7 @@ type CheckService interface {
 	// FindCheck returns the first check that matches filter.
 	FindCheck(ctx context.Context, filter CheckFilter) (Check, error)

-	// FindChecks returns a list of checks that match filter and the total count of matching checkns.
+	// FindChecks returns a list of checks that match filter and the total count of matching checks.
 	// Additional options provide pagination & sorting.
 	FindChecks(ctx context.Context, filter CheckFilter, opt ...FindOptions) ([]Check, int, error)
@@ -8,9 +8,9 @@ import (
 	"net/http/httptest"
 	"testing"

+	"github.com/influxdata/httprouter"
 	"github.com/influxdata/influxdb/chronograf"
 	"github.com/influxdata/influxdb/chronograf/mocks"
-	"github.com/influxdata/httprouter"
 )

 func TestService_Annotations(t *testing.T) {

@@ -13,9 +13,9 @@ import (
 	"testing"

 	"github.com/google/go-cmp/cmp"
+	"github.com/influxdata/httprouter"
 	"github.com/influxdata/influxdb/chronograf"
 	"github.com/influxdata/influxdb/chronograf/mocks"
-	"github.com/influxdata/httprouter"
 )

 func Test_Cells_CorrectAxis(t *testing.T) {

@@ -8,9 +8,9 @@ import (
 	"net/http/httptest"
 	"testing"

+	"github.com/influxdata/httprouter"
 	"github.com/influxdata/influxdb/chronograf"
 	"github.com/influxdata/influxdb/chronograf/mocks"
-	"github.com/influxdata/httprouter"
 )

 func TestService_GetDatabases(t *testing.T) {

@@ -8,9 +8,9 @@ import (
 	"net/http/httptest"
 	"testing"

+	"github.com/influxdata/httprouter"
 	"github.com/influxdata/influxdb/chronograf"
 	"github.com/influxdata/influxdb/chronograf/mocks"
-	"github.com/influxdata/httprouter"
 )

 func TestService_Influx(t *testing.T) {

@@ -8,9 +8,9 @@ import (
 	"net/http/httptest"
 	"testing"

+	"github.com/influxdata/httprouter"
 	"github.com/influxdata/influxdb/chronograf"
 	"github.com/influxdata/influxdb/chronograf/mocks"
-	"github.com/influxdata/httprouter"
 )

 func TestService_Permissions(t *testing.T) {

@@ -7,9 +7,9 @@ import (
 	"net/http/httptest"
 	"testing"

+	"github.com/influxdata/httprouter"
 	"github.com/influxdata/influxdb/chronograf"
 	"github.com/influxdata/influxdb/chronograf/mocks"
-	"github.com/influxdata/httprouter"
 )

 func TestService_Queries(t *testing.T) {
@@ -10,13 +10,12 @@ import (
 	"github.com/spf13/cobra"
 )

-func cmdAuth() *cobra.Command {
-	cmd := &cobra.Command{
-		Use:     "auth",
-		Aliases: []string{"authorization"},
-		Short:   "Authorization management commands",
-		Run:     seeHelp,
-	}
+func cmdAuth(f *globalFlags, opt genericCLIOpts) *cobra.Command {
+	cmd := opt.newCmd("auth", nil)
+	cmd.Aliases = []string{"authorization"}
+	cmd.Short = "Authorization management commands"
+	cmd.Run = seeHelp

 	cmd.AddCommand(
 		authActiveCmd(),
 		authCreateCmd(),

@@ -67,7 +66,7 @@ func authCreateCmd() *cobra.Command {
 	cmd := &cobra.Command{
 		Use:   "create",
 		Short: "Create authorization",
-		RunE:  wrapCheckSetup(authorizationCreateF),
+		RunE:  checkSetupRunEMiddleware(&flags)(authorizationCreateF),
 	}
 	authCreateFlags.org.register(cmd, false)

@@ -280,9 +279,10 @@ var authorizationFindFlags struct {

 func authFindCmd() *cobra.Command {
 	cmd := &cobra.Command{
-		Use:   "find",
-		Short: "Find authorization",
-		RunE:  wrapCheckSetup(authorizationFindF),
+		Use:     "list",
+		Short:   "List authorizations",
+		Aliases: []string{"find", "ls"},
+		RunE:    checkSetupRunEMiddleware(&flags)(authorizationFindF),
 	}

 	cmd.Flags().StringVarP(&authorizationFindFlags.user, "user", "u", "", "The user")

@@ -314,6 +314,11 @@ func authorizationFindF(cmd *cobra.Command, args []string) error {
 		return err
 	}

+	us, err := newUserService()
+	if err != nil {
+		return err
+	}
+
 	filter := platform.AuthorizationFilter{}
 	if authorizationFindFlags.id != "" {
 		fID, err := platform.IDFromString(authorizationFindFlags.id)

@@ -363,11 +368,16 @@ func authorizationFindF(cmd *cobra.Command, args []string) error {
 		for _, p := range a.Permissions {
 			permissions = append(permissions, p.String())
 		}
+		user, err := us.FindUserByID(context.Background(), a.UserID)
+		if err != nil {
+			return err
+		}

 		w.Write(map[string]interface{}{
 			"ID":          a.ID,
 			"Token":       a.Token,
 			"Status":      a.Status,
+			"User":        user.Name,
 			"UserID":      a.UserID.String(),
 			"Permissions": permissions,
 		})

@@ -386,7 +396,7 @@ func authDeleteCmd() *cobra.Command {
 	cmd := &cobra.Command{
 		Use:   "delete",
 		Short: "Delete authorization",
-		RunE:  wrapCheckSetup(authorizationDeleteF),
+		RunE:  checkSetupRunEMiddleware(&flags)(authorizationDeleteF),
 	}

 	cmd.Flags().StringVarP(&authorizationDeleteFlags.id, "id", "i", "", "The authorization ID (required)")

@@ -452,7 +462,7 @@ func authActiveCmd() *cobra.Command {
 	cmd := &cobra.Command{
 		Use:   "active",
 		Short: "Active authorization",
-		RunE:  wrapCheckSetup(authorizationActiveF),
+		RunE:  checkSetupRunEMiddleware(&flags)(authorizationActiveF),
 	}

 	cmd.Flags().StringVarP(&authorizationActiveFlags.id, "id", "i", "", "The authorization ID (required)")

@@ -467,6 +477,11 @@ func authorizationActiveF(cmd *cobra.Command, args []string) error {
 		return err
 	}

+	us, err := newUserService()
+	if err != nil {
+		return err
+	}
+
 	var id platform.ID
 	if err := id.DecodeFromString(authorizationActiveFlags.id); err != nil {
 		return err

@@ -499,10 +514,16 @@ func authorizationActiveF(cmd *cobra.Command, args []string) error {
 		ps = append(ps, p.String())
 	}

+	user, err := us.FindUserByID(context.Background(), a.UserID)
+	if err != nil {
+		return err
+	}
+
 	w.Write(map[string]interface{}{
 		"ID":          a.ID.String(),
 		"Token":       a.Token,
 		"Status":      a.Status,
+		"User":        user.Name,
 		"UserID":      a.UserID.String(),
 		"Permissions": ps,
 	})

@@ -520,7 +541,7 @@ func authInactiveCmd() *cobra.Command {
 	cmd := &cobra.Command{
 		Use:   "inactive",
 		Short: "Inactive authorization",
-		RunE:  wrapCheckSetup(authorizationInactiveF),
+		RunE:  checkSetupRunEMiddleware(&flags)(authorizationInactiveF),
 	}

 	cmd.Flags().StringVarP(&authorizationInactiveFlags.id, "id", "i", "", "The authorization ID (required)")
@@ -14,18 +14,16 @@ import (
 	"go.uber.org/multierr"
 )

-func cmdBackup() *cobra.Command {
-	cmd := &cobra.Command{
-		Use:   "backup",
-		Short: "Backup the data in InfluxDB",
-		Long: fmt.Sprintf(
-			`Backs up data and meta data for the running InfluxDB instance.
+func cmdBackup(f *globalFlags, opt genericCLIOpts) *cobra.Command {
+	cmd := opt.newCmd("backup", backupF)
+	cmd.Short = "Backup the data in InfluxDB"
+	cmd.Long = fmt.Sprintf(
+		`Backs up data and meta data for the running InfluxDB instance.
 Downloaded files are written to the directory indicated by --path.
 The target directory, and any parent directories, are created automatically.
 Data file have extension .tsm; meta data is written to %s in the same directory.`,
-		bolt.DefaultFilename),
-		RunE: backupF,
-	}
+		bolt.DefaultFilename)

 	opts := flagOpts{
 		{
 			DestP: &backupFlags.Path,
@@ -3,23 +3,24 @@ package main
 import (
 	"context"
 	"fmt"
 	"os"
 	"time"

 	"github.com/influxdata/influxdb"
 	"github.com/influxdata/influxdb/cmd/influx/internal"
 	"github.com/influxdata/influxdb/http"
 	"github.com/spf13/cobra"
 )

 type bucketSVCsFn func() (influxdb.BucketService, influxdb.OrganizationService, error)

-func cmdBucket(opts ...genericCLIOptFn) *cobra.Command {
-	return newCmdBucketBuilder(newBucketSVCs, opts...).cmd()
+func cmdBucket(f *globalFlags, opt genericCLIOpts) *cobra.Command {
+	builder := newCmdBucketBuilder(newBucketSVCs, opt)
+	builder.globalFlags = f
+	return builder.cmd()
 }

 type cmdBucketBuilder struct {
 	genericCLIOpts
+	*globalFlags

 	svcFn bucketSVCsFn

@@ -31,17 +32,9 @@ type cmdBucketBuilder struct {
 	retention time.Duration
 }

-func newCmdBucketBuilder(svcsFn bucketSVCsFn, opts ...genericCLIOptFn) *cmdBucketBuilder {
-	opt := genericCLIOpts{
-		in: os.Stdin,
-		w:  os.Stdout,
-	}
-	for _, o := range opts {
-		o(&opt)
-	}
-
+func newCmdBucketBuilder(svcsFn bucketSVCsFn, opts genericCLIOpts) *cmdBucketBuilder {
 	return &cmdBucketBuilder{
-		genericCLIOpts: opt,
+		genericCLIOpts: opts,
 		svcFn:          svcsFn,
 	}
 }

@@ -78,7 +71,7 @@ func (b *cmdBucketBuilder) cmdCreate() *cobra.Command {
 	opts.mustRegister(cmd)

 	cmd.Flags().StringVarP(&b.description, "description", "d", "", "Description of bucket that will be created")
-	cmd.Flags().DurationVarP(&b.retention, "retention", "r", 0, "Duration in nanoseconds data will live in bucket")
+	cmd.Flags().DurationVarP(&b.retention, "retention", "r", 0, "Duration bucket will retain data. 0 is infinite. Default is 0.")
 	b.org.register(cmd, false)

 	return cmd

@@ -108,7 +101,7 @@ func (b *cmdBucketBuilder) cmdCreateRunEFn(*cobra.Command, []string) error {
 		return fmt.Errorf("failed to create bucket: %v", err)
 	}

-	w := internal.NewTabWriter(b.w)
+	w := b.newTabWriter()
 	w.WriteHeaders("ID", "Name", "Retention", "OrganizationID")
 	w.Write(map[string]interface{}{
 		"ID": bkt.ID.String(),

@@ -152,7 +145,7 @@ func (b *cmdBucketBuilder) cmdDeleteRunEFn(cmd *cobra.Command, args []string) er
 		return fmt.Errorf("failed to delete bucket with id %q: %v", id, err)
 	}

-	w := internal.NewTabWriter(b.w)
+	w := b.newTabWriter()
 	w.WriteHeaders("ID", "Name", "Retention", "OrganizationID", "Deleted")
 	w.Write(map[string]interface{}{
 		"ID": bkt.ID.String(),

@@ -167,8 +160,9 @@ func (b *cmdBucketBuilder) cmdDeleteRunEFn(cmd *cobra.Command, args []string) er
 }

 func (b *cmdBucketBuilder) cmdFind() *cobra.Command {
-	cmd := b.newCmd("find", b.cmdFindRunEFn)
-	cmd.Short = "Find buckets"
+	cmd := b.newCmd("list", b.cmdFindRunEFn)
+	cmd.Short = "List buckets"
+	cmd.Aliases = []string{"find", "ls"}

 	opts := flagOpts{
 		{

@@ -225,7 +219,7 @@ func (b *cmdBucketBuilder) cmdFindRunEFn(cmd *cobra.Command, args []string) erro
 		return fmt.Errorf("failed to retrieve buckets: %s", err)
 	}

-	w := internal.NewTabWriter(b.w)
+	w := b.newTabWriter()
 	w.HideHeaders(!b.headers)
 	w.WriteHeaders("ID", "Name", "Retention", "OrganizationID")
 	for _, b := range buckets {

@@ -259,7 +253,7 @@ func (b *cmdBucketBuilder) cmdUpdate() *cobra.Command {
 	cmd.Flags().StringVarP(&b.id, "id", "i", "", "The bucket ID (required)")
 	cmd.Flags().StringVarP(&b.description, "description", "d", "", "Description of bucket that will be created")
 	cmd.MarkFlagRequired("id")
-	cmd.Flags().DurationVarP(&b.retention, "retention", "r", 0, "New duration data will live in bucket")
+	cmd.Flags().DurationVarP(&b.retention, "retention", "r", 0, "Duration bucket will retain data. 0 is infinite. Default is 0.")

 	return cmd
 }

@@ -291,7 +285,7 @@ func (b *cmdBucketBuilder) cmdUpdateRunEFn(cmd *cobra.Command, args []string) er
 		return fmt.Errorf("failed to update bucket: %v", err)
 	}

-	w := internal.NewTabWriter(b.w)
+	w := b.newTabWriter()
 	w.WriteHeaders("ID", "Name", "Retention", "OrganizationID")
 	w.Write(map[string]interface{}{
 		"ID": bkt.ID.String(),
@@ -18,8 +18,6 @@ import (
 )

 func TestCmdBucket(t *testing.T) {
-	setViperOptions()
-
 	orgID := influxdb.ID(9000)

 	fakeSVCFn := func(svc influxdb.BucketService) bucketSVCsFn {

@@ -94,7 +92,7 @@ func TestCmdBucket(t *testing.T) {
 		},
 	}

-	cmdFn := func(expectedBkt influxdb.Bucket) *cobra.Command {
+	cmdFn := func(expectedBkt influxdb.Bucket) func(*globalFlags, genericCLIOpts) *cobra.Command {
 		svc := mock.NewBucketService()
 		svc.CreateBucketFn = func(ctx context.Context, bucket *influxdb.Bucket) error {
 			if expectedBkt != *bucket {

@@ -103,18 +101,22 @@ func TestCmdBucket(t *testing.T) {
 			return nil
 		}

-		builder := newCmdBucketBuilder(fakeSVCFn(svc), out(ioutil.Discard))
-		cmd := builder.cmdCreate()
-		cmd.RunE = builder.cmdCreateRunEFn
-		return cmd
+		return func(g *globalFlags, opt genericCLIOpts) *cobra.Command {
+			return newCmdBucketBuilder(fakeSVCFn(svc), opt).cmd()
+		}
 	}

 	for _, tt := range tests {
 		fn := func(t *testing.T) {
 			defer addEnvVars(t, tt.envVars)()

-			cmd := cmdFn(tt.expectedBucket)
-			cmd.SetArgs(tt.flags)
+			builder := newInfluxCmdBuilder(
+				in(new(bytes.Buffer)),
+				out(ioutil.Discard),
+			)
+			cmd := builder.cmd(cmdFn(tt.expectedBucket))
+			cmd.SetArgs(append([]string{"bucket", "create"}, tt.flags...))

 			require.NoError(t, cmd.Execute())
 		}

@@ -140,7 +142,7 @@ func TestCmdBucket(t *testing.T) {
 		},
 	}

-	cmdFn := func(expectedID influxdb.ID) *cobra.Command {
+	cmdFn := func(expectedID influxdb.ID) func(*globalFlags, genericCLIOpts) *cobra.Command {
 		svc := mock.NewBucketService()
 		svc.FindBucketByIDFn = func(ctx context.Context, id influxdb.ID) (*influxdb.Bucket, error) {
 			return &influxdb.Bucket{ID: id}, nil

@@ -152,17 +154,22 @@ func TestCmdBucket(t *testing.T) {
 			return nil
 		}

-		builder := newCmdBucketBuilder(fakeSVCFn(svc), out(ioutil.Discard))
-		cmd := builder.cmdDelete()
-		cmd.RunE = builder.cmdDeleteRunEFn
-		return cmd
+		return func(g *globalFlags, opt genericCLIOpts) *cobra.Command {
+			return newCmdBucketBuilder(fakeSVCFn(svc), opt).cmd()
+		}
 	}

 	for _, tt := range tests {
 		fn := func(t *testing.T) {
-			cmd := cmdFn(tt.expectedID)
+			builder := newInfluxCmdBuilder(
+				in(new(bytes.Buffer)),
+				out(ioutil.Discard),
+			)
+
+			cmd := builder.cmd(cmdFn(tt.expectedID))
 			idFlag := tt.flag + tt.expectedID.String()
-			cmd.SetArgs([]string{idFlag})
+			cmd.SetArgs([]string{"bucket", "delete", idFlag})

 			require.NoError(t, cmd.Execute())
 		}

@@ -170,7 +177,7 @@ func TestCmdBucket(t *testing.T) {
 		}
 	})

-	t.Run("find", func(t *testing.T) {
+	t.Run("list", func(t *testing.T) {
 		type called struct {
 			name string
 			id   influxdb.ID

@@ -182,6 +189,7 @@ func TestCmdBucket(t *testing.T) {
 			name     string
 			expected called
 			flags    []string
+			command  string
 			envVars  map[string]string
 		}{
 			{

@@ -243,9 +251,23 @@ func TestCmdBucket(t *testing.T) {
 				flags:    []string{"-i=" + influxdb.ID(1).String()},
 				expected: called{orgID: 2, name: "name1", id: 1},
 			},
+			{
+				name:     "ls alias",
+				command:  "ls",
+				envVars:  envVarsZeroMap,
+				flags:    []string{"--org-id=" + influxdb.ID(3).String()},
+				expected: called{orgID: 3},
+			},
+			{
+				name:     "find alias",
+				command:  "find",
+				envVars:  envVarsZeroMap,
+				flags:    []string{"--org-id=" + influxdb.ID(3).String()},
+				expected: called{orgID: 3},
+			},
 		}

-		cmdFn := func() (*cobra.Command, *called) {
+		cmdFn := func() (func(*globalFlags, genericCLIOpts) *cobra.Command, *called) {
 			calls := new(called)

 			svc := mock.NewBucketService()

@@ -265,18 +287,28 @@ func TestCmdBucket(t *testing.T) {
 				return nil, 0, nil
 			}

-			builder := newCmdBucketBuilder(fakeSVCFn(svc), in(new(bytes.Buffer)), out(ioutil.Discard))
-			cmd := builder.cmdFind()
-			cmd.RunE = builder.cmdFindRunEFn
-			return cmd, calls
+			return func(g *globalFlags, opt genericCLIOpts) *cobra.Command {
+				return newCmdBucketBuilder(fakeSVCFn(svc), opt).cmd()
+			}, calls
 		}

 		for _, tt := range tests {
 			fn := func(t *testing.T) {
 				defer addEnvVars(t, tt.envVars)()

-				cmd, calls := cmdFn()
-				cmd.SetArgs(tt.flags)
+				builder := newInfluxCmdBuilder(
+					in(new(bytes.Buffer)),
+					out(ioutil.Discard),
+				)
+
+				cmdFn, calls := cmdFn()
+				cmd := builder.cmd(cmdFn)
+
+				if tt.command == "" {
+					tt.command = "list"
+				}
+
+				cmd.SetArgs(append([]string{"bucket", tt.command}, tt.flags...))

 				require.NoError(t, cmd.Execute())
 				assert.Equal(t, tt.expected, *calls)

@@ -347,7 +379,7 @@ func TestCmdBucket(t *testing.T) {
 			},
 		}

-		cmdFn := func(expectedUpdate influxdb.BucketUpdate) *cobra.Command {
+		cmdFn := func(expectedUpdate influxdb.BucketUpdate) func(*globalFlags, genericCLIOpts) *cobra.Command {
 			svc := mock.NewBucketService()
 			svc.UpdateBucketFn = func(ctx context.Context, id influxdb.ID, upd influxdb.BucketUpdate) (*influxdb.Bucket, error) {
 				if id != 3 {

@@ -359,18 +391,23 @@ func TestCmdBucket(t *testing.T) {
 				return &influxdb.Bucket{}, nil
 			}

-			builder := newCmdBucketBuilder(fakeSVCFn(svc), out(ioutil.Discard))
-			cmd := builder.cmdUpdate()
-			cmd.RunE = builder.cmdUpdateRunEFn
-			return cmd
+			return func(g *globalFlags, opt genericCLIOpts) *cobra.Command {
+				return newCmdBucketBuilder(fakeSVCFn(svc), opt).cmd()
+			}
 		}

 		for _, tt := range tests {
 			fn := func(t *testing.T) {
 				defer addEnvVars(t, tt.envVars)()

-				cmd := cmdFn(tt.expected)
-				cmd.SetArgs(tt.flags)
+				builder := newInfluxCmdBuilder(
+					in(new(bytes.Buffer)),
+					out(ioutil.Discard),
+				)
+
+				cmd := builder.cmd(cmdFn(tt.expected))
+
+				cmd.SetArgs(append([]string{"bucket", "update"}, tt.flags...))
 				require.NoError(t, cmd.Execute())
 			}
@@ -11,14 +11,11 @@ import (

 var deleteFlags http.DeleteRequest

-func cmdDelete() *cobra.Command {
-	cmd := &cobra.Command{
-		Use:   "delete points from an influxDB bucket",
-		Short: "Delete points from influxDB",
-		Long: `Delete points from influxDB, by specify start, end time
-			and a sql like predicate string.`,
-		RunE: wrapCheckSetup(fluxDeleteF),
-	}
+func cmdDelete(f *globalFlags, opt genericCLIOpts) *cobra.Command {
+	cmd := opt.newCmd("delete", fluxDeleteF)
+	cmd.Short = "Delete points from influxDB"
+	cmd.Long = `Delete points from influxDB, by specify start, end time
+		and a sql like predicate string.`

 	opts := flagOpts{
 		{
@@ -9,30 +9,35 @@ import (
 	platform "github.com/influxdata/influxdb"
 )

-type tabWriter struct {
+// TabWriter wraps tab writer headers logic.
+type TabWriter struct {
 	writer      *tabwriter.Writer
 	headers     []string
 	hideHeaders bool
 }

-func NewTabWriter(w io.Writer) *tabWriter {
-	return &tabWriter{
+// NewTabWriter creates a new tab writer.
+func NewTabWriter(w io.Writer) *TabWriter {
+	return &TabWriter{
 		writer: tabwriter.NewWriter(w, 0, 8, 1, '\t', 0),
 	}
 }

-func (w *tabWriter) HideHeaders(b bool) {
+// HideHeaders will set the hideHeaders flag.
+func (w *TabWriter) HideHeaders(b bool) {
 	w.hideHeaders = b
 }

-func (w *tabWriter) WriteHeaders(h ...string) {
+// WriteHeaders will write headers.
+func (w *TabWriter) WriteHeaders(h ...string) {
 	w.headers = h
 	if !w.hideHeaders {
 		fmt.Fprintln(w.writer, strings.Join(h, "\t"))
 	}
 }

-func (w *tabWriter) Write(m map[string]interface{}) {
+// Write will write the map into embed tab writer.
+func (w *TabWriter) Write(m map[string]interface{}) {
 	body := make([]interface{}, len(w.headers))
 	types := make([]string, len(w.headers))
 	for i, h := range w.headers {

@@ -45,7 +50,11 @@ func (w *tabWriter) Write(m map[string]interface{}) {
 	fmt.Fprintf(w.writer, formatString+"\n", body...)
 }

-func (w *tabWriter) Flush() {
+// Flush should be called after the last call to Write to ensure
+// that any data buffered in the Writer is written to output. Any
+// incomplete escape sequence at the end is considered
+// complete for formatting purposes.
+func (w *TabWriter) Flush() {
 	w.writer.Flush()
 }
@@ -8,6 +8,7 @@ import (
	"os"
	"path/filepath"
	"strings"
	"sync"

	"github.com/influxdata/influxdb"
	"github.com/influxdata/influxdb/bolt"

@@ -51,7 +52,9 @@ func newHTTPClient() (*httpc.Client, error) {
}

type (
	runEWrapFn func(fn func(*cobra.Command, []string) error) func(*cobra.Command, []string) error
	cobraRunEFn func(cmd *cobra.Command, args []string) error

	cobraRuneEMiddleware func(fn cobraRunEFn) cobraRunEFn

	genericCLIOptFn func(*genericCLIOpts)
)

@@ -59,11 +62,12 @@
type genericCLIOpts struct {
	in io.Reader
	w  io.Writer
	runEWrapFn runEWrapFn
	runEWrapFn cobraRuneEMiddleware
}

func (o genericCLIOpts) newCmd(use string, runE func(*cobra.Command, []string) error) *cobra.Command {
	cmd := &cobra.Command{
		Args: cobra.NoArgs,
		Use:  use,
		RunE: runE,
	}

@@ -74,6 +78,10 @@ func (o genericCLIOpts) newCmd(use string, runE func(*cobra.Command, []string) e
	return cmd
}

func (o genericCLIOpts) newTabWriter() *internal.TabWriter {
	return internal.NewTabWriter(o.w)
}

func in(r io.Reader) genericCLIOptFn {
	return func(o *genericCLIOpts) {
		o.in = r

@@ -86,52 +94,50 @@ func out(w io.Writer) genericCLIOptFn {
	}
}

func runEWrap(fn runEWrapFn) genericCLIOptFn {
	return func(opts *genericCLIOpts) {
		opts.runEWrapFn = fn
	}
}

var flags struct {
type globalFlags struct {
	token      string
	host       string
	local      bool
	skipVerify bool
}

func influxCmd(opts ...genericCLIOptFn) *cobra.Command {
var flags globalFlags

type cmdInfluxBuilder struct {
	genericCLIOpts

	once sync.Once
}

func newInfluxCmdBuilder(optFns ...genericCLIOptFn) *cmdInfluxBuilder {
	builder := new(cmdInfluxBuilder)

	opt := genericCLIOpts{
		in: os.Stdin,
		w:  os.Stdout,
		in:         os.Stdin,
		w:          os.Stdout,
		runEWrapFn: checkSetupRunEMiddleware(&flags),
	}
	for _, o := range opts {
		o(&opt)
	for _, optFn := range optFns {
		optFn(&opt)
	}

	cmd := opt.newCmd("influx", nil)
	builder.genericCLIOpts = opt
	return builder
}

func (b *cmdInfluxBuilder) cmd(childCmdFns ...func(f *globalFlags, opt genericCLIOpts) *cobra.Command) *cobra.Command {
	b.once.Do(func() {
		// enforce that viper options only ever get set once
		setViperOptions()
	})

	cmd := b.newCmd("influx", nil)
	cmd.Short = "Influx Client"
	cmd.SilenceUsage = true

	setViperOptions()

	runEWrapper := runEWrap(wrapCheckSetup)

	cmd.AddCommand(
		cmdAuth(),
		cmdBackup(),
		cmdBucket(runEWrapper),
		cmdDelete(),
		cmdOrganization(runEWrapper),
		cmdPing(),
		cmdPkg(runEWrapper),
		cmdQuery(),
		cmdTranspile(),
		cmdREPL(),
		cmdSetup(),
		cmdTask(),
		cmdUser(runEWrapper),
		cmdWrite(),
	)
	for _, childCmd := range childCmdFns {
		cmd.AddCommand(childCmd(&flags, b.genericCLIOpts))
	}

	fOpts := flagOpts{
		{

@@ -151,10 +157,12 @@ func influxCmd(opts ...genericCLIOptFn) *cobra.Command {
	}
	fOpts.mustRegister(cmd)

	// this is after the flagOpts register b/c we don't want to show the default value
	// in the usage display. This will add it as the token value, then if a token flag
	// is provided too, the flag will take precedence.
	flags.token = getTokenFromDefaultPath()
	if flags.token == "" {
		// this is after the flagOpts register b/c we don't want to show the default value
		// in the usage display. This will add it as the token value, then if a token flag
		// is provided too, the flag will take precedence.
		flags.token = getTokenFromDefaultPath()
	}

	cmd.PersistentFlags().BoolVar(&flags.local, "local", false, "Run commands locally against the filesystem")
	cmd.PersistentFlags().BoolVar(&flags.skipVerify, "skip-verify", false, "SkipVerify controls whether a client verifies the server's certificate chain and host name.")

@@ -167,6 +175,27 @@ func influxCmd(opts ...genericCLIOptFn) *cobra.Command {
	return cmd
}

func influxCmd(opts ...genericCLIOptFn) *cobra.Command {
	builder := newInfluxCmdBuilder(opts...)
	return builder.cmd(
		cmdAuth,
		cmdBackup,
		cmdBucket,
		cmdDelete,
		cmdOrganization,
		cmdPing,
		cmdPkg,
		cmdQuery,
		cmdTranspile,
		cmdREPL,
		cmdSecret,
		cmdSetup,
		cmdTask,
		cmdUser,
		cmdWrite,
	)
}

func fetchSubCommand(parent *cobra.Command, args []string) *cobra.Command {
	var err error
	var cmd *cobra.Command

@@ -222,10 +251,10 @@ func writeTokenToPath(tok, path, dir string) error {
	return ioutil.WriteFile(path, []byte(tok), 0600)
}

func checkSetup(host string) error {
func checkSetup(host string, skipVerify bool) error {
	s := &http.SetupService{
		Addr:               flags.host,
		InsecureSkipVerify: flags.skipVerify,
		Addr:               host,
		InsecureSkipVerify: skipVerify,
	}

	isOnboarding, err := s.IsOnboarding(context.Background())

@@ -240,29 +269,20 @@ func checkSetup(host string) error {
	return nil
}

func wrapCheckSetup(fn func(*cobra.Command, []string) error) func(*cobra.Command, []string) error {
	return wrapErrorFmt(func(cmd *cobra.Command, args []string) error {
		err := fn(cmd, args)
		if err == nil {
			return nil
func checkSetupRunEMiddleware(f *globalFlags) cobraRuneEMiddleware {
	return func(fn cobraRunEFn) cobraRunEFn {
		return func(cmd *cobra.Command, args []string) error {
			err := fn(cmd, args)
			if err == nil {
				return nil
			}

			if setupErr := checkSetup(f.host, f.skipVerify); setupErr != nil && influxdb.EUnauthorized != influxdb.ErrorCode(setupErr) {
				return internal.ErrorFmt(setupErr)
			}

			return internal.ErrorFmt(err)
		}

		if setupErr := checkSetup(flags.host); setupErr != nil && influxdb.EUnauthorized != influxdb.ErrorCode(setupErr) {
			return setupErr
		}

		return err
	})
}

func wrapErrorFmt(fn func(*cobra.Command, []string) error) func(*cobra.Command, []string) error {
	return func(cmd *cobra.Command, args []string) error {
		err := fn(cmd, args)
		if err == nil {
			return nil
		}

		return internal.ErrorFmt(err)
	}
}

@@ -4,23 +4,24 @@ import (
	"context"
	"fmt"
	"io"
	"os"

	"github.com/influxdata/influxdb/http"

	"github.com/influxdata/influxdb"
	"github.com/influxdata/influxdb/cmd/influx/internal"
	"github.com/spf13/cobra"
)

type orgSVCFn func() (influxdb.OrganizationService, influxdb.UserResourceMappingService, influxdb.UserService, error)

func cmdOrganization(opts ...genericCLIOptFn) *cobra.Command {
	return newCmdOrgBuilder(newOrgServices, opts...).cmd()
func cmdOrganization(f *globalFlags, opts genericCLIOpts) *cobra.Command {
	builder := newCmdOrgBuilder(newOrgServices, opts)
	builder.globalFlags = f
	return builder.cmd()
}

type cmdOrgBuilder struct {
	genericCLIOpts
	*globalFlags

	svcFn orgSVCFn

@@ -30,17 +31,9 @@ type cmdOrgBuilder struct {
	name string
}

func newCmdOrgBuilder(svcFn orgSVCFn, opts ...genericCLIOptFn) *cmdOrgBuilder {
	opt := genericCLIOpts{
		in: os.Stdin,
		w:  os.Stdout,
	}
	for _, o := range opts {
		o(&opt)
	}

func newCmdOrgBuilder(svcFn orgSVCFn, opts genericCLIOpts) *cmdOrgBuilder {
	return &cmdOrgBuilder{
		genericCLIOpts: opt,
		genericCLIOpts: opts,
		svcFn:          svcFn,
	}
}

@@ -88,7 +81,7 @@ func (b *cmdOrgBuilder) createRunEFn(cmd *cobra.Command, args []string) error {
		return fmt.Errorf("failed to create organization: %v", err)
	}

	w := internal.NewTabWriter(b.w)
	w := b.newTabWriter()
	w.WriteHeaders("ID", "Name")
	w.Write(map[string]interface{}{
		"ID": org.ID.String(),

@@ -138,7 +131,7 @@ func (b *cmdOrgBuilder) deleteRunEFn(cmd *cobra.Command, args []string) error {
		return fmt.Errorf("failed to delete org with id %q: %v", id, err)
	}

	w := internal.NewTabWriter(b.w)
	w := b.newTabWriter()
	w.WriteHeaders("ID", "Name", "Deleted")
	w.Write(map[string]interface{}{
		"ID": o.ID.String(),

@@ -151,8 +144,9 @@ func (b *cmdOrgBuilder) deleteRunEFn(cmd *cobra.Command, args []string) error {
}

func (b *cmdOrgBuilder) cmdFind() *cobra.Command {
	cmd := b.newCmd("find", b.findRunEFn)
	cmd.Short = "Find organizations"
	cmd := b.newCmd("list", b.findRunEFn)
	cmd.Short = "List organizations"
	cmd.Aliases = []string{"find", "ls"}

	opts := flagOpts{
		{

@@ -199,7 +193,7 @@ func (b *cmdOrgBuilder) findRunEFn(cmd *cobra.Command, args []string) error {
		return fmt.Errorf("failed find orgs: %v", err)
	}

	w := internal.NewTabWriter(b.w)
	w := b.newTabWriter()
	w.WriteHeaders("ID", "Name")
	for _, o := range orgs {
		w.Write(map[string]interface{}{

@@ -269,7 +263,7 @@ func (b *cmdOrgBuilder) updateRunEFn(cmd *cobra.Command, args []string) error {
		return fmt.Errorf("failed to update org: %v", err)
	}

	w := internal.NewTabWriter(b.w)
	w := b.newTabWriter()
	w.WriteHeaders("ID", "Name")
	w.Write(map[string]interface{}{
		"ID": o.ID.String(),

@@ -297,6 +291,7 @@ func (b *cmdOrgBuilder) cmdMember() *cobra.Command {
func (b *cmdOrgBuilder) cmdMemberList() *cobra.Command {
	cmd := b.newCmd("list", b.memberListRunEFn)
	cmd.Short = "List organization members"
	cmd.Aliases = []string{"find", "ls"}

	opts := flagOpts{
		{

@@ -348,7 +343,7 @@ func (b *cmdOrgBuilder) memberListRunEFn(cmd *cobra.Command, args []string) erro
	}

	ctx := context.Background()
	return memberList(ctx, b.w, urmSVC, userSVC, influxdb.UserResourceMappingFilter{
	return memberList(ctx, b, urmSVC, userSVC, influxdb.UserResourceMappingFilter{
		ResourceType: influxdb.OrgsResourceType,
		ResourceID:   organization.ID,
		UserType:     influxdb.Member,

@@ -538,7 +533,7 @@ func newOrganizationService() (influxdb.OrganizationService, error) {
	}, nil
}

func memberList(ctx context.Context, w io.Writer, urmSVC influxdb.UserResourceMappingService, userSVC influxdb.UserService, f influxdb.UserResourceMappingFilter) error {
func memberList(ctx context.Context, b *cmdOrgBuilder, urmSVC influxdb.UserResourceMappingService, userSVC influxdb.UserService, f influxdb.UserResourceMappingFilter) error {
	mps, _, err := urmSVC.FindUserResourceMappings(ctx, f)
	if err != nil {
		return fmt.Errorf("failed to find members: %v", err)

@@ -582,7 +577,7 @@ func memberList(ctx context.Context, w io.Writer, urmSVC influxdb.UserResourceMa
	}
}

	tw := internal.NewTabWriter(w)
	tw := b.newTabWriter()
	tw.WriteHeaders("ID", "Name", "Status")
	for _, m := range urs {
		tw.Write(map[string]interface{}{

@@ -17,8 +17,6 @@ import (
)

func TestCmdOrg(t *testing.T) {
	setViperOptions()

	fakeOrgSVCFn := func(svc influxdb.OrganizationService) orgSVCFn {
		return func() (influxdb.OrganizationService, influxdb.UserResourceMappingService, influxdb.UserService, error) {
			return svc, mock.NewUserResourceMappingService(), mock.NewUserService(), nil

@@ -55,7 +53,7 @@ func TestCmdOrg(t *testing.T) {
		},
	}

	cmdFn := func(expectedOrg influxdb.Organization) *cobra.Command {
	cmdFn := func(expectedOrg influxdb.Organization) func(*globalFlags, genericCLIOpts) *cobra.Command {
		svc := mock.NewOrganizationService()
		svc.CreateOrganizationF = func(ctx context.Context, org *influxdb.Organization) error {
			if expectedOrg != *org {

@@ -64,15 +62,21 @@ func TestCmdOrg(t *testing.T) {
			return nil
		}

		builder := newCmdOrgBuilder(fakeOrgSVCFn(svc), out(ioutil.Discard))
		cmd := builder.cmdCreate()
		return cmd
		return func(_ *globalFlags, opt genericCLIOpts) *cobra.Command {
			builder := newCmdOrgBuilder(fakeOrgSVCFn(svc), opt)
			return builder.cmd()
		}
	}

	for _, tt := range tests {
		fn := func(t *testing.T) {
			cmd := cmdFn(tt.expected)
			cmd.SetArgs(tt.flags)
			builder := newInfluxCmdBuilder(
				in(new(bytes.Buffer)),
				out(ioutil.Discard),
			)
			cmd := builder.cmd(cmdFn(tt.expected))
			cmd.SetArgs(append([]string{"org", "create"}, tt.flags...))

			require.NoError(t, cmd.Execute())
		}

@@ -98,7 +102,7 @@ func TestCmdOrg(t *testing.T) {
		},
	}

	cmdFn := func(expectedID influxdb.ID) *cobra.Command {
	cmdFn := func(expectedID influxdb.ID) func(*globalFlags, genericCLIOpts) *cobra.Command {
		svc := mock.NewOrganizationService()
		svc.FindOrganizationByIDF = func(ctx context.Context, id influxdb.ID) (*influxdb.Organization, error) {
			return &influxdb.Organization{ID: id}, nil

@@ -110,16 +114,22 @@ func TestCmdOrg(t *testing.T) {
			return nil
		}

		builder := newCmdOrgBuilder(fakeOrgSVCFn(svc), out(ioutil.Discard))
		cmd := builder.cmdDelete()
		return cmd
		return func(g *globalFlags, opt genericCLIOpts) *cobra.Command {
			builder := newCmdOrgBuilder(fakeOrgSVCFn(svc), opt)
			return builder.cmd()
		}
	}

	for _, tt := range tests {
		fn := func(t *testing.T) {
			cmd := cmdFn(tt.expectedID)
			builder := newInfluxCmdBuilder(
				in(new(bytes.Buffer)),
				out(ioutil.Discard),
			)
			cmd := builder.cmd(cmdFn(tt.expectedID))
			idFlag := tt.flag + tt.expectedID.String()
			cmd.SetArgs([]string{idFlag})
			cmd.SetArgs([]string{"org", "find", idFlag})

			require.NoError(t, cmd.Execute())
		}

@@ -127,7 +137,7 @@ func TestCmdOrg(t *testing.T) {
		}
	})

	t.Run("find", func(t *testing.T) {
	t.Run("list", func(t *testing.T) {
		type called struct {
			name string
			id   influxdb.ID

@@ -137,6 +147,7 @@ func TestCmdOrg(t *testing.T) {
			name     string
			expected called
			flags    []string
			command  string
			envVars  map[string]string
		}{
			{

@@ -169,9 +180,23 @@ func TestCmdOrg(t *testing.T) {
				flags:    []string{"-i=" + influxdb.ID(1).String()},
				expected: called{name: "name1", id: 1},
			},
			{
				name:     "ls alias",
				command:  "ls",
				flags:    []string{"--name=name1"},
				envVars:  envVarsZeroMap,
				expected: called{name: "name1"},
			},
			{
				name:     "find alias",
				command:  "find",
				flags:    []string{"--name=name1"},
				envVars:  envVarsZeroMap,
				expected: called{name: "name1"},
			},
		}

		cmdFn := func() (*cobra.Command, *called) {
		cmdFn := func() (func(*globalFlags, genericCLIOpts) *cobra.Command, *called) {
			calls := new(called)

			svc := mock.NewOrganizationService()

@@ -185,17 +210,28 @@ func TestCmdOrg(t *testing.T) {
				return nil, 0, nil
			}

			builder := newCmdOrgBuilder(fakeOrgSVCFn(svc), in(new(bytes.Buffer)), out(ioutil.Discard))
			cmd := builder.cmdFind()
			return cmd, calls
			return func(g *globalFlags, opt genericCLIOpts) *cobra.Command {
				builder := newCmdOrgBuilder(fakeOrgSVCFn(svc), opt)
				return builder.cmd()
			}, calls
		}

		for _, tt := range tests {
			fn := func(t *testing.T) {
				defer addEnvVars(t, tt.envVars)()

				cmd, calls := cmdFn()
				cmd.SetArgs(tt.flags)
				builder := newInfluxCmdBuilder(
					in(new(bytes.Buffer)),
					out(ioutil.Discard),
				)
				cmdFn, calls := cmdFn()
				cmd := builder.cmd(cmdFn)

				if tt.command == "" {
					tt.command = "list"
				}

				cmd.SetArgs(append([]string{"org", tt.command}, tt.flags...))

				require.NoError(t, cmd.Execute())
				assert.Equal(t, tt.expected, *calls)

@@ -260,7 +296,7 @@ func TestCmdOrg(t *testing.T) {
			},
		}

		cmdFn := func(expectedUpdate influxdb.OrganizationUpdate) *cobra.Command {
		cmdFn := func(expectedUpdate influxdb.OrganizationUpdate) func(*globalFlags, genericCLIOpts) *cobra.Command {
			svc := mock.NewOrganizationService()
			svc.UpdateOrganizationF = func(ctx context.Context, id influxdb.ID, upd influxdb.OrganizationUpdate) (*influxdb.Organization, error) {
				if id != 3 {

@@ -272,17 +308,23 @@ func TestCmdOrg(t *testing.T) {
				return &influxdb.Organization{}, nil
			}

			builder := newCmdOrgBuilder(fakeOrgSVCFn(svc), out(ioutil.Discard))
			cmd := builder.cmdUpdate()
			return cmd
			return func(g *globalFlags, opt genericCLIOpts) *cobra.Command {
				builder := newCmdOrgBuilder(fakeOrgSVCFn(svc), opt)
				return builder.cmd()
			}
		}

		for _, tt := range tests {
			fn := func(t *testing.T) {
				defer addEnvVars(t, tt.envVars)()

				cmd := cmdFn(tt.expected)
				cmd.SetArgs(tt.flags)
				builder := newInfluxCmdBuilder(
					in(new(bytes.Buffer)),
					out(ioutil.Discard),
				)
				cmd := builder.cmd(cmdFn(tt.expected))
				cmd.SetArgs(append([]string{"org", "update"}, tt.flags...))

				require.NoError(t, cmd.Execute())
			}

@@ -306,13 +348,18 @@ func TestCmdOrg(t *testing.T) {
		}
	)

	testMemberFn := func(t *testing.T, cmdFn func() (*cobra.Command, *called), testCases ...testCase) {
	testMemberFn := func(t *testing.T, cmdName string, cmdFn func() (func(*globalFlags, genericCLIOpts) *cobra.Command, *called), testCases ...testCase) {
		for _, tt := range testCases {
			fn := func(t *testing.T) {
				defer addEnvVars(t, tt.envVars)()

				cmd, calls := cmdFn()
				cmd.SetArgs(tt.memberFlags)
				builder := newInfluxCmdBuilder(
					in(new(bytes.Buffer)),
					out(ioutil.Discard),
				)
				nestedCmd, calls := cmdFn()
				cmd := builder.cmd(nestedCmd)
				cmd.SetArgs(append([]string{"org", "members", cmdName}, tt.memberFlags...))

				require.NoError(t, cmd.Execute())
				assert.Equal(t, tt.expected, *calls)

@@ -363,7 +410,7 @@ func TestCmdOrg(t *testing.T) {
			},
		}

		cmdFn := func() (*cobra.Command, *called) {
		cmdFn := func() (func(*globalFlags, genericCLIOpts) *cobra.Command, *called) {
			calls := new(called)

			svc := mock.NewOrganizationService()

@@ -377,16 +424,19 @@ func TestCmdOrg(t *testing.T) {
				return &influxdb.Organization{ID: 1}, nil
			}

			builder := newCmdOrgBuilder(fakeOrgSVCFn(svc), in(new(bytes.Buffer)), out(ioutil.Discard))
			cmd := builder.cmdMemberList()
			return cmd, calls
			return func(g *globalFlags, opt genericCLIOpts) *cobra.Command {
				builder := newCmdOrgBuilder(fakeOrgSVCFn(svc), opt)
				return builder.cmd()
			}, calls
		}

		testMemberFn(t, cmdFn, tests...)
		testMemberFn(t, "list", cmdFn, tests...)
		testMemberFn(t, "ls", cmdFn, tests[0:1]...)
		testMemberFn(t, "find", cmdFn, tests[0:1]...)
	})

	t.Run("add", func(t *testing.T) {
		cmdFn := func() (*cobra.Command, *called) {
		cmdFn := func() (func(*globalFlags, genericCLIOpts) *cobra.Command, *called) {
			calls := new(called)

			svc := mock.NewOrganizationService()

@@ -405,9 +455,10 @@ func TestCmdOrg(t *testing.T) {
				return nil
			}

			builder := newCmdOrgBuilder(fakeOrgUrmSVCsFn(svc, urmSVC), in(new(bytes.Buffer)), out(ioutil.Discard))
			cmd := builder.cmdMemberAdd()
			return cmd, calls
			return func(g *globalFlags, opt genericCLIOpts) *cobra.Command {
				builder := newCmdOrgBuilder(fakeOrgUrmSVCsFn(svc, urmSVC), opt)
				return builder.cmd()
			}, calls
		}

		addTests := []testCase{

@@ -449,11 +500,11 @@ func TestCmdOrg(t *testing.T) {
			},
		}

		testMemberFn(t, cmdFn, addTests...)
		testMemberFn(t, "add", cmdFn, addTests...)
	})

	t.Run("remove", func(t *testing.T) {
		cmdFn := func() (*cobra.Command, *called) {
		cmdFn := func() (func(*globalFlags, genericCLIOpts) *cobra.Command, *called) {
			calls := new(called)

			svc := mock.NewOrganizationService()

@@ -472,9 +523,10 @@ func TestCmdOrg(t *testing.T) {
				return nil
			}

			builder := newCmdOrgBuilder(fakeOrgUrmSVCsFn(svc, urmSVC), in(new(bytes.Buffer)), out(ioutil.Discard))
			cmd := builder.cmdMemberRemove()
			return cmd, calls
			return func(g *globalFlags, opt genericCLIOpts) *cobra.Command {
				builder := newCmdOrgBuilder(fakeOrgUrmSVCsFn(svc, urmSVC), opt)
				return builder.cmd()
			}, calls
		}

		addTests := []testCase{

@@ -516,7 +568,7 @@ func TestCmdOrg(t *testing.T) {
			},
		}

		testMemberFn(t, cmdFn, addTests...)
		testMemberFn(t, "remove", cmdFn, addTests...)
	})
})
}

@@ -10,43 +10,42 @@ import (
	"github.com/spf13/cobra"
)

func cmdPing() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "ping",
		Short: "Check the InfluxDB /health endpoint",
		Long:  `Checks the health of a running InfluxDB instance by querying /health. Does not require valid token.`,
		RunE: func(cmd *cobra.Command, args []string) error {
			if flags.local {
				return fmt.Errorf("local flag not supported for ping command")
			}
func cmdPing(f *globalFlags, opts genericCLIOpts) *cobra.Command {
	runE := func(cmd *cobra.Command, args []string) error {
		if flags.local {
			return fmt.Errorf("local flag not supported for ping command")
		}

			c := http.Client{
				Timeout: 5 * time.Second,
			}
			url := flags.host + "/health"
			resp, err := c.Get(url)
			if err != nil {
				return err
			}
			defer resp.Body.Close()
			if resp.StatusCode/100 != 2 {
				return fmt.Errorf("got %d from '%s'", resp.StatusCode, url)
			}
		c := http.Client{
			Timeout: 5 * time.Second,
		}
		url := flags.host + "/health"
		resp, err := c.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode/100 != 2 {
			return fmt.Errorf("got %d from '%s'", resp.StatusCode, url)
		}

			var healthResponse check.Response
			if err = json.NewDecoder(resp.Body).Decode(&healthResponse); err != nil {
				return err
			}
		var healthResponse check.Response
		if err = json.NewDecoder(resp.Body).Decode(&healthResponse); err != nil {
			return err
		}

			if healthResponse.Status == check.StatusPass {
				fmt.Println("OK")
			} else {
				return fmt.Errorf("health check failed: '%s'", healthResponse.Message)
			}
		if healthResponse.Status == check.StatusPass {
			fmt.Println("OK")
		} else {
			return fmt.Errorf("health check failed: '%s'", healthResponse.Message)
		}

			return nil
		},
		return nil
	}

	cmd := opts.newCmd("ping", runE)
	cmd.Short = "Check the InfluxDB /health endpoint"
	cmd.Long = `Checks the health of a running InfluxDB instance by querying /health. Does not require valid token.`

	return cmd
}

@@ -29,8 +29,8 @@ import (

type pkgSVCsFn func() (pkger.SVC, influxdb.OrganizationService, error)

func cmdPkg(opts ...genericCLIOptFn) *cobra.Command {
	return newCmdPkgBuilder(newPkgerSVC, opts...).cmd()
func cmdPkg(f *globalFlags, opts genericCLIOpts) *cobra.Command {
	return newCmdPkgBuilder(newPkgerSVC, opts).cmd()
}

type cmdPkgBuilder struct {

@@ -45,12 +45,13 @@ type cmdPkgBuilder struct {
	disableTableBorders bool
	org                 organization
	quiet               bool
	recurse             bool
	urls                []string

	applyOpts struct {
		envRefs []string
		force   string
		secrets []string
		url     string
	}
	exportOpts struct {
		resourceType string

@@ -66,17 +67,9 @@ type cmdPkgBuilder struct {
	}
}

func newCmdPkgBuilder(svcFn pkgSVCsFn, opts ...genericCLIOptFn) *cmdPkgBuilder {
	opt := genericCLIOpts{
		in: os.Stdin,
		w:  os.Stdout,
	}
	for _, o := range opts {
		o(&opt)
	}

func newCmdPkgBuilder(svcFn pkgSVCsFn, opts genericCLIOpts) *cmdPkgBuilder {
	return &cmdPkgBuilder{
		genericCLIOpts: opt,
		genericCLIOpts: opts,
		svcFn:          svcFn,
	}
}

@@ -96,16 +89,13 @@ func (b *cmdPkgBuilder) cmdPkgApply() *cobra.Command {
	cmd.Short = "Apply a pkg to create resources"

	b.org.register(cmd, false)
	cmd.Flags().StringVarP(&b.encoding, "encoding", "e", "", "Encoding for the input stream. If a file is provided will gather encoding type from file extension. If extension provided will override.")
	b.registerPkgFileFlags(cmd)
	cmd.Flags().BoolVarP(&b.quiet, "quiet", "q", false, "Disable output printing")
	cmd.Flags().StringVar(&b.applyOpts.force, "force", "", `TTY input, if package will have destructive changes, proceed if set "true"`)
	cmd.Flags().StringVarP(&b.applyOpts.url, "url", "u", "", "URL to retrieve a package.")
	cmd.Flags().BoolVarP(&b.disableColor, "disable-color", "c", false, "Disable color in output")
	cmd.Flags().BoolVar(&b.disableTableBorders, "disable-table-borders", false, "Disable table borders")

	b.applyOpts.secrets = []string{}
	cmd.Flags().StringSliceVarP(&b.files, "file", "f", nil, "Path to package file")
	cmd.MarkFlagFilename("file", "yaml", "yml", "json", "jsonnet")
	cmd.Flags().StringSliceVar(&b.applyOpts.secrets, "secret", nil, "Secrets to provide alongside the package; format should --secret=SECRET_KEY=SECRET_VALUE --secret=SECRET_KEY_2=SECRET_VALUE_2")
	cmd.Flags().StringSliceVar(&b.applyOpts.envRefs, "env-ref", nil, "Environment references to provide alongside the package; format should --env-ref=REF_KEY=REF_VALUE --env-ref=REF_KEY_2=REF_VALUE_2")

@@ -132,15 +122,7 @@ func (b *cmdPkgBuilder) pkgApplyRunEFn(cmd *cobra.Command, args []string) error
		return err
	}

	var (
		pkg   *pkger.Pkg
		isTTY bool
	)
	if b.applyOpts.url != "" {
		pkg, err = pkger.Parse(b.applyEncoding(), pkger.FromHTTPRequest(b.applyOpts.url))
	} else {
		pkg, isTTY, err = b.readPkgStdInOrFile(b.files)
	}
	pkg, isTTY, err := b.readPkg()
	if err != nil {
		return err
	}

@@ -295,12 +277,15 @@ func (b *cmdPkgBuilder) pkgExportAllRunEFn(cmd *cobra.Command, args []string) er
		return err
	}

	return b.writePkg(cmd.OutOrStdout(), pkgSVC, b.file, pkger.CreateWithAllOrgResources(orgID))
	orgOpt := pkger.CreateWithAllOrgResources(pkger.CreateByOrgIDOpt{
		OrgID: orgID,
	})
	return b.writePkg(cmd.OutOrStdout(), pkgSVC, b.file, orgOpt)
}

func (b *cmdPkgBuilder) cmdPkgSummary() *cobra.Command {
	runE := func(cmd *cobra.Command, args []string) error {
		pkg, _, err := b.readPkgStdInOrFile(b.files)
		pkg, _, err := b.readPkg()
		if err != nil {
			return err
		}

@@ -312,7 +297,7 @@ func (b *cmdPkgBuilder) cmdPkgSummary() *cobra.Command {
	cmd := b.newCmd("summary", runE)
	cmd.Short = "Summarize the provided package"

	cmd.Flags().StringSliceVarP(&b.files, "file", "f", nil, "input file for pkg; if none provided will use TTY input")
	b.registerPkgFileFlags(cmd)
	cmd.Flags().BoolVarP(&b.disableColor, "disable-color", "c", false, "Disable color in output")
	cmd.Flags().BoolVar(&b.disableTableBorders, "disable-table-borders", false, "Disable table borders")

@@ -321,7 +306,7 @@ func (b *cmdPkgBuilder) cmdPkgSummary() *cobra.Command {

func (b *cmdPkgBuilder) cmdPkgValidate() *cobra.Command {
	runE := func(cmd *cobra.Command, args []string) error {
		pkg, _, err := b.readPkgStdInOrFile(b.files)
		pkg, _, err := b.readPkg()
		if err != nil {
			return err
		}

@@ -331,12 +316,22 @@ func (b *cmdPkgBuilder) cmdPkgValidate() *cobra.Command {
	cmd := b.newCmd("validate", runE)
	cmd.Short = "Validate the provided package"

	cmd.Flags().StringVarP(&b.encoding, "encoding", "e", "", "Encoding for the input stream. If a file is provided will gather encoding type from file extension. If extension provided will override.")
	cmd.Flags().StringSliceVarP(&b.files, "file", "f", nil, "input file for pkg; if none provided will use TTY input")
	b.registerPkgFileFlags(cmd)

	return cmd
}

func (b *cmdPkgBuilder) registerPkgFileFlags(cmd *cobra.Command) {
	cmd.Flags().StringSliceVarP(&b.files, "file", "f", nil, "Path to package file")
	cmd.MarkFlagFilename("file", "yaml", "yml", "json", "jsonnet")
	cmd.Flags().BoolVarP(&b.recurse, "recurse", "R", false, "Process the directory used in -f, --file recursively. Useful when you want to manage related manifests organized within the same directory.")

	cmd.Flags().StringSliceVarP(&b.urls, "url", "u", nil, "URL to a package file")

	cmd.Flags().StringVarP(&b.encoding, "encoding", "e", "", "Encoding for the input stream. If a file is provided will gather encoding type from file extension. If extension provided will override.")
	cmd.MarkFlagFilename("encoding", "yaml", "yml", "json", "jsonnet")
}

func (b *cmdPkgBuilder) writePkg(w io.Writer, pkgSVC pkger.SVC, outPath string, opts ...pkger.CreatePkgSetFn) error {
	pkg, err := pkgSVC.CreatePkg(context.Background(), opts...)
	if err != nil {

@@ -356,28 +351,70 @@ func (b *cmdPkgBuilder) writePkg(w io.Writer, pkgSVC pkger.SVC, outPath string,
	return ioutil.WriteFile(outPath, buf.Bytes(), os.ModePerm)
}

func (b *cmdPkgBuilder) readPkgStdInOrFile(files []string) (*pkger.Pkg, bool, error) {
	if len(files) > 0 {
		var rawPkgs []*pkger.Pkg
		for _, file := range files {
			pkg, err := pkger.Parse(b.applyEncoding(), pkger.FromFile(file), pkger.ValidSkipParseError())
			if err != nil {
				return nil, false, err
			}
			rawPkgs = append(rawPkgs, pkg)
func (b *cmdPkgBuilder) readRawPkgsFromFiles(filePaths []string, recurse bool) ([]*pkger.Pkg, error) {
	mFiles := make(map[string]struct{})
	for _, f := range filePaths {
		files, err := readFilesFromPath(f, recurse)
		if err != nil {
			return nil, err
		}
		pkg, err := pkger.Combine(rawPkgs...)
		for _, ff := range files {
			mFiles[ff] = struct{}{}
		}
	}

	var rawPkgs []*pkger.Pkg
	for f := range mFiles {
		pkg, err := pkger.Parse(b.convertFileEncoding(f), pkger.FromFile(f), pkger.ValidSkipParseError())
		if err != nil {
			return nil, err
		}
		rawPkgs = append(rawPkgs, pkg)
	}

	return rawPkgs, nil
}

func (b *cmdPkgBuilder) readRawPkgsFromURLs(urls []string) ([]*pkger.Pkg, error) {
	mURLs := make(map[string]struct{})
	for _, f := range urls {
		mURLs[f] = struct{}{}
	}

	var rawPkgs []*pkger.Pkg
	for u := range mURLs {
		pkg, err := pkger.Parse(b.convertURLEncoding(u), pkger.FromHTTPRequest(u), pkger.ValidSkipParseError())
		if err != nil {
			return nil, err
		}
		rawPkgs = append(rawPkgs, pkg)
	}
	return rawPkgs, nil
}

func (b *cmdPkgBuilder) readPkg() (*pkger.Pkg, bool, error) {
	pkgs, err := b.readRawPkgsFromFiles(b.files, b.recurse)
	if err != nil {
		return nil, false, err
	}

	urlPkgs, err := b.readRawPkgsFromURLs(b.urls)
	if err != nil {
		return nil, false, err
	}
	pkgs = append(pkgs, urlPkgs...)

	if _, err := b.inStdIn(); err != nil {
		pkg, err := pkger.Combine(pkgs...)
		return pkg, false, err
	}

	var isTTY bool

	if _, err := b.inStdIn(); err == nil {
		isTTY = true
	stdinPkg, err := pkger.Parse(b.convertEncoding(), pkger.FromReader(b.in), pkger.ValidSkipParseError())
	if err != nil {
		return nil, true, err
	}

	pkg, err := pkger.Parse(b.applyEncoding(), pkger.FromReader(b.in))
	return pkg, isTTY, err
	pkg, err := pkger.Combine(append(pkgs, stdinPkg)...)
	return pkg, true, err
}

func (b *cmdPkgBuilder) inStdIn() (*os.File, error) {

@@ -421,17 +458,40 @@ func (b *cmdPkgBuilder) getInput(msg, defaultVal string) string {
	return getInput(ui, msg, defaultVal)
}

func (b *cmdPkgBuilder) applyEncoding() pkger.Encoding {
	urlBase := path.Ext(b.applyOpts.url)
	ext := filepath.Ext(b.file)
func (b *cmdPkgBuilder) convertURLEncoding(url string) pkger.Encoding {
	urlBase := path.Ext(url)
	switch {
	case ext == ".json" || b.encoding == "json" || urlBase == ".json":
	case strings.HasPrefix(urlBase, ".jsonnet"):
		return pkger.EncodingJsonnet
	case strings.HasPrefix(urlBase, ".json"):
|
||||
return pkger.EncodingJSON
|
||||
case ext == ".yml" || ext == ".yaml" ||
|
||||
b.encoding == "yml" || b.encoding == "yaml" ||
|
||||
urlBase == ".yml" || urlBase == ".yaml":
|
||||
case strings.HasPrefix(urlBase, ".yml") || strings.HasPrefix(urlBase, ".yaml"):
|
||||
return pkger.EncodingYAML
|
||||
case ext == ".jsonnet" || b.encoding == "jsonnet" || urlBase == ".jsonnet":
|
||||
}
|
||||
return b.convertEncoding()
|
||||
}
|
||||
|
||||
func (b *cmdPkgBuilder) convertFileEncoding(file string) pkger.Encoding {
|
||||
ext := filepath.Ext(file)
|
||||
switch {
|
||||
case strings.HasPrefix(ext, ".jsonnet"):
|
||||
return pkger.EncodingJsonnet
|
||||
case strings.HasPrefix(ext, ".json"):
|
||||
return pkger.EncodingJSON
|
||||
case strings.HasPrefix(ext, ".yml") || strings.HasPrefix(ext, ".yaml"):
|
||||
return pkger.EncodingYAML
|
||||
}
|
||||
|
||||
return b.convertEncoding()
|
||||
}
|
||||
|
||||
func (b *cmdPkgBuilder) convertEncoding() pkger.Encoding {
|
||||
switch {
|
||||
case b.encoding == "json":
|
||||
return pkger.EncodingJSON
|
||||
case b.encoding == "yml" || b.encoding == "yaml":
|
||||
return pkger.EncodingYAML
|
||||
case b.encoding == "jsonnet":
|
||||
return pkger.EncodingJsonnet
|
||||
default:
|
||||
return pkger.EncodingSource
|
||||
|
@ -940,6 +1000,48 @@ func formatDuration(d time.Duration) string {
|
|||
return d.String()
|
||||
}
|
||||
|
||||
func readFilesFromPath(filePath string, recurse bool) ([]string, error) {
|
||||
info, err := os.Stat(filePath)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if !info.IsDir() {
|
||||
return []string{filePath}, nil
|
||||
}
|
||||
|
||||
dirFiles, err := ioutil.ReadDir(filePath)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
mFiles := make(map[string]struct{})
|
||||
assign := func(ss ...string) {
|
||||
for _, s := range ss {
|
||||
mFiles[s] = struct{}{}
|
||||
}
|
||||
}
|
||||
for _, f := range dirFiles {
|
||||
fileP := filepath.Join(filePath, f.Name())
|
||||
if f.IsDir() {
|
||||
if recurse {
|
||||
rFiles, err := readFilesFromPath(fileP, recurse)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
assign(rFiles...)
|
||||
}
|
||||
continue
|
||||
}
|
||||
assign(fileP)
|
||||
}
|
||||
|
||||
var files []string
|
||||
for f := range mFiles {
|
||||
files = append(files, f)
|
||||
}
|
||||
return files, nil
|
||||
}
|
||||
|
||||
func mapKeys(provided, kvPairs []string) map[string]string {
|
||||
out := make(map[string]string)
|
||||
for _, k := range provided {
|
||||
|
|
|
@@ -3,18 +3,20 @@ package main
import (
	"bytes"
	"context"
	"fmt"
	"errors"
	"io"
	"io/ioutil"
	"os"
	"path"
	"path/filepath"
	"sort"
	"strconv"
	"strings"
	"testing"

	"github.com/influxdata/influxdb"
	"github.com/influxdata/influxdb/kit/errors"
	"github.com/influxdata/influxdb/mock"

	"github.com/influxdata/influxdb"
	"github.com/influxdata/influxdb/pkger"
	"github.com/spf13/cobra"
	"github.com/stretchr/testify/assert"

@@ -32,8 +34,6 @@ func TestCmdPkg(t *testing.T) {
		}
	}

	setViperOptions()

	t.Run("export all", func(t *testing.T) {
		expectedOrgID := influxdb.ID(9000)

@@ -45,9 +45,7 @@ func TestCmdPkg(t *testing.T) {
				name:     "yaml out with org id",
				encoding: pkger.EncodingYAML,
				filename: "pkg_0.yml",
				flags: []flagArg{
					{name: "org-id", val: expectedOrgID.String()},
				},
				args: []string{"--org-id=" + expectedOrgID.String()},
			},
		},
		{

@@ -55,9 +53,7 @@ func TestCmdPkg(t *testing.T) {
				name:     "yaml out with org name",
				encoding: pkger.EncodingYAML,
				filename: "pkg_0.yml",
				flags: []flagArg{
					{name: "org", val: "influxdata"},
				},
				args: []string{"--org=influxdata"},
			},
		},
		{

@@ -78,7 +74,7 @@ func TestCmdPkg(t *testing.T) {
			},
		}

		cmdFn := func() *cobra.Command {
		cmdFn := func(_ *globalFlags, opt genericCLIOpts) *cobra.Command {
			pkgSVC := &fakePkgSVC{
				createFn: func(_ context.Context, opts ...pkger.CreatePkgSetFn) (*pkger.Pkg, error) {
					opt := pkger.CreateOpt{}

@@ -87,7 +83,7 @@ func TestCmdPkg(t *testing.T) {
						return nil, err
					}
				}
				if !opt.OrgIDs[expectedOrgID] {
				if opt.OrgIDs[0].OrgID != expectedOrgID {
					return nil, errors.New("did not provide expected orgID")
				}

@@ -100,10 +96,10 @@ func TestCmdPkg(t *testing.T) {
					return &pkg, nil
				},
			}
			builder := newCmdPkgBuilder(fakeSVCFn(pkgSVC), in(new(bytes.Buffer)))
			return builder.cmdPkgExportAll()
			return newCmdPkgBuilder(fakeSVCFn(pkgSVC), opt).cmd()
		}
		for _, tt := range tests {
			tt.pkgFileArgs.args = append([]string{"pkg", "export", "all"}, tt.pkgFileArgs.args...)
			testPkgWrites(t, cmdFn, tt.pkgFileArgs, func(t *testing.T, pkg *pkger.Pkg) {
				sum := pkg.Summary()

@@ -192,7 +188,7 @@ func TestCmdPkg(t *testing.T) {
			},
		}

		cmdFn := func() *cobra.Command {
		cmdFn := func(_ *globalFlags, opt genericCLIOpts) *cobra.Command {
			pkgSVC := &fakePkgSVC{
				createFn: func(_ context.Context, opts ...pkger.CreatePkgSetFn) (*pkger.Pkg, error) {
					var opt pkger.CreateOpt

@@ -218,19 +214,21 @@ func TestCmdPkg(t *testing.T) {
					return &pkg, nil
				},
			}
			builder := newCmdPkgBuilder(fakeSVCFn(pkgSVC), in(new(bytes.Buffer)))
			return builder.cmdPkgExport()

			builder := newCmdPkgBuilder(fakeSVCFn(pkgSVC), opt)
			return builder.cmd()
		}
		for _, tt := range tests {
			tt.flags = append(tt.flags,
				flagArg{"buckets", idsStr(tt.bucketIDs...)},
				flagArg{"endpoints", idsStr(tt.endpointIDs...)},
				flagArg{"dashboards", idsStr(tt.dashIDs...)},
				flagArg{"labels", idsStr(tt.labelIDs...)},
				flagArg{"rules", idsStr(tt.ruleIDs...)},
				flagArg{"tasks", idsStr(tt.taskIDs...)},
				flagArg{"telegraf-configs", idsStr(tt.telegrafIDs...)},
				flagArg{"variables", idsStr(tt.varIDs...)},
			tt.args = append(tt.args,
				"pkg", "export",
				"--buckets="+idsStr(tt.bucketIDs...),
				"--endpoints="+idsStr(tt.endpointIDs...),
				"--dashboards="+idsStr(tt.dashIDs...),
				"--labels="+idsStr(tt.labelIDs...),
				"--rules="+idsStr(tt.ruleIDs...),
				"--tasks="+idsStr(tt.taskIDs...),
				"--telegraf-configs="+idsStr(tt.telegrafIDs...),
				"--variables="+idsStr(tt.varIDs...),
			)

			testPkgWrites(t, cmdFn, tt.pkgFileArgs, func(t *testing.T, pkg *pkger.Pkg) {

@@ -282,8 +280,17 @@ func TestCmdPkg(t *testing.T) {

	t.Run("validate", func(t *testing.T) {
		t.Run("pkg is valid returns no error", func(t *testing.T) {
			cmd := newCmdPkgBuilder(fakeSVCFn(new(fakePkgSVC))).cmdPkgValidate()
			builder := newInfluxCmdBuilder(
				in(new(bytes.Buffer)),
				out(ioutil.Discard),
			)
			cmd := builder.cmd(func(f *globalFlags, opt genericCLIOpts) *cobra.Command {
				return newCmdPkgBuilder(fakeSVCFn(new(fakePkgSVC)), opt).cmd()
			})

			cmd.SetArgs([]string{
				"pkg",
				"validate",
				"--file=../../pkger/testdata/bucket.yml",
				"-f=../../pkger/testdata/label.yml",
			})

@@ -293,45 +300,113 @@ func TestCmdPkg(t *testing.T) {
		t.Run("pkg is invalid returns error", func(t *testing.T) {
			// pkgYml is invalid because it is missing a name and wrong apiVersion
			const pkgYml = `apiVersion: 0.1.0
kind: Bucket
metadata:
`
			b := newCmdPkgBuilder(fakeSVCFn(new(fakePkgSVC)), in(strings.NewReader(pkgYml)), out(ioutil.Discard))
			cmd := b.cmdPkgValidate()
kind: Bucket
metadata:
`
			builder := newInfluxCmdBuilder(
				in(strings.NewReader(pkgYml)),
				out(ioutil.Discard),
			)
			cmd := builder.cmd(func(f *globalFlags, opt genericCLIOpts) *cobra.Command {
				return newCmdPkgBuilder(fakeSVCFn(new(fakePkgSVC)), opt).cmd()
			})
			cmd.SetArgs([]string{"pkg", "validate"})

			require.Error(t, cmd.Execute())
		})
	})
}

type flagArg struct{ name, val string }
func Test_readFilesFromPath(t *testing.T) {
	t.Run("single file", func(t *testing.T) {
		dir := newTempDir(t)
		defer os.RemoveAll(dir)

func (s flagArg) String() string {
	return fmt.Sprintf("--%s=%s", s.name, s.val)
}
		f := newTempFile(t, dir)

func flagArgs(fArgs ...flagArg) []string {
	var args []string
	for _, f := range fArgs {
		args = append(args, f.String())
	}
	return args
		files, err := readFilesFromPath(f.Name(), false)
		require.NoError(t, err)
		assert.Equal(t, []string{f.Name()}, files)

		files, err = readFilesFromPath(f.Name(), true)
		require.NoError(t, err)
		assert.Equal(t, []string{f.Name()}, files)
	})

	t.Run("dir with no files", func(t *testing.T) {
		dir := newTempDir(t)
		defer os.RemoveAll(dir)

		files, err := readFilesFromPath(dir, false)
		require.NoError(t, err)
		assert.Empty(t, files)
	})

	t.Run("dir with only files", func(t *testing.T) {
		dir := newTempDir(t)
		defer os.RemoveAll(dir)

		filePaths := []string{
			newTempFile(t, dir).Name(),
			newTempFile(t, dir).Name(),
		}
		sort.Strings(filePaths)

		files, err := readFilesFromPath(dir, false)
		require.NoError(t, err)
		sort.Strings(files)
		assert.Equal(t, filePaths, files)

		files, err = readFilesFromPath(dir, true)
		require.NoError(t, err)
		sort.Strings(files)
		assert.Equal(t, filePaths, files)
	})

	t.Run("dir with nested dir that has files", func(t *testing.T) {
		dir := newTempDir(t)
		defer os.RemoveAll(dir)

		nestedDir := filepath.Join(dir, "/nested/twice")
		require.NoError(t, os.MkdirAll(nestedDir, os.ModePerm))

		filePaths := []string{
			newTempFile(t, nestedDir).Name(),
			newTempFile(t, nestedDir).Name(),
		}
		sort.Strings(filePaths)

		files, err := readFilesFromPath(dir, false)
		require.NoError(t, err)
		sort.Strings(files)
		assert.Empty(t, files)

		files, err = readFilesFromPath(dir, true)
		require.NoError(t, err)
		sort.Strings(files)
		assert.Equal(t, filePaths, files)
	})
}

type pkgFileArgs struct {
	name     string
	filename string
	encoding pkger.Encoding
	flags    []flagArg
	args     []string
	envVars  map[string]string
}

func testPkgWrites(t *testing.T, newCmdFn func() *cobra.Command, args pkgFileArgs, assertFn func(t *testing.T, pkg *pkger.Pkg)) {
func testPkgWrites(t *testing.T, newCmdFn func(*globalFlags, genericCLIOpts) *cobra.Command, args pkgFileArgs, assertFn func(t *testing.T, pkg *pkger.Pkg)) {
	t.Helper()

	defer addEnvVars(t, args.envVars)()

	wrappedCmdFn := func() *cobra.Command {
		cmd := newCmdFn()
	wrappedCmdFn := func(w io.Writer) *cobra.Command {
		builder := newInfluxCmdBuilder(
			in(new(bytes.Buffer)),
			out(w),
		)
		cmd := builder.cmd(newCmdFn)
		cmd.SetArgs([]string{}) // clears mess from test runner coming into cobra cli via stdin
		return cmd
	}

@@ -340,7 +415,7 @@ func testPkgWrites(t *testing.T, newCmdFn func() *cobra.Command, args pkgFileArg
	t.Run(path.Join(args.name, "buffer"), testPkgWritesToBuffer(wrappedCmdFn, args, assertFn))
}

func testPkgWritesFile(newCmdFn func() *cobra.Command, args pkgFileArgs, assertFn func(t *testing.T, pkg *pkger.Pkg)) func(t *testing.T) {
func testPkgWritesFile(newCmdFn func(w io.Writer) *cobra.Command, args pkgFileArgs, assertFn func(t *testing.T, pkg *pkger.Pkg)) func(t *testing.T) {
	return func(t *testing.T) {
		t.Helper()

@@ -349,8 +424,8 @@ func testPkgWritesFile(newCmdFn func() *cobra.Command, args pkgFileArgs, assertF

		pathToFile := filepath.Join(tempDir, args.filename)

		cmd := newCmdFn()
		cmd.SetArgs(flagArgs(append(args.flags, flagArg{name: "file", val: pathToFile})...))
		cmd := newCmdFn(ioutil.Discard)
		cmd.SetArgs(append(args.args, "--file="+pathToFile))

		require.NoError(t, cmd.Execute())

@@ -361,14 +436,13 @@ func testPkgWritesFile(newCmdFn func() *cobra.Command, args pkgFileArgs, assertF
	}
}

func testPkgWritesToBuffer(newCmdFn func() *cobra.Command, args pkgFileArgs, assertFn func(t *testing.T, pkg *pkger.Pkg)) func(t *testing.T) {
func testPkgWritesToBuffer(newCmdFn func(w io.Writer) *cobra.Command, args pkgFileArgs, assertFn func(t *testing.T, pkg *pkger.Pkg)) func(t *testing.T) {
	return func(t *testing.T) {
		t.Helper()

		var buf bytes.Buffer
		cmd := newCmdFn()
		cmd.SetOut(&buf)
		cmd.SetArgs(flagArgs(args.flags...))
		cmd := newCmdFn(&buf)
		cmd.SetArgs(args.args)

		require.NoError(t, cmd.Execute())

@@ -414,6 +488,14 @@ func newTempDir(t *testing.T) string {
	return tempDir
}

func newTempFile(t *testing.T, dir string) *os.File {
	t.Helper()

	f, err := ioutil.TempFile(dir, "")
	require.NoError(t, err)
	return f
}

func idsStr(ids ...influxdb.ID) string {
	var idStrs []string
	for _, id := range ids {
@@ -16,15 +16,13 @@ var queryFlags struct {
	org organization
}

func cmdQuery() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "query [query literal or @/path/to/query.flux]",
		Short: "Execute a Flux query",
		Long: `Execute a literal Flux query provided as a string,
		or execute a literal Flux query contained in a file by specifying the file prefixed with an @ sign.`,
		Args: cobra.ExactArgs(1),
		RunE: wrapCheckSetup(fluxQueryF),
	}
func cmdQuery(f *globalFlags, opts genericCLIOpts) *cobra.Command {
	cmd := opts.newCmd("query [query literal or @/path/to/query.flux]", fluxQueryF)
	cmd.Short = "Execute a Flux query"
	cmd.Long = `Execute a literal Flux query provided as a string,
	or execute a literal Flux query contained in a file by specifying the file prefixed with an @ sign.`
	cmd.Args = cobra.ExactArgs(1)

	queryFlags.org.register(cmd, true)

	return cmd
@@ -21,13 +21,11 @@ var replFlags struct {
	org organization
}

func cmdREPL() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "repl",
		Short: "Interactive Flux REPL (read-eval-print-loop)",
		Args:  cobra.NoArgs,
		RunE:  wrapCheckSetup(replF),
	}
func cmdREPL(f *globalFlags, opt genericCLIOpts) *cobra.Command {
	cmd := opt.newCmd("repl", replF)
	cmd.Short = "Interactive Flux REPL (read-eval-print-loop)"
	cmd.Args = cobra.NoArgs

	replFlags.org.register(cmd, false)

	return cmd
@@ -0,0 +1,186 @@
package main

import (
	"context"
	"fmt"

	"github.com/influxdata/influxdb"
	"github.com/influxdata/influxdb/http"
	"github.com/spf13/cobra"
	input "github.com/tcnksm/go-input"
)

type secretSVCsFn func() (influxdb.SecretService, influxdb.OrganizationService, func(*input.UI) string, error)

func cmdSecret(f *globalFlags, opt genericCLIOpts) *cobra.Command {
	builder := newCmdSecretBuilder(newSecretSVCs, opt)
	builder.globalFlags = f
	return builder.cmd()
}

type cmdSecretBuilder struct {
	genericCLIOpts
	*globalFlags

	svcFn secretSVCsFn

	key   string
	value string
	org   organization
}

func newCmdSecretBuilder(svcsFn secretSVCsFn, opt genericCLIOpts) *cmdSecretBuilder {
	return &cmdSecretBuilder{
		genericCLIOpts: opt,
		svcFn:          svcsFn,
	}
}

func (b *cmdSecretBuilder) cmd() *cobra.Command {
	cmd := b.newCmd("secret", nil)
	cmd.Short = "Secret management commands"
	cmd.Run = seeHelp
	cmd.AddCommand(
		b.cmdDelete(),
		b.cmdFind(),
		b.cmdUpdate(),
	)
	return cmd
}

func (b *cmdSecretBuilder) cmdUpdate() *cobra.Command {
	cmd := b.newCmd("update", b.cmdUpdateRunEFn)
	cmd.Short = "Update secret"
	cmd.Flags().StringVarP(&b.key, "key", "k", "", "The secret key (required)")
	cmd.Flags().StringVarP(&b.value, "value", "v", "", "Optional secret value for scripting convenience, using this might expose the secret to your local history")
	cmd.MarkFlagRequired("key")
	b.org.register(cmd, false)

	return cmd
}

func (b *cmdSecretBuilder) cmdDelete() *cobra.Command {
	cmd := b.newCmd("delete", b.cmdDeleteRunEFn)
	cmd.Short = "Delete secret"

	cmd.Flags().StringVarP(&b.key, "key", "k", "", "The secret key (required)")
	cmd.MarkFlagRequired("key")
	b.org.register(cmd, false)

	return cmd
}

func (b *cmdSecretBuilder) cmdUpdateRunEFn(cmd *cobra.Command, args []string) error {
	scrSVC, orgSVC, getSecretFn, err := b.svcFn()
	if err != nil {
		return err
	}
	orgID, err := b.org.getID(orgSVC)
	if err != nil {
		return err
	}

	ctx := context.Background()

	ui := &input.UI{
		Writer: b.genericCLIOpts.w,
		Reader: b.genericCLIOpts.in,
	}
	var secret string
	if b.value != "" {
		secret = b.value
	} else {
		secret = getSecretFn(ui)
	}

	if err := scrSVC.PatchSecrets(ctx, orgID, map[string]string{b.key: secret}); err != nil {
		return fmt.Errorf("failed to update secret with key %q: %v", b.key, err)
	}

	w := b.newTabWriter()
	w.WriteHeaders("Key", "OrgID", "Updated")
	w.Write(map[string]interface{}{
		"Key":     b.key,
		"OrgID":   orgID,
		"Updated": true,
	})
	w.Flush()

	return nil
}

func (b *cmdSecretBuilder) cmdDeleteRunEFn(cmd *cobra.Command, args []string) error {
	scrSVC, orgSVC, _, err := b.svcFn()
	if err != nil {
		return err
	}
	orgID, err := b.org.getID(orgSVC)
	if err != nil {
		return err
	}

	ctx := context.Background()
	if err := scrSVC.DeleteSecret(ctx, orgID, b.key); err != nil {
		return fmt.Errorf("failed to delete secret with key %q: %v", b.key, err)
	}

	w := b.newTabWriter()
	w.WriteHeaders("Key", "OrgID", "Deleted")
	w.Write(map[string]interface{}{
		"Key":     b.key,
		"OrgID":   orgID,
		"Deleted": true,
	})
	w.Flush()

	return nil
}

func (b *cmdSecretBuilder) cmdFind() *cobra.Command {
	cmd := b.newCmd("list", b.cmdFindRunEFn)
	cmd.Short = "List secrets"
	cmd.Aliases = []string{"find", "ls"}
	b.org.register(cmd, false)

	return cmd
}

func (b *cmdSecretBuilder) cmdFindRunEFn(cmd *cobra.Command, args []string) error {

	scrSVC, orgSVC, _, err := b.svcFn()
	if err != nil {
		return err
	}

	orgID, err := b.org.getID(orgSVC)
	if err != nil {
		return err
	}

	secrets, err := scrSVC.GetSecretKeys(context.Background(), orgID)
	if err != nil {
		return fmt.Errorf("failed to retrieve secret keys: %s", err)
	}

	w := b.newTabWriter()
	w.WriteHeaders("Key", "OrganizationID")
	for _, s := range secrets {
		w.Write(map[string]interface{}{
			"Key":            s,
			"OrganizationID": orgID,
		})
	}
	w.Flush()

	return nil
}

func newSecretSVCs() (influxdb.SecretService, influxdb.OrganizationService, func(*input.UI) string, error) {
	httpClient, err := newHTTPClient()
	if err != nil {
		return nil, nil, nil, err
	}
	orgSvc := &http.OrganizationService{Client: httpClient}

	return &http.SecretService{Client: httpClient}, orgSvc, getSecret, nil
}
@@ -0,0 +1,240 @@
package main

import (
	"bytes"
	"context"
	"fmt"
	"io/ioutil"
	"testing"

	"github.com/influxdata/influxdb"
	"github.com/influxdata/influxdb/mock"
	"github.com/spf13/cobra"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	input "github.com/tcnksm/go-input"
)

func TestCmdSecret(t *testing.T) {
	orgID := influxdb.ID(9000)

	fakeSVCFn := func(svc influxdb.SecretService, fn func(*input.UI) string) secretSVCsFn {
		return func() (influxdb.SecretService, influxdb.OrganizationService, func(*input.UI) string, error) {
			return svc, &mock.OrganizationService{
				FindOrganizationF: func(ctx context.Context, filter influxdb.OrganizationFilter) (*influxdb.Organization, error) {
					return &influxdb.Organization{ID: orgID, Name: "influxdata"}, nil
				},
			}, fn, nil
		}
	}

	t.Run("list", func(t *testing.T) {
		type called []string
		tests := []struct {
			name     string
			expected called
			flags    []string
			command  string
			envVars  map[string]string
		}{
			{
				name:     "org id",
				flags:    []string{"--org-id=" + influxdb.ID(3).String()},
				envVars:  envVarsZeroMap,
				expected: called{"k1", "k2", "k3"},
			},
			{
				name:     "org",
				flags:    []string{"--org=rg"},
				envVars:  envVarsZeroMap,
				expected: called{"k1", "k2", "k3"},
			},
			{
				name: "env vars",
				envVars: map[string]string{
					"INFLUX_ORG": "rg",
				},
				flags:    []string{},
				expected: called{"k1", "k2", "k3"},
			},
			{
				name:     "ls alias",
				command:  "ls",
				flags:    []string{"--org=rg"},
				envVars:  envVarsZeroMap,
				expected: called{"k1", "k2", "k3"},
			},
			{
				name:     "find alias",
				command:  "find",
				flags:    []string{"--org=rg"},
				envVars:  envVarsZeroMap,
				expected: called{"k1", "k2", "k3"},
			},
		}

		cmdFn := func() (func(*globalFlags, genericCLIOpts) *cobra.Command, *called) {
			calls := new(called)
			svc := mock.NewSecretService()
			svc.GetSecretKeysFn = func(ctx context.Context, organizationID influxdb.ID) ([]string, error) {
				if !organizationID.Valid() {
					return []string{}, nil
				}
				*calls = []string{"k1", "k2", "k3"}
				return []string{}, nil
			}

			return func(g *globalFlags, opt genericCLIOpts) *cobra.Command {
				builder := newCmdSecretBuilder(fakeSVCFn(svc, nil), opt)
				return builder.cmd()
			}, calls
		}

		for _, tt := range tests {
			fn := func(t *testing.T) {
				defer addEnvVars(t, tt.envVars)()

				builder := newInfluxCmdBuilder(
					in(new(bytes.Buffer)),
					out(ioutil.Discard),
				)
				nestedCmdFn, calls := cmdFn()
				cmd := builder.cmd(nestedCmdFn)

				if tt.command == "" {
					tt.command = "list"
				}

				cmd.SetArgs(append([]string{"secret", tt.command}, tt.flags...))

				require.NoError(t, cmd.Execute())
				assert.Equal(t, tt.expected, *calls)
			}

			t.Run(tt.name, fn)
		}
	})

	t.Run("delete", func(t *testing.T) {
		tests := []struct {
			name        string
			expectedKey string
			flags       []string
		}{
			{
				name:        "with key",
				expectedKey: "key1",
				flags: []string{
					"--org=org name", "--key=key1",
				},
			},
			{
				name:        "shorts",
				expectedKey: "key1",
				flags:       []string{"-o=" + orgID.String(), "-k=key1"},
			},
		}

		cmdFn := func(expectedKey string) func(*globalFlags, genericCLIOpts) *cobra.Command {
			svc := mock.NewSecretService()
			svc.DeleteSecretFn = func(ctx context.Context, orgID influxdb.ID, ks ...string) error {
				if expectedKey != ks[0] {
					return fmt.Errorf("unexpected id:\n\twant= %s\n\tgot= %s", expectedKey, ks[0])
				}
				return nil
			}

			return func(g *globalFlags, opt genericCLIOpts) *cobra.Command {
				builder := newCmdSecretBuilder(fakeSVCFn(svc, nil), opt)
				return builder.cmd()
			}
		}

		for _, tt := range tests {
			fn := func(t *testing.T) {
				builder := newInfluxCmdBuilder(
					in(new(bytes.Buffer)),
					out(ioutil.Discard),
				)
				cmd := builder.cmd(cmdFn(tt.expectedKey))
				cmd.SetArgs(append([]string{"secret", "delete"}, tt.flags...))

				require.NoError(t, cmd.Execute())
			}

			t.Run(tt.name, fn)
		}
	})

	t.Run("update", func(t *testing.T) {
		tests := []struct {
			name        string
			expectedKey string
			flags       []string
		}{
			{
				name:        "with key",
				expectedKey: "key1",
				flags: []string{
					"--org=org name", "--key=key1",
				},
			},
			{
				name:        "with key and value",
				expectedKey: "key1",
				flags: []string{
					"--org=org name", "--key=key1", "--value=v1",
				},
			},
			{
				name:        "shorts",
				expectedKey: "key1",
				flags:       []string{"-o=" + orgID.String(), "-k=key1"},
			},
			{
				name:        "shorts with value",
				expectedKey: "key1",
				flags:       []string{"-o=" + orgID.String(), "-k=key1", "-v=v1"},
			},
		}

		cmdFn := func(expectedKey string) func(*globalFlags, genericCLIOpts) *cobra.Command {
			svc := mock.NewSecretService()
			svc.PatchSecretsFn = func(ctx context.Context, orgID influxdb.ID, m map[string]string) error {
				var key string
				for k := range m {
					key = k
					break
				}
				if expectedKey != key {
					return fmt.Errorf("unexpected id:\n\twant= %s\n\tgot= %s", expectedKey, key)
				}
				return nil
			}

			getSctFn := func(*input.UI) string {
				return "ss"
			}

			return func(g *globalFlags, opt genericCLIOpts) *cobra.Command {
				builder := newCmdSecretBuilder(fakeSVCFn(svc, getSctFn), opt)
				return builder.cmd()
			}
		}

		for _, tt := range tests {
			fn := func(t *testing.T) {
				builder := newInfluxCmdBuilder(
					in(new(bytes.Buffer)),
					out(ioutil.Discard),
				)
				cmd := builder.cmd(cmdFn(tt.expectedKey))
				cmd.SetArgs(append([]string{"secret", "update"}, tt.flags...))

				require.NoError(t, cmd.Execute())
			}

			t.Run(tt.name, fn)
		}
	})
}
@ -6,6 +6,7 @@ import (
|
|||
"os"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
platform "github.com/influxdata/influxdb"
|
||||
"github.com/influxdata/influxdb/cmd/influx/internal"
|
||||
|
@ -20,23 +21,20 @@ var setupFlags struct {
|
|||
token string
|
||||
org string
|
||||
bucket string
|
||||
retention int
|
||||
retention time.Duration
|
||||
force bool
|
||||
}
|
||||
|
||||
func cmdSetup() *cobra.Command {
|
||||
cmd := &cobra.Command{
|
||||
Use: "setup",
|
||||
Short: "Setup instance with initial user, org, bucket",
|
||||
RunE: wrapErrorFmt(setupF),
|
||||
}
|
||||
func cmdSetup(f *globalFlags, opt genericCLIOpts) *cobra.Command {
|
||||
cmd := opt.newCmd("setup", setupF)
|
||||
cmd.Short = "Setup instance with initial user, org, bucket"
|
||||
|
||||
cmd.Flags().StringVarP(&setupFlags.username, "username", "u", "", "primary username")
|
||||
cmd.Flags().StringVarP(&setupFlags.password, "password", "p", "", "password for username")
|
||||
cmd.Flags().StringVarP(&setupFlags.token, "token", "t", "", "token for username, else auto-generated")
|
||||
cmd.Flags().StringVarP(&setupFlags.org, "org", "o", "", "primary organization name")
|
||||
cmd.Flags().StringVarP(&setupFlags.bucket, "bucket", "b", "", "primary bucket name")
|
||||
cmd.Flags().IntVarP(&setupFlags.retention, "retention", "r", -1, "retention period in hours, else infinite")
|
||||
cmd.Flags().DurationVarP(&setupFlags.retention, "retention", "r", -1, "Duration bucket will retain data. 0 is infinite. Default is 0.")
|
||||
cmd.Flags().BoolVarP(&setupFlags.force, "force", "f", false, "skip confirmation prompt")
|
||||
|
||||
return cmd
|
||||
|
@ -124,12 +122,14 @@ func onboardingRequest() (*platform.OnboardingRequest, error) {

func nonInteractive() (*platform.OnboardingRequest, error) {
	req := &platform.OnboardingRequest{
		User:            setupFlags.username,
		Password:        setupFlags.password,
		Token:           setupFlags.token,
		Org:             setupFlags.org,
		Bucket:          setupFlags.bucket,
		RetentionPeriod: uint(setupFlags.retention),
		User:     setupFlags.username,
		Password: setupFlags.password,
		Token:    setupFlags.token,
		Org:      setupFlags.org,
		Bucket:   setupFlags.bucket,
		// TODO: this manipulation is required by the API, something that
		// we should fixup to be a duration instead
		RetentionPeriod: uint(setupFlags.retention / time.Hour),
	}

	if setupFlags.retention < 0 {
@ -236,10 +236,32 @@ You have entered:
	}
}

var (
	errPasswordIsNotMatch = fmt.Errorf("passwords do not match")
	errPasswordIsTooShort = fmt.Errorf("passwords is too short")
)
var errPasswordNotMatch = fmt.Errorf("passwords do not match")

var errPasswordIsTooShort error = fmt.Errorf("password is too short")

func getSecret(ui *input.UI) (secret string) {
	var err error
	query := string(promptWithColor("Please type your secret", colorCyan))
	for {
		secret, err = ui.Ask(query, &input.Options{
			Required:  true,
			HideOrder: true,
			Hide:      true,
			Mask:      false,
		})
		switch err {
		case input.ErrInterrupted:
			os.Exit(1)
		default:
			if secret = strings.TrimSpace(secret); secret == "" {
				continue
			}
		}
		break
	}
	return secret
}

func getPassword(ui *input.UI, showNew bool) (password string) {
	newStr := ""

@ -247,7 +269,7 @@ func getPassword(ui *input.UI, showNew bool) (password string) {
		newStr = " new"
	}
	var err error
enterPasswd:
enterPassword:
	query := string(promptWithColor("Please type your"+newStr+" password", colorCyan))
	for {
		password, err = ui.Ask(query, &input.Options{

@ -266,8 +288,8 @@ enterPasswd:
		case input.ErrInterrupted:
			os.Exit(1)
		case errPasswordIsTooShort:
			ui.Writer.Write(promptWithColor("Password too short - minimum length is 8 characters!", colorRed))
			goto enterPasswd
			ui.Writer.Write(promptWithColor("Password too short - minimum length is 8 characters!\n\r", colorRed))
			continue
		default:
			if password = strings.TrimSpace(password); password == "" {
				continue

@ -283,7 +305,7 @@ enterPasswd:
			Hide: true,
			ValidateFunc: func(s string) error {
				if s != password {
					return errPasswordIsNotMatch
					return errPasswordNotMatch
				}
				return nil
			},

@ -295,7 +317,7 @@ enterPasswd:
			// Nothing.
		default:
			ui.Writer.Write(promptWithColor("Passwords do not match!\n", colorRed))
			goto enterPasswd
			goto enterPassword
		}
		break
	}
@ -7,33 +7,32 @@ import (
	"time"

	"github.com/influxdata/flux/repl"
	platform "github.com/influxdata/influxdb"
	"github.com/influxdata/influxdb"
	"github.com/influxdata/influxdb/cmd/influx/internal"
	"github.com/influxdata/influxdb/http"
	"github.com/spf13/cobra"
)

func cmdTask() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "task",
		Short: "Task management commands",
		RunE: wrapCheckSetup(func(cmd *cobra.Command, args []string) error {
			if flags.local {
				return fmt.Errorf("local flag not supported for task command")
			}
func cmdTask(f *globalFlags, opt genericCLIOpts) *cobra.Command {
	runE := func(cmd *cobra.Command, args []string) error {
		if flags.local {
			return fmt.Errorf("local flag not supported for task command")
		}

			seeHelp(cmd, args)
			return nil
		}),
		seeHelp(cmd, args)
		return nil
	}

	cmd := opt.newCmd("task", runE)
	cmd.Short = "Task management commands"

	cmd.AddCommand(
		taskLogCmd(),
		taskRunCmd(),
		taskCreateCmd(),
		taskDeleteCmd(),
		taskFindCmd(),
		taskUpdateCmd(),
		taskLogCmd(opt),
		taskRunCmd(opt),
		taskCreateCmd(opt),
		taskDeleteCmd(opt),
		taskFindCmd(opt),
		taskUpdateCmd(opt),
	)

	return cmd
@ -43,13 +42,10 @@ var taskCreateFlags struct {
	org organization
}

func taskCreateCmd() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "create [query literal or @/path/to/query.flux]",
		Short: "Create task",
		Args:  cobra.ExactArgs(1),
		RunE:  wrapCheckSetup(taskCreateF),
	}
func taskCreateCmd(opt genericCLIOpts) *cobra.Command {
	cmd := opt.newCmd("create [query literal or @/path/to/query.flux]", taskCreateF)
	cmd.Args = cobra.ExactArgs(1)
	cmd.Short = "Create task"

	taskCreateFlags.org.register(cmd, false)

@ -61,9 +57,13 @@ func taskCreateF(cmd *cobra.Command, args []string) error {
		return err
	}

	client, err := newHTTPClient()
	if err != nil {
		return err
	}

	s := &http.TaskService{
		Addr:               flags.host,
		Token:              flags.token,
		Client:             client,
		InsecureSkipVerify: flags.skipVerify,
	}

@ -72,7 +72,7 @@ func taskCreateF(cmd *cobra.Command, args []string) error {
		return fmt.Errorf("error parsing flux script: %s", err)
	}

	tc := platform.TaskCreate{
	tc := influxdb.TaskCreate{
		Flux:         flux,
		Organization: taskCreateFlags.org.name,
	}
@ -126,17 +126,15 @@ var taskFindFlags struct {
	org organization
}

func taskFindCmd() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "find",
		Short: "Find tasks",
		RunE:  wrapCheckSetup(taskFindF),
	}
func taskFindCmd(opt genericCLIOpts) *cobra.Command {
	cmd := opt.newCmd("list", taskFindF)
	cmd.Short = "List tasks"
	cmd.Aliases = []string{"find", "ls"}

	taskFindFlags.org.register(cmd, false)
	cmd.Flags().StringVarP(&taskFindFlags.id, "id", "i", "", "task ID")
	cmd.Flags().StringVarP(&taskFindFlags.user, "user-id", "n", "", "task owner ID")
	cmd.Flags().IntVarP(&taskFindFlags.limit, "limit", "", platform.TaskDefaultPageSize, "the number of tasks to find")
	cmd.Flags().IntVarP(&taskFindFlags.limit, "limit", "", influxdb.TaskDefaultPageSize, "the number of tasks to find")
	cmd.Flags().BoolVar(&taskFindFlags.headers, "headers", true, "To print the table headers; defaults true")

	return cmd

@ -146,15 +144,20 @@ func taskFindF(cmd *cobra.Command, args []string) error {
	if err := taskFindFlags.org.validOrgFlags(); err != nil {
		return err
	}

	client, err := newHTTPClient()
	if err != nil {
		return err
	}

	s := &http.TaskService{
		Addr:               flags.host,
		Token:              flags.token,
		Client:             client,
		InsecureSkipVerify: flags.skipVerify,
	}

	filter := platform.TaskFilter{}
	filter := influxdb.TaskFilter{}
	if taskFindFlags.user != "" {
		id, err := platform.IDFromString(taskFindFlags.user)
		id, err := influxdb.IDFromString(taskFindFlags.user)
		if err != nil {
			return err
		}

@ -165,23 +168,22 @@ func taskFindF(cmd *cobra.Command, args []string) error {
		filter.Organization = taskFindFlags.org.name
	}
	if taskFindFlags.org.id != "" {
		id, err := platform.IDFromString(taskFindFlags.org.id)
		id, err := influxdb.IDFromString(taskFindFlags.org.id)
		if err != nil {
			return err
		}
		filter.OrganizationID = id
	}

	if taskFindFlags.limit < 1 || taskFindFlags.limit > platform.TaskMaxPageSize {
		return fmt.Errorf("limit must be between 1 and %d", platform.TaskMaxPageSize)
	if taskFindFlags.limit < 1 || taskFindFlags.limit > influxdb.TaskMaxPageSize {
		return fmt.Errorf("limit must be between 1 and %d", influxdb.TaskMaxPageSize)
	}
	filter.Limit = taskFindFlags.limit

	var tasks []http.Task
	var err error

	if taskFindFlags.id != "" {
		id, err := platform.IDFromString(taskFindFlags.id)
		id, err := influxdb.IDFromString(taskFindFlags.id)
		if err != nil {
			return err
		}
@ -232,33 +234,34 @@ var taskUpdateFlags struct {
	status string
}

func taskUpdateCmd() *cobra.Command {
	taskUpdateCmd := &cobra.Command{
		Use:   "update",
		Short: "Update task",
		RunE:  wrapCheckSetup(taskUpdateF),
	}
func taskUpdateCmd(opt genericCLIOpts) *cobra.Command {
	cmd := opt.newCmd("update", taskUpdateF)
	cmd.Short = "Update task"

	taskUpdateCmd.Flags().StringVarP(&taskUpdateFlags.id, "id", "i", "", "task ID (required)")
	taskUpdateCmd.Flags().StringVarP(&taskUpdateFlags.status, "status", "", "", "update task status")
	taskUpdateCmd.MarkFlagRequired("id")
	cmd.Flags().StringVarP(&taskUpdateFlags.id, "id", "i", "", "task ID (required)")
	cmd.Flags().StringVarP(&taskUpdateFlags.status, "status", "", "", "update task status")
	cmd.MarkFlagRequired("id")

	return taskUpdateCmd
	return cmd
}

func taskUpdateF(cmd *cobra.Command, args []string) error {
	client, err := newHTTPClient()
	if err != nil {
		return err
	}

	s := &http.TaskService{
		Addr:               flags.host,
		Token:              flags.token,
		Client:             client,
		InsecureSkipVerify: flags.skipVerify,
	}

	var id platform.ID
	var id influxdb.ID
	if err := id.DecodeFromString(taskUpdateFlags.id); err != nil {
		return err
	}

	update := platform.TaskUpdate{}
	update := influxdb.TaskUpdate{}
	if taskUpdateFlags.status != "" {
		update.Status = &taskUpdateFlags.status
	}
@ -305,12 +308,9 @@ var taskDeleteFlags struct {
	id string
}

func taskDeleteCmd() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "delete",
		Short: "Delete task",
		RunE:  wrapCheckSetup(taskDeleteF),
	}
func taskDeleteCmd(opt genericCLIOpts) *cobra.Command {
	cmd := opt.newCmd("delete", taskDeleteF)
	cmd.Short = "Delete task"

	cmd.Flags().StringVarP(&taskDeleteFlags.id, "id", "i", "", "task id (required)")
	cmd.MarkFlagRequired("id")

@ -319,14 +319,18 @@ func taskDeleteCmd() *cobra.Command {
}

func taskDeleteF(cmd *cobra.Command, args []string) error {
	client, err := newHTTPClient()
	if err != nil {
		return err
	}

	s := &http.TaskService{
		Addr:               flags.host,
		Token:              flags.token,
		Client:             client,
		InsecureSkipVerify: flags.skipVerify,
	}

	var id platform.ID
	err := id.DecodeFromString(taskDeleteFlags.id)
	var id influxdb.ID
	err = id.DecodeFromString(taskDeleteFlags.id)
	if err != nil {
		return err
	}

@ -366,15 +370,13 @@ func taskDeleteF(cmd *cobra.Command, args []string) error {
	return nil
}

func taskLogCmd() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "log",
		Short: "Log related commands",
		Run:   seeHelp,
	}
func taskLogCmd(opt genericCLIOpts) *cobra.Command {
	cmd := opt.newCmd("log", nil)
	cmd.Run = seeHelp
	cmd.Short = "Log related commands"

	cmd.AddCommand(
		taskLogFindCmd(),
		taskLogFindCmd(opt),
	)

	return cmd
@ -385,12 +387,10 @@ var taskLogFindFlags struct {
	runID string
}

func taskLogFindCmd() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "find",
		Short: "find logs for task",
		RunE:  wrapCheckSetup(taskLogFindF),
	}
func taskLogFindCmd(opt genericCLIOpts) *cobra.Command {
	cmd := opt.newCmd("list", taskLogFindF)
	cmd.Short = "List logs for task"
	cmd.Aliases = []string{"find", "ls"}

	cmd.Flags().StringVarP(&taskLogFindFlags.taskID, "task-id", "", "", "task id (required)")
	cmd.Flags().StringVarP(&taskLogFindFlags.runID, "run-id", "", "", "run id")

@ -400,21 +400,25 @@ func taskLogFindCmd() *cobra.Command {
}

func taskLogFindF(cmd *cobra.Command, args []string) error {
	client, err := newHTTPClient()
	if err != nil {
		return err
	}

	s := &http.TaskService{
		Addr:               flags.host,
		Token:              flags.token,
		Client:             client,
		InsecureSkipVerify: flags.skipVerify,
	}

	var filter platform.LogFilter
	id, err := platform.IDFromString(taskLogFindFlags.taskID)
	var filter influxdb.LogFilter
	id, err := influxdb.IDFromString(taskLogFindFlags.taskID)
	if err != nil {
		return err
	}
	filter.Task = *id

	if taskLogFindFlags.runID != "" {
		id, err := platform.IDFromString(taskLogFindFlags.runID)
		id, err := influxdb.IDFromString(taskLogFindFlags.runID)
		if err != nil {
			return err
		}

@ -445,15 +449,13 @@ func taskLogFindF(cmd *cobra.Command, args []string) error {
	return nil
}

func taskRunCmd() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "run",
		Short: "Run related commands",
		Run:   seeHelp,
	}
func taskRunCmd(opt genericCLIOpts) *cobra.Command {
	cmd := opt.newCmd("run", nil)
	cmd.Run = seeHelp
	cmd.Short = "List runs for a task"
	cmd.AddCommand(
		taskRunFindCmd(),
		taskRunRetryCmd(),
		taskRunFindCmd(opt),
		taskRunRetryCmd(opt),
	)

	return cmd
@ -467,12 +469,10 @@ var taskRunFindFlags struct {
	limit int
}

func taskRunFindCmd() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "find",
		Short: "find runs for a task",
		RunE:  wrapCheckSetup(taskRunFindF),
	}
func taskRunFindCmd(opt genericCLIOpts) *cobra.Command {
	cmd := opt.newCmd("list", taskRunFindF)
	cmd.Short = "List runs for a task"
	cmd.Aliases = []string{"find", "ls"}

	cmd.Flags().StringVarP(&taskRunFindFlags.taskID, "task-id", "", "", "task id (required)")
	cmd.Flags().StringVarP(&taskRunFindFlags.runID, "run-id", "", "", "run id")

@ -486,26 +486,30 @@ func taskRunFindCmd() *cobra.Command {
}

func taskRunFindF(cmd *cobra.Command, args []string) error {
	client, err := newHTTPClient()
	if err != nil {
		return err
	}

	s := &http.TaskService{
		Addr:               flags.host,
		Token:              flags.token,
		Client:             client,
		InsecureSkipVerify: flags.skipVerify,
	}

	filter := platform.RunFilter{
	filter := influxdb.RunFilter{
		Limit:      taskRunFindFlags.limit,
		AfterTime:  taskRunFindFlags.afterTime,
		BeforeTime: taskRunFindFlags.beforeTime,
	}
	taskID, err := platform.IDFromString(taskRunFindFlags.taskID)
	taskID, err := influxdb.IDFromString(taskRunFindFlags.taskID)
	if err != nil {
		return err
	}
	filter.Task = *taskID

	var runs []*platform.Run
	var runs []*influxdb.Run
	if taskRunFindFlags.runID != "" {
		id, err := platform.IDFromString(taskRunFindFlags.runID)
		id, err := influxdb.IDFromString(taskRunFindFlags.runID)
		if err != nil {
			return err
		}

@ -557,12 +561,9 @@ var runRetryFlags struct {
	taskID, runID string
}

func taskRunRetryCmd() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "retry",
		Short: "retry a run",
		RunE:  wrapCheckSetup(runRetryF),
	}
func taskRunRetryCmd(opt genericCLIOpts) *cobra.Command {
	cmd := opt.newCmd("retry", runRetryF)
	cmd.Short = "retry a run"

	cmd.Flags().StringVarP(&runRetryFlags.taskID, "task-id", "i", "", "task id (required)")
	cmd.Flags().StringVarP(&runRetryFlags.runID, "run-id", "r", "", "run id (required)")

@ -573,13 +574,17 @@ func taskRunRetryCmd() *cobra.Command {
}

func runRetryF(cmd *cobra.Command, args []string) error {
	client, err := newHTTPClient()
	if err != nil {
		return err
	}

	s := &http.TaskService{
		Addr:               flags.host,
		Token:              flags.token,
		Client:             client,
		InsecureSkipVerify: flags.skipVerify,
	}

	var taskID, runID platform.ID
	var taskID, runID influxdb.ID
	if err := taskID.DecodeFromString(runRetryFlags.taskID); err != nil {
		return err
	}
@ -16,18 +16,17 @@ var transpileFlags struct {
	Now string
}

func cmdTranspile() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "transpile [InfluxQL query]",
		Short: "Transpile an InfluxQL query to Flux source code",
		Long: `Transpile an InfluxQL query to Flux source code.
func cmdTranspile(f *globalFlags, opt genericCLIOpts) *cobra.Command {
	cmd := opt.newCmd("transpile [InfluxQL query]", transpileF)
	cmd.Args = cobra.ExactArgs(1)
	cmd.Short = "Transpile an InfluxQL query to Flux source code"
	cmd.Long = `Transpile an InfluxQL query to Flux source code.

The transpiled query assumes that the bucket name is the of the form '<database>/<retention policy>'.

The transpiled query will be written for absolute time ranges using the provided now() time.`,
		Args: cobra.ExactArgs(1),
		RunE: transpileF,
	}
The transpiled query will be written for absolute time ranges using the provided now() time.`

	opts := flagOpts{
		{
			DestP: &transpileFlags.Now,
@ -4,10 +4,8 @@ import (
	"context"
	"errors"
	"fmt"
	"os"

	"github.com/influxdata/influxdb"
	"github.com/influxdata/influxdb/cmd/influx/internal"
	"github.com/influxdata/influxdb/http"
	"github.com/spf13/cobra"
	input "github.com/tcnksm/go-input"

@ -26,12 +24,15 @@ type cmdUserDeps struct {
	getPassFn func(*input.UI, bool) string
}

func cmdUser(opts ...genericCLIOptFn) *cobra.Command {
	return newCmdUserBuilder(newUserSVC, opts...).cmd()
func cmdUser(f *globalFlags, opt genericCLIOpts) *cobra.Command {
	builder := newCmdUserBuilder(newUserSVC, opt)
	builder.globalFlags = f
	return builder.cmd()
}

type cmdUserBuilder struct {
	genericCLIOpts
	*globalFlags

	svcFn userSVCsFn

@ -41,15 +42,7 @@ type cmdUserBuilder struct {
	org organization
}

func newCmdUserBuilder(svcsFn userSVCsFn, opts ...genericCLIOptFn) *cmdUserBuilder {
	opt := genericCLIOpts{
		in: os.Stdin,
		w:  os.Stdout,
	}
	for _, o := range opts {
		o(&opt)
	}

func newCmdUserBuilder(svcsFn userSVCsFn, opt genericCLIOpts) *cmdUserBuilder {
	return &cmdUserBuilder{
		genericCLIOpts: opt,
		svcFn:          svcsFn,

@ -184,7 +177,7 @@ func (b *cmdUserBuilder) cmdUpdateRunEFn(cmd *cobra.Command, args []string) erro
		return err
	}

	w := internal.NewTabWriter(b.w)
	w := b.newTabWriter()
	w.WriteHeaders(
		"ID",
		"Name",

@ -247,7 +240,7 @@ func (b *cmdUserBuilder) cmdCreateRunEFn(*cobra.Command, []string) error {
	for i, h := range headers {
		m[h] = vals[i]
	}
	w := internal.NewTabWriter(b.w)
	w := b.newTabWriter()
	w.WriteHeaders(headers...)
	w.Write(m)
	w.Flush()

@ -287,8 +280,9 @@ func (b *cmdUserBuilder) cmdCreateRunEFn(*cobra.Command, []string) error {
}

func (b *cmdUserBuilder) cmdFind() *cobra.Command {
	cmd := b.newCmd("find", b.cmdFindRunEFn)
	cmd.Short = "Find user"
	cmd := b.newCmd("list", b.cmdFindRunEFn)
	cmd.Short = "List users"
	cmd.Aliases = []string{"find", "ls"}

	cmd.Flags().StringVarP(&b.id, "id", "i", "", "The user ID")
	cmd.Flags().StringVarP(&b.name, "name", "n", "", "The user name")

@ -319,7 +313,7 @@ func (b *cmdUserBuilder) cmdFindRunEFn(*cobra.Command, []string) error {
		return err
	}

	w := internal.NewTabWriter(b.w)
	w := b.newTabWriter()
	w.WriteHeaders(
		"ID",
		"Name",

@ -366,7 +360,7 @@ func (b *cmdUserBuilder) cmdDeleteRunEFn(cmd *cobra.Command, args []string) erro
		return err
	}

	w := internal.NewTabWriter(b.w)
	w := b.newTabWriter()
	w.WriteHeaders(
		"ID",
		"Name",
@ -17,44 +17,40 @@ import (
	input "github.com/tcnksm/go-input"
)

func newCMDUserDeps(
	userSVC influxdb.UserService,
	passSVC influxdb.PasswordsService,
	getPassFn func(*input.UI, bool) string,
) cmdUserDeps {
	return cmdUserDeps{
		userSVC: userSVC,
		orgSvc: &mock.OrganizationService{
			FindOrganizationF: func(ctx context.Context, filter influxdb.OrganizationFilter) (*influxdb.Organization, error) {
				return &influxdb.Organization{ID: influxdb.ID(9000), Name: "influxdata"}, nil
			},
		},
		passSVC: passSVC,
		urmSVC: &mock.UserResourceMappingService{
			CreateMappingFn: func(context.Context, *platform.UserResourceMapping) error {
				return nil
			},
		},
		getPassFn: getPassFn,
	}
}

func TestCmdUser(t *testing.T) {
	setViperOptions()

	type userResult struct {
		user     influxdb.User
		password string
	}

	fakeSVCFn := func(dep cmdUserDeps) userSVCsFn {
		return func() (
			cmdUserDeps,
			error) {
		return func() (cmdUserDeps, error) {
			return dep, nil
		}
	}

	newCMDUserDeps := func(
		userSVC influxdb.UserService,
		passSVC influxdb.PasswordsService,
		getPassFn func(*input.UI, bool) string,
	) cmdUserDeps {
		return cmdUserDeps{
			userSVC: userSVC,
			orgSvc: &mock.OrganizationService{
				FindOrganizationF: func(ctx context.Context, filter influxdb.OrganizationFilter) (*influxdb.Organization, error) {
					return &influxdb.Organization{ID: influxdb.ID(9000), Name: "influxdata"}, nil
				},
			},
			passSVC: passSVC,
			urmSVC: &mock.UserResourceMappingService{
				CreateMappingFn: func(context.Context, *platform.UserResourceMapping) error {
					return nil
				},
			},
			getPassFn: getPassFn,
		}
	}

	t.Run("create", func(t *testing.T) {
		tests := []struct {
			name string
@ -116,7 +112,7 @@ func TestCmdUser(t *testing.T) {
			},
		}

		cmdFn := func(expected userResult) *cobra.Command {
		cmdFn := func(expected userResult) func(*globalFlags, genericCLIOpts) *cobra.Command {
			svc := mock.NewUserService()
			svc.CreateUserFn = func(ctx context.Context, User *influxdb.User) error {
				if expected.user != *User {

@ -132,19 +128,25 @@ func TestCmdUser(t *testing.T) {
				return nil
			}

			builder := newCmdUserBuilder(fakeSVCFn(newCMDUserDeps(svc, passSVC, nil)), out(ioutil.Discard))
			cmd := builder.cmdCreate()
			cmd.RunE = builder.cmdCreateRunEFn
			return cmd
			return func(g *globalFlags, opt genericCLIOpts) *cobra.Command {
				builder := newCmdUserBuilder(fakeSVCFn(newCMDUserDeps(svc, passSVC, nil)), opt)
				return builder.cmd()
			}
		}

		for _, tt := range tests {
			fn := func(t *testing.T) {
				defer addEnvVars(t, tt.envVars)()
				cmd := cmdFn(tt.expected)
				cmd.SetArgs(tt.flags)
				err := cmd.Execute()
				require.NoError(t, err)
				builder := newInfluxCmdBuilder(
					in(new(bytes.Buffer)),
					out(ioutil.Discard),
				)
				cmd := builder.cmd(cmdFn(tt.expected))
				cmd.SetArgs(append([]string{"user", "create"}, tt.flags...))

				require.NoError(t, cmd.Execute())
			}

			t.Run(tt.name, fn)
@ -169,7 +171,7 @@ func TestCmdUser(t *testing.T) {
			},
		}

		cmdFn := func(expectedID influxdb.ID) *cobra.Command {
		cmdFn := func(expectedID influxdb.ID) func(*globalFlags, genericCLIOpts) *cobra.Command {
			svc := mock.NewUserService()
			svc.FindUserByIDFn = func(ctx context.Context, id influxdb.ID) (*influxdb.User, error) {
				return &influxdb.User{ID: id}, nil

@ -181,17 +183,22 @@ func TestCmdUser(t *testing.T) {
				return nil
			}

			builder := newCmdUserBuilder(fakeSVCFn(newCMDUserDeps(svc, nil, nil)), out(ioutil.Discard))
			cmd := builder.cmdDelete()
			cmd.RunE = builder.cmdDeleteRunEFn
			return cmd
			return func(g *globalFlags, opt genericCLIOpts) *cobra.Command {
				builder := newCmdUserBuilder(fakeSVCFn(newCMDUserDeps(svc, nil, nil)), opt)
				return builder.cmd()
			}
		}

		for _, tt := range tests {
			fn := func(t *testing.T) {
				cmd := cmdFn(tt.expectedID)
				builder := newInfluxCmdBuilder(
					in(new(bytes.Buffer)),
					out(ioutil.Discard),
				)
				cmd := builder.cmd(cmdFn(tt.expectedID))
				idFlag := tt.flag + tt.expectedID.String()
				cmd.SetArgs([]string{idFlag})
				cmd.SetArgs([]string{"user", "delete", idFlag})

				require.NoError(t, cmd.Execute())
			}

@ -199,7 +206,7 @@ func TestCmdUser(t *testing.T) {
	}
	})

	t.Run("find", func(t *testing.T) {
	t.Run("list", func(t *testing.T) {
		type called struct {
			name string
			id   influxdb.ID
@ -208,6 +215,7 @@ func TestCmdUser(t *testing.T) {
		tests := []struct {
			name     string
			expected called
			command  string
			flags    []string
		}{
			{

@ -232,9 +240,29 @@ func TestCmdUser(t *testing.T) {
				},
				expected: called{name: "name1", id: 1},
			},
			{
				name:    "ls alias",
				command: "ls",
				flags: []string{
					"--id=" + influxdb.ID(2).String(),
				},
				expected: called{
					id: 2,
				},
			},
			{
				name:    "find alias",
				command: "find",
				flags: []string{
					"--id=" + influxdb.ID(2).String(),
				},
				expected: called{
					id: 2,
				},
			},
		}

		cmdFn := func() (*cobra.Command, *called) {
		cmdFn := func() (func(*globalFlags, genericCLIOpts) *cobra.Command, *called) {
			calls := new(called)

			svc := mock.NewUserService()

@ -248,16 +276,26 @@ func TestCmdUser(t *testing.T) {
				return nil, 0, nil
			}

			builder := newCmdUserBuilder(fakeSVCFn(newCMDUserDeps(svc, nil, nil)), in(new(bytes.Buffer)), out(ioutil.Discard))
			cmd := builder.cmdFind()
			cmd.RunE = builder.cmdFindRunEFn
			return cmd, calls
			return func(g *globalFlags, opt genericCLIOpts) *cobra.Command {
				builder := newCmdUserBuilder(fakeSVCFn(newCMDUserDeps(svc, nil, nil)), opt)
				return builder.cmd()
			}, calls
		}

		for _, tt := range tests {
			fn := func(t *testing.T) {
				cmd, calls := cmdFn()
				cmd.SetArgs(tt.flags)
				builder := newInfluxCmdBuilder(
					in(new(bytes.Buffer)),
					out(ioutil.Discard),
				)
				nestedCmdFn, calls := cmdFn()
				cmd := builder.cmd(nestedCmdFn)

				if tt.command == "" {
					tt.command = "list"
				}

				cmd.SetArgs(append([]string{"user", tt.command}, tt.flags...))

				require.NoError(t, cmd.Execute())
				assert.Equal(t, tt.expected, *calls)
@ -305,7 +343,7 @@ func TestCmdUser(t *testing.T) {
			},
		}

		cmdFn := func(expectedUpdate influxdb.UserUpdate) *cobra.Command {
		cmdFn := func(expectedUpdate influxdb.UserUpdate) func(*globalFlags, genericCLIOpts) *cobra.Command {
			svc := mock.NewUserService()
			svc.UpdateUserFn = func(ctx context.Context, id influxdb.ID, upd influxdb.UserUpdate) (*influxdb.User, error) {
				if id != 3 {

@ -317,16 +355,21 @@ func TestCmdUser(t *testing.T) {
				return &influxdb.User{}, nil
			}

			builder := newCmdUserBuilder(fakeSVCFn(newCMDUserDeps(svc, nil, nil)), out(ioutil.Discard))
			cmd := builder.cmdUpdate()
			cmd.RunE = builder.cmdUpdateRunEFn
			return cmd
			return func(g *globalFlags, opt genericCLIOpts) *cobra.Command {
				builder := newCmdUserBuilder(fakeSVCFn(newCMDUserDeps(svc, nil, nil)), opt)
				return builder.cmd()
			}
		}

		for _, tt := range tests {
			fn := func(t *testing.T) {
				cmd := cmdFn(tt.expected)
				cmd.SetArgs(tt.flags)
				builder := newInfluxCmdBuilder(
					in(new(bytes.Buffer)),
					out(ioutil.Discard),
				)
				cmd := builder.cmd(cmdFn(tt.expected))
				cmd.SetArgs(append([]string{"user", "update"}, tt.flags...))

				require.NoError(t, cmd.Execute())
			}
@ -357,7 +400,7 @@ func TestCmdUser(t *testing.T) {
			},
		}

		cmdFn := func(expected string) *cobra.Command {
		cmdFn := func(expected string) func(*globalFlags, genericCLIOpts) *cobra.Command {
			svc := mock.NewUserService()
			svc.FindUserFn = func(ctx context.Context, f influxdb.UserFilter) (*influxdb.User, error) {
				usr := new(influxdb.User)

@ -380,17 +423,22 @@ func TestCmdUser(t *testing.T) {
			getPassFn := func(*input.UI, bool) string {
				return expected
			}
			builder := newCmdUserBuilder(fakeSVCFn(newCMDUserDeps(svc, passSVC, getPassFn)),
				out(ioutil.Discard))
			cmd := builder.cmdPassword()
			cmd.RunE = builder.cmdPasswordRunEFn
			return cmd

			return func(g *globalFlags, opt genericCLIOpts) *cobra.Command {
				builder := newCmdUserBuilder(fakeSVCFn(newCMDUserDeps(svc, passSVC, getPassFn)), opt)
				return builder.cmd()
			}
		}

		for _, tt := range tests {
			fn := func(t *testing.T) {
				cmd := cmdFn(tt.expected)
				cmd.SetArgs(tt.flags)
				builder := newInfluxCmdBuilder(
					in(new(bytes.Buffer)),
					out(ioutil.Discard),
				)
				cmd := builder.cmd(cmdFn(tt.expected))
				cmd.SetArgs(append([]string{"user", "password"}, tt.flags...))

				require.NoError(t, cmd.Execute())
			}
@ -23,15 +23,12 @@ var writeFlags struct {
	Precision string
}

func cmdWrite() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "write line protocol or @/path/to/points.txt",
		Short: "Write points to InfluxDB",
		Long: `Write a single line of line protocol to InfluxDB,
or add an entire file specified with an @ prefix.`,
		Args: cobra.ExactArgs(1),
		RunE: wrapCheckSetup(fluxWriteF),
	}
func cmdWrite(f *globalFlags, opt genericCLIOpts) *cobra.Command {
	cmd := opt.newCmd("write line protocol or @/path/to/points.txt", fluxWriteF)
	cmd.Args = cobra.ExactArgs(1)
	cmd.Short = "Write points to InfluxDB"
	cmd.Long = `Write a single line of line protocol to InfluxDB,
or add an entire file specified with an @ prefix.`

	opts := flagOpts{
		{
@ -12,7 +12,7 @@ import (
|
|||
"github.com/influxdata/influxdb/kit/prom"
|
||||
"github.com/influxdata/influxdb/models"
|
||||
"github.com/influxdata/influxdb/storage"
|
||||
"github.com/influxdata/influxdb/storage/readservice"
|
||||
"github.com/influxdata/influxdb/storage/reads"
|
||||
"github.com/influxdata/influxdb/tsdb"
|
||||
"github.com/influxdata/influxdb/tsdb/cursors"
|
||||
"github.com/influxdata/influxql"
|
||||
|
@ -26,7 +26,7 @@ var _ Engine = (*storage.Engine)(nil)
// to facilitate testing.
type Engine interface {
influxdb.DeleteService
readservice.Viewer
reads.Viewer
storage.PointsWriter
storage.BucketDeleter
prom.PrometheusCollector
@ -143,8 +143,8 @@ func (t *TemporaryEngine) CreateCursorIterator(ctx context.Context) (tsdb.Cursor
}

// CreateSeriesCursor calls into the underlying engines CreateSeriesCursor.
func (t *TemporaryEngine) CreateSeriesCursor(ctx context.Context, req storage.SeriesCursorRequest, cond influxql.Expr) (storage.SeriesCursor, error) {
return t.engine.CreateSeriesCursor(ctx, req, cond)
func (t *TemporaryEngine) CreateSeriesCursor(ctx context.Context, orgID, bucketID influxdb.ID, cond influxql.Expr) (storage.SeriesCursor, error) {
return t.engine.CreateSeriesCursor(ctx, orgID, bucketID, cond)
}

// TagKeys calls into the underlying engines TagKeys.
@ -795,7 +795,7 @@ func (m *Launcher) run(ctx context.Context) (err error) {
VariableService: variableSvc,
PasswordsService: passwdsSvc,
OnboardingService: onboardingSvc,
InfluxQLService: nil, // No InfluxQL support
InfluxQLService: storageQueryService,
FluxService: storageQueryService,
FluxLanguageService: fluxlang.DefaultService,
TaskService: taskSvc,
@ -246,98 +246,171 @@ spec:
sum1, err := svc.Apply(timedCtx(5*time.Second), l.Org.ID, l.User.ID, newPkg(t))
require.NoError(t, err)

labels := sum1.Labels
require.Len(t, labels, 1)
assert.NotZero(t, labels[0].ID)
assert.Equal(t, "label_1", labels[0].Name)
verifyCompleteSummary := func(t *testing.T, sum1 pkger.Summary, exportAllSum bool) {
t.Helper()

bkts := sum1.Buckets
require.Len(t, bkts, 1)
assert.NotZero(t, bkts[0].ID)
assert.Equal(t, "rucket_1", bkts[0].Name)
hasLabelAssociations(t, bkts[0].LabelAssociations, 1, "label_1")

checks := sum1.Checks
require.Len(t, checks, 2)
for i, ch := range checks {
assert.NotZero(t, ch.Check.GetID())
assert.Equal(t, fmt.Sprintf("check_%d", i), ch.Check.GetName())
hasLabelAssociations(t, ch.LabelAssociations, 1, "label_1")
}

dashs := sum1.Dashboards
require.Len(t, dashs, 1)
assert.NotZero(t, dashs[0].ID)
assert.Equal(t, "dash_1", dashs[0].Name)
assert.Equal(t, "desc1", dashs[0].Description)
hasLabelAssociations(t, dashs[0].LabelAssociations, 1, "label_1")
require.Len(t, dashs[0].Charts, 1)
assert.Equal(t, influxdb.ViewPropertyTypeSingleStat, dashs[0].Charts[0].Properties.GetType())

endpoints := sum1.NotificationEndpoints
require.Len(t, endpoints, 1)
assert.NotZero(t, endpoints[0].NotificationEndpoint.GetID())
assert.Equal(t, "http_none_auth_notification_endpoint", endpoints[0].NotificationEndpoint.GetName())
assert.Equal(t, "http none auth desc", endpoints[0].NotificationEndpoint.GetDescription())
assert.Equal(t, influxdb.TaskStatusInactive, string(endpoints[0].NotificationEndpoint.GetStatus()))
hasLabelAssociations(t, endpoints[0].LabelAssociations, 1, "label_1")

require.Len(t, sum1.NotificationRules, 1)
rule := sum1.NotificationRules[0]
assert.NotZero(t, rule.ID)
assert.Equal(t, "rule_0", rule.Name)
assert.Equal(t, pkger.SafeID(endpoints[0].NotificationEndpoint.GetID()), rule.EndpointID)
assert.Equal(t, "http_none_auth_notification_endpoint", rule.EndpointName)
assert.Equal(t, "http", rule.EndpointType)

require.Len(t, sum1.Tasks, 1)
task := sum1.Tasks[0]
assert.NotZero(t, task.ID)
assert.Equal(t, "task_1", task.Name)
assert.Equal(t, "desc_1", task.Description)

teles := sum1.TelegrafConfigs
require.Len(t, teles, 1)
assert.NotZero(t, teles[0].TelegrafConfig.ID)
assert.Equal(t, l.Org.ID, teles[0].TelegrafConfig.OrgID)
assert.Equal(t, "first_tele_config", teles[0].TelegrafConfig.Name)
assert.Equal(t, "desc", teles[0].TelegrafConfig.Description)
assert.Equal(t, telConf, teles[0].TelegrafConfig.Config)

vars := sum1.Variables
require.Len(t, vars, 1)
assert.NotZero(t, vars[0].ID)
assert.Equal(t, "var_query_1", vars[0].Name)
hasLabelAssociations(t, vars[0].LabelAssociations, 1, "label_1")
varArgs := vars[0].Arguments
require.NotNil(t, varArgs)
assert.Equal(t, "query", varArgs.Type)
assert.Equal(t, influxdb.VariableQueryValues{
Query: "buckets() |> filter(fn: (r) => r.name !~ /^_/) |> rename(columns: {name: \"_value\"}) |> keep(columns: [\"_value\"])",
Language: "flux",
}, varArgs.Values)

newSumMapping := func(id pkger.SafeID, name string, rt influxdb.ResourceType) pkger.SummaryLabelMapping {
return pkger.SummaryLabelMapping{
ResourceName: name,
LabelName: labels[0].Name,
LabelID: labels[0].ID,
ResourceID: id,
ResourceType: rt,
labels := sum1.Labels
require.Len(t, labels, 1)
if !exportAllSum {
assert.NotZero(t, labels[0].ID)
}
assert.Equal(t, "label_1", labels[0].Name)

bkts := sum1.Buckets
if exportAllSum {
require.Len(t, bkts, 2)
assert.Equal(t, l.Bucket.Name, bkts[0].Name)
bkts = bkts[1:]
}
require.Len(t, bkts, 1)
if !exportAllSum {
assert.NotZero(t, bkts[0].ID)
}
assert.Equal(t, "rucket_1", bkts[0].Name)
hasLabelAssociations(t, bkts[0].LabelAssociations, 1, "label_1")

checks := sum1.Checks
require.Len(t, checks, 2)
for i, ch := range checks {
if !exportAllSum {
assert.NotZero(t, ch.Check.GetID())
}
assert.Equal(t, fmt.Sprintf("check_%d", i), ch.Check.GetName())
hasLabelAssociations(t, ch.LabelAssociations, 1, "label_1")
}

dashs := sum1.Dashboards
require.Len(t, dashs, 1)
if !exportAllSum {
assert.NotZero(t, dashs[0].ID)
}
assert.Equal(t, "dash_1", dashs[0].Name)
assert.Equal(t, "desc1", dashs[0].Description)
hasLabelAssociations(t, dashs[0].LabelAssociations, 1, "label_1")
require.Len(t, dashs[0].Charts, 1)
assert.Equal(t, influxdb.ViewPropertyTypeSingleStat, dashs[0].Charts[0].Properties.GetType())

endpoints := sum1.NotificationEndpoints
require.Len(t, endpoints, 1)
if !exportAllSum {
assert.NotZero(t, endpoints[0].NotificationEndpoint.GetID())
}
assert.Equal(t, "http_none_auth_notification_endpoint", endpoints[0].NotificationEndpoint.GetName())
assert.Equal(t, "http none auth desc", endpoints[0].NotificationEndpoint.GetDescription())
assert.Equal(t, influxdb.TaskStatusInactive, string(endpoints[0].NotificationEndpoint.GetStatus()))
hasLabelAssociations(t, endpoints[0].LabelAssociations, 1, "label_1")

require.Len(t, sum1.NotificationRules, 1)
rule := sum1.NotificationRules[0]
if !exportAllSum {
assert.NotZero(t, rule.ID)
}
assert.Equal(t, "rule_0", rule.Name)
assert.Equal(t, pkger.SafeID(endpoints[0].NotificationEndpoint.GetID()), rule.EndpointID)
assert.Equal(t, "http_none_auth_notification_endpoint", rule.EndpointName)
if !exportAllSum {
assert.Equalf(t, "http", rule.EndpointType, "rule: %+v", rule)
}

require.Len(t, sum1.Tasks, 1)
task := sum1.Tasks[0]
if !exportAllSum {
assert.NotZero(t, task.ID)
}
assert.Equal(t, "task_1", task.Name)
assert.Equal(t, "desc_1", task.Description)

teles := sum1.TelegrafConfigs
require.Len(t, teles, 1)
if !exportAllSum {
assert.NotZero(t, teles[0].TelegrafConfig.ID)
assert.Equal(t, l.Org.ID, teles[0].TelegrafConfig.OrgID)
}
assert.Equal(t, "first_tele_config", teles[0].TelegrafConfig.Name)
assert.Equal(t, "desc", teles[0].TelegrafConfig.Description)
assert.Equal(t, telConf, teles[0].TelegrafConfig.Config)

vars := sum1.Variables
require.Len(t, vars, 1)
if !exportAllSum {
assert.NotZero(t, vars[0].ID)
}
assert.Equal(t, "var_query_1", vars[0].Name)
hasLabelAssociations(t, vars[0].LabelAssociations, 1, "label_1")
varArgs := vars[0].Arguments
require.NotNil(t, varArgs)
assert.Equal(t, "query", varArgs.Type)
assert.Equal(t, influxdb.VariableQueryValues{
Query: "buckets() |> filter(fn: (r) => r.name !~ /^_/) |> rename(columns: {name: \"_value\"}) |> keep(columns: [\"_value\"])",
Language: "flux",
}, varArgs.Values)

newSumMapping := func(id pkger.SafeID, name string, rt influxdb.ResourceType) pkger.SummaryLabelMapping {
return pkger.SummaryLabelMapping{
ResourceName: name,
LabelName: labels[0].Name,
LabelID: labels[0].ID,
ResourceID: id,
ResourceType: rt,
}
}

mappings := sum1.LabelMappings
require.Len(t, mappings, 9)
hasMapping(t, mappings, newSumMapping(bkts[0].ID, bkts[0].Name, influxdb.BucketsResourceType))
hasMapping(t, mappings, newSumMapping(pkger.SafeID(checks[0].Check.GetID()), checks[0].Check.GetName(), influxdb.ChecksResourceType))
hasMapping(t, mappings, newSumMapping(pkger.SafeID(checks[1].Check.GetID()), checks[1].Check.GetName(), influxdb.ChecksResourceType))
hasMapping(t, mappings, newSumMapping(dashs[0].ID, dashs[0].Name, influxdb.DashboardsResourceType))
hasMapping(t, mappings, newSumMapping(pkger.SafeID(endpoints[0].NotificationEndpoint.GetID()), endpoints[0].NotificationEndpoint.GetName(), influxdb.NotificationEndpointResourceType))
hasMapping(t, mappings, newSumMapping(rule.ID, rule.Name, influxdb.NotificationRuleResourceType))
hasMapping(t, mappings, newSumMapping(task.ID, task.Name, influxdb.TasksResourceType))
hasMapping(t, mappings, newSumMapping(pkger.SafeID(teles[0].TelegrafConfig.ID), teles[0].TelegrafConfig.Name, influxdb.TelegrafsResourceType))
hasMapping(t, mappings, newSumMapping(vars[0].ID, vars[0].Name, influxdb.VariablesResourceType))
}

mappings := sum1.LabelMappings
require.Len(t, mappings, 9)
hasMapping(t, mappings, newSumMapping(bkts[0].ID, bkts[0].Name, influxdb.BucketsResourceType))
hasMapping(t, mappings, newSumMapping(pkger.SafeID(checks[0].Check.GetID()), checks[0].Check.GetName(), influxdb.ChecksResourceType))
hasMapping(t, mappings, newSumMapping(pkger.SafeID(checks[1].Check.GetID()), checks[1].Check.GetName(), influxdb.ChecksResourceType))
hasMapping(t, mappings, newSumMapping(dashs[0].ID, dashs[0].Name, influxdb.DashboardsResourceType))
hasMapping(t, mappings, newSumMapping(pkger.SafeID(endpoints[0].NotificationEndpoint.GetID()), endpoints[0].NotificationEndpoint.GetName(), influxdb.NotificationEndpointResourceType))
hasMapping(t, mappings, newSumMapping(rule.ID, rule.Name, influxdb.NotificationRuleResourceType))
hasMapping(t, mappings, newSumMapping(task.ID, task.Name, influxdb.TasksResourceType))
hasMapping(t, mappings, newSumMapping(pkger.SafeID(teles[0].TelegrafConfig.ID), teles[0].TelegrafConfig.Name, influxdb.TelegrafsResourceType))
hasMapping(t, mappings, newSumMapping(vars[0].ID, vars[0].Name, influxdb.VariablesResourceType))
verifyCompleteSummary(t, sum1, false)

var (
// used in dependent subtests
sum1Bkts = sum1.Buckets
sum1Checks = sum1.Checks
sum1Dashs = sum1.Dashboards
sum1Endpoints = sum1.NotificationEndpoints
sum1Labels = sum1.Labels
sum1Rules = sum1.NotificationRules
sum1Tasks = sum1.Tasks
sum1Teles = sum1.TelegrafConfigs
sum1Vars = sum1.Variables
)

t.Run("exporting all resources for an org", func(t *testing.T) {
newPkg, err := svc.CreatePkg(timedCtx(2*time.Second), pkger.CreateWithAllOrgResources(
pkger.CreateByOrgIDOpt{
OrgID: l.Org.ID,
},
))
require.NoError(t, err)

verifyCompleteSummary(t, newPkg.Summary(), true)

bucketsOnlyPkg, err := svc.CreatePkg(timedCtx(2*time.Second), pkger.CreateWithAllOrgResources(
pkger.CreateByOrgIDOpt{
OrgID: l.Org.ID,
ResourceKinds: []pkger.Kind{pkger.KindBucket, pkger.KindTask},
},
))
require.NoError(t, err)

bktsOnlySum := bucketsOnlyPkg.Summary()
assert.NotEmpty(t, bktsOnlySum.Buckets)
assert.NotEmpty(t, bktsOnlySum.Labels)
assert.NotEmpty(t, bktsOnlySum.Tasks)
assert.Empty(t, bktsOnlySum.Checks)
assert.Empty(t, bktsOnlySum.Dashboards)
assert.Empty(t, bktsOnlySum.NotificationEndpoints)
assert.Empty(t, bktsOnlySum.NotificationRules)
assert.Empty(t, bktsOnlySum.Variables)
})

t.Run("pkg with same bkt-var-label does nto create new resources for them", func(t *testing.T) {
// validate the new package doesn't create new resources for bkts/labels/vars
@ -412,35 +485,35 @@ spec:
resToClone := []pkger.ResourceToClone{
{
Kind: pkger.KindBucket,
ID: influxdb.ID(bkts[0].ID),
ID: influxdb.ID(sum1Bkts[0].ID),
},
{
Kind: pkger.KindCheck,
ID: checks[0].Check.GetID(),
ID: sum1Checks[0].Check.GetID(),
},
{
Kind: pkger.KindCheck,
ID: checks[1].Check.GetID(),
ID: sum1Checks[1].Check.GetID(),
},
{
Kind: pkger.KindDashboard,
ID: influxdb.ID(dashs[0].ID),
ID: influxdb.ID(sum1Dashs[0].ID),
},
{
Kind: pkger.KindLabel,
ID: influxdb.ID(labels[0].ID),
ID: influxdb.ID(sum1Labels[0].ID),
},
{
Kind: pkger.KindNotificationEndpoint,
ID: endpoints[0].NotificationEndpoint.GetID(),
ID: sum1Endpoints[0].NotificationEndpoint.GetID(),
},
{
Kind: pkger.KindTask,
ID: influxdb.ID(task.ID),
ID: influxdb.ID(sum1Tasks[0].ID),
},
{
Kind: pkger.KindTelegraf,
ID: teles[0].TelegrafConfig.ID,
ID: sum1Teles[0].TelegrafConfig.ID,
},
}
@ -448,12 +521,12 @@ spec:
{
Kind: pkger.KindNotificationRule,
Name: "new rule name",
ID: influxdb.ID(rule.ID),
ID: influxdb.ID(sum1Rules[0].ID),
},
{
Kind: pkger.KindVariable,
Name: "new name",
ID: influxdb.ID(vars[0].ID),
ID: influxdb.ID(sum1Vars[0].ID),
},
}
@ -494,30 +567,30 @@ spec:

newEndpoints := newSum.NotificationEndpoints
require.Len(t, newEndpoints, 1)
assert.Equal(t, endpoints[0].NotificationEndpoint.GetName(), newEndpoints[0].NotificationEndpoint.GetName())
assert.Equal(t, endpoints[0].NotificationEndpoint.GetDescription(), newEndpoints[0].NotificationEndpoint.GetDescription())
assert.Equal(t, sum1Endpoints[0].NotificationEndpoint.GetName(), newEndpoints[0].NotificationEndpoint.GetName())
assert.Equal(t, sum1Endpoints[0].NotificationEndpoint.GetDescription(), newEndpoints[0].NotificationEndpoint.GetDescription())
hasLabelAssociations(t, newEndpoints[0].LabelAssociations, 1, "label_1")

require.Len(t, newSum.NotificationRules, 1)
newRule := newSum.NotificationRules[0]
assert.Equal(t, "new rule name", newRule.Name)
assert.Zero(t, newRule.EndpointID)
assert.Equal(t, rule.EndpointName, newRule.EndpointName)
assert.Equal(t, sum1Rules[0].EndpointName, newRule.EndpointName)
hasLabelAssociations(t, newRule.LabelAssociations, 1, "label_1")

require.Len(t, newSum.Tasks, 1)
newTask := newSum.Tasks[0]
assert.Equal(t, task.Name, newTask.Name)
assert.Equal(t, task.Description, newTask.Description)
assert.Equal(t, task.Cron, newTask.Cron)
assert.Equal(t, task.Every, newTask.Every)
assert.Equal(t, task.Offset, newTask.Offset)
assert.Equal(t, task.Query, newTask.Query)
assert.Equal(t, task.Status, newTask.Status)
assert.Equal(t, sum1Tasks[0].Name, newTask.Name)
assert.Equal(t, sum1Tasks[0].Description, newTask.Description)
assert.Equal(t, sum1Tasks[0].Cron, newTask.Cron)
assert.Equal(t, sum1Tasks[0].Every, newTask.Every)
assert.Equal(t, sum1Tasks[0].Offset, newTask.Offset)
assert.Equal(t, sum1Tasks[0].Query, newTask.Query)
assert.Equal(t, sum1Tasks[0].Status, newTask.Status)

require.Len(t, newSum.TelegrafConfigs, 1)
assert.Equal(t, teles[0].TelegrafConfig.Name, newSum.TelegrafConfigs[0].TelegrafConfig.Name)
assert.Equal(t, teles[0].TelegrafConfig.Description, newSum.TelegrafConfigs[0].TelegrafConfig.Description)
assert.Equal(t, sum1Teles[0].TelegrafConfig.Name, newSum.TelegrafConfigs[0].TelegrafConfig.Name)
assert.Equal(t, sum1Teles[0].TelegrafConfig.Description, newSum.TelegrafConfigs[0].TelegrafConfig.Description)
hasLabelAssociations(t, newSum.TelegrafConfigs[0].LabelAssociations, 1, "label_1")

vars := newSum.Variables
@ -555,12 +628,12 @@ spec:
_, err = svc.Apply(ctx, l.Org.ID, 0, updatePkg)
require.Error(t, err)

bkt, err := l.BucketService(t).FindBucketByID(ctx, influxdb.ID(bkts[0].ID))
bkt, err := l.BucketService(t).FindBucketByID(ctx, influxdb.ID(sum1Bkts[0].ID))
require.NoError(t, err)
// make sure the desc change is not applied and is rolled back to prev desc
assert.Equal(t, bkts[0].Description, bkt.Description)
assert.Equal(t, sum1Bkts[0].Description, bkt.Description)

ch, err := l.CheckService().FindCheckByID(ctx, checks[0].Check.GetID())
ch, err := l.CheckService().FindCheckByID(ctx, sum1Checks[0].Check.GetID())
require.NoError(t, err)
ch.SetOwnerID(0)
deadman, ok := ch.(*check.Threshold)
@ -568,22 +641,76 @@ spec:
// validate the change to query is not persisting returned to previous state.
// not checking entire bits, b/c we dont' save userID and so forth and makes a
// direct comparison very annoying...
assert.Equal(t, checks[0].Check.(*check.Threshold).Query.Text, deadman.Query.Text)
assert.Equal(t, sum1Checks[0].Check.(*check.Threshold).Query.Text, deadman.Query.Text)

label, err := l.LabelService(t).FindLabelByID(ctx, influxdb.ID(labels[0].ID))
label, err := l.LabelService(t).FindLabelByID(ctx, influxdb.ID(sum1Labels[0].ID))
require.NoError(t, err)
assert.Equal(t, labels[0].Properties.Description, label.Properties["description"])
assert.Equal(t, sum1Labels[0].Properties.Description, label.Properties["description"])

endpoint, err := l.NotificationEndpointService(t).FindNotificationEndpointByID(ctx, endpoints[0].NotificationEndpoint.GetID())
endpoint, err := l.NotificationEndpointService(t).FindNotificationEndpointByID(ctx, sum1Endpoints[0].NotificationEndpoint.GetID())
require.NoError(t, err)
assert.Equal(t, endpoints[0].NotificationEndpoint.GetDescription(), endpoint.GetDescription())
assert.Equal(t, sum1Endpoints[0].NotificationEndpoint.GetDescription(), endpoint.GetDescription())

v, err := l.VariableService(t).FindVariableByID(ctx, influxdb.ID(vars[0].ID))
v, err := l.VariableService(t).FindVariableByID(ctx, influxdb.ID(sum1Vars[0].ID))
require.NoError(t, err)
assert.Equal(t, vars[0].Description, v.Description)
assert.Equal(t, sum1Vars[0].Description, v.Description)
})
})

t.Run("apply a task pkg with a complex query", func(t *testing.T) {
// validates bug: https://github.com/influxdata/influxdb/issues/17069

pkgStr := fmt.Sprintf(`
apiVersion: %[1]s
kind: Task
metadata:
name: Http.POST Synthetic (POST)
spec:
every: 5m
query: |-
import "strings"
import "csv"
import "http"
import "system"

timeDiff = (t1, t2) => {
return duration(v: uint(v: t2) - uint(v: t1))
}
timeDiffNum = (t1, t2) => {
return uint(v: t2) - uint(v: t1)
}
urlToPost = "http://www.duckduckgo.com"
timeBeforeCall = system.time()
responseCode = http.post(url: urlToPost, data: bytes(v: "influxdata"))
timeAfterCall = system.time()
responseTime = timeDiff(t1: timeBeforeCall, t2: timeAfterCall)
responseTimeNum = timeDiffNum(t1: timeBeforeCall, t2: timeAfterCall)
data = "#group,false,false,true,true,true,true,true,true
#datatype,string,long,string,string,string,string,string,string
#default,mean,,,,,,,
,result,table,service,response_code,time_before,time_after,response_time_duration,response_time_ns
,,0,http_post_ping,${string(v: responseCode)},${string(v: timeBeforeCall)},${string(v: timeAfterCall)},${string(v: responseTime)},${string(v: responseTimeNum)}"
theTable = csv.from(csv: data)

theTable
|> map(fn: (r) =>
({r with _time: now()}))
|> map(fn: (r) =>
({r with _measurement: "PingService", url: urlToPost, method: "POST"}))
|> drop(columns: ["time_before", "time_after", "response_time_duration"])
|> to(bucket: "Pingpire", orgID: "039346c3777a1000", fieldFn: (r) =>
({"responseCode": r.response_code, "responseTime": int(v: r.response_time_ns)}))
`, pkger.APIVersion)

pkg, err := pkger.Parse(pkger.EncodingYAML, pkger.FromString(pkgStr))
require.NoError(t, err)

sum, err := svc.Apply(timedCtx(time.Second), l.Org.ID, l.User.ID, pkg)
require.NoError(t, err)

require.Len(t, sum.Tasks, 1)
})

t.Run("apply a package with env refs", func(t *testing.T) {
pkgStr := fmt.Sprintf(`
apiVersion: %[1]s
@ -62,7 +62,7 @@ func init() {
{
DestP: &flags.boltPath,
Flag: "bolt-path",
Default: filepath.Join(dir, http.DefaultTokenFile),
Default: filepath.Join(dir, bolt.DefaultFilename),
Desc: "path to target boltdb database",
},
{
@ -75,7 +75,7 @@ func init() {
DestP: &flags.credPath,
Flag: "credentials-path",
Default: filepath.Join(dir, http.DefaultTokenFile),
Desc: "path to target persistent engine files",
Desc: "path to target credentials file",
},
{
DestP: &flags.backupPath,
@ -136,6 +136,10 @@ func restoreE(cmd *cobra.Command, args []string) error {
return fmt.Errorf("restore completed, but failed to cleanup temporary bolt file: %v", err)
}

if err := removeTmpCred(); err != nil {
return fmt.Errorf("restore completed, but failed to cleanup temporary credentials file: %v", err)
}

if err := removeTmpEngine(); err != nil {
return fmt.Errorf("restore completed, but failed to cleanup temporary engine data: %v", err)
}
@ -284,6 +288,14 @@ func restoreFile(backup string, target string, filetype string) error {
func restoreCred() error {
backupCred := filepath.Join(flags.backupPath, http.DefaultTokenFile)

_, err := os.Stat(backupCred)
if os.IsNotExist(err) {
fmt.Printf("No credentials file found in backup, skipping.\n")
return nil
} else if err != nil {
return err
}

if err := restoreFile(backupCred, flags.credPath, "credentials"); err != nil {
return err
}
8
go.mod
@ -41,7 +41,7 @@ require (
github.com/hashicorp/go-msgpack v0.0.0-20150518234257-fa3f63826f7c // indirect
github.com/hashicorp/raft v1.0.0 // indirect
github.com/hashicorp/vault/api v1.0.2
github.com/influxdata/cron v0.0.0-20191112133922-ad5847cfab62
github.com/influxdata/cron v0.0.0-20191203200038-ded12750aac6
github.com/influxdata/flux v0.60.1-0.20200305155158-1add321ebf7a
github.com/influxdata/httprouter v1.3.1-0.20191122104820-ee83e2772f69
github.com/influxdata/influxql v0.0.0-20180925231337-1cbfca8e56b6
@ -85,8 +85,8 @@ require (
github.com/tinylib/msgp v1.1.0 // indirect
github.com/tylerb/graceful v1.2.15
github.com/uber-go/atomic v1.3.2 // indirect
github.com/uber/jaeger-client-go v2.15.0+incompatible
github.com/uber/jaeger-lib v1.5.0 // indirect
github.com/uber/jaeger-client-go v2.16.0+incompatible
github.com/uber/jaeger-lib v2.2.0+incompatible // indirect
github.com/willf/bitset v1.1.9 // indirect
github.com/yudai/gojsondiff v1.0.0
github.com/yudai/golcs v0.0.0-20170316035057-ecda9a501e82 // indirect
@ -97,7 +97,7 @@ require (
golang.org/x/net v0.0.0-20190620200207-3b0461eec859
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45
golang.org/x/sync v0.0.0-20190423024810-112230192c58
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0
golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5
google.golang.org/api v0.7.0
16
go.sum
@ -139,6 +139,8 @@ github.com/gogo/protobuf v1.2.1 h1:/s5zKNz0uPFCZ5hddgPdo2TK2TVrUNMn0OOX8/aZMTE=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/golang/gddo v0.0.0-20181116215533-9bd4a3295021 h1:HYV500jCgk+IC68L5sWrLFIWMpaUFfXXpJSAb7XOoBk=
github.com/golang/gddo v0.0.0-20181116215533-9bd4a3295021/go.mod h1:xEhNfoBDX1hzLm2Nf80qUvZ2sVwoMZ8d6IE2SrsQfh4=
github.com/golang/geo v0.0.0-20190916061304-5b978397cfec h1:lJwO/92dFXWeXOZdoGXgptLmNLwynMSHUmU6besqtiw=
github.com/golang/geo v0.0.0-20190916061304-5b978397cfec/go.mod h1:QZ0nwyI2jOfgRAoBvP+ab5aRr7c9x7lhGEJrKvBwjWI=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
@ -238,8 +240,8 @@ github.com/imdario/mergo v0.3.6 h1:xTNEAn+kxVO7dTZGu0CegyqKZmoWFI0rF8UxjlB2d28=
github.com/imdario/mergo v0.3.6/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/influxdata/cron v0.0.0-20191112133922-ad5847cfab62 h1:YipnPuvJKPAzyBhr7eXIMA49L2Eooga/NSytWdLLI8U=
github.com/influxdata/cron v0.0.0-20191112133922-ad5847cfab62/go.mod h1:XabtPPW2qsCg0tl+kjaPU+cFS+CjQXEXbT1VJvHT4og=
github.com/influxdata/cron v0.0.0-20191203200038-ded12750aac6 h1:OtjKkeWDjUbyMi82C7XXy7Tvm2LXMwiBBXyFIGNPaGA=
github.com/influxdata/cron v0.0.0-20191203200038-ded12750aac6/go.mod h1:XabtPPW2qsCg0tl+kjaPU+cFS+CjQXEXbT1VJvHT4og=
github.com/influxdata/flux v0.60.1-0.20200305155158-1add321ebf7a h1:Dczh6cTotsat9rt7GzD7EhGzpGFO5h0L3zAkQ1sJSds=
github.com/influxdata/flux v0.60.1-0.20200305155158-1add321ebf7a/go.mod h1:BRxpm1xTUAZ+s+Mq6t0NZyaYtlGrw/8YoHoifso9vS8=
github.com/influxdata/goreleaser v0.97.0-influx h1:jT5OrcW7WfS0e2QxfwmTBjhLvpIC9CDLRhNgZJyhj8s=
@ -456,10 +458,10 @@ github.com/tylerb/graceful v1.2.15 h1:B0x01Y8fsJpogzZTkDg6BDi6eMf03s01lEKGdrv83o
github.com/tylerb/graceful v1.2.15/go.mod h1:LPYTbOYmUTdabwRt0TGhLllQ0MUNbs0Y5q1WXJOI9II=
github.com/uber-go/atomic v1.3.2 h1:Azu9lPBWRNKzYXSIwRfgRuDuS0YKsK4NFhiQv98gkxo=
github.com/uber-go/atomic v1.3.2/go.mod h1:/Ct5t2lcmbJ4OSe/waGBoaVvVqtO0bmtfVNex1PFV8g=
github.com/uber/jaeger-client-go v2.15.0+incompatible h1:NP3qsSqNxh8VYr956ur1N/1C1PjvOJnJykCzcD5QHbk=
github.com/uber/jaeger-client-go v2.15.0+incompatible/go.mod h1:WVhlPFC8FDjOFMMWRy2pZqQJSXxYSwNYOkTr/Z6d3Kk=
github.com/uber/jaeger-lib v1.5.0 h1:OHbgr8l656Ub3Fw5k9SWnBfIEwvoHQ+W2y+Aa9D1Uyo=
github.com/uber/jaeger-lib v1.5.0/go.mod h1:ComeNDZlWwrWnDv8aPp0Ba6+uUTzImX/AauajbLI56U=
github.com/uber/jaeger-client-go v2.16.0+incompatible h1:Q2Pp6v3QYiocMxomCaJuwQGFt7E53bPYqEgug/AoBtY=
github.com/uber/jaeger-client-go v2.16.0+incompatible/go.mod h1:WVhlPFC8FDjOFMMWRy2pZqQJSXxYSwNYOkTr/Z6d3Kk=
github.com/uber/jaeger-lib v2.2.0+incompatible h1:MxZXOiR2JuoANZ3J6DE/U0kSFv/eJ/GfSYVCjK7dyaw=
github.com/uber/jaeger-lib v2.2.0+incompatible/go.mod h1:ComeNDZlWwrWnDv8aPp0Ba6+uUTzImX/AauajbLI56U=
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
github.com/willf/bitset v1.1.9 h1:GBtFynGY9ZWZmEC9sWuu41/7VBXPFCOAbCbqTflOg9c=
github.com/willf/bitset v1.1.9/go.mod h1:RjeCKbqT1RxIR/KWY6phxZiaY1IyutSBfGjNPySAYV4=
@ -566,6 +568,8 @@ golang.org/x/sys v0.0.0-20190531175056-4c3a928424d2/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0 h1:HyfiK1WMnHj5FXFXatD+Qs1A/xC2Run6RzeW1SyHxpc=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4 h1:sfkvUWPNGwSV+8/fNqctR5lS2AqCSqYwXdrjCxp/dXo=
golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@ -118,7 +118,6 @@ func NewAPIHandler(b *APIBackend, opts ...APIHandlerOptFn) *APIHandler {
Router: newBaseChiRouter(b.HTTPErrorHandler),
}

internalURM := b.UserResourceMappingService
b.UserResourceMappingService = authorizer.NewURMService(b.OrgLookupService, b.UserResourceMappingService)

h.Mount("/api/v2", serveLinksHandler(b.HTTPErrorHandler))
@ -165,6 +164,7 @@ func NewAPIHandler(b *APIBackend, opts ...APIHandlerOptFn) *APIHandler {

orgBackend := NewOrgBackend(b.Logger.With(zap.String("handler", "org")), b)
orgBackend.OrganizationService = authorizer.NewOrgService(b.OrganizationService)
orgBackend.SecretService = authorizer.NewSecretService(b.SecretService)
h.Mount(prefixOrganizations, NewOrgHandler(b.Logger, orgBackend))

scraperBackend := NewScraperBackend(b.Logger.With(zap.String("handler", "scraper")), b)
@ -188,9 +188,10 @@ func NewAPIHandler(b *APIBackend, opts ...APIHandlerOptFn) *APIHandler {

h.Mount("/api/v2/swagger.json", newSwaggerLoader(b.Logger.With(zap.String("service", "swagger-loader")), b.HTTPErrorHandler))

taskBackend := NewTaskBackend(b.Logger.With(zap.String("handler", "task")), b)
taskLogger := b.Logger.With(zap.String("handler", "bucket"))
taskBackend := NewTaskBackend(taskLogger, b)
taskBackend.TaskService = authorizer.NewTaskService(taskLogger, b.TaskService)
taskHandler := NewTaskHandler(b.Logger, taskBackend)
taskHandler.UserResourceMappingService = internalURM
h.Mount(prefixTasks, taskHandler)

telegrafBackend := NewTelegrafBackend(b.Logger.With(zap.String("handler", "telegraf")), b)
@@ -117,26 +117,17 @@ func (h *BackupHandler) handleCreate(w http.ResponseWriter, r *http.Request) {

	files = append(files, bolt.DefaultFilename)

-	credBackupPath := filepath.Join(internalBackupPath, DefaultTokenFile)
-
-	credPath, err := defaultTokenPath()
+	credsExist, err := h.backupCredentials(internalBackupPath)
	if err != nil {
		h.HandleHTTPError(ctx, err, w)
		return
	}
-	token, err := ioutil.ReadFile(credPath)
-	if err != nil {
-		h.HandleHTTPError(ctx, err, w)
-		return
-	}
-
-	if err := ioutil.WriteFile(credBackupPath, []byte(token), 0600); err != nil {
-		h.HandleHTTPError(ctx, err, w)
-		return
+	if credsExist {
+		files = append(files, DefaultTokenFile)
	}
-
-	files = append(files, DefaultTokenFile)

	b := backup{
		ID:    id,
		Files: files,

@@ -148,6 +139,26 @@ func (h *BackupHandler) handleCreate(w http.ResponseWriter, r *http.Request) {
	}
}

func (h *BackupHandler) backupCredentials(internalBackupPath string) (bool, error) {
	credBackupPath := filepath.Join(internalBackupPath, DefaultTokenFile)

	credPath, err := defaultTokenPath()
	if err != nil {
		return false, err
	}
	token, err := ioutil.ReadFile(credPath)
	if err != nil && !os.IsNotExist(err) {
		return false, err
	} else if os.IsNotExist(err) {
		return false, nil
	}

	if err := ioutil.WriteFile(credBackupPath, []byte(token), 0600); err != nil {
		return false, err
	}
	return true, nil
}

func (h *BackupHandler) handleFetchFile(w http.ResponseWriter, r *http.Request) {
	span, r := tracing.ExtractFromHTTPRequest(r, "BackupHandler.handleFetchFile")
	defer span.Finish()
@@ -278,7 +278,7 @@ type bucketResponse struct {
	Labels []influxdb.Label `json:"labels"`
}

-func newBucketResponse(b *influxdb.Bucket, labels []*influxdb.Label) *bucketResponse {
+func NewBucketResponse(b *influxdb.Bucket, labels []*influxdb.Label) *bucketResponse {
	res := &bucketResponse{
		Links: map[string]string{
			"labels": fmt.Sprintf("/api/v2/buckets/%s/labels", b.ID),

@@ -309,7 +309,7 @@ func newBucketsResponse(ctx context.Context, opts influxdb.FindOptions, f influx
	rs := make([]*bucketResponse, 0, len(bs))
	for _, b := range bs {
		labels, _ := labelService.FindResourceLabels(ctx, influxdb.LabelMappingFilter{ResourceID: b.ID})
-		rs = append(rs, newBucketResponse(b, labels))
+		rs = append(rs, NewBucketResponse(b, labels))
	}
	return &bucketsResponse{
		Links: newPagingLinks(prefixBuckets, opts, f, len(bs)),

@@ -332,7 +332,7 @@ func (h *BucketHandler) handlePostBucket(w http.ResponseWriter, r *http.Request)
	}
	h.log.Debug("Bucket created", zap.String("bucket", fmt.Sprint(bucket)))

-	h.api.Respond(w, http.StatusCreated, newBucketResponse(bucket, []*influxdb.Label{}))
+	h.api.Respond(w, http.StatusCreated, NewBucketResponse(bucket, []*influxdb.Label{}))
}

type postBucketRequest struct {

@@ -413,7 +413,7 @@ func (h *BucketHandler) handleGetBucket(w http.ResponseWriter, r *http.Request)

	h.log.Debug("Bucket retrieved", zap.String("bucket", fmt.Sprint(b)))

-	h.api.Respond(w, http.StatusOK, newBucketResponse(b, labels))
+	h.api.Respond(w, http.StatusOK, NewBucketResponse(b, labels))
}

func bucketIDPath(id influxdb.ID) string {

@@ -608,7 +608,7 @@ func (h *BucketHandler) handlePatchBucket(w http.ResponseWriter, r *http.Request
	}
	h.log.Debug("Bucket updated", zap.String("bucket", fmt.Sprint(b)))

-	h.api.Respond(w, http.StatusOK, newBucketResponse(b, labels))
+	h.api.Respond(w, http.StatusOK, NewBucketResponse(b, labels))
}

// BucketService connects to Influx via HTTP using tokens to manage buckets
@@ -6,12 +6,16 @@ import (
	"fmt"
	"io/ioutil"
	"net/http"
+	"path"
+	"time"
+
+	"github.com/influxdata/influxdb/kit/tracing"

	"github.com/influxdata/httprouter"
	"github.com/influxdata/influxdb"
	pctx "github.com/influxdata/influxdb/context"
	"github.com/influxdata/influxdb/notification/check"
+	"github.com/influxdata/influxdb/pkg/httpc"
	"go.uber.org/zap"
)

@@ -381,7 +385,7 @@ func decodeCheckFilter(ctx context.Context, r *http.Request) (*influxdb.CheckFil
		}
		f.OrgID = orgID
	} else if orgNameStr := q.Get("org"); orgNameStr != "" {
-		*f.Org = orgNameStr
+		f.Org = &orgNameStr
	}
	return f, opts, err
}
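The one-line fix in `decodeCheckFilter` matters because the org filter field starts out nil: `*f.Org = orgNameStr` writes through a nil pointer and panics at request time, while `f.Org = &orgNameStr` points the field at a fresh copy of the query value. A stripped-down sketch, using a stand-in type rather than the real `influxdb.CheckFilter`:

```go
package main

import "fmt"

// checkFilter stands in for influxdb.CheckFilter: the org filter is optional,
// so it is modeled as a *string that stays nil until a value is supplied.
type checkFilter struct {
	Org *string
}

// setOrg applies the corrected assignment: take the address of the value
// instead of dereferencing an uninitialized pointer.
func setOrg(f *checkFilter, orgNameStr string) {
	// Buggy form: *f.Org = orgNameStr // panics, because f.Org is nil here.
	f.Org = &orgNameStr
}

func main() {
	var f checkFilter
	setOrg(&f, "my-org")
	fmt.Println(*f.Org) // my-org
}
```

Because `orgNameStr` is a per-call copy of the argument, taking its address is safe and each call allocates independently.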
@@ -683,3 +687,222 @@ func (h *CheckHandler) handleDeleteCheck(w http.ResponseWriter, r *http.Request)

	w.WriteHeader(http.StatusNoContent)
}

func checkIDPath(id influxdb.ID) string {
	return path.Join(prefixChecks, id.String())
}

// CheckService is a client to interact with the handlers in this package over HTTP.
// It does not implement influxdb.CheckService because it returns a concrete representation of the API response
// and influxdb.Check as returned by that interface is not appropriate for this use case.
type CheckService struct {
	Client *httpc.Client
}

// FindCheckByID returns the Check matching the ID.
func (s *CheckService) FindCheckByID(ctx context.Context, id influxdb.ID) (*Check, error) {
	span, _ := tracing.StartSpanFromContext(ctx)
	defer span.Finish()

	var cr Check
	err := s.Client.
		Get(checkIDPath(id)).
		DecodeJSON(&cr).
		Do(ctx)
	if err != nil {
		return nil, err
	}

	return &cr, nil
}

// FindCheck returns the first check matching the filter.
func (s *CheckService) FindCheck(ctx context.Context, filter influxdb.CheckFilter) (*Check, error) {
	span, _ := tracing.StartSpanFromContext(ctx)
	defer span.Finish()

	cs, n, err := s.FindChecks(ctx, filter)
	if err != nil {
		return nil, err
	}

	if n == 0 && filter.Name != nil {
		return nil, &influxdb.Error{
			Code: influxdb.ENotFound,
			Op:   influxdb.OpFindBucket,
			Msg:  fmt.Sprintf("check %q not found", *filter.Name),
		}
	} else if n == 0 {
		return nil, &influxdb.Error{
			Code: influxdb.ENotFound,
			Op:   influxdb.OpFindBucket,
			Msg:  "check not found",
		}
	}

	return cs[0], nil
}

// FindChecks returns a list of checks that match filter and the total count of matching checks.
// Additional options provide pagination & sorting.
func (s *CheckService) FindChecks(ctx context.Context, filter influxdb.CheckFilter, opt ...influxdb.FindOptions) ([]*Check, int, error) {
	span, _ := tracing.StartSpanFromContext(ctx)
	defer span.Finish()

	params := findOptionParams(opt...)
	if filter.OrgID != nil {
		params = append(params, [2]string{"orgID", filter.OrgID.String()})
	}
	if filter.Org != nil {
		params = append(params, [2]string{"org", *filter.Org})
	}
	if filter.ID != nil {
		params = append(params, [2]string{"id", filter.ID.String()})
	}
	if filter.Name != nil {
		params = append(params, [2]string{"name", *filter.Name})
	}

	var cr Checks
	err := s.Client.
		Get(prefixChecks).
		QueryParams(params...).
		DecodeJSON(&cr).
		Do(ctx)
	if err != nil {
		return nil, 0, err
	}

	return cr.Checks, len(cr.Checks), nil
}

// CreateCheck creates a new check.
func (s *CheckService) CreateCheck(ctx context.Context, c *Check) (*Check, error) {
	span, _ := tracing.StartSpanFromContext(ctx)
	defer span.Finish()

	var r Check
	err := s.Client.
		PostJSON(c, prefixChecks).
		DecodeJSON(&r).
		Do(ctx)
	if err != nil {
		return nil, err
	}

	return &r, nil
}

// UpdateCheck updates a check.
func (s *CheckService) UpdateCheck(ctx context.Context, id influxdb.ID, u *Check) (*Check, error) {
	span, _ := tracing.StartSpanFromContext(ctx)
	defer span.Finish()

	var r Check
	err := s.Client.
		PutJSON(u, checkIDPath(id)).
		DecodeJSON(&r).
		Do(ctx)
	if err != nil {
		return nil, err
	}

	return &r, nil
}

// PatchCheck changes the status, description or name of a check.
func (s *CheckService) PatchCheck(ctx context.Context, id influxdb.ID, u influxdb.CheckUpdate) (*Check, error) {
	span, _ := tracing.StartSpanFromContext(ctx)
	defer span.Finish()

	var r Check
	err := s.Client.
		PutJSON(u, checkIDPath(id)).
		DecodeJSON(&r).
		Do(ctx)
	if err != nil {
		return nil, err
	}

	return &r, nil
}

// DeleteCheck removes a check.
func (s *CheckService) DeleteCheck(ctx context.Context, id influxdb.ID) error {
	return s.Client.
		Delete(checkIDPath(id)).
		Do(ctx)
}

// TODO(gavincabbage): These structures should be in a common place, like other models,
// but the common influxdb.Check is an interface that is not appropriate for an API client.
type Checks struct {
	Checks []*Check              `json:"checks"`
	Links  *influxdb.PagingLinks `json:"links"`
}

type Check struct {
	ID                    influxdb.ID       `json:"id,omitempty"`
	Name                  string            `json:"name"`
	OrgID                 influxdb.ID       `json:"orgID,omitempty"`
	OwnerID               influxdb.ID       `json:"ownerID,omitempty"`
	CreatedAt             time.Time         `json:"createdAt,omitempty"`
	UpdatedAt             time.Time         `json:"updatedAt,omitempty"`
	Query                 *CheckQuery       `json:"query"`
	Status                influxdb.Status   `json:"status"`
	Description           string            `json:"description"`
	LatestCompleted       time.Time         `json:"latestCompleted"`
	LastRunStatus         string            `json:"lastRunStatus"`
	LastRunError          string            `json:"lastRunError"`
	Labels                []*influxdb.Label `json:"labels"`
	Links                 *CheckLinks       `json:"links"`
	Type                  string            `json:"type"`
	TimeSince             string            `json:"timeSince"`
	StaleTime             string            `json:"staleTime"`
	ReportZero            bool              `json:"reportZero"`
	Level                 string            `json:"level"`
	Every                 string            `json:"every"`
	Offset                string            `json:"offset"`
	Tags                  []*influxdb.Tag   `json:"tags"`
	StatusMessageTemplate string            `json:"statusMessageTemplate"`
	Thresholds            []*CheckThreshold `json:"thresholds"`
}

type CheckQuery struct {
	Text          string              `json:"text"`
	EditMode      string              `json:"editMode"`
	Name          string              `json:"name"`
	BuilderConfig *CheckBuilderConfig `json:"builderConfig"`
}

type CheckBuilderConfig struct {
	Buckets []string `json:"buckets"`
	Tags    []struct {
		Key                   string   `json:"key"`
		Values                []string `json:"values"`
		AggregateFunctionType string   `json:"aggregateFunctionType"`
	} `json:"tags"`
	Functions []struct {
		Name string `json:"name"`
	} `json:"functions"`
	AggregateWindow struct {
		Period string `json:"period"`
	} `json:"aggregateWindow"`
}

type CheckLinks struct {
	Self    string `json:"self"`
	Labels  string `json:"labels"`
	Members string `json:"members"`
	Owners  string `json:"owners"`
	Query   string `json:"query"`
}

type CheckThreshold struct {
	check.ThresholdConfigBase
	Type   string  `json:"type"`
	Value  float64 `json:"value,omitempty"`
	Min    float64 `json:"min,omitempty"`
	Max    float64 `json:"max,omitempty"`
	Within bool    `json:"within"`
}
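The check client builds every endpoint URL with `path.Join` over a route prefix and the encoded resource ID, as `checkIDPath` does above. A quick stand-alone sketch of that joining behavior (`idPath` is an illustrative stand-in; the prefix matches the handlers' `/api/v2/checks` route):

```go
package main

import (
	"fmt"
	"path"
)

// idPath mirrors checkIDPath: join a route prefix with a resource ID string.
func idPath(prefix, id string) string {
	return path.Join(prefix, id)
}

func main() {
	fmt.Println(idPath("/api/v2/checks", "0000000000000001"))
	// path.Join also cleans a trailing slash in the prefix, so both
	// spellings produce the same route:
	fmt.Println(idPath("/api/v2/checks/", "0000000000000001"))
}
```

Using `path.Join` (not `filepath.Join`) keeps the separator a forward slash on every platform, which is what URL paths require.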
@@ -14,13 +14,14 @@ import (
// NewHTTPClient creates a new httpc.Client type. This call sets all
// the options that are important to the http pkg on the httpc client.
// The default status fn and so forth will all be set for the caller.
+// In addition, some options can be specified. Those will be added to the defaults.
-func NewHTTPClient(addr, token string, insecureSkipVerify bool) (*httpc.Client, error) {
+func NewHTTPClient(addr, token string, insecureSkipVerify bool, opts ...httpc.ClientOptFn) (*httpc.Client, error) {
	u, err := url.Parse(addr)
	if err != nil {
		return nil, err
	}

-	opts := []httpc.ClientOptFn{
+	defaultOpts := []httpc.ClientOptFn{
		httpc.WithAddr(addr),
		httpc.WithContentType("application/json"),
		httpc.WithHTTPClient(NewClient(u.Scheme, insecureSkipVerify)),

@@ -28,8 +29,9 @@ func NewHTTPClient(addr, token string, insecureSkipVerify bool) (*httpc.Client,
		httpc.WithStatusFn(CheckError),
	}
	if token != "" {
-		opts = append(opts, httpc.WithAuthToken(token))
+		defaultOpts = append(defaultOpts, httpc.WithAuthToken(token))
	}
+	opts = append(defaultOpts, opts...)
	return httpc.New(opts...)
}

@@ -42,21 +44,35 @@ type Service struct {
	*AuthorizationService
	*BackupService
	*BucketService
	*TaskService
	*DashboardService
	*OrganizationService
	*NotificationRuleService
	*UserService
	*VariableService
	*WriteService
	DocumentService
	*CheckService
	*NotificationEndpointService
	*UserResourceMappingService
	*TelegrafService
	*LabelService
	*SecretService
}

-// NewService returns a service that is an HTTP
-// client to a remote
-func NewService(addr, token string) (*Service, error) {
-	httpClient, err := NewHTTPClient(addr, token, false)
-	if err != nil {
-		return nil, err
-	}
-
+// NewService returns a service that is an HTTP client to a remote.
+// Address and token are needed for those services that do not use httpc.Client,
+// but use those for configuring.
+// Usually one would do:
+//
+//	c := NewHTTPClient(addr, token, insecureSkipVerify)
+//	s := NewService(c, addr, token)
+//
+// So one should provide the same `addr` and `token` to both calls to ensure consistency
+// in the behavior of the returned service.
+func NewService(httpClient *httpc.Client, addr, token string) (*Service, error) {
	return &Service{
		Addr:  addr,
		Token: token,

@@ -65,15 +81,24 @@ func NewService(addr, token string) (*Service, error) {
			Addr:  addr,
			Token: token,
		},
-		BucketService:       &BucketService{Client: httpClient},
-		DashboardService:    &DashboardService{Client: httpClient},
-		OrganizationService: &OrganizationService{Client: httpClient},
-		UserService:         &UserService{Client: httpClient},
-		VariableService:     &VariableService{Client: httpClient},
+		BucketService:           &BucketService{Client: httpClient},
+		TaskService:             &TaskService{Client: httpClient},
+		DashboardService:        &DashboardService{Client: httpClient},
+		OrganizationService:     &OrganizationService{Client: httpClient},
+		NotificationRuleService: &NotificationRuleService{Client: httpClient},
+		UserService:             &UserService{Client: httpClient},
+		VariableService:         &VariableService{Client: httpClient},
		WriteService: &WriteService{
			Addr:  addr,
			Token: token,
		},
+		DocumentService:             NewDocumentService(httpClient),
+		CheckService:                &CheckService{Client: httpClient},
+		NotificationEndpointService: &NotificationEndpointService{Client: httpClient},
+		UserResourceMappingService:  &UserResourceMappingService{Client: httpClient},
+		TelegrafService:             NewTelegrafService(httpClient),
+		LabelService:                &LabelService{Client: httpClient},
+		SecretService:               &SecretService{Client: httpClient},
	}, nil
}
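The `NewHTTPClient` change appends caller-supplied options after the defaults (`opts = append(defaultOpts, opts...)`); since option functions run in order, a caller's option applied later can override a default. A self-contained sketch of that functional-options pattern (the `client` struct and option names here are illustrative, not httpc's real API):

```go
package main

import "fmt"

// client and ClientOptFn sketch the functional-options pattern; the real
// httpc.Client carries far more state than this.
type client struct {
	addr        string
	contentType string
}

type ClientOptFn func(*client)

func WithAddr(addr string) ClientOptFn      { return func(c *client) { c.addr = addr } }
func WithContentType(ct string) ClientOptFn { return func(c *client) { c.contentType = ct } }

// newClient applies options in order, so options appended later win.
func newClient(opts ...ClientOptFn) *client {
	c := &client{}
	for _, o := range opts {
		o(c)
	}
	return c
}

func main() {
	defaultOpts := []ClientOptFn{
		WithAddr("http://localhost:9999"),
		WithContentType("application/json"),
	}
	// Caller options go after the defaults, as in the updated NewHTTPClient,
	// so they take precedence over them.
	callerOpts := []ClientOptFn{WithContentType("application/x-msgpack")}
	c := newClient(append(defaultOpts, callerOpts...)...)
	fmt.Println(c.addr, c.contentType) // http://localhost:9999 application/x-msgpack
}
```

Appending rather than prepending is the whole point of the diff: defaults still apply, but they no longer shadow what the caller asked for.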
@@ -5,15 +5,31 @@ import (
	"encoding/json"
	"fmt"
	"net/http"
	"path"

	"github.com/influxdata/httprouter"
	"github.com/influxdata/influxdb"
	pcontext "github.com/influxdata/influxdb/context"
	"github.com/influxdata/influxdb/kit/tracing"
	"github.com/influxdata/influxdb/pkg/httpc"
	"go.uber.org/zap"
)

const prefixDocuments = "/api/v2/documents"

// DocumentService is the HTTP-exposed portion of the document service.
type DocumentService interface {
	CreateDocument(ctx context.Context, namespace string, orgID influxdb.ID, d *influxdb.Document) error
	GetDocuments(ctx context.Context, namespace string, orgID influxdb.ID) ([]*influxdb.Document, error)
	GetDocument(ctx context.Context, namespace string, id influxdb.ID) (*influxdb.Document, error)
	UpdateDocument(ctx context.Context, namespace string, d *influxdb.Document) error
	DeleteDocument(ctx context.Context, namespace string, id influxdb.ID) error

	GetDocumentLabels(ctx context.Context, namespace string, id influxdb.ID) ([]*influxdb.Label, error)
	AddDocumentLabel(ctx context.Context, namespace string, did influxdb.ID, lid influxdb.ID) (*influxdb.Label, error)
	DeleteDocumentLabel(ctx context.Context, namespace string, did influxdb.ID, lid influxdb.ID) error
}

// DocumentBackend is all services and associated parameters required to construct
// the DocumentHandler.
type DocumentBackend struct {

@@ -396,14 +412,14 @@ func (h *DocumentHandler) getDocument(w http.ResponseWriter, r *http.Request) (*
func (h *DocumentHandler) handleGetDocument(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context()

-	d, namspace, err := h.getDocument(w, r)
+	d, namespace, err := h.getDocument(w, r)
	if err != nil {
		h.HandleHTTPError(ctx, err, w)
		return
	}
	h.log.Debug("Document retrieved", zap.String("document", fmt.Sprint(d)))

-	if err := encodeResponse(ctx, w, http.StatusOK, newDocumentResponse(namspace, d)); err != nil {
+	if err := encodeResponse(ctx, w, http.StatusOK, newDocumentResponse(namespace, d)); err != nil {
		logEncodingError(h.log, r, err)
		return
	}

@@ -540,7 +556,7 @@ func (h *DocumentHandler) handlePutDocument(w http.ResponseWriter, r *http.Reque
		return
	}

-	ds, err := s.FindDocuments(ctx, influxdb.WhereID(req.Document.ID), influxdb.IncludeContent)
+	ds, err := s.FindDocuments(ctx, influxdb.WhereID(req.Document.ID), influxdb.IncludeContent, influxdb.IncludeLabels)
	if err != nil {
		h.HandleHTTPError(ctx, err, w)
		return
@@ -602,3 +618,168 @@ func decodePutDocumentRequest(ctx context.Context, r *http.Request) (*putDocumen

	return req, nil
}

type documentService struct {
	Client *httpc.Client
}

// NewDocumentService creates a client to connect to Influx via HTTP to manage documents.
func NewDocumentService(client *httpc.Client) DocumentService {
	return &documentService{
		Client: client,
	}
}

func buildDocumentsPath(namespace string) string {
	return path.Join(prefixDocuments, namespace)
}

func buildDocumentPath(namespace string, id influxdb.ID) string {
	return path.Join(prefixDocuments, namespace, id.String())
}

func buildDocumentLabelsPath(namespace string, id influxdb.ID) string {
	return path.Join(prefixDocuments, namespace, id.String(), "labels")
}

func buildDocumentLabelPath(namespace string, did influxdb.ID, lid influxdb.ID) string {
	return path.Join(prefixDocuments, namespace, did.String(), "labels", lid.String())
}

// CreateDocument creates a document in the specified namespace.
// Only the ids of the given labels will be used.
// After the call, if successful, the input document will contain the new one.
func (s *documentService) CreateDocument(ctx context.Context, namespace string, orgID influxdb.ID, d *influxdb.Document) error {
	span, _ := tracing.StartSpanFromContext(ctx)
	defer span.Finish()

	lids := make([]influxdb.ID, len(d.Labels))
	for i := 0; i < len(lids); i++ {
		lids[i] = d.Labels[i].ID
	}
	// Set a valid ID for proper marshaling.
	// It will be assigned by the backend in any case.
	d.ID = influxdb.ID(1)
	req := &postDocumentRequest{
		Document: d,
		OrgID:    orgID,
		Labels:   lids,
	}
	var resp documentResponse
	if err := s.Client.
		PostJSON(req, buildDocumentsPath(namespace)).
		DecodeJSON(&resp).
		Do(ctx); err != nil {
		return err
	}
	*d = *resp.Document
	return nil
}

// GetDocuments returns the documents for a `namespace` and an `orgID`.
// Returned documents do not contain their content.
func (s *documentService) GetDocuments(ctx context.Context, namespace string, orgID influxdb.ID) ([]*influxdb.Document, error) {
	span, _ := tracing.StartSpanFromContext(ctx)
	defer span.Finish()

	var resp documentsResponse
	r := s.Client.
		Get(buildDocumentsPath(namespace)).
		DecodeJSON(&resp)
	r = r.QueryParams([2]string{"orgID", orgID.String()})
	if err := r.Do(ctx); err != nil {
		return nil, err
	}
	docs := make([]*influxdb.Document, len(resp.Documents))
	for i := 0; i < len(docs); i++ {
		docs[i] = resp.Documents[i].Document
	}
	return docs, nil
}

// GetDocument returns the document with the specified id.
func (s *documentService) GetDocument(ctx context.Context, namespace string, id influxdb.ID) (*influxdb.Document, error) {
	span, _ := tracing.StartSpanFromContext(ctx)
	defer span.Finish()

	var resp documentResponse
	if err := s.Client.
		Get(buildDocumentPath(namespace, id)).
		DecodeJSON(&resp).
		Do(ctx); err != nil {
		return nil, err
	}
	return resp.Document, nil
}

// UpdateDocument updates the document with id `d.ID` and replaces the content of `d` with the patched value.
func (s *documentService) UpdateDocument(ctx context.Context, namespace string, d *influxdb.Document) error {
	span, _ := tracing.StartSpanFromContext(ctx)
	defer span.Finish()

	var resp documentResponse
	if err := s.Client.
		PutJSON(d, buildDocumentPath(namespace, d.ID)).
		DecodeJSON(&resp).
		Do(ctx); err != nil {
		return err
	}
	*d = *resp.Document
	return nil
}

// DeleteDocument deletes the document with the given id.
func (s *documentService) DeleteDocument(ctx context.Context, namespace string, id influxdb.ID) error {
	span, _ := tracing.StartSpanFromContext(ctx)
	defer span.Finish()

	if err := s.Client.
		Delete(buildDocumentPath(namespace, id)).
		Do(ctx); err != nil {
		return err
	}
	return nil
}

// GetDocumentLabels returns the labels associated to the document with the given id.
func (s *documentService) GetDocumentLabels(ctx context.Context, namespace string, id influxdb.ID) ([]*influxdb.Label, error) {
	span, _ := tracing.StartSpanFromContext(ctx)
	defer span.Finish()

	var resp labelsResponse
	if err := s.Client.
		Get(buildDocumentLabelsPath(namespace, id)).
		DecodeJSON(&resp).
		Do(ctx); err != nil {
		return nil, err
	}
	return resp.Labels, nil
}

// AddDocumentLabel adds the label with id `lid` to the document with id `did`.
func (s *documentService) AddDocumentLabel(ctx context.Context, namespace string, did influxdb.ID, lid influxdb.ID) (*influxdb.Label, error) {
	span, _ := tracing.StartSpanFromContext(ctx)
	defer span.Finish()

	mapping := &influxdb.LabelMapping{
		LabelID: lid,
	}
	var resp labelResponse
	if err := s.Client.
		PostJSON(mapping, buildDocumentLabelsPath(namespace, did)).
		DecodeJSON(&resp).
		Do(ctx); err != nil {
		return nil, err
	}
	return &resp.Label, nil
}

// DeleteDocumentLabel deletes the label with id `lid` from the document with id `did`.
func (s *documentService) DeleteDocumentLabel(ctx context.Context, namespace string, did influxdb.ID, lid influxdb.ID) error {
	span, _ := tracing.StartSpanFromContext(ctx)
	defer span.Finish()

	if err := s.Client.
		Delete(buildDocumentLabelPath(namespace, did, lid)).
		Do(ctx); err != nil {
		return err
	}
	return nil
}
@ -0,0 +1,700 @@
|
|||
package http
|
||||
|
||||
import (
|
||||
"context"
|
||||
"net/http/httptest"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/google/go-cmp/cmp"
|
||||
"github.com/influxdata/influxdb"
|
||||
icontext "github.com/influxdata/influxdb/context"
|
||||
httpmock "github.com/influxdata/influxdb/http/mock"
|
||||
"github.com/influxdata/influxdb/inmem"
|
||||
"github.com/influxdata/influxdb/kit/transport/http"
|
||||
"github.com/influxdata/influxdb/kv"
|
||||
"github.com/influxdata/influxdb/mock"
|
||||
itesting "github.com/influxdata/influxdb/testing"
|
||||
"go.uber.org/zap/zaptest"
|
||||
)
|
||||
|
||||
const namespace = "testing"
|
||||
|
||||
type fixture struct {
|
||||
Org *influxdb.Organization
|
||||
Labels []*influxdb.Label
|
||||
Document *influxdb.Document
|
||||
AnotherDocument *influxdb.Document
|
||||
}
|
||||
|
||||
func setup(t *testing.T) (func(auth influxdb.Authorizer) *httptest.Server, func(serverUrl string) DocumentService, fixture) {
|
||||
svc := kv.NewService(zaptest.NewLogger(t), inmem.NewKVStore())
|
||||
ctx := context.Background()
|
||||
// Need this to make resource creation work.
|
||||
// We are not testing authorization in the setup.
|
||||
ctx = icontext.SetAuthorizer(ctx, mock.Authorization{})
|
||||
if err := svc.Initialize(ctx); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
ds, err := svc.CreateDocumentStore(ctx, namespace)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create document store: %v", err)
|
||||
}
|
||||
|
||||
org := &influxdb.Organization{Name: "org"}
|
||||
itesting.MustCreateOrgs(ctx, svc, org)
|
||||
l1 := &influxdb.Label{Name: "l1", OrgID: org.ID}
|
||||
l2 := &influxdb.Label{Name: "l2", OrgID: org.ID}
|
||||
l3 := &influxdb.Label{Name: "l3", OrgID: org.ID}
|
||||
itesting.MustCreateLabels(ctx, svc, l1, l2, l3)
|
||||
doc := &influxdb.Document{
|
||||
Content: "I am a free document",
|
||||
}
|
||||
if err := ds.CreateDocument(ctx, doc,
|
||||
// make the org owner of the document
|
||||
influxdb.AuthorizedWithOrgID(mock.Authorization{}, org.ID),
|
||||
influxdb.WithLabel(l1.ID),
|
||||
influxdb.WithLabel(l3.ID)); err != nil {
|
||||
panic(err)
|
||||
}
|
||||
adoc := &influxdb.Document{
|
||||
Content: "I am another document",
|
||||
}
|
||||
if err := ds.CreateDocument(ctx, adoc,
|
||||
// make the org owner of the document
|
||||
influxdb.AuthorizedWithOrgID(mock.Authorization{}, org.ID),
|
||||
influxdb.WithLabel(l1.ID),
|
||||
influxdb.WithLabel(l2.ID)); err != nil {
|
||||
panic(err)
|
||||
}
|
||||
backend := NewMockDocumentBackend(t)
|
||||
backend.HTTPErrorHandler = http.ErrorHandler(0)
|
||||
backend.DocumentService = svc
|
||||
backend.LabelService = svc
|
||||
serverFn := func(auth influxdb.Authorizer) *httptest.Server {
|
||||
handler := httpmock.NewAuthMiddlewareHandler(NewDocumentHandler(backend), auth)
|
||||
return httptest.NewServer(handler)
|
||||
}
|
||||
clientFn := func(serverUrl string) DocumentService {
|
||||
return NewDocumentService(mustNewHTTPClient(t, serverUrl, ""))
|
||||
}
|
||||
f := fixture{
|
||||
Org: org,
|
||||
Labels: []*influxdb.Label{l1, l2, l3},
|
||||
Document: doc,
|
||||
AnotherDocument: adoc,
|
||||
}
|
||||
return serverFn, clientFn, f
|
||||
}
|
||||
|
||||
func (f fixture) authForDocument(action influxdb.Action) influxdb.Authorizer {
|
||||
return &influxdb.Authorization{
|
||||
Permissions: []influxdb.Permission{
|
||||
// Access the document with this specific action.
|
||||
{
|
||||
Action: action,
|
||||
Resource: influxdb.Resource{
|
||||
Type: influxdb.DocumentsResourceType,
|
||||
ID: &f.Document.ID,
|
||||
},
|
||||
},
|
||||
// Access all documents.
|
||||
{
|
||||
Action: influxdb.ReadAction,
|
||||
Resource: influxdb.Resource{
|
||||
Type: influxdb.OrgsResourceType,
|
||||
OrgID: &f.Org.ID,
|
||||
},
|
||||
},
|
||||
},
|
||||
Status: influxdb.Active,
|
||||
}
|
||||
}
|
||||
|
||||
// TestDocumentService tests all the service functions using the document HTTP client.
|
||||
func TestDocumentService(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
fn func(t *testing.T)
|
||||
}{
|
||||
{
|
||||
name: "CreateDocument",
|
||||
fn: CreateDocument,
|
||||
},
|
||||
{
|
||||
name: "GetDocument",
|
||||
fn: GetDocument,
|
||||
},
|
||||
{
|
||||
name: "GetDocuments",
|
||||
fn: GetDocuments,
|
||||
},
|
||||
{
|
||||
name: "UpdateDocument",
|
||||
fn: UpdateDocument,
|
||||
},
|
||||
{
|
||||
name: "DeleteDocument",
|
||||
fn: DeleteDocument,
|
||||
},
|
||||
|
||||
{
|
||||
name: "GetLabels",
|
||||
fn: GetLabels,
|
||||
},
|
||||
{
|
||||
name: "AddLabels",
|
||||
fn: AddLabels,
|
||||
},
|
||||
{
|
||||
name: "DeleteLabel",
|
||||
fn: DeleteLabel,
|
||||
},
|
||||
}
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
tt.fn(t)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func CreateDocument(t *testing.T) {
	t.Run("with content", func(t *testing.T) {
		serverFn, clientFn, fx := setup(t)
		server := serverFn(mock.Authorization{})
		defer server.Close()
		client := clientFn(server.URL)
		orgID := fx.Org.ID
		d := &influxdb.Document{
			Content: "I am the content",
		}
		if err := client.CreateDocument(context.Background(), namespace, orgID, d); err != nil {
			t.Fatal(err)
		}
		if d.ID <= 1 {
			t.Errorf("invalid document id: %v", d.ID)
		}
		if diff := cmp.Diff(d.Content, "I am the content"); diff != "" {
			t.Errorf("got unexpected content:\n\t%s", diff)
		}
		if len(d.Labels) > 0 {
			t.Errorf("got unexpected labels: %v", d.Labels)
		}
	})

	t.Run("with content and labels", func(t *testing.T) {
		serverFn, clientFn, fx := setup(t)
		server := serverFn(mock.Authorization{})
		defer server.Close()
		client := clientFn(server.URL)
		orgID := fx.Org.ID
		d := &influxdb.Document{
			Content: "I am the content",
			Labels:  fx.Labels,
		}
		if err := client.CreateDocument(context.Background(), namespace, orgID, d); err != nil {
			t.Fatal(err)
		}
		if d.ID <= 1 {
			t.Errorf("invalid document id: %v", d.ID)
		}
		if diff := cmp.Diff(d.Content, "I am the content"); diff != "" {
			t.Errorf("got unexpected content:\n\t%s", diff)
		}
		if diff := cmp.Diff(d.Labels, fx.Labels); diff != "" {
			t.Errorf("got unexpected labels:\n\t%v", d.Labels)
		}
	})

	t.Run("bad label", func(t *testing.T) {
		serverFn, clientFn, fx := setup(t)
		server := serverFn(mock.Authorization{})
		defer server.Close()
		client := clientFn(server.URL)
		orgID := fx.Org.ID
		d := &influxdb.Document{
			Content: "I am the content",
			Labels: []*influxdb.Label{
				{
					ID:   influxdb.ID(1),
					Name: "bad",
				},
			},
		}
		if err := client.CreateDocument(context.Background(), namespace, orgID, d); err != nil {
			if !strings.HasPrefix(err.Error(), "label not found") {
				t.Errorf("unexpected error: %v", err)
			}
		} else {
			t.Error("expected error got none")
		}
	})

	t.Run("unauthorized", func(t *testing.T) {
		serverFn, clientFn, fx := setup(t)
		server := serverFn(&influxdb.Authorization{
			Status: influxdb.Active,
		})
		defer server.Close()
		client := clientFn(server.URL)
		d := &influxdb.Document{
			Content: "I am the content",
		}
		if err := client.CreateDocument(context.Background(), namespace, fx.Org.ID, d); err != nil {
			if !strings.HasPrefix(err.Error(), "authorization cannot add org as document owner") {
				t.Errorf("unexpected error: %v", err)
			}
		} else {
			t.Error("expected error got none")
		}
	})
}

func GetDocument(t *testing.T) {
	t.Run("existing", func(t *testing.T) {
		serverFn, clientFn, fx := setup(t)
		want := fx.Document
		server := serverFn(fx.authForDocument(influxdb.ReadAction))
		defer server.Close()
		client := clientFn(server.URL)
		got, err := client.GetDocument(context.Background(), namespace, want.ID)
		if err != nil {
			t.Fatal(err)
		}
		if diff := cmp.Diff(got, want); diff != "" {
			t.Errorf("got unexpected document:\n\t%s", diff)
		}
	})

	t.Run("non existing", func(t *testing.T) {
		serverFn, clientFn, fx := setup(t)
		id := fx.Document.ID + 42
		server := serverFn(&influxdb.Authorization{
			Permissions: []influxdb.Permission{
				{
					Action: influxdb.ReadAction,
					Resource: influxdb.Resource{
						Type: influxdb.DocumentsResourceType,
						ID:   &id,
					},
				},
			},
			Status: influxdb.Active,
		})
		defer server.Close()
		client := clientFn(server.URL)
		if _, err := client.GetDocument(context.Background(), namespace, id); err != nil {
			if !strings.HasPrefix(err.Error(), "document not found") {
				t.Errorf("unexpected error: %v", err)
			}
		} else {
			t.Error("expected error got none")
		}
	})

	t.Run("unauthorized", func(t *testing.T) {
		serverFn, clientFn, fx := setup(t)
		server := serverFn(&influxdb.Authorization{
			Status: influxdb.Active,
		})
		defer server.Close()
		client := clientFn(server.URL)
		if _, err := client.GetDocument(context.Background(), namespace, fx.Document.ID); err != nil {
			if !strings.HasPrefix(err.Error(), "authorization cannot access document") {
				t.Errorf("unexpected error: %v", err)
			}
		} else {
			t.Error("expected error got none")
		}
	})
}

func GetDocuments(t *testing.T) {
	t.Run("get existing documents", func(t *testing.T) {
		serverFn, clientFn, fx := setup(t)
		server := serverFn(&influxdb.Authorization{
			Permissions: []influxdb.Permission{
				{
					Action: influxdb.ReadAction,
					Resource: influxdb.Resource{
						Type:  influxdb.OrgsResourceType,
						OrgID: &fx.Org.ID,
					},
				},
			},
			Status: influxdb.Active,
		})
		defer server.Close()
		client := clientFn(server.URL)
		got, err := client.GetDocuments(context.Background(), namespace, fx.Org.ID)
		if err != nil {
			t.Fatal(err)
		}
		want := []*influxdb.Document{fx.Document, fx.AnotherDocument}
		want[0].Content = nil // response will not contain the content of documents
		want[1].Content = nil // response will not contain the content of documents
		if diff := cmp.Diff(got, want); diff != "" {
			t.Errorf("got unexpected document:\n\t%s", diff)
		}
	})

	t.Run("unauthorized", func(t *testing.T) {
		serverFn, clientFn, fx := setup(t)
		server := serverFn(&influxdb.Authorization{
			Status: influxdb.Active,
		})
		defer server.Close()
		client := clientFn(server.URL)
		if _, err := client.GetDocuments(context.Background(), namespace, fx.Org.ID); err != nil {
			if !strings.HasPrefix(err.Error(), "authorizer cannot access documents") {
				t.Errorf("unexpected error: %v", err)
			}
		} else {
			t.Error("expected error got none")
		}
	})
}

func UpdateDocument(t *testing.T) {
	// TODO(affo): investigate why all these tests pass both with a mock.Authorization{}
	// and a fx.authForDocument(influxdb.WriteAction), while GetDocument tests need a specific auth.
	t.Run("update content", func(t *testing.T) {
		serverFn, clientFn, fx := setup(t)
		server := serverFn(mock.Authorization{})
		defer server.Close()
		client := clientFn(server.URL)
		want := fx.Document
		want.Content = "new content"
		if err := client.UpdateDocument(context.Background(), namespace, want); err != nil {
			t.Fatal(err)
		}
		got, err := client.GetDocument(context.Background(), namespace, want.ID)
		if err != nil {
			t.Fatal(err)
		}
		if diff := cmp.Diff(got, want); diff != "" {
			t.Errorf("got unexpected document:\n\t%s", diff)
		}
	})

	t.Run("update labels", func(t *testing.T) {
		serverFn, clientFn, fx := setup(t)
		server := serverFn(mock.Authorization{})
		defer server.Close()
		client := clientFn(server.URL)
		want := fx.Document
		want.Labels = want.Labels[:0]
		if err := client.UpdateDocument(context.Background(), namespace, want); err != nil {
			t.Fatal(err)
		}
		got, err := client.GetDocument(context.Background(), namespace, want.ID)
		if err != nil {
			t.Fatal(err)
		}
		if diff := cmp.Diff(got, want); diff != "" {
			t.Errorf("got unexpected document:\n\t%s", diff)
		}
	})

	t.Run("update meta", func(t *testing.T) {
		serverFn, clientFn, fx := setup(t)
		server := serverFn(mock.Authorization{})
		defer server.Close()
		client := clientFn(server.URL)
		want := fx.Document
		want.Meta.Name = "new name"
		if err := client.UpdateDocument(context.Background(), namespace, want); err != nil {
			t.Fatal(err)
		}
		got, err := client.GetDocument(context.Background(), namespace, want.ID)
		if err != nil {
			t.Fatal(err)
		}
		if diff := cmp.Diff(got, want); diff != "" {
			t.Errorf("got unexpected document:\n\t%s", diff)
		}
	})

	t.Run("unauthorized - another document", func(t *testing.T) {
		serverFn, clientFn, fx := setup(t)
		server := serverFn(fx.authForDocument(influxdb.WriteAction))
		defer server.Close()
		client := clientFn(server.URL)
		want := fx.AnotherDocument
		want.Content = "new content"
		if err := client.UpdateDocument(context.Background(), namespace, want); err != nil {
			if !strings.HasPrefix(err.Error(), "authorization cannot access document") {
				t.Errorf("unexpected error: %v", err)
			}
		} else {
			t.Error("expected error got none")
		}
	})

	t.Run("unauthorized - insufficient permissions", func(t *testing.T) {
		serverFn, clientFn, fx := setup(t)
		server := serverFn(fx.authForDocument(influxdb.ReadAction))
		defer server.Close()
		client := clientFn(server.URL)
		want := fx.Document
		want.Content = "new content"
		if err := client.UpdateDocument(context.Background(), namespace, want); err != nil {
			if !strings.HasPrefix(err.Error(), "authorization cannot access document") {
				t.Errorf("unexpected error: %v", err)
			}
		} else {
			t.Error("expected error got none")
		}
	})
}

func DeleteDocument(t *testing.T) {
	t.Run("existing", func(t *testing.T) {
		serverFn, clientFn, fx := setup(t)
		want := fx.Document
		server := serverFn(fx.authForDocument(influxdb.WriteAction))
		defer server.Close()
		client := clientFn(server.URL)
		pre, err := client.GetDocuments(context.Background(), namespace, fx.Org.ID)
		if err != nil {
			t.Fatal(err)
		}
		l := len(pre)
		if err := client.DeleteDocument(context.Background(), namespace, want.ID); err != nil {
			t.Fatal(err)
		}
		got, err := client.GetDocuments(context.Background(), namespace, fx.Org.ID)
		if err != nil {
			t.Fatal(err)
		}
		lgot := len(got)
		lwant := l - 1
		if diff := cmp.Diff(lgot, lwant); diff != "" {
			t.Errorf("got unexpected length of docs:\n\t%v", diff)
		}
	})

	t.Run("non existing", func(t *testing.T) {
		serverFn, clientFn, fx := setup(t)
		id := fx.Document.ID + 42
		server := serverFn(&influxdb.Authorization{
			Permissions: []influxdb.Permission{
				{
					Action: influxdb.ReadAction,
					Resource: influxdb.Resource{
						Type: influxdb.DocumentsResourceType,
						ID:   &id,
					},
				},
				{
					Action: influxdb.ReadAction,
					Resource: influxdb.Resource{
						Type:  influxdb.OrgsResourceType,
						OrgID: &fx.Org.ID,
					},
				},
			},
			Status: influxdb.Active,
		})
		defer server.Close()
		client := clientFn(server.URL)
		if err := client.DeleteDocument(context.Background(), namespace, id); err != nil {
			if !strings.HasPrefix(err.Error(), "document not found") {
				t.Errorf("unexpected error: %v", err)
			}
		} else {
			t.Error("expected error got none")
		}
		got, err := client.GetDocuments(context.Background(), namespace, fx.Org.ID)
		if err != nil {
			t.Fatal(err)
		}
		lgot := len(got)
		lwant := 2
		if diff := cmp.Diff(lgot, lwant); diff != "" {
			t.Errorf("got unexpected length of docs:\n\t%v", diff)
		}
	})

	t.Run("unauthorized", func(t *testing.T) {
		// TODO(affo)
		t.Skip("should not be able to delete with read permission")

		serverFn, clientFn, fx := setup(t)
		server := serverFn(fx.authForDocument(influxdb.ReadAction))
		defer server.Close()
		client := clientFn(server.URL)
		if err := client.DeleteDocument(context.Background(), namespace, fx.Document.ID); err != nil {
			if !strings.HasPrefix(err.Error(), "authorization cannot access document") {
				t.Errorf("unexpected error: %v", err)
			}
		} else {
			t.Error("expected error got none")
		}
	})
}

func GetLabels(t *testing.T) {
	t.Run("get labels", func(t *testing.T) {
		serverFn, clientFn, fx := setup(t)
		server := serverFn(fx.authForDocument(influxdb.ReadAction))
		defer server.Close()
		client := clientFn(server.URL)
		got, err := client.GetDocumentLabels(context.Background(), namespace, fx.Document.ID)
		if err != nil {
			t.Fatal(err)
		}
		want := fx.Document.Labels
		if diff := cmp.Diff(got, want); diff != "" {
			t.Errorf("got unexpected labels:\n\t%s", diff)
		}
	})

	t.Run("unauthorized", func(t *testing.T) {
		serverFn, clientFn, fx := setup(t)
		server := serverFn(fx.authForDocument(influxdb.ReadAction))
		defer server.Close()
		client := clientFn(server.URL)
		if _, err := client.GetDocumentLabels(context.Background(), namespace, fx.AnotherDocument.ID); err != nil {
			if !strings.HasPrefix(err.Error(), "authorization cannot access document") {
				t.Errorf("unexpected error: %v", err)
			}
		} else {
			t.Error("expected error got none")
		}
	})
}

func AddLabels(t *testing.T) {
	t.Run("add one", func(t *testing.T) {
		serverFn, clientFn, fx := setup(t)
		server := serverFn(fx.authForDocument(influxdb.WriteAction))
		defer server.Close()
		client := clientFn(server.URL)
		got, err := client.AddDocumentLabel(context.Background(), namespace, fx.Document.ID, fx.Labels[1].ID)
		if err != nil {
			t.Fatal(err)
		}
		want := fx.Labels[1]
		if diff := cmp.Diff(got, want); diff != "" {
			t.Errorf("got unexpected labels:\n\t%s", diff)
		}
		gotLs, err := client.GetDocumentLabels(context.Background(), namespace, fx.Document.ID)
		if err != nil {
			t.Fatal(err)
		}
		wantLs := fx.Labels
		if diff := cmp.Diff(gotLs, wantLs); diff != "" {
			t.Errorf("got unexpected labels:\n\t%s", diff)
		}
	})

	t.Run("unauthorized", func(t *testing.T) {
		// TODO(affo)
		t.Skip("should not be able to add label with read permission")

		serverFn, clientFn, fx := setup(t)
		server := serverFn(fx.authForDocument(influxdb.ReadAction))
		defer server.Close()
		client := clientFn(server.URL)
		if _, err := client.AddDocumentLabel(context.Background(), namespace, fx.Document.ID, fx.Labels[1].ID); err != nil {
			if !strings.HasPrefix(err.Error(), "authorization cannot access document") {
				t.Errorf("unexpected error: %v", err)
			}
		} else {
			t.Error("expected error got none")
		}
	})

	t.Run("add same twice", func(t *testing.T) {
		// TODO(affo)
		t.Skip("should not be able to add the same label twice")

		serverFn, clientFn, fx := setup(t)
		server := serverFn(fx.authForDocument(influxdb.WriteAction))
		defer server.Close()
		client := clientFn(server.URL)
		if _, err := client.AddDocumentLabel(context.Background(), namespace, fx.Document.ID, fx.Labels[0].ID); err != nil {
			if !strings.HasPrefix(err.Error(), "???") {
				t.Errorf("unexpected error: %v", err)
			}
		} else {
			t.Error("expected error got none")
		}
	})
}

func DeleteLabel(t *testing.T) {
	t.Run("existing", func(t *testing.T) {
		serverFn, clientFn, fx := setup(t)
		server := serverFn(fx.authForDocument(influxdb.WriteAction))
		defer server.Close()
		client := clientFn(server.URL)
		pre, err := client.GetDocumentLabels(context.Background(), namespace, fx.Document.ID)
		if err != nil {
			t.Fatal(err)
		}
		l := len(pre)
		if err := client.DeleteDocumentLabel(context.Background(), namespace, fx.Document.ID, fx.Document.Labels[0].ID); err != nil {
			t.Fatal(err)
		}
		got, err := client.GetDocumentLabels(context.Background(), namespace, fx.Document.ID)
		if err != nil {
			t.Fatal(err)
		}
		lgot := len(got)
		lwant := l - 1
		if diff := cmp.Diff(lgot, lwant); diff != "" {
			t.Errorf("got unexpected length of docs:\n\t%v", diff)
		}
	})

	t.Run("non existing", func(t *testing.T) {
		serverFn, clientFn, fx := setup(t)
		server := serverFn(fx.authForDocument(influxdb.WriteAction))
		defer server.Close()
		client := clientFn(server.URL)
		if err := client.DeleteDocumentLabel(context.Background(), namespace, fx.Document.ID, fx.Labels[2].ID+42); err != nil {
			if !strings.HasPrefix(err.Error(), "label not found") {
				t.Errorf("unexpected error: %v", err)
			}
		} else {
			t.Error("expected error got none")
		}
		got, err := client.GetDocumentLabels(context.Background(), namespace, fx.Document.ID)
		if err != nil {
			t.Fatal(err)
		}
		lgot := len(got)
		lwant := 2
		if diff := cmp.Diff(lgot, lwant); diff != "" {
			t.Errorf("got unexpected length of labels:\n\t%v", diff)
		}
	})

	t.Run("unauthorized", func(t *testing.T) {
		// TODO(affo)
		t.Skip("should not be able to delete label with read permission")

		serverFn, clientFn, fx := setup(t)
		server := serverFn(fx.authForDocument(influxdb.ReadAction))
		defer server.Close()
		client := clientFn(server.URL)
		if err := client.DeleteDocumentLabel(context.Background(), namespace, fx.Document.ID, fx.Document.Labels[0].ID); err != nil {
			if !strings.HasPrefix(err.Error(), "authorization cannot access document") {
				t.Errorf("unexpected error: %v", err)
			}
		} else {
			t.Error("expected error got none")
		}
		got, err := client.GetDocumentLabels(context.Background(), namespace, fx.Document.ID)
		if err != nil {
			t.Fatal(err)
		}
		lgot := len(got)
		lwant := 2
		if diff := cmp.Diff(lgot, lwant); diff != "" {
			t.Errorf("got unexpected length of labels:\n\t%v", diff)
		}
	})
}
@@ -58,6 +58,13 @@ func CheckError(resp *http.Response) (err error) {
		}
	}

	if resp.StatusCode == http.StatusUnsupportedMediaType {
		return &platform.Error{
			Code: platform.EInvalid,
			Msg:  fmt.Sprintf("invalid media type: %q", resp.Header.Get("Content-Type")),
		}
	}

	contentType := resp.Header.Get("Content-Type")
	if contentType == "" {
		// Assume JSON if there is no content-type.

@@ -76,11 +83,9 @@ func CheckError(resp *http.Response) (err error) {
	switch mediatype {
	case "application/json":
		pe := new(platform.Error)

		parseErr := json.Unmarshal(buf.Bytes(), pe)
		if parseErr != nil {
		if err := json.Unmarshal(buf.Bytes(), pe); err != nil {
			line, _ := buf.ReadString('\n')
			return errors.Wrap(stderrors.New(strings.TrimSuffix(line, "\n")), parseErr.Error())
			return errors.Wrap(stderrors.New(strings.TrimSuffix(line, "\n")), err.Error())
		}
		return pe
	default:

@@ -6,15 +6,19 @@ import (
	"encoding/json"
	"fmt"
	"net/http"
	"path"
	"time"

	"github.com/influxdata/httprouter"
	"github.com/influxdata/influxdb"
	pctx "github.com/influxdata/influxdb/context"
	"github.com/influxdata/influxdb/notification/rule"
	"github.com/influxdata/influxdb/pkg/httpc"
	"go.uber.org/zap"
)

var _ influxdb.NotificationRuleStore = (*NotificationRuleService)(nil)

type statusDecode struct {
	Status *influxdb.Status `json:"status"`
}

@@ -700,3 +704,164 @@ func (h *NotificationRuleHandler) handleDeleteNotificationRule(w http.ResponseWr

	w.WriteHeader(http.StatusNoContent)
}

// NotificationRuleService is an http client that implements the NotificationRuleStore interface
type NotificationRuleService struct {
	Client *httpc.Client
	*UserResourceMappingService
	*OrganizationService
}

// NewNotificationRuleService wraps an httpc.Client in a NotificationRuleService
func NewNotificationRuleService(client *httpc.Client) *NotificationRuleService {
	return &NotificationRuleService{
		Client: client,
		UserResourceMappingService: &UserResourceMappingService{
			Client: client,
		},
		OrganizationService: &OrganizationService{
			Client: client,
		},
	}
}

type notificationRuleCreateEncoder struct {
	nrc influxdb.NotificationRuleCreate
}

func (n notificationRuleCreateEncoder) MarshalJSON() ([]byte, error) {
	b, err := n.nrc.NotificationRule.MarshalJSON()
	if err != nil {
		return nil, err
	}
	var v map[string]interface{}
	if err := json.Unmarshal(b, &v); err != nil {
		return nil, err
	}
	v["status"] = n.nrc.Status
	return json.Marshal(v)
}

type notificationRuleDecoder struct {
	rule influxdb.NotificationRule
}

func (n *notificationRuleDecoder) UnmarshalJSON(b []byte) error {
	newRule, err := rule.UnmarshalJSON(b)
	if err != nil {
		return err
	}
	n.rule = newRule
	return nil
}

// CreateNotificationRule creates a new NotificationRule from a NotificationRuleCreate;
// the Status on the NotificationRuleCreate determines the status (active/inactive) of the associated Task.
func (s *NotificationRuleService) CreateNotificationRule(ctx context.Context, nr influxdb.NotificationRuleCreate, userID influxdb.ID) error {
	var resp notificationRuleDecoder
	err := s.Client.
		PostJSON(notificationRuleCreateEncoder{nrc: nr}, prefixNotificationRules).
		DecodeJSON(&resp).
		Do(ctx)
	if err != nil {
		return err
	}

	nr.NotificationRule.SetID(resp.rule.GetID())
	nr.NotificationRule.SetOrgID(resp.rule.GetOrgID())

	return nil
}

// FindNotificationRuleByID finds and returns one notification rule with a matching ID.
func (s *NotificationRuleService) FindNotificationRuleByID(ctx context.Context, id influxdb.ID) (influxdb.NotificationRule, error) {
	var resp notificationRuleResponse
	err := s.Client.
		Get(getNotificationRulesIDPath(id)).
		DecodeJSON(&resp).
		Do(ctx)
	return resp.NotificationRule, err
}

// FindNotificationRules returns a list of notification rules that match filter and the total count of matching notification rules.
// Additional options provide pagination & sorting.
func (s *NotificationRuleService) FindNotificationRules(ctx context.Context, filter influxdb.NotificationRuleFilter, opt ...influxdb.FindOptions) ([]influxdb.NotificationRule, int, error) {
	params := findOptionParams(opt...)
	if filter.OrgID != nil {
		params = append(params, [2]string{"orgID", filter.OrgID.String()})
	}

	if filter.Organization != nil {
		params = append(params, [2]string{"org", *filter.Organization})
	}

	if len(filter.Tags) != 0 {
		// loop over tags and append a string of format key:value for each
		for _, tag := range filter.Tags {
			keyvalue := fmt.Sprintf("%s:%s", tag.Key, tag.Value)
			params = append(params, [2]string{"tag", keyvalue})
		}
	}

	var resp struct {
		NotificationRules []notificationRuleDecoder
	}
	err := s.Client.
		Get(prefixNotificationRules).
		QueryParams(params...).
		DecodeJSON(&resp).
		Do(ctx)
	if err != nil {
		return nil, 0, err
	}

	var rules []influxdb.NotificationRule
	for _, r := range resp.NotificationRules {
		rules = append(rules, r.rule)
	}
	return rules, len(rules), nil
}

// UpdateNotificationRule updates a single notification rule.
// Returns the new notification rule after update.
func (s *NotificationRuleService) UpdateNotificationRule(ctx context.Context, id influxdb.ID, nr influxdb.NotificationRuleCreate, userID influxdb.ID) (influxdb.NotificationRule, error) {
	var resp notificationRuleDecoder
	err := s.Client.
		PutJSON(notificationRuleCreateEncoder{nrc: nr}, getNotificationRulesIDPath(id)).
		DecodeJSON(&resp).
		Do(ctx)
	if err != nil {
		return nil, err
	}

	return resp.rule, nil
}

// PatchNotificationRule updates a single notification rule with a changeset.
// Returns the new notification rule state after update.
func (s *NotificationRuleService) PatchNotificationRule(ctx context.Context, id influxdb.ID, upd influxdb.NotificationRuleUpdate) (influxdb.NotificationRule, error) {
	var resp notificationRuleDecoder
	err := s.Client.
		PatchJSON(&upd, getNotificationRulesIDPath(id)).
		DecodeJSON(&resp).
		Do(ctx)
	if err != nil {
		return nil, err
	}

	return resp.rule, nil
}

// DeleteNotificationRule removes a notification rule by ID.
func (s *NotificationRuleService) DeleteNotificationRule(ctx context.Context, id influxdb.ID) error {
	return s.Client.
		Delete(getNotificationRulesIDPath(id)).
		Do(ctx)
}

func getNotificationRulesIDPath(id influxdb.ID) string {
	return path.Join(prefixNotificationRules, id.String())
}
@@ -118,7 +118,7 @@ func newOnboardingResponse(results *platform.OnboardingResults) *onboardingRespo
	}
	return &onboardingResponse{
		User:         newUserResponse(results.User),
		Bucket:       newBucketResponse(results.Bucket, []*platform.Label{}),
		Bucket:       NewBucketResponse(results.Bucket, []*platform.Label{}),
		Organization: newOrgResponse(*results.Org),
		Auth:         newAuthResponse(results.Auth, results.Org, results.User, ps),
	}

@@ -5,6 +5,7 @@ import (
	"fmt"
	"net/http"
	"path"
	"strings"

	"github.com/influxdata/httprouter"
	"github.com/influxdata/influxdb"

@@ -365,6 +366,10 @@ func (h *OrgHandler) handlePatchSecrets(w http.ResponseWriter, r *http.Request)
	h.API.Respond(w, http.StatusNoContent, nil)
}

type secretsDeleteBody struct {
	Secrets []string `json:"secrets"`
}

// handleDeleteSecrets is the HTTP handler for the DELETE /api/v2/orgs/:id/secrets route.
func (h *OrgHandler) handleDeleteSecrets(w http.ResponseWriter, r *http.Request) {
	orgID, err := decodeIDFromCtx(r.Context(), "id")

@@ -373,9 +378,8 @@ func (h *OrgHandler) handleDeleteSecrets(w http.ResponseWriter, r *http.Request)
		return
	}

	var reqBody struct {
		Secrets []string `json:"secrets"`
	}
	var reqBody secretsDeleteBody

	if err := h.API.DecodeJSON(r.Body, &reqBody); err != nil {
		h.API.Err(w, err)
		return

@@ -426,6 +430,85 @@ func newOrganizationLogResponse(id influxdb.ID, es []*influxdb.OperationLogEntry
	}
}

// SecretService connects to Influx via HTTP using tokens to manage secrets.
type SecretService struct {
	Client *httpc.Client
}

// LoadSecret is not implemented for http.
func (s *SecretService) LoadSecret(ctx context.Context, orgID influxdb.ID, k string) (string, error) {
	return "", &influxdb.Error{
		Code: influxdb.EMethodNotAllowed,
		Msg:  "load secret is not implemented for http",
	}
}

// PutSecret is not implemented for http.
func (s *SecretService) PutSecret(ctx context.Context, orgID influxdb.ID, k string, v string) error {
	return &influxdb.Error{
		Code: influxdb.EMethodNotAllowed,
		Msg:  "put secret is not implemented for http",
	}
}

// GetSecretKeys gets all secret keys matching an org ID via HTTP.
func (s *SecretService) GetSecretKeys(ctx context.Context, orgID influxdb.ID) ([]string, error) {
	span, _ := tracing.StartSpanFromContext(ctx)
	defer span.Finish()

	span.LogKV("org-id", orgID)

	path := strings.Replace(organizationsIDSecretsPath, ":id", orgID.String(), 1)

	var ss secretsResponse
	err := s.Client.
		Get(path).
		DecodeJSON(&ss).
		Do(ctx)
	if err != nil {
		return nil, err
	}

	return ss.Secrets, nil
}

// PutSecrets is not implemented for http.
func (s *SecretService) PutSecrets(ctx context.Context, orgID influxdb.ID, m map[string]string) error {
	return &influxdb.Error{
		Code: influxdb.EMethodNotAllowed,
		Msg:  "put secrets is not implemented for http",
	}
}

// PatchSecrets updates existing secrets with new values via HTTP.
func (s *SecretService) PatchSecrets(ctx context.Context, orgID influxdb.ID, m map[string]string) error {
	span, _ := tracing.StartSpanFromContext(ctx)
	defer span.Finish()

	if orgID != 0 {
		span.LogKV("org-id", orgID)
	}

	path := strings.Replace(organizationsIDSecretsPath, ":id", orgID.String(), 1)

	return s.Client.
		PatchJSON(m, path).
		Do(ctx)
}

// DeleteSecret removes a single secret via HTTP.
func (s *SecretService) DeleteSecret(ctx context.Context, orgID influxdb.ID, ks ...string) error {
	span, _ := tracing.StartSpanFromContext(ctx)
	defer span.Finish()

	path := strings.Replace(organizationsIDSecretsDeletePath, ":id", orgID.String(), 1)
	return s.Client.
		PostJSON(secretsDeleteBody{
			Secrets: ks,
		}, path).
		Do(ctx)
}

// OrganizationService connects to Influx via HTTP using tokens to manage organizations.
type OrganizationService struct {
	Client *httpc.Client
@@ -10,12 +10,12 @@ import (
	"net/http/httptest"
	"testing"

	platform "github.com/influxdata/influxdb"
	"github.com/influxdata/influxdb"
	"github.com/influxdata/influxdb/inmem"
	kithttp "github.com/influxdata/influxdb/kit/transport/http"
	"github.com/influxdata/influxdb/kv"
	"github.com/influxdata/influxdb/mock"
	platformtesting "github.com/influxdata/influxdb/testing"
	influxdbtesting "github.com/influxdata/influxdb/testing"
	"go.uber.org/zap/zaptest"
)

@@ -33,14 +33,14 @@ func NewMockOrgBackend(t *testing.T) *OrgBackend {
	}
}

func initOrganizationService(f platformtesting.OrganizationFields, t *testing.T) (platform.OrganizationService, string, func()) {
func initOrganizationService(f influxdbtesting.OrganizationFields, t *testing.T) (influxdb.OrganizationService, string, func()) {
	t.Helper()
	svc := kv.NewService(zaptest.NewLogger(t), inmem.NewKVStore())
	svc.IDGenerator = f.IDGenerator
	svc.OrgBucketIDs = f.OrgBucketIDs
	svc.TimeGenerator = f.TimeGenerator
	if f.TimeGenerator == nil {
		svc.TimeGenerator = platform.RealTimeGenerator{}
		svc.TimeGenerator = influxdb.RealTimeGenerator{}
	}

	ctx := context.Background()

@@ -57,6 +57,7 @@ func initOrganizationService(f platformtesting.OrganizationFields, t *testing.T)
	orgBackend := NewMockOrgBackend(t)
	orgBackend.HTTPErrorHandler = kithttp.ErrorHandler(0)
	orgBackend.OrganizationService = svc
	orgBackend.SecretService = svc
	handler := NewOrgHandler(zaptest.NewLogger(t), orgBackend)
	server := httptest.NewServer(handler)
	client := OrganizationService{

@@ -67,17 +68,52 @@ func initOrganizationService(f platformtesting.OrganizationFields, t *testing.T)
	return &client, "", done
}

func initSecretService(f influxdbtesting.SecretServiceFields, t *testing.T) (influxdb.SecretService, func()) {
	t.Helper()
	svc := kv.NewService(zaptest.NewLogger(t), inmem.NewKVStore())

	ctx := context.Background()
	if err := svc.Initialize(ctx); err != nil {
		t.Fatal(err)
	}

	for _, ss := range f.Secrets {
		if err := svc.PutSecrets(ctx, ss.OrganizationID, ss.Env); err != nil {
			t.Fatalf("failed to populate secrets")
		}
	}

	scrBackend := NewMockOrgBackend(t)
	scrBackend.HTTPErrorHandler = kithttp.ErrorHandler(0)
	scrBackend.SecretService = svc
	handler := NewOrgHandler(zaptest.NewLogger(t), scrBackend)
	server := httptest.NewServer(handler)
	client := SecretService{
		Client: mustNewHTTPClient(t, server.URL, ""),
	}
	done := server.Close

	return &client, done
}

func TestOrganizationService(t *testing.T) {
	t.Parallel()
	platformtesting.OrganizationService(initOrganizationService, t)
	influxdbtesting.OrganizationService(initOrganizationService, t)
}

func TestSecretService(t *testing.T) {
	t.Parallel()
	influxdbtesting.DeleteSecrets(initSecretService, t)
	influxdbtesting.GetSecretKeys(initSecretService, t)
	influxdbtesting.PatchSecrets(initSecretService, t)
}

func TestSecretService_handleGetSecrets(t *testing.T) {
	type fields struct {
		SecretService platform.SecretService
		SecretService influxdb.SecretService
	}
	type args struct {
		orgID platform.ID
		orgID influxdb.ID
	}
	type wants struct {
		statusCode int

@ -95,7 +131,7 @@ func TestSecretService_handleGetSecrets(t *testing.T) {
|
|||
name: "get basic secrets",
|
||||
fields: fields{
|
||||
&mock.SecretService{
|
||||
GetSecretKeysFn: func(ctx context.Context, orgID platform.ID) ([]string, error) {
|
||||
GetSecretKeysFn: func(ctx context.Context, orgID influxdb.ID) ([]string, error) {
|
||||
return []string{"hello", "world"}, nil
|
||||
},
|
||||
},
|
||||
|
@ -124,7 +160,7 @@ func TestSecretService_handleGetSecrets(t *testing.T) {
|
|||
name: "get secrets when there are none",
|
||||
fields: fields{
|
||||
&mock.SecretService{
|
||||
GetSecretKeysFn: func(ctx context.Context, orgID platform.ID) ([]string, error) {
|
||||
GetSecretKeysFn: func(ctx context.Context, orgID influxdb.ID) ([]string, error) {
|
||||
return []string{}, nil
|
||||
},
|
||||
},
|
||||
|
@ -150,9 +186,9 @@ func TestSecretService_handleGetSecrets(t *testing.T) {
|
|||
name: "get secrets when organization has no secret keys",
|
||||
fields: fields{
|
||||
&mock.SecretService{
|
||||
GetSecretKeysFn: func(ctx context.Context, orgID platform.ID) ([]string, error) {
|
||||
return []string{}, &platform.Error{
|
||||
Code: platform.ENotFound,
|
||||
GetSecretKeysFn: func(ctx context.Context, orgID influxdb.ID) ([]string, error) {
|
||||
return []string{}, &influxdb.Error{
|
||||
Code: influxdb.ENotFound,
|
||||
Msg: "organization has no secret keys",
|
||||
}
|
||||
|
||||
|
@ -215,10 +251,10 @@ func TestSecretService_handleGetSecrets(t *testing.T) {
|
|||
|
||||
func TestSecretService_handlePatchSecrets(t *testing.T) {
|
||||
type fields struct {
|
||||
SecretService platform.SecretService
|
||||
SecretService influxdb.SecretService
|
||||
}
|
||||
type args struct {
|
||||
orgID platform.ID
|
||||
orgID influxdb.ID
|
||||
secrets map[string]string
|
||||
}
|
||||
type wants struct {
|
||||
|
@ -237,7 +273,7 @@ func TestSecretService_handlePatchSecrets(t *testing.T) {
|
|||
name: "get basic secrets",
|
||||
fields: fields{
|
||||
&mock.SecretService{
|
||||
PatchSecretsFn: func(ctx context.Context, orgID platform.ID, s map[string]string) error {
|
||||
PatchSecretsFn: func(ctx context.Context, orgID influxdb.ID, s map[string]string) error {
|
||||
return nil
|
||||
},
|
||||
},
|
||||
|
@ -297,10 +333,10 @@ func TestSecretService_handlePatchSecrets(t *testing.T) {
|
|||
|
||||
func TestSecretService_handleDeleteSecrets(t *testing.T) {
|
||||
type fields struct {
|
||||
SecretService platform.SecretService
|
||||
SecretService influxdb.SecretService
|
||||
}
|
||||
type args struct {
|
||||
orgID platform.ID
|
||||
orgID influxdb.ID
|
||||
secrets []string
|
||||
}
|
||||
type wants struct {
|
||||
|
@ -319,7 +355,7 @@ func TestSecretService_handleDeleteSecrets(t *testing.T) {
|
|||
name: "get basic secrets",
|
||||
fields: fields{
|
||||
&mock.SecretService{
|
||||
DeleteSecretFn: func(ctx context.Context, orgID platform.ID, s ...string) error {
|
||||
DeleteSecretFn: func(ctx context.Context, orgID influxdb.ID, s ...string) error {
|
||||
return nil
|
||||
},
|
||||
},
|
||||
|
|
|
@@ -22,18 +22,24 @@ import (
"github.com/influxdata/influxdb"
"github.com/influxdata/influxdb/jsonweb"
"github.com/influxdata/influxdb/query"
transpiler "github.com/influxdata/influxdb/query/influxql"
"github.com/influxdata/influxql"
)

// QueryRequest is a flux query request.
type QueryRequest struct {
Type string `json:"type"`
Query string `json:"query"`

// Flux fields
Extern *ast.File `json:"extern,omitempty"`
Spec *flux.Spec `json:"spec,omitempty"`
AST *ast.Package `json:"ast,omitempty"`
Query string `json:"query"`
Type string `json:"type"`
Dialect QueryDialect `json:"dialect"`

// InfluxQL fields
Bucket string `json:"bucket,omitempty"`

Org *influxdb.Organization `json:"-"`

// PreferNoContent specifies if the Response to this request should

@@ -96,10 +102,14 @@ func (r QueryRequest) Validate() error {
}
}

if r.Type != "flux" {
if r.Type != "flux" && r.Type != "influxql" {
return fmt.Errorf(`unknown query type: %s`, r.Type)
}

if r.Type == "influxql" && r.Bucket == "" {
return fmt.Errorf("bucket parameter is required for influxql queries")
}

if len(r.Dialect.CommentPrefix) > 1 {
return fmt.Errorf("invalid dialect comment prefix: must be length 0 or 1")
}

@@ -247,10 +257,22 @@ func (r QueryRequest) proxyRequest(now func() time.Time) (*query.ProxyRequest, e
// Query is preferred over AST
var compiler flux.Compiler
if r.Query != "" {
compiler = lang.FluxCompiler{
Now: now(),
Extern: r.Extern,
Query: r.Query,
switch r.Type {
case "influxql":
n := now()
compiler = &transpiler.Compiler{
Now: &n,
Query: r.Query,
Bucket: r.Bucket,
}
case "flux":
fallthrough
default:
compiler = lang.FluxCompiler{
Now: now(),
Extern: r.Extern,
Query: r.Query,
}
}
} else if r.AST != nil {
c := lang.ASTCompiler{

@@ -278,20 +300,25 @@ func (r QueryRequest) proxyRequest(now func() time.Time) (*query.ProxyRequest, e
if r.PreferNoContent {
dialect = &query.NoContentDialect{}
} else {
// TODO(nathanielc): Use commentPrefix and dateTimeFormat
// once they are supported.
encConfig := csv.ResultEncoderConfig{
NoHeader: noHeader,
Delimiter: delimiter,
Annotations: r.Dialect.Annotations,
}
if r.PreferNoContentWithError {
dialect = &query.NoContentWithErrorDialect{
ResultEncoderConfig: encConfig,
}
if r.Type == "influxql" {
// Use default transpiler dialect
dialect = &transpiler.Dialect{}
} else {
dialect = &csv.Dialect{
ResultEncoderConfig: encConfig,
// TODO(nathanielc): Use commentPrefix and dateTimeFormat
// once they are supported.
encConfig := csv.ResultEncoderConfig{
NoHeader: noHeader,
Delimiter: delimiter,
Annotations: r.Dialect.Annotations,
}
if r.PreferNoContentWithError {
dialect = &query.NoContentWithErrorDialect{
ResultEncoderConfig: encConfig,
}
} else {
dialect = &csv.Dialect{
ResultEncoderConfig: encConfig,
}
}
}
}

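The `proxyRequest` hunk above hinges on one switch: an `"influxql"` request gets the transpiler compiler (with the required bucket), while `"flux"` and the empty default fall through to the Flux compiler. A self-contained sketch of that selection plus the new validation rules (the returned strings are stand-ins for the real compiler types):

```go
package main

import (
	"errors"
	"fmt"
)

// validate mirrors the additions to QueryRequest.Validate: the type must be
// "flux" or "influxql", and influxql queries must name a bucket.
func validate(queryType, bucket string) error {
	if queryType != "flux" && queryType != "influxql" {
		return fmt.Errorf("unknown query type: %s", queryType)
	}
	if queryType == "influxql" && bucket == "" {
		return errors.New("bucket parameter is required for influxql queries")
	}
	return nil
}

// compilerFor mirrors the switch in proxyRequest: influxql -> transpiler,
// everything else (including "flux") falls through to the Flux compiler.
func compilerFor(queryType string) string {
	switch queryType {
	case "influxql":
		return "transpiler.Compiler"
	case "flux":
		fallthrough
	default:
		return "lang.FluxCompiler"
	}
}

func main() {
	for _, t := range []string{"flux", "influxql"} {
		fmt.Printf("%s -> %s, valid with bucket: %v\n", t, compilerFor(t), validate(t, "my-bucket") == nil)
	}
}
```

Note the asymmetry: validation rejects unknown types outright, but the compiler switch still defaults to Flux, so only requests that passed `Validate` reach it.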
@@ -9,6 +9,7 @@ import (
"io/ioutil"
"net/http"
"net/url"
"sort"
"time"

"github.com/NYTimes/gziphandler"

@@ -26,6 +27,7 @@ import (
kithttp "github.com/influxdata/influxdb/kit/transport/http"
"github.com/influxdata/influxdb/logger"
"github.com/influxdata/influxdb/query"
"github.com/influxdata/influxdb/query/influxql"
"github.com/pkg/errors"
prom "github.com/prometheus/client_golang/prometheus"
"go.uber.org/zap"

@@ -55,7 +57,10 @@ func NewFluxBackend(log *zap.Logger, b *APIBackend) *FluxBackend {
log: log,
QueryEventRecorder: b.QueryEventRecorder,

ProxyQueryService: b.FluxService,
ProxyQueryService: routingQueryService{
InfluxQLService: b.InfluxQLService,
DefaultService: b.FluxService,
},
OrganizationService: b.OrganizationService,
FluxLanguageService: b.FluxLanguageService,
}

@@ -610,3 +615,38 @@ func QueryHealthCheck(url string, insecureSkipVerify bool) check.Response {

return healthResponse
}

// routingQueryService routes queries to specific query services based on their compiler type.
type routingQueryService struct {
// InfluxQLService handles queries with compiler type of "influxql"
InfluxQLService query.ProxyQueryService
// DefaultService handles all other queries
DefaultService query.ProxyQueryService
}

func (s routingQueryService) Check(ctx context.Context) check.Response {
// Produce combined check response
response := check.Response{
Name: "internal-routingQueryService",
Status: check.StatusPass,
}
def := s.DefaultService.Check(ctx)
influxql := s.InfluxQLService.Check(ctx)
if def.Status == check.StatusFail {
response.Status = def.Status
response.Message = def.Message
} else if influxql.Status == check.StatusFail {
response.Status = influxql.Status
response.Message = influxql.Message
}
response.Checks = []check.Response{def, influxql}
sort.Sort(response.Checks)
return response
}

func (s routingQueryService) Query(ctx context.Context, w io.Writer, req *query.ProxyRequest) (flux.Statistics, error) {
if req.Request.Compiler.CompilerType() == influxql.CompilerType {
return s.InfluxQLService.Query(ctx, w, req)
}
return s.DefaultService.Query(ctx, w, req)
}

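`routingQueryService` is a small decorator: it satisfies the same interface as the services it wraps, dispatches on the request's compiler type, and aggregates both backends' health checks. The dispatch half can be sketched with toy types (the interface and service names below are stand-ins, not the real `query.ProxyQueryService`):

```go
package main

import "fmt"

// queryService is a toy stand-in for query.ProxyQueryService.
type queryService interface {
	Query(q string) string
}

// namedService answers every query with its own name, so routing is observable.
type namedService string

func (s namedService) Query(q string) string { return string(s) }

// routingService mirrors routingQueryService: compiler type "influxql"
// goes to the InfluxQL backend, everything else to the default backend.
type routingService struct {
	influxQL queryService
	def      queryService
}

func (r routingService) route(compilerType string) queryService {
	if compilerType == "influxql" {
		return r.influxQL
	}
	return r.def
}

func main() {
	r := routingService{
		influxQL: namedService("influxql-svc"),
		def:      namedService("flux-svc"),
	}
	fmt.Println(r.route("influxql").Query("SELECT * FROM m"))
	fmt.Println(r.route("flux").Query(`from(bucket:"b")`))
}
```

Because the router implements the same interface it wraps, `NewFluxBackend` can swap it in for the plain `b.FluxService` (as the hunk above does) without touching any caller.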
@@ -176,8 +176,9 @@ func decodeCookieSession(ctx context.Context, r *http.Request) (string, error) {
// SetCookieSession adds a cookie for the session to an http request
func SetCookieSession(key string, r *http.Request) {
c := &http.Cookie{
Name: cookieSessionName,
Value: key,
Name: cookieSessionName,
Value: key,
Secure: true,
}

r.AddCookie(c)

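The only behavioral change in this hunk is `Secure: true`, which tells browsers to send the session cookie over HTTPS only. A stdlib sketch of the same construction ("session" stands in for the unexported `cookieSessionName`):

```go
package main

import (
	"fmt"
	"net/http"
)

// sessionCookie builds the session cookie with the Secure attribute set,
// matching the change to SetCookieSession above.
func sessionCookie(key string) *http.Cookie {
	return &http.Cookie{
		Name:   "session", // stand-in for cookieSessionName
		Value:  key,
		Secure: true,
	}
}

func main() {
	c := sessionCookie("abc123")
	// Cookie.String renders the Set-Cookie serialization, including attributes.
	fmt.Println(c.String())
}
```

Serialized, the Secure flag appears as a trailing `; Secure` attribute, so plain-HTTP clients will never receive or replay the session key.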
@@ -3214,7 +3214,9 @@ paths:
content:
application/json:
schema:
$ref: "#/components/schemas/Query"
oneOf:
- $ref: "#/components/schemas/Query"
- $ref: "#/components/schemas/InfluxQLQuery"
application/vnd.flux:
schema:
type: string

@@ -6274,7 +6276,7 @@ components:
description: Flux query script to be analyzed
type: string
Query:
description: Query influx with specific return formatting.
description: Query influx using the Flux language
type: object
required:
- query

@@ -6285,23 +6287,29 @@ components:
description: Query script to execute.
type: string
type:
description: The type of query.
description: The type of query. Must be "flux".
type: string
default: flux
enum:
- flux
- influxql
db:
description: Required for `influxql` type queries.
type: string
rp:
description: Required for `influxql` type queries.
type: string
cluster:
description: Required for `influxql` type queries.
type: string
dialect:
$ref: "#/components/schemas/Dialect"
InfluxQLQuery:
description: Query influx using the InfluxQL language
type: object
required:
- query
properties:
query:
description: InfluxQL query to execute.
type: string
type:
description: The type of query. Must be "influxql".
type: string
enum:
- influxql
bucket:
description: Bucket is to be used instead of the database and retention policy specified in the InfluxQL query.
type: string
Package:
description: Represents a complete package source tree.
type: object

@@ -7145,42 +7153,64 @@ components:
type: string
package:
$ref: "#/components/schemas/Pkg"
packages:
type: array
items:
$ref: "#/components/schemas/Pkg"
secrets:
type: object
additionalProperties:
type: string
remote:
type: object
properties:
url:
type: string
contentType:
type: string
required: ["url"]
remotes:
type: array
items:
type: object
properties:
url:
type: string
contentType:
type: string
required: ["url"]
PkgCreateKind:
type: string
enum:
- bucket
- check
- dashboard
- label
- notification_endpoint
- notification_rule
- task
- telegraf
- variable
PkgCreate:
type: object
properties:
orgIDs:
type: array
items:
type: string
type: object
properties:
orgID:
type: string
resourceFilters:
type: object
properties:
byLabel:
type: array
items:
type: string
byResourceKind:
type: array
items:
$ref: "#/components/schemas/PkgCreateKind"
resources:
type: object
properties:
id:
type: string
kind:
type: string
enum:
- bucket
- check
- dashboard
- label
- notification_endpoint
- notification_rule
- task
- telegraf
- variable
$ref: "#/components/schemas/PkgCreateKind"
name:
type: string
required: [id, kind]

@@ -7203,7 +7233,6 @@ components:
- NotificationEndpointPagerDuty
- NotificationEndpointSlack
- NotificationRule
- NotificationEndpointHTTP
- Task
- Telegraf
- Variable

@@ -1,7 +1,6 @@
package http

import (
"bytes"
"context"
"encoding/json"
"errors"

@@ -10,7 +9,6 @@ import (
"net/url"
"path"
"strconv"
"strings"
"time"

"github.com/influxdata/httprouter"

@@ -18,7 +16,7 @@ import (
pcontext "github.com/influxdata/influxdb/context"
"github.com/influxdata/influxdb/kit/tracing"
"github.com/influxdata/influxdb/kv"
"github.com/influxdata/influxdb/task/backend"
"github.com/influxdata/influxdb/pkg/httpc"
"go.uber.org/zap"
)

@@ -152,6 +150,8 @@ func NewTaskHandler(log *zap.Logger, b *TaskBackend) *TaskHandler {
return h
}

// Task is a package-specific Task format that preserves the expected format for the API,
// where time values are represented as strings
type Task struct {
ID influxdb.ID `json:"id"`
OrganizationID influxdb.ID `json:"orgID"`

@@ -604,7 +604,6 @@ func decodePostTaskRequest(ctx context.Context, r *http.Request) (*postTaskReque
return nil, err
}
tc.OwnerID = auth.GetUserID()

// when creating a task we set the type so we can filter later.
tc.Type = influxdb.TaskSystemType

@@ -1411,6 +1410,7 @@ func (h *TaskHandler) getAuthorizationForTask(ctx context.Context, auth influxdb
type TaskService struct {
Addr string
Token string
Client *httpc.Client
InsecureSkipVerify bool
}

@@ -1419,36 +1419,9 @@ func (t TaskService) FindTaskByID(ctx context.Context, id influxdb.ID) (*Task, e
span, _ := tracing.StartSpanFromContext(ctx)
defer span.Finish()

u, err := NewURL(t.Addr, taskIDPath(id))
if err != nil {
return nil, err
}

req, err := http.NewRequest("GET", u.String(), nil)
if err != nil {
return nil, err
}
SetToken(t.Token, req)

hc := NewClient(u.Scheme, t.InsecureSkipVerify)
resp, err := hc.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()

if err := CheckError(resp); err != nil {
if influxdb.ErrorCode(err) == influxdb.ENotFound {
// ErrTaskNotFound is expected as part of the FindTaskByID contract,
// so return that actual error instead of a different error that looks like it.
// TODO cleanup backend task service error implementation
return nil, influxdb.ErrTaskNotFound
}
return nil, err
}

var tr taskResponse
if err := json.NewDecoder(resp.Body).Decode(&tr); err != nil {
err := t.Client.Get(taskIDPath(id)).DecodeJSON(&tr).Do(ctx)
if err != nil {
return nil, err
}

@@ -1461,57 +1434,40 @@ func (t TaskService) FindTasks(ctx context.Context, filter influxdb.TaskFilter)
span, _ := tracing.StartSpanFromContext(ctx)
defer span.Finish()

u, err := NewURL(t.Addr, prefixTasks)
if err != nil {
return nil, 0, err
}
// slice of 2-capacity string slices for storing parameter key-value pairs
var params [][2]string

val := url.Values{}
if filter.After != nil {
val.Add("after", filter.After.String())
params = append(params, [2]string{"after", filter.After.String()})
}
if filter.OrganizationID != nil {
val.Add("orgID", filter.OrganizationID.String())
params = append(params, [2]string{"orgID", filter.OrganizationID.String()})
}
if filter.Organization != "" {
val.Add("org", filter.Organization)
params = append(params, [2]string{"org", filter.Organization})
}
if filter.User != nil {
val.Add("user", filter.User.String())
params = append(params, [2]string{"user", filter.User.String()})
}
if filter.Limit != 0 {
val.Add("limit", strconv.Itoa(filter.Limit))
params = append(params, [2]string{"limit", strconv.Itoa(filter.Limit)})
}

if filter.Status != nil {
val.Add("status", *filter.Status)
params = append(params, [2]string{"status", *filter.Status})
}

if filter.Type != nil {
val.Add("type", *filter.Type)
}

u.RawQuery = val.Encode()

req, err := http.NewRequest("GET", u.String(), nil)
if err != nil {
return nil, 0, err
}
SetToken(t.Token, req)

hc := NewClient(u.Scheme, t.InsecureSkipVerify)
resp, err := hc.Do(req)
if err != nil {
return nil, 0, err
}
defer resp.Body.Close()

if err := CheckError(resp); err != nil {
return nil, 0, err
params = append(params, [2]string{"type", *filter.Type})
}

var tr tasksResponse
if err := json.NewDecoder(resp.Body).Decode(&tr); err != nil {
err := t.Client.
Get(prefixTasks).
QueryParams(params...).
DecodeJSON(&tr).
Do(ctx)
if err != nil {
return nil, 0, err
}

@@ -1526,41 +1482,16 @@ func (t TaskService) FindTasks(ctx context.Context, filter influxdb.TaskFilter)
func (t TaskService) CreateTask(ctx context.Context, tc influxdb.TaskCreate) (*Task, error) {
span, _ := tracing.StartSpanFromContext(ctx)
defer span.Finish()

u, err := NewURL(t.Addr, prefixTasks)
if err != nil {
return nil, err
}

taskBytes, err := json.Marshal(tc)
if err != nil {
return nil, err
}

req, err := http.NewRequest("POST", u.String(), bytes.NewReader(taskBytes))
if err != nil {
return nil, err
}

req.Header.Set("Content-Type", "application/json")
SetToken(t.Token, req)

hc := NewClient(u.Scheme, t.InsecureSkipVerify)

resp, err := hc.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()

if err := CheckError(resp); err != nil {
return nil, err
}

var tr taskResponse
if err := json.NewDecoder(resp.Body).Decode(&tr); err != nil {

err := t.Client.
PostJSON(tc, prefixTasks).
DecodeJSON(&tr).
Do(ctx)
if err != nil {
return nil, err
}

return &tr.Task, nil
}

@@ -1569,38 +1500,11 @@ func (t TaskService) UpdateTask(ctx context.Context, id influxdb.ID, upd influxd
span, _ := tracing.StartSpanFromContext(ctx)
defer span.Finish()

u, err := NewURL(t.Addr, taskIDPath(id))
if err != nil {
return nil, err
}

taskBytes, err := json.Marshal(upd)
if err != nil {
return nil, err
}

req, err := http.NewRequest("PATCH", u.String(), bytes.NewReader(taskBytes))
if err != nil {
return nil, err
}

req.Header.Set("Content-Type", "application/json")
SetToken(t.Token, req)

hc := NewClient(u.Scheme, t.InsecureSkipVerify)

resp, err := hc.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()

if err := CheckError(resp); err != nil {
return nil, err
}

var tr taskResponse
if err := json.NewDecoder(resp.Body).Decode(&tr); err != nil {
err := t.Client.
PatchJSON(&upd, taskIDPath(id)).
Do(ctx)
if err != nil {
return nil, err
}

@@ -1612,28 +1516,9 @@ func (t TaskService) DeleteTask(ctx context.Context, id influxdb.ID) error {
span, _ := tracing.StartSpanFromContext(ctx)
defer span.Finish()

u, err := NewURL(t.Addr, taskIDPath(id))
if err != nil {
return err
}

req, err := http.NewRequest("DELETE", u.String(), nil)
if err != nil {
return err
}

req.Header.Set("Content-Type", "application/json")
SetToken(t.Token, req)

hc := NewClient(u.Scheme, t.InsecureSkipVerify)

resp, err := hc.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()

return CheckErrorStatus(http.StatusNoContent, resp)
return t.Client.
Delete(taskIDPath(id)).
Do(ctx)
}

// FindLogs returns logs for a run.

@@ -1652,31 +1537,13 @@ func (t TaskService) FindLogs(ctx context.Context, filter influxdb.LogFilter) ([
urlPath = path.Join(taskIDRunIDPath(filter.Task, *filter.Run), "logs")
}

u, err := NewURL(t.Addr, urlPath)
if err != nil {
return nil, 0, err
}

req, err := http.NewRequest("GET", u.String(), nil)
if err != nil {
return nil, 0, err
}
SetToken(t.Token, req)

hc := NewClient(u.Scheme, t.InsecureSkipVerify)

resp, err := hc.Do(req)
if err != nil {
return nil, 0, err
}
defer resp.Body.Close()

if err := CheckError(resp); err != nil {
return nil, 0, err
}

var logs getLogsResponse
if err := json.NewDecoder(resp.Body).Decode(&logs); err != nil {
err := t.Client.
Get(urlPath).
DecodeJSON(&logs).
Do(ctx)

if err != nil {
return nil, 0, err
}

@@ -1688,48 +1555,29 @@ func (t TaskService) FindRuns(ctx context.Context, filter influxdb.RunFilter) ([
span, _ := tracing.StartSpanFromContext(ctx)
defer span.Finish()

var params [][2]string

if !filter.Task.Valid() {
return nil, 0, errors.New("task ID required")
}

u, err := NewURL(t.Addr, taskIDRunsPath(filter.Task))
if err != nil {
return nil, 0, err
}

val := url.Values{}
if filter.After != nil {
val.Set("after", filter.After.String())
params = append(params, [2]string{"after", filter.After.String()})
}

if filter.Limit < 0 || filter.Limit > influxdb.TaskMaxPageSize {
return nil, 0, influxdb.ErrOutOfBoundsLimit
}
val.Set("limit", strconv.Itoa(filter.Limit))

u.RawQuery = val.Encode()
req, err := http.NewRequest("GET", u.String(), nil)
if err != nil {
return nil, 0, err
}

req.Header.Set("Content-Type", "application/json")
SetToken(t.Token, req)

hc := NewClient(u.Scheme, t.InsecureSkipVerify)

resp, err := hc.Do(req)
if err != nil {
return nil, 0, err
}
defer resp.Body.Close()

if err := CheckError(resp); err != nil {
return nil, 0, err
}
params = append(params, [2]string{"limit", strconv.Itoa(filter.Limit)})

var rs runsResponse
if err := json.NewDecoder(resp.Body).Decode(&rs); err != nil {
err := t.Client.
Get(taskIDRunsPath(filter.Task)).
QueryParams(params...).
DecodeJSON(&rs).
Do(ctx)
if err != nil {
return nil, 0, err
}

@@ -1746,27 +1594,13 @@ func (t TaskService) FindRunByID(ctx context.Context, taskID, runID influxdb.ID)
span, _ := tracing.StartSpanFromContext(ctx)
defer span.Finish()

u, err := NewURL(t.Addr, taskIDRunIDPath(taskID, runID))
var rs = &runResponse{}
err := t.Client.
Get(taskIDRunIDPath(taskID, runID)).
DecodeJSON(rs).
Do(ctx)

if err != nil {
return nil, err
}

req, err := http.NewRequest("GET", u.String(), nil)
if err != nil {
return nil, err
}

SetToken(t.Token, req)

hc := NewClient(u.Scheme, t.InsecureSkipVerify)

resp, err := hc.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()

if err := CheckError(resp); err != nil {
if influxdb.ErrorCode(err) == influxdb.ENotFound {
// ErrRunNotFound is expected as part of the FindRunByID contract,
// so return that actual error instead of a different error that looks like it.

@@ -1776,10 +1610,7 @@ func (t TaskService) FindRunByID(ctx context.Context, taskID, runID influxdb.ID)

return nil, err
}
var rs = &runResponse{}
if err := json.NewDecoder(resp.Body).Decode(rs); err != nil {
return nil, err
}

return convertRun(rs.httpRun), nil
}

@@ -1788,28 +1619,13 @@ func (t TaskService) RetryRun(ctx context.Context, taskID, runID influxdb.ID) (*
span, _ := tracing.StartSpanFromContext(ctx)
defer span.Finish()

p := path.Join(taskIDRunIDPath(taskID, runID), "retry")
u, err := NewURL(t.Addr, p)
var rs runResponse
err := t.Client.
Post(nil, path.Join(taskIDRunIDPath(taskID, runID), "retry")).
DecodeJSON(&rs).
Do(ctx)

if err != nil {
return nil, err
}

req, err := http.NewRequest("POST", u.String(), nil)
if err != nil {
return nil, err
}

SetToken(t.Token, req)

hc := NewClient(u.Scheme, t.InsecureSkipVerify)

resp, err := hc.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()

if err := CheckError(resp); err != nil {
if influxdb.ErrorCode(err) == influxdb.ENotFound {
// ErrRunNotFound is expected as part of the RetryRun contract,
// so return that actual error instead of a different error that looks like it.

@@ -1817,17 +1633,13 @@ func (t TaskService) RetryRun(ctx context.Context, taskID, runID influxdb.ID) (*
return nil, influxdb.ErrRunNotFound
}
// RequestStillQueuedError is also part of the contract.
if e := backend.ParseRequestStillQueuedError(err.Error()); e != nil {
if e := influxdb.ParseRequestStillQueuedError(err.Error()); e != nil {
return nil, *e
}

return nil, err
}

rs := &runResponse{}
if err := json.NewDecoder(resp.Body).Decode(rs); err != nil {
return nil, err
}
return convertRun(rs.httpRun), nil
}

@@ -1836,28 +1648,18 @@ func (t TaskService) ForceRun(ctx context.Context, taskID influxdb.ID, scheduled
span, _ := tracing.StartSpanFromContext(ctx)
defer span.Finish()

u, err := NewURL(t.Addr, taskIDRunsPath(taskID))
if err != nil {
return nil, err
type body struct {
scheduledFor string
}
b := body{scheduledFor: time.Unix(scheduledFor, 0).UTC().Format(time.RFC3339)}

rs := &runResponse{}
err := t.Client.
PostJSON(b, taskIDRunsPath(taskID)).
DecodeJSON(&rs).
Do(ctx)

body := fmt.Sprintf(`{"scheduledFor": %q}`, time.Unix(scheduledFor, 0).UTC().Format(time.RFC3339))
req, err := http.NewRequest("POST", u.String(), strings.NewReader(body))
if err != nil {
return nil, err
}

SetToken(t.Token, req)

hc := NewClient(u.Scheme, t.InsecureSkipVerify)

resp, err := hc.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()

if err := CheckError(resp); err != nil {
if influxdb.ErrorCode(err) == influxdb.ENotFound {
// ErrRunNotFound is expected as part of the RetryRun contract,
// so return that actual error instead of a different error that looks like it.

@@ -1865,17 +1667,13 @@ func (t TaskService) ForceRun(ctx context.Context, taskID influxdb.ID, scheduled
}

// RequestStillQueuedError is also part of the contract.
if e := backend.ParseRequestStillQueuedError(err.Error()); e != nil {
if e := influxdb.ParseRequestStillQueuedError(err.Error()); e != nil {
return nil, *e
}

return nil, err
}

rs := &runResponse{}
if err := json.NewDecoder(resp.Body).Decode(rs); err != nil {
return nil, err
}
return convertRun(rs.httpRun), nil
}

@@ -1888,30 +1686,14 @@ func (t TaskService) CancelRun(ctx context.Context, taskID, runID influxdb.ID) e
span, _ := tracing.StartSpanFromContext(ctx)
defer span.Finish()

u, err := NewURL(t.Addr, cancelPath(taskID, runID))
err := t.Client.
Delete(cancelPath(taskID, runID)).
Do(ctx)

if err != nil {
return err
}

req, err := http.NewRequest("DELETE", u.String(), nil)
if err != nil {
return err
}

SetToken(t.Token, req)

hc := NewClient(u.Scheme, t.InsecureSkipVerify)

resp, err := hc.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()

if err := CheckError(resp); err != nil {
return err
}

return nil
}

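Every `TaskService` method above was rewritten from hand-rolled `http.NewRequest` plumbing (build URL, set token, create client, check error, decode body) to one fluent `httpc.Client` chain: `Get(path).QueryParams(params...).DecodeJSON(&v).Do(ctx)`. A toy sketch of the builder shape, showing how the accumulated `[2]string` parameter pairs become a query string (this is an illustration of the pattern, not the real `httpc` API):

```go
package main

import (
	"fmt"
	"net/url"
)

// call accumulates a request fluently, mirroring the httpc chain above.
type call struct {
	path   string
	params [][2]string
}

func get(path string) *call { return &call{path: path} }

// queryParams appends key/value pairs, like httpc's QueryParams(params...).
func (c *call) queryParams(params ...[2]string) *call {
	c.params = append(c.params, params...)
	return c
}

// urlString renders the path plus encoded query, the step the old code did
// by hand with url.Values and u.RawQuery = val.Encode().
func (c *call) urlString() string {
	v := url.Values{}
	for _, p := range c.params {
		v.Add(p[0], p[1])
	}
	if len(v) == 0 {
		return c.path
	}
	return c.path + "?" + v.Encode()
}

func main() {
	u := get("/api/v2/tasks").
		queryParams([2]string{"org", "my-org"}, [2]string{"limit", "100"}).
		urlString()
	fmt.Println(u)
}
```

The payoff is visible in the diff stats (e.g. `@@ -1461,57 +1434,40`): the token, TLS config, and error handling move into the shared client once, so each method keeps only what is specific to it — path, params, and response type.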
@@ -14,13 +14,13 @@ import (
"time"

"github.com/influxdata/httprouter"
platform "github.com/influxdata/influxdb"
"github.com/influxdata/influxdb"
pcontext "github.com/influxdata/influxdb/context"

kithttp "github.com/influxdata/influxdb/kit/transport/http"
"github.com/influxdata/influxdb/mock"
_ "github.com/influxdata/influxdb/query/builtin"
"github.com/influxdata/influxdb/task/backend"
platformtesting "github.com/influxdata/influxdb/testing"
influxdbtesting "github.com/influxdata/influxdb/testing"
"go.uber.org/zap"
"go.uber.org/zap/zaptest"
)
@@ -34,17 +34,17 @@ func NewMockTaskBackend(t *testing.T) *TaskBackend {
AuthorizationService: mock.NewAuthorizationService(),
TaskService: &mock.TaskService{},
OrganizationService: &mock.OrganizationService{
FindOrganizationByIDF: func(ctx context.Context, id platform.ID) (*platform.Organization, error) {
return &platform.Organization{ID: id, Name: "test"}, nil
FindOrganizationByIDF: func(ctx context.Context, id influxdb.ID) (*influxdb.Organization, error) {
return &influxdb.Organization{ID: id, Name: "test"}, nil
},
FindOrganizationF: func(ctx context.Context, filter platform.OrganizationFilter) (*platform.Organization, error) {
org := &platform.Organization{}
FindOrganizationF: func(ctx context.Context, filter influxdb.OrganizationFilter) (*influxdb.Organization, error) {
org := &influxdb.Organization{}
if filter.Name != nil {
if *filter.Name == "non-existent-org" {
return nil, &platform.Error{
return nil, &influxdb.Error{
Err: errors.New("org not found or unauthorized"),
Msg: "org " + *filter.Name + " not found or unauthorized",
Code: platform.ENotFound,
Code: influxdb.ENotFound,
}
}
org.Name = *filter.Name
@@ -64,8 +64,8 @@ func NewMockTaskBackend(t *testing.T) *TaskBackend {

func TestTaskHandler_handleGetTasks(t *testing.T) {
type fields struct {
taskService platform.TaskService
labelService platform.LabelService
taskService influxdb.TaskService
labelService influxdb.LabelService
}
type wants struct {
statusCode int

@@ -83,8 +83,8 @@ func TestTaskHandler_handleGetTasks(t *testing.T) {
name: "get tasks",
fields: fields{
taskService: &mock.TaskService{
FindTasksFn: func(ctx context.Context, f platform.TaskFilter) ([]*platform.Task, int, error) {
tasks := []*platform.Task{
FindTasksFn: func(ctx context.Context, f influxdb.TaskFilter) ([]*influxdb.Task, int, error) {
tasks := []*influxdb.Task{
{
ID: 1,
Name: "task1",

@@ -107,10 +107,10 @@ func TestTaskHandler_handleGetTasks(t *testing.T) {
},
},
labelService: &mock.LabelService{
FindResourceLabelsFn: func(ctx context.Context, f platform.LabelMappingFilter) ([]*platform.Label, error) {
labels := []*platform.Label{
FindResourceLabelsFn: func(ctx context.Context, f influxdb.LabelMappingFilter) ([]*influxdb.Label, error) {
labels := []*influxdb.Label{
{
ID: platformtesting.MustIDBase16("fc3dc670a4be9b9a"),
ID: influxdbtesting.MustIDBase16("fc3dc670a4be9b9a"),
Name: "label",
Properties: map[string]string{
"color": "fff000",
@@ -192,8 +192,8 @@ func TestTaskHandler_handleGetTasks(t *testing.T) {
getParams: "after=0000000000000001&limit=1",
fields: fields{
taskService: &mock.TaskService{
FindTasksFn: func(ctx context.Context, f platform.TaskFilter) ([]*platform.Task, int, error) {
tasks := []*platform.Task{
FindTasksFn: func(ctx context.Context, f influxdb.TaskFilter) ([]*influxdb.Task, int, error) {
tasks := []*influxdb.Task{
{
ID: 2,
Name: "task2",

@@ -207,10 +207,10 @@ func TestTaskHandler_handleGetTasks(t *testing.T) {
},
},
labelService: &mock.LabelService{
FindResourceLabelsFn: func(ctx context.Context, f platform.LabelMappingFilter) ([]*platform.Label, error) {
labels := []*platform.Label{
FindResourceLabelsFn: func(ctx context.Context, f influxdb.LabelMappingFilter) ([]*influxdb.Label, error) {
labels := []*influxdb.Label{
{
ID: platformtesting.MustIDBase16("fc3dc670a4be9b9a"),
ID: influxdbtesting.MustIDBase16("fc3dc670a4be9b9a"),
Name: "label",
Properties: map[string]string{
"color": "fff000",

@@ -266,8 +266,8 @@ func TestTaskHandler_handleGetTasks(t *testing.T) {
getParams: "org=test2",
fields: fields{
taskService: &mock.TaskService{
FindTasksFn: func(ctx context.Context, f platform.TaskFilter) ([]*platform.Task, int, error) {
tasks := []*platform.Task{
FindTasksFn: func(ctx context.Context, f influxdb.TaskFilter) ([]*influxdb.Task, int, error) {
tasks := []*influxdb.Task{
{
ID: 2,
Name: "task2",
@@ -281,10 +281,10 @@ func TestTaskHandler_handleGetTasks(t *testing.T) {
},
},
labelService: &mock.LabelService{
FindResourceLabelsFn: func(ctx context.Context, f platform.LabelMappingFilter) ([]*platform.Label, error) {
labels := []*platform.Label{
FindResourceLabelsFn: func(ctx context.Context, f influxdb.LabelMappingFilter) ([]*influxdb.Label, error) {
labels := []*influxdb.Label{
{
ID: platformtesting.MustIDBase16("fc3dc670a4be9b9a"),
ID: influxdbtesting.MustIDBase16("fc3dc670a4be9b9a"),
Name: "label",
Properties: map[string]string{
"color": "fff000",

@@ -339,8 +339,8 @@ func TestTaskHandler_handleGetTasks(t *testing.T) {
getParams: "org=non-existent-org",
fields: fields{
taskService: &mock.TaskService{
FindTasksFn: func(ctx context.Context, f platform.TaskFilter) ([]*platform.Task, int, error) {
tasks := []*platform.Task{
FindTasksFn: func(ctx context.Context, f influxdb.TaskFilter) ([]*influxdb.Task, int, error) {
tasks := []*influxdb.Task{
{
ID: 1,
Name: "task1",

@@ -362,10 +362,10 @@ func TestTaskHandler_handleGetTasks(t *testing.T) {
},
},
labelService: &mock.LabelService{
FindResourceLabelsFn: func(ctx context.Context, f platform.LabelMappingFilter) ([]*platform.Label, error) {
labels := []*platform.Label{
FindResourceLabelsFn: func(ctx context.Context, f influxdb.LabelMappingFilter) ([]*influxdb.Label, error) {
labels := []*influxdb.Label{
{
ID: platformtesting.MustIDBase16("fc3dc670a4be9b9a"),
ID: influxdbtesting.MustIDBase16("fc3dc670a4be9b9a"),
Name: "label",
Properties: map[string]string{
"color": "fff000",
@@ -422,10 +422,10 @@ func TestTaskHandler_handleGetTasks(t *testing.T) {

func TestTaskHandler_handlePostTasks(t *testing.T) {
type args struct {
taskCreate platform.TaskCreate
taskCreate influxdb.TaskCreate
}
type fields struct {
taskService platform.TaskService
taskService influxdb.TaskService
}
type wants struct {
statusCode int

@@ -442,15 +442,15 @@ func TestTaskHandler_handlePostTasks(t *testing.T) {
{
name: "create task",
args: args{
taskCreate: platform.TaskCreate{
taskCreate: influxdb.TaskCreate{
OrganizationID: 1,
Flux: "abc",
},
},
fields: fields{
taskService: &mock.TaskService{
CreateTaskFn: func(ctx context.Context, tc platform.TaskCreate) (*platform.Task, error) {
return &platform.Task{
CreateTaskFn: func(ctx context.Context, tc influxdb.TaskCreate) (*influxdb.Task, error) {
return &influxdb.Task{
ID: 1,
Name: "task1",
Description: "Brand New Task",
@@ -489,20 +489,20 @@ func TestTaskHandler_handlePostTasks(t *testing.T) {
},
},
{
name: "create task - platform error creating task",
name: "create task - influxdb error creating task",
args: args{
taskCreate: platform.TaskCreate{
taskCreate: influxdb.TaskCreate{
OrganizationID: 1,
Flux: "abc",
},
},
fields: fields{
taskService: &mock.TaskService{
CreateTaskFn: func(ctx context.Context, tc platform.TaskCreate) (*platform.Task, error) {
return nil, platform.NewError(
platform.WithErrorErr(errors.New("something went wrong")),
platform.WithErrorMsg("something really went wrong"),
platform.WithErrorCode(platform.EInvalid),
CreateTaskFn: func(ctx context.Context, tc influxdb.TaskCreate) (*influxdb.Task, error) {
return nil, influxdb.NewError(
influxdb.WithErrorErr(errors.New("something went wrong")),
influxdb.WithErrorMsg("something really went wrong"),
influxdb.WithErrorCode(influxdb.EInvalid),
)
},
},

@@ -521,14 +521,14 @@ func TestTaskHandler_handlePostTasks(t *testing.T) {
{
name: "create task - error creating task",
args: args{
taskCreate: platform.TaskCreate{
taskCreate: influxdb.TaskCreate{
OrganizationID: 1,
Flux: "abc",
},
},
fields: fields{
taskService: &mock.TaskService{
CreateTaskFn: func(ctx context.Context, tc platform.TaskCreate) (*platform.Task, error) {
CreateTaskFn: func(ctx context.Context, tc influxdb.TaskCreate) (*influxdb.Task, error) {
return nil, errors.New("something bad happened")
},
},

@@ -554,7 +554,7 @@ func TestTaskHandler_handlePostTasks(t *testing.T) {
}

r := httptest.NewRequest("POST", "http://any.url", bytes.NewReader(b))
ctx := pcontext.SetAuthorizer(context.TODO(), new(platform.Authorization))
ctx := pcontext.SetAuthorizer(context.TODO(), new(influxdb.Authorization))
r = r.WithContext(ctx)

w := httptest.NewRecorder()
@@ -588,11 +588,11 @@ func TestTaskHandler_handlePostTasks(t *testing.T) {

func TestTaskHandler_handleGetRun(t *testing.T) {
type fields struct {
taskService platform.TaskService
taskService influxdb.TaskService
}
type args struct {
taskID platform.ID
runID platform.ID
taskID influxdb.ID
runID influxdb.ID
}
type wants struct {
statusCode int

@@ -610,12 +610,12 @@ func TestTaskHandler_handleGetRun(t *testing.T) {
name: "get a run by id",
fields: fields{
taskService: &mock.TaskService{
FindRunByIDFn: func(ctx context.Context, taskID platform.ID, runID platform.ID) (*platform.Run, error) {
FindRunByIDFn: func(ctx context.Context, taskID influxdb.ID, runID influxdb.ID) (*influxdb.Run, error) {
scheduledFor, _ := time.Parse(time.RFC3339, "2018-12-01T17:00:13Z")
startedAt, _ := time.Parse(time.RFC3339Nano, "2018-12-01T17:00:03.155645Z")
finishedAt, _ := time.Parse(time.RFC3339Nano, "2018-12-01T17:00:13.155645Z")
requestedAt, _ := time.Parse(time.RFC3339, "2018-12-01T17:00:13Z")
run := platform.Run{
run := influxdb.Run{
ID: runID,
TaskID: taskID,
Status: "success",

@@ -671,7 +671,7 @@ func TestTaskHandler_handleGetRun(t *testing.T) {
Value: tt.args.runID.String(),
},
}))
r = r.WithContext(pcontext.SetAuthorizer(r.Context(), &platform.Authorization{Permissions: platform.OperPermissions()}))
r = r.WithContext(pcontext.SetAuthorizer(r.Context(), &influxdb.Authorization{Permissions: influxdb.OperPermissions()}))
w := httptest.NewRecorder()
taskBackend := NewMockTaskBackend(t)
taskBackend.HTTPErrorHandler = kithttp.ErrorHandler(0)
@@ -702,10 +702,10 @@ func TestTaskHandler_handleGetRun(t *testing.T) {

func TestTaskHandler_handleGetRuns(t *testing.T) {
type fields struct {
taskService platform.TaskService
taskService influxdb.TaskService
}
type args struct {
taskID platform.ID
taskID influxdb.ID
}
type wants struct {
statusCode int

@@ -723,14 +723,14 @@ func TestTaskHandler_handleGetRuns(t *testing.T) {
name: "get runs by task id",
fields: fields{
taskService: &mock.TaskService{
FindRunsFn: func(ctx context.Context, f platform.RunFilter) ([]*platform.Run, int, error) {
FindRunsFn: func(ctx context.Context, f influxdb.RunFilter) ([]*influxdb.Run, int, error) {
scheduledFor, _ := time.Parse(time.RFC3339, "2018-12-01T17:00:13Z")
startedAt, _ := time.Parse(time.RFC3339Nano, "2018-12-01T17:00:03.155645Z")
finishedAt, _ := time.Parse(time.RFC3339Nano, "2018-12-01T17:00:13.155645Z")
requestedAt, _ := time.Parse(time.RFC3339, "2018-12-01T17:00:13Z")
runs := []*platform.Run{
runs := []*influxdb.Run{
{
ID: platform.ID(2),
ID: influxdb.ID(2),
TaskID: f.Task,
Status: "success",
ScheduledFor: scheduledFor,

@@ -789,7 +789,7 @@ func TestTaskHandler_handleGetRuns(t *testing.T) {
Value: tt.args.taskID.String(),
},
}))
r = r.WithContext(pcontext.SetAuthorizer(r.Context(), &platform.Authorization{Permissions: platform.OperPermissions()}))
r = r.WithContext(pcontext.SetAuthorizer(r.Context(), &influxdb.Authorization{Permissions: influxdb.OperPermissions()}))
w := httptest.NewRecorder()
taskBackend := NewMockTaskBackend(t)
taskBackend.HTTPErrorHandler = kithttp.ErrorHandler(0)
@@ -830,16 +830,16 @@ func TestTaskHandler_NotFoundStatus(t *testing.T) {
h.UserService = im
h.OrganizationService = im

o := platform.Organization{Name: "o"}
o := influxdb.Organization{Name: "o"}
ctx := context.Background()
if err := h.OrganizationService.CreateOrganization(ctx, &o); err != nil {
t.Fatal(err)
}

// Create a session to associate with the contexts, so authorization checks pass.
authz := &platform.Authorization{Permissions: platform.OperPermissions()}
authz := &influxdb.Authorization{Permissions: influxdb.OperPermissions()}

const taskID, runID = platform.ID(0xCCCCCC), platform.ID(0xAAAAAA)
const taskID, runID = influxdb.ID(0xCCCCCC), influxdb.ID(0xAAAAAA)

var (
okTask = []interface{}{taskID}

@@ -867,12 +867,12 @@ func TestTaskHandler_NotFoundStatus(t *testing.T) {
{
name: "get task",
svc: &mock.TaskService{
FindTaskByIDFn: func(_ context.Context, id platform.ID) (*platform.Task, error) {
FindTaskByIDFn: func(_ context.Context, id influxdb.ID) (*influxdb.Task, error) {
if id == taskID {
return &platform.Task{ID: taskID, Organization: "o"}, nil
return &influxdb.Task{ID: taskID, Organization: "o"}, nil
}

return nil, platform.ErrTaskNotFound
return nil, influxdb.ErrTaskNotFound
},
},
method: http.MethodGet,
@@ -883,12 +883,12 @@ func TestTaskHandler_NotFoundStatus(t *testing.T) {
{
name: "update task",
svc: &mock.TaskService{
UpdateTaskFn: func(_ context.Context, id platform.ID, _ platform.TaskUpdate) (*platform.Task, error) {
UpdateTaskFn: func(_ context.Context, id influxdb.ID, _ influxdb.TaskUpdate) (*influxdb.Task, error) {
if id == taskID {
return &platform.Task{ID: taskID, Organization: "o"}, nil
return &influxdb.Task{ID: taskID, Organization: "o"}, nil
}

return nil, platform.ErrTaskNotFound
return nil, influxdb.ErrTaskNotFound
},
},
method: http.MethodPatch,

@@ -900,12 +900,12 @@ func TestTaskHandler_NotFoundStatus(t *testing.T) {
{
name: "delete task",
svc: &mock.TaskService{
DeleteTaskFn: func(_ context.Context, id platform.ID) error {
DeleteTaskFn: func(_ context.Context, id influxdb.ID) error {
if id == taskID {
return nil
}

return platform.ErrTaskNotFound
return influxdb.ErrTaskNotFound
},
},
method: http.MethodDelete,

@@ -916,12 +916,12 @@ func TestTaskHandler_NotFoundStatus(t *testing.T) {
{
name: "get task logs",
svc: &mock.TaskService{
FindLogsFn: func(_ context.Context, f platform.LogFilter) ([]*platform.Log, int, error) {
FindLogsFn: func(_ context.Context, f influxdb.LogFilter) ([]*influxdb.Log, int, error) {
if f.Task == taskID {
return nil, 0, nil
}

return nil, 0, platform.ErrTaskNotFound
return nil, 0, influxdb.ErrTaskNotFound
},
},
method: http.MethodGet,
@@ -932,12 +932,12 @@ func TestTaskHandler_NotFoundStatus(t *testing.T) {
{
name: "get run logs",
svc: &mock.TaskService{
FindLogsFn: func(_ context.Context, f platform.LogFilter) ([]*platform.Log, int, error) {
FindLogsFn: func(_ context.Context, f influxdb.LogFilter) ([]*influxdb.Log, int, error) {
if f.Task != taskID {
return nil, 0, platform.ErrTaskNotFound
return nil, 0, influxdb.ErrTaskNotFound
}
if *f.Run != runID {
return nil, 0, platform.ErrNoRunsFound
return nil, 0, influxdb.ErrNoRunsFound
}

return nil, 0, nil

@@ -951,9 +951,9 @@ func TestTaskHandler_NotFoundStatus(t *testing.T) {
{
name: "get runs: task not found",
svc: &mock.TaskService{
FindRunsFn: func(_ context.Context, f platform.RunFilter) ([]*platform.Run, int, error) {
FindRunsFn: func(_ context.Context, f influxdb.RunFilter) ([]*influxdb.Run, int, error) {
if f.Task != taskID {
return nil, 0, platform.ErrTaskNotFound
return nil, 0, influxdb.ErrTaskNotFound
}

return nil, 0, nil

@@ -967,9 +967,9 @@ func TestTaskHandler_NotFoundStatus(t *testing.T) {
{
name: "get runs: task found but no runs found",
svc: &mock.TaskService{
FindRunsFn: func(_ context.Context, f platform.RunFilter) ([]*platform.Run, int, error) {
FindRunsFn: func(_ context.Context, f influxdb.RunFilter) ([]*influxdb.Run, int, error) {
if f.Task != taskID {
return nil, 0, platform.ErrNoRunsFound
return nil, 0, influxdb.ErrNoRunsFound
}

return nil, 0, nil
@@ -983,12 +983,12 @@ func TestTaskHandler_NotFoundStatus(t *testing.T) {
{
name: "force run",
svc: &mock.TaskService{
ForceRunFn: func(_ context.Context, tid platform.ID, _ int64) (*platform.Run, error) {
ForceRunFn: func(_ context.Context, tid influxdb.ID, _ int64) (*influxdb.Run, error) {
if tid != taskID {
return nil, platform.ErrTaskNotFound
return nil, influxdb.ErrTaskNotFound
}

return &platform.Run{ID: runID, TaskID: taskID, Status: backend.RunScheduled.String()}, nil
return &influxdb.Run{ID: runID, TaskID: taskID, Status: influxdb.RunScheduled.String()}, nil
},
},
method: http.MethodPost,

@@ -1000,15 +1000,15 @@ func TestTaskHandler_NotFoundStatus(t *testing.T) {
{
name: "get run",
svc: &mock.TaskService{
FindRunByIDFn: func(_ context.Context, tid, rid platform.ID) (*platform.Run, error) {
FindRunByIDFn: func(_ context.Context, tid, rid influxdb.ID) (*influxdb.Run, error) {
if tid != taskID {
return nil, platform.ErrTaskNotFound
return nil, influxdb.ErrTaskNotFound
}
if rid != runID {
return nil, platform.ErrRunNotFound
return nil, influxdb.ErrRunNotFound
}

return &platform.Run{ID: runID, TaskID: taskID, Status: backend.RunScheduled.String()}, nil
return &influxdb.Run{ID: runID, TaskID: taskID, Status: influxdb.RunScheduled.String()}, nil
},
},
method: http.MethodGet,

@@ -1019,15 +1019,15 @@ func TestTaskHandler_NotFoundStatus(t *testing.T) {
{
name: "retry run",
svc: &mock.TaskService{
RetryRunFn: func(_ context.Context, tid, rid platform.ID) (*platform.Run, error) {
RetryRunFn: func(_ context.Context, tid, rid influxdb.ID) (*influxdb.Run, error) {
if tid != taskID {
return nil, platform.ErrTaskNotFound
return nil, influxdb.ErrTaskNotFound
}
if rid != runID {
return nil, platform.ErrRunNotFound
return nil, influxdb.ErrRunNotFound
}

return &platform.Run{ID: runID, TaskID: taskID, Status: backend.RunScheduled.String()}, nil
return &influxdb.Run{ID: runID, TaskID: taskID, Status: influxdb.RunScheduled.String()}, nil
},
},
method: http.MethodPost,
@@ -1038,12 +1038,12 @@ func TestTaskHandler_NotFoundStatus(t *testing.T) {
{
name: "cancel run",
svc: &mock.TaskService{
CancelRunFn: func(_ context.Context, tid, rid platform.ID) error {
CancelRunFn: func(_ context.Context, tid, rid influxdb.ID) error {
if tid != taskID {
return platform.ErrTaskNotFound
return influxdb.ErrTaskNotFound
}
if rid != runID {
return platform.ErrRunNotFound
return influxdb.ErrRunNotFound
}

return nil

@@ -1108,11 +1108,11 @@ func TestTaskHandler_NotFoundStatus(t *testing.T) {

func TestService_handlePostTaskLabel(t *testing.T) {
type fields struct {
LabelService platform.LabelService
LabelService influxdb.LabelService
}
type args struct {
labelMapping *platform.LabelMapping
taskID platform.ID
labelMapping *influxdb.LabelMapping
taskID influxdb.ID
}
type wants struct {
statusCode int
@@ -1130,8 +1130,8 @@ func TestService_handlePostTaskLabel(t *testing.T) {
name: "add label to task",
fields: fields{
LabelService: &mock.LabelService{
FindLabelByIDFn: func(ctx context.Context, id platform.ID) (*platform.Label, error) {
return &platform.Label{
FindLabelByIDFn: func(ctx context.Context, id influxdb.ID) (*influxdb.Label, error) {
return &influxdb.Label{
ID: 1,
Name: "label",
Properties: map[string]string{

@@ -1139,11 +1139,11 @@ func TestService_handlePostTaskLabel(t *testing.T) {
},
}, nil
},
CreateLabelMappingFn: func(ctx context.Context, m *platform.LabelMapping) error { return nil },
CreateLabelMappingFn: func(ctx context.Context, m *influxdb.LabelMapping) error { return nil },
},
},
args: args{
labelMapping: &platform.LabelMapping{
labelMapping: &influxdb.LabelMapping{
ResourceID: 100,
LabelID: 1,
},
@@ -1215,37 +1215,37 @@ func TestTaskHandler_CreateTaskWithOrgName(t *testing.T) {
ctx := context.Background()

// Set up user and org.
u := &platform.User{Name: "u"}
u := &influxdb.User{Name: "u"}
if err := i.CreateUser(ctx, u); err != nil {
t.Fatal(err)
}
o := &platform.Organization{Name: "o"}
o := &influxdb.Organization{Name: "o"}
if err := i.CreateOrganization(ctx, o); err != nil {
t.Fatal(err)
}

// Source and destination buckets for use in task.
bSrc := platform.Bucket{OrgID: o.ID, Name: "b-src"}
bSrc := influxdb.Bucket{OrgID: o.ID, Name: "b-src"}
if err := i.CreateBucket(ctx, &bSrc); err != nil {
t.Fatal(err)
}
bDst := platform.Bucket{OrgID: o.ID, Name: "b-dst"}
bDst := influxdb.Bucket{OrgID: o.ID, Name: "b-dst"}
if err := i.CreateBucket(ctx, &bDst); err != nil {
t.Fatal(err)
}

authz := platform.Authorization{OrgID: o.ID, UserID: u.ID, Permissions: platform.OperPermissions()}
authz := influxdb.Authorization{OrgID: o.ID, UserID: u.ID, Permissions: influxdb.OperPermissions()}
if err := i.CreateAuthorization(ctx, &authz); err != nil {
t.Fatal(err)
}

ts := &mock.TaskService{
CreateTaskFn: func(_ context.Context, tc platform.TaskCreate) (*platform.Task, error) {
CreateTaskFn: func(_ context.Context, tc influxdb.TaskCreate) (*influxdb.Task, error) {
if tc.OrganizationID != o.ID {
t.Fatalf("expected task to be created with org ID %s, got %s", o.ID, tc.OrganizationID)
}

return &platform.Task{ID: 9, OrganizationID: o.ID, OwnerID: o.ID, AuthorizationID: authz.ID, Name: "x", Flux: tc.Flux}, nil
return &influxdb.Task{ID: 9, OrganizationID: o.ID, OwnerID: o.ID, AuthorizationID: authz.ID, Name: "x", Flux: tc.Flux}, nil
},
}

@@ -1265,7 +1265,7 @@ func TestTaskHandler_CreateTaskWithOrgName(t *testing.T) {

url := "http://localhost:9999/api/v2/tasks"

b, err := json.Marshal(platform.TaskCreate{
b, err := json.Marshal(influxdb.TaskCreate{
Flux: script,
Organization: o.Name,
})

@@ -1292,7 +1292,7 @@ func TestTaskHandler_CreateTaskWithOrgName(t *testing.T) {
}

// The task should have been created with a valid token.
var createdTask platform.Task
var createdTask influxdb.Task
if err := json.Unmarshal([]byte(body), &createdTask); err != nil {
t.Fatal(err)
}
@@ -1309,38 +1309,38 @@ func TestTaskHandler_Sessions(t *testing.T) {
ctx := context.Background()

// Set up user and org.
u := &platform.User{Name: "u"}
u := &influxdb.User{Name: "u"}
if err := i.CreateUser(ctx, u); err != nil {
t.Fatal(err)
}
o := &platform.Organization{Name: "o"}
o := &influxdb.Organization{Name: "o"}
if err := i.CreateOrganization(ctx, o); err != nil {
t.Fatal(err)
}

// Map user to org.
if err := i.CreateUserResourceMapping(ctx, &platform.UserResourceMapping{
ResourceType: platform.OrgsResourceType,
if err := i.CreateUserResourceMapping(ctx, &influxdb.UserResourceMapping{
ResourceType: influxdb.OrgsResourceType,
ResourceID: o.ID,
UserID: u.ID,
UserType: platform.Owner,
UserType: influxdb.Owner,
}); err != nil {
t.Fatal(err)
}

// Source and destination buckets for use in task.
bSrc := platform.Bucket{OrgID: o.ID, Name: "b-src"}
bSrc := influxdb.Bucket{OrgID: o.ID, Name: "b-src"}
if err := i.CreateBucket(ctx, &bSrc); err != nil {
t.Fatal(err)
}
bDst := platform.Bucket{OrgID: o.ID, Name: "b-dst"}
bDst := influxdb.Bucket{OrgID: o.ID, Name: "b-dst"}
if err := i.CreateBucket(ctx, &bDst); err != nil {
t.Fatal(err)
}

sessionAllPermsCtx := pcontext.SetAuthorizer(context.Background(), &platform.Session{
sessionAllPermsCtx := pcontext.SetAuthorizer(context.Background(), &influxdb.Session{
UserID: u.ID,
Permissions: platform.OperPermissions(),
Permissions: influxdb.OperPermissions(),
ExpiresAt: time.Now().Add(24 * time.Hour),
})
@@ -1361,33 +1361,33 @@ func TestTaskHandler_Sessions(t *testing.T) {

t.Run("get runs for a task", func(t *testing.T) {
// Unique authorization to associate with our fake task.
taskAuth := &platform.Authorization{OrgID: o.ID, UserID: u.ID}
taskAuth := &influxdb.Authorization{OrgID: o.ID, UserID: u.ID}
if err := i.CreateAuthorization(ctx, taskAuth); err != nil {
t.Fatal(err)
}

const taskID = platform.ID(12345)
const runID = platform.ID(9876)
const taskID = influxdb.ID(12345)
const runID = influxdb.ID(9876)

var findRunsCtx context.Context
ts := &mock.TaskService{
FindRunsFn: func(ctx context.Context, f platform.RunFilter) ([]*platform.Run, int, error) {
FindRunsFn: func(ctx context.Context, f influxdb.RunFilter) ([]*influxdb.Run, int, error) {
findRunsCtx = ctx
if f.Task != taskID {
t.Fatalf("expected task ID %v, got %v", taskID, f.Task)
}

return []*platform.Run{
return []*influxdb.Run{
{ID: runID, TaskID: taskID},
}, 1, nil
},

FindTaskByIDFn: func(ctx context.Context, id platform.ID) (*platform.Task, error) {
FindTaskByIDFn: func(ctx context.Context, id influxdb.ID) (*influxdb.Task, error) {
if id != taskID {
return nil, platform.ErrTaskNotFound
return nil, influxdb.ErrTaskNotFound
}

return &platform.Task{
return &influxdb.Task{
ID: taskID,
OrganizationID: o.ID,
AuthorizationID: taskAuth.ID,
@@ -1416,23 +1416,23 @@ func TestTaskHandler_Sessions(t *testing.T) {
if err != nil {
t.Fatal(err)
}
if authr.Kind() != platform.AuthorizationKind {
t.Fatalf("expected context's authorizer to be of kind %q, got %q", platform.AuthorizationKind, authr.Kind())
if authr.Kind() != influxdb.AuthorizationKind {
t.Fatalf("expected context's authorizer to be of kind %q, got %q", influxdb.AuthorizationKind, authr.Kind())
}

orgID := authr.(*platform.Authorization).OrgID
orgID := authr.(*influxdb.Authorization).OrgID

if orgID != o.ID {
t.Fatalf("expected context's authorizer org ID to be %v, got %v", o.ID, orgID)
}

// Other user without permissions on the task or authorization should be disallowed.
otherUser := &platform.User{Name: "other-" + t.Name()}
otherUser := &influxdb.User{Name: "other-" + t.Name()}
if err := i.CreateUser(ctx, otherUser); err != nil {
t.Fatal(err)
}

valCtx = pcontext.SetAuthorizer(valCtx, &platform.Session{
valCtx = pcontext.SetAuthorizer(valCtx, &influxdb.Session{
UserID: otherUser.ID,
ExpiresAt: time.Now().Add(24 * time.Hour),
})

@@ -1454,17 +1454,17 @@ func TestTaskHandler_Sessions(t *testing.T) {

t.Run("get single run for a task", func(t *testing.T) {
// Unique authorization to associate with our fake task.
taskAuth := &platform.Authorization{OrgID: o.ID, UserID: u.ID}
taskAuth := &influxdb.Authorization{OrgID: o.ID, UserID: u.ID}
if err := i.CreateAuthorization(ctx, taskAuth); err != nil {
t.Fatal(err)
}

const taskID = platform.ID(12345)
const runID = platform.ID(9876)
const taskID = influxdb.ID(12345)
const runID = influxdb.ID(9876)

var findRunByIDCtx context.Context
ts := &mock.TaskService{
FindRunByIDFn: func(ctx context.Context, tid, rid platform.ID) (*platform.Run, error) {
FindRunByIDFn: func(ctx context.Context, tid, rid influxdb.ID) (*influxdb.Run, error) {
findRunByIDCtx = ctx
if tid != taskID {
t.Fatalf("expected task ID %v, got %v", taskID, tid)
@@ -1473,15 +1473,15 @@ func TestTaskHandler_Sessions(t *testing.T) {
 					t.Fatalf("expected run ID %v, got %v", runID, rid)
 				}

-				return &platform.Run{ID: runID, TaskID: taskID}, nil
+				return &influxdb.Run{ID: runID, TaskID: taskID}, nil
 			},

-			FindTaskByIDFn: func(ctx context.Context, id platform.ID) (*platform.Task, error) {
+			FindTaskByIDFn: func(ctx context.Context, id influxdb.ID) (*influxdb.Task, error) {
 				if id != taskID {
-					return nil, platform.ErrTaskNotFound
+					return nil, influxdb.ErrTaskNotFound
 				}

-				return &platform.Task{
+				return &influxdb.Task{
 					ID:              taskID,
 					OrganizationID:  o.ID,
 					AuthorizationID: taskAuth.ID,
@@ -1514,20 +1514,20 @@ func TestTaskHandler_Sessions(t *testing.T) {
 		if err != nil {
 			t.Fatal(err)
 		}
-		if authr.Kind() != platform.AuthorizationKind {
-			t.Fatalf("expected context's authorizer to be of kind %q, got %q", platform.AuthorizationKind, authr.Kind())
+		if authr.Kind() != influxdb.AuthorizationKind {
+			t.Fatalf("expected context's authorizer to be of kind %q, got %q", influxdb.AuthorizationKind, authr.Kind())
 		}
 		if authr.Identifier() != taskAuth.ID {
 			t.Fatalf("expected context's authorizer ID to be %v, got %v", taskAuth.ID, authr.Identifier())
 		}

 		// Other user without permissions on the task or authorization should be disallowed.
-		otherUser := &platform.User{Name: "other-" + t.Name()}
+		otherUser := &influxdb.User{Name: "other-" + t.Name()}
 		if err := i.CreateUser(ctx, otherUser); err != nil {
 			t.Fatal(err)
 		}

-		valCtx = pcontext.SetAuthorizer(valCtx, &platform.Session{
+		valCtx = pcontext.SetAuthorizer(valCtx, &influxdb.Session{
 			UserID:    otherUser.ID,
 			ExpiresAt: time.Now().Add(24 * time.Hour),
 		})
@@ -1549,17 +1549,17 @@ func TestTaskHandler_Sessions(t *testing.T) {

 	t.Run("get logs for a run", func(t *testing.T) {
 		// Unique authorization to associate with our fake task.
-		taskAuth := &platform.Authorization{OrgID: o.ID, UserID: u.ID}
+		taskAuth := &influxdb.Authorization{OrgID: o.ID, UserID: u.ID}
 		if err := i.CreateAuthorization(ctx, taskAuth); err != nil {
 			t.Fatal(err)
 		}

-		const taskID = platform.ID(12345)
-		const runID = platform.ID(9876)
+		const taskID = influxdb.ID(12345)
+		const runID = influxdb.ID(9876)

 		var findLogsCtx context.Context
 		ts := &mock.TaskService{
-			FindLogsFn: func(ctx context.Context, f platform.LogFilter) ([]*platform.Log, int, error) {
+			FindLogsFn: func(ctx context.Context, f influxdb.LogFilter) ([]*influxdb.Log, int, error) {
 				findLogsCtx = ctx
 				if f.Task != taskID {
 					t.Fatalf("expected task ID %v, got %v", taskID, f.Task)
@@ -1568,16 +1568,16 @@ func TestTaskHandler_Sessions(t *testing.T) {
 					t.Fatalf("expected run ID %v, got %v", runID, *f.Run)
 				}

-				line := platform.Log{Time: "time", Message: "a log line"}
-				return []*platform.Log{&line}, 1, nil
+				line := influxdb.Log{Time: "time", Message: "a log line"}
+				return []*influxdb.Log{&line}, 1, nil
 			},

-			FindTaskByIDFn: func(ctx context.Context, id platform.ID) (*platform.Task, error) {
+			FindTaskByIDFn: func(ctx context.Context, id influxdb.ID) (*influxdb.Task, error) {
 				if id != taskID {
-					return nil, platform.ErrTaskNotFound
+					return nil, influxdb.ErrTaskNotFound
 				}

-				return &platform.Task{
+				return &influxdb.Task{
 					ID:              taskID,
 					OrganizationID:  o.ID,
 					AuthorizationID: taskAuth.ID,
@@ -1610,20 +1610,20 @@ func TestTaskHandler_Sessions(t *testing.T) {
 		if err != nil {
 			t.Fatal(err)
 		}
-		if authr.Kind() != platform.AuthorizationKind {
-			t.Fatalf("expected context's authorizer to be of kind %q, got %q", platform.AuthorizationKind, authr.Kind())
+		if authr.Kind() != influxdb.AuthorizationKind {
+			t.Fatalf("expected context's authorizer to be of kind %q, got %q", influxdb.AuthorizationKind, authr.Kind())
 		}
 		if authr.Identifier() != taskAuth.ID {
 			t.Fatalf("expected context's authorizer ID to be %v, got %v", taskAuth.ID, authr.Identifier())
 		}

 		// Other user without permissions on the task or authorization should be disallowed.
-		otherUser := &platform.User{Name: "other-" + t.Name()}
+		otherUser := &influxdb.User{Name: "other-" + t.Name()}
 		if err := i.CreateUser(ctx, otherUser); err != nil {
 			t.Fatal(err)
 		}

-		valCtx = pcontext.SetAuthorizer(valCtx, &platform.Session{
+		valCtx = pcontext.SetAuthorizer(valCtx, &influxdb.Session{
 			UserID:    otherUser.ID,
 			ExpiresAt: time.Now().Add(24 * time.Hour),
 		})
@@ -1645,17 +1645,17 @@ func TestTaskHandler_Sessions(t *testing.T) {

 	t.Run("retry a run", func(t *testing.T) {
 		// Unique authorization to associate with our fake task.
-		taskAuth := &platform.Authorization{OrgID: o.ID, UserID: u.ID}
+		taskAuth := &influxdb.Authorization{OrgID: o.ID, UserID: u.ID}
 		if err := i.CreateAuthorization(ctx, taskAuth); err != nil {
 			t.Fatal(err)
 		}

-		const taskID = platform.ID(12345)
-		const runID = platform.ID(9876)
+		const taskID = influxdb.ID(12345)
+		const runID = influxdb.ID(9876)

 		var retryRunCtx context.Context
 		ts := &mock.TaskService{
-			RetryRunFn: func(ctx context.Context, tid, rid platform.ID) (*platform.Run, error) {
+			RetryRunFn: func(ctx context.Context, tid, rid influxdb.ID) (*influxdb.Run, error) {
 				retryRunCtx = ctx
 				if tid != taskID {
 					t.Fatalf("expected task ID %v, got %v", taskID, tid)
@@ -1664,15 +1664,15 @@ func TestTaskHandler_Sessions(t *testing.T) {
 					t.Fatalf("expected run ID %v, got %v", runID, rid)
 				}

-				return &platform.Run{ID: 10 * runID, TaskID: taskID}, nil
+				return &influxdb.Run{ID: 10 * runID, TaskID: taskID}, nil
 			},

-			FindTaskByIDFn: func(ctx context.Context, id platform.ID) (*platform.Task, error) {
+			FindTaskByIDFn: func(ctx context.Context, id influxdb.ID) (*influxdb.Task, error) {
 				if id != taskID {
-					return nil, platform.ErrTaskNotFound
+					return nil, influxdb.ErrTaskNotFound
 				}

-				return &platform.Task{
+				return &influxdb.Task{
 					ID:              taskID,
 					OrganizationID:  o.ID,
 					AuthorizationID: taskAuth.ID,
@@ -1705,20 +1705,20 @@ func TestTaskHandler_Sessions(t *testing.T) {
 		if err != nil {
 			t.Fatal(err)
 		}
-		if authr.Kind() != platform.AuthorizationKind {
-			t.Fatalf("expected context's authorizer to be of kind %q, got %q", platform.AuthorizationKind, authr.Kind())
+		if authr.Kind() != influxdb.AuthorizationKind {
+			t.Fatalf("expected context's authorizer to be of kind %q, got %q", influxdb.AuthorizationKind, authr.Kind())
 		}
 		if authr.Identifier() != taskAuth.ID {
 			t.Fatalf("expected context's authorizer ID to be %v, got %v", taskAuth.ID, authr.Identifier())
 		}

 		// Other user without permissions on the task or authorization should be disallowed.
-		otherUser := &platform.User{Name: "other-" + t.Name()}
+		otherUser := &influxdb.User{Name: "other-" + t.Name()}
 		if err := i.CreateUser(ctx, otherUser); err != nil {
 			t.Fatal(err)
 		}

-		valCtx = pcontext.SetAuthorizer(valCtx, &platform.Session{
+		valCtx = pcontext.SetAuthorizer(valCtx, &influxdb.Session{
 			UserID:    otherUser.ID,
 			ExpiresAt: time.Now().Add(24 * time.Hour),
 		})
@@ -150,6 +150,17 @@ func BindOptions(cmd *cobra.Command, opts []Opt) {
 			}
 			mustBindPFlag(o.Flag, flagset)
 			*destP = viper.GetStringSlice(envVar)
+		case pflag.Value:
+			if hasShort {
+				flagset.VarP(destP, o.Flag, string(o.Short), o.Desc)
+			} else {
+				flagset.Var(destP, o.Flag, o.Desc)
+			}
+			if o.Default != nil {
+				destP.Set(o.Default.(string))
+			}
+			mustBindPFlag(o.Flag, flagset)
+			destP.Set(viper.GetString(envVar))
 		default:
 			// if you get a panic here, sorry about that!
 			// anyway, go ahead and make a PR and add another type.
@@ -6,12 +6,36 @@ import (
 	"time"
 )

+type customFlag bool
+
+func (c customFlag) String() string {
+	if c == true {
+		return "on"
+	}
+	return "off"
+}
+
+func (c *customFlag) Set(s string) error {
+	if s == "on" {
+		*c = true
+	} else {
+		*c = false
+	}
+
+	return nil
+}
+
+func (c *customFlag) Type() string {
+	return "fancy-bool"
+}
+
 func ExampleNewCommand() {
 	var monitorHost string
 	var number int
 	var sleep bool
 	var duration time.Duration
 	var stringSlice []string
+	var fancyBool customFlag
 	cmd := NewCommand(&Program{
 		Run: func() error {
 			fmt.Println(monitorHost)
@@ -21,6 +45,7 @@ func ExampleNewCommand() {
 			fmt.Println(sleep)
 			fmt.Println(duration)
 			fmt.Println(stringSlice)
+			fmt.Println(fancyBool)
 			return nil
 		},
 		Name: "myprogram",
@@ -55,6 +80,12 @@ func ExampleNewCommand() {
 				Default: []string{"foo", "bar"},
 				Desc:    "things come in lists",
 			},
+			{
+				DestP:   &fancyBool,
+				Flag:    "fancy-bool",
+				Default: "on",
+				Desc:    "things that implement pflag.Value",
+			},
 		},
 	})

@@ -68,4 +99,5 @@ func ExampleNewCommand() {
 	// true
 	// 1m0s
 	// [foo bar]
+	// on
 }
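The `case pflag.Value:` branch added to `BindOptions` above works against the three-method contract that `customFlag` satisfies. A stdlib-only sketch of that contract, without cobra/viper; the `value` interface and `onOff` type below are illustrative stand-ins mirroring `spf13/pflag`'s `Value`, not the real library types:

```go
package main

import "fmt"

// value mirrors the three-method contract of spf13/pflag's Value interface.
type value interface {
	String() string
	Set(string) error
	Type() string
}

// onOff is a hypothetical boolean flag rendered as "on"/"off",
// shaped like the customFlag type in the diff above.
type onOff bool

func (c onOff) String() string {
	if c {
		return "on"
	}
	return "off"
}

func (c *onOff) Set(s string) error {
	*c = s == "on"
	return nil
}

func (c *onOff) Type() string { return "fancy-bool" }

func main() {
	var f onOff
	var v value = &f // *onOff satisfies the interface
	v.Set("on")      // what BindOptions does with o.Default
	fmt.Println(v.String())
}
```

Because binding happens through the interface, any user-defined type with these three methods can be registered as a flag without a new `case` in the type switch.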
@@ -643,6 +643,12 @@ func (s *DocumentStore) DeleteDocuments(ctx context.Context, opts ...influxdb.Do
 		}

 		if err := s.service.deleteDocument(ctx, tx, s.namespace, id); err != nil {
+			if IsNotFound(err) {
+				return &influxdb.Error{
+					Code: influxdb.ENotFound,
+					Msg:  influxdb.ErrDocumentNotFound,
+				}
+			}
 			return err
 		}
 	}
@@ -0,0 +1,83 @@
+package kv
+
+import (
+	"context"
+	"sync"
+
+	"go.uber.org/zap"
+)
+
+type kvIndexer struct {
+	log       *zap.Logger
+	kv        Store
+	ctx       context.Context
+	cancel    context.CancelFunc
+	indexChan chan indexBatch
+	finished  chan struct{}
+	oncer     sync.Once
+}
+
+type indexBatch struct {
+	bucketName []byte
+	keys       [][]byte
+}
+
+func NewIndexer(log *zap.Logger, kv Store) *kvIndexer {
+	ctx, cancel := context.WithCancel(context.Background())
+	i := &kvIndexer{
+		log:       log,
+		kv:        kv,
+		ctx:       ctx,
+		cancel:    cancel,
+		indexChan: make(chan indexBatch, 10),
+		finished:  make(chan struct{}),
+	}
+
+	go i.workIndexes()
+	return i
+}
+
+func (i *kvIndexer) AddToIndex(bucketName []byte, keys [][]byte) {
+	// check for close
+	select {
+	case <-i.ctx.Done():
+		return
+	case i.indexChan <- indexBatch{bucketName, keys}:
+	}
+}
+
+func (i *kvIndexer) workIndexes() {
+	defer close(i.finished)
+	for batch := range i.indexChan {
+		// open update tx
+		err := i.kv.Update(i.ctx, func(tx Tx) error {
+			// create a bucket for this batch
+			bucket, err := tx.Bucket(batch.bucketName)
+			if err != nil {
+				return err
+			}
+			// insert all the keys
+			for _, key := range batch.keys {
+				err := bucket.Put(key, nil)
+				if err != nil {
+					return err
+				}
+			}
+			return nil
+		})
+
+		if err != nil {
+			// only option is to log
+			i.log.Error("failed to update index bucket", zap.Error(err))
+		}
+	}
+}
+
+func (i *kvIndexer) Stop() {
+	i.cancel()
+	i.oncer.Do(func() {
+		close(i.indexChan)
+	})
+
+	<-i.finished
+}
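The indexer above uses a common Go background-worker shutdown pattern: producers hand batches to a buffered channel, a single goroutine drains it, and `Stop` closes the channel and blocks on a `finished` signal so queued work is applied before shutdown returns. A stdlib-only sketch of the same pattern; `batchWorker` and its fields are hypothetical names for illustration, not part of the kv package:

```go
package main

import (
	"fmt"
	"sync"
)

// batchWorker mirrors the shape of kvIndexer: a buffered input channel,
// one draining goroutine, and a Stop that waits for the drain to finish.
type batchWorker struct {
	in       chan []string
	finished chan struct{}
	once     sync.Once
	mu       sync.Mutex
	applied  []string // stands in for the kv bucket writes
}

func newBatchWorker() *batchWorker {
	w := &batchWorker{in: make(chan []string, 10), finished: make(chan struct{})}
	go w.run()
	return w
}

func (w *batchWorker) run() {
	defer close(w.finished) // signal Stop that the drain is complete
	for batch := range w.in { // exits when Stop closes the channel
		w.mu.Lock()
		w.applied = append(w.applied, batch...)
		w.mu.Unlock()
	}
}

func (w *batchWorker) Add(batch []string) { w.in <- batch }

func (w *batchWorker) Stop() {
	w.once.Do(func() { close(w.in) })
	<-w.finished // block until all queued batches are applied
}

func main() {
	w := newBatchWorker()
	w.Add([]string{"1", "2"})
	w.Add([]string{"3"})
	w.Stop()
	fmt.Println(len(w.applied))
}
```

The `sync.Once` around the channel close is what makes `Stop` safe to call more than once; closing an already-closed channel would panic.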
@@ -0,0 +1,49 @@
+package kv_test
+
+import (
+	"context"
+	"testing"
+
+	"github.com/influxdata/influxdb/inmem"
+	"github.com/influxdata/influxdb/kv"
+	"go.uber.org/zap/zaptest"
+)
+
+func TestIndexer(t *testing.T) {
+	store := inmem.NewKVStore()
+
+	indexer := kv.NewIndexer(zaptest.NewLogger(t), store)
+	indexes := [][]byte{
+		[]byte("1"),
+		[]byte("2"),
+		[]byte("3"),
+		[]byte("4"),
+	}
+	indexer.AddToIndex([]byte("bucket"), indexes)
+	indexer.Stop()
+
+	count := 0
+	err := store.View(context.Background(), func(tx kv.Tx) error {
+		bucket, err := tx.Bucket([]byte("bucket"))
+		if err != nil {
+			t.Fatal(err)
+		}
+		cur, err := bucket.ForwardCursor(nil)
+		if err != nil {
+			t.Fatal(err)
+		}
+		for k, _ := cur.Next(); k != nil; k, _ = cur.Next() {
+			if string(k) != string(indexes[count]) {
+				t.Fatalf("failed to find correct index, found: %s, expected: %s", k, indexes[count])
+			}
+			count++
+		}
+		return nil
+	})
+	if err != nil {
+		t.Fatal(err)
+	}
+	if count != 4 {
+		t.Fatal("failed to retrieve indexes")
+	}
+}
@@ -18,17 +18,21 @@ var (
 	_ influxdb.UserService = (*Service)(nil)
 )

+type indexer interface {
+	AddToIndex([]byte, [][]byte)
+	Stop()
+}
+
 // OpPrefix is the prefix for kv errors.
 const OpPrefix = "kv/"

 // Service is the struct that influxdb services are implemented on.
 type Service struct {
-	kv     Store
-	log    *zap.Logger
-	clock  clock.Clock
-	Config ServiceConfig
-	audit  resource.Logger
+	kv          Store
+	log         *zap.Logger
+	clock       clock.Clock
+	Config      ServiceConfig
+	audit       resource.Logger
+	IDGenerator influxdb.IDGenerator

 	// FluxLanguageService is used for parsing flux.
@@ -46,6 +50,8 @@ type Service struct {
 	influxdb.TimeGenerator
 	Hash Crypt

+	indexer indexer
+
 	checkStore    *IndexStore
 	endpointStore *IndexStore
 	variableStore *IndexStore
@@ -66,6 +72,7 @@ func NewService(log *zap.Logger, kv Store, configs ...ServiceConfig) *Service {
 		checkStore:    newCheckStore(),
 		endpointStore: newEndpointStore(),
 		variableStore: newVariableStore(),
+		indexer:       NewIndexer(log, kv),
 	}

 	if len(configs) > 0 {
@@ -180,6 +187,11 @@ func (s *Service) Initialize(ctx context.Context) error {

 		return s.initializeUsers(ctx, tx)
 	})

 }
+
+func (s *Service) Stop() {
+	s.indexer.Stop()
+}
+
 // WithResourceLogger sets the resource audit logger for the service.

kv/task.go
@@ -6,11 +6,9 @@ import (
 	"strings"
 	"time"

-	"github.com/influxdata/influxdb/resource"
-
 	"github.com/influxdata/influxdb"
 	icontext "github.com/influxdata/influxdb/context"
-	"github.com/influxdata/influxdb/task/backend"
+	"github.com/influxdata/influxdb/resource"
 	"github.com/influxdata/influxdb/task/options"
 	"go.uber.org/zap"
 )
@@ -34,7 +32,6 @@ var (
 )

 var _ influxdb.TaskService = (*Service)(nil)
-var _ backend.TaskControlService = (*Service)(nil)

 type kvTask struct {
 	ID influxdb.ID `json:"id"`
@@ -311,7 +308,21 @@ func (s *Service) findTasksByUser(ctx context.Context, tx Tx, filter influxdb.Ta
 		return nil, 0, err
 	}

-	matchFn := newTaskMatchFn(filter, org)
+	var (
+		afterSeen bool
+		after     = func(task *influxdb.Task) bool {
+			if filter.After == nil || afterSeen {
+				return true
+			}
+
+			if task.ID == *filter.After {
+				afterSeen = true
+			}
+
+			return false
+		}
+		matchFn = newTaskMatchFn(filter, org)
+	)

 	for _, m := range maps {
 		task, err := s.findTaskByIDWithAuth(ctx, tx, m.ResourceID)
@@ -323,6 +334,10 @@ func (s *Service) findTasksByUser(ctx context.Context, tx Tx, filter influxdb.Ta
 		}

 		if matchFn == nil || matchFn(task) {
+			if !after(task) {
+				continue
+			}
+
 			ts = append(ts, task)

 			if len(ts) >= filter.Limit {
@@ -389,6 +404,9 @@ func (s *Service) findTasksByOrg(ctx context.Context, tx Tx, filter influxdb.Tas
 		return nil, 0, influxdb.ErrUnexpectedTaskBucketErr(err)
 	}

+	// free cursor resources
+	defer c.Close()
+
 	matchFn := newTaskMatchFn(filter, nil)

 	for k, v := c.Next(); k != nil; k, v = c.Next() {
@@ -401,6 +419,7 @@ func (s *Service) findTasksByOrg(ctx context.Context, tx Tx, filter influxdb.Tas
 		if err != nil {
 			if err == influxdb.ErrTaskNotFound {
+				// we might have some crufty index's
 				err = nil
 				continue
 			}
 			return nil, 0, err
@@ -420,7 +439,7 @@ func (s *Service) findTasksByOrg(ctx context.Context, tx Tx, filter influxdb.Tas
 		}
 	}

-	return ts, len(ts), err
+	return ts, len(ts), c.Err()
 }

 type taskMatchFn func(*influxdb.Task) bool
@@ -500,6 +519,9 @@ func (s *Service) findAllTasks(ctx context.Context, tx Tx, filter influxdb.TaskF
 		return nil, 0, influxdb.ErrUnexpectedTaskBucketErr(err)
 	}

+	// free cursor resources
+	defer c.Close()
+
 	matchFn := newTaskMatchFn(filter, nil)

 	for k, v := c.Next(); k != nil; k, v = c.Next() {
@@ -519,6 +541,10 @@ func (s *Service) findAllTasks(ctx context.Context, tx Tx, filter influxdb.TaskF
 		}
 	}

+	if err := c.Err(); err != nil {
+		return nil, 0, err
+	}
+
 	return ts, len(ts), err
 }

@@ -571,7 +597,7 @@ func (s *Service) createTask(ctx context.Context, tx Tx, tc influxdb.TaskCreate)
 	}

 	if tc.Status == "" {
-		tc.Status = string(backend.TaskActive)
+		tc.Status = string(influxdb.TaskActive)
 	}

 	createdAt := s.clock.Now().Truncate(time.Second).UTC()
@@ -1134,7 +1160,7 @@ func (s *Service) retryRun(ctx context.Context, tx Tx, taskID, runID influxdb.ID
 	}

 	r.ID = s.IDGenerator.ID()
-	r.Status = backend.RunScheduled.String()
+	r.Status = influxdb.RunScheduled.String()
 	r.StartedAt = time.Time{}
 	r.FinishedAt = time.Time{}
 	r.RequestedAt = time.Time{}
@@ -1202,7 +1228,7 @@ func (s *Service) forceRun(ctx context.Context, tx Tx, taskID influxdb.ID, sched
 	r := &influxdb.Run{
 		ID:           s.IDGenerator.ID(),
 		TaskID:       taskID,
-		Status:       backend.RunScheduled.String(),
+		Status:       influxdb.RunScheduled.String(),
 		RequestedAt:  time.Now().UTC(),
 		ScheduledFor: t,
 		Log:          []influxdb.Log{},
@@ -1267,7 +1293,7 @@ func (s *Service) createRun(ctx context.Context, tx Tx, taskID influxdb.ID, sche
 		TaskID:       taskID,
 		ScheduledFor: t,
 		RunAt:        runAt,
-		Status:       backend.RunScheduled.String(),
+		Status:       influxdb.RunScheduled.String(),
 		Log:          []influxdb.Log{},
 	}

@@ -1524,7 +1550,7 @@ func (s *Service) finishRun(ctx context.Context, tx Tx, taskID, runID influxdb.I
 }

 // UpdateRunState sets the run state at the respective time.
-func (s *Service) UpdateRunState(ctx context.Context, taskID, runID influxdb.ID, when time.Time, state backend.RunStatus) error {
+func (s *Service) UpdateRunState(ctx context.Context, taskID, runID influxdb.ID, when time.Time, state influxdb.RunStatus) error {
 	err := s.kv.Update(ctx, func(tx Tx) error {
 		err := s.updateRunState(ctx, tx, taskID, runID, when, state)
 		if err != nil {
@@ -1535,7 +1561,7 @@ func (s *Service) UpdateRunState(ctx context.Context, taskID, runID influxdb.ID,
 	return err
 }

-func (s *Service) updateRunState(ctx context.Context, tx Tx, taskID, runID influxdb.ID, when time.Time, state backend.RunStatus) error {
+func (s *Service) updateRunState(ctx context.Context, tx Tx, taskID, runID influxdb.ID, when time.Time, state influxdb.RunStatus) error {
 	// find run
 	run, err := s.findRunByID(ctx, tx, taskID, runID)
 	if err != nil {
@@ -1545,9 +1571,9 @@ func (s *Service) updateRunState(ctx context.Context, tx Tx, taskID, runID influ
 	// update state
 	run.Status = state.String()
 	switch state {
-	case backend.RunStarted:
+	case influxdb.RunStarted:
 		run.StartedAt = when
-	case backend.RunSuccess, backend.RunFail, backend.RunCanceled:
+	case influxdb.RunSuccess, influxdb.RunFail, influxdb.RunCanceled:
 		run.FinishedAt = when
 	}

@@ -14,7 +14,6 @@ import (
 	"github.com/influxdata/influxdb/kv"
 	_ "github.com/influxdata/influxdb/query/builtin"
 	"github.com/influxdata/influxdb/query/fluxlang"
-	"github.com/influxdata/influxdb/task/backend"
 	"github.com/influxdata/influxdb/task/servicetest"
 	"go.uber.org/zap/zaptest"
 )
@@ -133,7 +132,7 @@ func TestRetrieveTaskWithBadAuth(t *testing.T) {
 		Flux:           `option task = {name: "a task",every: 1h} from(bucket:"test") |> range(start:-1h)`,
 		OrganizationID: ts.Org.ID,
 		OwnerID:        ts.User.ID,
-		Status:         string(backend.TaskActive),
+		Status:         string(influxdb.TaskActive),
 	})
 	if err != nil {
 		t.Fatal(err)
@@ -185,7 +184,7 @@ func TestRetrieveTaskWithBadAuth(t *testing.T) {
 	}

 	// test status filter
-	active := string(backend.TaskActive)
+	active := string(influxdb.TaskActive)
 	tasksWithActiveFilter, _, err := ts.Service.FindTasks(ctx, influxdb.TaskFilter{Status: &active})
 	if err != nil {
 		t.Fatal("could not find tasks")
@@ -211,7 +210,7 @@ func TestService_UpdateTask_InactiveToActive(t *testing.T) {
 		Flux:           `option task = {name: "a task",every: 1h} from(bucket:"test") |> range(start:-1h)`,
 		OrganizationID: ts.Org.ID,
 		OwnerID:        ts.User.ID,
-		Status:         string(backend.TaskActive),
+		Status:         string(influxdb.TaskActive),
 	})
 	if err != nil {
 		t.Fatal("CreateTask", err)
@@ -312,7 +311,7 @@ func TestTaskRunCancellation(t *testing.T) {
 		t.Fatal(err)
 	}

-	if canceled.Status != backend.RunCanceled.String() {
+	if canceled.Status != influxdb.RunCanceled.String() {
 		t.Fatalf("expected task run to be cancelled")
 	}
 }
@@ -0,0 +1,167 @@
+package mock
+
+import (
+	"github.com/influxdata/influxdb/models"
+	"github.com/influxdata/influxdb/pkg/data/gen"
+	"github.com/influxdata/influxdb/storage/reads"
+	"github.com/influxdata/influxdb/tsdb"
+	"github.com/influxdata/influxdb/tsdb/cursors"
+)
+
+type GeneratorResultSet struct {
+	sg  gen.SeriesGenerator
+	f   floatTimeValuesGeneratorCursor
+	i   integerTimeValuesGeneratorCursor
+	u   unsignedTimeValuesGeneratorCursor
+	s   stringTimeValuesGeneratorCursor
+	b   booleanTimeValuesGeneratorCursor
+	cur cursors.Cursor
+}
+
+var _ reads.ResultSet = (*GeneratorResultSet)(nil)
+
+// NewResultSetFromSeriesGenerator transforms a SeriesGenerator into a ResultSet,
+// which is useful for mocking data when a client requires a ResultSet.
+func NewResultSetFromSeriesGenerator(sg gen.SeriesGenerator) *GeneratorResultSet {
+	return &GeneratorResultSet{sg: sg}
+}
+
+func (g *GeneratorResultSet) Next() bool {
+	return g.sg.Next()
+}
+
+func (g *GeneratorResultSet) Cursor() cursors.Cursor {
+	switch g.sg.FieldType() {
+	case models.Float:
+		g.f.tv = g.sg.TimeValuesGenerator()
+		g.cur = &g.f
+	case models.Integer:
+		g.i.tv = g.sg.TimeValuesGenerator()
+		g.cur = &g.i
+	case models.Unsigned:
+		g.u.tv = g.sg.TimeValuesGenerator()
+		g.cur = &g.u
+	case models.String:
+		g.s.tv = g.sg.TimeValuesGenerator()
+		g.cur = &g.s
+	case models.Boolean:
+		g.b.tv = g.sg.TimeValuesGenerator()
+		g.cur = &g.b
+	default:
+		panic("unreachable")
+	}
+
+	return g.cur
+}
+
+func (g *GeneratorResultSet) Tags() models.Tags { return g.sg.Tags() }
+func (g *GeneratorResultSet) Close()            {}
+func (g *GeneratorResultSet) Err() error        { return nil }
+
+func (g *GeneratorResultSet) Stats() cursors.CursorStats {
+	var stats cursors.CursorStats
+	stats.Add(g.f.Stats())
+	stats.Add(g.i.Stats())
+	stats.Add(g.u.Stats())
+	stats.Add(g.s.Stats())
+	stats.Add(g.b.Stats())
+	return stats
+}
+
+// cursors
+
+type timeValuesGeneratorCursor struct {
+	tv    gen.TimeValuesSequence
+	stats cursors.CursorStats
+}
+
+func (t timeValuesGeneratorCursor) Close()                     {}
+func (t timeValuesGeneratorCursor) Err() error                 { return nil }
+func (t timeValuesGeneratorCursor) Stats() cursors.CursorStats { return t.stats }
+
+type floatTimeValuesGeneratorCursor struct {
+	timeValuesGeneratorCursor
+	a tsdb.FloatArray
+}
+
+func (c *floatTimeValuesGeneratorCursor) Next() *cursors.FloatArray {
+	if c.tv.Next() {
+		c.tv.Values().(gen.FloatValues).Copy(&c.a)
+		c.stats.ScannedBytes += len(c.a.Values) * 8
+		c.stats.ScannedValues += c.a.Len()
+	} else {
+		c.a.Timestamps = c.a.Timestamps[:0]
+		c.a.Values = c.a.Values[:0]
+	}
+	return &c.a
+}
+
+type integerTimeValuesGeneratorCursor struct {
+	timeValuesGeneratorCursor
+	a tsdb.IntegerArray
+}
+
+func (c *integerTimeValuesGeneratorCursor) Next() *cursors.IntegerArray {
+	if c.tv.Next() {
+		c.tv.Values().(gen.IntegerValues).Copy(&c.a)
+		c.stats.ScannedBytes += len(c.a.Values) * 8
+		c.stats.ScannedValues += c.a.Len()
+	} else {
+		c.a.Timestamps = c.a.Timestamps[:0]
+		c.a.Values = c.a.Values[:0]
+	}
+	return &c.a
+}
+
+type unsignedTimeValuesGeneratorCursor struct {
+	timeValuesGeneratorCursor
+	a tsdb.UnsignedArray
+}
+
+func (c *unsignedTimeValuesGeneratorCursor) Next() *cursors.UnsignedArray {
+	if c.tv.Next() {
+		c.tv.Values().(gen.UnsignedValues).Copy(&c.a)
+		c.stats.ScannedBytes += len(c.a.Values) * 8
+		c.stats.ScannedValues += c.a.Len()
+	} else {
+		c.a.Timestamps = c.a.Timestamps[:0]
+		c.a.Values = c.a.Values[:0]
+	}
+	return &c.a
+}
+
+type stringTimeValuesGeneratorCursor struct {
+	timeValuesGeneratorCursor
+	a tsdb.StringArray
+}
+
+func (c *stringTimeValuesGeneratorCursor) Next() *cursors.StringArray {
+	if c.tv.Next() {
+		c.tv.Values().(gen.StringValues).Copy(&c.a)
+		for _, v := range c.a.Values {
+			c.stats.ScannedBytes += len(v)
+		}
+		c.stats.ScannedValues += c.a.Len()
+	} else {
+		c.a.Timestamps = c.a.Timestamps[:0]
+		c.a.Values = c.a.Values[:0]
+	}
+	return &c.a
+}
+
+type booleanTimeValuesGeneratorCursor struct {
+	timeValuesGeneratorCursor
+	a tsdb.BooleanArray
+}
+
+func (c *booleanTimeValuesGeneratorCursor) Next() *cursors.BooleanArray {
+	if c.tv.Next() {
+		c.tv.Values().(gen.BooleanValues).Copy(&c.a)
+		c.stats.ScannedBytes += len(c.a.Values)
+		c.stats.ScannedValues += c.a.Len()
+	} else {
+		c.a.Timestamps = c.a.Timestamps[:0]
+		c.a.Values = c.a.Values[:0]
+	}
+	return &c.a
+}
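`Cursor()` above dispatches on the generator's field type and hands back a pointer to one of five pre-allocated cursors embedded in the struct, so no allocation happens per series. A minimal sketch of that dispatch shape; the types below are illustrative stand-ins, not the real `models`/`cursors` types:

```go
package main

import "fmt"

// fieldType stands in for models.FieldType in the mock above.
type fieldType int

const (
	floatType fieldType = iota
	integerType
)

// cursor stands in for cursors.Cursor.
type cursor interface{ kind() string }

type floatCursor struct{}
type integerCursor struct{}

func (floatCursor) kind() string   { return "float" }
func (integerCursor) kind() string { return "integer" }

// resultSet embeds one cursor value per supported type, as the mock does;
// cursorFor returns a pointer into the struct instead of allocating.
type resultSet struct {
	f floatCursor
	i integerCursor
}

func (r *resultSet) cursorFor(t fieldType) cursor {
	switch t {
	case floatType:
		return &r.f
	case integerType:
		return &r.i
	default:
		panic("unreachable")
	}
}

func main() {
	var rs resultSet
	fmt.Println(rs.cursorFor(integerType).kind())
}
```

Embedding the cursors by value keeps the whole result set in one allocation, which matters when a generator emits many series in a benchmark or mock.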
@@ -0,0 +1,86 @@
+package mock_test
+
+import (
+	"strings"
+	"testing"
+	"time"
+
+	"github.com/google/go-cmp/cmp"
+	"github.com/influxdata/influxdb/mock"
+	"github.com/influxdata/influxdb/pkg/data/gen"
+	"github.com/influxdata/influxdb/storage/reads"
+	"github.com/influxdata/influxdb/tsdb/cursors"
+)
+
+func mustNewSpecFromToml(tb testing.TB, toml string) *gen.Spec {
+	tb.Helper()
+
+	spec, err := gen.NewSpecFromToml(toml)
+	if err != nil {
+		panic(err)
+	}
+
+	return spec
+}
+
+func TestNewResultSetFromSeriesGenerator(t *testing.T) {
+	checkResult := func(t *testing.T, sg gen.SeriesGenerator, expData string, expStats cursors.CursorStats) {
+		t.Helper()
+
+		rs := mock.NewResultSetFromSeriesGenerator(sg)
+		var sb strings.Builder
+		err := reads.ResultSetToLineProtocol(&sb, rs)
+		if err != nil {
+			t.Errorf("unexpected error: %v", err)
+		}
+
+		if got, exp := sb.String(), expData; !cmp.Equal(got, exp) {
+			t.Errorf("unexpected value -got/+exp\n%s", cmp.Diff(got, exp))
+		}
+
+		if got, exp := rs.Stats(), expStats; !cmp.Equal(got, exp) {
+			t.Errorf("unexpected value -got/+exp\n%s", cmp.Diff(got, exp))
+		}
+	}
+
+	t.Run("float", func(t *testing.T) {
+		spec := mustNewSpecFromToml(t, `
+[[measurements]]
+name = "m0"
+sample = 1.0
+tags = [
+	{ name = "tag0", source = { type = "sequence", start = 0, count = 3 } },
+	{ name = "tag1", source = { type = "sequence", start = 0, count = 2 } },
+]
+fields = [
+	{ name = "v0", count = 3, source = 1.0 },
+]`)
+
+		sg := gen.NewSeriesGeneratorFromSpec(spec, gen.TimeRange{
+			Start: time.Unix(1000, 0),
+			End:   time.Unix(2000, 0),
+		})
+		const expData = `m0,tag0=value0,tag1=value0 v0=1 1000000000000
+m0,tag0=value0,tag1=value0 v0=1 1333333000000
+m0,tag0=value0,tag1=value0 v0=1 1666666000000
+m0,tag0=value0,tag1=value1 v0=1 1000000000000
+m0,tag0=value0,tag1=value1 v0=1 1333333000000
+m0,tag0=value0,tag1=value1 v0=1 1666666000000
+m0,tag0=value1,tag1=value0 v0=1 1000000000000
+m0,tag0=value1,tag1=value0 v0=1 1333333000000
+m0,tag0=value1,tag1=value0 v0=1 1666666000000
+m0,tag0=value1,tag1=value1 v0=1 1000000000000
+m0,tag0=value1,tag1=value1 v0=1 1333333000000
+m0,tag0=value1,tag1=value1 v0=1 1666666000000
+m0,tag0=value2,tag1=value0 v0=1 1000000000000
+m0,tag0=value2,tag1=value0 v0=1 1333333000000
+m0,tag0=value2,tag1=value0 v0=1 1666666000000
+m0,tag0=value2,tag1=value1 v0=1 1000000000000
+m0,tag0=value2,tag1=value1 v0=1 1333333000000
+m0,tag0=value2,tag1=value1 v0=1 1666666000000
+`
+		expStats := cursors.CursorStats{ScannedValues: 18, ScannedBytes: 18 * 8}
+		checkResult(t, sg, expData, expStats)
+	})
+
+}
@@ -135,7 +135,7 @@ type TaskControlService struct {
 	ManualRunsFn     func(ctx context.Context, taskID influxdb.ID) ([]*influxdb.Run, error)
 	StartManualRunFn func(ctx context.Context, taskID, runID influxdb.ID) (*influxdb.Run, error)
 	FinishRunFn      func(ctx context.Context, taskID, runID influxdb.ID) (*influxdb.Run, error)
-	UpdateRunStateFn func(ctx context.Context, taskID, runID influxdb.ID, when time.Time, state backend.RunStatus) error
+	UpdateRunStateFn func(ctx context.Context, taskID, runID influxdb.ID, when time.Time, state influxdb.RunStatus) error
 	AddRunLogFn      func(ctx context.Context, taskID, runID influxdb.ID, when time.Time, log string) error
 }

@@ -154,7 +154,7 @@ func (tcs *TaskControlService) StartManualRun(ctx context.Context, taskID, runID
 func (tcs *TaskControlService) FinishRun(ctx context.Context, taskID, runID influxdb.ID) (*influxdb.Run, error) {
 	return tcs.FinishRunFn(ctx, taskID, runID)
 }
-func (tcs *TaskControlService) UpdateRunState(ctx context.Context, taskID, runID influxdb.ID, when time.Time, state backend.RunStatus) error {
+func (tcs *TaskControlService) UpdateRunState(ctx context.Context, taskID, runID influxdb.ID, when time.Time, state influxdb.RunStatus) error {
 	return tcs.UpdateRunStateFn(ctx, taskID, runID, when, state)
 }
 func (tcs *TaskControlService) AddRunLog(ctx context.Context, taskID, runID influxdb.ID, when time.Time, log string) error {
@@ -2,6 +2,7 @@ package models_test

import (
	"bytes"
	"encoding/binary"
	"errors"
	"fmt"
	"io"

@@ -16,9 +17,7 @@ import (
	"time"

	"github.com/google/go-cmp/cmp"
	"github.com/influxdata/influxdb"
	"github.com/influxdata/influxdb/models"
	"github.com/influxdata/influxdb/tsdb"
)

var (

@@ -37,6 +36,16 @@ var (
	sink interface{}
)

type ID uint64

// EncodeName converts org/bucket pairs to the tsdb internal serialization
func EncodeName(org, bucket ID) [16]byte {
	var nameBytes [16]byte
	binary.BigEndian.PutUint64(nameBytes[0:8], uint64(org))
	binary.BigEndian.PutUint64(nameBytes[8:16], uint64(bucket))
	return nameBytes
}

func TestMarshal(t *testing.T) {
	got := tags.HashKey()
	if exp := ",apple=orange,foo=bar,host=serverA,region=uswest"; string(got) != exp {

@@ -2493,7 +2502,7 @@ func TestParsePointsWithOptions(t *testing.T) {
	for _, test := range tests {
		t.Run(test.name, func(t *testing.T) {
			buf := test.read(t)
			encoded := tsdb.EncodeName(influxdb.ID(1000), influxdb.ID(2000))
			encoded := EncodeName(ID(1000), ID(2000))
			mm := models.EscapeMeasurement(encoded[:])

			var stats models.ParserStats

@@ -3021,7 +3030,7 @@ func BenchmarkNewTagsKeyValues(b *testing.B) {
func benchParseFile(b *testing.B, name string, repeat int, fn func(b *testing.B, buf []byte, mm []byte, now time.Time)) {
	b.Helper()
	buf := mustReadTestData(b, name, repeat)
	encoded := tsdb.EncodeName(influxdb.ID(1000), influxdb.ID(2000))
	encoded := EncodeName(ID(1000), ID(2000))
	mm := models.EscapeMeasurement(encoded[:])
	now := time.Now()
@@ -60,7 +60,7 @@ func (s *HTTP) generateFluxASTBody(e *endpoint.HTTP) []ast.Statement {
	statements = append(statements, s.generateFluxASTEndpoint(e))
	statements = append(statements, s.generateFluxASTNotificationDefinition(e))
	statements = append(statements, s.generateFluxASTStatuses())
	statements = append(statements, s.generateAllStateChanges()...)
	statements = append(statements, s.generateLevelChecks()...)
	statements = append(statements, s.generateFluxASTNotifyPipe())

	return statements
@@ -66,7 +66,7 @@ func (s *PagerDuty) GenerateFlux(e influxdb.NotificationEndpoint) (string, error
func (s *PagerDuty) GenerateFluxAST(e *endpoint.PagerDuty) (*ast.Package, error) {
	f := flux.File(
		s.Name,
		flux.Imports("influxdata/influxdb/monitor", "pagerduty", "influxdata/influxdb/secrets"),
		flux.Imports("influxdata/influxdb/monitor", "pagerduty", "influxdata/influxdb/secrets", "experimental"),
		s.generateFluxASTBody(e),
	)
	return &ast.Package{Package: "main", Files: []*ast.File{f}}, nil

@@ -79,6 +79,7 @@ func (s *PagerDuty) generateFluxASTBody(e *endpoint.PagerDuty) []ast.Statement {
	statements = append(statements, s.generateFluxASTEndpoint(e))
	statements = append(statements, s.generateFluxASTNotificationDefinition(e))
	statements = append(statements, s.generateFluxASTStatuses())
	statements = append(statements, s.generateLevelChecks()...)
	statements = append(statements, s.generateFluxASTNotifyPipe(e.ClientURL))

	return statements

@@ -170,7 +171,7 @@ func (s *PagerDuty) generateFluxASTNotifyPipe(url string) ast.Statement {

	call := flux.Call(flux.Member("monitor", "notify"), flux.Object(props...))

	return flux.ExpressionStatement(flux.Pipe(flux.Identifier("statuses"), call))
	return flux.ExpressionStatement(flux.Pipe(flux.Identifier("all_statuses"), call))
}

func severityFromLevel() *ast.CallExpression {
@@ -3,6 +3,7 @@ package rule_test

import (
	"testing"

	"github.com/andreyvit/diff"
	"github.com/influxdata/influxdb"
	"github.com/influxdata/influxdb/notification"
	"github.com/influxdata/influxdb/notification/endpoint"
@@ -10,11 +11,60 @@ import (
)

func TestPagerDuty_GenerateFlux(t *testing.T) {
	want := `package main
	tests := []struct {
		name     string
		rule     *rule.PagerDuty
		endpoint *endpoint.PagerDuty
		script   string
	}{
		{
			name: "notify on crit",
			endpoint: &endpoint.PagerDuty{
				Base: endpoint.Base{
					ID:   idPtr(2),
					Name: "foo",
				},
				ClientURL: "http://localhost:7777/host/${r.host}",
				RoutingKey: influxdb.SecretField{
					Key: "pagerduty_token",
				},
			},
			rule: &rule.PagerDuty{
				MessageTemplate: "blah",
				Base: rule.Base{
					ID:         1,
					EndpointID: 2,
					Name:       "foo",
					Every:      mustDuration("1h"),
					StatusRules: []notification.StatusRule{
						{
							CurrentLevel: notification.Critical,
						},
					},
					TagRules: []notification.TagRule{
						{
							Tag: influxdb.Tag{
								Key:   "foo",
								Value: "bar",
							},
							Operator: influxdb.Equal,
						},
						{
							Tag: influxdb.Tag{
								Key:   "baz",
								Value: "bang",
							},
							Operator: influxdb.Equal,
						},
					},
				},
			},
			script: `package main
// foo
import "influxdata/influxdb/monitor"
import "pagerduty"
import "influxdata/influxdb/secrets"
import "experimental"

option task = {name: "foo", every: 1h}

@@ -28,8 +78,14 @@ notification = {
}
statuses = monitor.from(start: -2h, fn: (r) =>
	(r.foo == "bar" and r.baz == "bang"))
crit = statuses
	|> filter(fn: (r) =>
		(r._level == "crit"))
all_statuses = crit
	|> filter(fn: (r) =>
		(r._time > experimental.subDuration(from: now(), d: 1h)))

statuses
all_statuses
	|> monitor.notify(data: notification, endpoint: pagerduty_endpoint(mapFn: (r) =>
		({
			routingKey: pagerduty_secret,
@@ -42,52 +98,196 @@ statuses
			source: notification._notification_rule_name,
			summary: r._message,
			timestamp: time(v: r._source_timestamp),
		})))`

	s := &rule.PagerDuty{
		MessageTemplate: "blah",
		Base: rule.Base{
			ID:         1,
			EndpointID: 2,
			Name:       "foo",
			Every:      mustDuration("1h"),
			TagRules: []notification.TagRule{
				{
					Tag: influxdb.Tag{
						Key:   "foo",
						Value: "bar",
					},
					Operator: influxdb.Equal,
		})))`,
		},
		{
			name: "notify on info to crit",
			endpoint: &endpoint.PagerDuty{
				Base: endpoint.Base{
					ID:   idPtr(2),
					Name: "foo",
				},
				{
					Tag: influxdb.Tag{
						Key:   "baz",
						Value: "bang",
					},
					Operator: influxdb.Equal,
				ClientURL: "http://localhost:7777/host/${r.host}",
				RoutingKey: influxdb.SecretField{
					Key: "pagerduty_token",
				},
			},
		},
	}
			rule: &rule.PagerDuty{
				MessageTemplate: "blah",
				Base: rule.Base{
					ID:         1,
					EndpointID: 2,
					Name:       "foo",
					Every:      mustDuration("1h"),
					StatusRules: []notification.StatusRule{
						{
							CurrentLevel:  notification.Critical,
							PreviousLevel: statusRulePtr(notification.Info),
						},
					},
					TagRules: []notification.TagRule{
						{
							Tag: influxdb.Tag{
								Key:   "foo",
								Value: "bar",
							},
							Operator: influxdb.Equal,
						},
						{
							Tag: influxdb.Tag{
								Key:   "baz",
								Value: "bang",
							},
							Operator: influxdb.Equal,
						},
					},
				},
			},
			script: `package main
// foo
import "influxdata/influxdb/monitor"
import "pagerduty"
import "influxdata/influxdb/secrets"
import "experimental"

id := influxdb.ID(2)
e := &endpoint.PagerDuty{
	Base: endpoint.Base{
		ID:   &id,
		Name: "foo",
	},
	ClientURL: "http://localhost:7777/host/${r.host}",
	RoutingKey: influxdb.SecretField{
		Key: "pagerduty_token",
	},
}
option task = {name: "foo", every: 1h}

pagerduty_secret = secrets.get(key: "pagerduty_token")
pagerduty_endpoint = pagerduty.endpoint()
notification = {
	_notification_rule_id: "0000000000000001",
	_notification_rule_name: "foo",
	_notification_endpoint_id: "0000000000000002",
	_notification_endpoint_name: "foo",
}
statuses = monitor.from(start: -2h, fn: (r) =>
	(r.foo == "bar" and r.baz == "bang"))
info_to_crit = statuses
	|> monitor.stateChanges(fromLevel: "info", toLevel: "crit")
all_statuses = info_to_crit
	|> filter(fn: (r) =>
		(r._time > experimental.subDuration(from: now(), d: 1h)))

all_statuses
	|> monitor.notify(data: notification, endpoint: pagerduty_endpoint(mapFn: (r) =>
		({
			routingKey: pagerduty_secret,
			client: "influxdata",
			clientURL: "http://localhost:7777/host/${r.host}",
			class: r._check_name,
			group: r._source_measurement,
			severity: pagerduty.severityFromLevel(level: r._level),
			eventAction: pagerduty.actionFromLevel(level: r._level),
			source: notification._notification_rule_name,
			summary: r._message,
			timestamp: time(v: r._source_timestamp),
		})))`,
		},
		{
			name: "notify on crit or ok to warn",
			endpoint: &endpoint.PagerDuty{
				Base: endpoint.Base{
					ID:   idPtr(2),
					Name: "foo",
				},
				ClientURL: "http://localhost:7777/host/${r.host}",
				RoutingKey: influxdb.SecretField{
					Key: "pagerduty_token",
				},
			},
			rule: &rule.PagerDuty{
				MessageTemplate: "blah",
				Base: rule.Base{
					ID:         1,
					EndpointID: 2,
					Name:       "foo",
					Every:      mustDuration("1h"),
					StatusRules: []notification.StatusRule{
						{
							CurrentLevel: notification.Critical,
						},
						{
							CurrentLevel:  notification.Warn,
							PreviousLevel: statusRulePtr(notification.Ok),
						},
					},
					TagRules: []notification.TagRule{
						{
							Tag: influxdb.Tag{
								Key:   "foo",
								Value: "bar",
							},
							Operator: influxdb.Equal,
						},
						{
							Tag: influxdb.Tag{
								Key:   "baz",
								Value: "bang",
							},
							Operator: influxdb.Equal,
						},
					},
				},
			},
			script: `package main
// foo
import "influxdata/influxdb/monitor"
import "pagerduty"
import "influxdata/influxdb/secrets"
import "experimental"

option task = {name: "foo", every: 1h}

pagerduty_secret = secrets.get(key: "pagerduty_token")
pagerduty_endpoint = pagerduty.endpoint()
notification = {
	_notification_rule_id: "0000000000000001",
	_notification_rule_name: "foo",
	_notification_endpoint_id: "0000000000000002",
	_notification_endpoint_name: "foo",
}
statuses = monitor.from(start: -2h, fn: (r) =>
	(r.foo == "bar" and r.baz == "bang"))
crit = statuses
	|> filter(fn: (r) =>
		(r._level == "crit"))
ok_to_warn = statuses
	|> monitor.stateChanges(fromLevel: "ok", toLevel: "warn")
all_statuses = union(tables: [crit, ok_to_warn])
	|> sort(columns: ["_time"])
	|> filter(fn: (r) =>
		(r._time > experimental.subDuration(from: now(), d: 1h)))

all_statuses
	|> monitor.notify(data: notification, endpoint: pagerduty_endpoint(mapFn: (r) =>
		({
			routingKey: pagerduty_secret,
			client: "influxdata",
			clientURL: "http://localhost:7777/host/${r.host}",
			class: r._check_name,
			group: r._source_measurement,
			severity: pagerduty.severityFromLevel(level: r._level),
			eventAction: pagerduty.actionFromLevel(level: r._level),
			source: notification._notification_rule_name,
			summary: r._message,
			timestamp: time(v: r._source_timestamp),
		})))`,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			script, err := tt.rule.GenerateFlux(tt.endpoint)
			if err != nil {
				panic(err)
			}

			if got, want := script, tt.script; got != want {
				t.Errorf("\n\nStrings do not match:\n\n%s", diff.LineDiff(got, want))
			}
		})
	}

	f, err := s.GenerateFlux(e)
	if err != nil {
		panic(err)
	}

	if f != want {
		t.Errorf("scripts did not match. want:\n%v\n\ngot:\n%v", want, f)
	}
}
@@ -123,11 +123,11 @@ func (b *Base) generateFluxASTNotificationDefinition(e influxdb.NotificationEndp
	return flux.DefineVariable("notification", flux.Object(ruleID, ruleName, endpointID, endpointName))
}

func (b *Base) generateAllStateChanges() []ast.Statement {
func (b *Base) generateLevelChecks() []ast.Statement {
	stmts := []ast.Statement{}
	tables := []ast.Expression{}
	for _, r := range b.StatusRules {
		stmt, table := b.generateStateChanges(r)
		stmt, table := b.generateLevelCheck(r)
		tables = append(tables, table)
		stmts = append(stmts, stmt)
	}

@@ -186,7 +186,7 @@ func (b *Base) generateAllStateChanges() []ast.Statement {
	return stmts
}

func (b *Base) generateStateChanges(r notification.StatusRule) (ast.Statement, *ast.Identifier) {
func (b *Base) generateLevelCheck(r notification.StatusRule) (ast.Statement, *ast.Identifier) {
	var name string
	var pipe *ast.PipeExpression
	if r.PreviousLevel == nil && r.CurrentLevel == notification.Any {
@@ -49,7 +49,7 @@ func (s *Slack) generateFluxASTBody(e *endpoint.Slack) []ast.Statement {
	statements = append(statements, s.generateFluxASTEndpoint(e))
	statements = append(statements, s.generateFluxASTNotificationDefinition(e))
	statements = append(statements, s.generateFluxASTStatuses())
	statements = append(statements, s.generateAllStateChanges()...)
	statements = append(statements, s.generateLevelChecks()...)
	statements = append(statements, s.generateFluxASTNotifyPipe())

	return statements
@@ -152,7 +152,7 @@ func (c *Client) Req(method string, bFn BodyFn, urlPath string, rest ...string)
		}
	}
	if header != "" {
		headers.Add(header, headerVal)
		headers.Set(header, headerVal)
	}
	// w.Close here is necessary since we have to close any gzip writer
	// or other writer that requires closing.
@@ -57,6 +57,25 @@ func TestClient(t *testing.T) {
		return client, fakeDoer
	}

	cookieAuthClient := func(status int, respFn respFn, opts ...ClientOptFn) (*Client, *fakeDoer) {
		const session = "secret"
		fakeDoer := &fakeDoer{
			doFn: func(r *http.Request) (*http.Response, error) {
				cookie, err := r.Cookie("session")
				if err != nil {
					return nil, errors.New("session cookie not found")
				}
				if cookie.Value != session {
					return nil, errors.New("unauthed cookie")
				}
				return respFn(status, r)
			},
		}
		client := newClient(t, "http://example.com", append(opts, WithSessionCookie(session))...)
		client.doer = fakeDoer
		return client, fakeDoer
	}

	noAuthClient := func(status int, respFn respFn, opts ...ClientOptFn) (*Client, *fakeDoer) {
		fakeDoer := &fakeDoer{
			doFn: func(r *http.Request) (*http.Response, error) {

@@ -80,6 +99,10 @@ func TestClient(t *testing.T) {
			name:     "token auth",
			clientFn: tokenAuthClient,
		},
		{
			name:     "cookie auth",
			clientFn: cookieAuthClient,
		},
	}

	encodingTests := []struct {
@@ -50,6 +50,20 @@ func WithAuthToken(token string) ClientOptFn {
	}
}

// WithSessionCookie provides cookie auth for requests to mimic the browser.
// Typically, session is influxdb.Session.Key.
func WithSessionCookie(session string) ClientOptFn {
	return func(opts *clientOpt) error {
		fn := func(r *http.Request) {
			r.AddCookie(&http.Cookie{
				Name:  "session",
				Value: session,
			})
		}
		return WithAuth(fn)(opts)
	}
}

// WithContentType sets the content type that will be applied to the requests created
// by the Client.
func WithContentType(ct string) ClientOptFn {
@@ -262,6 +262,23 @@ func convertCellView(cell influxdb.Cell) chart {
		}
		ch.Note = p.Note
		ch.NoteOnEmpty = p.ShowNoteWhenEmpty
	case influxdb.TableViewProperties:
		setCommon(chartKindTable, p.ViewColors, p.DecimalPlaces, p.Queries)
		setNoteFixes(p.Note, p.ShowNoteWhenEmpty, "", "")
		ch.TimeFormat = p.TimeFormat
		ch.TableOptions = tableOptions{
			VerticalTimeAxis: p.TableOptions.VerticalTimeAxis,
			SortByField:      p.TableOptions.SortBy.InternalName,
			Wrapping:         p.TableOptions.Wrapping,
			FixFirstColumn:   p.TableOptions.FixFirstColumn,
		}
		for _, fieldOpt := range p.FieldOptions {
			ch.FieldOptions = append(ch.FieldOptions, fieldOption{
				FieldName:   fieldOpt.InternalName,
				DisplayName: fieldOpt.DisplayName,
				Visible:     fieldOpt.Visible,
			})
		}
	case influxdb.XYViewProperties:
		setCommon(chartKindXY, p.ViewColors, influxdb.DecimalPlaces{}, p.Queries)
		setNoteFixes(p.Note, p.ShowNoteWhenEmpty, "", "")

@@ -299,6 +316,35 @@ func convertChartToResource(ch chart) Resource {
		r[fieldChartLegend] = ch.Legend
	}

	if zero := new(tableOptions); ch.TableOptions != *zero {
		tRes := make(Resource)
		assignNonZeroBools(tRes, map[string]bool{
			fieldChartTableOptionVerticalTimeAxis: ch.TableOptions.VerticalTimeAxis,
			fieldChartTableOptionFixFirstColumn:   ch.TableOptions.FixFirstColumn,
		})
		assignNonZeroStrings(tRes, map[string]string{
			fieldChartTableOptionSortBy:   ch.TableOptions.SortByField,
			fieldChartTableOptionWrapping: ch.TableOptions.Wrapping,
		})
		r[fieldChartTableOptions] = tRes
	}

	if len(ch.FieldOptions) > 0 {
		fieldOpts := make([]Resource, 0, len(ch.FieldOptions))
		for _, fo := range ch.FieldOptions {
			fRes := make(Resource)
			assignNonZeroBools(fRes, map[string]bool{
				fieldChartFieldOptionVisible: fo.Visible,
			})
			assignNonZeroStrings(fRes, map[string]string{
				fieldChartFieldOptionDisplayName: fo.DisplayName,
				fieldChartFieldOptionFieldName:   fo.FieldName,
			})
			fieldOpts = append(fieldOpts, fRes)
		}
		r[fieldChartFieldOptions] = fieldOpts
	}

	assignNonZeroBools(r, map[string]bool{
		fieldChartNoteOnEmpty: ch.NoteOnEmpty,
		fieldChartShade:       ch.Shade,

@@ -314,6 +360,7 @@ func convertChartToResource(ch chart) Resource {
		fieldChartPosition:   ch.Position,
		fieldChartTickPrefix: ch.TickPrefix,
		fieldChartTickSuffix: ch.TickSuffix,
		fieldChartTimeFormat: ch.TimeFormat,
	})

	assignNonZeroInts(r, map[string]int{
@@ -23,9 +23,19 @@ func (s *HTTPRemoteService) CreatePkg(ctx context.Context, setters ...CreatePkgS
			return nil, err
		}
	}
	var orgIDs []string
	for orgID := range opt.OrgIDs {
		orgIDs = append(orgIDs, orgID.String())

	var orgIDs []ReqCreateOrgIDOpt
	for _, org := range opt.OrgIDs {
		orgIDs = append(orgIDs, ReqCreateOrgIDOpt{
			OrgID: org.OrgID.String(),
			Filters: struct {
				ByLabel        []string `json:"byLabel"`
				ByResourceKind []Kind   `json:"byResourceKind"`
			}{
				ByLabel:        org.LabelNames,
				ByResourceKind: org.ResourceKinds,
			},
		})
	}

	reqBody := ReqCreatePkg{
@@ -60,16 +60,43 @@ func (s *HTTPServer) Prefix() string {
	return RoutePrefix
}

type (
	// ReqCreatePkg is a request body for the create pkg endpoint.
	ReqCreatePkg struct {
		OrgIDs    []string         `json:"orgIDs"`
		Resources []ResourceToClone `json:"resources"`
// ReqCreateOrgIDOpt provides options to export resources by organization id.
type ReqCreateOrgIDOpt struct {
	OrgID   string `json:"orgID"`
	Filters struct {
		ByLabel        []string `json:"byLabel"`
		ByResourceKind []Kind   `json:"byResourceKind"`
	} `json:"resourceFilters"`
}

// ReqCreatePkg is a request body for the create pkg endpoint.
type ReqCreatePkg struct {
	OrgIDs    []ReqCreateOrgIDOpt `json:"orgIDs"`
	Resources []ResourceToClone   `json:"resources"`
}

// OK validates a create request.
func (r *ReqCreatePkg) OK() error {
	if len(r.Resources) == 0 && len(r.OrgIDs) == 0 {
		return &influxdb.Error{
			Code: influxdb.EUnprocessableEntity,
			Msg:  "at least 1 resource or 1 org id must be provided",
		}
	}

	// RespCreatePkg is a response body for the create pkg endpoint.
	RespCreatePkg []Object
)
	for _, org := range r.OrgIDs {
		if _, err := influxdb.IDFromString(org.OrgID); err != nil {
			return &influxdb.Error{
				Code: influxdb.EInvalid,
				Msg:  fmt.Sprintf("provided org id is invalid: %q", org.OrgID),
			}
		}
	}
	return nil
}

// RespCreatePkg is a response body for the create pkg endpoint.
type RespCreatePkg []Object

func (s *HTTPServer) createPkg(w http.ResponseWriter, r *http.Request) {
	encoding := pkgEncoding(r.Header)
@@ -81,23 +108,19 @@ func (s *HTTPServer) createPkg(w http.ResponseWriter, r *http.Request) {
	}
	defer r.Body.Close()

	if len(reqBody.Resources) == 0 && len(reqBody.OrgIDs) == 0 {
		s.api.Err(w, &influxdb.Error{
			Code: influxdb.EUnprocessableEntity,
			Msg:  "at least 1 resource or 1 org id must be provided",
		})
		return
	}

	opts := []CreatePkgSetFn{
		CreateWithExistingResources(reqBody.Resources...),
	}
	for _, orgIDStr := range reqBody.OrgIDs {
		orgID, err := influxdb.IDFromString(orgIDStr)
		orgID, err := influxdb.IDFromString(orgIDStr.OrgID)
		if err != nil {
			continue
		}
		opts = append(opts, CreateWithAllOrgResources(*orgID))
		opts = append(opts, CreateWithAllOrgResources(CreateByOrgIDOpt{
			OrgID:         *orgID,
			LabelNames:    orgIDStr.Filters.ByLabel,
			ResourceKinds: orgIDStr.Filters.ByResourceKind,
		}))
	}

	newPkg, err := s.svc.CreatePkg(r.Context(), opts...)
@@ -152,19 +175,45 @@ func (p PkgRemote) Encoding() Encoding {
type ReqApplyPkg struct {
	DryRun  bool              `json:"dryRun" yaml:"dryRun"`
	OrgID   string            `json:"orgID" yaml:"orgID"`
	Remote  PkgRemote         `json:"remote" yaml:"remote"`
	Remotes []PkgRemote       `json:"remotes" yaml:"remotes"`
	RawPkgs []json.RawMessage `json:"packages" yaml:"packages"`
	RawPkg  json.RawMessage   `json:"package" yaml:"package"`
	EnvRefs map[string]string `json:"envRefs"`
	Secrets map[string]string `json:"secrets"`
}

// Pkg returns a pkg parsed and validated from the RawPkg field.
func (r ReqApplyPkg) Pkg(encoding Encoding) (*Pkg, error) {
	if r.Remote.URL != "" {
		return Parse(r.Remote.Encoding(), FromHTTPRequest(r.Remote.URL))
// Pkgs returns all pkgs associated with the request.
func (r ReqApplyPkg) Pkgs(encoding Encoding) (*Pkg, error) {
	var rawPkgs []*Pkg
	for _, rem := range r.Remotes {
		if rem.URL == "" {
			continue
		}
		pkg, err := Parse(rem.Encoding(), FromHTTPRequest(rem.URL), ValidSkipParseError())
		if err != nil {
			return nil, &influxdb.Error{
				Code: influxdb.EUnprocessableEntity,
				Msg:  fmt.Sprintf("pkg from url[%s] had an issue: %s", rem.URL, err.Error()),
			}
		}
		rawPkgs = append(rawPkgs, pkg)
	}

	return Parse(encoding, FromReader(bytes.NewReader(r.RawPkg)))
	for i, rawPkg := range append(r.RawPkgs, r.RawPkg) {
		if rawPkg == nil {
			continue
		}
		pkg, err := Parse(encoding, FromReader(bytes.NewReader(rawPkg)), ValidSkipParseError())
		if err != nil {
			return nil, &influxdb.Error{
				Code: influxdb.EUnprocessableEntity,
				Msg:  fmt.Sprintf("pkg [%d] had an issue: %s", i, err.Error()),
			}
		}
		rawPkgs = append(rawPkgs, pkg)
	}

	return Combine(rawPkgs...)
}

// RespApplyPkg is the response body for the apply pkg endpoint.
@@ -199,10 +248,10 @@ func (s *HTTPServer) applyPkg(w http.ResponseWriter, r *http.Request) {
	}
	userID := auth.GetUserID()

	parsedPkg, err := reqBody.Pkg(encoding)
	parsedPkg, err := reqBody.Pkgs(encoding)
	if err != nil {
		s.api.Err(w, &influxdb.Error{
			Code: influxdb.EInvalid,
			Code: influxdb.EUnprocessableEntity,
			Err:  err,
		})
		return

@@ -302,7 +351,7 @@ func convertParseErr(err error) []ValidationErr {

func newDecodeErr(encoding string, err error) *influxdb.Error {
	return &influxdb.Error{
		Msg:  fmt.Sprintf("unable to unmarshal %s; Err: %v", encoding, err),
		Msg:  fmt.Sprintf("unable to unmarshal %s", encoding),
		Code: influxdb.EInvalid,
		Err:  err,
	}
@@ -103,9 +103,18 @@ func TestPkgerHTTPServer(t *testing.T) {
			reqBody: pkger.ReqApplyPkg{
				DryRun: true,
				OrgID:  influxdb.ID(9000).String(),
				Remote: pkger.PkgRemote{
				Remotes: []pkger.PkgRemote{{
					URL: "https://gist.githubusercontent.com/jsteenb2/3a3b2b5fcbd6179b2494c2b54aa2feb0/raw/989d361db7a851a3c388eaed0b59dce7fca7fdf3/bucket_pkg.json",
				},
				}},
			},
		},
		{
			name:        "app jsonnet",
			contentType: "application/x-jsonnet",
			reqBody: pkger.ReqApplyPkg{
				DryRun: true,
				OrgID:  influxdb.ID(9000).String(),
				RawPkg: bucketPkgKinds(t, pkger.EncodingJsonnet),
			},
		},
		{

@@ -213,6 +222,107 @@ func TestPkgerHTTPServer(t *testing.T) {
			t.Run(tt.name, fn)
		}
	})

	t.Run("with multiple pkgs", func(t *testing.T) {
		newBktPkg := func(t *testing.T, bktName string) json.RawMessage {
			t.Helper()

			pkgStr := fmt.Sprintf(`[
{
	"apiVersion": "%[1]s",
	"kind": "Bucket",
	"metadata": {
		"name": %q
	},
	"spec": {}
}
]`, pkger.APIVersion, bktName)

			pkg, err := pkger.Parse(pkger.EncodingJSON, pkger.FromString(pkgStr))
			require.NoError(t, err)

			pkgBytes, err := pkg.Encode(pkger.EncodingJSON)
			require.NoError(t, err)
			return pkgBytes
		}

		tests := []struct {
			name         string
			reqBody      pkger.ReqApplyPkg
			expectedBkts []string
		}{
			{
				name: "retrieves package from a URL and raw pkgs",
				reqBody: pkger.ReqApplyPkg{
					DryRun: true,
					OrgID:  influxdb.ID(9000).String(),
					Remotes: []pkger.PkgRemote{{
						ContentType: "json",
						URL:         "https://gist.githubusercontent.com/jsteenb2/3a3b2b5fcbd6179b2494c2b54aa2feb0/raw/989d361db7a851a3c388eaed0b59dce7fca7fdf3/bucket_pkg.json",
					}},
					RawPkgs: []json.RawMessage{
						newBktPkg(t, "bkt1"),
						newBktPkg(t, "bkt2"),
						newBktPkg(t, "bkt3"),
					},
				},
				expectedBkts: []string{"bkt1", "bkt2", "bkt3", "rucket_11"},
			},
			{
				name: "retrieves packages from raw single and list",
				reqBody: pkger.ReqApplyPkg{
					DryRun: true,
					OrgID:  influxdb.ID(9000).String(),
					RawPkg: newBktPkg(t, "bkt4"),
					RawPkgs: []json.RawMessage{
						newBktPkg(t, "bkt1"),
						newBktPkg(t, "bkt2"),
						newBktPkg(t, "bkt3"),
					},
				},
				expectedBkts: []string{"bkt1", "bkt2", "bkt3", "bkt4"},
			},
		}

		for _, tt := range tests {
			fn := func(t *testing.T) {
				svc := &fakeSVC{
					DryRunFn: func(ctx context.Context, orgID, userID influxdb.ID, pkg *pkger.Pkg, opts ...pkger.ApplyOptFn) (pkger.Summary, pkger.Diff, error) {
						if err := pkg.Validate(); err != nil {
							return pkger.Summary{}, pkger.Diff{}, err
						}
						sum := pkg.Summary()
						var diff pkger.Diff
						for _, b := range sum.Buckets {
							diff.Buckets = append(diff.Buckets, pkger.DiffBucket{
								Name: b.Name,
							})
						}
						return sum, diff, nil
					},
				}

				pkgHandler := pkger.NewHTTPServer(zap.NewNop(), svc)
				svr := newMountedHandler(pkgHandler, 1)

				testttp.
					PostJSON(t, "/api/v2/packages/apply", tt.reqBody).
					Do(svr).
					ExpectStatus(http.StatusOK).
					ExpectBody(func(buf *bytes.Buffer) {
						var resp pkger.RespApplyPkg
						decodeBody(t, buf, &resp)

						require.Len(t, resp.Summary.Buckets, len(tt.expectedBkts))
						for i, expected := range tt.expectedBkts {
							assert.Equal(t, expected, resp.Summary.Buckets[i].Name)
						}
					})
			}

			t.Run(tt.name, fn)
		}
	})
})

t.Run("apply a pkg", func(t *testing.T) {
pkger/models.go
@ -6,6 +6,7 @@ import (
|
|||
"fmt"
|
||||
"net/url"
|
||||
"reflect"
|
||||
"regexp"
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
@ -317,7 +318,7 @@ func (d DiffLabel) IsNew() bool {
|
|||
}
|
||||
|
||||
func (d DiffLabel) hasConflict() bool {
|
||||
return d.IsNew() || d.Old != nil && *d.Old != d.New
|
||||
return !d.IsNew() && d.Old != nil && *d.Old != d.New
|
||||
}
|
||||
|
||||
func newDiffLabel(l *label, i *influxdb.Label) DiffLabel {
|
||||
|
@@ -585,6 +586,7 @@ const (
chartKindScatter chartKind = "scatter"
chartKindSingleStat chartKind = "single_stat"
chartKindSingleStatPlusLine chartKind = "single_stat_plus_line"
chartKindTable chartKind = "table"
chartKindXY chartKind = "xy"
)

@@ -592,7 +594,7 @@ func (c chartKind) ok() bool {
switch c {
case chartKindGauge, chartKindHeatMap, chartKindHistogram,
chartKindMarkdown, chartKindScatter, chartKindSingleStat,
chartKindSingleStatPlusLine, chartKindXY:
chartKindSingleStatPlusLine, chartKindTable, chartKindXY:
return true
default:
return false

@@ -1828,6 +1830,8 @@ func (t *task) Status() influxdb.Status {
return influxdb.Status(t.status)
}

var fluxRegex = regexp.MustCompile(`import\s+\".*\"`)

func (t *task) flux() string {
taskOpts := []string{fmt.Sprintf("name: %q", t.name)}
if t.cron != "" {

@@ -1839,11 +1843,24 @@ func (t *task) flux() string {
if t.offset > 0 {
taskOpts = append(taskOpts, fmt.Sprintf("offset: %s", t.offset))
}

// this is required by the API, super nasty. Will be super challenging for
// anyone outside org to figure out how to do this within an hour of looking
// at the API :sadpanda:. Would be ideal to let the API translate the arguments
// into this required form instead of forcing that complexity on the caller.
return fmt.Sprintf("option task = { %s }\n%s", strings.Join(taskOpts, ", "), t.query)
taskOptStr := fmt.Sprintf("\noption task = { %s }", strings.Join(taskOpts, ", "))

if indices := fluxRegex.FindAllIndex([]byte(t.query), -1); len(indices) > 0 {
lastImportIdx := indices[len(indices)-1][1]
pieces := append([]string{},
t.query[:lastImportIdx],
taskOptStr,
t.query[lastImportIdx:],
)
return fmt.Sprint(strings.Join(pieces, "\n"))
}

return fmt.Sprintf("%s\n%s", taskOptStr, t.query)
}

func (t *task) summarize() SummaryTask {
@@ -2156,8 +2173,11 @@ const (
fieldChartPosition = "position"
fieldChartQueries = "queries"
fieldChartShade = "shade"
fieldChartFieldOptions = "fieldOptions"
fieldChartTableOptions = "tableOptions"
fieldChartTickPrefix = "tickPrefix"
fieldChartTickSuffix = "tickSuffix"
fieldChartTimeFormat = "timeFormat"
fieldChartWidth = "width"
fieldChartXCol = "xCol"
fieldChartXPos = "xPos"

@@ -2188,6 +2208,8 @@ type chart struct {
BinSize int
BinCount int
Position string
FieldOptions []fieldOption
TableOptions tableOptions
TimeFormat string
}

@@ -2303,6 +2325,40 @@ func (c chart) properties() influxdb.ViewProperties {
Axes: c.Axes.influxAxes(),
Position: c.Position,
}
case chartKindTable:
fieldOptions := make([]influxdb.RenamableField, 0, len(c.FieldOptions))
for _, fieldOpt := range c.FieldOptions {
fieldOptions = append(fieldOptions, influxdb.RenamableField{
InternalName: fieldOpt.FieldName,
DisplayName: fieldOpt.DisplayName,
Visible: fieldOpt.Visible,
})
}
sort.Slice(fieldOptions, func(i, j int) bool {
return fieldOptions[i].InternalName < fieldOptions[j].InternalName
})

return influxdb.TableViewProperties{
Type: influxdb.ViewPropertyTypeTable,
Note: c.Note,
ShowNoteWhenEmpty: c.NoteOnEmpty,
DecimalPlaces: influxdb.DecimalPlaces{
IsEnforced: c.EnforceDecimals,
Digits: int32(c.DecimalPlaces),
},
Queries: c.Queries.influxDashQueries(),
ViewColors: c.Colors.influxViewColors(),
TableOptions: influxdb.TableOptions{
VerticalTimeAxis: c.TableOptions.VerticalTimeAxis,
SortBy: influxdb.RenamableField{
InternalName: c.TableOptions.SortByField,
},
Wrapping: c.TableOptions.Wrapping,
FixFirstColumn: c.TableOptions.FixFirstColumn,
},
FieldOptions: fieldOptions,
TimeFormat: c.TimeFormat,
}
case chartKindXY:
return influxdb.XYViewProperties{
Type: influxdb.ViewPropertyTypeXY,
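Note the `sort.Slice` call in the table branch above: field options are sorted by `InternalName` so that an exported pkg serializes identically regardless of the input order. A stand-alone sketch of that normalization (`renamableField` here is a local stand-in for `influxdb.RenamableField`):

```go
package main

import (
	"fmt"
	"sort"
)

// renamableField is a local stand-in for influxdb.RenamableField.
type renamableField struct {
	InternalName string
	DisplayName  string
	Visible      bool
}

// normalizeFieldOptions copies and sorts field options by InternalName,
// mirroring the sort.Slice call in chart.properties(), which keeps the
// exported output stable across runs.
func normalizeFieldOptions(in []renamableField) []renamableField {
	out := append([]renamableField(nil), in...) // copy so the caller's slice is untouched
	sort.Slice(out, func(i, j int) bool {
		return out[i].InternalName < out[j].InternalName
	})
	return out
}

func main() {
	opts := normalizeFieldOptions([]renamableField{
		{InternalName: "_value", DisplayName: "MB", Visible: true},
		{InternalName: "_time", DisplayName: "time (ms)", Visible: true},
	})
	for _, o := range opts {
		fmt.Println(o.InternalName, "->", o.DisplayName)
	}
}
```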
@@ -2344,7 +2400,7 @@ func (c chart) validProperties() []validationErr {
// chart kind specific validations
switch c.Kind {
case chartKindGauge:
fails = append(fails, c.Colors.hasTypes(colorTypeMin, colorTypeThreshold, colorTypeMax)...)
fails = append(fails, c.Colors.hasTypes(colorTypeMin, colorTypeMax)...)
case chartKindHeatMap:
fails = append(fails, c.Axes.hasAxes("x", "y")...)
case chartKindHistogram:

@@ -2355,6 +2411,8 @@ func (c chart) validProperties() []validationErr {
case chartKindSingleStatPlusLine:
fails = append(fails, c.Axes.hasAxes("x", "y")...)
fails = append(fails, validPosition(c.Position)...)
case chartKindTable:
fails = append(fails, validTableOptions(c.TableOptions)...)
case chartKindXY:
fails = append(fails, validGeometry(c.Geom)...)
fails = append(fails, c.Axes.hasAxes("x", "y")...)

@@ -2415,6 +2473,56 @@ func (c chart) validBaseProps() []validationErr {
return fails
}

const (
fieldChartFieldOptionDisplayName = "displayName"
fieldChartFieldOptionFieldName = "fieldName"
fieldChartFieldOptionVisible = "visible"
)

type fieldOption struct {
FieldName string
DisplayName string
Visible bool
}

const (
fieldChartTableOptionVerticalTimeAxis = "verticalTimeAxis"
fieldChartTableOptionSortBy = "sortBy"
fieldChartTableOptionWrapping = "wrapping"
fieldChartTableOptionFixFirstColumn = "fixFirstColumn"
)

type tableOptions struct {
VerticalTimeAxis bool
SortByField string
Wrapping string
FixFirstColumn bool
}

func validTableOptions(opts tableOptions) []validationErr {
var fails []validationErr

switch opts.Wrapping {
case "", "single-line", "truncate", "wrap":
default:
fails = append(fails, validationErr{
Field: fieldChartTableOptionWrapping,
Msg: `chart table option should 1 in ["single-line", "truncate", "wrap"]`,
})
}

if len(fails) == 0 {
return nil
}

return []validationErr{
{
Field: fieldChartTableOptions,
Nested: fails,
},
}
}
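`validTableOptions` above whitelists the wrapping mode via an empty-bodied `switch` case: the empty string is allowed (left to default elsewhere), anything else must be one of the three supported modes. A compact stand-alone version of just that check (the `validWrapping` name is illustrative; the real function wraps failures in nested `validationErr` values):

```go
package main

import "fmt"

// validWrapping mirrors the whitelist in validTableOptions: an empty value
// passes, and otherwise only the three supported wrapping modes do. The
// allowed cases share one empty body, so only the default branch fails.
func validWrapping(w string) bool {
	switch w {
	case "", "single-line", "truncate", "wrap":
		return true
	default:
		return false
	}
}

func main() {
	for _, w := range []string{"", "truncate", "WRONGO wrapping"} {
		fmt.Printf("%q -> %v\n", w, validWrapping(w))
	}
}
```

The "WRONGO wrapping" value is exactly what the "invalid wrapping table option" test case further down feeds the parser.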

const (
colorTypeBackground = "background"
colorTypeMin = "min"
@@ -123,4 +123,320 @@ func TestPkg(t *testing.T) {
assert.Equal(t, label1.Name(), mapping1.LabelName)
})
})

t.Run("Diff", func(t *testing.T) {
t.Run("hasConflict", func(t *testing.T) {
tests := []struct {
name string
resource interface {
hasConflict() bool
}
expected bool
}{
{
name: "new bucket",
resource: DiffBucket{
Name: "new bucket",
New: DiffBucketValues{
Description: "new desc",
},
},
expected: false,
},
{
name: "existing bucket with no changes",
resource: DiffBucket{
ID: 3,
Name: "new bucket",
New: DiffBucketValues{
Description: "new desc",
RetentionRules: retentionRules{{
Type: "expire",
Seconds: 3600,
}},
},
Old: &DiffBucketValues{
Description: "new desc",
RetentionRules: retentionRules{{
Type: "expire",
Seconds: 3600,
}},
},
},
expected: false,
},
{
name: "existing bucket with desc changes",
resource: DiffBucket{
ID: 3,
Name: "existing bucket",
New: DiffBucketValues{
Description: "new desc",
RetentionRules: retentionRules{{
Type: "expire",
Seconds: 3600,
}},
},
Old: &DiffBucketValues{
Description: "newer desc",
RetentionRules: retentionRules{{
Type: "expire",
Seconds: 3600,
}},
},
},
expected: true,
},
{
name: "existing bucket with retention changes",
resource: DiffBucket{
ID: 3,
Name: "existing bucket",
New: DiffBucketValues{
Description: "new desc",
RetentionRules: retentionRules{{
Type: "expire",
Seconds: 3600,
}},
},
Old: &DiffBucketValues{
Description: "new desc",
},
},
expected: true,
},
{
name: "existing bucket with retention changes",
resource: DiffBucket{
ID: 3,
Name: "existing bucket",
New: DiffBucketValues{
Description: "new desc",
RetentionRules: retentionRules{{
Type: "expire",
Seconds: 3600,
}},
},
Old: &DiffBucketValues{
Description: "new desc",
RetentionRules: retentionRules{{
Type: "expire",
Seconds: 360,
}},
},
},
expected: true,
},
{
name: "existing bucket with retention changes",
resource: DiffBucket{
ID: 3,
Name: "existing bucket",
New: DiffBucketValues{
Description: "new desc",
RetentionRules: retentionRules{{
Type: "expire",
Seconds: 3600,
}},
},
Old: &DiffBucketValues{
Description: "new desc",
RetentionRules: retentionRules{
{
Type: "expire",
Seconds: 360,
},
{
Type: "expire",
Seconds: 36000,
},
},
},
},
expected: true,
},
{
name: "new label",
resource: DiffLabel{
Name: "new label",
New: DiffLabelValues{
Color: "new color",
Description: "new desc",
},
},
expected: false,
},
{
name: "existing label with no changes",
resource: DiffLabel{
ID: 1,
Name: "existing label",
New: DiffLabelValues{
Color: "color",
Description: "desc",
},
Old: &DiffLabelValues{
Color: "color",
Description: "desc",
},
},
expected: false,
},
{
name: "existing label with changes",
resource: DiffLabel{
ID: 1,
Name: "existing label",
New: DiffLabelValues{
Color: "color",
Description: "desc",
},
Old: &DiffLabelValues{
Color: "new color",
Description: "new desc",
},
},
expected: true,
},
{
name: "new variable",
resource: DiffVariable{
Name: "new var",
New: DiffVariableValues{
Description: "new desc",
Args: &influxdb.VariableArguments{
Type: "constant",
Values: &influxdb.VariableConstantValues{"1", "b"},
},
},
},
expected: false,
},
{
name: "existing variable no changes",
resource: DiffVariable{
ID: 2,
Name: "new var",
New: DiffVariableValues{
Description: "new desc",
Args: &influxdb.VariableArguments{
Type: "constant",
Values: &influxdb.VariableConstantValues{"1", "b"},
},
},
Old: &DiffVariableValues{
Description: "new desc",
Args: &influxdb.VariableArguments{
Type: "constant",
Values: &influxdb.VariableConstantValues{"1", "b"},
},
},
},
expected: false,
},
{
name: "existing variable with desc changes",
resource: DiffVariable{
ID: 3,
Name: "new var",
New: DiffVariableValues{
Description: "new desc",
Args: &influxdb.VariableArguments{
Type: "constant",
Values: &influxdb.VariableConstantValues{"1", "b"},
},
},
Old: &DiffVariableValues{
Description: "newer desc",
Args: &influxdb.VariableArguments{
Type: "constant",
Values: &influxdb.VariableConstantValues{"1", "b"},
},
},
},
expected: true,
},
{
name: "existing variable with constant arg changes",
resource: DiffVariable{
ID: 3,
Name: "new var",
New: DiffVariableValues{
Description: "new desc",
Args: &influxdb.VariableArguments{
Type: "constant",
Values: &influxdb.VariableConstantValues{"1", "b"},
},
},
Old: &DiffVariableValues{
Description: "new desc",
Args: &influxdb.VariableArguments{
Type: "constant",
Values: &influxdb.VariableConstantValues{"1", "b", "new"},
},
},
},
expected: true,
},
{
name: "existing variable with map arg changes",
resource: DiffVariable{
ID: 3,
Name: "new var",
New: DiffVariableValues{
Description: "new desc",
Args: &influxdb.VariableArguments{
Type: "map",
Values: &influxdb.VariableMapValues{"1": "b"},
},
},
Old: &DiffVariableValues{
Description: "new desc",
Args: &influxdb.VariableArguments{
Type: "map",
Values: &influxdb.VariableMapValues{"1": "b", "2": "new"},
},
},
},
expected: true,
},
{
name: "existing variable with query arg changes",
resource: DiffVariable{
ID: 3,
Name: "new var",
New: DiffVariableValues{
Description: "new desc",
Args: &influxdb.VariableArguments{
Type: "query",
Values: &influxdb.VariableQueryValues{
Query: "from(bucket: rucket)",
Language: "flux",
},
},
},
Old: &DiffVariableValues{
Description: "new desc",
Args: &influxdb.VariableArguments{
Type: "query",
Values: &influxdb.VariableQueryValues{
Query: "from(bucket: rucket) |> yield(name: threeve)",
Language: "flux",
},
},
},
},
expected: true,
},
}

for _, tt := range tests {
fn := func(t *testing.T) {
assert.Equal(t, tt.expected, tt.resource.hasConflict())
}

t.Run(tt.name, fn)
}
})
})
}
@@ -1165,23 +1165,24 @@ func parseChart(r Resource) (chart, []validationErr) {
c := chart{
Kind: ck,
Name: r.Name(),
Prefix: r.stringShort(fieldPrefix),
TickPrefix: r.stringShort(fieldChartTickPrefix),
Suffix: r.stringShort(fieldSuffix),
TickSuffix: r.stringShort(fieldChartTickSuffix),
BinSize: r.intShort(fieldChartBinSize),
BinCount: r.intShort(fieldChartBinCount),
Geom: r.stringShort(fieldChartGeom),
Height: r.intShort(fieldChartHeight),
Note: r.stringShort(fieldChartNote),
NoteOnEmpty: r.boolShort(fieldChartNoteOnEmpty),
Position: r.stringShort(fieldChartPosition),
Prefix: r.stringShort(fieldPrefix),
Shade: r.boolShort(fieldChartShade),
Suffix: r.stringShort(fieldSuffix),
TickPrefix: r.stringShort(fieldChartTickPrefix),
TickSuffix: r.stringShort(fieldChartTickSuffix),
TimeFormat: r.stringShort(fieldChartTimeFormat),
Width: r.intShort(fieldChartWidth),
XCol: r.stringShort(fieldChartXCol),
YCol: r.stringShort(fieldChartYCol),
XPos: r.intShort(fieldChartXPos),
YPos: r.intShort(fieldChartYPos),
Height: r.intShort(fieldChartHeight),
Width: r.intShort(fieldChartWidth),
Geom: r.stringShort(fieldChartGeom),
BinSize: r.intShort(fieldChartBinSize),
BinCount: r.intShort(fieldChartBinCount),
Position: r.stringShort(fieldChartPosition),
}

if presLeg, ok := r[fieldChartLegend].(legend); ok {

@@ -1253,6 +1254,23 @@ func parseChart(r Resource) (chart, []validationErr) {
}
}

if tableOptsRes, ok := ifaceToResource(r[fieldChartTableOptions]); ok {
c.TableOptions = tableOptions{
VerticalTimeAxis: tableOptsRes.boolShort(fieldChartTableOptionVerticalTimeAxis),
SortByField: tableOptsRes.stringShort(fieldChartTableOptionSortBy),
Wrapping: tableOptsRes.stringShort(fieldChartTableOptionWrapping),
FixFirstColumn: tableOptsRes.boolShort(fieldChartTableOptionFixFirstColumn),
}
}

for _, fieldOptRes := range r.slcResource(fieldChartFieldOptions) {
c.FieldOptions = append(c.FieldOptions, fieldOption{
FieldName: fieldOptRes.stringShort(fieldChartFieldOptionFieldName),
DisplayName: fieldOptRes.stringShort(fieldChartFieldOptionDisplayName),
Visible: fieldOptRes.boolShort(fieldChartFieldOptionVisible),
})
}

if failures = append(failures, c.validProperties()...); len(failures) > 0 {
return chart{}, failures
}
@@ -723,7 +723,7 @@ spec:
})

t.Run("pkg with single dashboard and single chart", func(t *testing.T) {
t.Run("single gauge chart", func(t *testing.T) {
t.Run("gauge chart", func(t *testing.T) {
t.Run("happy path", func(t *testing.T) {
testfileRunner(t, "testdata/dashboard_gauge", func(t *testing.T, pkg *Pkg) {
sum := pkg.Summary()

@@ -763,39 +763,6 @@ spec:

t.Run("handles invalid config", func(t *testing.T) {
tests := []testPkgResourceError{
{
name: "missing threshold color type",
validationErrs: 1,
valFields: []string{"charts[0].colors"},
pkgStr: `apiVersion: influxdata.com/v2alpha1
kind: Dashboard
metadata:
name: dash_1
spec:
description: desc1
charts:
- kind: gauge
name: gauge
note: gauge note
noteOnEmpty: true
xPos: 1
yPos: 2
width: 6
height: 3
queries:
- query: >
from(bucket: v.bucket) |> range(start: v.timeRangeStart, stop: v.timeRangeStop) |> filter(fn: (r) => r._measurement == "boltdb_writes_total") |> filter(fn: (r) => r._field == "counter")
colors:
- name: laser
type: min
hex: "#8F8AF4"
value: 0
- name: laser
type: max
hex: "#8F8AF4"
value: 5000
`,
},
{
name: "color mixing a hex value",
validationErrs: 1,

@@ -876,7 +843,7 @@ spec:
})
})

t.Run("single heatmap chart", func(t *testing.T) {
t.Run("heatmap chart", func(t *testing.T) {
t.Run("happy path", func(t *testing.T) {
testfileRunner(t, "testdata/dashboard_heatmap", func(t *testing.T, pkg *Pkg) {
sum := pkg.Summary()

@@ -1034,7 +1001,7 @@ spec:
})
})

t.Run("single histogram chart", func(t *testing.T) {
t.Run("histogram chart", func(t *testing.T) {
t.Run("happy path", func(t *testing.T) {
testfileRunner(t, "testdata/dashboard_histogram", func(t *testing.T, pkg *Pkg) {
sum := pkg.Summary()

@@ -1140,7 +1107,7 @@ spec:
})
})

t.Run("single markdown chart", func(t *testing.T) {
t.Run("markdown chart", func(t *testing.T) {
t.Run("happy path", func(t *testing.T) {
testfileRunner(t, "testdata/dashboard_markdown", func(t *testing.T, pkg *Pkg) {
sum := pkg.Summary()

@@ -1161,7 +1128,7 @@ spec:
})
})

t.Run("single scatter chart", func(t *testing.T) {
t.Run("scatter chart", func(t *testing.T) {
t.Run("happy path", func(t *testing.T) {
testfileRunner(t, "testdata/dashboard_scatter", func(t *testing.T, pkg *Pkg) {
sum := pkg.Summary()

@@ -2059,7 +2026,237 @@ spec:
})
})

t.Run("single xy chart", func(t *testing.T) {
t.Run("table chart", func(t *testing.T) {
t.Run("happy path", func(t *testing.T) {
testfileRunner(t, "testdata/dashboard_table", func(t *testing.T, pkg *Pkg) {
sum := pkg.Summary()
require.Len(t, sum.Dashboards, 1)

actual := sum.Dashboards[0]
assert.Equal(t, "dash_1", actual.Name)
assert.Equal(t, "desc1", actual.Description)

require.Len(t, actual.Charts, 1)
actualChart := actual.Charts[0]
assert.Equal(t, 3, actualChart.Height)
assert.Equal(t, 6, actualChart.Width)
assert.Equal(t, 1, actualChart.XPosition)
assert.Equal(t, 2, actualChart.YPosition)

props, ok := actualChart.Properties.(influxdb.TableViewProperties)
require.True(t, ok)
assert.Equal(t, "table note", props.Note)
assert.True(t, props.ShowNoteWhenEmpty)
assert.True(t, props.DecimalPlaces.IsEnforced)
assert.Equal(t, int32(1), props.DecimalPlaces.Digits)
assert.Equal(t, "YYYY:MMMM:DD", props.TimeFormat)

require.Len(t, props.Queries, 1)
q := props.Queries[0]
expectedQuery := `from(bucket: v.bucket) |> range(start: v.timeRangeStart, stop: v.timeRangeStop) |> filter(fn: (r) => r._measurement == "boltdb_writes_total") |> filter(fn: (r) => r._field == "counter")`
assert.Equal(t, expectedQuery, q.Text)
assert.Equal(t, "advanced", q.EditMode)

require.Len(t, props.ViewColors, 1)
c := props.ViewColors[0]
assert.Equal(t, "laser", c.Name)
assert.Equal(t, "min", c.Type)
assert.Equal(t, "#8F8AF4", c.Hex)
assert.Equal(t, 3.0, c.Value)

tableOpts := props.TableOptions
assert.True(t, tableOpts.VerticalTimeAxis)
assert.Equal(t, "_time", tableOpts.SortBy.InternalName)
assert.Equal(t, "truncate", tableOpts.Wrapping)
assert.True(t, tableOpts.FixFirstColumn)

require.Len(t, props.FieldOptions, 2)
expectedField := influxdb.RenamableField{
InternalName: "_time",
DisplayName: "time (ms)",
Visible: true,
}
assert.Equal(t, expectedField, props.FieldOptions[0])
expectedField = influxdb.RenamableField{
InternalName: "_value",
DisplayName: "MB",
Visible: true,
}
assert.Equal(t, expectedField, props.FieldOptions[1])
})
})

t.Run("handles invalid config", func(t *testing.T) {
tests := []testPkgResourceError{
{
name: "color missing hex value",
validationErrs: 1,
valFields: []string{"charts[0].colors[0].hex"},
pkgStr: `
apiVersion: influxdata.com/v2alpha1
kind: Dashboard
metadata:
name: dash_1
spec:
description: desc1
charts:
- kind: Table
name: table
xPos: 1
yPos: 2
width: 6
height: 3
queries:
- query: >
from(bucket: v.bucket) |> range(start: v.timeRangeStart, stop: v.timeRangeStop) |> filter(fn: (r) => r._measurement == "boltdb_writes_total") |> filter(fn: (r) => r._field == "counter")
colors:
- name: laser
type: min
hex:
value: 3.0`,
},
{
name: "missing query value",
validationErrs: 1,
valFields: []string{"charts[0].queries[0].query"},
pkgStr: `
apiVersion: influxdata.com/v2alpha1
kind: Dashboard
metadata:
name: dash_1
spec:
description: desc1
charts:
- kind: Table
name: table
xPos: 1
yPos: 2
width: 6
height: 3
queries:
- query:
colors:
- name: laser
type: min
hex: peru
value: 3.0`,
},
{
name: "no queries provided",
validationErrs: 1,
valFields: []string{"charts[0].queries"},
pkgStr: `
apiVersion: influxdata.com/v2alpha1
kind: Dashboard
metadata:
name: dash_1
spec:
description: desc1
charts:
- kind: Table
name: table
xPos: 1
yPos: 2
width: 6
height: 3
colors:
- name: laser
type: min
hex: peru
value: 3.0`,
},
{
name: "no width provided",
validationErrs: 1,
valFields: []string{"charts[0].width"},
pkgStr: `
apiVersion: influxdata.com/v2alpha1
kind: Dashboard
metadata:
name: dash_1
spec:
description: desc1
charts:
- kind: Table
name: table
xPos: 1
yPos: 2
height: 3
queries:
- query: >
from(bucket: v.bucket) |> range(start: v.timeRangeStart, stop: v.timeRangeStop) |> filter(fn: (r) => r._measurement == "boltdb_writes_total") |> filter(fn: (r) => r._field == "counter")
colors:
- name: laser
type: min
hex: peru
value: 3.0`,
},
{
name: "no height provided",
validationErrs: 1,
valFields: []string{"charts[0].height"},
pkgStr: `
apiVersion: influxdata.com/v2alpha1
kind: Dashboard
metadata:
name: dash_1
spec:
description: desc1
charts:
- kind: Table
name: table
xPos: 1
yPos: 2
width: 6
queries:
- query: >
from(bucket: v.bucket) |> range(start: v.timeRangeStart, stop: v.timeRangeStop) |> filter(fn: (r) => r._measurement == "boltdb_writes_total") |> filter(fn: (r) => r._field == "counter")
colors:
- name: laser
type: min
hex: peru
value: 3.0`,
},
{
name: "invalid wrapping table option",
validationErrs: 1,
valFields: []string{"charts[0].tableOptions.wrapping"},
pkgStr: `
apiVersion: influxdata.com/v2alpha1
kind: Dashboard
metadata:
name: dash_1
spec:
description: desc1
charts:
- kind: Table
name: table
xPos: 1
yPos: 2
width: 6
height: 3
tableOptions:
sortBy: _time
wrapping: WRONGO wrapping
queries:
- query: >
from(bucket: v.bucket) |> range(start: v.timeRangeStart, stop: v.timeRangeStop) |> filter(fn: (r) => r._measurement == "boltdb_writes_total") |> filter(fn: (r) => r._field == "counter")
colors:
- name: laser
type: min
hex: "#8F8AF4"
value: 3.0
`,
},
}

for _, tt := range tests {
testPkgErrors(t, KindDashboard, tt)
}
})
})

t.Run("xy chart", func(t *testing.T) {
t.Run("happy path", func(t *testing.T) {
testfileRunner(t, "testdata/dashboard_xy", func(t *testing.T, pkg *Pkg) {
sum := pkg.Summary()

@@ -3709,6 +3906,12 @@ func testPkgErrors(t *testing.T, k Kind, tt testPkgResourceError) {
pErr := err.(*parseErr)
require.Len(t, pErr.Resources, resErrs)

defer func() {
if t.Failed() {
t.Logf("recieved unexpected err: %s", pErr)
}
}()

resErr := pErr.Resources[0]
assert.Equal(t, k.String(), resErr.Kind)

@@ -3773,7 +3976,7 @@ func nextField(t *testing.T, field string) (string, int) {
}
parts := strings.Split(fields[0], "[")
if len(parts) == 1 {
return "", 0
return parts[0], -1
}
fieldName := parts[0]
183 pkger/service.go
@ -171,14 +171,24 @@ func NewService(opts ...ServiceSetterFn) *Service {
|
|||
}
|
||||
}
|
||||
|
||||
// CreatePkgSetFn is a functional input for setting the pkg fields.
|
||||
type CreatePkgSetFn func(opt *CreateOpt) error
|
||||
type (
|
||||
// CreatePkgSetFn is a functional input for setting the pkg fields.
|
||||
CreatePkgSetFn func(opt *CreateOpt) error
|
||||
|
||||
// CreateOpt are the options for creating a new package.
|
||||
type CreateOpt struct {
|
||||
OrgIDs map[influxdb.ID]bool
|
||||
Resources []ResourceToClone
|
||||
}
|
||||
// CreateOpt are the options for creating a new package.
|
||||
CreateOpt struct {
|
||||
OrgIDs []CreateByOrgIDOpt
|
||||
Resources []ResourceToClone
|
||||
}
|
||||
|
||||
// CreateByOrgIDOpt identifies an org to export resources for and provides
|
||||
// multiple filtering options.
|
||||
CreateByOrgIDOpt struct {
|
||||
OrgID influxdb.ID
|
||||
LabelNames []string
|
||||
ResourceKinds []Kind
|
||||
}
|
||||
)
|
||||
|
||||
// CreateWithExistingResources allows the create method to clone existing resources.
|
||||
func CreateWithExistingResources(resources ...ResourceToClone) CreatePkgSetFn {
|
||||
|
@ -195,15 +205,17 @@ func CreateWithExistingResources(resources ...ResourceToClone) CreatePkgSetFn {
|
|||
|
||||
// CreateWithAllOrgResources allows the create method to clone all existing resources
|
||||
// for the given organization.
|
||||
func CreateWithAllOrgResources(orgID influxdb.ID) CreatePkgSetFn {
|
||||
func CreateWithAllOrgResources(orgIDOpt CreateByOrgIDOpt) CreatePkgSetFn {
|
||||
return func(opt *CreateOpt) error {
|
||||
if orgID == 0 {
|
||||
if orgIDOpt.OrgID == 0 {
|
||||
return errors.New("orgID provided must not be zero")
|
||||
}
|
||||
if opt.OrgIDs == nil {
|
||||
opt.OrgIDs = make(map[influxdb.ID]bool)
|
||||
for _, k := range orgIDOpt.ResourceKinds {
|
||||
if err := k.OK(); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
opt.OrgIDs[orgID] = true
|
||||
opt.OrgIDs = append(opt.OrgIDs, orgIDOpt)
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
@ -221,19 +233,19 @@ func (s *Service) CreatePkg(ctx context.Context, setters ...CreatePkgSetFn) (*Pk
|
|||
Objects: make([]Object, 0, len(opt.Resources)),
|
||||
}
|
||||
|
||||
cloneAssFn := s.resourceCloneAssociationsGen()
|
||||
for orgID := range opt.OrgIDs {
|
||||
resourcesToClone, err := s.cloneOrgResources(ctx, orgID)
|
||||
for _, orgIDOpt := range opt.OrgIDs {
|
||||
resourcesToClone, err := s.cloneOrgResources(ctx, orgIDOpt.OrgID, orgIDOpt.ResourceKinds)
|
||||
if err != nil {
|
||||
return nil, internalErr(err)
|
||||
}
|
||||
opt.Resources = append(opt.Resources, resourcesToClone...)
|
||||
}
|
||||
|
||||
cloneAssFn := s.resourceCloneAssociationsGen()
|
||||
for _, r := range uniqResourcesToClone(opt.Resources) {
|
||||
newKinds, err := s.resourceCloneToKind(ctx, r, cloneAssFn)
|
||||
if err != nil {
|
||||
return nil, internalErr(err)
|
||||
return nil, internalErr(fmt.Errorf("failed to clone resource: resource_id=%s resource_kind=%s err=%q", r.ID, r.Kind, err))
|
||||
}
|
||||
pkg.Objects = append(pkg.Objects, newKinds...)
|
||||
}
|
||||
|
@ -271,51 +283,9 @@ func (s *Service) CreatePkg(ctx context.Context, setters ...CreatePkgSetFn) (*Pk
|
|||
return pkg, nil
|
||||
}
|
||||
|
||||
func (s *Service) cloneOrgResources(ctx context.Context, orgID influxdb.ID) ([]ResourceToClone, error) {
|
||||
resourceTypeGens := []struct {
|
||||
resType influxdb.ResourceType
|
||||
cloneFn func(context.Context, influxdb.ID) ([]ResourceToClone, error)
|
||||
}{
|
||||
{
|
||||
resType: KindBucket.ResourceType(),
|
||||
cloneFn: s.cloneOrgBuckets,
|
||||
},
|
||||
{
|
||||
resType: KindCheck.ResourceType(),
|
||||
cloneFn: s.cloneOrgChecks,
|
||||
},
|
||||
{
|
||||
resType: KindDashboard.ResourceType(),
|
||||
cloneFn: s.cloneOrgDashboards,
|
||||
},
|
||||
{
|
||||
resType: KindLabel.ResourceType(),
|
||||
cloneFn: s.cloneOrgLabels,
|
||||
},
|
||||
{
|
||||
resType: KindNotificationEndpoint.ResourceType(),
|
||||
cloneFn: s.cloneOrgNotificationEndpoints,
|
||||
},
|
||||
{
|
||||
resType: KindNotificationRule.ResourceType(),
|
||||
cloneFn: s.cloneOrgNotificationRules,
|
||||
},
|
||||
{
|
||||
resType: KindTask.ResourceType(),
|
||||
cloneFn: s.cloneOrgTasks,
|
||||
},
|
||||
{
|
||||
resType: KindTelegraf.ResourceType(),
|
||||
cloneFn: s.cloneOrgTelegrafs,
|
||||
},
|
||||
{
|
||||
resType: KindVariable.ResourceType(),
|
||||
cloneFn: s.cloneOrgVariables,
|
||||
},
|
||||
}
|
||||
|
||||
func (s *Service) cloneOrgResources(ctx context.Context, orgID influxdb.ID, resourceKinds []Kind) ([]ResourceToClone, error) {
|
||||
var resources []ResourceToClone
|
||||
for _, resGen := range resourceTypeGens {
|
||||
for _, resGen := range s.filterOrgResourceKinds(resourceKinds) {
|
||||
existingResources, err := resGen.cloneFn(ctx, orgID)
|
||||
if err != nil {
|
||||
return nil, ierrors.Wrap(err, "finding "+string(resGen.resType))
|
||||
|
@ -438,12 +408,12 @@ func (s *Service) cloneOrgNotificationRules(ctx context.Context, orgID influxdb.
|
|||
}
|
||||
|
||||
func (s *Service) cloneOrgTasks(ctx context.Context, orgID influxdb.ID) ([]ResourceToClone, error) {
|
||||
teles, _, err := s.taskSVC.FindTasks(ctx, influxdb.TaskFilter{OrganizationID: &orgID})
|
||||
tasks, _, err := s.taskSVC.FindTasks(ctx, influxdb.TaskFilter{OrganizationID: &orgID})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if len(teles) == 0 {
|
||||
if len(tasks) == 0 {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
|
@ -454,20 +424,30 @@ func (s *Service) cloneOrgTasks(ctx context.Context, orgID influxdb.ID) ([]Resou
		return nil, err
	}

	mTeles := make(map[influxdb.ID]*influxdb.Task)
	for i := range teles {
		t := teles[i]
		if t.Type == influxdb.TaskSystemType {
			continue
		}
		mTeles[t.ID] = t
	}
	for _, c := range checks {
		delete(mTeles, c.GetTaskID())
	rules, _, err := s.ruleSVC.FindNotificationRules(ctx, influxdb.NotificationRuleFilter{
		OrgID: &orgID,
	})
	if err != nil {
		return nil, err
	}

	resources := make([]ResourceToClone, 0, len(mTeles))
	for _, t := range mTeles {
	mTasks := make(map[influxdb.ID]*influxdb.Task)
	for i := range tasks {
		t := tasks[i]
		if t.Type != influxdb.TaskSystemType {
			continue
		}
		mTasks[t.ID] = t
	}
	for _, c := range checks {
		delete(mTasks, c.GetTaskID())
	}
	for _, r := range rules {
		delete(mTasks, r.GetTaskID())
	}

	resources := make([]ResourceToClone, 0, len(mTasks))
	for _, t := range mTasks {
		resources = append(resources, ResourceToClone{
			Kind: KindTask,
			ID:   t.ID,

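The hunk above changes cloneOrgTasks to keep only system tasks and to drop any task already owned by a check or a notification rule. A minimal standalone sketch of that build-a-map-then-delete pattern (the `task` struct and the IDs here are hypothetical stand-ins, not the real influxdb types):

```go
package main

import (
	"fmt"
	"sort"
)

// task is a hypothetical stand-in for influxdb.Task; System mirrors
// Type == influxdb.TaskSystemType in the diff above.
type task struct {
	ID     int
	System bool
}

// standaloneTasks keeps system tasks that are not backed by another
// resource, mirroring the map-and-delete pattern in the diff.
func standaloneTasks(tasks []task, ownedTaskIDs []int) []int {
	m := make(map[int]bool)
	for _, t := range tasks {
		if !t.System {
			continue // non-system tasks are skipped
		}
		m[t.ID] = true
	}
	for _, id := range ownedTaskIDs {
		delete(m, id) // owned by a check or notification rule
	}
	out := make([]int, 0, len(m))
	for id := range m {
		out = append(out, id)
	}
	sort.Ints(out)
	return out
}

func main() {
	tasks := []task{{31, true}, {42, true}, {77, true}, {99, false}}
	fmt.Println(standaloneTasks(tasks, []int{42, 77})) // [31]
}
```

Deleting from a map keyed by task ID makes the exclusion O(1) per owned ID, which is why the diff gathers tasks into `mTasks` before filtering.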
@ -611,6 +591,61 @@ func (s *Service) exportNotificationRule(ctx context.Context, r ResourceToClone)
	return ruleToObject(rule, ruleEndpoint.GetName(), r.Name), endpointKind(ruleEndpoint, ""), nil
}

type cloneResFn func(context.Context, influxdb.ID) ([]ResourceToClone, error)

func (s *Service) filterOrgResourceKinds(resourceKindFilters []Kind) []struct {
	resType influxdb.ResourceType
	cloneFn cloneResFn
} {
	mKinds := map[Kind]cloneResFn{
		KindBucket:               s.cloneOrgBuckets,
		KindCheck:                s.cloneOrgChecks,
		KindDashboard:            s.cloneOrgDashboards,
		KindLabel:                s.cloneOrgLabels,
		KindNotificationEndpoint: s.cloneOrgNotificationEndpoints,
		KindNotificationRule:     s.cloneOrgNotificationRules,
		KindTask:                 s.cloneOrgTasks,
		KindTelegraf:             s.cloneOrgTelegrafs,
		KindVariable:             s.cloneOrgVariables,
	}

	newResGen := func(resType influxdb.ResourceType, cloneFn cloneResFn) struct {
		resType influxdb.ResourceType
		cloneFn cloneResFn
	} {
		return struct {
			resType influxdb.ResourceType
			cloneFn cloneResFn
		}{
			resType: resType,
			cloneFn: cloneFn,
		}
	}

	var resourceTypeGens []struct {
		resType influxdb.ResourceType
		cloneFn cloneResFn
	}
	if len(resourceKindFilters) == 0 {
		for k, cloneFn := range mKinds {
			resourceTypeGens = append(resourceTypeGens, newResGen(k.ResourceType(), cloneFn))
		}
		return resourceTypeGens
	}

	seenKinds := make(map[Kind]bool)
	for _, k := range resourceKindFilters {
		cloneFn, ok := mKinds[k]
		if !ok || seenKinds[k] {
			continue
		}
		seenKinds[k] = true
		resourceTypeGens = append(resourceTypeGens, newResGen(k.ResourceType(), cloneFn))
	}

	return resourceTypeGens
}

type (
	associations struct {
		associations []Resource

@ -1822,6 +1822,42 @@ func TestService(t *testing.T) {
			},
		},
	},
	{
		name:    "table",
		newName: "new name",
		expectedView: influxdb.View{
			ViewContents: influxdb.ViewContents{
				Name: "view name",
			},
			Properties: influxdb.TableViewProperties{
				Type:              influxdb.ViewPropertyTypeTable,
				Note:              "a note",
				ShowNoteWhenEmpty: true,
				Queries:           []influxdb.DashboardQuery{newQuery()},
				ViewColors:        []influxdb.ViewColor{{Type: "scale", Hex: "#8F8AF4", Value: 0}, {Type: "scale", Hex: "#8F8AF4", Value: 0}, {Type: "scale", Hex: "#8F8AF4", Value: 0}},
				TableOptions: influxdb.TableOptions{
					VerticalTimeAxis: true,
					SortBy: influxdb.RenamableField{
						InternalName: "_time",
					},
					Wrapping:       "truncate",
					FixFirstColumn: true,
				},
				FieldOptions: []influxdb.RenamableField{
					{
						InternalName: "_time",
						DisplayName:  "time (ms)",
						Visible:      true,
					},
				},
				TimeFormat: "YYYY:MM:DD",
				DecimalPlaces: influxdb.DecimalPlaces{
					IsEnforced: true,
					Digits:     1,
				},
			},
		},
	},
}

for _, tt := range tests {

@ -2634,21 +2670,22 @@ func TestService(t *testing.T) {
	}, nil
}

expectedRule := &rule.HTTP{
	Base: rule.Base{
		ID:          12,
		Name:        "rule_0",
		EndpointID:  2,
		Every:       mustDuration(t, time.Minute),
		StatusRules: []notification.StatusRule{{CurrentLevel: notification.Critical}},
	},
}
ruleSVC := mock.NewNotificationRuleStore()
ruleSVC.FindNotificationRulesF = func(ctx context.Context, f influxdb.NotificationRuleFilter, _ ...influxdb.FindOptions) ([]influxdb.NotificationRule, int, error) {
	out := []influxdb.NotificationRule{&rule.HTTP{Base: rule.Base{ID: 91}}}
	out := []influxdb.NotificationRule{expectedRule}
	return out, len(out), nil
}
ruleSVC.FindNotificationRuleByIDF = func(ctx context.Context, id influxdb.ID) (influxdb.NotificationRule, error) {
	return &rule.HTTP{
		Base: rule.Base{
			ID:          id,
			Name:        "rule_0",
			EndpointID:  2,
			Every:       mustDuration(t, time.Minute),
			StatusRules: []notification.StatusRule{{CurrentLevel: notification.Critical}},
		},
	}, nil
	return expectedRule, nil
}

labelSVC := mock.NewLabelService()

@ -2668,9 +2705,10 @@ func TestService(t *testing.T) {
taskSVC := mock.NewTaskService()
taskSVC.FindTasksFn = func(ctx context.Context, f influxdb.TaskFilter) ([]*influxdb.Task, int, error) {
	return []*influxdb.Task{
		{ID: 31},
		{ID: expectedCheck.TaskID}, // this one should be ignored in the return
		{ID: 99, Type: influxdb.TaskSystemType}, // this one should be skipped since it is a system task
		{ID: 31, Type: influxdb.TaskSystemType},
		{ID: expectedCheck.TaskID, Type: influxdb.TaskSystemType}, // this one should be ignored in the return
		{ID: expectedRule.TaskID, Type: influxdb.TaskSystemType}, // this one should be ignored in the return as well
		{ID: 99}, // this one should be skipped since it is not a system task
	}, 3, nil
}
taskSVC.FindTaskByIDFn = func(ctx context.Context, id influxdb.ID) (*influxdb.Task, error) {

@ -2712,7 +2750,12 @@ func TestService(t *testing.T) {
	WithVariableSVC(varSVC),
)

pkg, err := svc.CreatePkg(context.TODO(), CreateWithAllOrgResources(orgID))
pkg, err := svc.CreatePkg(
	context.TODO(),
	CreateWithAllOrgResources(CreateByOrgIDOpt{
		OrgID: orgID,
	}),
)
require.NoError(t, err)

summary := pkg.Summary()

@ -2722,7 +2765,7 @@ func TestService(t *testing.T) {

checks := summary.Checks
require.Len(t, checks, 1)
assert.Equal(t, "check_1", checks[0].Check.GetName())
assert.Equal(t, expectedCheck.Name, checks[0].Check.GetName())

dashs := summary.Dashboards
require.Len(t, dashs, 1)

@ -2738,8 +2781,8 @@ func TestService(t *testing.T) {

rules := summary.NotificationRules
require.Len(t, rules, 1)
assert.Equal(t, "rule_0", rules[0].Name)
assert.Equal(t, "http", rules[0].EndpointName)
assert.Equal(t, expectedRule.Name, rules[0].Name)
assert.Equal(t, expectedRule.Type(), rules[0].EndpointName)

require.Len(t, summary.Tasks, 1)
task1 := summary.Tasks[0]

@ -0,0 +1,57 @@
[
  {
    "apiVersion": "influxdata.com/v2alpha1",
    "kind": "Dashboard",
    "metadata": {
      "name": "dash_1"
    },
    "spec": {
      "description": "desc1",
      "charts": [
        {
          "kind": "table",
          "name": "table",
          "note": "table note",
          "noteOnEmpty": true,
          "xPos": 1,
          "yPos": 2,
          "width": 6,
          "height": 3,
          "decimalPlaces": 1,
          "queries": [
            {
              "query": "from(bucket: v.bucket) |> range(start: v.timeRangeStart, stop: v.timeRangeStop) |> filter(fn: (r) => r._measurement == \"boltdb_writes_total\") |> filter(fn: (r) => r._field == \"counter\")"
            }
          ],
          "colors": [
            {
              "name": "laser",
              "type": "min",
              "hex": "#8F8AF4",
              "value": 3.0
            }
          ],
          "fieldOptions": [
            {
              "fieldName": "_value",
              "displayName": "MB",
              "visible": true
            },
            {
              "fieldName": "_time",
              "displayName": "time (ms)",
              "visible": true
            }
          ],
          "tableOptions": {
            "verticalTimeAxis": true,
            "sortBy": "_time",
            "wrapping": "truncate",
            "fixFirstColumn": true
          },
          "timeFormat": "YYYY:MMMM:DD"
        }
      ]
    }
  }
]

@ -0,0 +1,37 @@
apiVersion: influxdata.com/v2alpha1
kind: Dashboard
metadata:
  name: dash_1
spec:
  description: desc1
  charts:
    - kind: Table
      name: table
      note: table note
      noteOnEmpty: true
      decimalPlaces: 1
      xPos: 1
      yPos: 2
      width: 6
      height: 3
      fieldOptions:
        - fieldName: _time
          displayName: time (ms)
          visible: true
        - fieldName: _value
          displayName: MB
          visible: true
      tableOptions:
        verticalTimeAxis: true
        sortBy: _time
        wrapping: truncate
        fixFirstColumn: true
      timeFormat: YYYY:MMMM:DD
      queries:
        - query: >
            from(bucket: v.bucket) |> range(start: v.timeRangeStart, stop: v.timeRangeStop) |> filter(fn: (r) => r._measurement == "boltdb_writes_total") |> filter(fn: (r) => r._field == "counter")
      colors:
        - name: laser
          type: min
          hex: "#8F8AF4"
          value: 3.0

@ -24,6 +24,7 @@ type Compiler struct {
	Cluster string     `json:"cluster,omitempty"`
	DB      string     `json:"db,omitempty"`
	RP      string     `json:"rp,omitempty"`
	Bucket  string     `json:"bucket,omitempty"`
	Query   string     `json:"query"`
	Now     *time.Time `json:"now,omitempty"`

@ -51,6 +52,7 @@ func (c *Compiler) Compile(ctx context.Context) (flux.Program, error) {
	transpiler := NewTranspilerWithConfig(
		c.dbrpMappingSvc,
		Config{
			Bucket:                 c.Bucket,
			Cluster:                c.Cluster,
			DefaultDatabase:        c.DB,
			DefaultRetentionPolicy: c.RP,

@ -6,10 +6,13 @@ import (

// Config modifies the behavior of the Transpiler.
type Config struct {
	// Bucket is the name of a bucket to use instead of the db/rp from the query.
	// If Bucket is empty then the dbrp mapping is used.
	Bucket                 string
	DefaultDatabase        string
	DefaultRetentionPolicy string
	Now time.Time
	Cluster                string
	Now                    time.Time
	// FallbackToDBRP if true will use the naming convention of `db/rp`
	// for a bucket name when a mapping is not found
	FallbackToDBRP bool

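The Config doc comments above describe a three-step bucket resolution: an explicit Bucket wins, otherwise the dbrp mapping is consulted, and the `db/rp` naming convention is the last resort when FallbackToDBRP is set. A standalone sketch of that precedence (the `lookup` func is a hypothetical stand-in for the dbrp mapping service, not its real interface):

```go
package main

import (
	"errors"
	"fmt"
)

// resolveBucket sketches the Config semantics: explicit Bucket wins; then
// the db/rp mapping; then the "db/rp" naming convention if fallback is on.
func resolveBucket(bucket, db, rp string, fallback bool, lookup func(db, rp string) (string, bool)) (string, error) {
	if bucket != "" {
		return bucket, nil // explicit Bucket overrides the mapping
	}
	if name, ok := lookup(db, rp); ok {
		return name, nil // mapping hit
	}
	if fallback {
		return db + "/" + rp, nil // FallbackToDBRP naming convention
	}
	return "", errors.New("no dbrp mapping for " + db + "/" + rp)
}

func main() {
	noMapping := func(db, rp string) (string, bool) { return "", false }
	name, _ := resolveBucket("", "telegraf", "autogen", true, noMapping)
	fmt.Println(name) // telegraf/autogen
}
```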
@ -50,97 +50,91 @@ func init() {
}

var skipTests = map[string]string{
	"hardcoded_literal_1": "transpiler count query is off by 1 (https://github.com/influxdata/platform/issues/1278)",
	"hardcoded_literal_3": "transpiler count query is off by 1 (https://github.com/influxdata/platform/issues/1278)",
	"fuzz_join_within_cursor": "transpiler does not implement joining fields within a cursor (https://github.com/influxdata/platform/issues/1340)",
	"derivative_count": "add derivative support to the transpiler (https://github.com/influxdata/platform/issues/93)",
	"derivative_first": "add derivative support to the transpiler (https://github.com/influxdata/platform/issues/93)",
	"derivative_last": "add derivative support to the transpiler (https://github.com/influxdata/platform/issues/93)",
	"derivative_max": "add derivative support to the transpiler (https://github.com/influxdata/platform/issues/93)",
	"derivative_mean": "add derivative support to the transpiler (https://github.com/influxdata/platform/issues/93)",
	"derivative_median": "add derivative support to the transpiler (https://github.com/influxdata/platform/issues/93)",
	"derivative_min": "add derivative support to the transpiler (https://github.com/influxdata/platform/issues/93)",
	"derivative_mode": "add derivative support to the transpiler (https://github.com/influxdata/platform/issues/93)",
	"derivative_percentile_10": "add derivative support to the transpiler (https://github.com/influxdata/platform/issues/93)",
	"derivative_percentile_50": "add derivative support to the transpiler (https://github.com/influxdata/platform/issues/93)",
	"derivative_percentile_90": "add derivative support to the transpiler (https://github.com/influxdata/platform/issues/93)",
	"derivative_sum": "add derivative support to the transpiler (https://github.com/influxdata/platform/issues/93)",
	"regex_measurement_0": "Transpiler: regex on measurements not evaluated (https://github.com/influxdata/platform/issues/1592)",
	"regex_measurement_1": "Transpiler: regex on measurements not evaluated (https://github.com/influxdata/platform/issues/1592)",
	"regex_measurement_2": "Transpiler: regex on measurements not evaluated (https://github.com/influxdata/platform/issues/1592)",
	"regex_measurement_3": "Transpiler: regex on measurements not evaluated (https://github.com/influxdata/platform/issues/1592)",
	"regex_measurement_4": "Transpiler: regex on measurements not evaluated (https://github.com/influxdata/platform/issues/1592)",
	"regex_measurement_5": "Transpiler: regex on measurements not evaluated (https://github.com/influxdata/platform/issues/1592)",
	"regex_tag_0": "Transpiler: Returns results in wrong sort order for regex filter on tags (https://github.com/influxdata/platform/issues/1596)",
	"regex_tag_1": "Transpiler: Returns results in wrong sort order for regex filter on tags (https://github.com/influxdata/platform/issues/1596)",
	"regex_tag_2": "Transpiler: Returns results in wrong sort order for regex filter on tags (https://github.com/influxdata/platform/issues/1596)",
	"regex_tag_3": "Transpiler: Returns results in wrong sort order for regex filter on tags (https://github.com/influxdata/platform/issues/1596)",
	"explicit_type_0": "Transpiler should remove _start column (https://github.com/influxdata/platform/issues/1360)",
	"explicit_type_1": "Transpiler should remove _start column (https://github.com/influxdata/platform/issues/1360)",
	"fills_0": "need fill/Interpolate function (https://github.com/influxdata/platform/issues/272)",
	"random_math_0": "transpiler does not implement joining fields within a cursor (https://github.com/influxdata/platform/issues/1340)",
	"selector_0": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"selector_1": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"selector_2": "Transpiler: first function uses different series than influxQL (https://github.com/influxdata/platform/issues/1605)",
	"selector_6": "Transpiler: first function uses different series than influxQL (https://github.com/influxdata/platform/issues/1605)",
	"selector_7": "Transpiler: first function uses different series than influxQL (https://github.com/influxdata/platform/issues/1605)",
	"series_agg_0": "Transpiler: Implement difference (https://github.com/influxdata/platform/issues/1609)",
	"series_agg_1": "Transpiler: Implement stddev (https://github.com/influxdata/platform/issues/1610)",
	"series_agg_2": "Transpiler: Implement spread (https://github.com/influxdata/platform/issues/1611)",
	"series_agg_3": "Transpiler: Implement elapsed (https://github.com/influxdata/platform/issues/1612)",
	"series_agg_4": "Transpiler: Implement cumulative_sum (https://github.com/influxdata/platform/issues/1613)",
	"series_agg_5": "add derivative support to the transpiler (https://github.com/influxdata/platform/issues/93)",
	"series_agg_6": "Transpiler: Implement non_negative_derivative (https://github.com/influxdata/platform/issues/1614)",
	"series_agg_7": "Transpiler should remove _start column (https://github.com/influxdata/platform/issues/1360)",
	"series_agg_8": "Transpiler should remove _start column (https://github.com/influxdata/platform/issues/1360)",
	"series_agg_9": "Transpiler should remove _start column (https://github.com/influxdata/platform/issues/1360)",
	"Subquery_0": "Implement subqueries in the transpiler (https://github.com/influxdata/platform/issues/194)",
	"Subquery_1": "Implement subqueries in the transpiler (https://github.com/influxdata/platform/issues/194)",
	"Subquery_2": "Implement subqueries in the transpiler (https://github.com/influxdata/platform/issues/194)",
	"Subquery_3": "Implement subqueries in the transpiler (https://github.com/influxdata/platform/issues/194)",
	"Subquery_4": "Implement subqueries in the transpiler (https://github.com/influxdata/platform/issues/194)",
	"Subquery_5": "Implement subqueries in the transpiler (https://github.com/influxdata/platform/issues/194)",
	"NestedSubquery_0": "Implement subqueries in the transpiler (https://github.com/influxdata/platform/issues/194)",
	"NestedSubquery_1": "Implement subqueries in the transpiler (https://github.com/influxdata/platform/issues/194)",
	"NestedSubquery_2": "Implement subqueries in the transpiler (https://github.com/influxdata/platform/issues/194)",
	"NestedSubquery_3": "Implement subqueries in the transpiler (https://github.com/influxdata/platform/issues/194)",
	"SimulatedHTTP_0": "Implement subqueries in the transpiler (https://github.com/influxdata/platform/issues/194)",
	"SimulatedHTTP_1": "Implement subqueries in the transpiler (https://github.com/influxdata/platform/issues/194)",
	"SimulatedHTTP_2": "Implement subqueries in the transpiler (https://github.com/influxdata/platform/issues/194)",
	"SimulatedHTTP_3": "Implement subqueries in the transpiler (https://github.com/influxdata/platform/issues/194)",
	"SimulatedHTTP_4": "Implement subqueries in the transpiler (https://github.com/influxdata/platform/issues/194)",
	"SelectorMath_0": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_1": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_2": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_3": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_4": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_5": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_6": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_7": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_8": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_9": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_10": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_11": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_12": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_13": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_14": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_15": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_16": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_17": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_18": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_19": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_20": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_21": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_22": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_23": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_24": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_25": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_26": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_27": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_28": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_29": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_30": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"SelectorMath_31": "Transpiler: unimplemented functions: top and bottom (https://github.com/influxdata/platform/issues/1601)",
	"hardcoded_literal_1": "transpiler count query is off by 1 https://github.com/influxdata/influxdb/issues/10744",
	"hardcoded_literal_3": "transpiler count query is off by 1 https://github.com/influxdata/influxdb/issues/10744",
	"fuzz_join_within_cursor": "transpiler does not implement joining fields within a cursor https://github.com/influxdata/influxdb/issues/10743",
	"derivative_count": "add derivative support to the transpiler https://github.com/influxdata/influxdb/issues/10759",
	"derivative_first": "add derivative support to the transpiler https://github.com/influxdata/influxdb/issues/10759",
	"derivative_last": "add derivative support to the transpiler https://github.com/influxdata/influxdb/issues/10759",
	"derivative_max": "add derivative support to the transpiler https://github.com/influxdata/influxdb/issues/10759",
	"derivative_mean": "add derivative support to the transpiler https://github.com/influxdata/influxdb/issues/10759",
	"derivative_median": "add derivative support to the transpiler https://github.com/influxdata/influxdb/issues/10759",
	"derivative_min": "add derivative support to the transpiler https://github.com/influxdata/influxdb/issues/10759",
	"derivative_mode": "add derivative support to the transpiler https://github.com/influxdata/influxdb/issues/10759",
	"derivative_percentile_10": "add derivative support to the transpiler https://github.com/influxdata/influxdb/issues/10759",
	"derivative_percentile_50": "add derivative support to the transpiler https://github.com/influxdata/influxdb/issues/10759",
	"derivative_percentile_90": "add derivative support to the transpiler https://github.com/influxdata/influxdb/issues/10759",
	"derivative_sum": "add derivative support to the transpiler https://github.com/influxdata/influxdb/issues/10759",
	"regex_measurement_0": "Transpiler: regex on measurements not evaluated https://github.com/influxdata/influxdb/issues/10740",
	"regex_measurement_1": "Transpiler: regex on measurements not evaluated https://github.com/influxdata/influxdb/issues/10740",
	"regex_measurement_2": "Transpiler: regex on measurements not evaluated https://github.com/influxdata/influxdb/issues/10740",
	"regex_measurement_3": "Transpiler: regex on measurements not evaluated https://github.com/influxdata/influxdb/issues/10740",
	"regex_measurement_4": "Transpiler: regex on measurements not evaluated https://github.com/influxdata/influxdb/issues/10740",
	"regex_measurement_5": "Transpiler: regex on measurements not evaluated https://github.com/influxdata/influxdb/issues/10740",
	"regex_tag_0": "Transpiler: Returns results in wrong sort order for regex filter on tags https://github.com/influxdata/influxdb/issues/10739",
	"regex_tag_1": "Transpiler: Returns results in wrong sort order for regex filter on tags https://github.com/influxdata/influxdb/issues/10739",
	"regex_tag_2": "Transpiler: Returns results in wrong sort order for regex filter on tags https://github.com/influxdata/influxdb/issues/10739",
	"regex_tag_3": "Transpiler: Returns results in wrong sort order for regex filter on tags https://github.com/influxdata/influxdb/issues/10739",
	"explicit_type_0": "Transpiler should remove _start column https://github.com/influxdata/influxdb/issues/10742",
	"explicit_type_1": "Transpiler should remove _start column https://github.com/influxdata/influxdb/issues/10742",
	"fills_0": "need fill/Interpolate function https://github.com/influxdata/flux/issues/436",
	"random_math_0": "transpiler does not implement joining fields within a cursor https://github.com/influxdata/influxdb/issues/10743",
	"selector_0": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"selector_1": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"selector_2": "Transpiler: first function uses different series than influxQL https://github.com/influxdata/influxdb/issues/10737",
	"selector_6": "Transpiler: first function uses different series than influxQL https://github.com/influxdata/influxdb/issues/10737",
	"selector_7": "Transpiler: first function uses different series than influxQL https://github.com/influxdata/influxdb/issues/10737",
	"series_agg_3": "Transpiler: Implement elapsed https://github.com/influxdata/influxdb/issues/10733",
	"series_agg_4": "Transpiler: Implement cumulative_sum https://github.com/influxdata/influxdb/issues/10732",
	"series_agg_5": "add derivative support to the transpiler https://github.com/influxdata/influxdb/issues/10759",
	"series_agg_6": "Transpiler: Implement non_negative_derivative https://github.com/influxdata/influxdb/issues/10731",
	"Subquery_0": "Implement subqueries in the transpiler https://github.com/influxdata/influxdb/issues/10660",
	"Subquery_1": "Implement subqueries in the transpiler https://github.com/influxdata/influxdb/issues/10660",
	"Subquery_2": "Implement subqueries in the transpiler https://github.com/influxdata/influxdb/issues/10660",
	"Subquery_3": "Implement subqueries in the transpiler https://github.com/influxdata/influxdb/issues/10660",
	"Subquery_4": "Implement subqueries in the transpiler https://github.com/influxdata/influxdb/issues/10660",
	"Subquery_5": "Implement subqueries in the transpiler https://github.com/influxdata/influxdb/issues/10660",
	"NestedSubquery_0": "Implement subqueries in the transpiler https://github.com/influxdata/influxdb/issues/10660",
	"NestedSubquery_1": "Implement subqueries in the transpiler https://github.com/influxdata/influxdb/issues/10660",
	"NestedSubquery_2": "Implement subqueries in the transpiler https://github.com/influxdata/influxdb/issues/10660",
	"NestedSubquery_3": "Implement subqueries in the transpiler https://github.com/influxdata/influxdb/issues/10660",
	"SimulatedHTTP_0": "Implement subqueries in the transpiler https://github.com/influxdata/influxdb/issues/10660",
	"SimulatedHTTP_1": "Implement subqueries in the transpiler https://github.com/influxdata/influxdb/issues/10660",
	"SimulatedHTTP_2": "Implement subqueries in the transpiler https://github.com/influxdata/influxdb/issues/10660",
	"SimulatedHTTP_3": "Implement subqueries in the transpiler https://github.com/influxdata/influxdb/issues/10660",
	"SimulatedHTTP_4": "Implement subqueries in the transpiler https://github.com/influxdata/influxdb/issues/10660",
	"SelectorMath_0": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_1": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_2": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_3": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_4": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_5": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_6": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_7": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_8": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_9": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_10": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_11": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_12": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_13": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_14": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_15": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_16": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_17": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_18": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_19": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_20": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_21": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_22": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_23": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_24": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_25": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_26": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_27": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_28": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_29": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_30": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"SelectorMath_31": "Transpiler: unimplemented functions: top and bottom https://github.com/influxdata/influxdb/issues/10738",
	"ands": "algo-w: https://github.com/influxdata/influxdb/issues/16811",
	"ors": "algo-w: https://github.com/influxdata/influxdb/issues/16811",
|
||||
}
|
||||
|
|
|
@@ -3,12 +3,24 @@ package influxql

 import (
+	"errors"
 	"fmt"
 	"time"

 	"github.com/influxdata/flux/ast"
 	"github.com/influxdata/flux/execute"
 	"github.com/influxdata/influxql"
 )

+func isTransformation(expr influxql.Expr) bool {
+	if call, ok := expr.(*influxql.Call); ok {
+		switch call.Name {
+		// TODO(ethan): more to be added here.
+		case "difference", "derivative", "cumulative_sum", "elapsed":
+			return true
+		}
+	}
+	return false
+}
+
 // function contains the prototype for invoking a function.
 // TODO(jsternberg): This should do a lot more heavy lifting, but it mostly just
 // pre-validates that we know the function exists. The cursor creation should be
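The new `isTransformation` helper above decides whether a call keeps its own `_time` column during normalization. A minimal standalone sketch of the same predicate, reduced to a plain name check (the `*influxql.Call` unwrapping from the hunk is dropped here for brevity):

```go
package main

import "fmt"

// isTransformation mirrors the predicate added in the hunk above, operating
// on a bare function name instead of an influxql AST node.
func isTransformation(name string) bool {
	switch name {
	// Same set as the diff; the TODO there notes more may be added.
	case "difference", "derivative", "cumulative_sum", "elapsed":
		return true
	}
	return false
}

func main() {
	fmt.Println(isTransformation("derivative")) // true
	fmt.Println(isTransformation("mean"))       // false
}
```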
@@ -46,7 +58,7 @@ func parseFunction(expr *influxql.Call) (*function, error) {
 		default:
 			return nil, fmt.Errorf("expected field argument in %s()", expr.Name)
 		}
-	case "min", "max", "sum", "first", "last", "mean", "median":
+	case "min", "max", "sum", "first", "last", "mean", "median", "difference", "stddev", "spread":
 		if exp, got := 1, len(expr.Args); exp != got {
 			return nil, fmt.Errorf("invalid number of arguments for %s, expected %d, got %d", expr.Name, exp, got)
 		}
@@ -107,7 +119,7 @@ func createFunctionCursor(t *transpilerState, call *influxql.Call, in cursor, no
 		parent: in,
 	}
 	switch call.Name {
-	case "count", "min", "max", "sum", "first", "last", "mean":
+	case "count", "min", "max", "sum", "first", "last", "mean", "difference", "stddev", "spread":
 		value, ok := in.Value(call.Args[0])
 		if !ok {
 			return nil, fmt.Errorf("undefined variable: %s", call.Args[0])
@@ -122,6 +134,49 @@ func createFunctionCursor(t *transpilerState, call *influxql.Call, in cursor, no
 		}
 		cur.value = value
 		cur.exclude = map[influxql.Expr]struct{}{call.Args[0]: {}}
+	case "elapsed":
+		// TODO(ethan): https://github.com/influxdata/influxdb/issues/10733 to enable this.
+		value, ok := in.Value(call.Args[0])
+		if !ok {
+			return nil, fmt.Errorf("undefined variable: %s", call.Args[0])
+		}
+		unit := []ast.Duration{{
+			Magnitude: 1,
+			Unit:      "ns",
+		}}
+		// elapsed has an optional unit parameter, default to 1ns
+		// https://docs.influxdata.com/influxdb/v1.7/query_language/functions/#elapsed
+		if len(call.Args) == 2 {
+			switch arg := call.Args[1].(type) {
+			case *influxql.DurationLiteral:
+				unit = durationLiteral(arg.Val)
+			default:
+				return nil, errors.New("argument unit must be a duration type")
+			}
+		}
+		cur.expr = &ast.PipeExpression{
+			Argument: in.Expr(),
+			Call: &ast.CallExpression{
+				Callee: &ast.Identifier{
+					Name: call.Name,
+				},
+				Arguments: []ast.Expression{
+					&ast.ObjectExpression{
+						Properties: []*ast.Property{
+							{
+								Key: &ast.Identifier{
+									Name: "unit",
+								},
+								Value: &ast.DurationLiteral{
+									Values: unit,
+								},
+							},
+						},
+					},
+				},
+			},
+		}
+		cur.value = value
 	case "median":
 		value, ok := in.Value(call.Args[0])
 		if !ok {
@@ -250,29 +305,70 @@ func createFunctionCursor(t *transpilerState, call *influxql.Call, in cursor, no
 			},
 		}
 	}
+	// err checked in caller
+	interval, _ := t.stmt.GroupByInterval()
+	var timeValue ast.Expression
+	if interval > 0 {
+		timeValue = &ast.MemberExpression{
+			Object: &ast.Identifier{
+				Name: "r",
+			},
+			Property: &ast.Identifier{
+				Name: execute.DefaultStartColLabel,
+			},
+		}
+	} else if isTransformation(call) || influxql.IsSelector(call) {
+		timeValue = &ast.MemberExpression{
+			Object: &ast.Identifier{
+				Name: "r",
+			},
+			Property: &ast.Identifier{
+				Name: execute.DefaultTimeColLabel,
+			},
+		}
+	} else {
+		valuer := influxql.NowValuer{Now: t.config.Now}
+		_, tr, err := influxql.ConditionExpr(t.stmt.Condition, &valuer)
+		if err != nil {
+			return nil, err
+		}
+		if tr.MinTime().UnixNano() == influxql.MinTime {
+			timeValue = &ast.DateTimeLiteral{Value: time.Unix(0, 0).UTC()}
+		} else {
+			timeValue = &ast.MemberExpression{
+				Object: &ast.Identifier{
+					Name: "r",
+				},
+				Property: &ast.Identifier{
+					Name: execute.DefaultStartColLabel,
+				},
+			}
+		}
+	}
 	cur.expr = &ast.PipeExpression{
 		Argument: cur.expr,
 		Call: &ast.CallExpression{
 			Callee: &ast.Identifier{
-				Name: "duplicate",
+				Name: "map",
 			},
 			Arguments: []ast.Expression{
 				&ast.ObjectExpression{
 					Properties: []*ast.Property{
 						{
 							Key: &ast.Identifier{
-								Name: "column",
+								Name: "fn",
 							},
-							Value: &ast.StringLiteral{
-								Value: execute.DefaultStartColLabel,
-							},
-						},
-						{
-							Key: &ast.Identifier{
-								Name: "as",
-							},
-							Value: &ast.StringLiteral{
-								Value: execute.DefaultTimeColLabel,
+							Value: &ast.FunctionExpression{
+								Params: []*ast.Property{{
+									Key: &ast.Identifier{Name: "r"},
+								}},
+								Body: &ast.ObjectExpression{
+									With: &ast.Identifier{Name: "r"},
+									Properties: []*ast.Property{{
+										Key:   &ast.Identifier{Name: execute.DefaultTimeColLabel},
+										Value: timeValue,
+									}},
+								},
+							},
 						},
 					},
 				},
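The `timeValue` branching added above decides what becomes `_time` in the generated `map()` call. A reduced sketch of that decision table, where `boundedRange` is a hypothetical stand-in for the `influxql.ConditionExpr` time-range check in the real code:

```go
package main

import "fmt"

// timeColumn sketches the timeValue selection: a GROUP BY interval promotes
// the window start to _time; selectors and transformations keep the row's
// own _time; otherwise an unbounded query collapses _time to the epoch.
func timeColumn(intervalNs int64, selectorOrTransformation, boundedRange bool) string {
	switch {
	case intervalNs > 0:
		return "_start"
	case selectorOrTransformation:
		return "_time"
	case boundedRange:
		return "_start"
	default:
		return "1970-01-01T00:00:00Z" // emitted as a DateTimeLiteral in the real code
	}
}

func main() {
	fmt.Println(timeColumn(60000000000, false, true)) // aggregate with GROUP BY time(1m)
	fmt.Println(timeColumn(0, true, false))           // bare selector
	fmt.Println(timeColumn(0, false, false))          // unbounded aggregate
}
```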
@@ -6,14 +6,15 @@ import (
 	"time"

 	"github.com/influxdata/flux/ast"
+	"github.com/influxdata/flux/execute"
 	"github.com/influxdata/influxql"
 	"github.com/pkg/errors"
 )

 type groupInfo struct {
-	call     *influxql.Call
-	refs     []*influxql.VarRef
-	selector bool
+	call              *influxql.Call
+	refs              []*influxql.VarRef
+	needNormalization bool
 }

 type groupVisitor struct {
@@ -85,9 +86,9 @@ func identifyGroups(stmt *influxql.SelectStatement) ([]*groupInfo, error) {
 		call = v.calls[0].call
 	}
 	return []*groupInfo{{
-		call:     call,
-		refs:     v.refs,
-		selector: true, // Always a selector if we are here.
+		call:              call,
+		refs:              v.refs,
+		needNormalization: false, // Always a selector if we are here.
 	}}, nil
 }

@@ -98,9 +99,10 @@ func identifyGroups(stmt *influxql.SelectStatement) ([]*groupInfo, error) {
 		groups = append(groups, &groupInfo{call: fn.call})
 	}

-	// If there is exactly one group and that contains a selector, then mark it as so.
-	if len(groups) == 1 && influxql.IsSelector(groups[0].call) {
-		groups[0].selector = true
+	// If there is exactly one group and that contains a selector or a transformation function,
+	// then mark that it does not need normalization.
+	if len(groups) == 1 {
+		groups[0].needNormalization = !isTransformation(groups[0].call) && !influxql.IsSelector(groups[0].call)
 	}
 	return groups, nil
 }
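The rule above can be sketched on its own: a lone group needs time normalization only when its call is neither a transformation nor a selector. Both predicates are stubbed with fixed name sets here (an assumption for the sketch; the real code calls `isTransformation` and `influxql.IsSelector` on the AST node):

```go
package main

import "fmt"

// needNormalization mirrors the assignment in the hunk above for a bare
// function name.
func needNormalization(callName string) bool {
	transformations := map[string]bool{
		"difference": true, "derivative": true, "cumulative_sum": true, "elapsed": true,
	}
	// The standard InfluxQL selector functions.
	selectors := map[string]bool{
		"bottom": true, "first": true, "last": true, "max": true,
		"min": true, "percentile": true, "sample": true, "top": true,
	}
	return !transformations[callName] && !selectors[callName]
}

func main() {
	fmt.Println(needNormalization("mean"))       // aggregate: needs normalization
	fmt.Println(needNormalization("max"))        // selector: keeps its own _time
	fmt.Println(needNormalization("difference")) // transformation: keeps its own _time
}
```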
@@ -198,7 +200,7 @@ func (gr *groupInfo) createCursor(t *transpilerState) (cursor, error) {
 	// Evaluate the conditional and insert a filter if a condition exists.
 	if cond != nil {
 		// Generate a filter expression by evaluating the condition and wrapping it in a filter op.
-		expr, err := t.mapField(cond, cur)
+		expr, err := t.mapField(cond, cur, true)
 		if err != nil {
 			return nil, errors.Wrap(err, "unable to evaluate condition")
 		}
@@ -242,7 +244,7 @@ func (gr *groupInfo) createCursor(t *transpilerState) (cursor, error) {

 	// If a function call is present, evaluate the function call.
 	if gr.call != nil {
-		c, err := createFunctionCursor(t, gr.call, cur, !gr.selector || interval > 0)
+		c, err := createFunctionCursor(t, gr.call, cur, gr.needNormalization || interval > 0)
 		if err != nil {
 			return nil, err
 		}
@@ -294,6 +296,8 @@ func (gr *groupInfo) group(t *transpilerState, in cursor) (cursor, error) {
 	tags := []ast.Expression{
 		&ast.StringLiteral{Value: "_measurement"},
 		&ast.StringLiteral{Value: "_start"},
+		&ast.StringLiteral{Value: "_stop"},
+		&ast.StringLiteral{Value: "_field"},
 	}
 	if len(t.stmt.Dimensions) > 0 {
 		// Maintain a set of the dimensions we have encountered.
@@ -367,7 +371,8 @@ func (gr *groupInfo) group(t *transpilerState, in cursor) (cursor, error) {
 		}
 	}
 	case *influxql.Wildcard:
-		return nil, errors.New("unimplemented: dimension wildcards")
+		// Do not add a group call for wildcard, which means group by everything
+		return in, nil
 	case *influxql.RegexLiteral:
 		return nil, errors.New("unimplemented: dimension regex wildcards")
 	default:
@@ -413,6 +418,32 @@ func (gr *groupInfo) group(t *transpilerState, in cursor) (cursor, error) {
 		cursor: in,
 	}

+	in = &pipeCursor{
+		expr: &ast.PipeExpression{
+			Argument: in.Expr(),
+			Call: &ast.CallExpression{
+				Callee: &ast.Identifier{
+					Name: "keep",
+				},
+				Arguments: []ast.Expression{
+					&ast.ObjectExpression{
+						Properties: []*ast.Property{{
+							Key: &ast.Identifier{
+								Name: "columns",
+							},
+							Value: &ast.ArrayExpression{
+								Elements: append(tags,
+									&ast.StringLiteral{Value: execute.DefaultTimeColLabel},
+									&ast.StringLiteral{Value: execute.DefaultValueColLabel}),
+							},
+						}},
+					},
+				},
+			},
+		},
+		cursor: in,
+	}
+
 	if windowEvery > 0 {
 		args := []*ast.Property{{
 			Key: &ast.Identifier{
@@ -6,7 +6,6 @@ import (
 	"time"

 	"github.com/influxdata/flux/ast"
-	"github.com/influxdata/flux/execute"
 	"github.com/influxdata/influxql"
 )

@@ -41,32 +40,19 @@ func (t *transpilerState) mapFields(in cursor) (cursor, error) {
 		panic("number of columns does not match the number of fields")
 	}

-	properties := make([]*ast.Property, 0, len(t.stmt.Fields)+1)
-	properties = append(properties, &ast.Property{
-		Key: &ast.Identifier{
-			Name: execute.DefaultTimeColLabel,
-		},
-		Value: &ast.MemberExpression{
-			Object: &ast.Identifier{
-				Name: "r",
-			},
-			Property: &ast.Identifier{
-				Name: execute.DefaultTimeColLabel,
-			},
-		},
-	})
+	properties := make([]*ast.Property, 0, len(t.stmt.Fields))
 	for i, f := range t.stmt.Fields {
 		if ref, ok := f.Expr.(*influxql.VarRef); ok && ref.Val == "time" {
 			// Skip past any time columns.
 			continue
 		}
-		value, err := t.mapField(f.Expr, in)
+		fieldName, err := t.mapField(f.Expr, in, false)
 		if err != nil {
 			return nil, err
 		}
 		properties = append(properties, &ast.Property{
-			Key: &ast.Identifier{Name: columns[i]},
-			Value: value,
+			Key:   fieldName.(ast.PropertyKey),
+			Value: &ast.StringLiteral{Value: columns[i]},
 		})
 	}
 	return &mapCursor{
@@ -74,31 +60,18 @@ func (t *transpilerState) mapFields(in cursor) (cursor, error) {
 		Argument: in.Expr(),
 		Call: &ast.CallExpression{
 			Callee: &ast.Identifier{
-				Name: "map",
+				Name: "rename",
 			},
 			Arguments: []ast.Expression{
 				&ast.ObjectExpression{
-					Properties: []*ast.Property{
-						{
-							Key: &ast.Identifier{
-								Name: "fn",
-							},
-							Value: &ast.FunctionExpression{
-								Params: []*ast.Property{{
-									Key: &ast.Identifier{Name: "r"},
-								}},
-								Body: &ast.ObjectExpression{
-									Properties: properties,
-								},
-							},
-						},
-						{
-							Key: &ast.Identifier{
-								Name: "mergeKey",
-							},
-							Value: &ast.BooleanLiteral{Value: true},
-						},
-					},
+					Properties: []*ast.Property{{
+						Key: &ast.Identifier{
+							Name: "columns",
+						},
+						Value: &ast.ObjectExpression{
+							Properties: properties,
+						},
+					}},
 				},
 			},
 		},
@@ -106,18 +79,21 @@ func (t *transpilerState) mapFields(in cursor) (cursor, error) {
 	}, nil
 }

-func (t *transpilerState) mapField(expr influxql.Expr, in cursor) (ast.Expression, error) {
+func (t *transpilerState) mapField(expr influxql.Expr, in cursor, returnMemberExpr bool) (ast.Expression, error) {
 	if sym, ok := in.Value(expr); ok {
-		var property ast.PropertyKey
+		var mappedName ast.Expression
 		if strings.HasPrefix(sym, "_") {
-			property = &ast.Identifier{Name: sym}
+			mappedName = &ast.Identifier{Name: sym}
 		} else {
-			property = &ast.StringLiteral{Value: sym}
+			mappedName = &ast.StringLiteral{Value: sym}
 		}
-		return &ast.MemberExpression{
-			Object:   &ast.Identifier{Name: "r"},
-			Property: property,
-		}, nil
+		if returnMemberExpr {
+			return &ast.MemberExpression{
+				Object:   &ast.Identifier{Name: "r"},
+				Property: mappedName.(ast.PropertyKey),
+			}, nil
+		}
+		return mappedName, nil
 	}

 	switch expr := expr.(type) {
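The `returnMemberExpr` flag added above controls whether a resolved symbol comes back as a member access on the row record or as the bare name. A string-level sketch of that switch (an assumption for illustration only; the real code returns Flux AST nodes, not strings):

```go
package main

import "fmt"

// mapName sketches the two shapes: filter conditions and binary expressions
// need a member access r.<sym>, while mapFields wants the bare name to use
// as a rename key.
func mapName(sym string, returnMemberExpr bool) string {
	if returnMemberExpr {
		return "r." + sym // e.g. used inside filter(fn: (r) => ...)
	}
	return sym // e.g. used as a key in rename(columns: {...})
}

func main() {
	fmt.Println(mapName("_value", true))  // r._value
	fmt.Println(mapName("_value", false)) // _value
}
```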
@@ -131,7 +107,7 @@ func (t *transpilerState) mapField(expr influxql.Expr, in cursor) (ast.Expressio
 	case *influxql.BinaryExpr:
 		return t.evalBinaryExpr(expr, in)
 	case *influxql.ParenExpr:
-		return t.mapField(expr.Expr, in)
+		return t.mapField(expr.Expr, in, returnMemberExpr)
 	case *influxql.StringLiteral:
 		if ts, err := expr.ToTimeLiteral(time.UTC); err == nil {
 			return &ast.DateTimeLiteral{Value: ts.Val}, nil
@@ -194,11 +170,11 @@ func (t *transpilerState) evalBinaryExpr(expr *influxql.BinaryExpr, in cursor) (
 		return nil, fmt.Errorf("unimplemented binary expression: %s", expr.Op)
 	}

-	lhs, err := t.mapField(expr.LHS, in)
+	lhs, err := t.mapField(expr.LHS, in, true)
 	if err != nil {
 		return nil, err
 	}
-	rhs, err := t.mapField(expr.RHS, in)
+	rhs, err := t.mapField(expr.RHS, in, true)
 	if err != nil {
 		return nil, err
 	}
@@ -35,10 +35,11 @@ func init() {
 			` + fmt.Sprintf(`from(bucketID: "%s")`, bucketID.String()) + `
 	|> range(start: 1677-09-21T00:12:43.145224194Z, stop: 2262-04-11T23:47:16.854775806Z)
 	|> filter(fn: (r) => r._measurement == "cpu" and r._field == "value")
-	|> group(columns: ["_measurement", "_start"], mode: "by")
+	|> group(columns: ["_measurement", "_start", "_stop", "_field"], mode: "by")
+	|> keep(columns: ["_measurement", "_start", "_stop", "_field", "_time", "_value"])
 	|> ` + name + `()
-	|> duplicate(column: "_start", as: "_time")
-	|> map(fn: (r) => ({_time: r._time, ` + name + `: r._value}), mergeKey: true)
+	|> map(fn: (r) => ({r with _time: 1970-01-01T00:00:00Z}))
+	|> rename(columns: {_value: "` + name + `"})
 	|> yield(name: "0")
 	`
 		}),
@@ -12,10 +12,11 @@ func init() {
 	|> range(start: 1677-09-21T00:12:43.145224194Z, stop: 2262-04-11T23:47:16.854775806Z)
 	|> filter(fn: (r) => r._measurement == "cpu" and r._field == "value")
 	|> filter(fn: (r) => r["host"] == "server01")
-	|> group(columns: ["_measurement", "_start"], mode: "by")
+	|> group(columns: ["_measurement", "_start", "_stop", "_field"], mode: "by")
+	|> keep(columns: ["_measurement", "_start", "_stop", "_field", "_time", "_value"])
 	|> ` + name + `()
-	|> duplicate(column: "_start", as: "_time")
-	|> map(fn: (r) => ({_time: r._time, ` + name + `: r._value}), mergeKey: true)
+	|> map(fn: (r) => ({r with _time: 1970-01-01T00:00:00Z}))
+	|> rename(columns: {_value: "` + name + `"})
 	|> yield(name: "0")
 	`
 		}),
@@ -11,10 +11,11 @@ func init() {
 			` + fmt.Sprintf(`from(bucketID: "%s"`, bucketID.String()) + `)
 	|> range(start: 1677-09-21T00:12:43.145224194Z, stop: 2262-04-11T23:47:16.854775806Z)
 	|> filter(fn: (r) => r._measurement == "cpu" and r._field == "value")
-	|> group(columns: ["_measurement", "_start", "host"], mode: "by")
+	|> group(columns: ["_measurement", "_start", "_stop", "_field", "host"], mode: "by")
+	|> keep(columns: ["_measurement", "_start", "_stop", "_field", "host", "_time", "_value"])
 	|> ` + name + `()
-	|> duplicate(column: "_start", as: "_time")
-	|> map(fn: (r) => ({_time: r._time, ` + name + `: r._value}), mergeKey: true)
+	|> map(fn: (r) => ({r with _time: 1970-01-01T00:00:00Z}))
+	|> rename(columns: {_value: "` + name + `"})
 	|> yield(name: "0")
 	`
 		}),