package pkger

import (
	"context"
	"errors"
	"fmt"
	"net/http"
	"net/url"
	"path"
	"regexp"
	"sort"
	"strings"
	"sync"
	"time"

	"github.com/go-stack/stack"
	"github.com/influxdata/influxdb/v2"
	ierrors "github.com/influxdata/influxdb/v2/kit/errors"
	"github.com/influxdata/influxdb/v2/kit/platform"
	errors2 "github.com/influxdata/influxdb/v2/kit/platform/errors"
	icheck "github.com/influxdata/influxdb/v2/notification/check"
	"github.com/influxdata/influxdb/v2/notification/rule"
	"github.com/influxdata/influxdb/v2/pkger/internal/wordplay"
	"github.com/influxdata/influxdb/v2/snowflake"
	"github.com/influxdata/influxdb/v2/task/options"
	"github.com/influxdata/influxdb/v2/task/taskmodel"
	"go.uber.org/zap"
)

// APIVersion marks the current APIVersion for influx packages.
const APIVersion = "influxdata.com/v2alpha1"
const APIVersion2 = "influxdata.com/v2alpha2"

// Stack is an identifier for the stateful application of one or more templates.
// The stack maps resources created from the template(s) to existing resources
// on the platform. The stack is updated only after the side effects of applying
// a template; if a template is applied and nothing changes, the stack is not
// updated.
type Stack struct {
	ID        platform.ID
	OrgID     platform.ID
	CreatedAt time.Time `json:"createdAt"`
	Events    []StackEvent
}

func (s Stack) LatestEvent() StackEvent {
	if len(s.Events) == 0 {
		return StackEvent{}
	}
	sort.Slice(s.Events, func(i, j int) bool {
		return s.Events[i].UpdatedAt.Before(s.Events[j].UpdatedAt)
	})
	return s.Events[len(s.Events)-1]
}

type (
	StackEvent struct {
		EventType    StackEventType
		Name         string
		Description  string
		Sources      []string
		TemplateURLs []string
		Resources    []StackResource
		UpdatedAt    time.Time `json:"updatedAt"`
	}

	StackCreate struct {
		OrgID        platform.ID
		Name         string
		Description  string
		Sources      []string
		TemplateURLs []string
		Resources    []StackResource
	}

	// StackResource is a record for an individual resource side effect generated from
	// applying a template.
	StackResource struct {
		APIVersion   string
		ID           platform.ID
		Name         string
		Kind         Kind
		MetaName     string
		Associations []StackResourceAssociation
	}

	// StackResourceAssociation associates a stack resource with another stack resource.
	StackResourceAssociation struct {
		Kind     Kind
		MetaName string
	}

	// StackUpdate provides a means to update an existing stack.
	StackUpdate struct {
		ID                  platform.ID
		Name                *string
		Description         *string
		TemplateURLs        []string
		AdditionalResources []StackAdditionalResource
	}

	StackAdditionalResource struct {
		APIVersion string
		ID         platform.ID
		Kind       Kind
		MetaName   string
	}
)

type StackEventType uint

const (
	StackEventCreate StackEventType = iota
	StackEventUpdate
	StackEventUninstalled
)

func (e StackEventType) String() string {
	switch e {
	case StackEventCreate:
		return "create"
	case StackEventUninstalled:
		return "uninstall"
	case StackEventUpdate:
		return "update"
	default:
		return "unknown"
	}
}

const ResourceTypeStack influxdb.ResourceType = "stack"

// SVC is the packages service interface.
type SVC interface {
	InitStack(ctx context.Context, userID platform.ID, stack StackCreate) (Stack, error)
	UninstallStack(ctx context.Context, identifiers struct{ OrgID, UserID, StackID platform.ID }) (Stack, error)
	DeleteStack(ctx context.Context, identifiers struct{ OrgID, UserID, StackID platform.ID }) error
	ListStacks(ctx context.Context, orgID platform.ID, filter ListFilter) ([]Stack, error)
	ReadStack(ctx context.Context, id platform.ID) (Stack, error)
	UpdateStack(ctx context.Context, upd StackUpdate) (Stack, error)

	Export(ctx context.Context, opts ...ExportOptFn) (*Template, error)
	DryRun(ctx context.Context, orgID, userID platform.ID, opts ...ApplyOptFn) (ImpactSummary, error)
	Apply(ctx context.Context, orgID, userID platform.ID, opts ...ApplyOptFn) (ImpactSummary, error)
}

// SVCMiddleware is a service middleware func.
type SVCMiddleware func(SVC) SVC
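
// A minimal sketch of how an SVCMiddleware composes; loggingSVC is a
// hypothetical wrapper type (not defined in this file) that embeds the next
// SVC and logs before delegating:
//
//	func loggingMW(log *zap.Logger) SVCMiddleware {
//		return func(next SVC) SVC {
//			return &loggingSVC{next: next, log: log}
//		}
//	}
//
//	// middlewares wrap innermost-first: svc = authMW(loggingMW(base))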

type serviceOpt struct {
	logger *zap.Logger

	applyReqLimit int
	client        *http.Client
	idGen         platform.IDGenerator
	nameGen       NameGenerator
	timeGen       influxdb.TimeGenerator
	store         Store

	bucketSVC   influxdb.BucketService
	checkSVC    influxdb.CheckService
	dashSVC     influxdb.DashboardService
	labelSVC    influxdb.LabelService
	endpointSVC influxdb.NotificationEndpointService
	orgSVC      influxdb.OrganizationService
	ruleSVC     influxdb.NotificationRuleStore
	secretSVC   influxdb.SecretService
	taskSVC     taskmodel.TaskService
	teleSVC     influxdb.TelegrafConfigStore
	varSVC      influxdb.VariableService
}

// ServiceSetterFn is a means of setting dependencies on the Service type.
type ServiceSetterFn func(opt *serviceOpt)

// WithHTTPClient sets the http client for the service.
func WithHTTPClient(c *http.Client) ServiceSetterFn {
	return func(o *serviceOpt) {
		o.client = c
	}
}

// WithLogger sets the logger for the service.
func WithLogger(log *zap.Logger) ServiceSetterFn {
	return func(o *serviceOpt) {
		o.logger = log
	}
}

// WithIDGenerator sets the id generator for the service.
func WithIDGenerator(idGen platform.IDGenerator) ServiceSetterFn {
	return func(opt *serviceOpt) {
		opt.idGen = idGen
	}
}

// WithTimeGenerator sets the time generator for the service.
func WithTimeGenerator(timeGen influxdb.TimeGenerator) ServiceSetterFn {
	return func(opt *serviceOpt) {
		opt.timeGen = timeGen
	}
}

// WithStore sets the store for the service.
func WithStore(store Store) ServiceSetterFn {
	return func(opt *serviceOpt) {
		opt.store = store
	}
}

// WithBucketSVC sets the bucket service.
func WithBucketSVC(bktSVC influxdb.BucketService) ServiceSetterFn {
	return func(opt *serviceOpt) {
		opt.bucketSVC = bktSVC
	}
}

// WithCheckSVC sets the check service.
func WithCheckSVC(checkSVC influxdb.CheckService) ServiceSetterFn {
	return func(opt *serviceOpt) {
		opt.checkSVC = checkSVC
	}
}

// WithDashboardSVC sets the dashboard service.
func WithDashboardSVC(dashSVC influxdb.DashboardService) ServiceSetterFn {
	return func(opt *serviceOpt) {
		opt.dashSVC = dashSVC
	}
}

// WithLabelSVC sets the label service.
func WithLabelSVC(labelSVC influxdb.LabelService) ServiceSetterFn {
	return func(opt *serviceOpt) {
		opt.labelSVC = labelSVC
	}
}

func withNameGen(nameGen NameGenerator) ServiceSetterFn {
	return func(opt *serviceOpt) {
		opt.nameGen = nameGen
	}
}

// WithNotificationEndpointSVC sets the notification endpoint service.
func WithNotificationEndpointSVC(endpointSVC influxdb.NotificationEndpointService) ServiceSetterFn {
	return func(opt *serviceOpt) {
		opt.endpointSVC = endpointSVC
	}
}

// WithNotificationRuleSVC sets the notification rule service.
func WithNotificationRuleSVC(ruleSVC influxdb.NotificationRuleStore) ServiceSetterFn {
	return func(opt *serviceOpt) {
		opt.ruleSVC = ruleSVC
	}
}

// WithOrganizationService sets the organization service for the service.
func WithOrganizationService(orgSVC influxdb.OrganizationService) ServiceSetterFn {
	return func(opt *serviceOpt) {
		opt.orgSVC = orgSVC
	}
}

// WithSecretSVC sets the secret service.
func WithSecretSVC(secretSVC influxdb.SecretService) ServiceSetterFn {
	return func(opt *serviceOpt) {
		opt.secretSVC = secretSVC
	}
}

// WithTaskSVC sets the task service.
func WithTaskSVC(taskSVC taskmodel.TaskService) ServiceSetterFn {
	return func(opt *serviceOpt) {
		opt.taskSVC = taskSVC
	}
}

// WithTelegrafSVC sets the telegraf service.
func WithTelegrafSVC(telegrafSVC influxdb.TelegrafConfigStore) ServiceSetterFn {
	return func(opt *serviceOpt) {
		opt.teleSVC = telegrafSVC
	}
}

// WithVariableSVC sets the variable service.
func WithVariableSVC(varSVC influxdb.VariableService) ServiceSetterFn {
	return func(opt *serviceOpt) {
		opt.varSVC = varSVC
	}
}

// Store is the storage behavior the Service depends on.
type Store interface {
	CreateStack(ctx context.Context, stack Stack) error
	ListStacks(ctx context.Context, orgID platform.ID, filter ListFilter) ([]Stack, error)
	ReadStackByID(ctx context.Context, id platform.ID) (Stack, error)
	UpdateStack(ctx context.Context, stack Stack) error
	DeleteStack(ctx context.Context, id platform.ID) error
}
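
// A minimal in-memory sketch of the Store contract (illustrative only; the
// production implementation persists stacks and returns platform error codes
// such as ENotFound, which DeleteStack below relies on to treat missing
// stacks as already deleted):
//
//	type memStore struct {
//		mu     sync.Mutex
//		stacks map[platform.ID]Stack
//	}
//
//	func (m *memStore) CreateStack(_ context.Context, stack Stack) error {
//		m.mu.Lock()
//		defer m.mu.Unlock()
//		if _, ok := m.stacks[stack.ID]; ok {
//			return errors.New("stack already exists")
//		}
//		m.stacks[stack.ID] = stack
//		return nil
//	}
//
//	func (m *memStore) ReadStackByID(_ context.Context, id platform.ID) (Stack, error) {
//		m.mu.Lock()
//		defer m.mu.Unlock()
//		st, ok := m.stacks[id]
//		if !ok {
//			return Stack{}, &errors2.Error{Code: errors2.ENotFound, Msg: "stack not found"}
//		}
//		return st, nil
//	}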

// Service provides the template business logic including all the dependencies to make
// this resource sausage.
type Service struct {
	log *zap.Logger

	// internal dependencies
	applyReqLimit int
	client        *http.Client
	idGen         platform.IDGenerator
	nameGen       NameGenerator
	store         Store
	timeGen       influxdb.TimeGenerator

	// external service dependencies
	bucketSVC   influxdb.BucketService
	checkSVC    influxdb.CheckService
	dashSVC     influxdb.DashboardService
	labelSVC    influxdb.LabelService
	endpointSVC influxdb.NotificationEndpointService
	orgSVC      influxdb.OrganizationService
	ruleSVC     influxdb.NotificationRuleStore
	secretSVC   influxdb.SecretService
	taskSVC     taskmodel.TaskService
	teleSVC     influxdb.TelegrafConfigStore
	varSVC      influxdb.VariableService
}

var _ SVC = (*Service)(nil)

// NewService is a constructor for a template Service.
func NewService(opts ...ServiceSetterFn) *Service {
	opt := &serviceOpt{
		logger:        zap.NewNop(),
		applyReqLimit: 5,
		idGen:         snowflake.NewDefaultIDGenerator(),
		nameGen:       wordplay.GetRandomName,
		timeGen:       influxdb.RealTimeGenerator{},
	}
	for _, o := range opts {
		o(opt)
	}

	return &Service{
		log: opt.logger,

		applyReqLimit: opt.applyReqLimit,
		client:        opt.client,
		idGen:         opt.idGen,
		nameGen:       opt.nameGen,
		store:         opt.store,
		timeGen:       opt.timeGen,

		bucketSVC:   opt.bucketSVC,
		checkSVC:    opt.checkSVC,
		labelSVC:    opt.labelSVC,
		dashSVC:     opt.dashSVC,
		endpointSVC: opt.endpointSVC,
		orgSVC:      opt.orgSVC,
		ruleSVC:     opt.ruleSVC,
		secretSVC:   opt.secretSVC,
		taskSVC:     opt.taskSVC,
		teleSVC:     opt.teleSVC,
		varSVC:      opt.varSVC,
	}
}
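
// A usage sketch of the functional-option constructor (the dependency values
// here are hypothetical stand-ins for real implementations):
//
//	svc := NewService(
//		WithLogger(logger),
//		WithStore(stackStore),
//		WithOrganizationService(orgSVC),
//		WithBucketSVC(bucketSVC),
//		WithLabelSVC(labelSVC),
//	)
//
// Any dependency not supplied keeps its default from serviceOpt, so callers
// wire in only the services a given deployment needs.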

// InitStack will create a new stack for the given user and org. The stack can be created
// with urls that point to the location of packages that are included as part of the stack when
// it is applied.
func (s *Service) InitStack(ctx context.Context, userID platform.ID, stCreate StackCreate) (Stack, error) {
	if err := validURLs(stCreate.TemplateURLs); err != nil {
		return Stack{}, err
	}

	// Reject use of server-side jsonnet with stack templates.
	for _, u := range stCreate.TemplateURLs {
		// While things like '.%6Aonnet' evaluate to the default encoding (yaml),
		// unescape and catch those too: url.QueryUnescape(".%6Aonnet") yields ".jsonnet".
		decoded, err := url.QueryUnescape(u)
		if err != nil {
			msg := fmt.Sprintf("stack template from url[%q] had an issue", u)
			return Stack{}, influxErr(errors2.EInvalid, msg)
		}

		if strings.HasSuffix(strings.ToLower(decoded), "jsonnet") {
			msg := fmt.Sprintf("stack template from url[%q] had an issue: %s", u, ErrInvalidEncoding.Error())
			return Stack{}, influxErr(errors2.EUnprocessableEntity, msg)
		}
	}

	if _, err := s.orgSVC.FindOrganizationByID(ctx, stCreate.OrgID); err != nil {
		if errors2.ErrorCode(err) == errors2.ENotFound {
			msg := fmt.Sprintf("organization dependency does not exist for id[%q]", stCreate.OrgID.String())
			return Stack{}, influxErr(errors2.EConflict, msg)
		}
		return Stack{}, internalErr(err)
	}

	now := s.timeGen.Now()
	newStack := Stack{
		ID:        s.idGen.ID(),
		OrgID:     stCreate.OrgID,
		CreatedAt: now,
		Events: []StackEvent{
			{
				EventType:    StackEventCreate,
				Name:         stCreate.Name,
				Description:  stCreate.Description,
				Resources:    stCreate.Resources,
				TemplateURLs: stCreate.TemplateURLs,
				UpdatedAt:    now,
			},
		},
	}
	if err := s.store.CreateStack(ctx, newStack); err != nil {
		return Stack{}, internalErr(err)
	}

	return newStack, nil
}
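
// For example, initializing a stack that will pull a template at apply time
// (hypothetical IDs and URL):
//
//	stack, err := svc.InitStack(ctx, userID, StackCreate{
//		OrgID:        orgID,
//		Name:         "monitoring",
//		TemplateURLs: []string{"https://example.com/templates/monitoring.yml"},
//	})
//
// The URLs are only validated and stored here; their contents are fetched
// when the stack is applied.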

// UninstallStack will remove all resources associated with the stack.
func (s *Service) UninstallStack(ctx context.Context, identifiers struct{ OrgID, UserID, StackID platform.ID }) (Stack, error) {
	uninstalledStack, err := s.uninstallStack(ctx, identifiers)
	if err != nil {
		return Stack{}, err
	}

	ev := uninstalledStack.LatestEvent()
	ev.EventType = StackEventUninstalled
	ev.Resources = nil
	ev.UpdatedAt = s.timeGen.Now()

	uninstalledStack.Events = append(uninstalledStack.Events, ev)
	if err := s.store.UpdateStack(ctx, uninstalledStack); err != nil {
		s.log.Error("unable to update stack after uninstalling resources", zap.Error(err))
	}
	return uninstalledStack, nil
}

// DeleteStack removes a stack and all the resources that are associated with the stack.
func (s *Service) DeleteStack(ctx context.Context, identifiers struct{ OrgID, UserID, StackID platform.ID }) (e error) {
	deletedStack, err := s.uninstallStack(ctx, identifiers)
	if errors2.ErrorCode(err) == errors2.ENotFound {
		return nil
	}
	if err != nil {
		return err
	}

	return s.store.DeleteStack(ctx, deletedStack.ID)
}

func (s *Service) uninstallStack(ctx context.Context, identifiers struct{ OrgID, UserID, StackID platform.ID }) (_ Stack, e error) {
	stack, err := s.store.ReadStackByID(ctx, identifiers.StackID)
	if err != nil {
		return Stack{}, err
	}
	if stack.OrgID != identifiers.OrgID {
		return Stack{}, &errors2.Error{
			Code: errors2.EConflict,
			Msg:  "you do not have access to given stack ID",
		}
	}

	// providing an empty template will remove all applied resources
	state, err := s.dryRun(ctx, identifiers.OrgID, new(Template), applyOptFromOptFns(ApplyWithStackID(identifiers.StackID)))
	if err != nil {
		return Stack{}, err
	}

	coordinator := newRollbackCoordinator(s.log, s.applyReqLimit)
	defer coordinator.rollback(s.log, &e, identifiers.OrgID)

	err = s.applyState(ctx, coordinator, identifiers.OrgID, identifiers.UserID, state, nil)
	if err != nil {
		return Stack{}, err
	}
	return stack, nil
}

// ListFilter provides options for filtering the stacks that are returned.
type ListFilter struct {
	StackIDs []platform.ID
	Names    []string
}

// ListStacks returns a list of stacks.
func (s *Service) ListStacks(ctx context.Context, orgID platform.ID, f ListFilter) ([]Stack, error) {
	return s.store.ListStacks(ctx, orgID, f)
}

// ReadStack returns a stack that matches the given id.
func (s *Service) ReadStack(ctx context.Context, id platform.ID) (Stack, error) {
	return s.store.ReadStackByID(ctx, id)
}

// UpdateStack updates the stack by the given parameters.
func (s *Service) UpdateStack(ctx context.Context, upd StackUpdate) (Stack, error) {
	existing, err := s.ReadStack(ctx, upd.ID)
	if err != nil {
		return Stack{}, err
	}

	// Reject use of server-side jsonnet with stack templates.
	for _, u := range upd.TemplateURLs {
		// While things like '.%6Aonnet' evaluate to the default encoding (yaml),
		// unescape and catch those too: url.QueryUnescape(".%6Aonnet") yields ".jsonnet".
		decoded, err := url.QueryUnescape(u)
		if err != nil {
			msg := fmt.Sprintf("stack template from url[%q] had an issue", u)
			return Stack{}, influxErr(errors2.EInvalid, msg)
		}

		if strings.HasSuffix(strings.ToLower(decoded), "jsonnet") {
			msg := fmt.Sprintf("stack template from url[%q] had an issue: %s", u, ErrInvalidEncoding.Error())
			return Stack{}, influxErr(errors2.EUnprocessableEntity, msg)
		}
	}

	updatedStack := s.applyStackUpdate(existing, upd)
	if err := s.store.UpdateStack(ctx, updatedStack); err != nil {
		return Stack{}, err
	}

	return updatedStack, nil
}

func (s *Service) applyStackUpdate(existing Stack, upd StackUpdate) Stack {
	ev := existing.LatestEvent()
	ev.EventType = StackEventUpdate
	ev.UpdatedAt = s.timeGen.Now()
	if upd.Name != nil {
		ev.Name = *upd.Name
	}
	if upd.Description != nil {
		ev.Description = *upd.Description
	}
	if upd.TemplateURLs != nil {
		ev.TemplateURLs = upd.TemplateURLs
	}

	type key struct {
		k  Kind
		id platform.ID
	}
	mExistingResources := make(map[key]bool)
	mExistingNames := make(map[string]bool)
	for _, r := range ev.Resources {
		k := key{k: r.Kind, id: r.ID}
		mExistingResources[k] = true
		mExistingNames[r.MetaName] = true
	}

	var out []StackResource
	for _, r := range upd.AdditionalResources {
		k := key{k: r.Kind, id: r.ID}
		if mExistingResources[k] {
			continue
		}

		sr := StackResource{
			APIVersion: r.APIVersion,
			ID:         r.ID,
			Kind:       r.Kind,
		}

		metaName := r.MetaName
		if metaName == "" || mExistingNames[metaName] {
			metaName = uniqMetaName(s.nameGen, s.idGen, mExistingNames)
		}
		mExistingNames[metaName] = true
		sr.MetaName = metaName

		out = append(out, sr)
	}

	ev.Resources = append(ev.Resources, out...)
	sort.Slice(ev.Resources, func(i, j int) bool {
		iName, jName := ev.Resources[i].MetaName, ev.Resources[j].MetaName
		iKind, jKind := ev.Resources[i].Kind, ev.Resources[j].Kind

		if iKind.is(jKind) {
			return iName < jName
		}
		return kindPriorities[iKind] > kindPriorities[jKind]
	})

	existing.Events = append(existing.Events, ev)
	return existing
}
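
// The merge above dedupes additional resources by (Kind, ID) and guarantees a
// unique meta name for each survivor. A sketch of the behavior (hypothetical
// IDs and names):
//
//	// the latest event already tracks {KindBucket, ID: 1, MetaName: "rucksack"}
//	upd := StackUpdate{AdditionalResources: []StackAdditionalResource{
//		{Kind: KindBucket, ID: 1},                         // dropped: duplicate (Kind, ID)
//		{Kind: KindBucket, ID: 2, MetaName: "rucksack"},   // kept, meta name regenerated
//		{Kind: KindLabel, ID: 3, MetaName: "fresh-label"}, // kept as provided
//	}}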

type (
	// ExportOptFn is a functional input for setting the template fields.
	ExportOptFn func(opt *ExportOpt) error

	// ExportOpt holds the options for creating a new package.
	ExportOpt struct {
		StackID   platform.ID
		OrgIDs    []ExportByOrgIDOpt
		Resources []ResourceToClone
	}

	// ExportByOrgIDOpt identifies an org to export resources for and provides
	// multiple filtering options.
	ExportByOrgIDOpt struct {
		OrgID         platform.ID
		LabelNames    []string
		ResourceKinds []Kind
	}
)

// ExportWithExistingResources allows the create method to clone existing resources.
func ExportWithExistingResources(resources ...ResourceToClone) ExportOptFn {
	return func(opt *ExportOpt) error {
		for _, r := range resources {
			if err := r.OK(); err != nil {
				return err
			}
		}
		opt.Resources = append(opt.Resources, resources...)
		return nil
	}
}

// ExportWithAllOrgResources allows the create method to clone all existing resources
// for the given organization.
func ExportWithAllOrgResources(orgIDOpt ExportByOrgIDOpt) ExportOptFn {
	return func(opt *ExportOpt) error {
		if orgIDOpt.OrgID == 0 {
			return errors.New("orgID provided must not be zero")
		}
		for _, k := range orgIDOpt.ResourceKinds {
			if err := k.OK(); err != nil {
				return err
			}
		}
		opt.OrgIDs = append(opt.OrgIDs, orgIDOpt)
		return nil
	}
}

// ExportWithStackID provides an export for the given stack ID.
func ExportWithStackID(stackID platform.ID) ExportOptFn {
	return func(opt *ExportOpt) error {
		opt.StackID = stackID
		return nil
	}
}

func exportOptFromOptFns(opts []ExportOptFn) (ExportOpt, error) {
	var opt ExportOpt
	for _, setter := range opts {
		if err := setter(&opt); err != nil {
			return ExportOpt{}, err
		}
	}
	return opt, nil
}

// Export will produce a template from the parameters provided.
func (s *Service) Export(ctx context.Context, setters ...ExportOptFn) (*Template, error) {
	opt, err := exportOptFromOptFns(setters)
	if err != nil {
		return nil, err
	}

	var stack Stack
	if opt.StackID != 0 {
		stack, err = s.store.ReadStackByID(ctx, opt.StackID)
		if err != nil {
			return nil, err
		}

		var opts []ExportOptFn
		for _, r := range stack.LatestEvent().Resources {
			opts = append(opts, ExportWithExistingResources(ResourceToClone{
				Kind:     r.Kind,
				ID:       r.ID,
				MetaName: r.MetaName,
				Name:     r.Name,
			}))
		}

		opt, err = exportOptFromOptFns(append(setters, opts...))
		if err != nil {
			return nil, err
		}
	}

	exporter := newResourceExporter(s)

	for _, orgIDOpt := range opt.OrgIDs {
		resourcesToClone, err := s.cloneOrgResources(ctx, orgIDOpt.OrgID, orgIDOpt.ResourceKinds)
		if err != nil {
			return nil, internalErr(err)
		}

		if err := exporter.Export(ctx, resourcesToClone, orgIDOpt.LabelNames...); err != nil {
			return nil, internalErr(err)
		}
	}

	if err := exporter.Export(ctx, opt.Resources); err != nil {
		return nil, internalErr(err)
	}

	template := &Template{Objects: exporter.Objects()}
	if err := template.Validate(ValidWithoutResources()); err != nil {
		return nil, failedValidationErr(err)
	}

	return template, nil
}
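
// Two common export flows, sketched with hypothetical IDs:
//
//	// export everything a stack currently tracks
//	tmpl, err := svc.Export(ctx, ExportWithStackID(stackID))
//
//	// export all buckets and labels in an org
//	tmpl, err = svc.Export(ctx, ExportWithAllOrgResources(ExportByOrgIDOpt{
//		OrgID:         orgID,
//		ResourceKinds: []Kind{KindBucket, KindLabel},
//	}))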

func (s *Service) cloneOrgResources(ctx context.Context, orgID platform.ID, resourceKinds []Kind) ([]ResourceToClone, error) {
	var resources []ResourceToClone
	for _, resGen := range s.filterOrgResourceKinds(resourceKinds) {
		existingResources, err := resGen.cloneFn(ctx, orgID)
		if err != nil {
			return nil, ierrors.Wrap(err, "finding "+string(resGen.resType))
		}
		resources = append(resources, existingResources...)
	}

	return resources, nil
}

func (s *Service) cloneOrgBuckets(ctx context.Context, orgID platform.ID) ([]ResourceToClone, error) {
	buckets, _, err := s.bucketSVC.FindBuckets(ctx, influxdb.BucketFilter{
		OrganizationID: &orgID,
	})
	if err != nil {
		return nil, err
	}

	resources := make([]ResourceToClone, 0, len(buckets))
	for _, b := range buckets {
		if b.Type == influxdb.BucketTypeSystem {
			continue
		}
		resources = append(resources, ResourceToClone{
			Kind: KindBucket,
			ID:   b.ID,
			Name: b.Name,
		})
	}
	return resources, nil
}

func (s *Service) cloneOrgChecks(ctx context.Context, orgID platform.ID) ([]ResourceToClone, error) {
	checks, _, err := s.checkSVC.FindChecks(ctx, influxdb.CheckFilter{
		OrgID: &orgID,
	})
	if err != nil {
		return nil, err
	}

	resources := make([]ResourceToClone, 0, len(checks))
	for _, c := range checks {
		resources = append(resources, ResourceToClone{
			Kind: KindCheck,
			ID:   c.GetID(),
			Name: c.GetName(),
		})
	}
	return resources, nil
}

func (s *Service) cloneOrgDashboards(ctx context.Context, orgID platform.ID) ([]ResourceToClone, error) {
	dashs, _, err := s.dashSVC.FindDashboards(ctx, influxdb.DashboardFilter{
		OrganizationID: &orgID,
	}, influxdb.FindOptions{Limit: 100})
	if err != nil {
		return nil, err
	}

	resources := make([]ResourceToClone, 0, len(dashs))
	for _, d := range dashs {
		resources = append(resources, ResourceToClone{
			Kind: KindDashboard,
			ID:   d.ID,
		})
	}
	return resources, nil
}

func (s *Service) cloneOrgLabels(ctx context.Context, orgID platform.ID) ([]ResourceToClone, error) {
	filter := influxdb.LabelFilter{
		OrgID: &orgID,
	}

	labels, err := s.labelSVC.FindLabels(ctx, filter, influxdb.FindOptions{Limit: 100})
	if err != nil {
		return nil, ierrors.Wrap(err, "finding labels")
	}

	resources := make([]ResourceToClone, 0, len(labels))
	for _, l := range labels {
		resources = append(resources, ResourceToClone{
			Kind: KindLabel,
			ID:   l.ID,
			Name: l.Name,
		})
	}
	return resources, nil
}

func (s *Service) cloneOrgNotificationEndpoints(ctx context.Context, orgID platform.ID) ([]ResourceToClone, error) {
	endpoints, _, err := s.endpointSVC.FindNotificationEndpoints(ctx, influxdb.NotificationEndpointFilter{
		OrgID: &orgID,
	})
	if err != nil {
		return nil, err
	}

	resources := make([]ResourceToClone, 0, len(endpoints))
	for _, e := range endpoints {
		resources = append(resources, ResourceToClone{
			Kind: KindNotificationEndpoint,
			ID:   e.GetID(),
			Name: e.GetName(),
		})
	}
	return resources, nil
}

func (s *Service) cloneOrgNotificationRules(ctx context.Context, orgID platform.ID) ([]ResourceToClone, error) {
	rules, _, err := s.ruleSVC.FindNotificationRules(ctx, influxdb.NotificationRuleFilter{
		OrgID: &orgID,
	})
	if err != nil {
		return nil, err
	}

	resources := make([]ResourceToClone, 0, len(rules))
	for _, r := range rules {
		resources = append(resources, ResourceToClone{
			Kind: KindNotificationRule,
			ID:   r.GetID(),
			Name: r.GetName(),
		})
	}
	return resources, nil
}

func (s *Service) cloneOrgTasks(ctx context.Context, orgID platform.ID) ([]ResourceToClone, error) {
	tasks, err := s.getAllTasks(ctx, orgID)
	if err != nil {
		return nil, err
	}

	if len(tasks) == 0 {
		return nil, nil
	}

	checks, err := s.getAllChecks(ctx, orgID)
	if err != nil {
		return nil, err
	}

	rules, err := s.getNotificationRules(ctx, orgID)
	if err != nil {
		return nil, err
	}

	mTasks := make(map[platform.ID]*taskmodel.Task)
	for i := range tasks {
		t := tasks[i]
		if t.Type != taskmodel.TaskSystemType {
			continue
		}
		mTasks[t.ID] = t
	}
	// checks and notification rules are backed by tasks; exclude those tasks
	// here so they are exported via their owning resource, not twice
	for _, c := range checks {
		delete(mTasks, c.GetTaskID())
	}
	for _, r := range rules {
		delete(mTasks, r.GetTaskID())
	}

	resources := make([]ResourceToClone, 0, len(mTasks))
	for _, t := range mTasks {
		resources = append(resources, ResourceToClone{
			Kind: KindTask,
			ID:   t.ID,
		})
	}
	return resources, nil
}

func (s *Service) cloneOrgTelegrafs(ctx context.Context, orgID platform.ID) ([]ResourceToClone, error) {
	teles, _, err := s.teleSVC.FindTelegrafConfigs(ctx, influxdb.TelegrafConfigFilter{OrgID: &orgID})
	if err != nil {
		return nil, err
	}

	resources := make([]ResourceToClone, 0, len(teles))
	for _, t := range teles {
		resources = append(resources, ResourceToClone{
			Kind: KindTelegraf,
			ID:   t.ID,
		})
	}
	return resources, nil
}

func (s *Service) cloneOrgVariables(ctx context.Context, orgID platform.ID) ([]ResourceToClone, error) {
	vars, err := s.varSVC.FindVariables(ctx, influxdb.VariableFilter{
		OrganizationID: &orgID,
	}, influxdb.FindOptions{Limit: 10000})
	if err != nil {
		return nil, err
	}

	resources := make([]ResourceToClone, 0, len(vars))
	for _, v := range vars {
		resources = append(resources, ResourceToClone{
			Kind: KindVariable,
			ID:   v.ID,
		})
	}

	return resources, nil
}

type (
	cloneResFn func(context.Context, platform.ID) ([]ResourceToClone, error)

	resClone struct {
		resType influxdb.ResourceType
		cloneFn cloneResFn
	}
)

func (s *Service) filterOrgResourceKinds(resourceKindFilters []Kind) []resClone {
	mKinds := map[Kind]cloneResFn{
		KindBucket:               s.cloneOrgBuckets,
		KindCheck:                s.cloneOrgChecks,
		KindDashboard:            s.cloneOrgDashboards,
		KindLabel:                s.cloneOrgLabels,
		KindNotificationEndpoint: s.cloneOrgNotificationEndpoints,
		KindNotificationRule:     s.cloneOrgNotificationRules,
		KindTask:                 s.cloneOrgTasks,
		KindTelegraf:             s.cloneOrgTelegrafs,
		KindVariable:             s.cloneOrgVariables,
	}

	newResGen := func(resType influxdb.ResourceType, cloneFn cloneResFn) resClone {
		return resClone{
			resType: resType,
			cloneFn: cloneFn,
		}
	}

	var resourceTypeGens []resClone
	if len(resourceKindFilters) == 0 {
		for k, cloneFn := range mKinds {
			resourceTypeGens = append(resourceTypeGens, newResGen(k.ResourceType(), cloneFn))
		}
		return resourceTypeGens
	}

	seenKinds := make(map[Kind]bool)
	for _, k := range resourceKindFilters {
		cloneFn, ok := mKinds[k]
		if !ok || seenKinds[k] {
			continue
		}
		seenKinds[k] = true
		resourceTypeGens = append(resourceTypeGens, newResGen(k.ResourceType(), cloneFn))
	}

	return resourceTypeGens
}
|
|
|
|
|
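// Illustrative sketch (not part of the original source): how a caller might
// use filterOrgResourceKinds. Duplicate and unknown kinds are dropped, so the
// hypothetical filter below yields exactly two resClone entries (bucket and
// label), each carrying the clone function for its kind.
//
//	gens := s.filterOrgResourceKinds([]Kind{KindBucket, KindLabel, KindBucket})
//	for _, g := range gens {
//		resources, err := g.cloneFn(ctx, orgID) // ResourceToClone entries for this type
//		_ = resources
//		_ = err
//	}
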
// ImpactSummary represents the impact the application of a template will have on the system.
type ImpactSummary struct {
	Sources []string
	StackID platform.ID
	Diff    Diff
	Summary Summary
}

var reCommunityTemplatesValidAddr = regexp.MustCompile(`(?:https://raw\.githubusercontent\.com/influxdata/community-templates/master/)(?P<name>\w+)(?:/.*)`)

func (i *ImpactSummary) communityName() string {
	if len(i.Sources) == 0 {
		return "custom"
	}

	// pull name `name` from community url https://raw.githubusercontent.com/influxdata/community-templates/master/name/name_template.yml
	for j := range i.Sources {
		finds := reCommunityTemplatesValidAddr.FindStringSubmatch(i.Sources[j])
		if len(finds) == 2 {
			return finds[1]
		}
	}

	return "custom"
}

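// Illustrative sketch (not part of the original source): how communityName
// resolves a name. Given the hypothetical source URL
//
//	https://raw.githubusercontent.com/influxdata/community-templates/master/linux_system/linux_system.yml
//
// FindStringSubmatch returns two entries, the full match and the captured
// `name` group, so communityName reports "linux_system". Any source that does
// not match the pattern falls back to "custom".
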
// DryRun provides a dry run of the template application. The template will be marked verified
// for later calls to Apply. This func will be run on an Apply if it has not been run
// already.
func (s *Service) DryRun(ctx context.Context, orgID, userID platform.ID, opts ...ApplyOptFn) (ImpactSummary, error) {
	opt := applyOptFromOptFns(opts...)
	template, err := s.templateFromApplyOpts(ctx, opt)
	if err != nil {
		return ImpactSummary{}, err
	}

	state, err := s.dryRun(ctx, orgID, template, opt)
	if err != nil {
		return ImpactSummary{}, err
	}

	return ImpactSummary{
		Sources: template.sources,
		StackID: opt.StackID,
		Diff:    state.diff(),
		Summary: newSummaryFromStateTemplate(state, template),
	}, nil
}

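// Illustrative sketch (not part of the original source): a typical dry run of
// a parsed template, assuming a hypothetical template and org/user IDs are in
// hand. Nothing is applied; the returned impact only describes what would
// change.
//
//	impact, err := svc.DryRun(ctx, orgID, userID, ApplyWithTemplate(template))
//	if err != nil {
//		return err
//	}
//	_ = impact.Diff    // proposed changes
//	_ = impact.Summary // resulting resources
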
func (s *Service) dryRun(ctx context.Context, orgID platform.ID, template *Template, opt ApplyOpt) (*stateCoordinator, error) {
	// so here's the deal, when we have issues with the parsing validation, we
	// continue to do the diff anyhow. any resource that does not have a name
	// will be skipped, and won't bleed into the dry run here. We can now return
	// an error (parseErr) and a valid diff/summary.
	var parseErr error
	err := template.Validate(ValidWithoutResources())
	if err != nil && !IsParseErr(err) {
		return nil, internalErr(err)
	}
	parseErr = err

	if len(opt.EnvRefs) > 0 {
		err := template.applyEnvRefs(opt.EnvRefs)
		if err != nil && !IsParseErr(err) {
			return nil, internalErr(err)
		}
		parseErr = err
	}

	state := newStateCoordinator(template, resourceActions{
		skipKinds:     opt.KindsToSkip,
		skipResources: opt.ResourcesToSkip,
	})

	if opt.StackID > 0 {
		if err := s.addStackState(ctx, opt.StackID, state); err != nil {
			return nil, internalErr(err)
		}
	}

	if err := s.dryRunSecrets(ctx, orgID, template); err != nil {
		return nil, err
	}

	s.dryRunBuckets(ctx, orgID, state.mBuckets)
	s.dryRunChecks(ctx, orgID, state.mChecks)
	s.dryRunDashboards(ctx, orgID, state.mDashboards)
	s.dryRunLabels(ctx, orgID, state.mLabels)
	s.dryRunTasks(ctx, orgID, state.mTasks)
	s.dryRunTelegrafConfigs(ctx, orgID, state.mTelegrafs)
	s.dryRunVariables(ctx, orgID, state.mVariables)

	err = s.dryRunNotificationEndpoints(ctx, orgID, state.mEndpoints)
	if err != nil {
		return nil, ierrors.Wrap(err, "failed to dry run notification endpoints")
	}

	err = s.dryRunNotificationRules(ctx, orgID, state.mRules, state.mEndpoints)
	if err != nil {
		return nil, err
	}

	stateLabelMappings, err := s.dryRunLabelMappings(ctx, state)
	if err != nil {
		return nil, err
	}
	state.labelMappings = stateLabelMappings

	return state, parseErr
}

func (s *Service) dryRunBuckets(ctx context.Context, orgID platform.ID, bkts map[string]*stateBucket) {
	for _, stateBkt := range bkts {
		stateBkt.orgID = orgID
		var existing *influxdb.Bucket
		if stateBkt.ID() != 0 {
			existing, _ = s.bucketSVC.FindBucketByID(ctx, stateBkt.ID())
		} else {
			existing, _ = s.bucketSVC.FindBucketByName(ctx, orgID, stateBkt.parserBkt.Name())
		}
		if IsNew(stateBkt.stateStatus) && existing != nil {
			stateBkt.stateStatus = StateStatusExists
		}
		stateBkt.existing = existing
	}
}

func (s *Service) dryRunChecks(ctx context.Context, orgID platform.ID, checks map[string]*stateCheck) {
	for _, c := range checks {
		c.orgID = orgID

		var existing influxdb.Check
		if c.ID() != 0 {
			existing, _ = s.checkSVC.FindCheckByID(ctx, c.ID())
		} else {
			name := c.parserCheck.Name()
			existing, _ = s.checkSVC.FindCheck(ctx, influxdb.CheckFilter{
				Name:  &name,
				OrgID: &orgID,
			})
		}
		if IsNew(c.stateStatus) && existing != nil {
			c.stateStatus = StateStatusExists
		}
		c.existing = existing
	}
}

func (s *Service) dryRunDashboards(ctx context.Context, orgID platform.ID, dashs map[string]*stateDashboard) {
	for _, stateDash := range dashs {
		stateDash.orgID = orgID
		var existing *influxdb.Dashboard
		if stateDash.ID() != 0 {
			existing, _ = s.dashSVC.FindDashboardByID(ctx, stateDash.ID())
		}
		if IsNew(stateDash.stateStatus) && existing != nil {
			stateDash.stateStatus = StateStatusExists
		}
		stateDash.existing = existing
	}
}

func (s *Service) dryRunLabels(ctx context.Context, orgID platform.ID, labels map[string]*stateLabel) {
	for _, l := range labels {
		l.orgID = orgID
		existingLabel, _ := s.findLabel(ctx, orgID, l)
		if IsNew(l.stateStatus) && existingLabel != nil {
			l.stateStatus = StateStatusExists
		}
		l.existing = existingLabel
	}
}

func (s *Service) dryRunNotificationEndpoints(ctx context.Context, orgID platform.ID, endpoints map[string]*stateEndpoint) error {
	existingEndpoints, _, err := s.endpointSVC.FindNotificationEndpoints(ctx, influxdb.NotificationEndpointFilter{
		OrgID: &orgID,
	}) // grab em all
	if err != nil {
		return internalErr(err)
	}

	mExistingByName := make(map[string]influxdb.NotificationEndpoint)
	mExistingByID := make(map[platform.ID]influxdb.NotificationEndpoint)
	for i := range existingEndpoints {
		e := existingEndpoints[i]
		mExistingByName[e.GetName()] = e
		mExistingByID[e.GetID()] = e
	}

	findEndpoint := func(e *stateEndpoint) influxdb.NotificationEndpoint {
		if iExisting, ok := mExistingByID[e.ID()]; ok {
			return iExisting
		}
		if iExisting, ok := mExistingByName[e.parserEndpoint.Name()]; ok {
			return iExisting
		}
		return nil
	}

	for _, newEndpoint := range endpoints {
		existing := findEndpoint(newEndpoint)
		if IsNew(newEndpoint.stateStatus) && existing != nil {
			newEndpoint.stateStatus = StateStatusExists
		}
		newEndpoint.existing = existing
	}

	return nil
}

func (s *Service) dryRunNotificationRules(ctx context.Context, orgID platform.ID, rules map[string]*stateRule, endpoints map[string]*stateEndpoint) error {
	for _, rule := range rules {
		rule.orgID = orgID
		var existing influxdb.NotificationRule
		if rule.ID() != 0 {
			existing, _ = s.ruleSVC.FindNotificationRuleByID(ctx, rule.ID())
		}
		rule.existing = existing
	}

	for _, r := range rules {
		if r.associatedEndpoint != nil {
			continue
		}

		e, ok := endpoints[r.parserRule.endpointMetaName()]
		if !IsRemoval(r.stateStatus) && !ok {
			err := fmt.Errorf("failed to find notification endpoint %q dependency for notification rule %q", r.parserRule.endpointName, r.parserRule.MetaName())
			return &errors2.Error{
				Code: errors2.EUnprocessableEntity,
				Err:  err,
			}
		}
		r.associatedEndpoint = e
	}

	return nil
}

func (s *Service) dryRunSecrets(ctx context.Context, orgID platform.ID, template *Template) error {
	templateSecrets := template.mSecrets
	if len(templateSecrets) == 0 {
		return nil
	}

	existingSecrets, err := s.secretSVC.GetSecretKeys(ctx, orgID)
	if err != nil {
		return &errors2.Error{Code: errors2.EInternal, Err: err}
	}

	for _, secret := range existingSecrets {
		templateSecrets[secret] = true // marked true since it exists in the platform
	}

	return nil
}

func (s *Service) dryRunTasks(ctx context.Context, orgID platform.ID, tasks map[string]*stateTask) {
	for _, stateTask := range tasks {
		stateTask.orgID = orgID
		var existing *taskmodel.Task
		if stateTask.ID() != 0 {
			existing, _ = s.taskSVC.FindTaskByID(ctx, stateTask.ID())
		}
		if IsNew(stateTask.stateStatus) && existing != nil {
			stateTask.stateStatus = StateStatusExists
		}
		stateTask.existing = existing
	}
}

func (s *Service) dryRunTelegrafConfigs(ctx context.Context, orgID platform.ID, teleConfigs map[string]*stateTelegraf) {
	for _, stateTele := range teleConfigs {
		stateTele.orgID = orgID
		var existing *influxdb.TelegrafConfig
		if stateTele.ID() != 0 {
			existing, _ = s.teleSVC.FindTelegrafConfigByID(ctx, stateTele.ID())
		}
		if IsNew(stateTele.stateStatus) && existing != nil {
			stateTele.stateStatus = StateStatusExists
		}
		stateTele.existing = existing
	}
}

func (s *Service) dryRunVariables(ctx context.Context, orgID platform.ID, vars map[string]*stateVariable) {
	existingVars, _ := s.getAllPlatformVariables(ctx, orgID)

	mIDs := make(map[platform.ID]*influxdb.Variable)
	mNames := make(map[string]*influxdb.Variable)
	for _, v := range existingVars {
		mIDs[v.ID] = v
		mNames[v.Name] = v
	}

	for _, v := range vars {
		existing := mNames[v.parserVar.Name()]
		if v.ID() != 0 {
			existing = mIDs[v.ID()]
		}
		if IsNew(v.stateStatus) && existing != nil {
			v.stateStatus = StateStatusExists
		}
		v.existing = existing
	}
}

func (s *Service) dryRunLabelMappings(ctx context.Context, state *stateCoordinator) ([]stateLabelMapping, error) {
	stateLabelsByResName := make(map[string]*stateLabel)
	for _, l := range state.mLabels {
		if IsRemoval(l.stateStatus) {
			continue
		}
		stateLabelsByResName[l.parserLabel.Name()] = l
	}

	var mappings []stateLabelMapping
	for _, b := range state.mBuckets {
		if IsRemoval(b.stateStatus) {
			continue
		}
		mm, err := s.dryRunResourceLabelMapping(ctx, state, stateLabelsByResName, b)
		if err != nil {
			return nil, err
		}
		mappings = append(mappings, mm...)
	}

	for _, c := range state.mChecks {
		if IsRemoval(c.stateStatus) {
			continue
		}
		mm, err := s.dryRunResourceLabelMapping(ctx, state, stateLabelsByResName, c)
		if err != nil {
			return nil, err
		}
		mappings = append(mappings, mm...)
	}

	for _, d := range state.mDashboards {
		if IsRemoval(d.stateStatus) {
			continue
		}
		mm, err := s.dryRunResourceLabelMapping(ctx, state, stateLabelsByResName, d)
		if err != nil {
			return nil, err
		}
		mappings = append(mappings, mm...)
	}

	for _, e := range state.mEndpoints {
		if IsRemoval(e.stateStatus) {
			continue
		}
		mm, err := s.dryRunResourceLabelMapping(ctx, state, stateLabelsByResName, e)
		if err != nil {
			return nil, err
		}
		mappings = append(mappings, mm...)
	}

	for _, r := range state.mRules {
		if IsRemoval(r.stateStatus) {
			continue
		}
		mm, err := s.dryRunResourceLabelMapping(ctx, state, stateLabelsByResName, r)
		if err != nil {
			return nil, err
		}
		mappings = append(mappings, mm...)
	}

	for _, t := range state.mTasks {
		if IsRemoval(t.stateStatus) {
			continue
		}
		mm, err := s.dryRunResourceLabelMapping(ctx, state, stateLabelsByResName, t)
		if err != nil {
			return nil, err
		}
		mappings = append(mappings, mm...)
	}

	for _, t := range state.mTelegrafs {
		if IsRemoval(t.stateStatus) {
			continue
		}
		mm, err := s.dryRunResourceLabelMapping(ctx, state, stateLabelsByResName, t)
		if err != nil {
			return nil, err
		}
		mappings = append(mappings, mm...)
	}

	for _, v := range state.mVariables {
		if IsRemoval(v.stateStatus) {
			continue
		}
		mm, err := s.dryRunResourceLabelMapping(ctx, state, stateLabelsByResName, v)
		if err != nil {
			return nil, err
		}
		mappings = append(mappings, mm...)
	}

	return mappings, nil
}

func (s *Service) dryRunResourceLabelMapping(ctx context.Context, state *stateCoordinator, stateLabelsByResName map[string]*stateLabel, associatedResource interface {
	labels() []*stateLabel
	stateIdentity() stateIdentity
}) ([]stateLabelMapping, error) {

	ident := associatedResource.stateIdentity()
	templateResourceLabels := associatedResource.labels()

	var mappings []stateLabelMapping
	if !ident.exists() {
		for _, l := range templateResourceLabels {
			mappings = append(mappings, stateLabelMapping{
				status:   StateStatusNew,
				resource: associatedResource,
				label:    l,
			})
		}
		return mappings, nil
	}

	existingLabels, err := s.labelSVC.FindResourceLabels(ctx, influxdb.LabelMappingFilter{
		ResourceID:   ident.id,
		ResourceType: ident.resourceType,
	})
	if err != nil && errors2.ErrorCode(err) != errors2.ENotFound {
		msgFmt := fmt.Sprintf("failed to find labels mappings for %s resource[%q]", ident.resourceType, ident.id)
		return nil, ierrors.Wrap(err, msgFmt)
	}

	templateLabels := labelSlcToMap(templateResourceLabels)
	for _, l := range existingLabels {
		// if the label is found in state then we track the mapping and mark it
		// existing; otherwise we continue on
		delete(templateLabels, l.Name)
		if sLabel, ok := stateLabelsByResName[l.Name]; ok {
			mappings = append(mappings, stateLabelMapping{
				status:   StateStatusExists,
				resource: associatedResource,
				label:    sLabel,
			})
		}
	}

	// now we add labels that do not exist
	for _, l := range templateLabels {
		stLabel, found := state.getLabelByMetaName(l.MetaName())
		if !found {
			continue
		}
		mappings = append(mappings, stateLabelMapping{
			status:   StateStatusNew,
			resource: associatedResource,
			label:    stLabel,
		})
	}

	return mappings, nil
}

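// Illustrative sketch (not part of the original source): how the
// reconciliation above plays out for a hypothetical existing bucket that
// already carries label "L1" while its template entry lists labels "L1" and
// "L2":
//
//	existingLabels = [L1]     // from labelSVC.FindResourceLabels
//	templateLabels = {L1, L2} // from labelSlcToMap, keyed by name
//	// pass 1: L1 is deleted from templateLabels and recorded as a
//	// StateStatusExists mapping, since it is already attached.
//	// pass 2: L2 remains, resolves via getLabelByMetaName, and is recorded
//	// as a StateStatusNew mapping to be created on apply.
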
func (s *Service) addStackState(ctx context.Context, stackID platform.ID, state *stateCoordinator) error {
	stack, err := s.store.ReadStackByID(ctx, stackID)
	if err != nil {
		return ierrors.Wrap(err, "reading stack")
	}

	state.addStackState(stack)
	return nil
}

type (
	// ApplyOpt is an option for applying a package.
	ApplyOpt struct {
		Templates       []*Template
		EnvRefs         map[string]interface{}
		MissingSecrets  map[string]string
		StackID         platform.ID
		ResourcesToSkip map[ActionSkipResource]bool
		KindsToSkip     map[Kind]bool
	}

	// ActionSkipResource is an action from the consumer to skip a single
	// resource, identified by its kind and template meta name, when the
	// template is applied.
	ActionSkipResource struct {
		Kind     Kind   `json:"kind"`
		MetaName string `json:"resourceTemplateName"`
	}

	// ActionSkipKind is an action from the consumer to skip all resources of
	// the given kind when the template is applied.
	ActionSkipKind struct {
		Kind Kind `json:"kind"`
	}

	// ApplyOptFn updates the ApplyOpt per the functional option.
	ApplyOptFn func(opt *ApplyOpt)
)

// ApplyWithEnvRefs provides env refs to saturate the missing reference fields in the template.
func ApplyWithEnvRefs(envRefs map[string]interface{}) ApplyOptFn {
	return func(o *ApplyOpt) {
		o.EnvRefs = envRefs
	}
}

// ApplyWithTemplate provides a template to the application/dry run.
func ApplyWithTemplate(template *Template) ApplyOptFn {
	return func(opt *ApplyOpt) {
		opt.Templates = append(opt.Templates, template)
	}
}

// ApplyWithResourceSkip provides an action to skip a resource in the application of a template.
func ApplyWithResourceSkip(action ActionSkipResource) ApplyOptFn {
	return func(opt *ApplyOpt) {
		if opt.ResourcesToSkip == nil {
			opt.ResourcesToSkip = make(map[ActionSkipResource]bool)
		}
		switch action.Kind {
		case KindCheckDeadman, KindCheckThreshold:
			action.Kind = KindCheck
		case KindNotificationEndpointHTTP,
			KindNotificationEndpointPagerDuty,
			KindNotificationEndpointSlack:
			action.Kind = KindNotificationEndpoint
		}
		opt.ResourcesToSkip[action] = true
	}
}

// ApplyWithKindSkip provides an action to skip a kind in the application of a template.
func ApplyWithKindSkip(action ActionSkipKind) ApplyOptFn {
	return func(opt *ApplyOpt) {
		if opt.KindsToSkip == nil {
			opt.KindsToSkip = make(map[Kind]bool)
		}
		switch action.Kind {
		case KindCheckDeadman, KindCheckThreshold:
			action.Kind = KindCheck
		case KindNotificationEndpointHTTP,
			KindNotificationEndpointPagerDuty,
			KindNotificationEndpointSlack:
			action.Kind = KindNotificationEndpoint
		}
		opt.KindsToSkip[action.Kind] = true
	}
}

// ApplyWithSecrets provides secrets to the platform that the template will need.
func ApplyWithSecrets(secrets map[string]string) ApplyOptFn {
	return func(o *ApplyOpt) {
		o.MissingSecrets = secrets
	}
}

// ApplyWithStackID associates the application of a template with a stack.
func ApplyWithStackID(stackID platform.ID) ApplyOptFn {
	return func(o *ApplyOpt) {
		o.StackID = stackID
	}
}

func applyOptFromOptFns(opts ...ApplyOptFn) ApplyOpt {
	var opt ApplyOpt
	for _, o := range opts {
		o(&opt)
	}
	return opt
}

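// Illustrative sketch (not part of the original source): composing functional
// options into a single ApplyOpt. The values below are hypothetical.
//
//	opt := applyOptFromOptFns(
//		ApplyWithTemplate(template),
//		ApplyWithStackID(stackID),
//		ApplyWithSecrets(map[string]string{"SLACK_TOKEN": "xoxb-..."}),
//		ApplyWithKindSkip(ActionSkipKind{Kind: KindTask}),
//	)
//	// opt.Templates, opt.StackID, opt.MissingSecrets, and opt.KindsToSkip are
//	// now populated; Apply and DryRun consume the same options.
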
// Apply will apply all the resources identified in the provided template. The
// template is applied in its entirety: if a failure happens midway, everything
// is rolled back to the state from before the template was applied.
func (s *Service) Apply(ctx context.Context, orgID, userID platform.ID, opts ...ApplyOptFn) (impact ImpactSummary, e error) {
	opt := applyOptFromOptFns(opts...)

	template, err := s.templateFromApplyOpts(ctx, opt)
	if err != nil {
		return ImpactSummary{}, err
	}

	if err := template.Validate(ValidWithoutResources()); err != nil {
		return ImpactSummary{}, failedValidationErr(err)
	}

	if err := template.applyEnvRefs(opt.EnvRefs); err != nil {
		return ImpactSummary{}, failedValidationErr(err)
	}

	state, err := s.dryRun(ctx, orgID, template, opt)
	if err != nil {
		return ImpactSummary{}, err
	}

	stackID := opt.StackID
	// if a stackID is not provided, a new stack is created for the application.
	if stackID == 0 {
		newStack, err := s.InitStack(ctx, userID, StackCreate{OrgID: orgID})
		if err != nil {
			return ImpactSummary{}, err
		}
		stackID = newStack.ID
	}

	defer func(stackID platform.ID) {
		updateStackFn := s.updateStackAfterSuccess
		if e != nil {
			updateStackFn = s.updateStackAfterRollback
			if opt.StackID == 0 {
				if err := s.store.DeleteStack(ctx, stackID); err != nil {
					s.log.Error("failed to delete created stack", zap.Error(err))
				}
			}
		}

		err := updateStackFn(ctx, stackID, state, template.Sources())
		if err != nil {
			s.log.Error("failed to update stack", zap.Error(err))
		}
	}(stackID)

	coordinator := newRollbackCoordinator(s.log, s.applyReqLimit)
	defer coordinator.rollback(s.log, &e, orgID)

	err = s.applyState(ctx, coordinator, orgID, userID, state, opt.MissingSecrets)
	if err != nil {
		return ImpactSummary{}, err
	}

	template.applySecrets(opt.MissingSecrets)

	return ImpactSummary{
		Sources: template.sources,
		StackID: stackID,
		Diff:    state.diff(),
		Summary: newSummaryFromStateTemplate(state, template),
	}, nil
}

func (s *Service) applyState(ctx context.Context, coordinator *rollbackCoordinator, orgID, userID platform.ID, state *stateCoordinator, missingSecrets map[string]string) (e error) {
	endpointApp, ruleApp, err := s.applyNotificationGenerator(ctx, userID, state.rules(), state.endpoints())
	if err != nil {
		return ierrors.Wrap(err, "failed to setup notification generator")
	}

	// each grouping here runs for its entirety, then returns an error that
	// is indicative of running all appliers provided. For instance, one
	// variable may fail and one of the buckets may fail. The errors aggregate so
	// the caller will be informed of both the failed variable and the failed bucket.
	// the groupings here allow for steps to occur before exiting. The first step is
	// adding the dependencies, resources that are associated by other resources. Then the
	// primary resources. Here we get all the errors associated with them.
	// If those are all good, then we run the secondary (dependent) resources which
	// rely on the primary resources having been created.
	appliers := [][]applier{
		{
			// adds secrets that are referenced in the template; this allows the user to
			// provide data that does not rest in the template.
			s.applySecrets(missingSecrets),
		},
		{
			// deps for primary resources
			s.applyLabels(ctx, state.labels()),
		},
		{
			// primary resources, can have relationships to labels
			s.applyVariables(ctx, state.variables()),
			s.applyBuckets(ctx, state.buckets()),
			s.applyChecks(ctx, state.checks()),
			s.applyDashboards(ctx, state.dashboards()),
			endpointApp,
			s.applyTasks(ctx, state.tasks()),
			s.applyTelegrafs(ctx, userID, state.telegrafConfigs()),
		},
	}

	for _, group := range appliers {
		if err := coordinator.runTilEnd(ctx, orgID, userID, group...); err != nil {
			return internalErr(err)
		}
	}

	// this has to be run after the above primary resources, because it relies on
	// notification endpoints already being applied.
	if err := coordinator.runTilEnd(ctx, orgID, userID, ruleApp); err != nil {
		return err
	}

	// secondary resources
	// this last grouping relies on the above 2 steps having completed successfully
	secondary := []applier{
		s.applyLabelMappings(ctx, state.labelMappings),
		s.removeLabelMappings(ctx, state.labelMappingsToRemove),
	}
	if err := coordinator.runTilEnd(ctx, orgID, userID, secondary...); err != nil {
		return internalErr(err)
	}

	return nil
}

func (s *Service) applyBuckets(ctx context.Context, buckets []*stateBucket) applier {
	const resource = "bucket"

	mutex := new(doMutex)
	rollbackBuckets := make([]*stateBucket, 0, len(buckets))

	createFn := func(ctx context.Context, i int, orgID, userID platform.ID) *applyErrBody {
		var b *stateBucket
		mutex.Do(func() {
			buckets[i].orgID = orgID
			b = buckets[i]
		})
		if !b.shouldApply() {
			return nil
		}

		influxBucket, err := s.applyBucket(ctx, b)
		if err != nil {
			return &applyErrBody{
				name: b.parserBkt.MetaName(),
				msg:  err.Error(),
			}
		}

		mutex.Do(func() {
			buckets[i].id = influxBucket.ID
			rollbackBuckets = append(rollbackBuckets, buckets[i])
		})

		return nil
	}

	return applier{
		creater: creater{
			entries: len(buckets),
			fn:      createFn,
		},
		rollbacker: rollbacker{
			resource: resource,
			fn:       func(_ platform.ID) error { return s.rollbackBuckets(ctx, rollbackBuckets) },
		},
	}
}

func (s *Service) rollbackBuckets(ctx context.Context, buckets []*stateBucket) error {
	rollbackFn := func(b *stateBucket) error {
		if !IsNew(b.stateStatus) && b.existing == nil || isSystemBucket(b.existing) {
			return nil
		}

		var err error
		switch {
		case IsRemoval(b.stateStatus):
			err = ierrors.Wrap(s.bucketSVC.CreateBucket(ctx, b.existing), "rolling back removed bucket")
		case IsExisting(b.stateStatus):
			_, err = s.bucketSVC.UpdateBucket(ctx, b.ID(), influxdb.BucketUpdate{
				Description:     &b.existing.Description,
				RetentionPeriod: &b.existing.RetentionPeriod,
			})
			err = ierrors.Wrap(err, "rolling back existing bucket to previous state")
		default:
			err = ierrors.Wrap(s.bucketSVC.DeleteBucket(ctx, b.ID()), "rolling back new bucket")
		}
		return err
	}

	var errs []string
	for _, b := range buckets {
		if err := rollbackFn(b); err != nil {
			errs = append(errs, fmt.Sprintf("error for bucket[%q]: %s", b.ID(), err))
		}
	}

	if len(errs) > 0 {
		// TODO: fixup error
		return errors.New(strings.Join(errs, ", "))
	}

	return nil
}

func (s *Service) applyBucket(ctx context.Context, b *stateBucket) (influxdb.Bucket, error) {
	if isSystemBucket(b.existing) {
		return *b.existing, nil
	}
	switch {
	case IsRemoval(b.stateStatus):
		if err := s.bucketSVC.DeleteBucket(ctx, b.ID()); err != nil {
			if errors2.ErrorCode(err) == errors2.ENotFound {
				return influxdb.Bucket{}, nil
			}
			return influxdb.Bucket{}, applyFailErr("delete", b.stateIdentity(), err)
		}
		return *b.existing, nil
	case IsExisting(b.stateStatus) && b.existing != nil:
		rp := b.parserBkt.RetentionRules.RP()
		newName := b.parserBkt.Name()
		influxBucket, err := s.bucketSVC.UpdateBucket(ctx, b.ID(), influxdb.BucketUpdate{
			Description:     &b.parserBkt.Description,
			Name:            &newName,
			RetentionPeriod: &rp,
		})
		if err != nil {
			return influxdb.Bucket{}, applyFailErr("update", b.stateIdentity(), err)
		}
		return *influxBucket, nil
	default:
		rp := b.parserBkt.RetentionRules.RP()
		influxBucket := influxdb.Bucket{
			OrgID:           b.orgID,
			Description:     b.parserBkt.Description,
			Name:            b.parserBkt.Name(),
			RetentionPeriod: rp,
		}
		err := s.bucketSVC.CreateBucket(ctx, &influxBucket)
		if err != nil {
			return influxdb.Bucket{}, applyFailErr("create", b.stateIdentity(), err)
		}
		return influxBucket, nil
	}
}

func (s *Service) applyChecks(ctx context.Context, checks []*stateCheck) applier {
	const resource = "check"

	mutex := new(doMutex)
	rollbackChecks := make([]*stateCheck, 0, len(checks))

	createFn := func(ctx context.Context, i int, orgID, userID platform.ID) *applyErrBody {
		var c *stateCheck
		mutex.Do(func() {
			checks[i].orgID = orgID
			c = checks[i]
		})

		influxCheck, err := s.applyCheck(ctx, c, userID)
		if err != nil {
			return &applyErrBody{
				name: c.parserCheck.MetaName(),
				msg:  err.Error(),
			}
		}

		mutex.Do(func() {
			checks[i].id = influxCheck.GetID()
			rollbackChecks = append(rollbackChecks, checks[i])
		})

		return nil
	}

	return applier{
		creater: creater{
			entries: len(checks),
			fn:      createFn,
		},
		rollbacker: rollbacker{
			resource: resource,
			fn:       func(_ platform.ID) error { return s.rollbackChecks(ctx, rollbackChecks) },
		},
	}
}

func (s *Service) rollbackChecks(ctx context.Context, checks []*stateCheck) error {
	rollbackFn := func(c *stateCheck) error {
		var err error
		switch {
		case IsRemoval(c.stateStatus):
			err = s.checkSVC.CreateCheck(
				ctx,
				influxdb.CheckCreate{
					Check:  c.existing,
					Status: c.parserCheck.Status(),
				},
				c.existing.GetOwnerID(),
			)
			c.id = c.existing.GetID()
		case IsExisting(c.stateStatus):
			if c.existing == nil {
				return nil
			}
			_, err = s.checkSVC.UpdateCheck(ctx, c.ID(), influxdb.CheckCreate{
				Check:  c.summarize().Check,
				Status: influxdb.Status(c.parserCheck.status),
			})
		default:
			err = s.checkSVC.DeleteCheck(ctx, c.ID())
		}
		return err
	}

	var errs []string
	for _, c := range checks {
		if err := rollbackFn(c); err != nil {
			errs = append(errs, fmt.Sprintf("error for check[%q]: %s", c.ID(), err))
		}
	}

	if len(errs) > 0 {
		return errors.New(strings.Join(errs, "; "))
	}

	return nil
}

func (s *Service) applyCheck(ctx context.Context, c *stateCheck, userID platform.ID) (influxdb.Check, error) {
	switch {
	case IsRemoval(c.stateStatus):
		if err := s.checkSVC.DeleteCheck(ctx, c.ID()); err != nil {
			if errors2.ErrorCode(err) == errors2.ENotFound {
				return &icheck.Threshold{Base: icheck.Base{ID: c.ID()}}, nil
			}
			return nil, applyFailErr("delete", c.stateIdentity(), err)
		}
		return c.existing, nil
	case IsExisting(c.stateStatus) && c.existing != nil:
		influxCheck, err := s.checkSVC.UpdateCheck(ctx, c.ID(), influxdb.CheckCreate{
			Check:  c.summarize().Check,
			Status: c.parserCheck.Status(),
		})
		if err != nil {
			return nil, applyFailErr("update", c.stateIdentity(), err)
		}
		return influxCheck, nil
	default:
		checkStub := influxdb.CheckCreate{
			Check:  c.summarize().Check,
			Status: c.parserCheck.Status(),
		}
		err := s.checkSVC.CreateCheck(ctx, checkStub, userID)
		if err != nil {
			return nil, applyFailErr("create", c.stateIdentity(), err)
		}
		return checkStub.Check, nil
	}
}

func (s *Service) applyDashboards(ctx context.Context, dashboards []*stateDashboard) applier {
	const resource = "dashboard"

	mutex := new(doMutex)
	rollbackDashboards := make([]*stateDashboard, 0, len(dashboards))

	createFn := func(ctx context.Context, i int, orgID, userID platform.ID) *applyErrBody {
		var d *stateDashboard
		mutex.Do(func() {
			dashboards[i].orgID = orgID
			d = dashboards[i]
		})

		influxBucket, err := s.applyDashboard(ctx, d)
		if err != nil {
			return &applyErrBody{
				name: d.parserDash.MetaName(),
				msg:  err.Error(),
			}
		}

		mutex.Do(func() {
			dashboards[i].id = influxBucket.ID
			rollbackDashboards = append(rollbackDashboards, dashboards[i])
		})
		return nil
	}

	return applier{
		creater: creater{
			entries: len(dashboards),
			fn:      createFn,
		},
		rollbacker: rollbacker{
			resource: resource,
			fn: func(_ platform.ID) error {
				return s.rollbackDashboards(ctx, rollbackDashboards)
			},
		},
	}
}

func (s *Service) applyDashboard(ctx context.Context, d *stateDashboard) (influxdb.Dashboard, error) {
	switch {
	case IsRemoval(d.stateStatus):
		if err := s.dashSVC.DeleteDashboard(ctx, d.ID()); err != nil {
			if errors2.ErrorCode(err) == errors2.ENotFound {
				return influxdb.Dashboard{}, nil
			}
			return influxdb.Dashboard{}, applyFailErr("delete", d.stateIdentity(), err)
		}
		return *d.existing, nil
	case IsExisting(d.stateStatus) && d.existing != nil:
		name := d.parserDash.Name()
		cells := convertChartsToCells(d.parserDash.Charts)
		dash, err := s.dashSVC.UpdateDashboard(ctx, d.ID(), influxdb.DashboardUpdate{
			Name:        &name,
			Description: &d.parserDash.Description,
			Cells:       &cells,
		})
		if err != nil {
			return influxdb.Dashboard{}, applyFailErr("update", d.stateIdentity(), err)
		}
		return *dash, nil
	default:
		cells := convertChartsToCells(d.parserDash.Charts)
		influxDashboard := influxdb.Dashboard{
			OrganizationID: d.orgID,
			Description:    d.parserDash.Description,
			Name:           d.parserDash.Name(),
			Cells:          cells,
		}
		err := s.dashSVC.CreateDashboard(ctx, &influxDashboard)
		if err != nil {
			return influxdb.Dashboard{}, applyFailErr("create", d.stateIdentity(), err)
		}
		return influxDashboard, nil
	}
}

|
|
|
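
// rollbackDashboards reverses applied dashboard changes: removed dashboards
// are recreated, updated dashboards are restored to their prior state, and
// newly created dashboards are deleted. Failures are collected so every
// dashboard gets a rollback attempt.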
func (s *Service) rollbackDashboards(ctx context.Context, dashs []*stateDashboard) error {
	rollbackFn := func(d *stateDashboard) error {
		if !IsNew(d.stateStatus) && d.existing == nil {
			return nil
		}

		var err error
		switch {
		case IsRemoval(d.stateStatus):
			err = ierrors.Wrap(s.dashSVC.CreateDashboard(ctx, d.existing), "rolling back removed dashboard")
		case IsExisting(d.stateStatus):
			_, err := s.dashSVC.UpdateDashboard(ctx, d.ID(), influxdb.DashboardUpdate{
				Name:        &d.existing.Name,
				Description: &d.existing.Description,
				Cells:       &d.existing.Cells,
			})
			return ierrors.Wrap(err, "failed to update dashboard")
		default:
			err = ierrors.Wrap(s.dashSVC.DeleteDashboard(ctx, d.ID()), "rolling back new dashboard")
		}
		return err
	}

	var errs []string
	for _, d := range dashs {
		if err := rollbackFn(d); err != nil {
			errs = append(errs, fmt.Sprintf("error for dashboard[%q]: %s", d.ID(), err))
		}
	}

	if len(errs) > 0 {
		// TODO: fixup error
		return errors.New(strings.Join(errs, ", "))
	}

	return nil
}
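
// convertChartsToCells maps the template's chart definitions onto dashboard
// cells, carrying position, size, and the chart's view properties across.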
func convertChartsToCells(ch []*chart) []*influxdb.Cell {
	icells := make([]*influxdb.Cell, 0, len(ch))
	for _, c := range ch {
		icell := &influxdb.Cell{
			CellProperty: influxdb.CellProperty{
				X: int32(c.XPos),
				Y: int32(c.YPos),
				H: int32(c.Height),
				W: int32(c.Width),
			},
			View: &influxdb.View{
				ViewContents: influxdb.ViewContents{Name: c.Name},
				Properties:   c.properties(),
			},
		}
		icells = append(icells, icell)
	}
	return icells
}
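
// applyLabels returns an applier for the templated labels, skipping labels
// that require no change and tracking the applied ones for rollback.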
func (s *Service) applyLabels(ctx context.Context, labels []*stateLabel) applier {
	const resource = "label"

	mutex := new(doMutex)
	rollBackLabels := make([]*stateLabel, 0, len(labels))

	createFn := func(ctx context.Context, i int, orgID, userID platform.ID) *applyErrBody {
		var l *stateLabel
		mutex.Do(func() {
			labels[i].orgID = orgID
			l = labels[i]
		})
		if !l.shouldApply() {
			return nil
		}

		influxLabel, err := s.applyLabel(ctx, l)
		if err != nil {
			return &applyErrBody{
				name: l.parserLabel.MetaName(),
				msg:  err.Error(),
			}
		}

		mutex.Do(func() {
			labels[i].id = influxLabel.ID
			rollBackLabels = append(rollBackLabels, labels[i])
		})

		return nil
	}

	return applier{
		creater: creater{
			entries: len(labels),
			fn:      createFn,
		},
		rollbacker: rollbacker{
			resource: resource,
			fn:       func(_ platform.ID) error { return s.rollbackLabels(ctx, rollBackLabels) },
		},
	}
}
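
// rollbackLabels reverses applied label changes: removed labels are recreated,
// updated labels are restored, and newly created labels are deleted.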
func (s *Service) rollbackLabels(ctx context.Context, labels []*stateLabel) error {
	rollbackFn := func(l *stateLabel) error {
		if !IsNew(l.stateStatus) && l.existing == nil {
			return nil
		}

		var err error
		switch {
		case IsRemoval(l.stateStatus):
			err = s.labelSVC.CreateLabel(ctx, l.existing)
		case IsExisting(l.stateStatus):
			_, err = s.labelSVC.UpdateLabel(ctx, l.ID(), influxdb.LabelUpdate{
				Name:       l.parserLabel.Name(),
				Properties: l.existing.Properties,
			})
		default:
			err = s.labelSVC.DeleteLabel(ctx, l.ID())
		}
		return err
	}

	var errs []string
	for _, l := range labels {
		if err := rollbackFn(l); err != nil {
			errs = append(errs, fmt.Sprintf("error for label[%q]: %s", l.ID(), err))
		}
	}

	if len(errs) > 0 {
		return errors.New(strings.Join(errs, ", "))
	}

	return nil
}
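
// applyLabel reconciles a single label against its desired state; an ENotFound
// from the underlying service is swallowed since the label is already absent.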
func (s *Service) applyLabel(ctx context.Context, l *stateLabel) (influxdb.Label, error) {
	var (
		influxLabel *influxdb.Label
		err         error
	)
	switch {
	case IsRemoval(l.stateStatus):
		influxLabel, err = l.existing, s.labelSVC.DeleteLabel(ctx, l.ID())
	case IsExisting(l.stateStatus) && l.existing != nil:
		influxLabel, err = s.labelSVC.UpdateLabel(ctx, l.ID(), influxdb.LabelUpdate{
			Name:       l.parserLabel.Name(),
			Properties: l.properties(),
		})
		err = ierrors.Wrap(err, "updating")
	default:
		createLabel := l.toInfluxLabel()
		influxLabel = &createLabel
		err = ierrors.Wrap(s.labelSVC.CreateLabel(ctx, &createLabel), "creating")
	}
	if errors2.ErrorCode(err) == errors2.ENotFound {
		return influxdb.Label{}, nil
	}
	if err != nil || influxLabel == nil {
		return influxdb.Label{}, err
	}

	return *influxLabel, nil
}
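
// applyNotificationEndpoints returns an applier for notification endpoints
// along with a standalone rollback fn; the applier's own rollbacker is a no-op
// so the caller can sequence the real rollback ahead of the notification rule
// rollback (see applyNotificationGenerator). Secret fields reported by the
// service are copied back onto the parsed endpoint.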
func (s *Service) applyNotificationEndpoints(ctx context.Context, userID platform.ID, endpoints []*stateEndpoint) (applier, func(platform.ID) error) {
	mutex := new(doMutex)
	rollbackEndpoints := make([]*stateEndpoint, 0, len(endpoints))

	createFn := func(ctx context.Context, i int, orgID, userID platform.ID) *applyErrBody {
		var endpoint *stateEndpoint
		mutex.Do(func() {
			endpoints[i].orgID = orgID
			endpoint = endpoints[i]
		})

		influxEndpoint, err := s.applyNotificationEndpoint(ctx, endpoint, userID)
		if err != nil {
			return &applyErrBody{
				name: endpoint.parserEndpoint.MetaName(),
				msg:  err.Error(),
			}
		}

		mutex.Do(func() {
			if influxEndpoint != nil {
				endpoints[i].id = influxEndpoint.GetID()
				for _, secret := range influxEndpoint.SecretFields() {
					switch {
					case strings.HasSuffix(secret.Key, "-routing-key"):
						if endpoints[i].parserEndpoint.routingKey == nil {
							endpoints[i].parserEndpoint.routingKey = new(references)
						}
						endpoints[i].parserEndpoint.routingKey.Secret = secret.Key
					case strings.HasSuffix(secret.Key, "-token"):
						if endpoints[i].parserEndpoint.token == nil {
							endpoints[i].parserEndpoint.token = new(references)
						}
						endpoints[i].parserEndpoint.token.Secret = secret.Key
					case strings.HasSuffix(secret.Key, "-username"):
						if endpoints[i].parserEndpoint.username == nil {
							endpoints[i].parserEndpoint.username = new(references)
						}
						endpoints[i].parserEndpoint.username.Secret = secret.Key
					case strings.HasSuffix(secret.Key, "-password"):
						if endpoints[i].parserEndpoint.password == nil {
							endpoints[i].parserEndpoint.password = new(references)
						}
						endpoints[i].parserEndpoint.password.Secret = secret.Key
					}
				}
			}
			rollbackEndpoints = append(rollbackEndpoints, endpoints[i])
		})

		return nil
	}

	rollbackFn := func(_ platform.ID) error {
		return s.rollbackNotificationEndpoints(ctx, userID, rollbackEndpoints)
	}

	return applier{
		creater: creater{
			entries: len(endpoints),
			fn:      createFn,
		},
		rollbacker: rollbacker{
			fn: func(_ platform.ID) error {
				return nil
			},
		},
	}, rollbackFn
}
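
// applyNotificationEndpoint reconciles a single notification endpoint against
// its desired state, deferring to the endpoint service for each transition.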
func (s *Service) applyNotificationEndpoint(ctx context.Context, e *stateEndpoint, userID platform.ID) (influxdb.NotificationEndpoint, error) {
	switch {
	case IsRemoval(e.stateStatus):
		_, _, err := s.endpointSVC.DeleteNotificationEndpoint(ctx, e.ID())
		if err != nil && errors2.ErrorCode(err) != errors2.ENotFound {
			return nil, applyFailErr("delete", e.stateIdentity(), err)
		}
		return e.existing, nil
	case IsExisting(e.stateStatus) && e.existing != nil:
		// stub out userID since we're always using the http client, which will
		// fill it in for us with the token. It feels a bit broken that it is
		// required.
		// TODO: look into this userID requirement
		end, err := s.endpointSVC.UpdateNotificationEndpoint(
			ctx,
			e.ID(),
			e.summarize().NotificationEndpoint,
			userID,
		)
		return end, applyFailErr("update", e.stateIdentity(), err)
	default:
		actual := e.summarize().NotificationEndpoint
		err := s.endpointSVC.CreateNotificationEndpoint(ctx, actual, userID)
		if err != nil {
			return nil, applyFailErr("create", e.stateIdentity(), err)
		}
		return actual, nil
	}
}
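
// rollbackNotificationEndpoints reverses applied endpoint changes, collecting
// per-endpoint failures into a single error.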
func (s *Service) rollbackNotificationEndpoints(ctx context.Context, userID platform.ID, endpoints []*stateEndpoint) error {
	rollbackFn := func(e *stateEndpoint) error {
		if !IsNew(e.stateStatus) && e.existing == nil {
			return nil
		}
		var err error
		switch e.stateStatus {
		case StateStatusRemove:
			err = s.endpointSVC.CreateNotificationEndpoint(ctx, e.existing, userID)
			err = ierrors.Wrap(err, "failed to rollback removed endpoint")
		case StateStatusExists:
			_, err = s.endpointSVC.UpdateNotificationEndpoint(ctx, e.ID(), e.existing, userID)
			err = ierrors.Wrap(err, "failed to rollback updated endpoint")
		default:
			_, _, err = s.endpointSVC.DeleteNotificationEndpoint(ctx, e.ID())
			err = ierrors.Wrap(err, "failed to rollback created endpoint")
		}
		return err
	}

	var errs []string
	for _, e := range endpoints {
		if err := rollbackFn(e); err != nil {
			errs = append(errs, fmt.Sprintf("error for notification endpoint[%q]: %s", e.ID(), err))
		}
	}

	if len(errs) > 0 {
		return errors.New(strings.Join(errs, "; "))
	}

	return nil
}
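
// applyNotificationGenerator wires notification rules to the endpoints they
// depend on, failing fast when a rule references an endpoint that is not part
// of the template, and returns the two appliers with their rollbacks coupled.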
func (s *Service) applyNotificationGenerator(ctx context.Context, userID platform.ID, rules []*stateRule, stateEndpoints []*stateEndpoint) (endpointApplier applier, ruleApplier applier, err error) {
	mEndpoints := make(map[string]*stateEndpoint)
	for _, e := range stateEndpoints {
		mEndpoints[e.parserEndpoint.MetaName()] = e
	}

	var errs applyErrs
	for _, r := range rules {
		if IsRemoval(r.stateStatus) {
			continue
		}
		v, ok := mEndpoints[r.endpointTemplateName()]
		if !ok {
			errs = append(errs, &applyErrBody{
				name: r.parserRule.MetaName(),
				msg:  fmt.Sprintf("notification rule endpoint dependency does not exist; endpointName=%q", r.parserRule.associatedEndpoint.MetaName()),
			})
			continue
		}
		r.associatedEndpoint = v
	}

	err = errs.toError("notification_rules", "failed to find dependency")
	if err != nil {
		return applier{}, applier{}, err
	}

	endpointApp, endpointRollbackFn := s.applyNotificationEndpoints(ctx, userID, stateEndpoints)
	ruleApp, ruleRollbackFn := s.applyNotificationRules(ctx, userID, rules)

	// The endpoints have to be coupled to the rules because of the dependency
	// between them when rolling back a deleted endpoint and rule. This forces
	// the endpoints to be rolled back first so the rule's endpoint reference
	// has settled; the dependency must be available before rolling back
	// notification rules.
	endpointApp.rollbacker = rollbacker{
		fn: func(orgID platform.ID) error {
			if err := endpointRollbackFn(orgID); err != nil {
				s.log.Error("failed to roll back endpoints", zap.Error(err))
			}
			return ruleRollbackFn(orgID)
		},
	}

	return endpointApp, ruleApp, nil
}
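
// applyNotificationRules returns an applier for notification rules plus a
// standalone rollback fn, which applyNotificationGenerator chains to run after
// the endpoint rollback.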
func (s *Service) applyNotificationRules(ctx context.Context, userID platform.ID, rules []*stateRule) (applier, func(platform.ID) error) {
	mutex := new(doMutex)
	rollbackRules := make([]*stateRule, 0, len(rules))

	createFn := func(ctx context.Context, i int, orgID, userID platform.ID) *applyErrBody {
		var rule *stateRule
		mutex.Do(func() {
			rules[i].orgID = orgID
			rule = rules[i]
		})

		influxRule, err := s.applyNotificationRule(ctx, rule, userID)
		if err != nil {
			return &applyErrBody{
				name: rule.parserRule.MetaName(),
				msg:  err.Error(),
			}
		}

		mutex.Do(func() {
			if influxRule != nil {
				rules[i].id = influxRule.GetID()
			}
			rollbackRules = append(rollbackRules, rules[i])
		})

		return nil
	}

	rollbackFn := func(_ platform.ID) error {
		return s.rollbackNotificationRules(ctx, userID, rollbackRules)
	}

	return applier{
		creater: creater{
			entries: len(rules),
			fn:      createFn,
		},
		rollbacker: rollbacker{
			fn: func(_ platform.ID) error { return nil },
		},
	}, rollbackFn
}
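
// applyNotificationRule reconciles a single notification rule against its
// desired state via the rule service.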
func (s *Service) applyNotificationRule(ctx context.Context, r *stateRule, userID platform.ID) (influxdb.NotificationRule, error) {
	switch {
	case IsRemoval(r.stateStatus):
		if err := s.ruleSVC.DeleteNotificationRule(ctx, r.ID()); err != nil {
			if errors2.ErrorCode(err) == errors2.ENotFound {
				return nil, nil
			}
			return nil, applyFailErr("delete", r.stateIdentity(), err)
		}
		return r.existing, nil
	case IsExisting(r.stateStatus) && r.existing != nil:
		ruleCreate := influxdb.NotificationRuleCreate{
			NotificationRule: r.toInfluxRule(),
			Status:           r.parserRule.Status(),
		}
		influxRule, err := s.ruleSVC.UpdateNotificationRule(ctx, r.ID(), ruleCreate, userID)
		if err != nil {
			return nil, applyFailErr("update", r.stateIdentity(), err)
		}
		return influxRule, nil
	default:
		influxRule := influxdb.NotificationRuleCreate{
			NotificationRule: r.toInfluxRule(),
			Status:           r.parserRule.Status(),
		}
		err := s.ruleSVC.CreateNotificationRule(ctx, influxRule, userID)
		if err != nil {
			return nil, applyFailErr("create", r.stateIdentity(), err)
		}
		return influxRule.NotificationRule, nil
	}
}
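
// rollbackNotificationRules reverses applied rule changes, reattaching each
// rule's endpoint dependency before recreating or restoring it.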
func (s *Service) rollbackNotificationRules(ctx context.Context, userID platform.ID, rules []*stateRule) error {
	rollbackFn := func(r *stateRule) error {
		if !IsNew(r.stateStatus) && r.existing == nil {
			return nil
		}

		existingRuleFn := func(endpointID platform.ID) influxdb.NotificationRule {
			switch rr := r.existing.(type) {
			case *rule.HTTP:
				rr.EndpointID = endpointID
			case *rule.PagerDuty:
				rr.EndpointID = endpointID
			case *rule.Slack:
				rr.EndpointID = endpointID
			}
			return r.existing
		}

		// Setting status to inactive here for a few reasons:
		//  1. we have no ability to find status via the Service, only to set it...
		//  2. we have no way of inspecting an existing rule and pulling status from it
		//  3. since this is a fallback condition, we set things to inactive, as a user
		//     is likely to follow up this failure by fixing their template then reapplying
		unknownStatus := influxdb.Inactive

		var err error
		switch r.stateStatus {
		case StateStatusRemove:
			if r.associatedEndpoint == nil {
				return internalErr(errors.New("failed to find endpoint dependency to rollback existing notification rule"))
			}
			influxRule := influxdb.NotificationRuleCreate{
				NotificationRule: existingRuleFn(r.endpointID()),
				Status:           unknownStatus,
			}
			err = s.ruleSVC.CreateNotificationRule(ctx, influxRule, userID)
			err = ierrors.Wrap(err, "failed to rollback removed notification rule")
		case StateStatusExists:
			if r.associatedEndpoint == nil {
				return internalErr(errors.New("failed to find endpoint dependency to rollback existing notification rule"))
			}

			influxRule := influxdb.NotificationRuleCreate{
				NotificationRule: existingRuleFn(r.endpointID()),
				Status:           unknownStatus,
			}
			_, err = s.ruleSVC.UpdateNotificationRule(ctx, r.ID(), influxRule, r.existing.GetOwnerID())
			err = ierrors.Wrap(err, "failed to rollback updated notification rule")
		default:
			err = s.ruleSVC.DeleteNotificationRule(ctx, r.ID())
			err = ierrors.Wrap(err, "failed to rollback created notification rule")
		}
		return err
	}

	var errs []string
	for _, r := range rules {
		if err := rollbackFn(r); err != nil {
			errs = append(errs, fmt.Sprintf("error for notification rule[%q]: %s", r.ID(), err))
		}
	}

	if len(errs) > 0 {
		return errors.New(strings.Join(errs, "; "))
	}
	return nil
}
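
// applySecrets writes the template's secrets in a single batch and records the
// keys that were written.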
func (s *Service) applySecrets(secrets map[string]string) applier {
	const resource = "secrets"

	if len(secrets) == 0 {
		return applier{
			rollbacker: rollbacker{fn: func(orgID platform.ID) error { return nil }},
		}
	}

	mutex := new(doMutex)
	rollbackSecrets := make([]string, 0)

	createFn := func(ctx context.Context, i int, orgID, userID platform.ID) *applyErrBody {
		err := s.secretSVC.PutSecrets(ctx, orgID, secrets)
		if err != nil {
			return &applyErrBody{name: "secrets", msg: err.Error()}
		}

		mutex.Do(func() {
			for key := range secrets {
				rollbackSecrets = append(rollbackSecrets, key)
			}
		})

		return nil
	}

	return applier{
		creater: creater{
			entries: 1,
			fn:      createFn,
		},
		rollbacker: rollbacker{
			resource: resource,
			fn: func(orgID platform.ID) error {
				return s.secretSVC.DeleteSecret(context.Background(), orgID)
			},
		},
	}
}
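
// applyTasks returns an applier that reconciles the templated tasks, recording
// each applied task for rollback.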
func (s *Service) applyTasks(ctx context.Context, tasks []*stateTask) applier {
	const resource = "tasks"

	mutex := new(doMutex)
	rollbackTasks := make([]*stateTask, 0, len(tasks))

	createFn := func(ctx context.Context, i int, orgID, userID platform.ID) *applyErrBody {
		var t *stateTask
		mutex.Do(func() {
			tasks[i].orgID = orgID
			t = tasks[i]
		})

		newTask, err := s.applyTask(ctx, userID, t)
		if err != nil {
			return &applyErrBody{
				name: t.parserTask.MetaName(),
				msg:  err.Error(),
			}
		}

		mutex.Do(func() {
			tasks[i].id = newTask.ID
			rollbackTasks = append(rollbackTasks, tasks[i])
		})

		return nil
	}

	return applier{
		creater: creater{
			entries: len(tasks),
			fn:      createFn,
		},
		rollbacker: rollbacker{
			resource: resource,
			fn: func(_ platform.ID) error {
				return s.rollbackTasks(ctx, rollbackTasks)
			},
		},
	}
}
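
// applyTask reconciles a single task against its desired state. Restricted
// tasks are returned untouched.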
func (s *Service) applyTask(ctx context.Context, userID platform.ID, t *stateTask) (taskmodel.Task, error) {
	if isRestrictedTask(t.existing) {
		return *t.existing, nil
	}
	switch {
	case IsRemoval(t.stateStatus):
		if err := s.taskSVC.DeleteTask(ctx, t.ID()); err != nil {
			if errors2.ErrorCode(err) == errors2.ENotFound {
				return taskmodel.Task{}, nil
			}
			return taskmodel.Task{}, applyFailErr("delete", t.stateIdentity(), err)
		}
		return *t.existing, nil
	case IsExisting(t.stateStatus) && t.existing != nil:
		newFlux := t.parserTask.flux()
		newStatus := string(t.parserTask.Status())
		opt := options.Options{
			Name: t.parserTask.Name(),
			Cron: t.parserTask.cron,
		}
		if every := t.parserTask.every; every > 0 {
			opt.Every.Parse(every.String())
		}
		if offset := t.parserTask.offset; offset > 0 {
			var off options.Duration
			if err := off.Parse(offset.String()); err == nil {
				opt.Offset = &off
			}
		}

		updatedTask, err := s.taskSVC.UpdateTask(ctx, t.ID(), taskmodel.TaskUpdate{
			Flux:        &newFlux,
			Status:      &newStatus,
			Description: &t.parserTask.description,
			Options:     opt,
		})
		if err != nil {
			return taskmodel.Task{}, applyFailErr("update", t.stateIdentity(), err)
		}
		return *updatedTask, nil
	default:
		newTask, err := s.taskSVC.CreateTask(ctx, taskmodel.TaskCreate{
			Type:           taskmodel.TaskSystemType,
			Flux:           t.parserTask.flux(),
			OwnerID:        userID,
			Description:    t.parserTask.description,
			Status:         string(t.parserTask.Status()),
			OrganizationID: t.orgID,
		})
		if err != nil {
			return taskmodel.Task{}, applyFailErr("create", t.stateIdentity(), err)
		}
		return *newTask, nil
	}
}
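
// rollbackTasks reverses applied task changes; a removed task is recreated and
// the fresh task is stored back on the state so later references pick up its
// new ID.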
func (s *Service) rollbackTasks(ctx context.Context, tasks []*stateTask) error {
|
|
|
|
rollbackFn := func(t *stateTask) error {
|
2020-06-26 22:12:57 +00:00
|
|
|
if !IsNew(t.stateStatus) && t.existing == nil || isRestrictedTask(t.existing) {
|
2020-05-06 23:09:38 +00:00
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
feat(pkger): add stateful management for tasks
2020-04-21 02:59:56 +00:00
|
|
|
var err error
|
|
|
|
switch t.stateStatus {
|
|
|
|
case StateStatusRemove:
|
2021-04-07 18:42:55 +00:00
|
|
|
newTask, err := s.taskSVC.CreateTask(ctx, taskmodel.TaskCreate{
|
feat(pkger): add stateful management for tasks
2020-04-21 02:59:56 +00:00
|
|
|
Type: t.existing.Type,
|
|
|
|
Flux: t.existing.Flux,
|
|
|
|
OwnerID: t.existing.OwnerID,
|
|
|
|
Description: t.existing.Description,
|
|
|
|
Status: t.existing.Status,
|
|
|
|
OrganizationID: t.orgID,
|
|
|
|
Metadata: t.existing.Metadata,
|
|
|
|
})
|
|
|
|
if err != nil {
|
|
|
|
return ierrors.Wrap(err, "failed to rollback removed task")
|
|
|
|
}
|
|
|
|
t.existing = newTask
|
|
|
|
case StateStatusExists:
|
|
|
|
opt := options.Options{
|
|
|
|
Name: t.existing.Name,
|
|
|
|
Cron: t.existing.Cron,
|
|
|
|
}
|
|
|
|
if every := t.existing.Every; every != "" {
|
|
|
|
opt.Every.Parse(every) // note: unlike the Offset handling below, a parse failure here is silently ignored
|
|
|
|
}
|
|
|
|
if offset := t.existing.Offset; offset > 0 {
|
2020-06-11 23:45:09 +00:00
|
|
|
var off options.Duration
|
|
|
|
if err := off.Parse(offset.String()); err == nil {
|
|
|
|
opt.Offset = &off
|
|
|
|
}
|
feat(pkger): add stateful management for tasks
2020-04-21 02:59:56 +00:00
|
|
|
}
|
|
|
|
|
2021-04-07 18:42:55 +00:00
|
|
|
_, err = s.taskSVC.UpdateTask(ctx, t.ID(), taskmodel.TaskUpdate{
|
feat(pkger): add stateful management for tasks
2020-04-21 02:59:56 +00:00
|
|
|
Flux: &t.existing.Flux,
|
|
|
|
Status: &t.existing.Status,
|
|
|
|
Description: &t.existing.Description,
|
|
|
|
Metadata: t.existing.Metadata,
|
|
|
|
Options: opt,
|
|
|
|
})
|
|
|
|
err = ierrors.Wrap(err, "failed to rollback updated task")
|
|
|
|
default:
|
|
|
|
err = s.taskSVC.DeleteTask(ctx, t.ID())
|
|
|
|
err = ierrors.Wrap(err, "failed to rollback created task")
|
|
|
|
}
|
|
|
|
return err
|
|
|
|
}
|
|
|
|
|
|
|
|
var errs []string
|
|
|
|
for _, d := range tasks {
|
|
|
|
if err := rollbackFn(d); err != nil {
|
|
|
|
errs = append(errs, fmt.Sprintf("error for task[%q]: %s", d.ID(), err))
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if len(errs) > 0 {
|
|
|
|
// TODO: fixup error
|
|
|
|
return errors.New(strings.Join(errs, ", "))
|
|
|
|
}
|
|
|
|
|
|
|
|
return nil
|
|
|
|
}
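// rollbackTasks above follows the same three-way pattern every resource type
// in this file uses: a removed resource is recreated from its cached copy, an
// updated resource is restored with an update call, and a freshly created
// resource is deleted. A resource-agnostic sketch of that shape; the function
// and its three func parameters are illustrative, not part of the package API:

func rollbackResource(status StateStatus, create, update, del func() error) error {
	switch status {
	case StateStatusRemove:
		return create() // resurrect what apply deleted
	case StateStatusExists:
		return update() // restore the pre-apply state
	default:
		return del() // drop what apply created
	}
}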
|
|
|
|
|
2021-03-30 18:10:02 +00:00
|
|
|
func (s *Service) applyTelegrafs(ctx context.Context, userID platform.ID, teles []*stateTelegraf) applier {
|
2019-12-04 01:00:15 +00:00
|
|
|
const resource = "telegrafs"
|
|
|
|
|
2019-12-07 00:23:09 +00:00
|
|
|
mutex := new(doMutex)
|
2020-04-16 18:27:30 +00:00
|
|
|
rollbackTelegrafs := make([]*stateTelegraf, 0, len(teles))
|
2019-12-07 00:23:09 +00:00
|
|
|
|
2021-03-30 18:10:02 +00:00
|
|
|
createFn := func(ctx context.Context, i int, orgID, userID platform.ID) *applyErrBody {
|
2020-04-21 22:00:29 +00:00
|
|
|
var t *stateTelegraf
|
2019-12-07 00:23:09 +00:00
|
|
|
mutex.Do(func() {
|
2020-04-16 18:27:30 +00:00
|
|
|
teles[i].orgID = orgID
|
2020-04-21 22:00:29 +00:00
|
|
|
t = teles[i]
|
2019-12-07 00:23:09 +00:00
|
|
|
})
|
|
|
|
|
2020-04-21 22:00:29 +00:00
|
|
|
existing, err := s.applyTelegrafConfig(ctx, userID, t)
|
2019-12-07 00:23:09 +00:00
|
|
|
if err != nil {
|
|
|
|
return &applyErrBody{
|
2020-06-30 21:54:00 +00:00
|
|
|
name: t.parserTelegraf.MetaName(),
|
2019-12-07 00:23:09 +00:00
|
|
|
msg: err.Error(),
|
2019-12-04 01:00:15 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2019-12-07 00:23:09 +00:00
|
|
|
mutex.Do(func() {
|
2020-04-21 22:00:29 +00:00
|
|
|
teles[i].id = existing.ID
|
2019-12-07 00:23:09 +00:00
|
|
|
rollbackTelegrafs = append(rollbackTelegrafs, teles[i])
|
|
|
|
})
|
|
|
|
|
|
|
|
return nil
|
2019-12-04 01:00:15 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
return applier{
|
2019-12-07 00:23:09 +00:00
|
|
|
creater: creater{
|
|
|
|
entries: len(teles),
|
|
|
|
fn: createFn,
|
|
|
|
},
|
2019-12-04 01:00:15 +00:00
|
|
|
rollbacker: rollbacker{
|
|
|
|
resource: resource,
|
2021-03-30 18:10:02 +00:00
|
|
|
fn: func(_ platform.ID) error {
|
2020-04-21 22:00:29 +00:00
|
|
|
return s.rollbackTelegrafConfigs(ctx, userID, rollbackTelegrafs)
|
2019-12-06 00:53:00 +00:00
|
|
|
},
|
2019-12-04 01:00:15 +00:00
|
|
|
},
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-03-30 18:10:02 +00:00
|
|
|
func (s *Service) applyTelegrafConfig(ctx context.Context, userID platform.ID, t *stateTelegraf) (influxdb.TelegrafConfig, error) {
|
2020-05-06 23:21:07 +00:00
|
|
|
switch {
|
|
|
|
case IsRemoval(t.stateStatus):
|
2020-04-21 22:00:29 +00:00
|
|
|
if err := s.teleSVC.DeleteTelegrafConfig(ctx, t.ID()); err != nil {
|
2021-03-30 18:10:02 +00:00
|
|
|
if errors2.ErrorCode(err) == errors2.ENotFound {
|
2020-05-06 23:21:07 +00:00
|
|
|
return influxdb.TelegrafConfig{}, nil
|
|
|
|
}
|
2020-06-18 17:25:43 +00:00
|
|
|
return influxdb.TelegrafConfig{}, applyFailErr("delete", t.stateIdentity(), err)
|
2020-04-21 22:00:29 +00:00
|
|
|
}
|
|
|
|
return *t.existing, nil
|
2020-05-06 23:21:07 +00:00
|
|
|
case IsExisting(t.stateStatus) && t.existing != nil:
|
2020-04-21 22:00:29 +00:00
|
|
|
cfg := t.summarize().TelegrafConfig
|
|
|
|
updatedConfig, err := s.teleSVC.UpdateTelegrafConfig(ctx, t.ID(), &cfg, userID)
|
|
|
|
if err != nil {
|
2020-06-18 17:25:43 +00:00
|
|
|
return influxdb.TelegrafConfig{}, applyFailErr("update", t.stateIdentity(), err)
|
2020-04-21 22:00:29 +00:00
|
|
|
}
|
|
|
|
return *updatedConfig, nil
|
|
|
|
default:
|
|
|
|
cfg := t.summarize().TelegrafConfig
|
|
|
|
err := s.teleSVC.CreateTelegrafConfig(ctx, &cfg, userID)
|
|
|
|
if err != nil {
|
2020-06-18 17:25:43 +00:00
|
|
|
return influxdb.TelegrafConfig{}, applyFailErr("create", t.stateIdentity(), err)
|
2020-04-21 22:00:29 +00:00
|
|
|
}
|
|
|
|
return cfg, nil
|
|
|
|
}
|
|
|
|
}
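// The removal branch above deliberately tolerates ENotFound so that applying
// a removal stays idempotent: a resource already deleted out-of-band does not
// fail the apply. A minimal sketch of the guard, with del standing in for any
// of the service delete methods:

func deleteIgnoringNotFound(ctx context.Context, del func(context.Context, platform.ID) error, id platform.ID) error {
	if err := del(ctx, id); err != nil && errors2.ErrorCode(err) != errors2.ENotFound {
		return err // only surface real failures
	}
	return nil
}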
|
|
|
|
|
2021-03-30 18:10:02 +00:00
|
|
|
func (s *Service) rollbackTelegrafConfigs(ctx context.Context, userID platform.ID, cfgs []*stateTelegraf) error {
|
2020-04-21 22:00:29 +00:00
|
|
|
rollbackFn := func(t *stateTelegraf) error {
|
2020-05-06 23:21:07 +00:00
|
|
|
if !IsNew(t.stateStatus) && t.existing == nil {
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
2020-04-21 22:00:29 +00:00
|
|
|
var err error
|
|
|
|
switch t.stateStatus {
|
|
|
|
case StateStatusRemove:
|
|
|
|
err = ierrors.Wrap(s.teleSVC.CreateTelegrafConfig(ctx, t.existing, userID), "rolling back removed telegraf config")
|
|
|
|
case StateStatusExists:
|
|
|
|
_, err = s.teleSVC.UpdateTelegrafConfig(ctx, t.ID(), t.existing, userID)
|
|
|
|
err = ierrors.Wrap(err, "rolling back updated telegraf config")
|
|
|
|
default:
|
|
|
|
err = ierrors.Wrap(s.teleSVC.DeleteTelegrafConfig(ctx, t.ID()), "rolling back created telegraf config")
|
|
|
|
}
|
|
|
|
return err
|
|
|
|
}
|
|
|
|
|
|
|
|
var errs []string
|
|
|
|
for _, v := range cfgs {
|
|
|
|
if err := rollbackFn(v); err != nil {
|
|
|
|
errs = append(errs, fmt.Sprintf("error for variable[%q]: %s", v.ID(), err))
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if len(errs) > 0 {
|
|
|
|
return errors.New(strings.Join(errs, "; "))
|
|
|
|
}
|
|
|
|
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
2020-04-14 23:18:34 +00:00
|
|
|
func (s *Service) applyVariables(ctx context.Context, vars []*stateVariable) applier {
|
2019-11-07 00:45:00 +00:00
|
|
|
const resource = "variable"
|
|
|
|
|
2019-12-07 00:23:09 +00:00
|
|
|
mutex := new(doMutex)
|
2020-04-14 23:18:34 +00:00
|
|
|
rollBackVars := make([]*stateVariable, 0, len(vars))
|
2019-11-07 00:45:00 +00:00
|
|
|
|
2021-03-30 18:10:02 +00:00
|
|
|
createFn := func(ctx context.Context, i int, orgID, userID platform.ID) *applyErrBody {
|
2020-04-14 23:18:34 +00:00
|
|
|
var v *stateVariable
|
2019-12-07 00:23:09 +00:00
|
|
|
mutex.Do(func() {
|
2020-04-14 23:18:34 +00:00
|
|
|
vars[i].orgID = orgID
|
|
|
|
v = vars[i]
|
2019-12-07 00:23:09 +00:00
|
|
|
})
|
|
|
|
if !v.shouldApply() {
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
influxVar, err := s.applyVariable(ctx, v)
|
|
|
|
if err != nil {
|
|
|
|
return &applyErrBody{
|
2020-06-30 21:54:00 +00:00
|
|
|
name: v.parserVar.MetaName(),
|
2019-12-07 00:23:09 +00:00
|
|
|
msg: err.Error(),
|
2019-11-07 00:45:00 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2019-12-07 00:23:09 +00:00
|
|
|
mutex.Do(func() {
|
|
|
|
vars[i].id = influxVar.ID
|
|
|
|
rollBackVars = append(rollBackVars, vars[i])
|
|
|
|
})
|
|
|
|
return nil
|
2019-11-07 00:45:00 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
return applier{
|
2019-12-07 00:23:09 +00:00
|
|
|
creater: creater{
|
|
|
|
entries: len(vars),
|
|
|
|
fn: createFn,
|
|
|
|
},
|
2019-11-07 00:45:00 +00:00
|
|
|
rollbacker: rollbacker{
|
|
|
|
resource: resource,
|
2021-03-30 18:10:02 +00:00
|
|
|
fn: func(_ platform.ID) error { return s.rollbackVariables(ctx, rollBackVars) },
|
2019-11-07 00:45:00 +00:00
|
|
|
},
|
|
|
|
}
|
|
|
|
}
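// applyTelegrafs and applyVariables above share one shape: a createFn that
// snapshots mutable slice state under a doMutex (the rollback coordinator
// invokes createFn concurrently), records each success for rollback, and an
// applier value that bundles the work with its undo. A condensed sketch of
// that shape; stateThing, applyThing, and rollbackThings are illustrative
// names, not real types in this package:
//
//	func (s *Service) applyThings(ctx context.Context, things []*stateThing) applier {
//		mutex := new(doMutex)
//		rollback := make([]*stateThing, 0, len(things))
//		createFn := func(ctx context.Context, i int, orgID, userID platform.ID) *applyErrBody {
//			var t *stateThing
//			mutex.Do(func() { things[i].orgID = orgID; t = things[i] })
//			if err := s.applyThing(ctx, t); err != nil {
//				return &applyErrBody{name: t.MetaName(), msg: err.Error()}
//			}
//			mutex.Do(func() { rollback = append(rollback, things[i]) })
//			return nil
//		}
//		return applier{
//			creater: creater{entries: len(things), fn: createFn},
//			rollbacker: rollbacker{
//				resource: "thing",
//				fn:       func(_ platform.ID) error { return s.rollbackThings(ctx, rollback) },
//			},
//		}
//	}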
|
|
|
|
|
2020-04-14 23:18:34 +00:00
|
|
|
func (s *Service) rollbackVariables(ctx context.Context, variables []*stateVariable) error {
|
|
|
|
rollbackFn := func(v *stateVariable) error {
|
2020-04-03 00:44:27 +00:00
|
|
|
var err error
|
2020-05-06 20:48:50 +00:00
|
|
|
switch {
|
|
|
|
case IsRemoval(v.stateStatus):
|
|
|
|
if v.existing == nil {
|
|
|
|
return nil
|
|
|
|
}
|
2020-04-14 23:18:34 +00:00
|
|
|
err = ierrors.Wrap(s.varSVC.CreateVariable(ctx, v.existing), "rolling back removed variable")
|
2020-05-06 20:48:50 +00:00
|
|
|
case IsExisting(v.stateStatus):
|
|
|
|
if v.existing == nil {
|
|
|
|
return nil
|
|
|
|
}
|
2020-04-03 00:44:27 +00:00
|
|
|
_, err = s.varSVC.UpdateVariable(ctx, v.ID(), &influxdb.VariableUpdate{
|
2020-05-06 18:09:16 +00:00
|
|
|
Name: v.existing.Name,
|
|
|
|
Description: v.existing.Description,
|
2020-06-22 22:44:53 +00:00
|
|
|
Selected: v.existing.Selected,
|
2020-05-06 18:09:16 +00:00
|
|
|
Arguments: v.existing.Arguments,
|
2020-04-03 00:44:27 +00:00
|
|
|
})
|
2020-04-14 23:18:34 +00:00
|
|
|
err = ierrors.Wrap(err, "rolling back updated variable")
|
|
|
|
default:
|
|
|
|
err = ierrors.Wrap(s.varSVC.DeleteVariable(ctx, v.ID()), "rolling back created variable")
|
2019-11-07 00:45:00 +00:00
|
|
|
}
|
2020-04-03 00:44:27 +00:00
|
|
|
return err
|
|
|
|
}
|
2019-11-07 00:45:00 +00:00
|
|
|
|
2020-04-03 00:44:27 +00:00
|
|
|
var errs []string
|
|
|
|
for _, v := range variables {
|
|
|
|
if err := rollbackFn(v); err != nil {
|
|
|
|
errs = append(errs, fmt.Sprintf("error for variable[%q]: %s", v.ID(), err))
|
2019-11-07 00:45:00 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if len(errs) > 0 {
|
2020-04-03 00:44:27 +00:00
|
|
|
return errors.New(strings.Join(errs, "; "))
|
2019-11-07 00:45:00 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
2020-04-14 23:18:34 +00:00
|
|
|
func (s *Service) applyVariable(ctx context.Context, v *stateVariable) (influxdb.Variable, error) {
|
2020-05-06 18:09:16 +00:00
|
|
|
switch {
|
|
|
|
case IsRemoval(v.stateStatus):
|
2021-03-30 18:10:02 +00:00
|
|
|
if err := s.varSVC.DeleteVariable(ctx, v.id); err != nil && errors2.ErrorCode(err) != errors2.ENotFound {
|
2020-06-18 17:25:43 +00:00
|
|
|
return influxdb.Variable{}, applyFailErr("delete", v.stateIdentity(), err)
|
2020-05-06 18:09:16 +00:00
|
|
|
}
|
|
|
|
if v.existing == nil {
|
|
|
|
return influxdb.Variable{}, nil
|
2020-04-03 00:44:27 +00:00
|
|
|
}
|
|
|
|
return *v.existing, nil
|
2020-05-06 18:09:16 +00:00
|
|
|
case IsExisting(v.stateStatus) && v.existing != nil:
|
2019-11-07 00:45:00 +00:00
|
|
|
updatedVar, err := s.varSVC.UpdateVariable(ctx, v.ID(), &influxdb.VariableUpdate{
|
2020-04-14 23:18:34 +00:00
|
|
|
Name: v.parserVar.Name(),
|
2020-06-22 22:44:53 +00:00
|
|
|
Selected: v.parserVar.Selected(),
|
2020-04-14 23:18:34 +00:00
|
|
|
Description: v.parserVar.Description,
|
|
|
|
Arguments: v.parserVar.influxVarArgs(),
|
2019-11-07 00:45:00 +00:00
|
|
|
})
|
|
|
|
if err != nil {
|
2020-06-18 17:25:43 +00:00
|
|
|
return influxdb.Variable{}, applyFailErr("update", v.stateIdentity(), err)
|
2019-11-07 00:45:00 +00:00
|
|
|
}
|
|
|
|
return *updatedVar, nil
|
2020-04-14 23:18:34 +00:00
|
|
|
default:
|
2020-05-06 18:09:16 +00:00
|
|
|
// when an existing variable (referenced in stack) has been deleted by a user
|
|
|
|
// then the resource is created anew to get it back to the expected state.
|
2020-04-14 23:18:34 +00:00
|
|
|
influxVar := influxdb.Variable{
|
|
|
|
OrganizationID: v.orgID,
|
|
|
|
Name: v.parserVar.Name(),
|
2020-06-22 22:44:53 +00:00
|
|
|
Selected: v.parserVar.Selected(),
|
2020-04-14 23:18:34 +00:00
|
|
|
Description: v.parserVar.Description,
|
|
|
|
Arguments: v.parserVar.influxVarArgs(),
|
|
|
|
}
|
|
|
|
err := s.varSVC.CreateVariable(ctx, &influxVar)
|
|
|
|
if err != nil {
|
2020-06-18 17:25:43 +00:00
|
|
|
return influxdb.Variable{}, applyFailErr("create", v.stateIdentity(), err)
|
2020-04-14 23:18:34 +00:00
|
|
|
}
|
|
|
|
return influxVar, nil
|
2019-11-07 00:45:00 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-04-24 17:59:58 +00:00
|
|
|
func (s *Service) removeLabelMappings(ctx context.Context, labelMappings []stateLabelMappingForRemoval) applier {
|
|
|
|
const resource = "removed_label_mapping"
|
|
|
|
|
|
|
|
var rollbackMappings []stateLabelMappingForRemoval
|
|
|
|
|
|
|
|
mutex := new(doMutex)
|
2021-03-30 18:10:02 +00:00
|
|
|
createFn := func(ctx context.Context, i int, orgID, userID platform.ID) *applyErrBody {
|
2020-04-24 17:59:58 +00:00
|
|
|
var mapping stateLabelMappingForRemoval
|
|
|
|
mutex.Do(func() {
|
|
|
|
mapping = labelMappings[i]
|
|
|
|
})
|
|
|
|
|
|
|
|
err := s.labelSVC.DeleteLabelMapping(ctx, &influxdb.LabelMapping{
|
|
|
|
LabelID: mapping.LabelID,
|
|
|
|
ResourceID: mapping.ResourceID,
|
|
|
|
ResourceType: mapping.ResourceType,
|
|
|
|
})
|
2021-03-30 18:10:02 +00:00
|
|
|
if err != nil && errors2.ErrorCode(err) != errors2.ENotFound {
|
2020-04-24 17:59:58 +00:00
|
|
|
return &applyErrBody{
|
|
|
|
name: fmt.Sprintf("%s:%s:%s", mapping.ResourceType, mapping.ResourceID, mapping.LabelID),
|
|
|
|
msg: err.Error(),
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
mutex.Do(func() {
|
|
|
|
rollbackMappings = append(rollbackMappings, mapping)
|
|
|
|
})
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
|
|
|
return applier{
|
|
|
|
creater: creater{
|
|
|
|
entries: len(labelMappings),
|
|
|
|
fn: createFn,
|
|
|
|
},
|
|
|
|
rollbacker: rollbacker{
|
|
|
|
resource: resource,
|
2021-03-30 18:10:02 +00:00
|
|
|
fn: func(_ platform.ID) error { return s.rollbackRemoveLabelMappings(ctx, rollbackMappings) },
|
2020-04-24 17:59:58 +00:00
|
|
|
},
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
func (s *Service) rollbackRemoveLabelMappings(ctx context.Context, mappings []stateLabelMappingForRemoval) error {
|
|
|
|
var errs []string
|
|
|
|
for _, m := range mappings {
|
|
|
|
err := s.labelSVC.CreateLabelMapping(ctx, &influxdb.LabelMapping{
|
|
|
|
LabelID: m.LabelID,
|
|
|
|
ResourceID: m.ResourceID,
|
|
|
|
ResourceType: m.ResourceType,
|
|
|
|
})
|
|
|
|
if err != nil {
|
|
|
|
errs = append(errs,
|
|
|
|
fmt.Sprintf(
|
|
|
|
"error for label mapping: resource_type=%s resource_id=%s label_id=%s err=%s",
|
|
|
|
m.ResourceType,
|
|
|
|
m.ResourceID,
|
|
|
|
m.LabelID,
|
|
|
|
err,
|
|
|
|
))
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if len(errs) > 0 {
|
|
|
|
return errors.New(strings.Join(errs, "; "))
|
|
|
|
}
|
|
|
|
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
|
|
|
func (s *Service) applyLabelMappings(ctx context.Context, labelMappings []stateLabelMapping) applier {
|
2020-04-11 04:51:13 +00:00
|
|
|
const resource = "label_mapping"
|
|
|
|
|
|
|
|
mutex := new(doMutex)
|
|
|
|
rollbackMappings := make([]stateLabelMapping, 0, len(labelMappings))
|
|
|
|
|
2021-03-30 18:10:02 +00:00
|
|
|
createFn := func(ctx context.Context, i int, orgID, userID platform.ID) *applyErrBody {
|
2020-04-11 04:51:13 +00:00
|
|
|
var mapping stateLabelMapping
|
|
|
|
mutex.Do(func() {
|
|
|
|
mapping = labelMappings[i]
|
|
|
|
})
|
|
|
|
|
|
|
|
ident := mapping.resource.stateIdentity()
|
2020-04-14 22:19:15 +00:00
|
|
|
if IsExisting(mapping.status) || mapping.label.ID() == 0 || ident.id == 0 {
|
2020-04-11 04:51:13 +00:00
|
|
|
// this block does two things: it does not write a
|
|
|
|
// mapping when one already exists, and it avoids having to worry
|
|
|
|
// about deleting an existing mapping since it will not be
|
|
|
|
// passed to the delete function below because it is never added
|
|
|
|
// to the list of mappings that is referenced in the delete
|
|
|
|
// call.
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
|
|
|
m := influxdb.LabelMapping{
|
|
|
|
LabelID: mapping.label.ID(),
|
|
|
|
ResourceID: ident.id,
|
|
|
|
ResourceType: ident.resourceType,
|
|
|
|
}
|
|
|
|
err := s.labelSVC.CreateLabelMapping(ctx, &m)
|
|
|
|
if err != nil {
|
|
|
|
return &applyErrBody{
|
|
|
|
name: fmt.Sprintf("%s:%s:%s", ident.resourceType, ident.id, mapping.label.ID()),
|
|
|
|
msg: err.Error(),
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
mutex.Do(func() {
|
|
|
|
rollbackMappings = append(rollbackMappings, mapping)
|
|
|
|
})
|
|
|
|
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
|
|
|
return applier{
|
|
|
|
creater: creater{
|
|
|
|
entries: len(labelMappings),
|
|
|
|
fn: createFn,
|
|
|
|
},
|
|
|
|
rollbacker: rollbacker{
|
|
|
|
resource: resource,
|
2021-03-30 18:10:02 +00:00
|
|
|
fn: func(_ platform.ID) error { return s.rollbackLabelMappings(ctx, rollbackMappings) },
|
2020-04-11 04:51:13 +00:00
|
|
|
},
|
|
|
|
}
|
|
|
|
}
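// The guard at the top of createFn above is what keeps label mappings safe to
// re-apply: existing mappings are neither recreated nor queued for rollback,
// so a failed apply can never delete a mapping it did not itself create. A
// restatement of the skip condition:
//
//	skip := IsExisting(mapping.status) || // mapping already present: nothing to do
//		mapping.label.ID() == 0 || // the label was never created
//		ident.id == 0 // the target resource was never created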
|
|
|
|
|
2020-04-24 17:59:58 +00:00
|
|
|
func (s *Service) rollbackLabelMappings(ctx context.Context, mappings []stateLabelMapping) error {
|
2020-04-11 04:51:13 +00:00
|
|
|
var errs []string
|
|
|
|
for _, stateMapping := range mappings {
|
|
|
|
influxMapping := stateLabelMappingToInfluxLabelMapping(stateMapping)
|
2020-04-24 17:59:58 +00:00
|
|
|
err := s.labelSVC.DeleteLabelMapping(ctx, &influxMapping)
|
2020-04-11 04:51:13 +00:00
|
|
|
if err != nil {
|
|
|
|
errs = append(errs, fmt.Sprintf("%s:%s", stateMapping.label.ID(), stateMapping.resource.stateIdentity().id))
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if len(errs) > 0 {
|
|
|
|
return fmt.Errorf(`label_resource_id_pairs=[%s] err="unable to delete label mapping"`, strings.Join(errs, ", "))
|
|
|
|
}
|
|
|
|
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
2020-06-30 21:54:00 +00:00
|
|
|
func (s *Service) templateFromApplyOpts(ctx context.Context, opt ApplyOpt) (*Template, error) {
|
2020-06-15 21:13:38 +00:00
|
|
|
if opt.StackID != 0 {
|
2020-06-30 21:54:00 +00:00
|
|
|
remotes, err := s.getStackRemoteTemplates(ctx, opt.StackID)
|
2020-06-15 21:13:38 +00:00
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
2020-06-30 21:54:00 +00:00
|
|
|
opt.Templates = append(opt.Templates, remotes...)
|
2020-06-15 21:13:38 +00:00
|
|
|
}
|
|
|
|
|
2020-06-30 21:54:00 +00:00
|
|
|
return Combine(opt.Templates, ValidWithoutResources())
|
2020-06-15 21:13:38 +00:00
|
|
|
}
|
|
|
|
|
2021-03-30 18:10:02 +00:00
|
|
|
func (s *Service) getStackRemoteTemplates(ctx context.Context, stackID platform.ID) ([]*Template, error) {
|
2020-04-29 22:24:19 +00:00
|
|
|
stack, err := s.store.ReadStackByID(ctx, stackID)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
2020-07-07 22:07:11 +00:00
|
|
|
lastEvent := stack.LatestEvent()
|
2020-06-30 21:54:00 +00:00
|
|
|
var remotes []*Template
|
2020-07-07 22:07:11 +00:00
|
|
|
for _, rawURL := range lastEvent.TemplateURLs {
|
2020-04-29 22:24:19 +00:00
|
|
|
u, err := url.Parse(rawURL)
|
|
|
|
if err != nil {
|
2021-03-30 18:10:02 +00:00
|
|
|
return nil, &errors2.Error{
|
|
|
|
Code: errors2.EInternal,
|
2020-04-29 22:24:19 +00:00
|
|
|
Msg: "failed to parse url",
|
|
|
|
Err: err,
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
encoding := EncodingSource
|
|
|
|
switch path.Ext(u.String()) {
|
|
|
|
case ".jsonnet":
|
|
|
|
encoding = EncodingJsonnet
|
|
|
|
case ".json":
|
|
|
|
encoding = EncodingJSON
|
|
|
|
case ".yaml", ".yml":
|
|
|
|
encoding = EncodingYAML
|
|
|
|
}
|
|
|
|
|
feat: add --hardening-enabled option to limit flux/pkger HTTP requests (#23207)
2022-03-18 14:25:31 +00:00
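// When --hardening-enabled is set, the s.client used below carries an IP
// validator. A minimal sketch of how a dialer Control hook (which runs after
// DNS resolution but before the connection is made) can reject private
// addresses; this is illustrative, not the actual PrivateIPValidator code:
//
//	dialer := &net.Dialer{
//		Control: func(network, address string, _ syscall.RawConn) error {
//			host, _, err := net.SplitHostPort(address)
//			if err != nil {
//				return err
//			}
//			if ip := net.ParseIP(host); ip != nil && (ip.IsPrivate() || ip.IsLoopback()) {
//				return fmt.Errorf("connection to private IP %v is not allowed", ip)
//			}
//			return nil
//		},
//	}
//	client := &http.Client{Transport: &http.Transport{DialContext: dialer.DialContext}}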
|
|
|
readerFn := FromHTTPRequest(u.String(), s.client)
|
2020-04-29 22:24:19 +00:00
|
|
|
if u.Scheme == "file" {
|
|
|
|
readerFn = FromFile(u.Path)
|
|
|
|
}
|
|
|
|
|
2020-06-30 21:54:00 +00:00
|
|
|
template, err := Parse(encoding, readerFn)
|
2020-04-29 22:24:19 +00:00
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
2020-06-30 21:54:00 +00:00
|
|
|
remotes = append(remotes, template)
|
2020-04-29 22:24:19 +00:00
|
|
|
}
|
2020-06-30 21:54:00 +00:00
|
|
|
return remotes, nil
|
2020-04-29 22:24:19 +00:00
|
|
|
}
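// A distillation of the source resolution above: the template encoding is
// inferred purely from the URL's path extension (defaulting to
// EncodingSource), and file:// URLs bypass HTTP entirely. encodingForURL is
// an illustrative helper, not part of the package API:

func encodingForURL(u *url.URL) Encoding {
	switch path.Ext(u.String()) {
	case ".jsonnet":
		return EncodingJsonnet
	case ".json":
		return EncodingJSON
	case ".yaml", ".yml":
		return EncodingYAML
	default:
		return EncodingSource
	}
}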
|
|
|
|
|
2021-03-30 18:10:02 +00:00
|
|
|
func (s *Service) updateStackAfterSuccess(ctx context.Context, stackID platform.ID, state *stateCoordinator, sources []string) error {
|
2020-04-01 00:01:45 +00:00
|
|
|
stack, err := s.store.ReadStackByID(ctx, stackID)
|
|
|
|
if err != nil {
|
|
|
|
return err
|
|
|
|
}
|
|
|
|
|
|
|
|
var stackResources []StackResource
|
2020-04-11 04:51:13 +00:00
|
|
|
for _, b := range state.mBuckets {
|
2020-06-26 22:12:57 +00:00
|
|
|
if IsRemoval(b.stateStatus) || isSystemBucket(b.existing) {
|
2020-04-02 22:28:11 +00:00
|
|
|
continue
|
|
|
|
}
|
2020-04-01 00:01:45 +00:00
|
|
|
stackResources = append(stackResources, StackResource{
|
2020-04-24 17:59:58 +00:00
|
|
|
APIVersion: APIVersion,
|
|
|
|
ID: b.ID(),
|
|
|
|
Kind: KindBucket,
|
2020-06-30 21:54:00 +00:00
|
|
|
MetaName: b.parserBkt.MetaName(),
|
2020-06-22 20:20:34 +00:00
|
|
|
Associations: stateLabelsToStackAssociations(b.labels()),
|
2020-04-01 00:01:45 +00:00
|
|
|
})
|
|
|
|
}
|
2020-04-14 22:19:15 +00:00
|
|
|
for _, c := range state.mChecks {
|
|
|
|
if IsRemoval(c.stateStatus) {
|
2020-04-02 22:28:11 +00:00
|
|
|
continue
|
|
|
|
}
|
|
|
|
stackResources = append(stackResources, StackResource{
|
2020-04-24 17:59:58 +00:00
|
|
|
APIVersion: APIVersion,
|
|
|
|
ID: c.ID(),
|
|
|
|
Kind: KindCheck,
|
2020-06-30 21:54:00 +00:00
|
|
|
MetaName: c.parserCheck.MetaName(),
|
2020-06-22 20:20:34 +00:00
|
|
|
Associations: stateLabelsToStackAssociations(c.labels()),
|
2020-04-02 22:28:11 +00:00
|
|
|
})
|
|
|
|
}
|
2020-04-20 19:29:30 +00:00
|
|
|
for _, d := range state.mDashboards {
|
|
|
|
if IsRemoval(d.stateStatus) {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
stackResources = append(stackResources, StackResource{
|
2020-04-24 17:59:58 +00:00
|
|
|
APIVersion: APIVersion,
|
|
|
|
ID: d.ID(),
|
|
|
|
Kind: KindDashboard,
|
2020-06-30 21:54:00 +00:00
|
|
|
MetaName: d.parserDash.MetaName(),
|
2020-06-22 20:20:34 +00:00
|
|
|
Associations: stateLabelsToStackAssociations(d.labels()),
|
2020-04-20 19:29:30 +00:00
|
|
|
})
|
|
|
|
}
|
2020-04-15 19:46:17 +00:00
|
|
|
for _, n := range state.mEndpoints {
|
|
|
|
if IsRemoval(n.stateStatus) {
|
2020-04-06 17:25:20 +00:00
|
|
|
continue
|
|
|
|
}
|
|
|
|
stackResources = append(stackResources, StackResource{
|
2020-04-24 17:59:58 +00:00
|
|
|
APIVersion: APIVersion,
|
|
|
|
ID: n.ID(),
|
|
|
|
Kind: KindNotificationEndpoint,
|
2020-06-30 21:54:00 +00:00
|
|
|
MetaName: n.parserEndpoint.MetaName(),
|
2020-06-22 20:20:34 +00:00
|
|
|
Associations: stateLabelsToStackAssociations(n.labels()),
|
2020-04-06 17:25:20 +00:00
|
|
|
})
|
|
|
|
}
|
2020-04-11 04:51:13 +00:00
|
|
|
for _, l := range state.mLabels {
|
2020-04-14 22:19:15 +00:00
|
|
|
if IsRemoval(l.stateStatus) {
|
2020-04-02 22:28:11 +00:00
|
|
|
continue
|
|
|
|
}
|
2020-04-01 23:44:17 +00:00
|
|
|
stackResources = append(stackResources, StackResource{
|
|
|
|
APIVersion: APIVersion,
|
|
|
|
ID: l.ID(),
|
|
|
|
Kind: KindLabel,
|
2020-06-30 21:54:00 +00:00
|
|
|
MetaName: l.parserLabel.MetaName(),
|
2020-04-01 23:44:17 +00:00
|
|
|
})
|
|
|
|
}
|
2020-04-23 00:05:10 +00:00
|
|
|
for _, r := range state.mRules {
|
|
|
|
if IsRemoval(r.stateStatus) {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
stackResources = append(stackResources, StackResource{
|
2020-04-24 17:59:58 +00:00
|
|
|
APIVersion: APIVersion,
|
|
|
|
ID: r.ID(),
|
|
|
|
Kind: KindNotificationRule,
|
2020-06-30 21:54:00 +00:00
|
|
|
MetaName: r.parserRule.MetaName(),
|
2020-04-24 17:59:58 +00:00
|
|
|
Associations: append(
|
2020-06-22 20:20:34 +00:00
|
|
|
stateLabelsToStackAssociations(r.labels()),
|
2020-04-24 17:59:58 +00:00
|
|
|
r.endpointAssociation(),
|
|
|
|
),
|
2020-04-23 00:05:10 +00:00
|
|
|
})
|
|
|
|
}
|
feat(pkger): add stateful management for tasks
2020-04-21 02:59:56 +00:00
|
|
|
for _, t := range state.mTasks {
|
2020-06-26 22:12:57 +00:00
|
|
|
if IsRemoval(t.stateStatus) || isRestrictedTask(t.existing) {
|
feat(pkger): add stateful management for tasks
2020-04-21 02:59:56 +00:00
|
|
|
continue
|
|
|
|
}
|
|
|
|
stackResources = append(stackResources, StackResource{
|
2020-04-24 17:59:58 +00:00
|
|
|
APIVersion: APIVersion,
|
|
|
|
ID: t.ID(),
|
|
|
|
Kind: KindTask,
|
2020-06-30 21:54:00 +00:00
|
|
|
MetaName: t.parserTask.MetaName(),
|
2020-06-22 20:20:34 +00:00
|
|
|
Associations: stateLabelsToStackAssociations(t.labels()),
|
feat(pkger): add stateful management for tasks
2020-04-21 02:59:56 +00:00
|
|
|
})
|
|
|
|
}
|
2020-04-21 22:00:29 +00:00
|
|
|
for _, t := range state.mTelegrafs {
|
|
|
|
if IsRemoval(t.stateStatus) {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
stackResources = append(stackResources, StackResource{
|
2020-04-24 17:59:58 +00:00
|
|
|
APIVersion: APIVersion,
|
|
|
|
ID: t.ID(),
|
|
|
|
Kind: KindTelegraf,
|
2020-06-30 21:54:00 +00:00
|
|
|
MetaName: t.parserTelegraf.MetaName(),
|
2020-06-22 20:20:34 +00:00
|
|
|
Associations: stateLabelsToStackAssociations(t.labels()),
|
2020-04-21 22:00:29 +00:00
|
|
|
})
|
|
|
|
}
|
2020-04-14 23:18:34 +00:00
|
|
|
for _, v := range state.mVariables {
|
|
|
|
if IsRemoval(v.stateStatus) {
|
2020-04-03 00:44:27 +00:00
|
|
|
continue
|
|
|
|
}
|
|
|
|
stackResources = append(stackResources, StackResource{
|
2020-04-24 17:59:58 +00:00
|
|
|
APIVersion: APIVersion,
|
|
|
|
ID: v.ID(),
|
|
|
|
Kind: KindVariable,
|
2020-06-30 21:54:00 +00:00
|
|
|
MetaName: v.parserVar.MetaName(),
|
2020-06-22 20:20:34 +00:00
|
|
|
Associations: stateLabelsToStackAssociations(v.labels()),
|
2020-04-03 00:44:27 +00:00
|
|
|
})
|
|
|
|
}
|
2020-07-07 22:07:11 +00:00
|
|
|
ev := stack.LatestEvent()
|
|
|
|
ev.EventType = StackEventUpdate
|
|
|
|
ev.Resources = stackResources
|
|
|
|
ev.Sources = sources
|
|
|
|
ev.UpdatedAt = s.timeGen.Now()
|
|
|
|
stack.Events = append(stack.Events, ev)
|
2020-04-01 00:01:45 +00:00
|
|
|
return s.store.UpdateStack(ctx, stack)
|
|
|
|
}
|
|
|
|
|
2021-03-30 18:10:02 +00:00
|
|
|
func (s *Service) updateStackAfterRollback(ctx context.Context, stackID platform.ID, state *stateCoordinator, sources []string) error {
|
2020-04-01 00:01:45 +00:00
|
|
|
stack, err := s.store.ReadStackByID(ctx, stackID)
|
|
|
|
if err != nil {
|
|
|
|
return err
|
|
|
|
}
|
|
|
|
|
|
|
|
type key struct {
|
2020-06-30 21:54:00 +00:00
|
|
|
k Kind
|
|
|
|
metaName string
|
2020-04-01 00:01:45 +00:00
|
|
|
}
|
2020-06-30 21:54:00 +00:00
|
|
|
newKey := func(k Kind, metaName string) key {
|
|
|
|
return key{k: k, metaName: metaName}
|
2020-04-01 00:01:45 +00:00
|
|
|
}
|
|
|
|
|
2020-07-07 22:07:11 +00:00
|
|
|
latestEvent := stack.LatestEvent()
|
2020-04-01 00:01:45 +00:00
|
|
|
existingResources := make(map[key]*StackResource)
|
2020-07-07 22:07:11 +00:00
|
|
|
for i := range latestEvent.Resources {
|
|
|
|
res := latestEvent.Resources[i]
|
|
|
|
existingResources[newKey(res.Kind, res.MetaName)] = &latestEvent.Resources[i]
|
2020-04-01 00:01:45 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
hasChanges := false
|
2020-04-01 23:44:17 +00:00
|
|
|
{
|
|
|
|
// these are the cases where a deletion happens and is rolled back, creating a new resource.
|
|
|
|
// when the resource is not to be removed this is a nothing burger, as it should be
|
|
|
|
// rolled back to previous state.
|
2020-04-11 04:51:13 +00:00
|
|
|
for _, b := range state.mBuckets {
|
2020-06-30 21:54:00 +00:00
|
|
|
res, ok := existingResources[newKey(KindBucket, b.parserBkt.MetaName())]
|
2020-04-14 22:19:15 +00:00
|
|
|
if ok && res.ID != b.ID() {
|
|
|
|
hasChanges = true
|
|
|
|
res.ID = b.existing.ID
|
2020-04-01 23:44:17 +00:00
|
|
|
}
|
|
|
|
}
|
2020-04-14 22:19:15 +00:00
|
|
|
for _, c := range state.mChecks {
|
2020-06-30 21:54:00 +00:00
|
|
|
res, ok := existingResources[newKey(KindCheck, c.parserCheck.MetaName())]
|
2020-04-14 22:19:15 +00:00
|
|
|
if ok && res.ID != c.ID() {
|
|
|
|
hasChanges = true
|
|
|
|
res.ID = c.existing.GetID()
|
2020-04-02 22:28:11 +00:00
|
|
|
}
|
|
|
|
}
|
2020-04-20 19:29:30 +00:00
|
|
|
for _, d := range state.mDashboards {
|
2020-06-30 21:54:00 +00:00
|
|
|
res, ok := existingResources[newKey(KindDashboard, d.parserDash.MetaName())]
|
2020-04-20 19:29:30 +00:00
|
|
|
if ok && res.ID != d.ID() {
|
|
|
|
hasChanges = true
|
|
|
|
res.ID = d.existing.ID
|
|
|
|
}
|
|
|
|
}
|
2020-04-15 19:46:17 +00:00
|
|
|
for _, e := range state.mEndpoints {
|
2020-06-30 21:54:00 +00:00
|
|
|
res, ok := existingResources[newKey(KindNotificationEndpoint, e.parserEndpoint.MetaName())]
|
2020-04-15 19:46:17 +00:00
|
|
|
if ok && res.ID != e.ID() {
|
|
|
|
hasChanges = true
|
|
|
|
res.ID = e.existing.GetID()
|
2020-04-06 17:25:20 +00:00
|
|
|
}
|
|
|
|
}
|
2020-04-11 04:51:13 +00:00
|
|
|
for _, l := range state.mLabels {
|
2020-06-30 21:54:00 +00:00
|
|
|
res, ok := existingResources[newKey(KindLabel, l.parserLabel.MetaName())]
|
2020-04-14 22:19:15 +00:00
|
|
|
if ok && res.ID != l.ID() {
|
|
|
|
hasChanges = true
|
|
|
|
res.ID = l.existing.ID
|
2020-04-01 00:01:45 +00:00
|
|
|
}
|
|
|
|
}
|
2020-04-23 00:05:10 +00:00
|
|
|
for _, r := range state.mRules {
|
2020-06-30 21:54:00 +00:00
|
|
|
res, ok := existingResources[newKey(KindNotificationRule, r.parserRule.MetaName())]
|
2020-04-23 00:05:10 +00:00
|
|
|
if !ok {
|
|
|
|
continue
|
|
|
|
}
|
|
|
|
|
|
|
|
if res.ID != r.ID() {
|
|
|
|
hasChanges = true
|
|
|
|
res.ID = r.existing.GetID()
|
|
|
|
}
|
2020-04-24 17:59:58 +00:00
|
|
|
|
|
|
|
endpointAssociation := r.endpointAssociation()
|
|
|
|
newAss := make([]StackResourceAssociation, 0, len(res.Associations))
|
|
|
|
|
|
|
|
var endpointAssociationChanged bool
|
|
|
|
for _, ass := range res.Associations {
|
|
|
|
if ass.Kind.is(KindNotificationEndpoint) && ass != endpointAssociation {
|
|
|
|
endpointAssociationChanged = true
|
|
|
|
ass = endpointAssociation
|
|
|
|
}
|
|
|
|
newAss = append(newAss, ass)
|
|
|
|
}
|
|
|
|
if endpointAssociationChanged {
|
2020-04-23 00:05:10 +00:00
|
|
|
hasChanges = true
|
|
|
|
res.Associations = newAss
|
|
|
|
}
|
|
|
|
}
|
feat(pkger): add stateful management for tasks
2020-04-21 02:59:56 +00:00
|
|
|
for _, t := range state.mTasks {
|
2020-06-30 21:54:00 +00:00
|
|
|
res, ok := existingResources[newKey(KindTask, t.parserTask.MetaName())]
|
feat(pkger): add stateful management for tasks
2020-04-21 02:59:56 +00:00
|
|
|
if ok && res.ID != t.ID() {
|
|
|
|
hasChanges = true
|
|
|
|
res.ID = t.existing.ID
|
|
|
|
}
|
|
|
|
}
|
2020-04-21 22:00:29 +00:00
|
|
|
for _, t := range state.mTelegrafs {
|
2020-06-30 21:54:00 +00:00
|
|
|
res, ok := existingResources[newKey(KindTelegraf, t.parserTelegraf.MetaName())]
|
2020-04-21 22:00:29 +00:00
|
|
|
if ok && res.ID != t.ID() {
|
|
|
|
hasChanges = true
|
|
|
|
res.ID = t.existing.ID
|
|
|
|
}
|
|
|
|
}
|
2020-04-14 23:18:34 +00:00
|
|
|
for _, v := range state.mVariables {
|
2020-06-30 21:54:00 +00:00
|
|
|
res, ok := existingResources[newKey(KindVariable, v.parserVar.MetaName())]
|
2020-04-14 23:18:34 +00:00
|
|
|
if ok && res.ID != v.ID() {
|
|
|
|
hasChanges = true
|
|
|
|
res.ID = v.existing.ID
|
2020-04-03 00:44:27 +00:00
|
|
|
}
|
|
|
|
}
|
2020-04-01 00:01:45 +00:00
|
|
|
}
|
|
|
|
if !hasChanges {
|
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
2020-07-07 22:07:11 +00:00
|
|
|
latestEvent.EventType = StackEventUpdate
|
|
|
|
latestEvent.Sources = sources
|
|
|
|
latestEvent.UpdatedAt = s.timeGen.Now()
|
|
|
|
stack.Events = append(stack.Events, latestEvent)
|
2020-04-01 00:01:45 +00:00
|
|
|
return s.store.UpdateStack(ctx, stack)
|
|
|
|
}
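// updateStackAfterRollback keys the stack's recorded resources by
// (Kind, MetaName): a resource that rollback had to recreate comes back with
// a brand-new platform ID, and the map lets the stack record be re-pointed at
// the ID that actually exists now. A sketch of the reconciliation step, with
// "my-bucket" and restoredID as placeholders:
//
//	index := make(map[key]*StackResource)
//	for i := range latestEvent.Resources {
//		res := &latestEvent.Resources[i]
//		index[newKey(res.Kind, res.MetaName)] = res
//	}
//	if res, ok := index[newKey(KindBucket, "my-bucket")]; ok && res.ID != restoredID {
//		res.ID = restoredID // point the stack at the resource that exists after rollback
//	}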
|
|
|
|
|
2021-03-30 18:10:02 +00:00
|
|
|
func (s *Service) findLabel(ctx context.Context, orgID platform.ID, l *stateLabel) (*influxdb.Label, error) {
|
2020-04-02 22:28:11 +00:00
|
|
|
if l.ID() != 0 {
|
|
|
|
return s.labelSVC.FindLabelByID(ctx, l.ID())
|
2020-04-01 23:44:17 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
existingLabels, err := s.labelSVC.FindLabels(ctx, influxdb.LabelFilter{
|
2020-04-14 20:21:05 +00:00
|
|
|
Name: l.parserLabel.Name(),
|
2020-04-01 23:44:17 +00:00
|
|
|
OrgID: &orgID,
|
|
|
|
}, influxdb.FindOptions{Limit: 1})
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
if len(existingLabels) == 0 {
|
2020-04-14 20:21:05 +00:00
|
|
|
return nil, errors.New("no labels found for name: " + l.parserLabel.Name())
|
2020-04-01 23:44:17 +00:00
|
|
|
}
|
|
|
|
return existingLabels[0], nil
|
|
|
|
}
|
|
|
|
|
2021-03-30 18:10:02 +00:00
|
|
|
func (s *Service) getAllPlatformVariables(ctx context.Context, orgID platform.ID) ([]*influxdb.Variable, error) {
|
2020-04-03 00:44:27 +00:00
|
|
|
const limit = 100
|
|
|
|
|
|
|
|
var (
|
|
|
|
existingVars []*influxdb.Variable
|
|
|
|
offset int
|
|
|
|
)
|
|
|
|
for {
|
|
|
|
vars, err := s.varSVC.FindVariables(ctx, influxdb.VariableFilter{
|
|
|
|
OrganizationID: &orgID,
|
|
|
|
// TODO: would be ideal to extend find variables to allow for a name matcher
|
|
|
|
// since names are unique for vars within an org. In the meantime, use a large
|
|
|
|
// limit on returned vars, which should be more than enough for the time being.
|
|
|
|
}, influxdb.FindOptions{Limit: limit, Offset: offset})
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
existingVars = append(existingVars, vars...)
|
|
|
|
|
|
|
|
if len(vars) < limit {
|
|
|
|
break
|
|
|
|
}
|
|
|
|
offset += len(vars)
|
|
|
|
}
|
|
|
|
return existingVars, nil
|
|
|
|
}
|
|
|
|
|
2021-03-30 18:10:02 +00:00
|
|
|
func (s *Service) getAllChecks(ctx context.Context, orgID platform.ID) ([]influxdb.Check, error) {
|
2020-07-06 19:18:32 +00:00
|
|
|
filter := influxdb.CheckFilter{OrgID: &orgID}
|
|
|
|
const limit = 100
|
|
|
|
|
|
|
|
var (
|
|
|
|
out []influxdb.Check
|
|
|
|
offset int
|
|
|
|
)
|
|
|
|
for {
|
|
|
|
checks, _, err := s.checkSVC.FindChecks(ctx, filter, influxdb.FindOptions{
|
|
|
|
Limit: limit,
|
|
|
|
Offset: offset,
|
|
|
|
})
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
out = append(out, checks...)
|
|
|
|
if len(checks) < limit {
|
|
|
|
break
|
|
|
|
}
|
|
|
|
offset += limit
|
|
|
|
}
|
|
|
|
return out, nil
|
|
|
|
}
|
|
|
|
|
2021-03-30 18:10:02 +00:00
|
|
|
func (s *Service) getNotificationRules(ctx context.Context, orgID platform.ID) ([]influxdb.NotificationRule, error) {
|
2020-07-06 19:18:32 +00:00
|
|
|
filter := influxdb.NotificationRuleFilter{OrgID: &orgID}
|
|
|
|
const limit = 100
|
|
|
|
|
|
|
|
var (
|
|
|
|
out []influxdb.NotificationRule
|
|
|
|
offset int
|
|
|
|
)
|
|
|
|
for {
|
|
|
|
rules, _, err := s.ruleSVC.FindNotificationRules(ctx, filter, influxdb.FindOptions{Limit: limit, Offset: offset})
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
out = append(out, rules...)
|
|
|
|
if len(rules) < limit {
|
|
|
|
break
|
|
|
|
}
|
|
|
|
offset += limit
|
|
|
|
}
|
|
|
|
return out, nil
|
|
|
|
|
|
|
|
}
|
|
|
|
|
2021-04-07 18:42:55 +00:00
|
|
|
func (s *Service) getAllTasks(ctx context.Context, orgID platform.ID) ([]*taskmodel.Task, error) {
|
2020-07-06 19:18:32 +00:00
|
|
|
var (
|
2021-04-07 18:42:55 +00:00
|
|
|
out []*taskmodel.Task
|
2021-03-30 18:10:02 +00:00
|
|
|
afterID *platform.ID
|
2020-07-06 19:18:32 +00:00
|
|
|
)
|
|
|
|
for {
|
2021-04-07 18:42:55 +00:00
|
|
|
f := taskmodel.TaskFilter{
|
2020-07-06 19:18:32 +00:00
|
|
|
OrganizationID: &orgID,
|
2021-04-07 18:42:55 +00:00
|
|
|
Limit: taskmodel.TaskMaxPageSize,
|
2020-07-06 19:18:32 +00:00
|
|
|
}
|
|
|
|
if afterID != nil {
|
|
|
|
f.After = afterID
|
|
|
|
}
|
|
|
|
tasks, _, err := s.taskSVC.FindTasks(ctx, f)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
if len(tasks) == 0 {
|
|
|
|
break
|
|
|
|
}
|
|
|
|
out = append(out, tasks...)
|
|
|
|
afterID = &tasks[len(tasks)-1].ID
|
|
|
|
}
|
|
|
|
return out, nil
|
|
|
|
}
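// The getters above use two pagination styles. The offset-based loops
// (variables, checks) may stop as soon as a short page arrives:
//
//	if len(page) < limit {
//		break // a short page means the collection is exhausted
//	}
//	offset += limit
//
// while the task getter pages by cursor, carrying the last seen ID forward as
// After and running until an empty page comes back:
//
//	if len(page) == 0 {
//		break
//	}
//	after = &page[len(page)-1].ID // cursor for the next request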
|
|
|
|
|
2020-06-30 21:54:00 +00:00
|
|
|
func newSummaryFromStateTemplate(state *stateCoordinator, template *Template) Summary {
|
2020-04-15 19:46:17 +00:00
|
|
|
stateSum := state.summary()
|
2020-06-30 21:54:00 +00:00
|
|
|
stateSum.MissingEnvs = template.missingEnvRefs()
|
|
|
|
stateSum.MissingSecrets = template.missingSecrets()
|
2020-04-17 02:27:58 +00:00
|
|
|
return stateSum
|
2020-04-15 19:46:17 +00:00
|
|
|
}
|
|
|
|
|
2020-04-24 17:59:58 +00:00
|
|
|
func stateLabelsToStackAssociations(stateLabels []*stateLabel) []StackResourceAssociation {
|
|
|
|
var out []StackResourceAssociation
|
|
|
|
for _, l := range stateLabels {
|
|
|
|
out = append(out, StackResourceAssociation{
|
2020-06-26 03:17:11 +00:00
|
|
|
Kind: KindLabel,
|
2020-06-30 21:54:00 +00:00
|
|
|
MetaName: l.parserLabel.MetaName(),
|
2020-04-24 17:59:58 +00:00
|
|
|
})
|
|
|
|
}
|
|
|
|
return out
|
|
|
|
}
|
|
|
|
|
2020-06-18 17:25:43 +00:00
|
|
|
func applyFailErr(method string, ident stateIdentity, err error) error {
	v := ident.id.String()
	if v == "" {
		v = ident.metaName
	}
	msg := fmt.Sprintf("failed to %s %s[%q]", method, ident.resourceType, v)
	return ierrors.Wrap(err, msg)
}
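
// getLabelIDMap resolves label names to IDs. Names that do not resolve to
// exactly one label are silently skipped.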
func getLabelIDMap(ctx context.Context, labelSVC influxdb.LabelService, labelNames []string) (map[platform.ID]bool, error) {
	mLabelIDs := make(map[platform.ID]bool)
	for _, labelName := range labelNames {
		iLabels, err := labelSVC.FindLabels(ctx, influxdb.LabelFilter{
			Name: labelName,
		})
		if err != nil {
			return nil, err
		}
		if len(iLabels) == 1 {
			mLabelIDs[iLabels[0].ID] = true
		}
	}
	return mLabelIDs, nil
}
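
// sortObjects orders objects by kind priority (per kindPriorities), breaking
// ties within a kind by object name.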
func sortObjects(objects []Object) []Object {
	sort.Slice(objects, func(i, j int) bool {
		iName, jName := objects[i].Name(), objects[j].Name()
		iKind, jKind := objects[i].Kind, objects[j].Kind

		if iKind.is(jKind) {
			return iName < jName
		}
		return kindPriorities[iKind] < kindPriorities[jKind]
	})
	return objects
}
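
// doMutex serializes arbitrary critical sections behind a single mutex.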
type doMutex struct {
	sync.Mutex
}

func (m *doMutex) Do(fn func()) {
	m.Lock()
	defer m.Unlock()
	fn()
}
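
// An applier pairs the work to create a batch of resources (creater) with
// the inverse operation to undo them (rollbacker). runTilEnd executes the
// creater entries; rollback replays the recorded rollbackers on failure.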
type (
	applier struct {
		creater    creater
		rollbacker rollbacker
	}

	rollbacker struct {
		resource string
		fn       func(orgID platform.ID) error
	}

	creater struct {
		entries int
		fn      func(ctx context.Context, i int, orgID, userID platform.ID) *applyErrBody
	}
)
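
// rollbackCoordinator runs appliers concurrently (bounded by sem) and records
// their rollback funcs so a failed apply can be unwound.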
type rollbackCoordinator struct {
	logger    *zap.Logger
	rollbacks []rollbacker

	sem chan struct{}
}
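
// newRollbackCoordinator returns a coordinator that runs at most reqLimit
// creater goroutines at a time.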
func newRollbackCoordinator(logger *zap.Logger, reqLimit int) *rollbackCoordinator {
	return &rollbackCoordinator{
		logger: logger,
		sem:    make(chan struct{}, reqLimit),
	}
}
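
// runTilEnd runs every creater entry of every applier to completion, even
// when some entries fail: each entry gets its own goroutine (throttled by
// sem), a 30s timeout, and panic recovery, and all failures are aggregated
// into the single error returned once the wait group drains.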
func (r *rollbackCoordinator) runTilEnd(ctx context.Context, orgID, userID platform.ID, appliers ...applier) error {
	errStr := newErrStream(ctx)

	wg := new(sync.WaitGroup)
	for i := range appliers {
		// Copy the applier out of the loop; the goroutines spawned below must
		// not share the loop variable, which is recycled between iterations.
		app := appliers[i]
		r.rollbacks = append(r.rollbacks, app.rollbacker)
		for idx := range make([]struct{}, app.creater.entries) {
			r.sem <- struct{}{}
			wg.Add(1)

			go func(i int, resource string) {
				defer func() {
					wg.Done()
					<-r.sem
				}()

				ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
				defer cancel()

				defer func() {
					if err := recover(); err != nil {
						r.logger.Error(
							"panic applying "+resource,
							zap.String("stack_trace", fmt.Sprintf("%+v", stack.Trace())),
							zap.Reflect("panic", err),
						)
						errStr.add(errMsg{
							resource: resource,
							err: applyErrBody{
								msg: fmt.Sprintf("panic: %s panicked", resource),
							},
						})
					}
				}()

				if err := app.creater.fn(ctx, i, orgID, userID); err != nil {
					errStr.add(errMsg{resource: resource, err: *err})
				}
			}(idx, app.rollbacker.resource)
		}
	}
	wg.Wait()

	return errStr.close()
}
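
// rollback undoes every recorded rollbacker when *err is non-nil; individual
// rollback failures are logged and do not stop the remaining rollbacks.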
func (r *rollbackCoordinator) rollback(l *zap.Logger, err *error, orgID platform.ID) {
	if *err == nil {
		return
	}

	for _, r := range r.rollbacks {
		if err := r.fn(orgID); err != nil {
			l.Error("failed to delete "+r.resource, zap.Error(err))
		}
	}
}
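
// errMsg and errStream carry failed-apply reports from the worker goroutines
// to a single collector, which aggregates them into the one error returned
// by close.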
type errMsg struct {
	resource string
	err      applyErrBody
}

type errStream struct {
	msgStream chan errMsg
	err       chan error
	done      <-chan struct{}
}

func newErrStream(ctx context.Context) *errStream {
	e := &errStream{
		msgStream: make(chan errMsg),
		err:       make(chan error),
		done:      ctx.Done(),
	}
	e.do()
	return e
}
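
// do starts the collector goroutine: it drains msgStream into a per-resource
// map until the stream closes (or ctx is done), then publishes either nil or
// one error joining every resource's failures on e.err.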
func (e *errStream) do() {
	go func() {
		mErrs := func() map[string]applyErrs {
			mErrs := make(map[string]applyErrs)
			for {
				select {
				case <-e.done:
					return nil
				case msg, ok := <-e.msgStream:
					if !ok {
						return mErrs
					}
					mErrs[msg.resource] = append(mErrs[msg.resource], &msg.err)
				}
			}
		}()

		if len(mErrs) == 0 {
			e.err <- nil
			return
		}

		var errs []string
		for resource, err := range mErrs {
			errs = append(errs, err.toError(resource, "failed to apply resource").Error())
		}
		e.err <- errors.New(strings.Join(errs, "\n"))
	}()
}

func (e *errStream) close() error {
	close(e.msgStream)
	return <-e.err
}

func (e *errStream) add(msg errMsg) {
	select {
	case <-e.done:
	case e.msgStream <- msg:
	}
}

// TODO: clean up apply errors to inform the user in an actionable way
type applyErrBody struct {
	name string
	msg  string
}
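
// applyErrs collects the apply failures for a single resource type; toError
// flattens them into one multi-line error.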
type applyErrs []*applyErrBody

func (a applyErrs) toError(resType, msg string) error {
	if len(a) == 0 {
		return nil
	}
	errMsg := fmt.Sprintf(`resource_type=%q err=%q`, resType, msg)
	for _, e := range a {
		errMsg += fmt.Sprintf("\n\tmetadata_name=%q err_msg=%q", e.name, e.msg)
	}
	return errors.New(errMsg)
}
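
// validURLs rejects the first entry that fails to parse as a URL with an
// EInvalid error.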
func validURLs(urls []string) error {
	for _, u := range urls {
		if _, err := url.Parse(u); err != nil {
			msg := fmt.Sprintf("url invalid for entry %q", u)
			return influxErr(errors2.EInvalid, msg)
		}
	}
	return nil
}
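
// isRestrictedTask reports whether the task's type is anything other than
// taskmodel.TaskSystemType; nil tasks are not restricted.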
func isRestrictedTask(t *taskmodel.Task) bool {
	return t != nil && t.Type != taskmodel.TaskSystemType
}

func isSystemBucket(b *influxdb.Bucket) bool {
	return b != nil && b.Type == influxdb.BucketTypeSystem
}
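
// labelSlcToMap indexes state labels by name for quick lookup.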
func labelSlcToMap(labels []*stateLabel) map[string]*stateLabel {
	m := make(map[string]*stateLabel)
	for i := range labels {
		m[labels[i].Name()] = labels[i]
	}
	return m
}
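
// failedValidationErr wraps a non-nil error as EUnprocessableEntity;
// internalErr (below) does the same with EInternal.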
func failedValidationErr(err error) error {
	if err == nil {
		return nil
	}
	return &errors2.Error{Code: errors2.EUnprocessableEntity, Err: err}
}

func internalErr(err error) error {
	if err == nil {
		return nil
	}
	return influxErr(errors2.EInternal, err)
}
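
// influxErr builds an *errors2.Error with the given code, assigning each
// argument by type: strings and Stringers become the message, errors become
// the wrapped error. Because errArg is appended after rest, it wins when two
// arguments target the same field. For example, influxErr(errors2.EInvalid,
// "bad url") sets Msg, while influxErr(errors2.EInternal, err) sets Err.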
func influxErr(code string, errArg interface{}, rest ...interface{}) *errors2.Error {
	err := &errors2.Error{
		Code: code,
	}
	for _, a := range append(rest, errArg) {
		switch v := a.(type) {
		case string:
			err.Msg = v
		case error:
			err.Err = v
		case nil:
		case interface{ String() string }:
			err.Msg = v.String()
		}
	}
	return err
}