* feat: WIP allow on-the-fly bucket creation
* refactor: fully implement create bucket in both places
* refactor: use a separate popover-based component for selector list bucket creation
* chore: prettier
* chore: cleanup buckets tab
Doesn't need org, overlay state, or any overlay components
* chore: prettier
* refactor: convert CreateBucketOverlay to function component
* chore: changelog
* chore: cleanup
* feat: add integration test for creating a bucket from the query builder
* refactor: rebuild selector list bucket creator with useReducer
* refactor: keeping it DRY - both bucket creation components use the same state management
The capabilities interface will now return a mapping from each capability to
a capabilities object. The capabilities object contains a list of the
features supported by that capability.
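As a rough sketch of that shape (the names here are illustrative, not the actual influxdb interfaces):
```
package storage // hypothetical package name for this sketch

import "context"

// Capability describes one capability and the features it supports.
type Capability struct {
	Features []string
}

// CapabilityLister sketches the revised shape: it returns a mapping from
// capability name to a Capability object, so callers can discover which
// features each capability supports.
type CapabilityLister interface {
	GetCapabilities(ctx context.Context) (map[string]Capability, error)
}
```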
This modifies the read window aggregate interfaces to future-proof them
if and when we add additional capabilities to the method. Previously,
the interface was all or nothing: if we modified the RPC call itself, we
would have to create a new interface to signal the change to the Go code.
This changes the interface so that a `WindowAggregateCapability` now exists.
That way, we can modify the struct to include things like:
```
type WindowAggregateCapability struct {
	WindowPeriodCapability  bool
	MeanAggregateCapability bool
}
```
With this, the caller can learn whether the RPC call itself supports a
specific option. If the first iteration doesn't support a mean aggregate, or
if the mean aggregate is only supported by single-server implementations, the
window aggregate can tell the caller that it won't be able to compute
the mean aggregate.
Since it fills in a struct with these capabilities, the struct can
safely introduce new values. If a downstream consumer wants to take
advantage of that functionality, then all interfaces in the chain have
to be updated to consume the upstream capabilities.
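For illustration, a caller might consult the capability struct before pushing an aggregate down; everything here besides the `WindowAggregateCapability` fields is a hypothetical stand-in:
```
// Sketch of a caller consulting the capability struct before pushing a
// mean aggregate down to storage. The reader, spec, and the fallback
// helper are hypothetical stand-ins for the real types.
func readMean(ctx context.Context, reader Reader, spec Spec) (Table, error) {
	caps := reader.GetWindowAggregateCapability(ctx)
	if caps == nil || !caps.MeanAggregateCapability {
		// The store cannot compute the mean; fall back to reading raw
		// points and aggregating in the query engine instead.
		return readAndAggregateLocally(ctx, reader, spec)
	}
	// Safe to push the mean aggregate down to the storage layer.
	return reader.ReadWindowAggregate(ctx, spec)
}
```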
The `ReadWindowAggregateSource` will invoke the `ReadWindowAggregate`
method on the `influxdb.Reader` and return the table. It is implemented
using the same common methods that are used for the other sources.
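Roughly, the source follows the same shape as the other read sources; this is a sketch with illustrative field and helper names, not the actual implementation:
```
// Sketch of the source, following the pattern shared by the other read
// sources. The embedded Source, the spec type, and processTables are
// illustrative names for the common pieces mentioned above.
type ReadWindowAggregateSource struct {
	Source // common behavior shared with the other read sources
	reader influxdb.Reader
	spec   influxdb.ReadWindowAggregateSpec
}

func (s *ReadWindowAggregateSource) run(ctx context.Context) error {
	tables, err := s.reader.ReadWindowAggregate(ctx, s.spec)
	if err != nil {
		return err
	}
	// Hand the result to the downstream transformations via the shared
	// table-processing helper, the same as the other sources.
	return s.processTables(ctx, tables)
}
```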
fix(package.json): update the cloud-dev command so CLOUD_URL lands on the same default port as Quartz
Co-authored-by: Iris Scholten <ischolten.is@gmail.com>
Added an interface for an additional storage capability. This interface
allows checking whether the reader supports the window aggregate call, and
provides another method for invoking the call if it does.
This is implemented using a single interface. If the reader implements
the interface, it indicates that the client is capable of reading the
response. The `HasXXX` method checks whether the store supports the
operation; it takes a context because the check could require a remote
call, or have to wait for one.
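Concretely, the shape is something like this sketch (names beyond `ReadWindowAggregate` are illustrative):
```
// Sketch of the capability interface described above. A reader that
// implements it signals that the client can read the response.
type WindowAggregateReader interface {
	// HasWindowAggregateCapability reports whether the store supports
	// the window aggregate operation. It takes a context because the
	// check may require a remote call, or may have to wait for one.
	HasWindowAggregateCapability(ctx context.Context) bool

	// ReadWindowAggregate invokes the call when the store supports it.
	ReadWindowAggregate(ctx context.Context, spec ReadWindowAggregateSpec) (TableIterator, error)
}
```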
Boonito QA requested that nightly builds start two hours earlier, at 5am UTC (10pm PDT) instead of 7am UTC, to give them time to debug test failures.
One thing to note here: we delete the default value on the host flag when
it is registered. The config is the fallback and carries the default
value. If the host flag had its own default, there would be no way to tell
whether the user set it or not, and we can't have that ambiguity.
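A minimal sketch of the pattern with spf13/cobra (`flags.host` and `cfg.Host` are illustrative names):
```
// Register the host flag with an empty default so Changed() is
// unambiguous; the config carries the real default value.
cmd.Flags().StringVar(&flags.host, "host", "", "HTTP address of InfluxDB")

host := cfg.Host // fallback: the default value lives in the config
if cmd.Flags().Changed("host") {
	// Unambiguous: the user explicitly set the flag.
	host = flags.host
}
```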
closes: #17812
Notes on this commit. This commit was grueling ;-(. The task API is not a friendly
API to consume. There are a lot of non-obvious things going on, and almost every
one of them tripped me up. Things of note:
* the http.TaskService does not satisfy the influxdb.TaskService,
making it impossible to use as a dependency if the tasks service gets
split out
* the APIs for create and update do not share common types. For example:
creating a task takes every field as a string, but in the update it is
taken as an options.Duration type. A step further, and you'll notice that
create does not need an option to be provided, but the update does. It's
jarring trying to understand the indirection here. I struggled mightily
trying to make sense of it all with the indirection and differing types.
It made for a very difficult task (no pun intended) when it should have been
trivial. There is an opportunity here to fix these up, make this API more
uniform, and remove unnecessary complexity like the options type (see the
sketch after this list).
* Nested IDs that get marshaled are no bueno when you want to marshal a task
that does not have an ID in it, whether for the user, org, or self IDs. It's
a challenge just to do that.
* Lots of places in the kv.Task portion where we hit errors and log them, and
others where we return. It isn't clear what is happening. The kv
implementation is also very procedural, and I found myself bouncing around
like a ping pong ball trying to make heads or tails of it.
* There is auth buried deep inside the kv.Task implementation that kept throwing me
off because it kept emitting errors instead of warns. I assume (not sure if I'm
correct on this) that the stuff being logged is considered inconsequential
to the task working. I had lots of errors from the auth buried in there and hadn't
a clue what to make of them...
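To illustrate the create/update asymmetry called out above, here is a hypothetical sketch; these are not the actual influxdb types, just the shape of the problem:
```
// Hypothetical illustration of the create/update asymmetry; not the
// actual influxdb types, just the shape of the problem.
type TaskCreate struct {
	Flux  string
	Every string // plain string on create...
}

type TaskUpdate struct {
	Flux  *string
	Every *options.Duration // ...but a wrapped options type on update
}
```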
Leaving these notes here as a look back at why working with tasks is so
difficult. This API can improve dramatically. I spent 5x as much time trying
to figure out how to use the task API, in procedural calls, as I did
writing the business logic to consume it... that's a scary realization ;-(
references: #17434
Also drops a test that has been skipped for over a year. I tried
unskipping it, but it now fails for all sorts of reasons, even without the
race flag enabled.