* feat: `PartitionRepo::list_ids`
* refactor: `CatalogPartitionsSource` => `CatalogToCompactPartitionsSource`
* feat: allow the compactor to process all known partitions
Closes #6648.
* docs: improve
---------
Co-authored-by: Andrew Lamb <alamb@influxdata.com>
* chore: Add more tests
* chore: Fix default ordering; implement ORDER BY
* feat: Add EXPLAIN support
* chore: Add additional tests to validate GROUP BY expansion
* chore: More test cases for TZ, and failing log scalar function
This fixes an issue where persistence that never completes blocks the
periodic enqueuing of persist tasks. This causes the amount of buffered
data in the buffer tree to grow while the persist queue depth stays the
same, instead of the buffer draining.
This is an issue as the queue depth is designed to act as the
back-pressure of the ingester - once the depth exceeds a configurable
limit, further writes are rejected until the queue has drained
sufficiently (50%).
After this commit, stalled persistence (e.g. an object store outage)
will no longer prevent the queue depth from growing, which should enable
the saturation protection to kick in.
- do not wait for a non-empty partition result (this doesn't make sense
if we are not running endlessly)
- modify entry point to allow the compactor to exit on its own (this is
normally not allowed for other server types)
Ignore partitions that were throttled or filtered due to the "not
unique" combo.
This is in line with the "partitions source", so the metrics for
"partitions in" and "partitions out" line up.
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
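Roughly, the accounting change looks like this (a hypothetical sketch; the names and types are illustrative, not the actual compactor code):

```rust
use std::collections::HashSet;

/// Hypothetical sketch: drop "not unique" (already in-flight) partitions
/// *before* the "partitions in" counter is bumped, so it matches the
/// "partitions out" counter downstream.
struct UniquePartitionsSource {
    in_flight: HashSet<i64>,
    partitions_in: u64,
}

impl UniquePartitionsSource {
    fn fetch(&mut self, candidates: Vec<i64>) -> Vec<i64> {
        // `insert` returns false for duplicates, filtering them out here.
        let unique: Vec<i64> = candidates
            .into_iter()
            .filter(|id| self.in_flight.insert(*id))
            .collect();

        // Count only what is actually handed to the compactor.
        self.partitions_in += unique.len() as u64;
        unique
    }
}
```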
* refactor: propagate origin argument to gap fill operator
* refactor: add param expressions to from_template
* chore: add more validation for gap fill queries
* feat: extract stride, first and last from gap fill params
* chore: clippy
* refactor: code review feedback
* chore: update for changed result type
---------
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
This debugging tool was more useful in previous situations where it was
harder to get real data as input for the compactor.
It's currently causing a flaky test that isn't worth investigating.
Fixes #6190 by making it moot.
Co-authored-by: kodiakhq[bot] <49736102+kodiakhq[bot]@users.noreply.github.com>
Use the namespace schema cache in the router to enforce the
per-namespace table limit (service protection limit), adding O(1)
overhead to the existing column limit evaluation logic.
Prior to this commit, each request that would breach the table limit
would be (potentially partially) applied to the catalog and return an
error. Every subsequent request creating a new table continued to cause
a catalog query, unnecessarily adding load proportional to request
counts.
After this commit, catalog requests are sent when the router instance
can determine (to the best of its ability, see below) that the request
will not cause the namespace to exceed the table limit.
Because this uses cached schemas, the actual set of tables may
have changed - this will cause inconsistent enforcement and spurious
errors in the same way it currently does for the column limit. For more
details (and to track a resolution) see:
https://github.com/influxdata/influxdb_iox/issues/5957
The maximum number of tables is part of the Namespace, which is already
loaded in its entirety. This commit copies the value into the
NamespaceSchema, making it available for the router to utilise.
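Sketched together, the two changes might look like this (hypothetical, heavily simplified types; the real NamespaceSchema and router code differ):

```rust
use std::collections::BTreeMap;

/// Simplified stand-in for the cached schema; the real NamespaceSchema
/// carries considerably more state.
struct NamespaceSchema {
    tables: BTreeMap<String, TableSchema>,
    /// Copied from the Namespace so the limit is available in the cache.
    max_tables: usize,
}

struct TableSchema; // details elided

/// Cache-only check, O(1) per table in the write: the catalog request
/// is sent only when the write cannot push the namespace over its table
/// limit. Because the cache may be stale, enforcement can be wrong in
/// the same way it is for the column limit.
fn would_exceed_table_limit(schema: &NamespaceSchema, tables_in_write: &[&str]) -> bool {
    let new_tables = tables_in_write
        .iter()
        .filter(|name| !schema.tables.contains_key(**name))
        .count();
    schema.tables.len() + new_tables > schema.max_tables
}
```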
Ensure a HTTP error response contains a well-formed JSON structure
containing "code" and "message" fields (for backwards compatibility with
existing InfluxDB versions) and a correct "content-type" header.
No "Content-Type" header has ever been returned, even though the
response body has always been hard-coded to a JSON string.
This commit returns a content type of "application/json" for all
JSON-encoded HTTP errors.
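A minimal sketch of the resulting shape (illustrative only, using the http and serde_json crates rather than the actual router code):

```rust
use http::{header::CONTENT_TYPE, Response, StatusCode};

/// Illustrative sketch: every error response carries a JSON body with
/// "code" and "message" fields (matching existing InfluxDB versions)
/// and the previously missing Content-Type header.
fn error_response(status: StatusCode, code: &str, message: &str) -> Response<String> {
    let body = serde_json::json!({
        "code": code,
        "message": message,
    })
    .to_string();

    Response::builder()
        .status(status)
        .header(CONTENT_TYPE, "application/json")
        .body(body)
        .expect("valid response")
}
```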