Merge branch 'master' into feature/pr-5974-add-api-examples-to-cache-guides

pull/6277/head
Jason Stirnaman 2025-08-14 08:50:01 -05:00 committed by GitHub
commit b4f0f818b3
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
34 changed files with 6339 additions and 2677 deletions

View File

@ -31,7 +31,7 @@ LogicalPlan
[Mm]onitor
MBs?
PBs?
Parquet
Parquet|\b\w*-*parquet-\w*\b|\b--\w*parquet\w*\b|`[^`]*parquet[^`]*`
Redoc
SQLAlchemy


@ -26,6 +26,7 @@ related:
- /influxdb3/cloud-dedicated/reference/influxql/
- /influxdb3/cloud-dedicated/reference/sql/
- /influxdb3/cloud-dedicated/query-data/execute-queries/troubleshoot/
- /influxdb3/cloud-dedicated/query-data/troubleshoot-and-optimize/query-timeout-best-practices/
list_code_example: |
```py
@ -240,7 +241,8 @@ from influxdb_client_3 import InfluxDBClient3
client = InfluxDBClient3(
host='{{< influxdb/host >}}',
token='DATABASE_TOKEN',
database='DATABASE_NAME'
database='DATABASE_NAME',
timeout=60 # Set default timeout to 60 seconds
)
```
{{% /code-placeholders %}}
@ -275,6 +277,7 @@ client = InfluxDBClient3(
host="{{< influxdb/host >}}",
token='DATABASE_TOKEN',
database='DATABASE_NAME',
timeout=60, # Set default timeout to 60 seconds
flight_client_options=flight_client_options(
tls_root_certs=cert))
...
@ -332,7 +335,8 @@ client = InfluxDBClient3(
# Execute the query and return an Arrow table
table = client.query(
query="SELECT * FROM home",
language="sql"
language="sql",
timeout=30 # Override default timeout for simple queries (30 seconds)
)
print("\n#### View Schema information\n")
@ -377,7 +381,8 @@ client = InfluxDBClient3(
# Execute the query and return an Arrow table
table = client.query(
query="SELECT * FROM home",
language="influxql"
language="influxql",
timeout=30 # Override default timeout for simple queries (30 seconds)
)
print("\n#### View Schema information\n")


@ -13,6 +13,7 @@ influxdb3/cloud-dedicated/tags: [query, sql, influxql, influxctl, CLI]
related:
- /influxdb3/cloud-dedicated/reference/cli/influxctl/query/
- /influxdb3/cloud-dedicated/get-started/query/#execute-an-sql-query, Get started querying data
- /influxdb3/cloud-dedicated/query-data/troubleshoot-and-optimize/query-timeout-best-practices/, Query timeout best practices
- /influxdb3/cloud-dedicated/reference/sql/
- /influxdb3/cloud-dedicated/reference/influxql/
list_code_example: |
@ -142,6 +143,34 @@ Replace the following:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
Name of the database to query
## Query timeouts
The [`influxctl --timeout` global flag](/influxdb3/cloud-dedicated/reference/cli/influxctl/) sets the maximum duration for API calls, including query requests.
If a query takes longer than the specified timeout, the operation is canceled.
### Timeout examples
Use different timeout values based on your query type:
{{% code-placeholders "DATABASE_(TOKEN|NAME)" %}}
```sh
# Shorter timeout for testing dashboard queries (10 seconds)
influxctl query \
--timeout 10s \
--token DATABASE_TOKEN \
--database DATABASE_NAME \
"SELECT AVG(temperature) FROM sensors WHERE time >= now() - INTERVAL '1 day'"
# Longer timeout for analytical queries (5 minutes)
influxctl query \
--timeout 5m \
--token DATABASE_TOKEN \
--database DATABASE_NAME \
"SELECT room, AVG(temperature) FROM sensors WHERE time >= now() - INTERVAL '30 days' GROUP BY room"
```
{{% /code-placeholders %}}
For guidance on selecting appropriate timeout values, see [Query timeout best practices](/influxdb3/cloud-dedicated/query-data/troubleshoot-and-optimize/query-timeout-best-practices/).
## Output format
@ -243,7 +272,7 @@ influxctl query \
{{% /influxdb/custom-timestamps %}}
{{< expand-wrapper >}}
{{% expand "View example results with unix nanosecond timestamps" %}}
{{% expand "View example results with Unix nanosecond timestamps" %}}
{{% influxdb/custom-timestamps %}}
```
+-------+--------+---------+------+---------------------+


@ -0,0 +1,17 @@
---
title: Query timeout best practices
description: Learn how to set appropriate query timeouts to balance performance and resource protection.
menu:
influxdb3_cloud_dedicated:
name: Query timeout best practices
parent: Troubleshoot and optimize queries
weight: 205
related:
- /influxdb3/cloud-dedicated/reference/client-libraries/v3/
- /influxdb3/cloud-dedicated/query-data/execute-queries/influxctl-cli/
source: shared/influxdb3-query-guides/query-timeout-best-practices.md
---
<!--
//SOURCE - content/shared/influxdb3-query-guides/query-timeout-best-practices.md
-->


@ -12,6 +12,7 @@ related:
- /influxdb3/cloud-dedicated/query-data/sql/
- /influxdb3/cloud-dedicated/query-data/influxql/
- /influxdb3/cloud-dedicated/reference/client-libraries/v3/
- /influxdb3/cloud-dedicated/query-data/troubleshoot-and-optimize/query-timeout-best-practices/
aliases:
- /influxdb3/cloud-dedicated/query-data/execute-queries/troubleshoot/
- /influxdb3/cloud-dedicated/query-data/troubleshoot-and-optimize/trace/
@ -30,7 +31,9 @@ If a query doesn't return any data, it might be due to the following:
- Your data falls outside the time range (or other conditions) in the query--for example, the InfluxQL `SHOW TAG VALUES` command uses a default time range of 1 day.
- The query (InfluxDB server) timed out.
- The query client timed out.
- The query client timed out.
See [Query timeout best practices](/influxdb3/cloud-dedicated/query-data/troubleshoot-and-optimize/query-timeout-best-practices/)
for guidance on setting appropriate timeouts.
- The query return type is not supported by the client library.
For example, array or list types may not be supported.
In this case, use `array_to_string()` to convert the array value to a string--for example:


@ -10,101 +10,15 @@ menu:
influxdb3_cloud_dedicated:
name: Troubleshoot issues
parent: Write data
influxdb3/cloud-dedicated/tags: [write, line protocol, errors]
influxdb3/cloud-dedicated/tags: [write, line protocol, errors, partial writes]
related:
- /influxdb3/cloud-dedicated/get-started/write/
- /influxdb3/cloud-dedicated/reference/syntax/line-protocol/
- /influxdb3/cloud-dedicated/write-data/best-practices/
- /influxdb3/cloud-dedicated/reference/internals/durability/
source: /shared/influxdb3-write-guides/troubleshoot-distributed.md
---
Learn how to avoid unexpected results and recover from errors when writing to {{% product-name %}}.
- [Handle write responses](#handle-write-responses)
- [Review HTTP status codes](#review-http-status-codes)
- [Troubleshoot failures](#troubleshoot-failures)
- [Troubleshoot rejected points](#troubleshoot-rejected-points)
## Handle write responses
{{% product-name %}} does the following when you send a write request:
1. Validates the request.
2. If successful, attempts to [ingest data](/influxdb3/cloud-dedicated/reference/internals/durability/#data-ingest) from the request body; otherwise, responds with an [error status](#review-http-status-codes).
3. Ingests or rejects data in the batch and returns one of the following HTTP status codes:
- `204 No Content`: All data in the batch is ingested.
- `400 Bad Request`: Some (_when **partial writes** are configured for the cluster_) or all of the data has been rejected. Data that has not been rejected is ingested and queryable.
The response body contains error details about [rejected points](#troubleshoot-rejected-points), up to 100 points.
Writes are synchronous--the response status indicates the final status of the write and all ingested data is queryable.
To ensure that InfluxDB handles writes in the order you request them,
wait for the response before you send the next request.
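Based on this response contract, client-side handling can be sketched as follows. This is a minimal sketch, not part of the client API: the function name and argument shapes are illustrative.

```python
import json

def summarize_write_response(status_code, body_text):
    """Interpret a synchronous write response as described above."""
    if status_code == 204:
        return "all points in the batch were ingested"
    if status_code == 400:
        # The body describes rejected points: `line` is the first rejected
        # line, `message` lists one error per rejected point (up to 100).
        err = json.loads(body_text)
        first_error = err["message"].splitlines()[0]
        return f"rejected points starting at line {err['line']}: {first_error}"
    return f"write failed with HTTP status {status_code}"
```

Waiting on this result before sending the next batch preserves write ordering.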
### Review HTTP status codes
InfluxDB uses conventional HTTP status codes to indicate the success or failure of a request.
The `message` property of the response body may contain additional details about the error.
{{< product-name >}} returns one of the following HTTP status codes for a write request:
| HTTP response code | Response body | Description |
|:------------------------------|:------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `204 "No Content"`            | no response body | If InfluxDB ingested all of the data in the batch |
| `400 "Bad request"` | error details about rejected points, up to 100 points: `line` contains the first rejected line, `message` describes rejections | If some (_when **partial writes** are configured for the cluster_) or all request data isn't allowed (for example, if it is malformed or falls outside of the bucket's retention period)--the response body indicates whether a partial write has occurred or if all data has been rejected |
| `401 "Unauthorized"` | | If the `Authorization` header is missing or malformed or if the [token](/influxdb3/cloud-dedicated/admin/tokens/) doesn't have [permission](/influxdb3/cloud-dedicated/reference/cli/influxctl/token/create/#examples) to write to the database. See [examples using credentials](/influxdb3/cloud-dedicated/get-started/write/#write-line-protocol-to-influxdb) in write requests. |
| `404 "Not found"` | requested **resource type** (for example, "organization" or "database"), and **resource name** | If a requested resource (for example, organization or database) wasn't found |
| `422 "Unprocessable Entity"`  | `message` contains details about the error | If the data isn't allowed (for example, falls outside of the database's retention period) |
| `500 "Internal server error"` | | Default status for an error |
| `503 "Service unavailable"`   | | If the server is temporarily unavailable to accept writes. The `Retry-After` header contains the number of seconds to wait before trying the write again. |
The `message` property of the response body may contain additional details about the error.
If your data did not write to the database, see how to [troubleshoot rejected points](#troubleshoot-rejected-points).
## Troubleshoot failures
If you notice data is missing in your database, do the following:
- Check the [HTTP status code](#review-http-status-codes) in the response.
- Check the `message` property in the response body for details about the error.
- If the `message` describes a field error, [troubleshoot rejected points](#troubleshoot-rejected-points).
- Verify all lines contain valid syntax ([line protocol](/influxdb3/cloud-dedicated/reference/syntax/line-protocol/)).
- Verify the timestamps in your data match the [precision parameter](/influxdb3/cloud-dedicated/reference/glossary/#precision) in your request.
- Minimize payload size and network errors by [optimizing writes](/influxdb3/cloud-dedicated/write-data/best-practices/optimize-writes/).
## Troubleshoot rejected points
When writing points from a batch, InfluxDB rejects points that have syntax errors or schema conflicts.
If InfluxDB processes the data in your batch and then rejects points, the [HTTP response](#handle-write-responses) body contains the following properties that describe rejected points:
- `code`: `"invalid"`
- `line`: the line number of the _first_ rejected point in the batch.
- `message`: a string that contains line-separated error messages, one message for each rejected point in the batch, up to 100 rejected points.
InfluxDB rejects points for the following reasons:
- a line protocol parsing error
- an invalid timestamp
- a schema conflict
Schema conflicts occur when you try to write data that contains any of the following:
- a wrong data type: the point falls within the same partition (default partitioning is measurement and day) as existing bucket data and contains a different data type for an existing field
- a tag and a field that use the same key
### Example
The following example shows a response body for a write request that contains two rejected points:
```json
{
"code": "invalid",
"line": 2,
"message": "failed to parse line protocol:
errors encountered on line(s):
error parsing line 2 (1-based): Invalid measurement was provided
error parsing line 4 (1-based): Unable to parse timestamp value '123461000000000000000000000000'"
}
```
Check for [field data type](/influxdb3/cloud-dedicated/reference/syntax/line-protocol/#data-types-and-format) differences between the rejected data point and points within the same database and partition--for example, did you attempt to write `string` data to an `int` field?
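For example, given an existing float field, a later point that writes a string to the same field in the same partition is rejected (the measurement, tag, and field names here are illustrative):

```
home,room=Kitchen temp=23.1 1640995200000000000
home,room=Kitchen temp="warm" 1641081600000000000
```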
<!-- The content for this page is at
//SOURCE - content/shared/influxdb3-write-guides/troubleshoot-distributed.md
-->


@ -27,6 +27,7 @@ related:
- /influxdb3/cloud-serverless/reference/influxql/
- /influxdb3/cloud-serverless/reference/sql/
- /influxdb3/cloud-serverless/query-data/execute-queries/troubleshoot/
- /influxdb3/cloud-serverless/query-data/troubleshoot-and-optimize/query-timeout-best-practices/
list_code_example: |
```py
@ -241,7 +242,8 @@ from influxdb_client_3 import InfluxDBClient3
client = InfluxDBClient3(
host='{{< influxdb/host >}}',
token='API_TOKEN',
database='BUCKET_NAME'
database='BUCKET_NAME',
timeout=30 # Set default timeout to 30 seconds for serverless
)
```
{{% /code-placeholders %}}
@ -332,7 +334,8 @@ client = InfluxDBClient3(
# Execute the query and return an Arrow table
table = client.query(
query="SELECT * FROM home",
language="sql"
language="sql",
timeout=10 # Override default timeout for simple queries (10 seconds)
)
print("\n#### View Schema information\n")
@ -377,7 +380,8 @@ client = InfluxDBClient3(
# Execute the query and return an Arrow table
table = client.query(
query="SELECT * FROM home",
language="influxql"
language="influxql",
timeout=10 # Override default timeout for simple queries (10 seconds)
)
print("\n#### View Schema information\n")


@ -0,0 +1,17 @@
---
title: Query timeout best practices
description: Learn how to set appropriate query timeouts to balance performance and resource protection.
menu:
influxdb3_cloud_serverless:
name: Query timeout best practices
parent: Troubleshoot and optimize queries
identifier: query-timeout-best-practices
weight: 201
related:
- /influxdb3/cloud-serverless/reference/client-libraries/v3/
source: shared/influxdb3-query-guides/query-timeout-best-practices.md
---
<!--
//SOURCE - content/shared/influxdb3-query-guides/query-timeout-best-practices.md
-->


@ -12,6 +12,7 @@ related:
- /influxdb3/cloud-serverless/query-data/sql/
- /influxdb3/cloud-serverless/query-data/influxql/
- /influxdb3/cloud-serverless/reference/client-libraries/v3/
- /influxdb3/cloud-serverless/query-data/troubleshoot-and-optimize/query-timeout-best-practices/
aliases:
- /influxdb3/cloud-serverless/query-data/execute-queries/troubleshoot/
---
@ -29,7 +30,9 @@ If a query doesn't return any data, it might be due to the following:
- Your data falls outside the time range (or other conditions) in the query--for example, the InfluxQL `SHOW TAG VALUES` command uses a default time range of 1 day.
- The query (InfluxDB server) timed out.
- The query client timed out.
- The query client timed out.
See [Query timeout best practices](/influxdb3/cloud-serverless/query-data/troubleshoot-and-optimize/query-timeout-best-practices/)
for guidance on setting appropriate timeouts.
- The query return type is not supported by the client library.
For example, array or list types may not be supported.
In this case, use `array_to_string()` to convert the array value to a string--for example:


@ -10,103 +10,15 @@ menu:
influxdb3_cloud_serverless:
name: Troubleshoot issues
parent: Write data
influxdb3/cloud-serverless/tags: [write, line protocol, errors]
influxdb3/cloud-serverless/tags: [write, line protocol, errors, partial writes]
related:
- /influxdb3/cloud-serverless/get-started/write/
- /influxdb3/cloud-serverless/reference/syntax/line-protocol/
- /influxdb3/cloud-serverless/write-data/best-practices/
- /influxdb3/cloud-serverless/reference/internals/durability/
source: /shared/influxdb3-write-guides/troubleshoot-distributed.md
---
Learn how to avoid unexpected results and recover from errors when writing to {{% product-name %}}.
- [Handle write responses](#handle-write-responses)
- [Review HTTP status codes](#review-http-status-codes)
- [Troubleshoot failures](#troubleshoot-failures)
- [Troubleshoot rejected points](#troubleshoot-rejected-points)
## Handle write responses
{{% product-name %}} does the following when you send a write request:
1. Validates the request.
2. If successful, attempts to [ingest data](/influxdb3/cloud-serverless/reference/internals/durability/#data-ingest) from the request body; otherwise, responds with an [error status](#review-http-status-codes).
3. Ingests or rejects data from the batch and returns one of the following HTTP status codes:
- `204 No Content`: All of the data is ingested and queryable.
- `400 Bad Request`: Some or all of the data has been rejected. Data that has not been rejected is ingested and queryable.
The response body contains error details about [rejected points](#troubleshoot-rejected-points), up to 100 points.
Writes are synchronous--the response status indicates the final status of the write and all ingested data is queryable.
To ensure that InfluxDB handles writes in the order you request them,
wait for the response before you send the next request.
### Review HTTP status codes
InfluxDB uses conventional HTTP status codes to indicate the success or failure of a request.
The `message` property of the response body may contain additional details about the error.
{{< product-name >}} returns one of the following HTTP status codes for a write request:
| HTTP response code | Response body | Description |
| :-------------------------------| :--------------------------------------------------------------- | :------------- |
| `204 "No Content"` | no response body | If InfluxDB ingested all of the data in the batch |
| `400 "Bad request"` | error details about rejected points, up to 100 points: `line` contains the first rejected line, `message` describes rejections | If some or all request data isn't allowed (for example, is malformed or falls outside of the bucket's retention period)--the response body indicates whether a partial write has occurred or if all data has been rejected |
| `401 "Unauthorized"` | | If the `Authorization` header is missing or malformed or if the [token](/influxdb3/cloud-serverless/admin/tokens/) doesn't have [permission](/influxdb3/cloud-serverless/admin/tokens/create-token/) to write to the bucket. See [examples using credentials](/influxdb3/cloud-serverless/get-started/write/#write-line-protocol-to-influxdb) in write requests. |
| `404 "Not found"` | requested **resource type** (for example, "organization" or "bucket"), and **resource name** | If a requested resource (for example, organization or bucket) wasn't found |
| `413 "Request too large"`     | cannot read data: points in batch is too large | If a request exceeds the maximum [global limit](/influxdb3/cloud-serverless/admin/billing/limits/) |
| `429 "Too many requests"`     | | If the number of requests exceeds your plan's [adjustable service quotas](/influxdb3/cloud-serverless/admin/billing/limits/#adjustable-service-quotas). The `Retry-After` header contains the number of seconds to wait before trying the write again. |
| `500 "Internal server error"` | | Default status for an error |
| `503 "Service unavailable"`   | | If the server is temporarily unavailable to accept writes. The `Retry-After` header contains the number of seconds to wait before trying the write again. |
The `message` property of the response body may contain additional details about the error.
If your data did not write to the bucket, see how to [troubleshoot rejected points](#troubleshoot-rejected-points).
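For the retryable statuses in the table above (`429` and `503`), the `Retry-After` header tells the client how long to wait. A minimal helper for computing that delay might look like the following; the function name and the one-second fallback are illustrative assumptions, not part of any client library.

```python
def retry_delay_seconds(status_code, headers):
    """Return how long to wait before retrying a write, or None if the
    response is not retryable.

    429 and 503 responses include a Retry-After header with the number
    of seconds to wait before trying the write again.
    """
    if status_code in (429, 503):
        return int(headers.get("Retry-After", "1"))
    return None
```

A write loop would sleep for the returned number of seconds before resending the same batch, and treat `None` as a non-retryable failure.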
## Troubleshoot failures
If you notice data is missing in your database, do the following:
- Check the [HTTP status code](#review-http-status-codes) in the response.
- Check the `message` property in the response body for details about the error.
- If the `message` describes a field error, [troubleshoot rejected points](#troubleshoot-rejected-points).
- Verify all lines contain valid syntax ([line protocol](/influxdb3/cloud-serverless/reference/syntax/line-protocol/)).
- Verify the timestamps in your data match the [precision parameter](/influxdb3/cloud-serverless/reference/glossary/#precision) in your request.
- Minimize payload size and network errors by [optimizing writes](/influxdb3/cloud-serverless/write-data/best-practices/optimize-writes/).
## Troubleshoot rejected points
When writing points from a batch, InfluxDB rejects points that have syntax errors or schema conflicts.
If InfluxDB processes the data in your batch and then rejects points, the [HTTP response](#handle-write-responses) body contains the following properties that describe rejected points:
- `code`: `"invalid"`
- `line`: the line number of the _first_ rejected point in the batch.
- `message`: a string that contains line-separated error messages, one message for each rejected point in the batch, up to 100 rejected points.
InfluxDB rejects points for the following reasons:
- a line protocol parsing error
- an invalid timestamp
- a schema conflict
Schema conflicts occur when you try to write data that contains any of the following:
- a wrong data type: the point falls within the same partition (default partitioning is measurement and day) as existing bucket data and contains a different data type for an existing field
- a tag and a field that use the same key
### Example
The following example shows a response body for a write request that contains two rejected points:
```json
{
"code": "invalid",
"line": 2,
"message": "failed to parse line protocol:
errors encountered on line(s):
error parsing line 2 (1-based): Invalid measurement was provided
error parsing line 4 (1-based): Unable to parse timestamp value '123461000000000000000000000000'"
}
```
Check for [field data type](/influxdb3/cloud-serverless/reference/syntax/line-protocol/#data-types-and-format) differences between the rejected data point and points within the same database and partition--for example, did you attempt to write `string` data to an `int` field?
<!-- The content for this page is at
//SOURCE - content/shared/influxdb3-write-guides/troubleshoot-distributed.md
-->


@ -20,6 +20,7 @@ related:
- /influxdb3/clustered/query-data/sql/
- /influxdb3/clustered/reference/influxql/
- /influxdb3/clustered/reference/sql/
- /influxdb3/clustered/query-data/troubleshoot-and-optimize/query-timeout-best-practices/
list_code_example: |
```py
@ -234,7 +235,8 @@ from influxdb_client_3 import InfluxDBClient3
client = InfluxDBClient3(
host='{{< influxdb/host >}}',
token='DATABASE_TOKEN',
database='DATABASE_NAME'
database='DATABASE_NAME',
timeout=60 # Set default timeout to 60 seconds
)
```
{{% /code-placeholders %}}
@ -325,7 +327,8 @@ client = InfluxDBClient3(
# Execute the query and return an Arrow table
table = client.query(
query="SELECT * FROM home",
language="sql"
language="sql",
timeout=30 # Override default timeout for simple queries (30 seconds)
)
print("\n#### View Schema information\n")
@ -370,7 +373,8 @@ client = InfluxDBClient3(
# Execute the query and return an Arrow table
table = client.query(
query="SELECT * FROM home",
language="influxql"
language="influxql",
timeout=30 # Override default timeout for simple queries (30 seconds)
)
print("\n#### View Schema information\n")


@ -12,6 +12,7 @@ influxdb3/clustered/tags: [query, sql, influxql, influxctl, CLI]
related:
- /influxdb3/clustered/reference/cli/influxctl/query/
- /influxdb3/clustered/get-started/query/#execute-an-sql-query, Get started querying data
- /influxdb3/clustered/query-data/troubleshoot-and-optimize/query-timeout-best-practices/, Query timeout best practices
- /influxdb3/clustered/reference/sql/
- /influxdb3/clustered/reference/influxql/
list_code_example: |
@ -141,6 +142,35 @@ Replace the following:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
Name of the database to query
## Query timeouts
The [`influxctl --timeout` global flag](/influxdb3/clustered/reference/cli/influxctl/) sets the maximum duration for API calls, including query requests.
If a query takes longer than the specified timeout, the operation is canceled.
### Timeout examples
Use different timeout values based on your query type:
{{% code-placeholders "DATABASE_(TOKEN|NAME)" %}}
```sh
# Shorter timeout for testing dashboard queries (10 seconds)
influxctl query \
--timeout 10s \
--token DATABASE_TOKEN \
--database DATABASE_NAME \
"SELECT * FROM sensors WHERE time >= now() - INTERVAL '1 hour' LIMIT 100"
# Longer timeout for analytical queries (5 minutes)
influxctl query \
--timeout 300s \
--token DATABASE_TOKEN \
--database DATABASE_NAME \
"SELECT room, AVG(temperature) FROM sensors WHERE time >= now() - INTERVAL '30 days' GROUP BY room"
```
{{% /code-placeholders %}}
For guidance on selecting appropriate timeout values, see [Query timeout best practices](/influxdb3/clustered/query-data/troubleshoot-and-optimize/query-timeout-best-practices/).
## Output format
The `influxctl query` command supports the following output formats:
@ -241,7 +271,7 @@ influxctl query \
{{% /influxdb/custom-timestamps %}}
{{< expand-wrapper >}}
{{% expand "View example results with unix nanosecond timestamps" %}}
{{% expand "View example results with Unix nanosecond timestamps" %}}
{{% influxdb/custom-timestamps %}}
```
+-------+--------+---------+------+---------------------+


@ -0,0 +1,18 @@
---
title: Query timeout best practices
description: Learn how to set appropriate query timeouts to balance performance and resource protection.
menu:
influxdb3_clustered:
name: Query timeout best practices
parent: Troubleshoot and optimize queries
identifier: query-timeout-best-practices
weight: 201
related:
- /influxdb3/clustered/reference/client-libraries/v3/
- /influxdb3/clustered/query-data/execute-queries/influxctl-cli/
source: shared/influxdb3-query-guides/query-timeout-best-practices.md
---
<!--
//SOURCE - content/shared/influxdb3-query-guides/query-timeout-best-practices.md
-->


@ -12,6 +12,7 @@ related:
- /influxdb3/clustered/query-data/sql/
- /influxdb3/clustered/query-data/influxql/
- /influxdb3/clustered/reference/client-libraries/v3/
- /influxdb3/clustered/query-data/troubleshoot-and-optimize/query-timeout-best-practices/
aliases:
- /influxdb3/clustered/query-data/execute-queries/troubleshoot/
---
@ -29,7 +30,9 @@ If a query doesn't return any data, it might be due to the following:
- Your data falls outside the time range (or other conditions) in the query--for example, the InfluxQL `SHOW TAG VALUES` command uses a default time range of 1 day.
- The query (InfluxDB server) timed out.
- The query client timed out.
- The query client timed out.
See [Query timeout best practices](/influxdb3/clustered/query-data/troubleshoot-and-optimize/query-timeout-best-practices/)
for guidance on setting appropriate timeouts.
- The query return type is not supported by the client library.
For example, array or list types may not be supported.
In this case, use `array_to_string()` to convert the array value to a string--for example:


@ -61,6 +61,34 @@ directory. This new directory contains artifacts associated with the specified r
---
## 20250721-1796368 {date="2025-07-21"}
### Quickstart
```yaml
spec:
package:
image: us-docker.pkg.dev/influxdb2-artifacts/clustered/influxdb:20250721-1796368
```
#### Release artifacts
- [app-instance-schema.json](/downloads/clustered-release-artifacts/20250721-1796368/app-instance-schema.json)
- [example-customer.yml](/downloads/clustered-release-artifacts/20250721-1796368/example-customer.yml)
- [InfluxDB Clustered README EULA July 2024.txt](/downloads/clustered-release-artifacts/InfluxDB%20Clustered%20README%20EULA%20July%202024.txt)
### Highlights
#### Support for InfluxQL INTEGRAL()
The InfluxQL `INTEGRAL()` function is now supported in the InfluxDB 3.0 database engine.
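For example, `INTEGRAL()` computes the area under a field's curve over time (the measurement and field names below are illustrative):

```sql
-- Area under the temperature curve, in degree-minutes, per hour
SELECT INTEGRAL("temperature", 1m)
FROM "home"
WHERE time >= now() - 1d
GROUP BY time(1h)
```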
### Bug Fixes
- Fix `SHOW TABLES` timeout when a database has a large number of tables.
---
## 20250707-1777929 {date="2025-07-07"}
### Quickstart


@ -11,77 +11,15 @@ menu:
influxdb3_clustered:
name: Troubleshoot issues
parent: Write data
influxdb3/clustered/tags: [write, line protocol, errors]
influxdb3/clustered/tags: [write, line protocol, errors, partial writes]
related:
- /influxdb3/clustered/get-started/write/
- /influxdb3/clustered/reference/syntax/line-protocol/
- /influxdb3/clustered/write-data/best-practices/
- /influxdb3/clustered/reference/internals/durability/
source: /shared/influxdb3-write-guides/troubleshoot-distributed.md
---
Learn how to avoid unexpected results and recover from errors when writing to
{{% product-name %}}.
- [Handle write responses](#handle-write-responses)
- [Review HTTP status codes](#review-http-status-codes)
- [Troubleshoot failures](#troubleshoot-failures)
- [Troubleshoot rejected points](#troubleshoot-rejected-points)
## Handle write responses
{{% product-name %}} does the following when you send a write request:
1. Validates the request.
2. If successful, attempts to ingest data from the request body; otherwise,
responds with an [error status](#review-http-status-codes).
3. Ingests or rejects data in the batch and returns one of the following HTTP
status codes:
- `204 No Content`: All data in the batch is ingested.
- `400 Bad Request`: Some or all of the data has been rejected.
Data that has not been rejected is ingested and queryable.
The response body contains error details about
[rejected points](#troubleshoot-rejected-points), up to 100 points.
Writes are synchronous--the response status indicates the final status of the
write and all ingested data is queryable.
To ensure that InfluxDB handles writes in the order you request them,
wait for the response before you send the next request.
### Review HTTP status codes
InfluxDB uses conventional HTTP status codes to indicate the success or failure of a request.
The `message` property of the response body may contain additional details about the error.
Write requests return the following status codes:
| HTTP response code | Message | Description |
| :-------------------------------| :--------------------------------------------------------------- | :------------- |
| `204 "No Content"`            | | If InfluxDB ingested the data |
| `400 "Bad request"` | error details about rejected points, up to 100 points: `line` contains the first rejected line, `message` describes rejections | If some or all request data isn't allowed (for example, if it is malformed or falls outside of the bucket's retention period)--the response body indicates whether a partial write has occurred or if all data has been rejected |
| `401 "Unauthorized"` | | If the `Authorization` header is missing or malformed or if the [token](/influxdb3/clustered/admin/tokens/) doesn't have [permission](/influxdb3/clustered/reference/cli/influxctl/token/create/#examples) to write to the database. See [examples using credentials](/influxdb3/clustered/get-started/write/#write-line-protocol-to-influxdb) in write requests. |
| `404 "Not found"` | requested **resource type** (for example, "organization" or "database"), and **resource name** | If a requested resource (for example, organization or database) wasn't found |
| `500 "Internal server error"` | | Default status for an error |
| `503 "Service unavailable"`   | | If the server is temporarily unavailable to accept writes. The `Retry-After` header describes when to try the write again. |
If your data did not write to the database, see how to [troubleshoot rejected points](#troubleshoot-rejected-points).
## Troubleshoot failures
If you notice data is missing in your database, do the following:
- Check the `message` property in the response body for details about the error.
- If the `message` describes a field error, [troubleshoot rejected points](#troubleshoot-rejected-points).
- Verify all lines contain valid syntax ([line protocol](/influxdb3/clustered/reference/syntax/line-protocol/)).
- Verify the timestamps in your data match the [precision parameter](/influxdb3/clustered/reference/glossary/#precision) in your request.
- Minimize payload size and network errors by [optimizing writes](/influxdb3/clustered/write-data/best-practices/optimize-writes/).
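As a quick first pass on the "valid syntax" check above, you can screen lines for obvious structural problems before sending them. This is a rough sanity check only, not a full line protocol parser (it ignores escaped spaces and commas, for example):

```python
def looks_like_line_protocol(line):
    """Rough sanity check for one line of line protocol:
    measurement[,tags] field=value[,field=value] [timestamp]

    Catches only obvious problems, such as a missing field set or
    a missing measurement name. Not a full parser.
    """
    line = line.strip()
    if not line or line.startswith("#"):
        return False
    parts = line.split(" ")
    if len(parts) < 2:
        return False  # no field set present
    measurement_and_tags, fields = parts[0], parts[1]
    if not measurement_and_tags or measurement_and_tags.startswith(","):
        return False  # missing measurement name
    # Every field must look like key=value.
    return all("=" in f and f.split("=", 1)[0] for f in fields.split(","))
```

Lines that fail this check are certain to be rejected; lines that pass may still be rejected for reasons this sketch can't see, such as a type conflict.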
## Troubleshoot rejected points
InfluxDB rejects points that fall within the same partition as existing
data (default partitioning is by measurement and day) but use a different
data type for an existing field.
Check for [field data type](/influxdb3/clustered/reference/syntax/line-protocol/#data-types-and-format)
differences between the rejected data point and points within the same database
and partition--for example, did you attempt to write `string` data to an `int` field?
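To catch such conflicts before writing, you can infer each field value's line protocol type and compare it against the type the partition already holds. A sketch, assuming simple unescaped values:

```python
def lp_field_type(value):
    """Infer the line protocol data type of a raw field value string.
    Rough heuristic for illustration; assumes unescaped values."""
    if value.startswith('"') and value.endswith('"'):
        return "string"
    if value.endswith("u"):
        return "uinteger"
    if value.endswith("i"):
        return "integer"
    if value in ("t", "T", "true", "True", "f", "F", "false", "False"):
        return "boolean"
    return "float"

def conflicts(existing_types, field, value):
    """True if writing `value` to `field` would collide with the type
    already stored for that field in this partition.

    existing_types: dict mapping field name -> line protocol type.
    """
    return field in existing_types and existing_types[field] != lp_field_type(value)
```

For example, if the partition already stores `temp` as an integer (`22i`), writing `temp=21.5` (a float) in the same partition is a type conflict.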
<!-- The content for this page is at
//SOURCE - content/shared/influxdb3-write-guides/troubleshoot-distributed.md
-->
View File
@ -1,7 +1,7 @@
---
title: influxdb3 delete
description: >
The `influxdb3 delete` command deletes a resource such as a database or a table.
The `influxdb3 delete` command deletes a resource such as a cache, database, or table.
menu:
influxdb3_core:
parent: influxdb3
@ -10,6 +10,6 @@ weight: 300
source: /shared/influxdb3-cli/delete/_index.md
---
<!--
The content of this file is at content/shared/influxdb3-cli/delete/_index.md
<!-- The content of this file is at
//SOURCE - content/shared/influxdb3-cli/delete/_index.md
-->
View File
@ -0,0 +1,18 @@
---
title: influxdb3 delete token
description: >
The `influxdb3 delete token` command deletes an authorization token from the {{% product-name %}} server.
influxdb3/core/tags: [cli]
menu:
influxdb3_core:
parent: influxdb3 delete
weight: 201
related:
- /influxdb3/core/admin/tokens/
- /influxdb3/core/api/v3/#tag/Token, InfluxDB /api/v3 Token API reference
source: /shared/influxdb3-cli/delete/token.md
---
<!-- The content of this file is at
//SOURCE - content/shared/influxdb3-cli/delete/token.md
-->
View File
@ -36,41 +36,23 @@ influxdb3 serve [OPTIONS] --node-id <HOST_IDENTIFIER_PREFIX>
| :--------------- | :--------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------ |
| {{< req "\*" >}} | `--node-id` | _See [configuration options](/influxdb3/core/reference/config-options/#node-id)_ |
| | `--object-store` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store)_ |
| | `--bucket` | _See [configuration options](/influxdb3/core/reference/config-options/#bucket)_ |
| | `--data-dir` | _See [configuration options](/influxdb3/core/reference/config-options/#data-dir)_ |
| | `--admin-token-recovery-http-bind` | _See [configuration options](/influxdb3/core/reference/config-options/#admin-token-recovery-http-bind)_ |
| | `--admin-token-recovery-tcp-listener-file-path` | _See [configuration options](/influxdb3/core/reference/config-options/#admin-token-recovery-tcp-listener-file-path)_ |
| | `--aws-access-key-id` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-access-key-id)_ |
| | `--aws-secret-access-key` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-secret-access-key)_ |
| | `--aws-allow-http` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-allow-http)_ |
| | `--aws-default-region` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-default-region)_ |
| | `--aws-endpoint` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-endpoint)_ |
| | `--aws-secret-access-key` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-secret-access-key)_ |
| | `--aws-session-token` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-session-token)_ |
| | `--aws-allow-http` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-allow-http)_ |
| | `--aws-skip-signature` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-skip-signature)_ |
| | `--google-service-account` | _See [configuration options](/influxdb3/core/reference/config-options/#google-service-account)_ |
| | `--azure-storage-account` | _See [configuration options](/influxdb3/core/reference/config-options/#azure-storage-account)_ |
| | `--azure-storage-access-key` | _See [configuration options](/influxdb3/core/reference/config-options/#azure-storage-access-key)_ |
| | `--object-store-connection-limit` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store-connection-limit)_ |
| | `--object-store-http2-only` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store-http2-only)_ |
| | `--object-store-http2-max-frame-size` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store-http2-max-frame-size)_ |
| | `--object-store-max-retries` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store-max-retries)_ |
| | `--object-store-retry-timeout` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store-retry-timeout)_ |
| | `--object-store-cache-endpoint` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store-cache-endpoint)_ |
| `-h` | `--help` | Print help information |
| | `--help-all` | Print detailed help information |
| | `--log-filter` | _See [configuration options](/influxdb3/core/reference/config-options/#log-filter)_ |
| `-v` | `--verbose` | Enable verbose output |
| | `--log-destination` | _See [configuration options](/influxdb3/core/reference/config-options/#log-destination)_ |
| | `--log-format` | _See [configuration options](/influxdb3/core/reference/config-options/#log-format)_ |
| | `--traces-exporter` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter)_ |
| | `--traces-exporter-jaeger-agent-host` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-agent-host)_ |
| | `--traces-exporter-jaeger-agent-port` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-agent-port)_ |
| | `--traces-exporter-jaeger-service-name` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-service-name)_ |
| | `--traces-exporter-jaeger-trace-context-header-name` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-trace-context-header-name)_ |
| | `--traces-jaeger-debug-name` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-jaeger-debug-name)_ |
| | `--traces-jaeger-tags` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-jaeger-tags)_ |
| | `--traces-jaeger-max-msgs-per-second` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-jaeger-max-msgs-per-second)_ |
| | `--azure-storage-account` | _See [configuration options](/influxdb3/core/reference/config-options/#azure-storage-account)_ |
| | `--bucket` | _See [configuration options](/influxdb3/core/reference/config-options/#bucket)_ |
| | `--buffer-mem-limit-mb` | _See [configuration options](/influxdb3/core/reference/config-options/#buffer-mem-limit-mb)_ |
| | `--data-dir` | _See [configuration options](/influxdb3/core/reference/config-options/#data-dir)_ |
| | `--datafusion-config` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-config)_ |
| | `--datafusion-max-parquet-fanout` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-max-parquet-fanout)_ |
| | `--datafusion-num-threads` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-num-threads)_ |
| | `--datafusion-runtime-type` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-type)_ |
| | `--datafusion-runtime-disable-lifo-slot` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-disable-lifo-slot)_ |
| | `--datafusion-runtime-event-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-event-interval)_ |
| | `--datafusion-runtime-global-queue-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-global-queue-interval)_ |
@ -78,29 +60,67 @@ influxdb3 serve [OPTIONS] --node-id <HOST_IDENTIFIER_PREFIX>
| | `--datafusion-runtime-max-io-events-per-tick` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-max-io-events-per-tick)_ |
| | `--datafusion-runtime-thread-keep-alive` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-thread-keep-alive)_ |
| | `--datafusion-runtime-thread-priority` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-thread-priority)_ |
| | `--datafusion-max-parquet-fanout` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-max-parquet-fanout)_ |
| | `--datafusion-runtime-type` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-type)_ |
| | `--datafusion-use-cached-parquet-loader` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-use-cached-parquet-loader)_ |
| | `--datafusion-config` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-config)_ |
| | `--max-http-request-size` | _See [configuration options](/influxdb3/core/reference/config-options/#max-http-request-size)_ |
| | `--http-bind` | _See [configuration options](/influxdb3/core/reference/config-options/#http-bind)_ |
| | `--exec-mem-pool-bytes` | _See [configuration options](/influxdb3/core/reference/config-options/#exec-mem-pool-bytes)_ |
| | `--gen1-duration` | _See [configuration options](/influxdb3/core/reference/config-options/#gen1-duration)_ |
| | `--wal-flush-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#wal-flush-interval)_ |
| | `--wal-snapshot-size` | _See [configuration options](/influxdb3/core/reference/config-options/#wal-snapshot-size)_ |
| | `--wal-max-write-buffer-size` | _See [configuration options](/influxdb3/core/reference/config-options/#wal-max-write-buffer-size)_ |
| | `--snapshotted-wal-files-to-keep` | _See [configuration options](/influxdb3/core/reference/config-options/#snapshotted-wal-files-to-keep)_ |
| | `--query-log-size` | _See [configuration options](/influxdb3/core/reference/config-options/#query-log-size)_ |
| | `--parquet-mem-cache-size` | _See [configuration options](/influxdb3/core/reference/config-options/#parquet-mem-cache-size)_ |
| | `--parquet-mem-cache-prune-percentage` | _See [configuration options](/influxdb3/core/reference/config-options/#parquet-mem-cache-prune-percentage)_ |
| | `--parquet-mem-cache-prune-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#parquet-mem-cache-prune-interval)_ |
| | `--delete-grace-period` | _See [configuration options](/influxdb3/core/reference/config-options/#delete-grace-period)_ |
| | `--disable-authz` | _See [configuration options](/influxdb3/core/reference/config-options/#disable-authz)_ |
| | `--disable-parquet-mem-cache` | _See [configuration options](/influxdb3/core/reference/config-options/#disable-parquet-mem-cache)_ |
| | `--last-cache-eviction-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#last-cache-eviction-interval)_ |
| | `--distinct-cache-eviction-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#distinct-cache-eviction-interval)_ |
| | `--plugin-dir` | _See [configuration options](/influxdb3/core/reference/config-options/#plugin-dir)_ |
| | `--exec-mem-pool-bytes` | _See [configuration options](/influxdb3/core/reference/config-options/#exec-mem-pool-bytes)_ |
| | `--force-snapshot-mem-threshold` | _See [configuration options](/influxdb3/core/reference/config-options/#force-snapshot-mem-threshold)_ |
| | `--virtual-env-location` | _See [configuration options](/influxdb3/core/reference/config-options/#virtual-env-location)_ |
| | `--gen1-duration` | _See [configuration options](/influxdb3/core/reference/config-options/#gen1-duration)_ |
| | `--gen1-lookback-duration` | _See [configuration options](/influxdb3/core/reference/config-options/#gen1-lookback-duration)_ |
| | `--google-service-account` | _See [configuration options](/influxdb3/core/reference/config-options/#google-service-account)_ |
| | `--hard-delete-default-duration` | _See [configuration options](/influxdb3/core/reference/config-options/#hard-delete-default-duration)_ |
| `-h` | `--help` | Print help information |
| | `--help-all` | Print detailed help information |
| | `--http-bind` | _See [configuration options](/influxdb3/core/reference/config-options/#http-bind)_ |
| | `--last-cache-eviction-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#last-cache-eviction-interval)_ |
| | `--log-destination` | _See [configuration options](/influxdb3/core/reference/config-options/#log-destination)_ |
| | `--log-filter` | _See [configuration options](/influxdb3/core/reference/config-options/#log-filter)_ |
| | `--log-format` | _See [configuration options](/influxdb3/core/reference/config-options/#log-format)_ |
| | `--max-http-request-size` | _See [configuration options](/influxdb3/core/reference/config-options/#max-http-request-size)_ |
| | `--object-store-cache-endpoint` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store-cache-endpoint)_ |
| | `--object-store-connection-limit` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store-connection-limit)_ |
| | `--object-store-http2-max-frame-size` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store-http2-max-frame-size)_ |
| | `--object-store-http2-only` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store-http2-only)_ |
| | `--object-store-max-retries` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store-max-retries)_ |
| | `--object-store-retry-timeout` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store-retry-timeout)_ |
| | `--package-manager` | _See [configuration options](/influxdb3/core/reference/config-options/#package-manager)_ |
| | `--parquet-mem-cache-prune-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#parquet-mem-cache-prune-interval)_ |
| | `--parquet-mem-cache-prune-percentage` | _See [configuration options](/influxdb3/core/reference/config-options/#parquet-mem-cache-prune-percentage)_ |
| | `--parquet-mem-cache-query-path-duration` | _See [configuration options](/influxdb3/core/reference/config-options/#parquet-mem-cache-query-path-duration)_ |
| | `--parquet-mem-cache-size` | _See [configuration options](/influxdb3/core/reference/config-options/#parquet-mem-cache-size)_ |
| | `--plugin-dir` | _See [configuration options](/influxdb3/core/reference/config-options/#plugin-dir)_ |
| | `--preemptive-cache-age` | _See [configuration options](/influxdb3/core/reference/config-options/#preemptive-cache-age)_ |
| | `--query-file-limit` | _See [configuration options](/influxdb3/core/reference/config-options/#query-file-limit)_ |
| | `--query-log-size` | _See [configuration options](/influxdb3/core/reference/config-options/#query-log-size)_ |
| | `--retention-check-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#retention-check-interval)_ |
| | `--snapshotted-wal-files-to-keep` | _See [configuration options](/influxdb3/core/reference/config-options/#snapshotted-wal-files-to-keep)_ |
| | `--table-index-cache-concurrency-limit` | _See [configuration options](/influxdb3/core/reference/config-options/#table-index-cache-concurrency-limit)_ |
| | `--table-index-cache-max-entries` | _See [configuration options](/influxdb3/core/reference/config-options/#table-index-cache-max-entries)_ |
| | `--tcp-listener-file-path` | _See [configuration options](/influxdb3/core/reference/config-options/#tcp-listener-file-path)_ |
| | `--telemetry-disable-upload` | _See [configuration options](/influxdb3/core/reference/config-options/#telemetry-disable-upload)_ |
| | `--telemetry-endpoint` | _See [configuration options](/influxdb3/core/reference/config-options/#telemetry-endpoint)_ |
| | `--tls-cert` | _See [configuration options](/influxdb3/core/reference/config-options/#tls-cert)_ |
| | `--tls-key` | _See [configuration options](/influxdb3/core/reference/config-options/#tls-key)_ |
| | `--tls-minimum-version` | _See [configuration options](/influxdb3/core/reference/config-options/#tls-minimum-version)_ |
| | `--traces-exporter` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter)_ |
| | `--traces-exporter-jaeger-agent-host` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-agent-host)_ |
| | `--traces-exporter-jaeger-agent-port` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-agent-port)_ |
| | `--traces-exporter-jaeger-service-name` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-service-name)_ |
| | `--traces-exporter-jaeger-trace-context-header-name` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-trace-context-header-name)_ |
| | `--traces-jaeger-debug-name` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-jaeger-debug-name)_ |
| | `--traces-jaeger-max-msgs-per-second` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-jaeger-max-msgs-per-second)_ |
| | `--traces-jaeger-tags` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-jaeger-tags)_ |
| `-v` | `--verbose` | Enable verbose output |
| | `--virtual-env-location` | _See [configuration options](/influxdb3/core/reference/config-options/#virtual-env-location)_ |
| | `--wal-flush-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#wal-flush-interval)_ |
| | `--wal-max-write-buffer-size` | _See [configuration options](/influxdb3/core/reference/config-options/#wal-max-write-buffer-size)_ |
| | `--wal-replay-concurrency-limit` | _See [configuration options](/influxdb3/core/reference/config-options/#wal-replay-concurrency-limit)_ |
| | `--wal-replay-fail-on-error` | _See [configuration options](/influxdb3/core/reference/config-options/#wal-replay-fail-on-error)_ |
| | `--wal-snapshot-size` | _See [configuration options](/influxdb3/core/reference/config-options/#wal-snapshot-size)_ |
| | `--without-auth` | _See [configuration options](/influxdb3/core/reference/config-options/#without-auth)_ |
{{< caption >}}
{{< req text="\* Required options" >}}
@ -110,7 +130,7 @@ influxdb3 serve [OPTIONS] --node-id <HOST_IDENTIFIER_PREFIX>
You can use environment variables to define most `influxdb3 serve` options.
For more information, see
[Configuration options](/influxdb3/enterprise/reference/config-options/).
[Configuration options](/influxdb3/core/reference/config-options/).
## Examples
File diff suppressed because it is too large

View File
@ -1,7 +1,7 @@
---
title: influxdb3 delete
description: >
The `influxdb3 delete` command deletes a resource such as a database or a table.
The `influxdb3 delete` command deletes a resource such as a cache, database, or table.
menu:
influxdb3_enterprise:
parent: influxdb3
@ -10,6 +10,6 @@ weight: 300
source: /shared/influxdb3-cli/delete/_index.md
---
<!--
The content of this file is at content/shared/influxdb3-cli/delete/_index.md
<!-- The content of this file is at
//SOURCE - content/shared/influxdb3-cli/delete/_index.md
-->
View File
@ -0,0 +1,18 @@
---
title: influxdb3 delete token
description: >
The `influxdb3 delete token` command deletes an authorization token from the {{% product-name %}} server.
influxdb3/enterprise/tags: [cli]
menu:
influxdb3_enterprise:
parent: influxdb3 delete
weight: 201
related:
- /influxdb3/enterprise/admin/tokens/
- /influxdb3/enterprise/api/v3/#tag/Token, InfluxDB /api/v3 Token API reference
source: /shared/influxdb3-cli/delete/token.md
---
<!-- The content of this file is at
//SOURCE - content/shared/influxdb3-cli/delete/token.md
-->
View File
@ -38,6 +38,7 @@ influxdb3 serve [OPTIONS] \
| Option | | Description |
| :--------------- | :--------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------ |
| | `--admin-token-recovery-http-bind` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#admin-token-recovery-http-bind)_ |
| | `--admin-token-recovery-tcp-listener-file-path` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#admin-token-recovery-tcp-listener-file-path)_ |
| | `--aws-access-key-id` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-access-key-id)_ |
| | `--aws-allow-http` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-allow-http)_ |
| | `--aws-default-region` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-default-region)_ |
@ -48,7 +49,11 @@ influxdb3 serve [OPTIONS] \
| | `--azure-storage-access-key` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-storage-access-key)_ |
| | `--azure-storage-account` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-storage-account)_ |
| | `--bucket` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#bucket)_ |
| | `--buffer-mem-limit-mb` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#buffer-mem-limit-mb)_ |
| | `--catalog-sync-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#catalog-sync-interval)_ |
| {{< req "\*" >}} | `--cluster-id` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#cluster-id)_ |
| | `--compaction-check-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-check-interval)_ |
| | `--compaction-cleanup-wait` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-cleanup-wait)_ |
| | `--compaction-gen2-duration` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-gen2-duration)_ |
| | `--compaction-max-num-files-per-plan` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-max-num-files-per-plan)_ |
| | `--compaction-multipliers` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-multipliers)_ |
@ -66,16 +71,22 @@ influxdb3 serve [OPTIONS] \
| | `--datafusion-runtime-thread-priority` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-thread-priority)_ |
| | `--datafusion-runtime-type` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-type)_ |
| | `--datafusion-use-cached-parquet-loader` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-use-cached-parquet-loader)_ |
| | `--delete-grace-period` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#delete-grace-period)_ |
| | `--disable-authz` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#disable-authz)_ |
| | `--disable-parquet-mem-cache` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#disable-parquet-mem-cache)_ |
| | `--distinct-cache-eviction-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#distinct-cache-eviction-interval)_ |
| | `--distinct-value-cache-disable-from-history` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#distinct-value-cache-disable-from-history)_ |
| | `--exec-mem-pool-bytes` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#exec-mem-pool-bytes)_ |
| | `--force-snapshot-mem-threshold` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#force-snapshot-mem-threshold)_ |
| | `--gen1-duration` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#gen1-duration)_ |
| | `--gen1-lookback-duration` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#gen1-lookback-duration)_ |
| | `--google-service-account` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#google-service-account)_ |
| | `--hard-delete-default-duration` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#hard-delete-default-duration)_ |
| `-h` | `--help` | Print help information |
| | `--help-all` | Print detailed help information |
| | `--http-bind` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#http-bind)_ |
| | `--last-cache-eviction-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#last-cache-eviction-interval)_ |
| | `--last-value-cache-disable-from-history` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#last-value-cache-disable-from-history)_ |
| | `--license-email` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#license-email)_ |
| | `--license-file` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#license-file)_ |
| | `--log-destination` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#log-destination)_ |
@ -84,6 +95,11 @@ influxdb3 serve [OPTIONS] \
| | `--max-http-request-size` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#max-http-request-size)_ |
| | `--mode` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#mode)_ |
| {{< req "\*" >}} | `--node-id` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#node-id)_ |
| | `--node-id-from-env` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#node-id-from-env)_ |
| | `--num-cores` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#num-cores)_ |
| | `--num-database-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#num-database-limit)_ |
| | `--num-table-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#num-table-limit)_ |
| | `--num-total-columns-per-table-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#num-total-columns-per-table-limit)_ |
| | `--object-store` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store)_ |
| | `--object-store-cache-endpoint` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-cache-endpoint)_ |
| | `--object-store-connection-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-connection-limit)_ |
@ -101,7 +117,16 @@ influxdb3 serve [OPTIONS] \
| | `--query-file-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#query-file-limit)_ |
| | `--query-log-size` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#query-log-size)_ |
| | `--replication-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#replication-interval)_ |
| | `--retention-check-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#retention-check-interval)_ |
| | `--snapshotted-wal-files-to-keep` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#snapshotted-wal-files-to-keep)_ |
| | `--table-index-cache-concurrency-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#table-index-cache-concurrency-limit)_ |
| | `--table-index-cache-max-entries` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#table-index-cache-max-entries)_ |
| | `--tcp-listener-file-path` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#tcp-listener-file-path)_ |
| | `--telemetry-disable-upload` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#telemetry-disable-upload)_ |
| | `--telemetry-endpoint` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#telemetry-endpoint)_ |
| | `--tls-cert` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#tls-cert)_ |
| | `--tls-key` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#tls-key)_ |
| | `--tls-minimum-version` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#tls-minimum-version)_ |
| | `--traces-exporter` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter)_ |
| | `--traces-exporter-jaeger-agent-host` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter-jaeger-agent-host)_ |
| | `--traces-exporter-jaeger-agent-port` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter-jaeger-agent-port)_ |
@ -110,11 +135,16 @@ influxdb3 serve [OPTIONS] \
| | `--traces-jaeger-debug-name` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-jaeger-debug-name)_ |
| | `--traces-jaeger-max-msgs-per-second` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-jaeger-max-msgs-per-second)_ |
| | `--traces-jaeger-tags` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-jaeger-tags)_ |
| | `--use-pacha-tree` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#use-pacha-tree)_ |
| `-v` | `--verbose` | Enable verbose output |
| | `--virtual-env-location` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#virtual-env-location)_ |
| | `--wait-for-running-ingestor` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#wait-for-running-ingestor)_ |
| | `--wal-flush-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-flush-interval)_ |
| | `--wal-max-write-buffer-size` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-max-write-buffer-size)_ |
| | `--wal-replay-concurrency-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-replay-concurrency-limit)_ |
| | `--wal-replay-fail-on-error` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-replay-fail-on-error)_ |
| | `--wal-snapshot-size` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-snapshot-size)_ |
| | `--without-auth` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#without-auth)_ |
{{< caption >}}
{{< req text="\* Required options" >}}
File diff suppressed because it is too large

File diff suppressed because it is too large

View File
@ -1,5 +1,5 @@
The `influxdb3 delete` command deletes a resource such as a cache, a database, or a table.
## Usage
@ -19,6 +19,7 @@ influxdb3 delete <SUBCOMMAND>
| [last_cache](/influxdb3/version/reference/cli/influxdb3/delete/last_cache/) | Delete a last value cache |
| [distinct_cache](/influxdb3/version/reference/cli/influxdb3/delete/distinct_cache/) | Delete a metadata cache |
| [table](/influxdb3/version/reference/cli/influxdb3/delete/table/) | Delete a table from a database |
| [token](/influxdb3/version/reference/cli/influxdb3/delete/token/) | Delete an authorization token from the server |
| [trigger](/influxdb3/version/reference/cli/influxdb3/delete/trigger/) | Delete a trigger for the processing engine |
| help | Print command help or the help of a subcommand |
{{% /show-in %}}
@ -30,6 +31,7 @@ influxdb3 delete <SUBCOMMAND>
| [last_cache](/influxdb3/version/reference/cli/influxdb3/delete/last_cache/) | Delete a last value cache |
| [distinct_cache](/influxdb3/version/reference/cli/influxdb3/delete/distinct_cache/) | Delete a metadata cache |
| [table](/influxdb3/version/reference/cli/influxdb3/delete/table/) | Delete a table from a database |
| [token](/influxdb3/version/reference/cli/influxdb3/delete/token/) | Delete an authorization token from the server |
| [trigger](/influxdb3/version/reference/cli/influxdb3/delete/trigger/) | Delete a trigger for the processing engine |
| help | Print command help or the help of a subcommand |
{{% /show-in %}}

View File

@ -0,0 +1,32 @@
The `influxdb3 delete token` command deletes an authorization token from the {{% product-name %}} server.
## Usage
```bash
influxdb3 delete token [OPTIONS]
```
## Options
| Option | Description | Default | Environment |
|----------------|-----------------------------------------------------------------------------------|---------|------------------------|
| `--token` | _({{< req >}})_ The token for authentication with the {{% product-name %}} server | | `INFLUXDB3_AUTH_TOKEN` |
| `--token-name` | _({{< req >}})_ The name of the token to be deleted | | |
| `--tls-ca`     | An optional path to a custom CA certificate, useful for testing with self-signed certificates | | `INFLUXDB3_TLS_CA` |
| `-h`, `--help` | Print help information | | |
| `--help-all`   | Print detailed help information | | |
## Examples
### Delete a token by name
```bash
influxdb3 delete token --token-name TOKEN_TO_DELETE --token AUTH_TOKEN
```
### Show help for the command
```bash
influxdb3 delete token --help
```

View File

@ -0,0 +1,301 @@
Learn how to set appropriate query timeouts for InfluxDB 3 to balance performance and resource protection.
Query timeouts prevent resource monopolization while allowing legitimate queries to complete successfully.
The key is finding the "goldilocks zone"—timeouts that are not too short (causing legitimate queries to fail) and not too long (allowing runaway queries to monopolize resources).
- [Understanding query timeouts](#understanding-query-timeouts)
- [How query routing affects timeout strategy](#how-query-routing-affects-timeout-strategy)
- [Timeout configuration best practices](#timeout-configuration-best-practices)
- [InfluxDB 3 client library examples](#influxdb-3-client-library-examples)
- [Monitoring and troubleshooting](#monitoring-and-troubleshooting)
## Understanding query timeouts
Query timeouts define the maximum duration a query can run before being canceled.
In {{% product-name %}}, timeouts serve multiple purposes:
- **Resource protection**: Prevent runaway queries from monopolizing system resources
- **Performance optimization**: Ensure responsive system behavior for time-sensitive operations
- **Cost control**: Limit compute resource consumption
- **User experience**: Provide predictable response times for applications and dashboards
Query execution includes network latency, query planning, data retrieval, processing, and result serialization.
### The "goldilocks zone" for query timeouts
Optimal timeouts are:
- **Long enough**: To accommodate normal query execution under typical load
- **Short enough**: To prevent resource monopolization and provide reasonable feedback
- **Adaptive**: Adjusted based on query type, system load, and historical performance
## How query routing affects timeout strategy
InfluxDB 3 uses round-robin query routing to balance load across multiple queriers.
This creates a "checkout line" effect that influences timeout strategy.
> [!Note]
> #### Concurrent query execution
>
> InfluxDB 3 supports concurrent query execution, which helps minimize the impact of intensive or inefficient queries.
> However, you should still use appropriate timeouts and optimize your queries for best performance.
### The checkout line analogy
Consider a grocery store with multiple checkout lines:
- Customers (queries) are distributed across lines (queriers)
- A slow customer (long-running query) can block others in the same line
- More checkout lines (queriers) provide more alternatives when retrying
If one querier is unhealthy or has been hijacked by a "noisy neighbor" query (one that is excessively resource-hungry), giving up sooner may save time--it's like switching to a cashier with no customers in line. However, if all queriers are overloaded, short timeouts and retries may exacerbate the problem--you wouldn't jump to the end of another line when the cashier has already started scanning your items.
### Noisy neighbor effects
In distributed systems:
- A single long-running query can impact other queries on the same querier
- Shorter timeouts with retries can help queries find less congested queriers
- The effectiveness depends on the number of available queriers
### When shorter timeouts help
- **Multiple queriers available**: Retries can find less congested queriers
- **Uneven load distribution**: Some queriers may be significantly less busy
- **Temporary congestion**: Brief spikes in query load or resource usage
### When shorter timeouts hurt
- **Few queriers**: Limited alternatives for retries
- **System-wide congestion**: All queriers are equally busy
- **Expensive query planning**: High overhead for query preparation
## Timeout configuration best practices
### Make timeouts adjustable
Configure timeouts that can be modified without service restarts using environment variables, configuration files, runtime APIs, or per-query overrides. Design your client applications to easily adjust timeouts on the fly, allowing you to respond quickly to performance changes and test different timeout strategies without code changes.
See the [InfluxDB 3 client library examples](#influxdb-3-client-library-examples)
for how to configure timeouts in Python.
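As a minimal sketch of environment-variable-driven timeouts in Python (the variable names and defaults here are hypothetical--use whatever convention fits your deployment):

```python
import os

# Hypothetical defaults for tiered timeout classes; adjust to your workload
DEFAULT_TIMEOUTS = {"ui": 10, "api": 60, "batch": 300}

def get_timeout(query_class: str) -> int:
    """Read a timeout (in seconds) from the environment, falling back to a default.

    For example, setting QUERY_TIMEOUT_API=90 raises the API timeout
    without redeploying the application.
    """
    value = os.environ.get(f"QUERY_TIMEOUT_{query_class.upper()}")
    return int(value) if value is not None else DEFAULT_TIMEOUTS[query_class]
```

Pass the returned value as the `timeout` argument on each query so a configuration change takes effect on the next request.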
### Use tiered timeout strategies
Implement different timeout classes based on query characteristics.
#### Starting point recommendations
{{% hide-in "cloud-serverless" %}}
| Query Type | Recommended Timeout | Use Case | Rationale |
|------------|-------------------|-----------|-----------|
| UI and dashboard | 10 seconds | Interactive dashboards, real-time monitoring | Users expect immediate feedback |
| Generic default | 60 seconds | Application queries, APIs | Balances performance and reliability |
| Mixed workload | 2 minutes | Development, testing environments | Accommodates various query types |
| Analytical and background | 5 minutes | Reports, batch processing, ETL operations | Complex queries need more time |
{{% /hide-in %}}
{{% show-in "cloud-serverless" %}}
| Query Type | Recommended Timeout | Use Case | Rationale |
|------------|-------------------|-----------|-----------|
| UI and dashboard | 10 seconds | Interactive dashboards, real-time monitoring | Users expect immediate feedback |
| Generic default | 30 seconds | Application queries, APIs | Serverless optimized for shorter queries |
| Mixed workload | 60 seconds | Development, testing environments | Limited by serverless execution model |
| Analytical and background | 2 minutes | Reports, batch processing | Complex queries within serverless limits |
{{% /show-in %}}
{{% show-in "enterprise, core" %}}
> [!Tip]
> #### Use caching
> Where immediate feedback is crucial, consider using [Last Value Cache](/influxdb3/version/admin/manage-last-value-caches/) to speed up queries for recent values and [Distinct Value Cache](/influxdb3/version/admin/manage-distinct-value-caches/) to speed up queries for distinct values.
{{% /show-in %}}
### Implement progressive timeout and retry logic
Consider using more sophisticated retry strategies rather than simple fixed retries:
1. **Exponential backoff**: Increase delay between retry attempts
2. **Jitter**: Add randomness to prevent thundering herd effects
3. **Circuit breakers**: Stop retries when system is overloaded
4. **Deadline propagation**: Respect overall operation deadlines
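The first two techniques combine naturally into "full-jitter" exponential backoff, sketched below as an illustration independent of any client library:

```python
import random

def backoff_with_jitter(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Full-jitter exponential backoff: pick a random delay in
    [0, min(cap, base * 2**attempt)] so retrying clients don't synchronize."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

# Example: the delay ceiling doubles each attempt, capped at 30 seconds
for attempt in range(6):
    print(f"attempt {attempt}: delay up to {min(30.0, 2.0 ** attempt):.0f}s,"
          f" chose {backoff_with_jitter(attempt):.2f}s")
```

In a real retry loop, sleep for the returned delay before resending, and stop once an overall deadline or maximum attempt count is reached.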
### Warning signs
Consider these indicators that timeouts may need adjustment:
- **Timeouts > 10 minutes**: Usually indicates [query optimization](/influxdb3/version/query-data/troubleshoot-and-optimize/optimize-queries/) opportunities
- **High retry rates**: May indicate timeouts are too aggressive
- **Resource utilization spikes**: Long-running queries may need shorter timeouts
- **User complaints**: Balance between performance and user experience
### Environment-specific considerations
- **Development**: Use longer timeouts for debugging
- **Production**: Use shorter timeouts with monitoring
- **Cost-sensitive**: Use aggressive timeouts and [query optimization](/influxdb3/version/query-data/troubleshoot-and-optimize/optimize-queries/)
### Experimental and ad-hoc queries
When introducing a new query to your application or when issuing ad-hoc queries to a database with many users, your query might be the "noisy neighbor" (the shopping cart overloaded with groceries). By setting a tighter timeout on experimental queries you can reduce the impact on other users.
## InfluxDB 3 client library examples
### Python client with timeout configuration
Configure timeouts in the InfluxDB 3 Python client:
```python { placeholders="DATABASE_NAME|HOST_URL|AUTH_TOKEN" }
import influxdb_client_3 as InfluxDBClient3
# Configure different timeout classes (in seconds)
ui_timeout = 10 # For dashboard queries
api_timeout = 60 # For application queries
batch_timeout = 300 # For analytical queries
# Create client with default timeout
client = InfluxDBClient3.InfluxDBClient3(
host="https://{{< influxdb/host >}}",
database="DATABASE_NAME",
token="AUTH_TOKEN",
timeout=api_timeout # Python client uses seconds
)
# Quick query with short timeout
def query_latest_data():
try:
result = client.query(
query="SELECT * FROM sensors WHERE time >= now() - INTERVAL '5 minutes' ORDER BY time DESC LIMIT 10",
timeout=ui_timeout
)
return result.to_pandas()
except Exception as e:
print(f"Quick query failed: {e}")
return None
# Analytical query with longer timeout
def query_daily_averages():
query = """
SELECT
DATE_TRUNC('day', time) as day,
room,
AVG(temperature) as avg_temp,
COUNT(*) as readings
FROM sensors
WHERE time >= now() - INTERVAL '30 days'
GROUP BY DATE_TRUNC('day', time), room
ORDER BY day DESC, room
"""
try:
result = client.query(
query=query,
timeout=batch_timeout
)
return result.to_pandas()
except Exception as e:
print(f"Analytical query failed: {e}")
return None
```
Replace the following:
{{% hide-in "cloud-serverless" %}}
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database to query{{% /hide-in %}}
{{% show-in "cloud-serverless" %}}
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the bucket to query{{% /show-in %}}
{{% show-in "clustered,cloud-dedicated" %}}
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: a [database token](/influxdb3/clustered/admin/tokens/#database-tokens) with _read_ access to the specified database.{{% /show-in %}}
{{% show-in "cloud-serverless" %}}
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: an [API token](/influxdb3/cloud-serverless/admin/tokens/) with _read_ access to the specified bucket.{{% /show-in %}}
{{% show-in "enterprise,core" %}}
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "database" %}} with read permissions on the specified database{{% /show-in %}}
### Basic retry logic implementation
Implement simple retry strategies with progressive timeouts:
```python
import time
import influxdb_client_3 as InfluxDBClient3
def query_with_retry(client, query: str, initial_timeout: int = 60, max_retries: int = 2):
"""Execute query with basic retry and progressive timeout increase"""
for attempt in range(max_retries + 1):
# Progressive timeout: increase timeout on each retry
timeout_seconds = initial_timeout + attempt * 30
try:
result = client.query(
query=query,
timeout=timeout_seconds
)
return result
except Exception as e:
if attempt == max_retries:
print(f"Query failed after {max_retries + 1} attempts: {e}")
raise
# Simple backoff delay
delay = 2 * (attempt + 1)
print(f"Query attempt {attempt + 1} failed: {e}")
print(f"Retrying in {delay} seconds with timeout {timeout_seconds}s...")
time.sleep(delay)
return None
# Usage example
result = query_with_retry(
client=client,
query="SELECT * FROM large_table WHERE time >= now() - INTERVAL '1 day'",
initial_timeout=60,
max_retries=2
)
```
## Monitoring and troubleshooting
### Key metrics to monitor
Track these essential timeout-related metrics:
- **Query duration percentiles**: P50, P95, P99 execution times
- **Timeout rate**: Percentage of queries that time out
- **Error rates**: Timeout errors vs. other failure types
- **Resource utilization**: CPU and memory usage during query execution
### Common timeout issues
#### High timeout rates
**Symptoms**: Many queries exceeding timeout limits
**Common causes**:
- Timeouts set too aggressively for query complexity
- System resource constraints
- Inefficient query patterns
**Solutions**:
1. Analyze query performance patterns
2. [Optimize slow queries](/influxdb3/version/query-data/troubleshoot-and-optimize/optimize-queries/) or increase timeouts appropriately
3. Scale system resources
#### Inconsistent query performance
**Symptoms**: Same queries sometimes fast, sometimes timeout
**Common causes**:
- Resource contention from concurrent queries
- Data compaction state (queries may be faster after compaction completes)
**Solutions**:
1. Analyze query patterns to identify and optimize slow queries
2. Implement retry logic with exponential backoff in your client applications
3. Adjust timeout values based on observed query performance patterns
{{% show-in "enterprise,core" %}}
4. Implement [Last Value Cache](/influxdb3/version/admin/manage-last-value-caches/) to speed up queries for recent values
5. Implement [Distinct Value Cache](/influxdb3/version/admin/manage-distinct-value-caches/) to speed up queries for distinct values
{{% /show-in %}}
> [!Note]
> Regular analysis of timeout patterns helps identify optimization opportunities and system scaling needs.

View File

@ -0,0 +1,348 @@
Learn how to avoid unexpected results and recover from errors when writing to {{% product-name %}}.
- [Handle write responses](#handle-write-responses)
- [Review HTTP status codes](#review-http-status-codes)
- [Troubleshoot failures](#troubleshoot-failures)
- [Troubleshoot rejected points](#troubleshoot-rejected-points)
- [Report write issues](#report-write-issues)
## Handle write responses
{{% product-name %}} does the following when you send a write request:
1. Validates the request.
2. If successful, attempts to [ingest data](/influxdb3/version/reference/internals/durability/#data-ingest) from the request body; otherwise, responds with an [error status](#review-http-status-codes).
3. Ingests or rejects data from the batch and returns one of the following HTTP status codes:
- `204 No Content`: All of the data is ingested and queryable.
- `400 Bad Request`: Some {{% show-in "cloud-dedicated,clustered" %}}(_when **partial writes** are configured for the cluster_){{% /show-in %}} or all of the data has been rejected. Data that has not been rejected is ingested and queryable.
The response body contains error details about [rejected points](#troubleshoot-rejected-points), up to 100 points.
Writes are synchronous--the response status indicates the final status of the write and all ingested data is queryable.
To ensure that InfluxDB handles writes in the order you request them,
wait for the response before you send the next request.
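The decision logic described above might look like the following sketch (framework-agnostic; pass in the status code, headers, and parsed body from whatever HTTP client you use):

```python
def classify_write_response(status_code, headers, body=None):
    """Map a write response to an action, per the status codes described above.

    A real client would also wait out Retry-After before resending and surface
    the rejected-point details from a 400 body to the caller.
    """
    if status_code == 204:
        return ("ok", None)  # all points ingested and queryable
    if status_code == 400:
        # Partial or full rejection; the body lists up to 100 rejected points
        return ("rejected", (body or {}).get("message"))
    if status_code == 503:
        # Temporarily unavailable; wait the number of seconds in Retry-After
        return ("retry", int(headers.get("Retry-After", "1")))
    return ("error", status_code)  # 401, 404, 422, 500, ...
```

Only the `retry` branch should trigger an automatic resend; resending after a `rejected` response without fixing the data reproduces the same failure.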
### Review HTTP status codes
InfluxDB uses conventional HTTP status codes to indicate the success or failure of a request.
The `message` property of the response body may contain additional details about the error.
{{< product-name >}} returns one of the following HTTP status codes for a write request:
{{% show-in "clustered,cloud-dedicated" %}}
| HTTP response code | Response body | Description |
| :-------------------------------| :--------------------------------------------------------------- | :------------- |
| `204 "No Content"` | Empty | InfluxDB ingested all of the data in the batch |
| `400 "Bad request"` | error details about rejected points, up to 100 points: `line` contains the first rejected line, `message` describes rejections | Some or all request data isn't allowed (for example, is malformed or falls outside of the database's retention period)--the response body indicates whether a partial write has occurred or if all data has been rejected |
| `401 "Unauthorized"` | Empty | The `Authorization` request header is missing or malformed or the [token](/influxdb3/version/admin/tokens/) doesn't have permission to write to the database |
| `404 "Not found"` | A requested **resource type** (for example, "database"), and **resource name** | A requested resource wasn't found |
| `422 "Unprocessable Entity"` | `message` contains details about the error | The data isn't allowed (for example, falls outside of the database's retention period). |
| `500 "Internal server error"` | Empty | Default status for an error |
| `503 "Service unavailable"` | Empty | The server is temporarily unavailable to accept writes. The `Retry-After` header contains the number of seconds to wait before trying the write again. |
{{% /show-in %}}
{{% show-in "cloud-serverless" %}}
| HTTP response code | Response body | Description |
| :-------------------------------| :--------------------------------------------------------------- | :------------- |
| `204 "No Content"` | Empty | InfluxDB ingested all of the data in the batch |
| `400 "Bad request"` | error details about rejected points, up to 100 points: `line` contains the first rejected line, `message` describes rejections | Some or all request data isn't allowed (for example, is malformed or falls outside of the bucket's retention period)--the response body indicates whether a partial write has occurred or if all data has been rejected |
| `401 "Unauthorized"` | Empty | The `Authorization` request header is missing or malformed or the [token](/influxdb3/version/admin/tokens/) doesn't have permission to write to the bucket |
| `404 "Not found"` | A requested **resource type** (for example, "organization" or "bucket"), and **resource name** | A requested resource wasn't found |
| `413 "Request too large"` | cannot read data: points in batch is too large | The request exceeds the maximum [global limit](/influxdb3/cloud-serverless/admin/billing/limits/) |
| `422 "Unprocessable Entity"` | `message` contains details about the error | The data isn't allowed (for example, falls outside of the database's retention period). |
| `429 "Too many requests"` | Empty | The number of requests exceeds the [adjustable service quota](/influxdb3/cloud-serverless/admin/billing/limits/#adjustable-service-quotas). The `Retry-After` header contains the number of seconds to wait before trying the write again. |
| `500 "Internal server error"` | Empty | Default status for an error |
| `503 "Service unavailable"` | Empty | The server is temporarily unavailable to accept writes. The `Retry-After` header contains the number of seconds to wait before trying the write again. |
{{% /show-in %}}
The `message` property of the response body may contain additional details about the error.
If your data did not write to the {{% show-in "cloud-serverless" %}}bucket{{% /show-in %}}{{% show-in "cloud-dedicated,clustered" %}}database{{% /show-in %}}, see how to [troubleshoot rejected points](#troubleshoot-rejected-points).
## Troubleshoot failures
If you notice data is missing in your database, do the following:
- Check the [HTTP status code](#review-http-status-codes) in the response.
- Check the `message` property in the response body for details about the error.
- If the `message` describes a field error, [troubleshoot rejected points](#troubleshoot-rejected-points).
- Verify all lines contain valid syntax ([line protocol](/influxdb3/version/reference/syntax/line-protocol/)).
- Verify the timestamps in your data match the [precision parameter](/influxdb3/version/reference/glossary/#precision) in your request.
- Minimize payload size and network errors by [optimizing writes](/influxdb3/version/write-data/best-practices/optimize-writes/).
## Troubleshoot rejected points
When writing points from a batch, InfluxDB rejects points that have syntax errors or schema conflicts.
If InfluxDB processes the data in your batch and then rejects points, the [HTTP response](#handle-write-responses) body contains the following properties that describe rejected points:
- `code`: `"invalid"`
- `line`: the line number of the _first_ rejected point in the batch.
- `message`: a string that contains line-separated error messages, one message for each rejected point in the batch, up to 100 rejected points. Line numbers are 1-based.
InfluxDB rejects points for the following reasons:
- a line protocol parsing error
- an invalid timestamp
- a schema conflict
Schema conflicts occur when you try to write data that contains any of the following:
- a wrong data type: the point falls within the same partition (default partitioning is measurement and day) as existing {{% show-in "cloud-serverless" %}}bucket{{% /show-in %}} {{% show-in "cloud-dedicated,clustered" %}}database{{% /show-in %}} data and contains a different data type for an existing field
- a tag and a field that use the same key
### Example
The following example shows a response body for a write request that contains two rejected points:
```json
{
"code": "invalid",
"line": 2,
"message": "failed to parse line protocol:\nerrors encountered on line(s):\nerror parsing line 2 (1-based): Invalid measurement was provided\nerror parsing line 4 (1-based): Unable to parse timestamp value '123461000000000000000000000000'"
}
```
Check for [field data type](/influxdb3/version/reference/syntax/line-protocol/#data-types-and-format) differences between the rejected data point and points within the same database and partition (default partitioning
is by measurement and day)--for example, did you attempt to write `string` data to an `int` field?
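To act on individual rejections programmatically, you can pull the 1-based line numbers out of the `message` field (a sketch using the example response above):

```python
import re

# The example response body above, as parsed JSON
error_body = {
    "code": "invalid",
    "line": 2,
    "message": (
        "failed to parse line protocol:\n"
        "errors encountered on line(s):\n"
        "error parsing line 2 (1-based): Invalid measurement was provided\n"
        "error parsing line 4 (1-based): Unable to parse timestamp value "
        "'123461000000000000000000000000'"
    ),
}

# Collect the 1-based line numbers of all rejected points
rejected_lines = [int(n) for n in re.findall(r"error parsing line (\d+)",
                                             error_body["message"])]
print(rejected_lines)  # -> [2, 4]
```

With these indexes you can look up the offending lines in your original batch, fix or drop them, and resend only the corrected points.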
## Report write issues
If you experience persistent write issues that you can't resolve using the troubleshooting steps above, use these guidelines to gather the necessary information when reporting the issue to InfluxData support.
> [!Note]
> #### Before reporting an issue
>
> Ensure you have followed all [troubleshooting steps](#troubleshoot-failures) and
> reviewed the [write optimization guidelines](/influxdb3/version/write-data/best-practices/optimize-writes/)
> to rule out common configuration and data formatting issues.
### Gather essential information
When reporting write issues, provide the following information to help InfluxData engineers diagnose the problem:
#### 1. Error details and logs
**Capture the complete error response:**
```bash { placeholders="AUTH_TOKEN|DATABASE_NAME" }
# Example: Capture both successful and failed write attempts
curl --silent --show-error --write-out "\nHTTP Status: %{http_code}\nResponse Time: %{time_total}s\n" \
--request POST \
"https://{{< influxdb/host >}}/write?db=DATABASE_NAME&precision=ns" \
--header "Authorization: Bearer AUTH_TOKEN" \
--header "Content-Type: text/plain; charset=utf-8" \
--data-binary @problematic-data.lp \
> write-error-response.txt 2>&1
```
**Log client-side errors:**
If using a client library, enable debug logging and capture the full exception details:
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Python](#)
[Go](#)
[Java](#)
[JavaScript](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```python { placeholders="DATABASE_NAME|AUTH_TOKEN" }
import logging
from influxdb_client_3 import InfluxDBClient3
# Enable debug logging
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("influxdb_client_3")
try:
client = InfluxDBClient3(token="AUTH_TOKEN", host="{{< influxdb/host >}}", database="DATABASE_NAME")
client.write(data)
except Exception as e:
logger.error(f"Write failed: {str(e)}")
# Include full stack trace in your report
import traceback
traceback.print_exc()
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```go { placeholders="DATABASE_NAME|AUTH_TOKEN" }
package main
import (
"context"
"fmt"
"log"
"os"
"github.com/InfluxCommunity/influxdb3-go"
)
func main() {
// Enable debug logging
client, err := influxdb3.New(influxdb3.ClientConfig{
Host: "https://{{< influxdb/host >}}",
Token: "AUTH_TOKEN",
Database: "DATABASE_NAME",
Debug: true,
})
if err != nil {
log.Fatal(err)
}
defer client.Close()
err = client.Write(context.Background(), data)
if err != nil {
// Log the full error details
fmt.Fprintf(os.Stderr, "Write error: %+v\n", err)
}
}
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```java { placeholders="DATABASE_NAME|AUTH_TOKEN" }
import com.influxdb.v3.client.InfluxDBClient;
import java.util.logging.Logger;
import java.util.logging.Level;
public class WriteErrorExample {
private static final Logger logger = Logger.getLogger(WriteErrorExample.class.getName());
public static void main(String[] args) {
try (InfluxDBClient client = InfluxDBClient.getInstance(
"https://{{< influxdb/host >}}",
"AUTH_TOKEN".toCharArray(),
"DATABASE_NAME")) {
client.writeRecord(data);
} catch (Exception e) {
logger.log(Level.SEVERE, "Write failed", e);
// Include full stack trace in your report
e.printStackTrace();
}
}
}
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```javascript { placeholders="DATABASE_NAME|AUTH_TOKEN" }
import { InfluxDBClient } from '@influxdata/influxdb3-client'
const client = new InfluxDBClient({
host: 'https://{{< influxdb/host >}}',
token: 'AUTH_TOKEN',
database: 'DATABASE_NAME'
})
try {
await client.write(data)
} catch (error) {
console.error('Write failed:', error)
// Include the full error object in your report
console.error('Full error details:', JSON.stringify(error, null, 2))
}
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
Replace the following in your code:
{{% hide-in "cloud-serverless" %}}
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database to write to{{% /hide-in %}}
{{% show-in "cloud-serverless" %}}
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the bucket to write to{{% /show-in %}}
{{% show-in "clustered,cloud-dedicated" %}}
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: a [database token](/influxdb3/clustered/admin/tokens/#database-tokens) with _write_ access to the specified database.{{% /show-in %}}
{{% show-in "cloud-serverless" %}}
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: an [API token](/influxdb3/cloud-serverless/admin/tokens/) with _write_ access to the specified bucket.{{% /show-in %}}
{{% show-in "enterprise,core" %}}
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "database" %}} with write permissions on the specified database{{% /show-in %}}
#### 2. Data samples and patterns
**Provide representative data samples:**
- Include 10-20 lines of the problematic line protocol data (sanitized if necessary)
- Show both successful and failing data formats
- Include timestamp ranges and precision used
- Specify if the issue occurs with specific measurements, tags, or field types
**Example data documentation:**
```
# Successful writes:
measurement1,tag1=value1,tag2=value2 field1=1.23,field2="text" 1640995200000000000
# Failing writes:
measurement1,tag1=value1,tag2=value2 field1="string",field2=456 1640995260000000000
# Error: field data type conflict - field1 changed from float to string
```
#### 3. Write patterns and volume
Document your write patterns:
- **Frequency**: How often do you write data? (for example, every 10 seconds, once per minute)
- **Batch size**: How many points per write request?
- **Concurrency**: How many concurrent write operations?
- **Data retention**: How long is data retained?
- **Timing**: When did the issue first occur? Is it intermittent or consistent?
#### 4. Environment details
{{% show-in "clustered" %}}
**Cluster configuration:**
- InfluxDB Clustered version
- Kubernetes environment details
- Node specifications (CPU, memory, storage)
- Network configuration between client and cluster
{{% /show-in %}}
**Client configuration:**
- Client library version and language
- Connection settings (timeouts, retry logic)
- Geographic location relative to cluster
#### 5. Reproduction steps
Provide step-by-step instructions to reproduce the issue:
1. **Environment setup**: How to configure a similar environment
2. **Data preparation**: Sample data files or generation scripts
3. **Write commands**: Exact commands or code used
4. **Expected vs actual results**: What should happen vs what actually happens
### Create a support package
Organize all gathered information into a comprehensive package:
**Files to include:**
- `write-error-response.txt` - HTTP response details
- `client-logs.txt` - Client library debug logs
- `sample-data.lp` - Representative line protocol data (sanitized)
- `reproduction-steps.md` - Detailed reproduction guide
- `environment-details.md` - {{% show-in "clustered" %}}Cluster and{{% /show-in %}} client configuration
- `write-patterns.md` - Usage patterns and volume information
**Package format:**
```bash
# Create a timestamped support package
TIMESTAMP=$(date -Iseconds)
mkdir "write-issue-${TIMESTAMP}"
# Add all relevant files to the directory
tar -czf "write-issue-${TIMESTAMP}.tar.gz" "write-issue-${TIMESTAMP}/"
```
### Submit the issue
Include the support package when contacting InfluxData support through your standard [support channels](#bug-reports-and-feedback), along with:
- A clear description of the problem
- Impact assessment (how critical is this issue?)
- Any workarounds you've attempted
- Business context if the issue affects production systems
This comprehensive information will help InfluxData engineers identify root causes and provide targeted solutions for your write issues.

View File

@ -65,11 +65,11 @@ The following table provides information about what metaqueries are available in
### Aggregate functions
| Function | Supported |
| :-------------------------------------------------------------------------------- | :----------------------: |
| [COUNT()](/influxdb/version/reference/influxql/functions/aggregates/#count) | **{{< icon "check" >}}** |
| [DISTINCT()](/influxdb/version/reference/influxql/functions/aggregates/#distinct) | **{{< icon "check" >}}** |
| [INTEGRAL()](/influxdb/version/reference/influxql/functions/aggregates/#integral) | **{{< icon "check" >}}** |
| [MEAN()](/influxdb/version/reference/influxql/functions/aggregates/#mean) | **{{< icon "check" >}}** |
| [MEDIAN()](/influxdb/version/reference/influxql/functions/aggregates/#median) | **{{< icon "check" >}}** |
| [MODE()](/influxdb/version/reference/influxql/functions/aggregates/#mode) | **{{< icon "check" >}}** |
@ -77,29 +77,25 @@ The following table provides information about what metaqueries are available in
| [STDDEV()](/influxdb/version/reference/influxql/functions/aggregates/#stddev) | **{{< icon "check" >}}** |
| [SUM()](/influxdb/version/reference/influxql/functions/aggregates/#sum) | **{{< icon "check" >}}** |
<!--
INTEGRAL [influxdb_iox#6937](https://github.com/influxdata/influxdb_iox/issues/6937)
-->
### Selector functions
| Function | Supported |
| :------------------------------------------------------------------------------------------- | :----------------------: |
| Function | Supported |
| :----------------------------------------------------------------------------------- | :----------------------: |
| [BOTTOM()](/influxdb/version/reference/influxql/functions/selectors/#bottom) | **{{< icon "check" >}}** |
| [FIRST()](/influxdb/version/reference/influxql/functions/selectors/#first) | **{{< icon "check" >}}** |
| [LAST()](/influxdb/version/reference/influxql/functions/selectors/#last) | **{{< icon "check" >}}** |
| [MAX()](/influxdb/version/reference/influxql/functions/selectors/#max) | **{{< icon "check" >}}** |
| [MIN()](/influxdb/version/reference/influxql/functions/selectors/#min) | **{{< icon "check" >}}** |
| [PERCENTILE()](/influxdb/version/reference/influxql/functions/selectors/#percentile) | **{{< icon "check" >}}** |
| <span style="opacity: .5;">SAMPLE()</span> | |
| <span style="opacity: .5;">SAMPLE()</span> | |
| [TOP()](/influxdb/version/reference/influxql/functions/selectors/#top) | **{{< icon "check" >}}** |
<!-- SAMPLE() [influxdb_iox#6935](https://github.com/influxdata/influxdb_iox/issues/6935) -->
### Transformations
| Function | Supported |
| :--------------------------------------------------------------------------------------------------------------------------- | :----------------------: |
| Function | Supported |
| :------------------------------------------------------------------------------------------------------------------- | :----------------------: |
| [ABS()](/influxdb/version/reference/influxql/functions/transformations/#abs) | **{{< icon "check" >}}** |
| [ACOS()](/influxdb/version/reference/influxql/functions/transformations/#acos) | **{{< icon "check" >}}** |
| [ASIN()](/influxdb/version/reference/influxql/functions/transformations/#asin) | **{{< icon "check" >}}** |


@ -6,6 +6,7 @@ _Examples use the sample data set provided in the
- [COUNT()](#count)
- [DISTINCT()](#distinct)
- [INTEGRAL()](#integral)
- [MEAN()](#mean)
- [MEDIAN()](#median)
- [MODE()](#mode)
@ -13,17 +14,6 @@ _Examples use the sample data set provided in the
- [STDDEV()](#stddev)
- [SUM()](#sum)
<!-- When implemented, place back in alphabetical order -->
<!-- - [INTEGRAL()](#integral) -->
> [!Important]
> #### Missing InfluxQL functions
>
> Some InfluxQL functions are in the process of being rearchitected to work with
> the InfluxDB 3 storage engine. If a function you need is not here, check the
> [InfluxQL feature support page](/influxdb/version/reference/influxql/feature-support/#function-support)
> for more information.
## COUNT()
Returns the number of non-null [field values](/influxdb/version/reference/glossary/#field-value).
@ -186,14 +176,14 @@ name: home
{{% /expand %}}
{{< /expand-wrapper >}}
<!-- ## INTEGRAL()
## INTEGRAL()
Returns the area under the curve for queried [field values](/influxdb/version/reference/glossary/#field-value)
and converts those results into the summed area per **unit** of time.
> [!Note]
> `INTEGRAL()` does not support [`fill()`](/influxdb/version/query-data/influxql/explore-data/group-by/#group-by-time-intervals-and-fill).
> `INTEGRAL()` supports int64 and float64 field value [data types](/influxdb/version/reference/glossary/#data-type).
> [!Important]
> - `INTEGRAL()` does not support [`fill()`](/influxdb/version/reference/influxql/group-by/#group-by-time-and-fill-gaps).
> - `INTEGRAL()` supports int64 and float64 field value [data types](/influxdb/version/reference/glossary/#data-type).
```sql
INTEGRAL(field_expression[, unit])
@ -318,7 +308,7 @@ name: home
{{% /influxdb/custom-timestamps %}}
{{% /expand %}}
{{< /expand-wrapper >}} -->
{{< /expand-wrapper >}}
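As a quick illustration of the signature above, the following query (a sketch assuming the `home` sample data set and its `co` field) sums the area under the curve, normalized per one-hour unit:

```sql
SELECT INTEGRAL(co, 1h) FROM home WHERE time >= '2022-01-01T08:00:00Z' AND time <= '2022-01-01T20:00:00Z'
```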
## MEAN()


@ -0,0 +1,342 @@
# yaml-language-server: $schema=app-instance-schema.json
apiVersion: kubecfg.dev/v1alpha1
kind: AppInstance
metadata:
name: influxdb
namespace: influxdb
spec:
# One or more secrets that are used to pull the images from an authenticated registry.
# This will either be the secret provided to you, if using our registry, or a secret for your own registry
# if self-hosting the images.
imagePullSecrets:
- name: <name of the secret>
package:
# The version of the clustered package that will be used.
# This determines the version of all of the individual components.
# When a new version of the product is released, this version should be updated and any
# new config options should be updated below.
image: us-docker.pkg.dev/influxdb2-artifacts/clustered/influxdb:20250721-1796368
apiVersion: influxdata.com/v1alpha1
spec:
# # Provides a way to pass down hosting-environment-specific configuration, such as a role ARN when using EKS IRSA.
# # This section contains three mutually exclusive "blocks". Uncomment the block named after the hosting environment
# # you run: "aws", "openshift" or "gke".
# hostingEnvironment:
# # # Uncomment this block if you're running in EKS.
# # aws:
# # eksRoleArn: 'arn:aws:iam::111111111111:role/your-influxdb-clustered-role'
# #
# # # Uncomment this block if you're running inside OpenShift.
# # # Note: there are currently no OpenShift-specific parameters. You have to pass an empty object
# # # as a marker that you're choosing OpenShift as hosting environment.
# # openshift: {}
# #
# # # Uncomment this block if you're running in GKE:
# # gke:
# # # Authenticate to Google Cloud services via workload identity; this
# # # annotates the 'iox' ServiceAccount with the role name you specify.
# # # NOTE: This setting just enables the GKE-specific authentication mechanism;
# # # you still need to enable `spec.objectStore.google` below if you want to use GCS.
# # workloadIdentity:
# # # Google Service Account name to use for the workload identity.
# # serviceAccountEmail: <service-account>@<project-name>.iam.gserviceaccount.com
catalog:
# A postgresql style DSN that points at a postgresql compatible database.
# eg: postgres://[user[:password]@][netloc][:port][/dbname][?param1=value1&...]
dsn:
valueFrom:
secretKeyRef:
name: <your secret name here>
key: <the key in the secret that contains the dsn>
# images:
# # This can be used to override a specific image name with its FQIN
# # (Fully Qualified Image Name) for testing. eg.
# overrides:
# - name: influxdb2-artifacts/iox/iox
# newFQIN: mycompany/test-iox-build:aninformativetag
#
# # Set this variable to the prefix of your internal registry. This will be prefixed to all expected images.
# # eg. us-docker.pkg.dev/iox:latest => registry.mycompany.io/us-docker.pkg.dev/iox:latest
# registryOverride: <the domain name portion of your registry (registry.mycompany.io in the example above)>
objectStore:
# Bucket that the parquet files will be stored in
bucket: <bucket name>
# Uncomment one of the following (s3, azure)
# to enable the configuration of your object store
s3:
# URL for S3 Compatible object store
endpoint: <S3 url>
# Set to true to allow communication over HTTP (instead of HTTPS)
allowHttp: "false"
# S3 Access Key
# This can also be provided as a valueFrom: secretKeyRef:
accessKey:
value: <your access key>
# S3 Secret Key
# This can also be provided as a valueFrom: secretKeyRef:
secretKey:
value: <your secret>
# This value is required for AWS S3; it may or may not be required for other providers.
region: <region>
# azure:
# Azure Blob Storage Access Key
# This can also be provided as a valueFrom: secretKeyRef:
# accessKey:
# value: <your access key>
# Azure Blob Storage Account
# This can also be provided as a valueFrom: secretKeyRef:
# account:
# value: <your account>
# There are two main ways you can access Google Cloud Storage:
#
# a) GKE Workload Identity: configure workload identity in the top level `hostingEnvironment.gke` section.
# b) Explicit service account secret (JSON) file: use the `serviceAccountSecret` field here
#
# If you pick (a) you may not need to uncomment anything else in this section,
# but you still need to tell InfluxDB that you intend to use Google Cloud Storage,
# so you need to specify an empty object. Uncomment the following line:
#
# google: {}
#
#
# If you pick (b), uncomment the following block:
#
# google:
# # If you're authenticating to Google Cloud services using a Service Account credentials file, as opposed
# # to using workload identity (see above), you need to provide a reference to a k8s secret containing the credentials file.
# serviceAccountSecret:
# # Kubernetes Secret name containing the credentials for a Google IAM Service Account.
# name: <secret name>
# # The key within the Secret containing the credentials.
# key: <key name>
# Parameters to tune observability configuration, such as Prometheus ServiceMonitor's.
observability: {}
# retention: 12h
# serviceMonitor:
# interval: 10s
# scrapeTimeout: 30s
# Ingester pods have a volume attached.
ingesterStorage:
# (Optional) Set the storage class. This will differ based on the K8s environment and desired storage characteristics.
# If not set, the default storage class will be used.
# storageClassName: <storage-class>
# Set the storage size (minimum 2Gi recommended)
storage: <storage-size>
# Monitoring pods have a volume attached.
monitoringStorage:
# (Optional) Set the storage class. This will differ based on the K8s environment and desired storage characteristics.
# If not set, the default storage class will be used.
# storageClassName: <storage-class>
# Set the storage size (minimum 10Gi recommended)
storage: <storage-size>
# Uncomment the follow block if using our provided Ingress.
#
# # We currently only support the NGINX ingress controller: https://github.com/kubernetes/ingress-nginx
#
# ingress:
# hosts:
# # This is the host on which you will access InfluxDB 3.0, for both reads and writes
# - <influxdb-host>
# (Optional)
# The name of the Kubernetes Secret containing a TLS certificate; this should exist in the same namespace as the Clustered installation.
# If you are using cert-manager, enter a name for the Secret it should create.
# tlsSecretName: <secret-name>
# http:
# # Usually you have only one ingress controller installed in a given cluster.
# # In case you have more than one, you have to specify the "class name" of the ingress controller you want to use
# className: nginx
# grpc:
# # Usually you have only one ingress controller installed in a given cluster.
# # In case you have more than one, you have to specify the "class name" of the ingress controller you want to use
# className: nginx
#
# Enables specifying which 'type' of Ingress to use, alongside whether to place additional annotations
# onto those objects. This is useful for third-party software in your environment, such as cert-manager.
# template:
# apiVersion: 'route.openshift.io/v1'
# kind: 'Route'
# metadata:
# annotations:
# 'example-annotation': 'annotation-value'
# Enables specifying customizations for the various components in InfluxDB 3.0.
# components:
# # router:
# # template:
# # containers:
# # iox:
# # env:
# # INFLUXDB_IOX_MAX_HTTP_REQUESTS: "5000"
# # nodeSelector:
# # disktype: ssd
# # tolerations:
# # - effect: NoSchedule
# # key: example
# # operator: Exists
# # Common customizations for all components go in a pseudo-component called "common"
# # common:
# # template:
# # # Metadata contains custom annotations (and labels) to be added to a component. E.g.:
# # metadata:
# # annotations:
# # telegraf.influxdata.com/class: "foo"
# Example of setting nodeAffinity for the querier component to ensure it runs on nodes with specific labels
# components:
# # querier:
# # template:
# # affinity:
# # nodeAffinity:
# # requiredDuringSchedulingIgnoredDuringExecution:
# # Node must have these labels to be considered for scheduling
# # nodeSelectorTerms:
# # - matchExpressions:
# # - key: required
# # operator: In
# # values:
# # - ssd
# # preferredDuringSchedulingIgnoredDuringExecution:
# # Scheduler will prefer nodes with these labels but they're not required
# # - weight: 1
# # preference:
# # matchExpressions:
# # - key: preferred
# # operator: In
# # values:
# # - postgres
# Example of setting podAntiAffinity for the querier component to ensure it runs on nodes with specific labels
# components:
# # querier:
# # template:
# # affinity:
# # podAntiAffinity:
# # requiredDuringSchedulingIgnoredDuringExecution:
# # Ensures that the pod will not be scheduled on a node if another pod matching the labelSelector is already running there
# # - labelSelector:
# # matchExpressions:
# # - key: app
# # operator: In
# # values:
# # - querier
# # topologyKey: "kubernetes.io/hostname"
# # preferredDuringSchedulingIgnoredDuringExecution:
# # Scheduler will prefer not to schedule pods together but may do so if necessary
# # - weight: 1
# # podAffinityTerm:
# # labelSelector:
# # matchExpressions:
# # - key: app
# # operator: In
# # values:
# # - querier
# # topologyKey: "kubernetes.io/hostname"
# Uncomment the following block to tune the various pods for their cpu/memory/replicas based on workload needs.
# Only uncomment the specific resources you want to change; anything left commented out will use the package default.
# (You can read more about k8s resources and limits in https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits)
#
# resources:
# # The ingester handles data being written
# ingester:
# requests:
# cpu: <cpu amount>
# memory: <ram amount>
# replicas: <num replicas> # The default for ingesters is 3 to increase availability
#
# # Optionally, you can specify resource limits, which improves isolation.
# # (see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits)
# # limits:
# # cpu: <cpu amount>
# # memory: <ram amount>
# # The compactor reorganizes old data to improve query and storage efficiency.
# compactor:
# requests:
# cpu: <cpu amount>
# memory: <ram amount>
# replicas: <num replicas> # the default is 1
# # The querier handles querying data.
# querier:
# requests:
# cpu: <cpu amount>
# memory: <ram amount>
# replicas: <num replicas> # the default is 3
# # The router performs some api routing.
# router:
# requests:
# cpu: <cpu amount>
# memory: <ram amount>
# replicas: <num replicas> # the default is 3
admin:
# The list of users to grant access to Clustered via influxctl
users:
# First name of user
- firstName: <first-name>
# Last name of user
lastName: <last-name>
# Email of user
email: <email>
# The ID that the configured Identity Provider uses for the user in oauth flows
id: <id>
# Optional list of user groups to assign to the user, rather than the default groups. The following groups are currently supported: Admin, Auditor, Member
userGroups:
- <group-name>
# The dsn for the postgres compatible database (note this is the same as defined above)
dsn:
valueFrom:
secretKeyRef:
name: <secret name>
key: <dsn key>
# The identity provider to be used e.g. "keycloak", "auth0", "azure", etc
# Note for Azure Active Directory it must be exactly "azure"
identityProvider: <identity-provider>
# The JWKS endpoint provided by the Identity Provider
jwksEndpoint: <endpoint>
# # This (optional) section controls how InfluxDB issues outbound requests to other services
# egress:
# # If you're using a custom CA you will need to specify the full custom CA bundle here.
# #
# # NOTE: the custom CA is currently only honoured for outbound requests used to obtain
# # the JWT public keys from your identity provider (see `jwksEndpoint`).
# customCertificates:
# valueFrom:
# configMapKeyRef:
# key: ca.pem
# name: custom-ca
# We also include the ability to enable some features that are not yet ready for general availability
# or for which we don't yet have a proper place to turn on an optional feature in the configuration file.
# To turn on these you should include the name of the feature flag in the `featureFlag` array.
#
# featureFlags:
# # Uncomment to install a Grafana deployment.
# # Depends on one of the prometheus features being deployed.
# # - grafana
# # The following 2 flags should be uncommented for k8s API 1.21 support.
# # Note that this is an experimental configuration.
# # - noMinReadySeconds
# # - noGrpcProbes
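Because every unfilled `<placeholder>` in the manifest above will break the deployment, it can help to lint the file before applying it. A minimal sketch in Python (the placeholder convention and file names here are assumptions based on the example above, not a supported tool):

```python
import re

def find_placeholders(text):
    """Return (line_number, token) pairs for unfilled <...> placeholders,
    skipping comment lines, where placeholders are expected to remain."""
    hits = []
    for n, line in enumerate(text.splitlines(), start=1):
        if line.lstrip().startswith("#"):
            continue  # commented-out blocks legitimately keep their placeholders
        hits.extend((n, m.group(0)) for m in re.finditer(r"<[^<>]+>", line))
    return hits

# Check a small excerpt of the manifest above
sample = """\
objectStore:
  bucket: <bucket name>
  s3:
    endpoint: https://s3.example.com
# registryOverride: <registry domain>
"""
for n, tok in find_placeholders(sample):
    print(f"line {n}: unfilled placeholder {tok}")
```

In a real workflow you would run this over the manifest file (for example, `myinfluxdb.yml`) before handing it to `kubectl apply` — both names are assumptions.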


@ -5046,9 +5046,9 @@ tldts@^6.1.32:
tldts-core "^6.1.86"
tmp@~0.2.3:
version "0.2.3"
resolved "https://registry.yarnpkg.com/tmp/-/tmp-0.2.3.tgz#eb783cc22bc1e8bebd0671476d46ea4eb32a79ae"
integrity sha512-nZD7m9iCPC5g0pYmcaxogYKggSfLsdxl8of3Q/oIbqCqLLIO9IAF0GWjX1z9NZRHPiXv8Wex4yDCaZsgEw0Y8w==
version "0.2.4"
resolved "https://registry.yarnpkg.com/tmp/-/tmp-0.2.4.tgz#c6db987a2ccc97f812f17137b36af2b6521b0d13"
integrity sha512-UdiSoX6ypifLmrfQ/XfiawN6hkjSBpCjhKxxZcWlUUmoXLaCKQU0bx4HF/tdDK2uzRuchf1txGvrWBzYREssoQ==
to-buffer@^1.1.1:
version "1.2.1"