Merge branch 'master' into docs/5625-v1-get-query-endpoint

pull/6274/head
Jameelah Mercer 2025-08-14 09:32:18 -07:00 committed by GitHub
commit 293557157d
24 changed files with 901 additions and 317 deletions

View File

@ -26,6 +26,7 @@ related:
- /influxdb3/cloud-dedicated/reference/influxql/
- /influxdb3/cloud-dedicated/reference/sql/
- /influxdb3/cloud-dedicated/query-data/execute-queries/troubleshoot/
- /influxdb3/cloud-dedicated/query-data/troubleshoot-and-optimize/query-timeout-best-practices/
list_code_example: |
```py
@ -240,7 +241,8 @@ from influxdb_client_3 import InfluxDBClient3
client = InfluxDBClient3(
host='{{< influxdb/host >}}',
token='DATABASE_TOKEN',
database='DATABASE_NAME'
database='DATABASE_NAME',
timeout=60 # Set default timeout to 60 seconds
)
```
{{% /code-placeholders %}}
@ -275,6 +277,7 @@ client = InfluxDBClient3(
host="{{< influxdb/host >}}",
token='DATABASE_TOKEN',
database='DATABASE_NAME',
timeout=60, # Set default timeout to 60 seconds
flight_client_options=flight_client_options(
tls_root_certs=cert))
...
@ -332,7 +335,8 @@ client = InfluxDBClient3(
# Execute the query and return an Arrow table
table = client.query(
query="SELECT * FROM home",
language="sql"
language="sql",
timeout=30 # Override default timeout for simple queries (30 seconds)
)
print("\n#### View Schema information\n")
@ -377,7 +381,8 @@ client = InfluxDBClient3(
# Execute the query and return an Arrow table
table = client.query(
query="SELECT * FROM home",
language="influxql"
language="influxql",
timeout=30 # Override default timeout for simple queries (30 seconds)
)
print("\n#### View Schema information\n")

View File

@ -13,6 +13,7 @@ influxdb3/cloud-dedicated/tags: [query, sql, influxql, influxctl, CLI]
related:
- /influxdb3/cloud-dedicated/reference/cli/influxctl/query/
- /influxdb3/cloud-dedicated/get-started/query/#execute-an-sql-query, Get started querying data
- /influxdb3/cloud-dedicated/query-data/troubleshoot-and-optimize/query-timeout-best-practices/, Query timeout best practices
- /influxdb3/cloud-dedicated/reference/sql/
- /influxdb3/cloud-dedicated/reference/influxql/
list_code_example: |
@ -142,6 +143,34 @@ Replace the following:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
Name of the database to query
## Query timeouts
The [`influxctl --timeout` global flag](/influxdb3/cloud-dedicated/reference/cli/influxctl/) sets the maximum duration for API calls, including query requests.
If a query takes longer than the specified timeout, the operation is canceled.
### Timeout examples
Use different timeout values based on your query type:
{{% code-placeholders "DATABASE_(TOKEN|NAME)" %}}
```sh
# Shorter timeout for testing dashboard queries (10 seconds)
influxctl query \
--timeout 10s \
--token DATABASE_TOKEN \
--database DATABASE_NAME \
"SELECT AVG(temperature) FROM sensors WHERE time >= now() - INTERVAL '1 day'"
# Longer timeout for analytical queries (5 minutes)
influxctl query \
--timeout 5m \
--token DATABASE_TOKEN \
--database DATABASE_NAME \
"SELECT room, AVG(temperature) FROM sensors WHERE time >= now() - INTERVAL '30 days' GROUP BY room"
```
{{% /code-placeholders %}}
For guidance on selecting appropriate timeout values, see [Query timeout best practices](/influxdb3/cloud-dedicated/query-data/troubleshoot-and-optimize/query-timeout-best-practices/).
## Output format
@ -243,7 +272,7 @@ influxctl query \
{{% /influxdb/custom-timestamps %}}
{{< expand-wrapper >}}
{{% expand "View example results with unix nanosecond timestamps" %}}
{{% expand "View example results with Unix nanosecond timestamps" %}}
{{% influxdb/custom-timestamps %}}
```
+-------+--------+---------+------+---------------------+

View File

@ -0,0 +1,17 @@
---
title: Query timeout best practices
description: Learn how to set appropriate query timeouts to balance performance and resource protection.
menu:
influxdb3_cloud_dedicated:
name: Query timeout best practices
parent: Troubleshoot and optimize queries
weight: 205
related:
- /influxdb3/cloud-dedicated/reference/client-libraries/v3/
- /influxdb3/cloud-dedicated/query-data/execute-queries/influxctl-cli/
source: shared/influxdb3-query-guides/query-timeout-best-practices.md
---
<!--
//SOURCE - content/shared/influxdb3-query-guides/query-timeout-best-practices.md
-->

View File

@ -12,6 +12,7 @@ related:
- /influxdb3/cloud-dedicated/query-data/sql/
- /influxdb3/cloud-dedicated/query-data/influxql/
- /influxdb3/cloud-dedicated/reference/client-libraries/v3/
- /influxdb3/cloud-dedicated/query-data/troubleshoot-and-optimize/query-timeout-best-practices/
aliases:
- /influxdb3/cloud-dedicated/query-data/execute-queries/troubleshoot/
- /influxdb3/cloud-dedicated/query-data/troubleshoot-and-optimize/trace/
@ -30,7 +31,9 @@ If a query doesn't return any data, it might be due to the following:
- Your data falls outside the time range (or other conditions) in the query--for example, the InfluxQL `SHOW TAG VALUES` command uses a default time range of 1 day.
- The query (InfluxDB server) timed out.
- The query client timed out.
- The query client timed out.
See [Query timeout best practices](/influxdb3/cloud-dedicated/query-data/troubleshoot-and-optimize/query-timeout-best-practices/)
for guidance on setting appropriate timeouts.
- The query return type is not supported by the client library.
For example, array or list types may not be supported.
In this case, use `array_to_string()` to convert the array value to a string--for example:

View File

@ -10,101 +10,15 @@ menu:
influxdb3_cloud_dedicated:
name: Troubleshoot issues
parent: Write data
influxdb3/cloud-dedicated/tags: [write, line protocol, errors]
influxdb3/cloud-dedicated/tags: [write, line protocol, errors, partial writes]
related:
- /influxdb3/cloud-dedicated/get-started/write/
- /influxdb3/cloud-dedicated/reference/syntax/line-protocol/
- /influxdb3/cloud-dedicated/write-data/best-practices/
- /influxdb3/cloud-dedicated/reference/internals/durability/
source: /shared/influxdb3-write-guides/troubleshoot-distributed.md
---
Learn how to avoid unexpected results and recover from errors when writing to {{% product-name %}}.
- [Handle write responses](#handle-write-responses)
- [Review HTTP status codes](#review-http-status-codes)
- [Troubleshoot failures](#troubleshoot-failures)
- [Troubleshoot rejected points](#troubleshoot-rejected-points)
## Handle write responses
{{% product-name %}} does the following when you send a write request:
1. Validates the request.
2. If successful, attempts to [ingest data](/influxdb3/cloud-dedicated/reference/internals/durability/#data-ingest) from the request body; otherwise, responds with an [error status](#review-http-status-codes).
3. Ingests or rejects data in the batch and returns one of the following HTTP status codes:
- `204 No Content`: All data in the batch is ingested.
- `400 Bad Request`: Some (_when **partial writes** are configured for the cluster_) or all of the data has been rejected. Data that has not been rejected is ingested and queryable.
The response body contains error details about [rejected points](#troubleshoot-rejected-points), up to 100 points.
Writes are synchronous--the response status indicates the final status of the write and all ingested data is queryable.
To ensure that InfluxDB handles writes in the order you request them,
wait for the response before you send the next request.
### Review HTTP status codes
InfluxDB uses conventional HTTP status codes to indicate the success or failure of a request.
The `message` property of the response body may contain additional details about the error.
{{< product-name >}} returns one of the following HTTP status codes for a write request:
| HTTP response code | Response body | Description |
| :----------------- | :------------ | :---------- |
| `204 "No Content"` | no response body | If InfluxDB ingested all of the data in the batch |
| `400 "Bad request"` | error details about rejected points, up to 100 points: `line` contains the first rejected line, `message` describes rejections | If some (_when **partial writes** are configured for the cluster_) or all request data isn't allowed (for example, if it is malformed or falls outside of the database's retention period)--the response body indicates whether a partial write has occurred or if all data has been rejected |
| `401 "Unauthorized"` | | If the `Authorization` header is missing or malformed or if the [token](/influxdb3/cloud-dedicated/admin/tokens/) doesn't have [permission](/influxdb3/cloud-dedicated/reference/cli/influxctl/token/create/#examples) to write to the database. See [examples using credentials](/influxdb3/cloud-dedicated/get-started/write/#write-line-protocol-to-influxdb) in write requests. |
| `404 "Not found"` | requested **resource type** (for example, "organization" or "database"), and **resource name** | If a requested resource (for example, organization or database) wasn't found |
| `422 "Unprocessable Entity"` | `message` contains details about the error | If the data isn't allowed (for example, falls outside of the database's retention period). |
| `500 "Internal server error"` | | Default status for an error |
| `503 "Service unavailable"` | | If the server is temporarily unavailable to accept writes. The `Retry-After` header contains the number of seconds to wait before trying the write again. |
The `message` property of the response body may contain additional details about the error.
If your data did not write to the database, see how to [troubleshoot rejected points](#troubleshoot-rejected-points).
## Troubleshoot failures
If you notice data is missing in your database, do the following:
- Check the [HTTP status code](#review-http-status-codes) in the response.
- Check the `message` property in the response body for details about the error.
- If the `message` describes a field error, [troubleshoot rejected points](#troubleshoot-rejected-points).
- Verify all lines contain valid syntax ([line protocol](/influxdb3/cloud-dedicated/reference/syntax/line-protocol/)).
- Verify the timestamps in your data match the [precision parameter](/influxdb3/cloud-dedicated/reference/glossary/#precision) in your request.
- Minimize payload size and network errors by [optimizing writes](/influxdb3/cloud-dedicated/write-data/best-practices/optimize-writes/).
## Troubleshoot rejected points
When writing points from a batch, InfluxDB rejects points that have syntax errors or schema conflicts.
If InfluxDB processes the data in your batch and then rejects points, the [HTTP response](#handle-write-responses) body contains the following properties that describe rejected points:
- `code`: `"invalid"`
- `line`: the line number of the _first_ rejected point in the batch.
- `message`: a string that contains line-separated error messages, one message for each rejected point in the batch, up to 100 rejected points.
InfluxDB rejects points for the following reasons:
- a line protocol parsing error
- an invalid timestamp
- a schema conflict
Schema conflicts occur when you try to write data that contains any of the following:
- a wrong data type: the point falls within the same partition (default partitioning is measurement and day) as existing bucket data and contains a different data type for an existing field
- a tag and a field that use the same key
### Example
The following example shows a response body for a write request that contains two rejected points:
```json
{
"code": "invalid",
"line": 2,
"message": "failed to parse line protocol:
errors encountered on line(s):
error parsing line 2 (1-based): Invalid measurement was provided
error parsing line 4 (1-based): Unable to parse timestamp value '123461000000000000000000000000'"
}
```
Check for [field data type](/influxdb3/cloud-dedicated/reference/syntax/line-protocol/#data-types-and-format) differences between the rejected data point and points within the same database and partition--for example, did you attempt to write `string` data to an `int` field?
<!-- The content for this page is at
//SOURCE - content/shared/influxdb3-write-guides/troubleshoot-distributed.md
-->

View File

@ -27,6 +27,7 @@ related:
- /influxdb3/cloud-serverless/reference/influxql/
- /influxdb3/cloud-serverless/reference/sql/
- /influxdb3/cloud-serverless/query-data/execute-queries/troubleshoot/
- /influxdb3/cloud-serverless/query-data/troubleshoot-and-optimize/query-timeout-best-practices/
list_code_example: |
```py
@ -241,7 +242,8 @@ from influxdb_client_3 import InfluxDBClient3
client = InfluxDBClient3(
host='{{< influxdb/host >}}',
token='API_TOKEN',
database='BUCKET_NAME'
database='BUCKET_NAME',
timeout=30 # Set default timeout to 30 seconds for serverless
)
```
{{% /code-placeholders %}}
@ -332,7 +334,8 @@ client = InfluxDBClient3(
# Execute the query and return an Arrow table
table = client.query(
query="SELECT * FROM home",
language="sql"
language="sql",
timeout=10 # Override default timeout for simple queries (10 seconds)
)
print("\n#### View Schema information\n")
@ -377,7 +380,8 @@ client = InfluxDBClient3(
# Execute the query and return an Arrow table
table = client.query(
query="SELECT * FROM home",
language="influxql"
language="influxql",
timeout=10 # Override default timeout for simple queries (10 seconds)
)
print("\n#### View Schema information\n")

View File

@ -0,0 +1,17 @@
---
title: Query timeout best practices
description: Learn how to set appropriate query timeouts to balance performance and resource protection.
menu:
influxdb3_cloud_serverless:
name: Query timeout best practices
parent: Troubleshoot and optimize queries
identifier: query-timeout-best-practices
weight: 201
related:
- /influxdb3/cloud-serverless/reference/client-libraries/v3/
source: shared/influxdb3-query-guides/query-timeout-best-practices.md
---
<!--
//SOURCE - content/shared/influxdb3-query-guides/query-timeout-best-practices.md
-->

View File

@ -12,6 +12,7 @@ related:
- /influxdb3/cloud-serverless/query-data/sql/
- /influxdb3/cloud-serverless/query-data/influxql/
- /influxdb3/cloud-serverless/reference/client-libraries/v3/
- /influxdb3/cloud-serverless/query-data/troubleshoot-and-optimize/query-timeout-best-practices/
aliases:
- /influxdb3/cloud-serverless/query-data/execute-queries/troubleshoot/
---
@ -29,7 +30,9 @@ If a query doesn't return any data, it might be due to the following:
- Your data falls outside the time range (or other conditions) in the query--for example, the InfluxQL `SHOW TAG VALUES` command uses a default time range of 1 day.
- The query (InfluxDB server) timed out.
- The query client timed out.
- The query client timed out.
See [Query timeout best practices](/influxdb3/cloud-serverless/query-data/troubleshoot-and-optimize/query-timeout-best-practices/)
for guidance on setting appropriate timeouts.
- The query return type is not supported by the client library.
For example, array or list types may not be supported.
In this case, use `array_to_string()` to convert the array value to a string--for example:

View File

@ -10,103 +10,15 @@ menu:
influxdb3_cloud_serverless:
name: Troubleshoot issues
parent: Write data
influxdb3/cloud-serverless/tags: [write, line protocol, errors]
influxdb3/cloud-serverless/tags: [write, line protocol, errors, partial writes]
related:
- /influxdb3/cloud-serverless/get-started/write/
- /influxdb3/cloud-serverless/reference/syntax/line-protocol/
- /influxdb3/cloud-serverless/write-data/best-practices/
- /influxdb3/cloud-serverless/reference/internals/durability/
source: /shared/influxdb3-write-guides/troubleshoot-distributed.md
---
Learn how to avoid unexpected results and recover from errors when writing to {{% product-name %}}.
- [Handle write responses](#handle-write-responses)
- [Review HTTP status codes](#review-http-status-codes)
- [Troubleshoot failures](#troubleshoot-failures)
- [Troubleshoot rejected points](#troubleshoot-rejected-points)
## Handle write responses
{{% product-name %}} does the following when you send a write request:
1. Validates the request.
2. If successful, attempts to [ingest data](/influxdb3/cloud-serverless/reference/internals/durability/#data-ingest) from the request body; otherwise, responds with an [error status](#review-http-status-codes).
3. Ingests or rejects data from the batch and returns one of the following HTTP status codes:
- `204 No Content`: All of the data is ingested and queryable.
- `400 Bad Request`: Some or all of the data has been rejected. Data that has not been rejected is ingested and queryable.
The response body contains error details about [rejected points](#troubleshoot-rejected-points), up to 100 points.
Writes are synchronous--the response status indicates the final status of the write and all ingested data is queryable.
To ensure that InfluxDB handles writes in the order you request them,
wait for the response before you send the next request.
### Review HTTP status codes
InfluxDB uses conventional HTTP status codes to indicate the success or failure of a request.
The `message` property of the response body may contain additional details about the error.
{{< product-name >}} returns one of the following HTTP status codes for a write request:
| HTTP response code | Response body | Description |
| :-------------------------------| :--------------------------------------------------------------- | :------------- |
| `204 "No Content"` | no response body | If InfluxDB ingested all of the data in the batch |
| `400 "Bad request"` | error details about rejected points, up to 100 points: `line` contains the first rejected line, `message` describes rejections | If some or all request data isn't allowed (for example, is malformed or falls outside of the bucket's retention period)--the response body indicates whether a partial write has occurred or if all data has been rejected |
| `401 "Unauthorized"` | | If the `Authorization` header is missing or malformed or if the [token](/influxdb3/cloud-serverless/admin/tokens/) doesn't have [permission](/influxdb3/cloud-serverless/admin/tokens/create-token/) to write to the bucket. See [examples using credentials](/influxdb3/cloud-serverless/get-started/write/#write-line-protocol-to-influxdb) in write requests. |
| `404 "Not found"` | requested **resource type** (for example, "organization" or "bucket"), and **resource name** | If a requested resource (for example, organization or bucket) wasn't found |
| `413 "Request too large"` | cannot read data: points in batch is too large | If a request exceeds the maximum [global limit](/influxdb3/cloud-serverless/admin/billing/limits/) |
| `429 "Too many requests"` | | If the number of requests exceeds the [adjustable service quota](/influxdb3/cloud-serverless/admin/billing/limits/#adjustable-service-quotas). The `Retry-After` header contains the number of seconds to wait before trying the write again. |
| `500 "Internal server error"` | | Default status for an error |
| `503 "Service unavailable"` | | If the server is temporarily unavailable to accept writes. The `Retry-After` header contains the number of seconds to wait before trying the write again. |
The `message` property of the response body may contain additional details about the error.
If your data did not write to the bucket, see how to [troubleshoot rejected points](#troubleshoot-rejected-points).
## Troubleshoot failures
If you notice data is missing in your database, do the following:
- Check the [HTTP status code](#review-http-status-codes) in the response.
- Check the `message` property in the response body for details about the error.
- If the `message` describes a field error, [troubleshoot rejected points](#troubleshoot-rejected-points).
- Verify all lines contain valid syntax ([line protocol](/influxdb3/cloud-serverless/reference/syntax/line-protocol/)).
- Verify the timestamps in your data match the [precision parameter](/influxdb3/cloud-serverless/reference/glossary/#precision) in your request.
- Minimize payload size and network errors by [optimizing writes](/influxdb3/cloud-serverless/write-data/best-practices/optimize-writes/).
## Troubleshoot rejected points
When writing points from a batch, InfluxDB rejects points that have syntax errors or schema conflicts.
If InfluxDB processes the data in your batch and then rejects points, the [HTTP response](#handle-write-responses) body contains the following properties that describe rejected points:
- `code`: `"invalid"`
- `line`: the line number of the _first_ rejected point in the batch.
- `message`: a string that contains line-separated error messages, one message for each rejected point in the batch, up to 100 rejected points.
InfluxDB rejects points for the following reasons:
- a line protocol parsing error
- an invalid timestamp
- a schema conflict
Schema conflicts occur when you try to write data that contains any of the following:
- a wrong data type: the point falls within the same partition (default partitioning is measurement and day) as existing bucket data and contains a different data type for an existing field
- a tag and a field that use the same key
### Example
The following example shows a response body for a write request that contains two rejected points:
```json
{
"code": "invalid",
"line": 2,
"message": "failed to parse line protocol:
errors encountered on line(s):
error parsing line 2 (1-based): Invalid measurement was provided
error parsing line 4 (1-based): Unable to parse timestamp value '123461000000000000000000000000'"
}
```
Check for [field data type](/influxdb3/cloud-serverless/reference/syntax/line-protocol/#data-types-and-format) differences between the rejected data point and points within the same database and partition--for example, did you attempt to write `string` data to an `int` field?
<!-- The content for this page is at
//SOURCE - content/shared/influxdb3-write-guides/troubleshoot-distributed.md
-->

View File

@ -20,6 +20,7 @@ related:
- /influxdb3/clustered/query-data/sql/
- /influxdb3/clustered/reference/influxql/
- /influxdb3/clustered/reference/sql/
- /influxdb3/clustered/query-data/troubleshoot-and-optimize/query-timeout-best-practices/
list_code_example: |
```py
@ -234,7 +235,8 @@ from influxdb_client_3 import InfluxDBClient3
client = InfluxDBClient3(
host='{{< influxdb/host >}}',
token='DATABASE_TOKEN',
database='DATABASE_NAME'
database='DATABASE_NAME',
timeout=60 # Set default timeout to 60 seconds
)
```
{{% /code-placeholders %}}
@ -325,7 +327,8 @@ client = InfluxDBClient3(
# Execute the query and return an Arrow table
table = client.query(
query="SELECT * FROM home",
language="sql"
language="sql",
timeout=30 # Override default timeout for simple queries (30 seconds)
)
print("\n#### View Schema information\n")
@ -370,7 +373,8 @@ client = InfluxDBClient3(
# Execute the query and return an Arrow table
table = client.query(
query="SELECT * FROM home",
language="influxql"
language="influxql",
timeout=30 # Override default timeout for simple queries (30 seconds)
)
print("\n#### View Schema information\n")

View File

@ -12,6 +12,7 @@ influxdb3/clustered/tags: [query, sql, influxql, influxctl, CLI]
related:
- /influxdb3/clustered/reference/cli/influxctl/query/
- /influxdb3/clustered/get-started/query/#execute-an-sql-query, Get started querying data
- /influxdb3/clustered/query-data/troubleshoot-and-optimize/query-timeout-best-practices/, Query timeout best practices
- /influxdb3/clustered/reference/sql/
- /influxdb3/clustered/reference/influxql/
list_code_example: |
@ -141,6 +142,35 @@ Replace the following:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
Name of the database to query
## Query timeouts
The [`influxctl --timeout` global flag](/influxdb3/clustered/reference/cli/influxctl/) sets the maximum duration for API calls, including query requests.
If a query takes longer than the specified timeout, the operation is canceled.
### Timeout examples
Use different timeout values based on your query type:
{{% code-placeholders "DATABASE_(TOKEN|NAME)" %}}
```sh
# Shorter timeout for testing dashboard queries (10 seconds)
influxctl query \
--timeout 10s \
--token DATABASE_TOKEN \
--database DATABASE_NAME \
"SELECT * FROM sensors WHERE time >= now() - INTERVAL '1 hour' LIMIT 100"
# Longer timeout for analytical queries (5 minutes)
influxctl query \
--timeout 300s \
--token DATABASE_TOKEN \
--database DATABASE_NAME \
"SELECT room, AVG(temperature) FROM sensors WHERE time >= now() - INTERVAL '30 days' GROUP BY room"
```
{{% /code-placeholders %}}
For guidance on selecting appropriate timeout values, see [Query timeout best practices](/influxdb3/clustered/query-data/troubleshoot-and-optimize/query-timeout-best-practices/).
## Output format
The `influxctl query` command supports the following output formats:
@ -241,7 +271,7 @@ influxctl query \
{{% /influxdb/custom-timestamps %}}
{{< expand-wrapper >}}
{{% expand "View example results with unix nanosecond timestamps" %}}
{{% expand "View example results with Unix nanosecond timestamps" %}}
{{% influxdb/custom-timestamps %}}
```
+-------+--------+---------+------+---------------------+

View File

@ -0,0 +1,18 @@
---
title: Query timeout best practices
description: Learn how to set appropriate query timeouts to balance performance and resource protection.
menu:
influxdb3_clustered:
name: Query timeout best practices
parent: Troubleshoot and optimize queries
identifier: query-timeout-best-practices
weight: 201
related:
- /influxdb3/clustered/reference/client-libraries/v3/
- /influxdb3/clustered/query-data/execute-queries/influxctl-cli/
source: shared/influxdb3-query-guides/query-timeout-best-practices.md
---
<!--
//SOURCE - content/shared/influxdb3-query-guides/query-timeout-best-practices.md
-->

View File

@ -12,6 +12,7 @@ related:
- /influxdb3/clustered/query-data/sql/
- /influxdb3/clustered/query-data/influxql/
- /influxdb3/clustered/reference/client-libraries/v3/
- /influxdb3/clustered/query-data/troubleshoot-and-optimize/query-timeout-best-practices/
aliases:
- /influxdb3/clustered/query-data/execute-queries/troubleshoot/
---
@ -29,7 +30,9 @@ If a query doesn't return any data, it might be due to the following:
- Your data falls outside the time range (or other conditions) in the query--for example, the InfluxQL `SHOW TAG VALUES` command uses a default time range of 1 day.
- The query (InfluxDB server) timed out.
- The query client timed out.
- The query client timed out.
See [Query timeout best practices](/influxdb3/clustered/query-data/troubleshoot-and-optimize/query-timeout-best-practices/)
for guidance on setting appropriate timeouts.
- The query return type is not supported by the client library.
For example, array or list types may not be supported.
In this case, use `array_to_string()` to convert the array value to a string--for example:

View File

@ -11,77 +11,15 @@ menu:
influxdb3_clustered:
name: Troubleshoot issues
parent: Write data
influxdb3/clustered/tags: [write, line protocol, errors]
influxdb3/clustered/tags: [write, line protocol, errors, partial writes]
related:
- /influxdb3/clustered/get-started/write/
- /influxdb3/clustered/reference/syntax/line-protocol/
- /influxdb3/clustered/write-data/best-practices/
- /influxdb3/clustered/reference/internals/durability/
source: /shared/influxdb3-write-guides/troubleshoot-distributed.md
---
Learn how to avoid unexpected results and recover from errors when writing to
{{% product-name %}}.
- [Handle write responses](#handle-write-responses)
- [Review HTTP status codes](#review-http-status-codes)
- [Troubleshoot failures](#troubleshoot-failures)
- [Troubleshoot rejected points](#troubleshoot-rejected-points)
## Handle write responses
{{% product-name %}} does the following when you send a write request:
1. Validates the request.
2. If successful, attempts to ingest data from the request body; otherwise,
responds with an [error status](#review-http-status-codes).
3. Ingests or rejects data in the batch and returns one of the following HTTP
status codes:
- `204 No Content`: All data in the batch is ingested.
- `400 Bad Request`: Some or all of the data has been rejected.
Data that has not been rejected is ingested and queryable.
The response body contains error details about
[rejected points](#troubleshoot-rejected-points), up to 100 points.
Writes are synchronous--the response status indicates the final status of the
write and all ingested data is queryable.
To ensure that InfluxDB handles writes in the order you request them,
wait for the response before you send the next request.
### Review HTTP status codes
InfluxDB uses conventional HTTP status codes to indicate the success or failure of a request.
The `message` property of the response body may contain additional details about the error.
Write requests return the following status codes:
| HTTP response code | Message | Description |
| :-------------------------------| :--------------------------------------------------------------- | :------------- |
| `204 "No Content"` | | If InfluxDB ingested the data |
| `400 "Bad request"` | error details about rejected points, up to 100 points: `line` contains the first rejected line, `message` describes rejections | If some or all request data isn't allowed (for example, if it is malformed or falls outside of the bucket's retention period)--the response body indicates whether a partial write has occurred or if all data has been rejected |
| `401 "Unauthorized"` | | If the `Authorization` header is missing or malformed or if the [token](/influxdb3/clustered/admin/tokens/) doesn't have [permission](/influxdb3/clustered/reference/cli/influxctl/token/create/#examples) to write to the database. See [examples using credentials](/influxdb3/clustered/get-started/write/#write-line-protocol-to-influxdb) in write requests. |
| `404 "Not found"` | requested **resource type** (for example, "organization" or "database"), and **resource name** | If a requested resource (for example, organization or database) wasn't found |
| `500 "Internal server error"` | | Default status for an error |
| `503 "Service unavailable"` | | If the server is temporarily unavailable to accept writes. The `Retry-After` header describes when to try the write again. |
If your data did not write to the database, see how to [troubleshoot rejected points](#troubleshoot-rejected-points).
## Troubleshoot failures
If you notice data is missing in your database, do the following:
- Check the `message` property in the response body for details about the error.
- If the `message` describes a field error, [troubleshoot rejected points](#troubleshoot-rejected-points).
- Verify all lines contain valid syntax ([line protocol](/influxdb3/clustered/reference/syntax/line-protocol/)).
- Verify the timestamps in your data match the [precision parameter](/influxdb3/clustered/reference/glossary/#precision) in your request.
- Minimize payload size and network errors by [optimizing writes](/influxdb3/clustered/write-data/best-practices/optimize-writes/).
## Troubleshoot rejected points
InfluxDB rejects points that fall within the same partition (default partitioning
is by measurement and day) as existing bucket data and have a different data type
for an existing field.
Check for [field data type](/influxdb3/clustered/reference/syntax/line-protocol/#data-types-and-format)
differences between the rejected data point and points within the same database
and partition--for example, did you attempt to write `string` data to an `int` field?
<!-- The content for this page is at
//SOURCE - content/shared/influxdb3-write-guides/troubleshoot-distributed.md
-->

View File

@ -13,4 +13,4 @@ source: /shared/influxdb3-cli/config-options.md
<!-- The content of this file is at
//SOURCE - content/shared/influxdb3-cli/config-options.md
-->
-->

View File

@ -86,7 +86,7 @@ To use {{% product-name %}} to query data from InfluxDB 3, navigate to
The _Data Explorer_ lets you explore the
schema of your database and automatically builds SQL queries by either
selecting columns in the _Schema Browser_ or by using _Natural Language_ with
the {{% product-name %}} OpenAI integration.
the {{% product-name %}} AI integration.
For this getting started guide, use the Schema Browser to build a SQL query
that returns data from the newly written sample data set.

View File

@ -13,7 +13,7 @@ stored. Each database can contain multiple tables.
> **If coming from InfluxDB v2, InfluxDB Cloud (TSM), or InfluxDB Cloud Serverless**,
> _database_ and _bucket_ are synonymous.
<!--
{{% show-in "enterprise" %}}
## Retention periods
A database **retention period** is the maximum age of data stored in the database.
@ -22,10 +22,9 @@ When a point's timestamp is beyond the retention period (relative to now), the
point is marked for deletion and is removed from the database the next time the
retention enforcement service runs.
The _minimum_ retention period for an InfluxDB database is 1 hour.
The _maximum_ retention period is infinite meaning data does not expire and will
never be removed by the retention enforcement service.
-->
The _maximum_ retention period is infinite (`none`) meaning data does not expire
and will never be removed by the retention enforcement service.
{{% /show-in %}}
## Database, table, and column limits
@ -40,9 +39,11 @@ never be removed by the retention enforcement service.
**Maximum number of tables across all databases**: {{% influxdb3/limit "table" %}}
{{< product-name >}} limits the number of tables you can have across _all_
databases to {{% influxdb3/limit "table" %}}. There is no specific limit on how
many tables you can have in an individual database, as long as the total across
all databases is below the limit.
databases to {{% influxdb3/limit "table" %}}{{% show-in "enterprise" %}} by default{{% /show-in %}}.
{{% show-in "enterprise" %}}You can configure the table limit using the
[`--num-table-limit` configuration option](/influxdb3/enterprise/reference/config-options/#num-table-limit).{{% /show-in %}}
InfluxDB doesn't limit how many tables you can have in an individual database,
as long as the total across all databases is below the limit.
Having more tables affects your {{% product-name %}} installation in the
following ways:
@ -64,7 +65,8 @@ persists data to Parquet files. Each `PUT` request incurs a monetary cost and
increases the operating cost of {{< product-name >}}.
{{% /expand %}}
{{% expand "**More work for the compactor** _(Enterprise only)_ <em style='opacity:.5;font-weight:normal;'>View more info</em>" %}}
{{% show-in "enterprise" %}}
{{% expand "**More work for the compactor** <em style='opacity:.5;font-weight:normal;'>View more info</em>" %}}
To optimize storage over time, InfluxDB 3 Enterprise has a compactor that
routinely compacts Parquet files.
@ -72,6 +74,7 @@ With more tables and Parquet files to compact, the compactor may need to be scal
to keep up with demand, adding to the operating cost of InfluxDB 3 Enterprise.
{{% /expand %}}
{{% /show-in %}}
{{< /expand-wrapper >}}
### Column limit
@ -80,11 +83,17 @@ to keep up with demand, adding to the operating cost of InfluxDB 3 Enterprise.
Each row must include a time column, with the remaining columns representing
tags and fields.
As a result, a table can have one time column and up to {{% influxdb3/limit "column" -1 %}}
As a result,{{% show-in "enterprise" %}} by default,{{% /show-in %}} a table can
have one time column and up to {{% influxdb3/limit "column" -1 %}}
_combined_ field and tag columns.
If you attempt to write to a table and exceed the column limit, the write
request fails and InfluxDB returns an error.
{{% show-in "enterprise" %}}
You can configure the maximum number of columns per
table using the [`num-total-columns-per-table-limit` configuration option](/influxdb3/enterprise/reference/config-options/#num-total-columns-per-table-limit).
{{% /show-in %}}
A higher number of columns has the following side effects:
{{< expand-wrapper >}}

View File

@ -130,7 +130,12 @@ database_name/retention_policy_name
## Database limit
{{% show-in "enterprise" %}}
**Default maximum number of databases**: {{% influxdb3/limit "database" %}}
{{% /show-in %}}
{{% show-in "core" %}}
**Maximum number of databases**: {{% influxdb3/limit "database" %}}
{{% /show-in %}}
_For more information about {{< product-name >}} database, table, and column limits,
see [Database, table, and column limits](/influxdb3/version/admin/databases/#database-table-and-column-limits)._

View File

@ -53,6 +53,10 @@ influxdb3 serve
- [tls-minimum-versions](#tls-minimum-version)
- [without-auth](#without-auth)
- [disable-authz](#disable-authz)
{{% show-in "enterprise" %}}
- [num-database-limit](#num-database-limit)
- [num-table-limit](#num-table-limit)
- [num-total-columns-per-table-limit](#num-total-columns-per-table-limit){{% /show-in %}}
- [AWS](#aws)
- [aws-access-key-id](#aws-access-key-id)
- [aws-secret-access-key](#aws-secret-access-key)
@ -204,7 +208,7 @@ This value must be different than the [`--node-id`](#node-id) value.
#### data-dir
For the `file` object store, defines the location InfluxDB 3 uses to store files locally.
For the `file` object store, defines the location {{< product-name >}} uses to store files locally.
Required when using the `file` [object store](#object-store).
| influxdb3 serve option | Environment variable |
@ -216,7 +220,7 @@ Required when using the `file` [object store](#object-store).
{{% show-in "enterprise" %}}
#### license-email
Specifies the email address to associate with your InfluxDB 3 Enterprise license
Specifies the email address to associate with your {{< product-name >}} license
and automatically responds to the interactive email prompt when the server starts.
This option is mutually exclusive with [license-file](#license-file).
@ -228,7 +232,7 @@ This option is mutually exclusive with [license-file](#license-file).
#### license-file
Specifies the path to a license file for InfluxDB 3 Enterprise. When provided, the license
Specifies the path to a license file for {{< product-name >}}. When provided, the license
file's contents are used instead of requesting a new license.
This option is mutually exclusive with [license-email](#license-email).
@ -361,10 +365,44 @@ The server processes all requests without requiring tokens or authentication.
Optionally disable authz by passing in a comma separated list of resources.
Valid values are `health`, `ping`, and `metrics`.
| influxdb3 serve option | Environment variable |
| :--------------------- | :----------------------- |
| `--disable-authz` | `INFLUXDB3_DISABLE_AUTHZ`|
| influxdb3 serve option | Environment variable |
| :--------------------- | :------------------------ |
| `--disable-authz` | `INFLUXDB3_DISABLE_AUTHZ` |
{{% show-in "enterprise" %}}
---
#### num-database-limit
Limits the total number of active databases.
Default is {{% influxdb3/limit "database" %}}.
| influxdb3 serve option | Environment variable |
| :---------------------- | :---------------------------------------- |
| `--num-database-limit` | `INFLUXDB3_ENTERPRISE_NUM_DATABASE_LIMIT` |
---
#### num-table-limit
Limits the total number of active tables across all databases.
Default is {{% influxdb3/limit "table" %}}.
| influxdb3 serve option | Environment variable |
| :--------------------- | :------------------------------------- |
| `--num-table-limit` | `INFLUXDB3_ENTERPRISE_NUM_TABLE_LIMIT` |
---
#### num-total-columns-per-table-limit
Limits the total number of columns per table.
Default is {{% influxdb3/limit "column" %}}.
| influxdb3 serve option | Environment variable |
| :------------------------------------ | :------------------------------------------------------- |
| `--num-total-columns-per-table-limit` | `INFLUXDB3_ENTERPRISE_NUM_TOTAL_COLUMNS_PER_TABLE_LIMIT` |
{{% /show-in %}}
---
### AWS

View File

@ -0,0 +1,301 @@
Learn how to set appropriate query timeouts for InfluxDB 3 to balance performance and resource protection.
Query timeouts prevent resource monopolization while allowing legitimate queries to complete successfully.
The key is finding the "goldilocks zone"--timeouts that are not too short (causing legitimate queries to fail) and not too long (allowing runaway queries to monopolize resources).
- [Understanding query timeouts](#understanding-query-timeouts)
- [How query routing affects timeout strategy](#how-query-routing-affects-timeout-strategy)
- [Timeout configuration best practices](#timeout-configuration-best-practices)
- [InfluxDB 3 client library examples](#influxdb-3-client-library-examples)
- [Monitoring and troubleshooting](#monitoring-and-troubleshooting)
## Understanding query timeouts
Query timeouts define the maximum duration a query can run before being canceled.
In {{% product-name %}}, timeouts serve multiple purposes:
- **Resource protection**: Prevent runaway queries from monopolizing system resources
- **Performance optimization**: Ensure responsive system behavior for time-sensitive operations
- **Cost control**: Limit compute resource consumption
- **User experience**: Provide predictable response times for applications and dashboards
Query execution includes network latency, query planning, data retrieval, processing, and result serialization.
### The "goldilocks zone" for query timeouts
Optimal timeouts are:
- **Long enough**: To accommodate normal query execution under typical load
- **Short enough**: To prevent resource monopolization and provide reasonable feedback
- **Adaptive**: Adjusted based on query type, system load, and historical performance
## How query routing affects timeout strategy
InfluxDB 3 uses round-robin query routing to balance load across multiple queriers.
This creates a "checkout line" effect that influences timeout strategy.
> [!Note]
> #### Concurrent query execution
>
> InfluxDB 3 supports concurrent query execution, which helps minimize the impact of intensive or inefficient queries.
> However, you should still use appropriate timeouts and optimize your queries for best performance.
### The checkout line analogy
Consider a grocery store with multiple checkout lines:
- Customers (queries) are distributed across lines (queriers)
- A slow customer (long-running query) can block others in the same line
- More checkout lines (queriers) provide more alternatives when retrying
If one querier is unhealthy or has been hijacked by a "noisy neighbor" query (one that is excessively resource-hungry), giving up sooner may save time--it's like jumping to a cashier with no customers in line. However, if all queriers are overloaded, short retries may exacerbate the problem--you wouldn't jump to the end of another line if the cashier is already starting to scan your items.
### Noisy neighbor effects
In distributed systems:
- A single long-running query can impact other queries on the same querier
- Shorter timeouts with retries can help queries find less congested queriers
- The effectiveness depends on the number of available queriers
### When shorter timeouts help
- **Multiple queriers available**: Retries can find less congested queriers
- **Uneven load distribution**: Some queriers may be significantly less busy
- **Temporary congestion**: Brief spikes in query load or resource usage
### When shorter timeouts hurt
- **Few queriers**: Limited alternatives for retries
- **System-wide congestion**: All queriers are equally busy
- **Expensive query planning**: High overhead for query preparation
## Timeout configuration best practices
### Make timeouts adjustable
Configure timeouts that can be modified without service restarts using environment variables, configuration files, runtime APIs, or per-query overrides. Design your client applications to easily adjust timeouts on the fly, allowing you to respond quickly to performance changes and test different timeout strategies without code changes.
See the [InfluxDB 3 client library examples](#influxdb-3-client-library-examples)
for how to configure timeouts in Python.
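As a minimal sketch of the environment-variable approach (the variable names are hypothetical--use whatever fits your deployment tooling; `AUTH_TOKEN` and `DATABASE_NAME` follow the same placeholder conventions as the examples below), you might read timeout values at startup so they can change without a code deploy:
```python { placeholders="DATABASE_NAME|AUTH_TOKEN" }
import os
from influxdb_client_3 import InfluxDBClient3

# Hypothetical environment variables for the timeout tiers described below
UI_TIMEOUT = int(os.environ.get("QUERY_TIMEOUT_UI", "10"))            # dashboard queries
DEFAULT_TIMEOUT = int(os.environ.get("QUERY_TIMEOUT_DEFAULT", "60"))  # application queries

client = InfluxDBClient3(
    host="{{< influxdb/host >}}",
    token="AUTH_TOKEN",
    database="DATABASE_NAME",
    timeout=DEFAULT_TIMEOUT,  # client-wide default, in seconds
)

# Per-query override for a latency-sensitive dashboard call
table = client.query(
    query="SELECT * FROM home ORDER BY time DESC LIMIT 10",
    timeout=UI_TIMEOUT,
)
```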
### Use tiered timeout strategies
Implement different timeout classes based on query characteristics.
#### Starting point recommendations
{{% hide-in "cloud-serverless" %}}
| Query Type | Recommended Timeout | Use Case | Rationale |
|------------|-------------------|-----------|-----------|
| UI and dashboard | 10 seconds | Interactive dashboards, real-time monitoring | Users expect immediate feedback |
| Generic default | 60 seconds | Application queries, APIs | Balances performance and reliability |
| Mixed workload | 2 minutes | Development, testing environments | Accommodates various query types |
| Analytical and background | 5 minutes | Reports, batch processing, ETL operations | Complex queries need more time |
{{% /hide-in %}}
{{% show-in "cloud-serverless" %}}
| Query Type | Recommended Timeout | Use Case | Rationale |
|------------|-------------------|-----------|-----------|
| UI and dashboard | 10 seconds | Interactive dashboards, real-time monitoring | Users expect immediate feedback |
| Generic default | 30 seconds | Application queries, APIs | Serverless optimized for shorter queries |
| Mixed workload | 60 seconds | Development, testing environments | Limited by serverless execution model |
| Analytical and background | 2 minutes | Reports, batch processing | Complex queries within serverless limits |
{{% /show-in %}}
{{% show-in "enterprise, core" %}}
> [!Tip]
> #### Use caching
> Where immediate feedback is crucial, consider using [Last Value Cache](/influxdb3/version/admin/manage-last-value-caches/) to speed up queries for recent values and [Distinct Value Cache](/influxdb3/version/admin/manage-distinct-value-caches/) to speed up queries for distinct values.
{{% /show-in %}}
### Implement progressive timeout and retry logic
Consider using more sophisticated retry strategies rather than simple fixed retries:
1. **Exponential backoff**: Increase delay between retry attempts
2. **Jitter**: Add randomness to prevent thundering herd effects
3. **Circuit breakers**: Stop retries when the system is overloaded
4. **Deadline propagation**: Respect overall operation deadlines
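For example, a minimal sketch of exponential backoff with jitter, assuming a `client` configured as in the examples below (the retry counts and delays are illustrative):
```python
import random
import time

def query_with_backoff(client, query, timeout=30, max_retries=3):
    """Retry a query with exponential backoff and jitter."""
    for attempt in range(max_retries + 1):
        try:
            return client.query(query=query, timeout=timeout)
        except Exception as e:
            if attempt == max_retries:
                raise
            # Exponential backoff (1s, 2s, 4s, ...) plus up to 1s of jitter
            delay = (2 ** attempt) + random.uniform(0, 1)
            print(f"Attempt {attempt + 1} failed: {e}; retrying in {delay:.1f}s")
            time.sleep(delay)
```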
### Warning signs
Consider these indicators that timeouts may need adjustment:
- **Timeouts > 10 minutes**: Usually indicates [query optimization](/influxdb3/version/query-data/troubleshoot-and-optimize/optimize-queries/) opportunities
- **High retry rates**: May indicate timeouts are too aggressive
- **Resource utilization spikes**: Long-running queries may need shorter timeouts
- **User complaints**: Balance between performance and user experience
### Environment-specific considerations
- **Development**: Use longer timeouts for debugging
- **Production**: Use shorter timeouts with monitoring
- **Cost-sensitive**: Use aggressive timeouts and [query optimization](/influxdb3/version/query-data/troubleshoot-and-optimize/optimize-queries/)
### Experimental and ad-hoc queries
When introducing a new query to your application or when issuing ad-hoc queries to a database with many users, your query might be the "noisy neighbor" (the shopping cart overloaded with groceries). By setting a tighter timeout on experimental queries you can reduce the impact on other users.
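For example, when trying out a new query, you might cap it well below your application default--a sketch using the Python client configured as in the examples below:
```python
# Ad-hoc query with a deliberately tight timeout (5 seconds)
table = client.query(
    query="SELECT room, MAX(co) FROM home WHERE time >= now() - INTERVAL '7 days' GROUP BY room",
    timeout=5,
)
```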
## InfluxDB 3 client library examples
### Python client with timeout configuration
Configure timeouts in the InfluxDB 3 Python client:
```python { placeholders="DATABASE_NAME|HOST_URL|AUTH_TOKEN" }
import influxdb_client_3 as InfluxDBClient3
# Configure different timeout classes (in seconds)
ui_timeout = 10 # For dashboard queries
api_timeout = 60 # For application queries
batch_timeout = 300 # For analytical queries
# Create client with default timeout
client = InfluxDBClient3.InfluxDBClient3(
host="https://{{< influxdb/host >}}",
database="DATABASE_NAME",
token="AUTH_TOKEN",
timeout=api_timeout # Python client uses seconds
)
# Quick query with short timeout
def query_latest_data():
try:
result = client.query(
query="SELECT * FROM sensors WHERE time >= now() - INTERVAL '5 minutes' ORDER BY time DESC LIMIT 10",
timeout=ui_timeout
)
return result.to_pandas()
except Exception as e:
print(f"Quick query failed: {e}")
return None
# Analytical query with longer timeout
def query_daily_averages():
query = """
SELECT
DATE_TRUNC('day', time) as day,
room,
AVG(temperature) as avg_temp,
COUNT(*) as readings
FROM sensors
WHERE time >= now() - INTERVAL '30 days'
GROUP BY DATE_TRUNC('day', time), room
ORDER BY day DESC, room
"""
try:
result = client.query(
query=query,
timeout=batch_timeout
)
return result.to_pandas()
except Exception as e:
print(f"Analytical query failed: {e}")
return None
```
Replace the following:
{{% hide-in "cloud-serverless" %}}
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database to query{{% /hide-in %}}
{{% show-in "cloud-serverless" %}}
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the bucket to query{{% /show-in %}}
{{% show-in "clustered,cloud-dedicated" %}}
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: a [database token](/influxdb3/clustered/admin/tokens/#database-tokens) with _read_ access to the specified database.{{% /show-in %}}
{{% show-in "cloud-serverless" %}}
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: an [API token](/influxdb3/cloud-serverless/admin/tokens/) with _read_ access to the specified bucket.{{% /show-in %}}
{{% show-in "enterprise,core" %}}
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "database" %}} with read permissions on the specified database{{% /show-in %}}
### Basic retry logic implementation
Implement simple retry strategies with progressive timeouts:
```python
import time
import influxdb_client_3 as InfluxDBClient3
def query_with_retry(client, query: str, initial_timeout: int = 60, max_retries: int = 2):
"""Execute query with basic retry and progressive timeout increase"""
for attempt in range(max_retries + 1):
# Progressive timeout: increase timeout on each retry
timeout_seconds = initial_timeout + attempt * 30
try:
result = client.query(
query=query,
timeout=timeout_seconds
)
return result
except Exception as e:
if attempt == max_retries:
print(f"Query failed after {max_retries + 1} attempts: {e}")
raise
# Simple backoff delay
delay = 2 * (attempt + 1)
print(f"Query attempt {attempt + 1} failed: {e}")
print(f"Retrying in {delay} seconds with timeout {timeout_seconds}s...")
time.sleep(delay)
return None
# Usage example
result = query_with_retry(
client=client,
query="SELECT * FROM large_table WHERE time >= now() - INTERVAL '1 day'",
initial_timeout=60,
max_retries=2
)
```
## Monitoring and troubleshooting
### Key metrics to monitor
Track these essential timeout-related metrics:
- **Query duration percentiles**: P50, P95, P99 execution times
- **Timeout rate**: Percentage of queries that time out
- **Error rates**: Timeout errors vs. other failure types
- **Resource utilization**: CPU and memory usage during query execution
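One lightweight way to collect these metrics from the client side is to wrap query calls and record durations and timeout counts--a sketch (where you export the numbers depends on your monitoring stack):
```python
import time

query_durations = []  # export to your monitoring system (for example, as a histogram)
timeout_count = 0

def timed_query(client, query, timeout=60):
    """Run a query, record its duration, and count timeout failures."""
    global timeout_count
    start = time.monotonic()
    try:
        return client.query(query=query, timeout=timeout)
    except Exception as e:
        # Assumes timeout-related errors mention "timeout" or "deadline" in the message
        if "timeout" in str(e).lower() or "deadline" in str(e).lower():
            timeout_count += 1
        raise
    finally:
        query_durations.append(time.monotonic() - start)
```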
### Common timeout issues
#### High timeout rates
**Symptoms**: Many queries exceeding timeout limits
**Common causes**:
- Timeouts set too aggressively for query complexity
- System resource constraints
- Inefficient query patterns
**Solutions**:
1. Analyze query performance patterns
2. [Optimize slow queries](/influxdb3/version/query-data/troubleshoot-and-optimize/optimize-queries/) or increase timeouts appropriately
3. Scale system resources
#### Inconsistent query performance
**Symptoms**: Same queries sometimes fast, sometimes timeout
**Common causes**:
- Resource contention from concurrent queries
- Data compaction state (queries may be faster after compaction completes)
**Solutions**:
1. Analyze query patterns to identify and optimize slow queries
2. Implement retry logic with exponential backoff in your client applications
3. Adjust timeout values based on observed query performance patterns
{{% show-in "enterprise,core" %}}
4. Implement [Last Value Cache](/influxdb3/version/admin/manage-last-value-caches/) to speed up queries for recent values
5. Implement [Distinct Value Cache](/influxdb3/version/admin/manage-distinct-value-caches/) to speed up queries for distinct values
{{% /show-in %}}
> [!Note]
> Regular analysis of timeout patterns helps identify optimization opportunities and system scaling needs.

View File

@ -0,0 +1,348 @@
Learn how to avoid unexpected results and recover from errors when writing to {{% product-name %}}.
- [Handle write responses](#handle-write-responses)
- [Review HTTP status codes](#review-http-status-codes)
- [Troubleshoot failures](#troubleshoot-failures)
- [Troubleshoot rejected points](#troubleshoot-rejected-points)
- [Report write issues](#report-write-issues)
## Handle write responses
{{% product-name %}} does the following when you send a write request:
1. Validates the request.
2. If successful, attempts to [ingest data](/influxdb3/version/reference/internals/durability/#data-ingest) from the request body; otherwise, responds with an [error status](#review-http-status-codes).
3. Ingests or rejects data from the batch and returns one of the following HTTP status codes:
- `204 No Content`: All of the data is ingested and queryable.
- `400 Bad Request`: Some {{% show-in "cloud-dedicated,clustered" %}}(_when **partial writes** are configured for the cluster_){{% /show-in %}} or all of the data has been rejected. Data that has not been rejected is ingested and queryable.
The response body contains error details about [rejected points](#troubleshoot-rejected-points), up to 100 points.
Writes are synchronous--the response status indicates the final status of the write and all ingested data is queryable.
To ensure that InfluxDB handles writes in the order you request them,
wait for the response before you send the next request.
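As a minimal sketch of handling these responses over the HTTP API (using the v2-compatible `/api/v2/write` endpoint with the Python `requests` library; the placeholder values are illustrative, and your product may require additional parameters such as `org`):
```python
import requests

url = "https://{{< influxdb/host >}}/api/v2/write"
headers = {"Authorization": "Token AUTH_TOKEN"}
params = {"bucket": "DATABASE_NAME", "precision": "ns"}
lines = "home,room=Kitchen temp=21.5 1641024000000000000"

response = requests.post(url, headers=headers, params=params, data=lines)

if response.status_code == 204:
    print("All points ingested")
elif response.status_code == 400:
    # Partial or full rejection--inspect the rejected point details
    print("Rejected points:", response.json().get("message"))
elif response.status_code == 503:
    print("Retry after", response.headers.get("Retry-After"), "seconds")
else:
    response.raise_for_status()
```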
### Review HTTP status codes
InfluxDB uses conventional HTTP status codes to indicate the success or failure of a request.
The `message` property of the response body may contain additional details about the error.
{{< product-name >}} returns one of the following HTTP status codes for a write request:
{{% show-in "clustered,cloud-dedicated" %}}
| HTTP response code | Response body | Description |
| :-------------------------------| :--------------------------------------------------------------- | :------------- |
| `204 "No Content"` | Empty | InfluxDB ingested all of the data in the batch |
| `400 "Bad request"` | error details about rejected points, up to 100 points: `line` contains the first rejected line, `message` describes rejections | Some or all request data isn't allowed (for example, is malformed or falls outside of the database's retention period)--the response body indicates whether a partial write has occurred or if all data has been rejected |
| `401 "Unauthorized"` | Empty | The `Authorization` request header is missing or malformed or the [token](/influxdb3/version/admin/tokens/) doesn't have permission to write to the database |
| `404 "Not found"` | A requested **resource type** (for example, "database"), and **resource name** | A requested resource wasn't found |
| `422 "Unprocessable Entity"` | `message` contains details about the error | The data isn't allowed (for example, falls outside of the database's retention period). |
| `500 "Internal server error"` | Empty | Default status for an error |
| `503 "Service unavailable"` | Empty | The server is temporarily unavailable to accept writes. The `Retry-After` header contains the number of seconds to wait before trying the write again. |
{{% /show-in %}}
{{% show-in "cloud-serverless" %}}
| HTTP response code | Response body | Description |
| :-------------------------------| :--------------------------------------------------------------- | :------------- |
| `204 "No Content"` | Empty | InfluxDB ingested all of the data in the batch |
| `400 "Bad request"` | error details about rejected points, up to 100 points: `line` contains the first rejected line, `message` describes rejections | Some or all request data isn't allowed (for example, is malformed or falls outside of the bucket's retention period)--the response body indicates whether a partial write has occurred or if all data has been rejected |
| `401 "Unauthorized"` | Empty | The `Authorization` request header is missing or malformed or the [token](/influxdb3/version/admin/tokens/) doesn't have permission to write to the bucket |
| `404 "Not found"` | A requested **resource type** (for example, "organization" or "bucket"), and **resource name** | A requested resource wasn't found |
| `413 "Request too large"` | cannot read data: points in batch is too large | The request exceeds the maximum [global limit](/influxdb3/cloud-serverless/admin/billing/limits/) |
| `422 "Unprocessable Entity"` | `message` contains details about the error | The data isn't allowed (for example, falls outside of the database's retention period). |
| `429 "Too many requests"` | Empty | The number of requests exceeds the [adjustable service quota](/influxdb3/cloud-serverless/admin/billing/limits/#adjustable-service-quotas). The `Retry-After` header contains the number of seconds to wait before trying the write again. |
| `500 "Internal server error"` | Empty | Default status for an error |
| `503 "Service unavailable"` | Empty | The server is temporarily unavailable to accept writes. The `Retry-After` header contains the number of seconds to wait before trying the write again. |
{{% /show-in %}}
If your data did not write to the {{% show-in "cloud-serverless" %}}bucket{{% /show-in %}}{{% show-in "cloud-dedicated,clustered" %}}database{{% /show-in %}}, see how to [troubleshoot rejected points](#troubleshoot-rejected-points).
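The following sketch shows one way to honor the `Retry-After` header when a write returns `503`{{% show-in "cloud-serverless" %}} or `429`{{% /show-in %}}. It assumes the `/write` endpoint shown in the curl example later on this page and the third-party Python `requests` library--treat it as a starting point rather than a complete retry strategy:

```python { placeholders="DATABASE_NAME|AUTH_TOKEN" }
# Uses the third-party requests library (pip install requests)
import time
import requests

def write_with_retry(batch: str, max_attempts: int = 3) -> requests.Response:
    """Send a write request and retry only when the server signals a temporary condition."""
    url = "https://{{< influxdb/host >}}/write"
    params = {"db": "DATABASE_NAME", "precision": "ns"}
    headers = {
        "Authorization": "Bearer AUTH_TOKEN",
        "Content-Type": "text/plain; charset=utf-8",
    }

    response = None
    for attempt in range(1, max_attempts + 1):
        response = requests.post(url, params=params, headers=headers, data=batch)
        if response.status_code not in (429, 503) or attempt == max_attempts:
            return response
        # Wait the number of seconds the server suggests, or a short default.
        time.sleep(int(response.headers.get("Retry-After", "5")))
    return response
```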
## Troubleshoot failures
If you notice data is missing in your database, do the following:
- Check the [HTTP status code](#review-http-status-codes) in the response.
- Check the `message` property in the response body for details about the error.
- If the `message` describes a field error, [troubleshoot rejected points](#troubleshoot-rejected-points).
- Verify all lines contain valid syntax ([line protocol](/influxdb3/version/reference/syntax/line-protocol/)).
- Verify the timestamps in your data match the [precision parameter](/influxdb3/version/reference/glossary/#precision) in your request (see the precision-check sketch after this list).
- Minimize payload size and network errors by [optimizing writes](/influxdb3/version/write-data/best-practices/optimize-writes/).
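The following sketch is a rough heuristic (not part of any InfluxDB client) that guesses the epoch precision of the timestamps in a line protocol file from their digit counts, so you can compare the result with the `precision` parameter you send:

```python
# Rough heuristic: infer the likely epoch precision of line protocol
# timestamps from their digit count. Assumes one point per line with a
# trailing timestamp; adjust the file name to match your data.
def likely_precision(timestamp: str) -> str:
    digits = len(timestamp.lstrip("-"))
    return {10: "s", 13: "ms", 16: "us", 19: "ns"}.get(digits, "unknown")

with open("problematic-data.lp") as f:
    for line_number, line in enumerate(f, start=1):
        fields = line.strip()
        if not fields or fields.startswith("#"):
            continue
        timestamp = fields.rsplit(" ", 1)[-1]
        if timestamp.lstrip("-").isdigit():
            print(f"line {line_number}: {timestamp} looks like {likely_precision(timestamp)} precision")
```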
## Troubleshoot rejected points
When writing points from a batch, InfluxDB rejects points that have syntax errors or schema conflicts.
If InfluxDB processes the data in your batch and then rejects points, the [HTTP response](#handle-write-responses) body contains the following properties that describe rejected points:
- `code`: `"invalid"`
- `line`: the line number of the _first_ rejected point in the batch.
- `message`: a string that contains line-separated error messages, one message for each rejected point in the batch, up to 100 rejected points. Line numbers are 1-based.
InfluxDB rejects points for the following reasons:
- a line protocol parsing error
- an invalid timestamp
- a schema conflict
Schema conflicts occur when you try to write data that contains any of the following:
- a wrong data type: the point falls within the same partition (default partitioning is by measurement and day) as existing {{% show-in "cloud-serverless" %}}bucket{{% /show-in %}}{{% show-in "cloud-dedicated,clustered" %}}database{{% /show-in %}} data and contains a different data type for an existing field
- a tag and a field that use the same key
### Example
The following example shows a response body for a write request that contains two rejected points:
```json
{
  "code": "invalid",
  "line": 2,
  "message": "failed to parse line protocol:
              errors encountered on line(s):
              error parsing line 2 (1-based): Invalid measurement was provided
              error parsing line 4 (1-based): Unable to parse timestamp value '123461000000000000000000000000'"
}
```
Check for [field data type](/influxdb3/version/reference/syntax/line-protocol/#data-types-and-format) differences between the rejected data point and points within the same database and partition (default partitioning
is by measurement and day)--for example, did you attempt to write `string` data to an `int` field?
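To see which data types InfluxDB has already stored, you can query the SQL `information_schema`. The following sketch uses the `influxdb_client_3` Python client and assumes a table named `home`--replace the table name and credentials with your own:

```python { placeholders="DATABASE_NAME|AUTH_TOKEN" }
from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(
    host="{{< influxdb/host >}}",
    token="AUTH_TOKEN",
    database="DATABASE_NAME",
)

# List the name and data type of each column in the example `home` table
# (replace 'home' with your own table name)
table = client.query(
    query="""
        SELECT column_name, data_type
        FROM information_schema.columns
        WHERE table_name = 'home'
    """,
    language="sql",
)
print(table)
```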
## Report write issues
If you experience persistent write issues that you can't resolve using the troubleshooting steps above, use these guidelines to gather the necessary information when reporting the issue to InfluxData support.
> [!Note]
> #### Before reporting an issue
>
> Ensure you have followed all [troubleshooting steps](#troubleshoot-failures) and
> reviewed the [write optimization guidelines](/influxdb3/version/write-data/best-practices/optimize-writes/)
> to rule out common configuration and data formatting issues.
### Gather essential information
When reporting write issues, provide the following information to help InfluxData engineers diagnose the problem:
#### 1. Error details and logs
**Capture the complete error response:**
```bash { placeholders="AUTH_TOKEN|DATABASE_NAME" }
# Capture the HTTP status, response time, and response body for a write attempt
curl --silent --show-error --write-out "\nHTTP Status: %{http_code}\nResponse Time: %{time_total}s\n" \
  --request POST \
  "https://{{< influxdb/host >}}/write?db=DATABASE_NAME&precision=ns" \
  --header "Authorization: Bearer AUTH_TOKEN" \
  --header "Content-Type: text/plain; charset=utf-8" \
  --data-binary @problematic-data.lp \
  > write-error-response.txt 2>&1
```
**Log client-side errors:**
If using a client library, enable debug logging and capture the full exception details:
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Python](#)
[Go](#)
[Java](#)
[JavaScript](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```python { placeholders="DATABASE_NAME|AUTH_TOKEN" }
import logging
import traceback

from influxdb_client_3 import InfluxDBClient3

# Enable debug logging
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("influxdb_client_3")

# Line protocol batch that fails to write (replace with your own data)
data = "home,room=Kitchen temp=23.5"

try:
    client = InfluxDBClient3(token="AUTH_TOKEN", host="{{< influxdb/host >}}", database="DATABASE_NAME")
    client.write(data)
except Exception as e:
    logger.error(f"Write failed: {str(e)}")
    # Include the full stack trace in your report
    traceback.print_exc()
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```go { placeholders="DATABASE_NAME|AUTH_TOKEN" }
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/InfluxCommunity/influxdb3-go/influxdb3"
)

func main() {
	// Enable debug logging
	client, err := influxdb3.New(influxdb3.ClientConfig{
		Host:     "https://{{< influxdb/host >}}",
		Token:    "AUTH_TOKEN",
		Database: "DATABASE_NAME",
		Debug:    true,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Line protocol batch that fails to write (replace with your own data)
	data := []byte("home,room=Kitchen temp=23.5")

	err = client.Write(context.Background(), data)
	if err != nil {
		// Log the full error details
		fmt.Fprintf(os.Stderr, "Write error: %+v\n", err)
	}
}
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```java { placeholders="DATABASE_NAME|AUTH_TOKEN" }
import com.influxdb.v3.client.InfluxDBClient;

import java.util.logging.Level;
import java.util.logging.Logger;

public class WriteErrorExample {
    private static final Logger logger = Logger.getLogger(WriteErrorExample.class.getName());

    public static void main(String[] args) {
        // Line protocol batch that fails to write (replace with your own data)
        String data = "home,room=Kitchen temp=23.5";

        try (InfluxDBClient client = InfluxDBClient.getInstance(
                "https://{{< influxdb/host >}}",
                "AUTH_TOKEN".toCharArray(),
                "DATABASE_NAME")) {
            client.writeRecord(data);
        } catch (Exception e) {
            logger.log(Level.SEVERE, "Write failed", e);
            // Include the full stack trace in your report
            e.printStackTrace();
        }
    }
}
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```javascript { placeholders="DATABASE_NAME|AUTH_TOKEN" }
import { InfluxDBClient } from '@influxdata/influxdb3-client'

const client = new InfluxDBClient({
  host: 'https://{{< influxdb/host >}}',
  token: 'AUTH_TOKEN',
  database: 'DATABASE_NAME'
})

// Line protocol batch that fails to write (replace with your own data)
const data = 'home,room=Kitchen temp=23.5'

try {
  await client.write(data)
} catch (error) {
  console.error('Write failed:', error)
  // Include the full error object in your report
  console.error('Full error details:', JSON.stringify(error, null, 2))
} finally {
  await client.close()
}
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
Replace the following in your code:
{{% hide-in "cloud-serverless" %}}
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database to write to{{% /hide-in %}}
{{% show-in "cloud-serverless" %}}
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the bucket to write to{{% /show-in %}}
{{% show-in "clustered,cloud-dedicated" %}}
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: a [database token](/influxdb3/clustered/admin/tokens/#database-tokens) with _write_ access to the specified database.{{% /show-in %}}
{{% show-in "cloud-serverless" %}}
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: an [API token](/influxdb3/cloud-serverless/admin/tokens/) with _write_ access to the specified bucket.{{% /show-in %}}
{{% show-in "enterprise,core" %}}
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "database" %}} with write permissions on the specified database{{% /show-in %}}
#### 2. Data samples and patterns
**Provide representative data samples:**
- Include 10-20 lines of the problematic line protocol data (sanitized if necessary)
- Show both successful and failing data formats
- Include timestamp ranges and precision used
- Specify if the issue occurs with specific measurements, tags, or field types
**Example data documentation:**
```
# Successful writes:
measurement1,tag1=value1,tag2=value2 field1=1.23,field2="text" 1640995200000000000
# Failing writes:
measurement1,tag1=value1,tag2=value2 field1="string",field2=456 1640995260000000000
# Error: field data type conflict - field1 changed from float to string
```
#### 3. Write patterns and volume
Document your write patterns:
- **Frequency**: How often do you write data? (for example, every 10 seconds, once per minute)
- **Batch size**: How many points per write request?
- **Concurrency**: How many concurrent write operations?
- **Data retention**: How long is data retained?
- **Timing**: When did the issue first occur? Is it intermittent or consistent?
#### 4. Environment details
{{% show-in "clustered" %}}
**Cluster configuration:**
- InfluxDB Clustered version
- Kubernetes environment details
- Node specifications (CPU, memory, storage)
- Network configuration between client and cluster
{{% /show-in %}}
**Client configuration:**
- Client library version and language
- Connection settings (timeouts, retry logic)
- Geographic location relative to cluster
#### 5. Reproduction steps
Provide step-by-step instructions to reproduce the issue:
1. **Environment setup**: How to configure a similar environment
2. **Data preparation**: Sample data files or generation scripts
3. **Write commands**: Exact commands or code used
4. **Expected vs actual results**: What should happen vs what actually happens
### Create a support package
Organize all gathered information into a comprehensive package:
**Files to include:**
- `write-error-response.txt` - HTTP response details
- `client-logs.txt` - Client library debug logs
- `sample-data.lp` - Representative line protocol data (sanitized)
- `reproduction-steps.md` - Detailed reproduction guide
- `environment-details.md` - {{% show-in "clustered" %}}Cluster and{{% /show-in %}} client configuration
- `write-patterns.md` - Usage patterns and volume information
**Package format:**
```bash
# Create a timestamped support package
TIMESTAMP=$(date -Iseconds)
mkdir "write-issue-${TIMESTAMP}"
# Add all relevant files to the directory
tar -czf "write-issue-${TIMESTAMP}.tar.gz" "write-issue-${TIMESTAMP}/"
```
### Submit the issue
Include the support package when contacting InfluxData support through your standard [support channels](#bug-reports-and-feedback), along with:
- A clear description of the problem
- Impact assessment (how critical is this issue?)
- Any workarounds you've attempted
- Business context if the issue affects production systems
This comprehensive information will help InfluxData engineers identify root causes and provide targeted solutions for your write issues.
View File
@ -65,11 +65,11 @@ The following table provides information about what metaqueries are available in
### Aggregate functions
| Function | Supported |
| :---------------------------------------------------------------------------------------- | :----------------------: |
| Function | Supported |
| :-------------------------------------------------------------------------------- | :----------------------: |
| [COUNT()](/influxdb/version/reference/influxql/functions/aggregates/#count) | **{{< icon "check" >}}** |
| [DISTINCT()](/influxdb/version/reference/influxql/functions/aggregates/#distinct) | **{{< icon "check" >}}** |
| <span style="opacity: .5;">INTEGRAL()</span> | |
| [INTEGRAL()](/influxdb/version/reference/influxql/functions/aggregates/#integral) | **{{< icon "check" >}}** |
| [MEAN()](/influxdb/version/reference/influxql/functions/aggregates/#mean) | **{{< icon "check" >}}** |
| [MEDIAN()](/influxdb/version/reference/influxql/functions/aggregates/#median) | **{{< icon "check" >}}** |
| [MODE()](/influxdb/version/reference/influxql/functions/aggregates/#mode) | **{{< icon "check" >}}** |
@ -77,29 +77,25 @@ The following table provides information about what metaqueries are available in
| [STDDEV()](/influxdb/version/reference/influxql/functions/aggregates/#stddev) | **{{< icon "check" >}}** |
| [SUM()](/influxdb/version/reference/influxql/functions/aggregates/#sum) | **{{< icon "check" >}}** |
<!--
INTEGRAL [influxdb_iox#6937](https://github.com/influxdata/influxdb_iox/issues/6937)
-->
### Selector functions
| Function | Supported |
| :------------------------------------------------------------------------------------------- | :----------------------: |
| Function | Supported |
| :----------------------------------------------------------------------------------- | :----------------------: |
| [BOTTOM()](/influxdb/version/reference/influxql/functions/selectors/#bottom) | **{{< icon "check" >}}** |
| [FIRST()](/influxdb/version/reference/influxql/functions/selectors/#first) | **{{< icon "check" >}}** |
| [LAST()](/influxdb/version/reference/influxql/functions/selectors/#last) | **{{< icon "check" >}}** |
| [MAX()](/influxdb/version/reference/influxql/functions/selectors/#max) | **{{< icon "check" >}}** |
| [MIN()](/influxdb/version/reference/influxql/functions/selectors/#min) | **{{< icon "check" >}}** |
| [PERCENTILE()](/influxdb/version/reference/influxql/functions/selectors/#percentile) | **{{< icon "check" >}}** |
| <span style="opacity: .5;">SAMPLE()</span> | |
| <span style="opacity: .5;">SAMPLE()</span> | |
| [TOP()](/influxdb/version/reference/influxql/functions/selectors/#top) | **{{< icon "check" >}}** |
<!-- SAMPLE() [influxdb_iox#6935](https://github.com/influxdata/influxdb_iox/issues/6935) -->
### Transformations
| Function | Supported |
| :--------------------------------------------------------------------------------------------------------------------------- | :----------------------: |
| Function | Supported |
| :------------------------------------------------------------------------------------------------------------------- | :----------------------: |
| [ABS()](/influxdb/version/reference/influxql/functions/transformations/#abs) | **{{< icon "check" >}}** |
| [ACOS()](/influxdb/version/reference/influxql/functions/transformations/#acos) | **{{< icon "check" >}}** |
| [ASIN()](/influxdb/version/reference/influxql/functions/transformations/#asin) | **{{< icon "check" >}}** |
View File
@ -6,6 +6,7 @@ _Examples use the sample data set provided in the
- [COUNT()](#count)
- [DISTINCT()](#distinct)
- [INTEGRAL()](#integral)
- [MEAN()](#mean)
- [MEDIAN()](#median)
- [MODE()](#mode)
@ -13,17 +14,6 @@ _Examples use the sample data set provided in the
- [STDDEV()](#stddev)
- [SUM()](#sum)
<!-- When implemented, place back in alphabetical order -->
<!-- - [INTEGRAL()](#integral) -->
> [!Important]
> #### Missing InfluxQL functions
>
> Some InfluxQL functions are in the process of being rearchitected to work with
> the InfluxDB 3 storage engine. If a function you need is not here, check the
> [InfluxQL feature support page](/influxdb/version/reference/influxql/feature-support/#function-support)
> for more information.
## COUNT()
Returns the number of non-null [field values](/influxdb/version/reference/glossary/#field-value).
@ -186,14 +176,14 @@ name: home
{{% /expand %}}
{{< /expand-wrapper >}}
<!-- ## INTEGRAL()
## INTEGRAL()
Returns the area under the curve for queried [field values](/influxdb/version/reference/glossary/#field-value)
and converts those results into the summed area per **unit** of time.
> [!Note]
> `INTEGRAL()` does not support [`fill()`](/influxdb/version/query-data/influxql/explore-data/group-by/> #group-by-time-intervals-and-fill).
> `INTEGRAL()` supports int64 and float64 field value [data types](/influxdb/version/reference/glossary/#data-type).
> [!Important]
> - `INTEGRAL()` does not support [`fill()`](/influxdb/version/reference/influxql/group-by/#group-by-time-and-fill-gaps).
> - `INTEGRAL()` supports int64 and float64 field value [data types](/influxdb/version/reference/glossary/#data-type).
```sql
INTEGRAL(field_expression[, unit])
@ -318,7 +308,7 @@ name: home
{{% /influxdb/custom-timestamps %}}
{{% /expand %}}
{{< /expand-wrapper >}} -->
{{< /expand-wrapper >}}
## MEAN()
View File
@ -5046,9 +5046,9 @@ tldts@^6.1.32:
tldts-core "^6.1.86"
tmp@~0.2.3:
version "0.2.3"
resolved "https://registry.yarnpkg.com/tmp/-/tmp-0.2.3.tgz#eb783cc22bc1e8bebd0671476d46ea4eb32a79ae"
integrity sha512-nZD7m9iCPC5g0pYmcaxogYKggSfLsdxl8of3Q/oIbqCqLLIO9IAF0GWjX1z9NZRHPiXv8Wex4yDCaZsgEw0Y8w==
version "0.2.4"
resolved "https://registry.yarnpkg.com/tmp/-/tmp-0.2.4.tgz#c6db987a2ccc97f812f17137b36af2b6521b0d13"
integrity sha512-UdiSoX6ypifLmrfQ/XfiawN6hkjSBpCjhKxxZcWlUUmoXLaCKQU0bx4HF/tdDK2uzRuchf1txGvrWBzYREssoQ==
to-buffer@^1.1.1:
version "1.2.1"