Merge branch 'master' into copilot/fix-6299

pull/6300/head
Jason Stirnaman 2025-08-15 10:01:42 -05:00 committed by GitHub
commit a4c89f024b
GPG Key ID: B5690EEEBB952194
19 changed files with 1179 additions and 24 deletions


@ -66,7 +66,22 @@ paths:
schema:
type: string
required: true
description: |
The database to write to.
**Database targeting:** In Cloud Dedicated, databases can be named using the `database_name/retention_policy_name` convention for InfluxQL compatibility. Cloud Dedicated does not use DBRP mappings. The `db` and `rp` parameters are combined to construct the target database name following this naming convention.
**Auto-creation behavior:** Cloud Dedicated requires databases to be created before writing data. The v1 `/write` API does not automatically create databases. If the specified database does not exist, the write request fails.
**Authentication:** Requires a valid API token with _write_ permissions for the target database.
### Related
- [Write data to InfluxDB Cloud Dedicated](/influxdb3/cloud-dedicated/write-data/)
- [Manage databases in InfluxDB Cloud Dedicated](/influxdb3/cloud-dedicated/admin/databases/)
- [InfluxQL DBRP naming convention](/influxdb3/cloud-dedicated/admin/databases/create/#influxql-dbrp-naming-convention)
- [InfluxQL data retention policy mapping differences](/influxdb3/cloud-serverless/guides/prototype-evaluation/#influxql-data-retention-policy-mapping-differences)
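The naming convention above can be expressed concretely. A minimal sketch, assuming the behavior described in this parameter's description (the `target_database` helper is illustrative, not part of any client library):

```python
def target_database(db, rp=None):
    """Build the Cloud Dedicated database name targeted by v1 db/rp parameters.

    With both db and rp, the target is "db/rp"; with db alone, it is just db.
    """
    return f"{db}/{rp}" if rp else db

# "mydb" with "autogen" targets the database named "mydb/autogen"
print(target_database("mydb", "autogen"))  # mydb/autogen
print(target_database("mydb"))             # mydb
```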
- in: query
name: rp
schema:
@ -137,6 +152,160 @@ paths:
schema:
$ref: '#/components/schemas/Error'
/query:
get:
operationId: GetQueryV1
tags:
- Query
summary: Query using the InfluxDB v1 HTTP API
parameters:
- $ref: '#/components/parameters/TraceSpan'
- $ref: '#/components/parameters/AuthUserV1'
- $ref: '#/components/parameters/AuthPassV1'
- in: header
name: Accept
schema:
type: string
description: Specifies how query results should be encoded in the response. **Note:** With `application/csv`, query results include epoch timestamps instead of RFC3339 timestamps.
default: application/json
enum:
- application/json
- application/csv
- text/csv
- application/x-msgpack
- in: header
name: Accept-Encoding
description: The Accept-Encoding request HTTP header advertises which content encoding, usually a compression algorithm, the client is able to understand.
schema:
type: string
description: Indicates whether the response body should be compressed with `gzip` or left unencoded (`identity`).
default: identity
enum:
- gzip
- identity
- in: query
name: chunked
description: |
If true, the response is divided into chunks of size `chunk_size`.
schema:
type: boolean
default: false
- in: query
name: chunk_size
description: |
The number of records to include in each chunk.
This parameter is only used if `chunked=true`.
schema:
type: integer
default: 10000
- in: query
name: db
schema:
type: string
required: true
description: The database to query from.
- in: query
name: pretty
description: |
If true, the JSON response is formatted in a human-readable format.
schema:
type: boolean
default: false
- in: query
name: q
description: Defines the InfluxQL query to run.
required: true
schema:
type: string
- in: query
name: rp
schema:
type: string
description: |
The retention policy name for InfluxQL compatibility.
Optional parameter that, when combined with the `db` parameter, forms the complete database name to query. In InfluxDB Cloud Dedicated, databases can be named using the
`database_name/retention_policy_name` convention for InfluxQL compatibility.
When a request specifies both `db` and `rp`, Cloud Dedicated combines them as `db/rp` to target the database--for example:
- If `db=mydb` and `rp=autogen`, the query targets the database named `mydb/autogen`
- If only `db=mydb` is provided (no `rp`), the query targets the database named `mydb`
Unlike InfluxDB v1 and Cloud Serverless, Cloud Dedicated does not use DBRP mappings or separate retention policy objects. This parameter exists solely for v1 API
compatibility and database naming conventions.
_Note: The retention policy name does not control data retention in Cloud Dedicated. Data retention is determined by the database's **retention period** setting._
### Related
- [InfluxQL DBRP naming convention](/influxdb3/cloud-dedicated/admin/databases/create/#influxql-dbrp-naming-convention)
- [InfluxQL data retention policy mapping differences](/influxdb3/cloud-serverless/guides/prototype-evaluation/#influxql-data-retention-policy-mapping-differences)
- name: epoch
description: |
Formats timestamps as Unix (epoch) timestamps with the specified precision
instead of RFC3339 timestamps with nanosecond precision.
in: query
schema:
type: string
enum:
- h
- m
- s
- ms
- u
- µ
- ns
responses:
'200':
description: Query results
headers:
Content-Encoding:
description: The Content-Encoding header indicates which encodings (usually compression algorithms) were applied to the response body.
schema:
type: string
description: Indicates whether the response body is encoded with `gzip` or unencoded (`identity`).
default: identity
enum:
- gzip
- identity
Trace-Id:
description: The Trace-Id header reports the request's trace ID, if one was generated.
schema:
type: string
description: Specifies the request's trace ID.
content:
application/csv:
schema:
$ref: '#/components/schemas/InfluxQLCSVResponse'
text/csv:
schema:
$ref: '#/components/schemas/InfluxQLCSVResponse'
application/json:
schema:
$ref: '#/components/schemas/InfluxQLResponse'
examples:
influxql-chunk_size_2:
value: |
{"results":[{"statement_id":0,"series":[{"name":"mymeas","columns":["time","myfield","mytag"],"values":[["2016-05-19T18:37:55Z",90,"1"],["2016-05-19T18:37:56Z",90,"1"]],"partial":true}],"partial":true}]}
{"results":[{"statement_id":0,"series":[{"name":"mymeas","columns":["time","myfield","mytag"],"values":[["2016-05-19T18:37:57Z",90,"1"],["2016-05-19T18:37:58Z",90,"1"]]}]}]}
application/x-msgpack:
schema:
type: string
format: binary
'429':
description: Token is temporarily over quota. The Retry-After header describes when to try the read again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
type: integer
format: int32
default:
description: Error processing query
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
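When `chunked=true`, the JSON body is a stream of newline-delimited result objects like the `influxql-chunk_size_2` example above, with `"partial": true` marking incomplete series. A minimal client-side merging sketch, assuming that line-delimited format:

```python
import json

def merge_chunks(body):
    """Merge a chunked InfluxQL JSON response (one JSON object per line)
    into a dict of series name -> accumulated rows."""
    rows = {}
    for line in body.strip().splitlines():
        chunk = json.loads(line)
        for result in chunk.get("results", []):
            for series in result.get("series", []):
                rows.setdefault(series["name"], []).extend(series["values"])
    return rows

# Two chunks of two rows each, as in the example response above
body = '''{"results":[{"statement_id":0,"series":[{"name":"mymeas","columns":["time","myfield","mytag"],"values":[["2016-05-19T18:37:55Z",90,"1"],["2016-05-19T18:37:56Z",90,"1"]],"partial":true}],"partial":true}]}
{"results":[{"statement_id":0,"series":[{"name":"mymeas","columns":["time","myfield","mytag"],"values":[["2016-05-19T18:37:57Z",90,"1"],["2016-05-19T18:37:58Z",90,"1"]]}]}]}'''
print(len(merge_chunks(body)["mymeas"]))  # 4
```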
post:
operationId: PostQueryV1
tags:
@ -148,6 +317,83 @@ paths:
text/plain:
schema:
type: string
application/json:
schema:
type: object
properties:
db:
type: string
description: |
The database name for InfluxQL queries.
Required parameter that specifies the database to query.
In InfluxDB Cloud Dedicated, this can be either:
- A simple database name (for example, `mydb`)
- The database portion of a `database_name/retention_policy_name` naming convention (used together with the `rp` parameter)
When used alone, `db` specifies the complete database name to query. When used with the `rp` parameter, they combine to form the full database name as `db/rp`--for example, if `db=mydb` and `rp=autogen`, the query targets the database named `mydb/autogen`.
Unlike InfluxDB Cloud Serverless, Cloud Dedicated does not use DBRP mappings. The database name directly corresponds to an existing database in your Cloud Dedicated cluster.
Examples:
- `db=mydb` - queries the database named `mydb`
- `db=mydb` with `rp=autogen` - queries the database named `mydb/autogen`
_Note: The specified database must exist in your Cloud Dedicated cluster. Queries will fail if the database does not exist._
### Related
- [InfluxQL DBRP naming convention](/influxdb3/cloud-dedicated/admin/databases/create/#influxql-dbrp-naming-convention)
- [Migrate data from InfluxDB 1.x to Cloud Dedicated](/influxdb3/cloud-dedicated/guides/migrate-data/migrate-1x-to-cloud-dedicated/)
- [InfluxQL data retention policy mapping differences between InfluxDB Cloud Dedicated and Cloud Serverless](/influxdb3/cloud-serverless/guides/prototype-evaluation/#influxql-data-retention-policy-mapping-differences)
rp:
description: |
The retention policy name for InfluxQL compatibility.
Optional parameter that, when combined with the `db` parameter, forms the complete database name to query. In InfluxDB Cloud Dedicated, databases can be named using the
`database_name/retention_policy_name` convention for InfluxQL compatibility.
When a request specifies both `db` and `rp`, Cloud Dedicated combines them as `db/rp` to target the database--for example:
- If `db=mydb` and `rp=autogen`, the query targets the database named `mydb/autogen`
- If only `db=mydb` is provided (no `rp`), the query targets the database named `mydb`
Unlike InfluxDB v1 and Cloud Serverless, Cloud Dedicated does not use DBRP mappings or separate retention policy objects. This parameter exists solely for v1 API
compatibility and database naming conventions.
_Note: The retention policy name does not control data retention in Cloud Dedicated. Data retention is determined by the database's **retention period** setting._
### Related
- [InfluxQL DBRP naming convention](/influxdb3/cloud-dedicated/admin/databases/create/#influxql-dbrp-naming-convention)
- [Migrate data from InfluxDB 1.x to Cloud Dedicated](/influxdb3/cloud-dedicated/guides/migrate-data/migrate-1x-to-cloud-dedicated/)
- [InfluxQL data retention policy mapping differences](/influxdb3/cloud-serverless/guides/prototype-evaluation/#influxql-data-retention-policy-mapping-differences)
type: string
q:
description: Defines the InfluxQL query to run.
type: string
chunked:
description: |
If true, the response is divided into chunks of size `chunk_size`.
type: boolean
chunk_size:
description: |
The number of records to include in each chunk.
This parameter is only used if `chunked=true`.
type: integer
default: 10000
epoch:
description: |
The precision for Unix (epoch) timestamps in the response.
type: string
enum:
- h
- m
- s
- ms
- u
- µ
- ns
parameters:
- $ref: '#/components/parameters/TraceSpan'
- $ref: '#/components/parameters/AuthUserV1'
@ -184,7 +430,7 @@ paths:
schema:
type: string
required: true
description: Database to query.
- in: query
name: rp
schema:


@ -65,7 +65,7 @@ paths:
schema:
type: string
required: true
description: Database to write to. If none exists, InfluxDB creates a database with a default 3-day retention policy.
- in: query
name: rp
schema:
@ -136,6 +136,188 @@ paths:
schema:
$ref: '#/components/schemas/Error'
/query:
get:
operationId: GetQueryV1
tags:
- Query
summary: Query using the InfluxDB v1 HTTP API
parameters:
- $ref: '#/components/parameters/TraceSpan'
- $ref: '#/components/parameters/AuthUserV1'
- $ref: '#/components/parameters/AuthPassV1'
- in: header
name: Accept
schema:
type: string
description: Specifies how query results should be encoded in the response. **Note:** With `application/csv`, query results include epoch timestamps instead of RFC3339 timestamps.
default: application/json
enum:
- application/json
- application/csv
- text/csv
- application/x-msgpack
- in: header
name: Accept-Encoding
description: The Accept-Encoding request HTTP header advertises which content encoding, usually a compression algorithm, the client is able to understand.
schema:
type: string
description: Indicates whether the response body should be compressed with `gzip` or left unencoded (`identity`).
default: identity
enum:
- gzip
- identity
- in: query
name: chunked
description: |
If true, the response is divided into chunks of size `chunk_size`.
schema:
type: boolean
default: false
- in: query
name: chunk_size
description: |
The number of records to include in each chunk.
This parameter is only used if `chunked=true`.
schema:
type: integer
default: 10000
- in: query
name: db
schema:
type: string
required: true
description: |
The database name for InfluxQL queries.
Required parameter that specifies the database to query via DBRP (Database Retention Policy) mapping. In Cloud Serverless, this parameter is used together with DBRP
mappings to identify which bucket to query.
The `db` parameter (optionally combined with `rp`) must have an existing DBRP mapping that points to a bucket. Without a valid DBRP mapping, queries will fail with an
authorization error.
**DBRP mapping requirements:**
- A DBRP mapping must exist before querying
- Mappings can be created automatically when writing data with the v1 API (if your token has permissions)
- Mappings can be created manually using the InfluxDB CLI or API
### Examples
- `db=mydb` - uses the default DBRP mapping for `mydb`
- `db=mydb` with `rp=weekly` - uses the DBRP mapping for `mydb/weekly`
_Note: Unlike the v1 `/write` endpoint which can auto-create buckets and mappings, the `/query` endpoint requires pre-existing DBRP mappings. The actual data is stored in and
queried from the bucket that the DBRP mapping points to._
### Related
- [Use the InfluxDB v1 query API and InfluxQL in Cloud Serverless](/influxdb3/cloud-serverless/query-data/execute-queries/v1-http/)
- [Map v1 databases and retention policies to buckets in Cloud Serverless](/influxdb3/cloud-serverless/guides/api-compatibility/v1/#map-v1-databases-and-retention-policies-to-buckets)
- [Migrate from InfluxDB 1.x to Cloud Serverless](/influxdb3/cloud-serverless/guides/migrate-data/migrate-1x-to-serverless/)
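The DBRP resolution described above can be pictured as a lookup from `db`/`rp` pairs to buckets. A client-side model of that behavior, under the stated requirements (the mapping table and bucket names are hypothetical):

```python
# Hypothetical DBRP mappings: (db, rp) -> bucket; rp=None marks the default mapping.
dbrp_mappings = {
    ("mydb", "weekly"): "mydb-weekly-bucket",
    ("mydb", None): "mydb-default-bucket",
}

def resolve_bucket(db, rp=None):
    """Resolve a v1 db/rp pair to the bucket its DBRP mapping points to.

    Without a matching mapping, the query fails (Cloud Serverless returns
    an authorization error).
    """
    try:
        return dbrp_mappings[(db, rp)]
    except KeyError:
        raise LookupError(f"no DBRP mapping for {db}/{rp or '<default>'}")

print(resolve_bucket("mydb", "weekly"))  # mydb-weekly-bucket
print(resolve_bucket("mydb"))            # mydb-default-bucket
```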
- in: query
name: pretty
description: |
If true, the JSON response is formatted in a human-readable format.
schema:
type: boolean
default: false
- in: query
name: q
description: Defines the InfluxQL query to run.
required: true
schema:
type: string
- in: query
name: rp
schema:
type: string
description: |
The retention policy name for InfluxQL queries.
Optional parameter that specifies the retention policy to use when querying data with InfluxQL. In Cloud Serverless, this parameter works with DBRP (Database Retention
Policy) mappings to identify the target bucket.
When provided together with the `db` parameter, Cloud Serverless uses the DBRP mapping to determine which bucket to query. The combination of `db` and `rp` must have an
existing DBRP mapping that points to a bucket. If no `rp` is specified, Cloud Serverless uses the default retention policy mapping for the database.
Requirements: A DBRP mapping must exist for the db/rp combination before you can query data. DBRP mappings can be created:
- Automatically when writing data with the v1 API (if your token has sufficient permissions)
- Manually using the InfluxDB CLI or API
Example: If `db=mydb` and `rp=weekly`, the query uses the DBRP mapping for `mydb/weekly` to determine which bucket to query.
_Note: The retention policy name is used only for DBRP mapping. Actual data retention is controlled by the target bucket's retention period setting, not by the retention
policy name._
### Related
- [Use the InfluxDB v1 query API and InfluxQL in Cloud Serverless](/influxdb3/cloud-serverless/query-data/execute-queries/v1-http/)
- [Map v1 databases and retention policies to buckets in Cloud Serverless](/influxdb3/cloud-serverless/guides/api-compatibility/v1/#map-v1-databases-and-retention-policies-to-buckets)
- [Migrate from InfluxDB 1.x to Cloud Serverless](/influxdb3/cloud-serverless/guides/migrate-data/migrate-1x-to-serverless/)
- name: epoch
description: |
Formats timestamps as Unix (epoch) timestamps with the specified precision
instead of RFC3339 timestamps with nanosecond precision.
in: query
schema:
type: string
enum:
- h
- m
- s
- ms
- u
- µ
- ns
responses:
'200':
description: Query results
headers:
Content-Encoding:
description: The Content-Encoding header indicates which encodings (usually compression algorithms) were applied to the response body.
schema:
type: string
description: Indicates whether the response body is encoded with `gzip` or unencoded (`identity`).
default: identity
enum:
- gzip
- identity
Trace-Id:
description: The Trace-Id header reports the request's trace ID, if one was generated.
schema:
type: string
description: Specifies the request's trace ID.
content:
application/csv:
schema:
$ref: '#/components/schemas/InfluxQLCSVResponse'
text/csv:
schema:
$ref: '#/components/schemas/InfluxQLCSVResponse'
application/json:
schema:
$ref: '#/components/schemas/InfluxQLResponse'
examples:
influxql-chunk_size_2:
value: |
{"results":[{"statement_id":0,"series":[{"name":"mymeas","columns":["time","myfield","mytag"],"values":[["2016-05-19T18:37:55Z",90,"1"],["2016-05-19T18:37:56Z",90,"1"]],"partial":true}],"partial":true}]}
{"results":[{"statement_id":0,"series":[{"name":"mymeas","columns":["time","myfield","mytag"],"values":[["2016-05-19T18:37:57Z",90,"1"],["2016-05-19T18:37:58Z",90,"1"]]}]}]}
application/x-msgpack:
schema:
type: string
format: binary
'429':
description: Token is temporarily over quota. The Retry-After header describes when to try the read again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
type: integer
format: int32
default:
description: Error processing query
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
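The `epoch` parameter documented above returns integer timestamps at the requested precision instead of RFC3339 strings. A sketch of the equivalent conversion, assuming `u` and `µ` both denote microseconds:

```python
from datetime import datetime

# Nanoseconds per unit for each supported precision.
DIVISORS = {"ns": 1, "u": 1_000, "µ": 1_000, "ms": 1_000_000,
            "s": 1_000_000_000, "m": 60 * 1_000_000_000, "h": 3600 * 1_000_000_000}

def to_epoch(rfc3339, precision):
    """Convert an RFC3339 timestamp to a Unix epoch value at the given precision."""
    dt = datetime.fromisoformat(rfc3339.replace("Z", "+00:00"))
    ns = int(dt.timestamp()) * 1_000_000_000
    return ns // DIVISORS[precision]

print(to_epoch("2016-05-19T18:37:55Z", "s"))   # 1463683075
print(to_epoch("2016-05-19T18:37:55Z", "ms"))  # 1463683075000
```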
post:
operationId: PostQueryV1
tags:
@ -147,6 +329,87 @@ paths:
text/plain:
schema:
type: string
application/json:
schema:
type: object
properties:
db:
type: string
description: |
The database name for InfluxQL queries.
Required parameter that specifies the database to query via DBRP (Database Retention Policy) mapping. In Cloud Serverless, this parameter is used together with DBRP
mappings to identify which bucket to query.
The `db` parameter (optionally combined with `rp`) must have an existing DBRP mapping that points to a bucket. Without a valid DBRP mapping, queries will fail with an
authorization error.
**DBRP mapping requirements:**
- A DBRP mapping must exist before querying
- Mappings can be created automatically when writing data with the v1 API (if your token has permissions)
- Mappings can be created manually using the InfluxDB CLI or API
### Examples
- `db=mydb` - uses the default DBRP mapping for `mydb`
- `db=mydb` with `rp=weekly` - uses the DBRP mapping for `mydb/weekly`
_Note: Unlike the v1 `/write` endpoint which can auto-create buckets and mappings, the `/query` endpoint requires pre-existing DBRP mappings. The actual data is stored in and
queried from the bucket that the DBRP mapping points to._
### Related
- [Execute InfluxQL queries using the v1 API](/influxdb3/cloud-serverless/query-data/execute-queries/influxql/api/v1-http/)
- [Map v1 databases and retention policies to buckets in Cloud Serverless](/influxdb3/cloud-serverless/guides/api-compatibility/v1/#map-v1-databases-and-retention-policies-to-buckets)
- [Manage DBRP mappings in Cloud Serverless](/influxdb3/cloud-serverless/admin/dbrp/)
rp:
description: |
The retention policy name for InfluxQL queries.
Optional parameter that specifies the retention policy to use when querying data with InfluxQL. In Cloud Serverless, this parameter works with DBRP (Database Retention
Policy) mappings to identify the target bucket.
When provided together with the `db` parameter, Cloud Serverless uses the DBRP mapping to determine which bucket to query. The combination of `db` and `rp` must have an
existing DBRP mapping that points to a bucket. If no `rp` is specified, Cloud Serverless uses the default retention policy mapping for the database.
Requirements: A DBRP mapping must exist for the db/rp combination before you can query data. DBRP mappings can be created:
- Automatically when writing data with the v1 API (if your token has sufficient permissions)
- Manually using the InfluxDB CLI or API
Example: If `db=mydb` and `rp=weekly`, the query uses the DBRP mapping for `mydb/weekly` to determine which bucket to query.
_Note: The retention policy name is used only for DBRP mapping. Actual data retention is controlled by the target bucket's retention period setting, not by the retention policy name._
### Related
- [Execute InfluxQL queries using the v1 API](/influxdb3/cloud-serverless/query-data/execute-queries/influxql/api/v1-http/)
- [Map v1 databases and retention policies to buckets in Cloud Serverless](/influxdb3/cloud-serverless/guides/api-compatibility/v1/#map-v1-databases-and-retention-policies-to-buckets)
- [Manage DBRP mappings in Cloud Serverless](/influxdb3/cloud-serverless/admin/dbrp/)
type: string
q:
description: Defines the InfluxQL query to run.
type: string
chunked:
description: |
If true, the response is divided into chunks of size `chunk_size`.
type: boolean
chunk_size:
description: |
The number of records to include in each chunk.
This parameter is only used if `chunked=true`.
type: integer
default: 10000
epoch:
description: |
The precision for Unix (epoch) timestamps in the response.
type: string
enum:
- h
- m
- s
- ms
- u
- µ
- ns
parameters:
- $ref: '#/components/parameters/TraceSpan'
- $ref: '#/components/parameters/AuthUserV1'


@ -65,7 +65,23 @@ paths:
schema:
type: string
required: true
description: |
The database to write to.
**Database targeting:** In InfluxDB Clustered, databases can be named using the `database_name/retention_policy_name` convention for InfluxQL compatibility. InfluxDB Clustered does not use DBRP mappings. The `db` and `rp` parameters are combined to construct the target database name following this naming convention.
**Auto-creation behavior:** InfluxDB Clustered requires databases to be created before writing data. The v1 `/write` API does not automatically create databases. If the specified database does not exist, the write request fails.
**Authentication:** Requires a valid API token with _write_ permissions for the target database.
### Related
- [Write data to InfluxDB Clustered](/influxdb3/clustered/write-data/)
- [Use the InfluxDB v1 API with InfluxDB Clustered](/influxdb3/clustered/guides/api-compatibility/v1/)
- [Manage databases in InfluxDB Clustered](/influxdb3/clustered/admin/databases/)
- [InfluxQL DBRP naming convention in InfluxDB Clustered](/influxdb3/clustered/admin/databases/create/#influxql-dbrp-naming-convention)
- [Migrate data from InfluxDB v1 to InfluxDB Clustered](/influxdb3/clustered/guides/migrate-data/migrate-1x-to-clustered/)
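Putting the `db` and `rp` parameters together, a v1 write request URL for Clustered might be assembled like this (the host name and helper function are placeholders, and the target database must already exist):

```python
from urllib.parse import urlencode

def v1_write_url(host, db, rp=None, precision="ns"):
    """Build a v1 /write URL. If rp is given, the server combines it with db
    as "db/rp" to target the database; the endpoint does not auto-create it."""
    params = {"db": db, "precision": precision}
    if rp:
        params["rp"] = rp
    return f"{host}/write?{urlencode(params)}"

print(v1_write_url("https://cluster-host.example.com", "mydb", "autogen"))
# https://cluster-host.example.com/write?db=mydb&precision=ns&rp=autogen
```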
- in: query
name: rp
schema:
@ -136,6 +152,141 @@ paths:
schema:
$ref: '#/components/schemas/Error'
/query:
get:
operationId: GetQueryV1
tags:
- Query
summary: Query using the InfluxDB v1 HTTP API
parameters:
- $ref: '#/components/parameters/TraceSpan'
- $ref: '#/components/parameters/AuthUserV1'
- $ref: '#/components/parameters/AuthPassV1'
- in: header
name: Accept
schema:
type: string
description: Specifies how query results should be encoded in the response. **Note:** With `application/csv`, query results include epoch timestamps instead of RFC3339 timestamps.
default: application/json
enum:
- application/json
- application/csv
- text/csv
- application/x-msgpack
- in: header
name: Accept-Encoding
description: The Accept-Encoding request HTTP header advertises which content encoding, usually a compression algorithm, the client is able to understand.
schema:
type: string
description: Indicates whether the response body should be compressed with `gzip` or left unencoded (`identity`).
default: identity
enum:
- gzip
- identity
- in: query
name: chunked
description: |
If true, the response is divided into chunks of size `chunk_size`.
schema:
type: boolean
default: false
- in: query
name: chunk_size
description: |
The number of records to include in each chunk.
This parameter is only used if `chunked=true`.
schema:
type: integer
default: 10000
- in: query
name: db
schema:
type: string
required: true
description: The database to query from.
- in: query
name: pretty
description: |
If true, the JSON response is formatted in a human-readable format.
schema:
type: boolean
default: false
- in: query
name: q
description: Defines the InfluxQL query to run.
required: true
schema:
type: string
- in: query
name: rp
schema:
type: string
description: Retention policy name.
- name: epoch
description: |
Formats timestamps as Unix (epoch) timestamps with the specified precision
instead of RFC3339 timestamps with nanosecond precision.
in: query
schema:
type: string
enum:
- h
- m
- s
- ms
- u
- µ
- ns
responses:
'200':
description: Query results
headers:
Content-Encoding:
description: The Content-Encoding header indicates which encodings (usually compression algorithms) were applied to the response body.
schema:
type: string
description: Indicates whether the response body is encoded with `gzip` or unencoded (`identity`).
default: identity
enum:
- gzip
- identity
Trace-Id:
description: The Trace-Id header reports the request's trace ID, if one was generated.
schema:
type: string
description: Specifies the request's trace ID.
content:
application/csv:
schema:
$ref: '#/components/schemas/InfluxQLCSVResponse'
text/csv:
schema:
$ref: '#/components/schemas/InfluxQLCSVResponse'
application/json:
schema:
$ref: '#/components/schemas/InfluxQLResponse'
examples:
influxql-chunk_size_2:
value: |
{"results":[{"statement_id":0,"series":[{"name":"mymeas","columns":["time","myfield","mytag"],"values":[["2016-05-19T18:37:55Z",90,"1"],["2016-05-19T18:37:56Z",90,"1"]],"partial":true}],"partial":true}]}
{"results":[{"statement_id":0,"series":[{"name":"mymeas","columns":["time","myfield","mytag"],"values":[["2016-05-19T18:37:57Z",90,"1"],["2016-05-19T18:37:58Z",90,"1"]]}]}]}
application/x-msgpack:
schema:
type: string
format: binary
'429':
description: Token is temporarily over quota. The Retry-After header describes when to try the read again.
headers:
Retry-After:
description: A non-negative decimal integer indicating the seconds to delay after the response is received.
schema:
type: integer
format: int32
default:
description: Error processing query
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
post:
operationId: PostQueryV1
tags:
@ -147,6 +298,64 @@ paths:
text/plain:
schema:
type: string
application/json:
schema:
type: object
properties:
db:
type: string
description: Database to query.
rp:
description: |
The retention policy name for InfluxQL compatibility.
Optional parameter that, when combined with the `db` parameter, forms the complete database name to query. In InfluxDB Clustered, databases can be named using the
`database_name/retention_policy_name` convention for InfluxQL compatibility.
When a request specifies both `db` and `rp`, InfluxDB Clustered combines them as `db/rp` to target the database--for example:
- If `db=mydb` and `rp=autogen`, the query targets the database named `mydb/autogen`
- If only `db=mydb` is provided (no `rp`), the query targets the database named `mydb`
Unlike InfluxDB v1 and Cloud Serverless, InfluxDB Clustered does not use DBRP mappings or separate retention policy objects. This parameter exists solely for v1 API
compatibility and database naming conventions.
Note: The retention policy name does not control data retention in InfluxDB Clustered. Data retention is determined by the database's _retention period_ setting.
### Related
- [Use the v1 query API and InfluxQL to query data in InfluxDB Clustered](/influxdb3/clustered/query-data/execute-queries/influxdb-v1-api/)
- [Use the InfluxDB v1 API with InfluxDB Clustered](/influxdb3/clustered/guides/api-compatibility/v1/)
- [Manage databases in InfluxDB Clustered](/influxdb3/clustered/admin/databases/)
- [InfluxQL DBRP naming convention in InfluxDB Clustered](/influxdb3/clustered/admin/databases/create/#influxql-dbrp-naming-convention)
- [Migrate data from InfluxDB v1 to InfluxDB Clustered](/influxdb3/clustered/guides/migrate-data/migrate-1x-to-clustered/)
type: string
q:
description: |
Defines the InfluxQL query to run.
type: string
chunked:
description: |
If true, the response is divided into chunks of size `chunk_size`.
type: boolean
chunk_size:
description: |
The number of records to include in each chunk.
This parameter is only used if `chunked=true`.
type: integer
default: 10000
epoch:
description: |
The precision for Unix (epoch) timestamps in the response.
type: string
enum:
- h
- m
- s
- ms
- u
- µ
- ns
parameters:
- $ref: '#/components/parameters/TraceSpan'
- $ref: '#/components/parameters/AuthUserV1'


@ -10,6 +10,12 @@ aliases:
- /chronograf/v1/about_the_project/release-notes-changelog/
---
## v1.10.8 {date="2025-08-15"}
### Bug Fixes
- Fix missing retention policies on the Databases page.
## v1.10.7 {date="2025-04-15"}
### Bug Fixes


@ -0,0 +1,19 @@
---
title: Usage telemetry
seotitle: InfluxDB 3 Core usage telemetry
description: >
InfluxData collects telemetry data to help improve {{< product-name >}}.
Learn what data {{< product-name >}} collects and sends to InfluxData, how it's used, and
how you can opt out.
menu:
influxdb3_core:
parent: Reference
weight: 108
influxdb3/core/tags: [telemetry, monitoring, metrics, observability]
source: /shared/influxdb3-reference/telemetry.md
---
<!--
The content of this file is located at
//SOURCE - content/shared/influxdb3-reference/telemetry.md
-->


@ -13,4 +13,4 @@ source: /shared/influxdb3-cli/config-options.md
<!-- The content of this file is at
//SOURCE - content/shared/influxdb3-cli/config-options.md
-->


@ -0,0 +1,19 @@
---
title: Usage telemetry
seotitle: InfluxDB 3 Enterprise usage telemetry
description: >
InfluxData collects telemetry data to help improve {{< product-name >}}.
Learn what data {{< product-name >}} collects and sends to InfluxData, how it's used, and
how you can opt out.
menu:
influxdb3_enterprise:
parent: Reference
weight: 108
influxdb3/enterprise/tags: [telemetry, monitoring, metrics, observability]
source: /shared/influxdb3-reference/telemetry.md
---
<!--
The content of this file is located at
//SOURCE - content/shared/influxdb3-reference/telemetry.md
-->


@ -86,7 +86,7 @@ To use {{% product-name %}} to query data from InfluxDB 3, navigate to
The _Data Explorer_ lets you explore the
schema of your database and automatically builds SQL queries by either
selecting columns in the _Schema Browser_ or by using _Natural Language_ with
the {{% product-name %}} AI integration.
For this getting started guide, use the Schema Browser to build a SQL query
that returns data from the newly written sample data set.


@ -13,7 +13,7 @@ stored. Each database can contain multiple tables.
> **If coming from InfluxDB v2, InfluxDB Cloud (TSM), or InfluxDB Cloud Serverless**,
> _database_ and _bucket_ are synonymous.
{{% show-in "enterprise" %}}
## Retention periods
A database **retention period** is the maximum age of data stored in the database.
@ -22,10 +22,9 @@ When a point's timestamp is beyond the retention period (relative to now), the
point is marked for deletion and is removed from the database the next time the
retention enforcement service runs.
The _minimum_ retention period for an InfluxDB database is 1 hour.
The _maximum_ retention period is infinite (`none`) meaning data does not expire
and will never be removed by the retention enforcement service.
{{% /show-in %}}
## Database, table, and column limits
@ -40,9 +39,11 @@ never be removed by the retention enforcement service.
**Maximum number of tables across all databases**: {{% influxdb3/limit "table" %}}
{{< product-name >}} limits the number of tables you can have across _all_
databases to {{% influxdb3/limit "table" %}}{{% show-in "enterprise" %}} by default{{% /show-in %}}.
{{% show-in "enterprise" %}}You can configure the table limit using the
[`--num-table-limit` configuration option](/influxdb3/enterprise/reference/config-options/#num-table-limit).{{% /show-in %}}
InfluxDB doesn't limit how many tables you can have in an individual database,
as long as the total across all databases is below the limit.
Having more tables affects your {{% product-name %}} installation in the
following ways:
@@ -64,7 +65,8 @@ persists data to Parquet files. Each `PUT` request incurs a monetary cost and
increases the operating cost of {{< product-name >}}.
{{% /expand %}}
{{% expand "**More work for the compactor** _(Enterprise only)_ <em style='opacity:.5;font-weight:normal;'>View more info</em>" %}}
{{% show-in "enterprise" %}}
{{% expand "**More work for the compactor** <em style='opacity:.5;font-weight:normal;'>View more info</em>" %}}
To optimize storage over time, InfluxDB 3 Enterprise has a compactor that
routinely compacts Parquet files.
@@ -72,6 +74,7 @@ With more tables and Parquet files to compact, the compactor may need to be scaled
to keep up with demand, adding to the operating cost of InfluxDB 3 Enterprise.
{{% /expand %}}
{{% /show-in %}}
{{< /expand-wrapper >}}
### Column limit
@@ -80,11 +83,17 @@ to keep up with demand, adding to the operating cost of InfluxDB 3 Enterprise.
Each row must include a time column, with the remaining columns representing
tags and fields.
As a result, a table can have one time column and up to {{% influxdb3/limit "column" -1 %}}
As a result,{{% show-in "enterprise" %}} by default,{{% /show-in %}} a table can
have one time column and up to {{% influxdb3/limit "column" -1 %}}
_combined_ field and tag columns.
If you attempt to write to a table and exceed the column limit, the write
request fails and InfluxDB returns an error.
{{% show-in "enterprise" %}}
You can configure the maximum number of columns per
table using the [`num-total-columns-per-table-limit` configuration option](/influxdb3/enterprise/reference/config-options/#num-total-columns-per-table-limit).
{{% /show-in %}}
Higher numbers of columns have the following side effects:
{{< expand-wrapper >}}

View File

@@ -130,7 +130,12 @@ database_name/retention_policy_name
## Database limit
{{% show-in "enterprise" %}}
**Default maximum number of databases**: {{% influxdb3/limit "database" %}}
{{% /show-in %}}
{{% show-in "core" %}}
**Maximum number of databases**: {{% influxdb3/limit "database" %}}
{{% /show-in %}}
_For more information about {{< product-name >}} database, table, and column limits,
see [Database, table, and column limits](/influxdb3/version/admin/databases/#database-table-and-column-limits)._
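To see how many databases currently exist against this limit, you can list them, for example with the `influxdb3` CLI (a sketch; the host URL and token value are assumptions for a local deployment):

```shell
# List existing databases to check the current count against the limit
# (sketch; host URL and AUTH_TOKEN are placeholders for your deployment)
influxdb3 show databases --host https://localhost:8181 --token AUTH_TOKEN
```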

View File

@@ -69,6 +69,59 @@ influxdb3 create distinct_cache \
<!--------------------------- END ENTERPRISE EXAMPLE -------------------------->
{{% /show-in %}}
## Use the HTTP API
To use the HTTP API to create a Distinct Value Cache, send a `POST` request to the `/api/v3/configure/distinct_cache` endpoint.
{{% api-endpoint method="POST" endpoint="/api/v3/configure/distinct_cache" api-ref="/influxdb3/version/api/v3/#operation/PostConfigureDistinctCache" %}}
{{% code-placeholders "(DATABASE|TABLE|DVC)_NAME|AUTH_TOKEN|COLUMNS|MAX_(CARDINALITY|AGE)" %}}
```bash
curl -X POST "https://localhost:8181/api/v3/configure/distinct_cache" \
--header "Authorization: Bearer AUTH_TOKEN" \
--json '{
"db": "DATABASE_NAME",
"table": "TABLE_NAME",
"name": "DVC_NAME",
"columns": ["COLUMNS"],
"max_cardinality": MAX_CARDINALITY,
"max_age": MAX_AGE
}'
```
{{% /code-placeholders %}}
### Example
```bash
curl -X POST "https://localhost:8181/api/v3/configure/distinct_cache" \
--header "Authorization: Bearer 00xoXX0xXXx0000XxxxXx0Xx0xx0" \
--json '{
"db": "example-db",
"table": "wind_data",
"name": "windDistinctCache",
"columns": ["country", "county", "city"],
"max_cardinality": 10000,
"max_age": 86400
}'
```
**Response codes:**
- `201` : Success. The distinct cache has been created.
- `204` : Not created. A distinct cache with this configuration already exists.
- `400` : Bad request.
> [!Note]
> #### API parameter differences
>
> - **Columns format**: The API uses a JSON array (`["country", "county", "city"]`)
> instead of the CLI's comma-delimited format (`country,county,city`).
> - **Maximum age format**: The API uses seconds (`86400`) instead of the CLI's
> [humantime format](https://docs.rs/humantime/latest/humantime/fn.parse_duration.html) (`24h`, `1 day`).
Replace the following:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:

View File

@@ -31,3 +31,37 @@ FROM
WHERE
country = 'Spain'
```
## Use the HTTP API
To use the HTTP API to query cached data, send a `GET` or `POST` request to the `/api/v3/query_sql` endpoint and include the [`distinct_cache()`](/influxdb3/version/reference/sql/functions/cache/#distinct_cache) function in your query.
{{% api-endpoint method="GET" endpoint="/api/v3/query_sql" api-ref="/influxdb3/version/api/v3/#operation/GetExecuteQuerySQL" %}}
{{% api-endpoint method="POST" endpoint="/api/v3/query_sql" api-ref="/influxdb3/version/api/v3/#operation/PostExecuteQuerySQL" %}}
{{% code-placeholders "DATABASE_NAME|AUTH_TOKEN|TABLE_NAME|CACHE_NAME" %}}
```bash
curl -X POST "https://localhost:8181/api/v3/query_sql" \
--header "Authorization: Bearer AUTH_TOKEN" \
--json '{
"db": "DATABASE_NAME",
"q": "SELECT * FROM distinct_cache('\''TABLE_NAME'\'', '\''CACHE_NAME'\'')",
"format": "json"
}'
```
{{% /code-placeholders %}}
### Example with WHERE clause
```bash
curl -X POST "https://localhost:8181/api/v3/query_sql" \
--header "Authorization: Bearer 00xoXX0xXXx0000XxxxXx0Xx0xx0" \
--json '{
"db": "example-db",
"q": "SELECT city FROM distinct_cache('\''wind_data'\'', '\''windDistinctCache'\'') WHERE country = '\''Spain'\''",
"format": "json"
}'
```

View File

@@ -67,3 +67,44 @@ In the examples above, replace the following:
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}:
your {{< product-name >}} {{% show-in "enterprise" %}}admin {{% /show-in %}}
authentication token
## Use the HTTP API
To use the HTTP API to query and output cache information from the system table, send a `GET` or `POST` request to the `/api/v3/query_sql` endpoint.
{{% api-endpoint method="GET" endpoint="/api/v3/query_sql" api-ref="/influxdb3/version/api/v3/#operation/GetExecuteQuerySQL" %}}
{{% api-endpoint method="POST" endpoint="/api/v3/query_sql" api-ref="/influxdb3/version/api/v3/#operation/PostExecuteQuerySQL" %}}
### Query all caches
{{% code-placeholders "DATABASE_NAME|AUTH_TOKEN" %}}
```bash
curl -X POST "https://localhost:8181/api/v3/query_sql" \
--header "Authorization: Bearer AUTH_TOKEN" \
--json '{
"db": "DATABASE_NAME",
"q": "SELECT * FROM system.distinct_caches",
"format": "json"
}'
```
{{% /code-placeholders %}}
### Query specific cache details
{{% code-placeholders "DATABASE_NAME|AUTH_TOKEN|CACHE_NAME" %}}
```bash
curl -X POST "https://localhost:8181/api/v3/query_sql" \
--header "Authorization: Bearer AUTH_TOKEN" \
--json '{
"db": "DATABASE_NAME",
"q": "SELECT * FROM system.distinct_caches WHERE name = '\''CACHE_NAME'\''",
"format": "json"
}'
```
{{% /code-placeholders %}}

View File

@@ -80,6 +80,59 @@ influxdb3 create last_cache \
<!--------------------------- END ENTERPRISE EXAMPLE -------------------------->
{{% /show-in %}}
## Use the HTTP API
To use the HTTP API to create a Last Value Cache, send a `POST` request to the `/api/v3/configure/last_cache` endpoint.
{{% api-endpoint method="POST" endpoint="/api/v3/configure/last_cache" api-ref="/influxdb3/version/api/v3/#operation/PostConfigureLastCache" %}}
{{% code-placeholders "(DATABASE|TABLE|LVC)_NAME|AUTH_TOKEN|(KEY|VALUE)_COLUMNS|COUNT|TTL" %}}
```bash
curl -X POST "https://localhost:8181/api/v3/configure/last_cache" \
--header "Authorization: Bearer AUTH_TOKEN" \
--json '{
"db": "DATABASE_NAME",
"table": "TABLE_NAME",
"name": "LVC_NAME",
"key_columns": ["KEY_COLUMNS"],
"value_columns": ["VALUE_COLUMNS"],
"count": COUNT,
"ttl": TTL
}'
```
{{% /code-placeholders %}}
### Example
```bash
curl -X POST "https://localhost:8181/api/v3/configure/last_cache" \
--header "Authorization: Bearer 00xoXX0xXXx0000XxxxXx0Xx0xx0" \
--json '{
"db": "example-db",
"table": "home",
"name": "homeLastCache",
"key_columns": ["room", "wall"],
"value_columns": ["temp", "hum", "co"],
"count": 5,
"ttl": 14400
}'
```
**Response codes:**
- `201` : Success. Last cache created.
- `400` : Bad request.
- `401` : Unauthorized.
- `404` : Cache not found.
- `409` : Cache already exists.
> [!Note]
> #### API parameter differences
> - **Column format**: The API uses JSON arrays (`["room", "wall"]`) instead of the CLI's comma-delimited format (`room,wall`).
> - **TTL format**: The API uses seconds (`14400`) instead of the CLI's [humantime format](https://docs.rs/humantime/latest/humantime/fn.parse_duration.html) (`4h`, `4 hours`).
Replace the following:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
@@ -116,4 +169,4 @@ The cache imports the distinct values from the table and starts caching them.
>
> The LVC is stored in memory, so it's important to consider the size and persistence
> of the cache. For more information, see
> [Important things to know about the Last Value Cache](/influxdb3/version/admin/last-value-cache/#important-things-to-know-about-the-last-value-cache).
> [Important things to know about the Last Value Cache](/influxdb3/version/admin/last-value-cache/#important-things-to-know-about-the-last-value-cache).

View File

@@ -23,6 +23,33 @@ influxdb3 delete last_cache \
```
{{% /code-placeholders %}}
## Use the HTTP API
To use the HTTP API to delete a Last Value Cache, send a `DELETE` request to the `/api/v3/configure/last_cache` endpoint with query parameters.
{{% api-endpoint method="DELETE" endpoint="/api/v3/configure/last_cache" api-ref="/influxdb3/core/api/v3/#operation/DeleteConfigureLastCache" %}}
{{% code-placeholders "(DATABASE|TABLE|LVC)_NAME|AUTH_TOKEN" %}}
```bash
curl -X DELETE "https://localhost:8181/api/v3/configure/last_cache?db=DATABASE_NAME&table=TABLE_NAME&name=LVC_NAME" \
--header "Authorization: Bearer AUTH_TOKEN"
```
{{% /code-placeholders %}}
### Example
```bash
curl -X DELETE "https://localhost:8181/api/v3/configure/last_cache?db=example-db&table=home&name=homeLastCache" \
--header "Authorization: Bearer 00xoXX0xXXx0000XxxxXx0Xx0xx0"
```
**Response codes:**
- `200` : Success. The last cache has been deleted.
- `400` : Bad request.
- `401` : Unauthorized.
- `404` : Cache not found.
Replace the following:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:

View File

@@ -66,3 +66,43 @@ In the examples above, replace the following:
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}:
your {{< product-name >}} {{% show-in "enterprise" %}}admin {{% /show-in %}}
authentication token
## Use the HTTP API
To use the HTTP API to query and output cache information from the system table, send a `GET` or `POST` request to the `/api/v3/query_sql` endpoint.
{{% api-endpoint method="GET" endpoint="/api/v3/query_sql" api-ref="/influxdb3/version/api/v3/#operation/GetExecuteQuerySQL" %}}
{{% api-endpoint method="POST" endpoint="/api/v3/query_sql" api-ref="/influxdb3/version/api/v3/#operation/PostExecuteQuerySQL" %}}
### Query all last value caches
{{% code-placeholders "DATABASE_NAME|AUTH_TOKEN" %}}
```bash
curl -X POST "https://localhost:8181/api/v3/query_sql" \
--header "Authorization: Bearer AUTH_TOKEN" \
--json '{
"db": "DATABASE_NAME",
"q": "SELECT * FROM system.last_caches",
"format": "json"
}'
```
{{% /code-placeholders %}}
### Query specific cache details
{{% code-placeholders "DATABASE_NAME|AUTH_TOKEN|CACHE_NAME" %}}
```bash
curl -X POST "https://localhost:8181/api/v3/query_sql" \
--header "Authorization: Bearer AUTH_TOKEN" \
--json '{
"db": "DATABASE_NAME",
"q": "SELECT * FROM system.last_caches WHERE name = '\''CACHE_NAME'\''",
"format": "json"
}'
```
{{% /code-placeholders %}}

View File

@@ -53,6 +53,10 @@ influxdb3 serve
- [tls-minimum-version](#tls-minimum-version)
- [without-auth](#without-auth)
- [disable-authz](#disable-authz)
{{% show-in "enterprise" %}}
- [num-database-limit](#num-database-limit)
- [num-table-limit](#num-table-limit)
- [num-total-columns-per-table-limit](#num-total-columns-per-table-limit){{% /show-in %}}
- [AWS](#aws)
- [aws-access-key-id](#aws-access-key-id)
- [aws-secret-access-key](#aws-secret-access-key)
@@ -204,7 +208,7 @@ This value must be different than the [`--node-id`](#node-id) value.
#### data-dir
For the `file` object store, defines the location InfluxDB 3 uses to store files locally.
For the `file` object store, defines the location {{< product-name >}} uses to store files locally.
Required when using the `file` [object store](#object-store).
| influxdb3 serve option | Environment variable |
@@ -216,7 +220,7 @@ Required when using the `file` [object store](#object-store).
{{% show-in "enterprise" %}}
#### license-email
Specifies the email address to associate with your InfluxDB 3 Enterprise license
Specifies the email address to associate with your {{< product-name >}} license
and automatically responds to the interactive email prompt when the server starts.
This option is mutually exclusive with [license-file](#license-file).
@@ -228,7 +232,7 @@ This option is mutually exclusive with [license-file](#license-file).
#### license-file
Specifies the path to a license file for InfluxDB 3 Enterprise. When provided, the license
Specifies the path to a license file for {{< product-name >}}. When provided, the license
file's contents are used instead of requesting a new license.
This option is mutually exclusive with [license-email](#license-email).
@@ -361,10 +365,44 @@ The server processes all requests without requiring tokens or authentication.
Optionally disable authz by passing in a comma separated list of resources.
Valid values are `health`, `ping`, and `metrics`.
| influxdb3 serve option | Environment variable |
| :--------------------- | :----------------------- |
| `--disable-authz` | `INFLUXDB3_DISABLE_AUTHZ`|
| influxdb3 serve option | Environment variable |
| :--------------------- | :------------------------ |
| `--disable-authz` | `INFLUXDB3_DISABLE_AUTHZ` |
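For example, a server can exempt the health and ping endpoints from authorization while keeping token auth for everything else (a minimal sketch; the node ID, object store, and data directory values are illustrative):

```shell
# Serve with token auth enabled, but allow unauthenticated /health and /ping
# (illustrative node ID, object store, and data directory)
influxdb3 serve \
  --node-id node0 \
  --object-store file \
  --data-dir ~/.influxdb3/data \
  --disable-authz health,ping
```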
{{% show-in "enterprise" %}}
---
#### num-database-limit
Limits the total number of active databases.
Default is {{% influxdb3/limit "database" %}}.
| influxdb3 serve option | Environment variable |
| :---------------------- | :---------------------------------------- |
| `--num-database-limit` | `INFLUXDB3_ENTERPRISE_NUM_DATABASE_LIMIT` |
---
#### num-table-limit
Limits the total number of active tables across all databases.
Default is {{% influxdb3/limit "table" %}}.
| influxdb3 serve option | Environment variable |
| :--------------------- | :------------------------------------- |
| `--num-table-limit` | `INFLUXDB3_ENTERPRISE_NUM_TABLE_LIMIT` |
---
#### num-total-columns-per-table-limit
Limits the total number of columns per table.
Default is {{% influxdb3/limit "column" %}}.
| influxdb3 serve option | Environment variable |
| :------------------------------------ | :------------------------------------------------------- |
| `--num-total-columns-per-table-limit` | `INFLUXDB3_ENTERPRISE_NUM_TOTAL_COLUMNS_PER_TABLE_LIMIT` |
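These limit options can be combined in a single `influxdb3 serve` invocation; the following is a sketch with illustrative values (the node and cluster IDs, object store, data directory, and limit values are assumptions, not recommendations):

```shell
# Start an Enterprise server with custom database, table, and column limits
# (illustrative values; tune for your workload)
influxdb3 serve \
  --node-id node0 \
  --cluster-id cluster0 \
  --object-store file \
  --data-dir ~/.influxdb3/data \
  --num-database-limit 10 \
  --num-table-limit 5000 \
  --num-total-columns-per-table-limit 300
```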
{{% /show-in %}}
---
### AWS

View File

@@ -0,0 +1,93 @@
InfluxData collects information, or _telemetry data_, about the usage of {{% product-name %}} to help improve the product.
Learn what data {{% product-name %}} collects and sends to InfluxData, how it's used, and
how you can opt out.
## What data is collected
{{< product-name >}} collects the following telemetry data:
### System metrics
- **CPU utilization**: Process-specific CPU usage
- **Memory usage**: Process memory consumption in MB
- **Cores**: Number of CPU cores in use
- **OS**: Operating system information
- **Version**: {{< product-name >}} version
- **Uptime**: Server uptime in seconds
### Write metrics
- **Write requests**: Number of write operations
- **Write lines**: Number of lines written
- **Write bytes**: Amount of data written in MB
### Query metrics
- **Query requests**: Number of query operations
### Storage metrics
- **Parquet file count**: Number of Parquet files
- **Parquet file size**: Total size of Parquet files in MB
- **Parquet row count**: Total number of rows in Parquet files
### Processing engine metrics
- **WAL triggers**: Write-Ahead Log trigger counts
- **Schedule triggers**: Scheduled processing trigger counts
- **Request triggers**: Request-based processing trigger counts
### Instance information
- **Instance ID**: Unique identifier for the server instance
- **Cluster UUID**: Unique identifier for the cluster
- **Storage type**: Type of object storage being used
{{% show-in "core" %}}
- **Product type**: "Core"
{{% /show-in %}}
{{% show-in "enterprise" %}}
- **Product type**: "Enterprise"
{{% /show-in %}}
## Collection frequency
- **System metrics** (CPU, memory): Collected every 60 seconds
- **Write and query metrics**: Collected per operation, rolled up every 60 seconds
- **Storage and processing engine metrics**: Collected at snapshot time (when available)
- **Instance information**: Static data collected once
Telemetry data is transmitted once per hour.
## Disable telemetry
To opt out of collecting and sending {{% product-name %}} telemetry data,
include the `--disable-telemetry-upload` flag or set the `INFLUXDB3_TELEMETRY_DISABLE_UPLOAD` environment variable
when starting {{% product-name %}}.
**Default:** `false`
| influxdb3 flag | Environment variable |
| :------------- | :------------------- |
| `--disable-telemetry-upload` | `INFLUXDB3_TELEMETRY_DISABLE_UPLOAD` |
#### Command line flag
```sh
influxdb3 serve --disable-telemetry-upload
```
#### Environment variable
```sh
export INFLUXDB3_TELEMETRY_DISABLE_UPLOAD=true
```
When telemetry is disabled, no usage data is collected or transmitted.
## Data handling
The telemetry data is used by InfluxData to understand product usage patterns, improve product performance and reliability, prioritize feature development, and identify and resolve issues. No personally identifiable information (PII) is collected.
## Privacy and security
All telemetry data is transmitted securely via HTTPS. No database contents, queries, or user data are collected; only operational metrics and system information are transmitted.
All data collection follows InfluxData's privacy policy.

View File

@@ -157,7 +157,7 @@ chronograf:
versions: [v1]
latest: v1.10
latest_patches:
v1: 1.10.7
v1: 1.10.8
ai_sample_questions:
- How do I configure Chronograf for InfluxDB v1?
- How do I create a dashboard in Chronograf?