Merge pull request #6140 from influxdata/monolith-gs-restructure

Restructure Core/Enterprise getting started docs
jts-dedicated-product-updates
Jason Stirnaman 2025-06-24 17:53:45 -05:00 committed by GitHub
commit a85eb3286e
64 changed files with 3177 additions and 2343 deletions

.gitignore vendored
View File

@ -15,7 +15,7 @@ node_modules
!telegraf-build/templates
!telegraf-build/scripts
!telegraf-build/README.md
/cypress/downloads
/cypress/downloads/*
/cypress/screenshots/*
/cypress/videos/*
test-results.xml
@ -25,4 +25,4 @@ test-results.xml
.idea
**/config.toml
package-lock.json
tmp
tmp

View File

@ -250,8 +250,8 @@ spec:
#### Release artifacts
- [app-instance-schema.json](/downloads/clustered-release-artifacts/20241022-1346953/app-instance-schema.json)
- [example-customer.yml](/downloads/clustered-release-artifacts/20241022-1346953/example-customer.yml)
- [app-instance-schema.json](/downloads/clustered-release-artifacts/20241024-1354148/app-instance-schema.json)
- [example-customer.yml](/downloads/clustered-release-artifacts/20241024-1354148/example-customer.yml)
- [InfluxDB Clustered README EULA July 2024.txt](/downloads/clustered-release-artifacts/InfluxDB%20Clustered%20README%20EULA%20July%202024.txt)
### Known Bugs
@ -804,13 +804,13 @@ version of `influxctl` prior to v2.8.0.
```yaml
spec:
package:
image: us-docker.pkg.dev/influxdb2-artifacts/clustered/influxdb:20240325-920726
image: us-docker.pkg.dev/influxdb2-artifacts/clustered/influxdb:20240326-922145
```
#### Release artifacts
- [app-instance-schema.json](/downloads/clustered-release-artifacts/20240325-920726/app-instance-schema.json)
- [example-customer.yml](/downloads/clustered-release-artifacts/20240325-920726/example-customer.yml)
- [app-instance-schema.json](/downloads/clustered-release-artifacts/20240326-922145/app-instance-schema.json)
- [example-customer.yml](/downloads/clustered-release-artifacts/20240326-922145/example-customer.yml)
- [InfluxDB Clustered README EULA July 2024.txt](/downloads/clustered-release-artifacts/InfluxDB%20Clustered%20README%20EULA%20July%202024.txt)
### Highlights
@ -1424,12 +1424,6 @@ spec:
image: us-docker.pkg.dev/influxdb2-artifacts/clustered/influxdb:20230915-630658
```
#### Release artifacts
- [app-instance-schema.json](/downloads/clustered-release-artifacts/20230915-630658/app-instance-schema.json)
- [example-customer.yml](/downloads/clustered-release-artifacts/20230915-630658/example-customer.yml)
- [InfluxDB Clustered README EULA July 2024.txt](/downloads/clustered-release-artifacts/InfluxDB%20Clustered%20README%20EULA%20July%202024.txt)
### Highlights
#### Persistent volume fixes
@ -1456,12 +1450,6 @@ spec:
image: us-docker.pkg.dev/influxdb2-artifacts/clustered/influxdb:20230914-628600
```
#### Release artifacts
- [app-instance-schema.json](/downloads/clustered-release-artifacts/20230914-628600/app-instance-schema.json)
- [example-customer.yml](/downloads/clustered-release-artifacts/20230914-628600/example-customer.yml)
- [InfluxDB Clustered README EULA July 2024.txt](/downloads/clustered-release-artifacts/InfluxDB%20Clustered%20README%20EULA%20July%202024.txt)
### Highlights
#### Updated Azure AD documentation
@ -1497,12 +1485,6 @@ spec:
image: us-docker.pkg.dev/influxdb2-artifacts/clustered/influxdb:20230912-619813
```
#### Release artifacts
- [app-instance-schema.json](/downloads/clustered-release-artifacts/20230912-619813/app-instance-schema.json)
- [example-customer.yml](/downloads/clustered-release-artifacts/20230912-619813/example-customer.yml)
- [InfluxDB Clustered README EULA July 2024.txt](/downloads/clustered-release-artifacts/InfluxDB%20Clustered%20README%20EULA%20July%202024.txt)
### Highlights
#### Custom CA certificates {note="(Optional)"}
@ -1573,12 +1555,6 @@ spec:
image: us-docker.pkg.dev/influxdb2-artifacts/clustered/influxdb:20230911-604209
```
#### Release artifacts
- [app-instance-schema.json](/downloads/clustered-release-artifacts/20230911-604209/app-instance-schema.json)
- [example-customer.yml](/downloads/clustered-release-artifacts/20230911-604209/example-customer.yml)
- [InfluxDB Clustered README EULA July 2024.txt](/downloads/clustered-release-artifacts/InfluxDB%20Clustered%20README%20EULA%20July%202024.txt)
### Highlights
This release contains a breaking change to the monitoring subsystem that
@ -1628,12 +1604,6 @@ spec:
image: us-docker.pkg.dev/influxdb2-artifacts/clustered/influxdb:20230908-600131
```
#### Release artifacts
- [app-instance-schema.json](/downloads/clustered-release-artifacts/20230908-600131/app-instance-schema.json)
- [example-customer.yml](/downloads/clustered-release-artifacts/20230908-600131/example-customer.yml)
- [InfluxDB Clustered README EULA July 2024.txt](/downloads/clustered-release-artifacts/InfluxDB%20Clustered%20README%20EULA%20July%202024.txt)
### Highlights
#### Default storage class
@ -1661,12 +1631,6 @@ spec:
image: us-docker.pkg.dev/influxdb2-artifacts/clustered/influxdb:20230907-597343
```
#### Release artifacts
- [app-instance-schema.json](/downloads/clustered-release-artifacts/20230907-597343/app-instance-schema.json)
- [example-customer.yml](/downloads/clustered-release-artifacts/20230907-597343/example-customer.yml)
- [InfluxDB Clustered README EULA July 2024.txt](/downloads/clustered-release-artifacts/InfluxDB%20Clustered%20README%20EULA%20July%202024.txt)
### Upgrade Notes
This release requires a new configuration block:

View File

@ -0,0 +1,27 @@
---
title: Process data in {{% product-name %}}
seotitle: Process data | Get started with {{% product-name %}}
description: >
Learn how to use the {{% product-name %}} Processing Engine to process data and
perform various tasks like downsampling, alerting, forecasting, data
normalization, and more.
menu:
influxdb3_core:
name: Process data
identifier: gs-process-data
parent: Get started
weight: 104
aliases:
- /influxdb3/core/get-started/process-data/
- /influxdb3/core/get-started/processing-engine/
related:
- /influxdb3/core/plugins/
- /influxdb3/core/reference/cli/influxdb3/create/plugin/
- /influxdb3/core/reference/cli/influxdb3/create/trigger/
source: /shared/influxdb3-get-started/processing-engine.md
---
<!--
The content of this page is at
// SOURCE content/shared/influxdb3-get-started/processing-engine.md
-->

View File

@ -0,0 +1,24 @@
---
title: Query data in {{% product-name %}}
seotitle: Query data | Get started with {{% product-name %}}
description: >
Learn how to get started querying data in {{% product-name %}} using native
SQL or InfluxQL with the `influxdb3` CLI and other tools.
menu:
influxdb3_core:
name: Query data
identifier: gs-query-data
parent: Get started
weight: 103
related:
- /influxdb3/core/query-data/
- /influxdb3/core/reference/sql/
- https://datafusion.apache.org/user-guide/sql/index.html, Apache DataFusion SQL reference
- /influxdb3/core/reference/influxql/
source: /shared/influxdb3-get-started/query.md
---
<!--
The content of this page is at
// SOURCE content/shared/influxdb3-get-started/query.md
-->

View File

@ -0,0 +1,21 @@
---
title: Set up {{% product-name %}}
seotitle: Set up InfluxDB | Get started with {{% product-name %}}
description: >
Install, configure, and set up authorization for {{% product-name %}}.
menu:
influxdb3_core:
name: Set up Core
parent: Get started
weight: 3
related:
- /influxdb3/core/install/
- /influxdb3/core/admin/tokens/
- /influxdb3/core/reference/config-options/
source: /shared/influxdb3-get-started/setup.md
---
<!--
The content of this page is at
// SOURCE content/shared/influxdb3-get-started/setup.md
-->

View File

@ -0,0 +1,22 @@
---
title: Write data to {{% product-name %}}
seotitle: Write data | Get started with {{% product-name %}}
description: >
Learn how to write time series data to {{% product-name %}} using the
`influxdb3` CLI and _line protocol_, an efficient, human-readable write syntax.
menu:
influxdb3_core:
name: Write data
identifier: gs-write-data
parent: Get started
weight: 102
related:
- /influxdb3/core/write-data/
- /influxdb3/core/reference/line-protocol/
source: /shared/influxdb3-get-started/write.md
---
<!--
The content of this page is at
// SOURCE content/shared/influxdb3-get-started/write.md
-->

View File

@ -1,16 +0,0 @@
---
title: influxdb3 create file_index
description: >
The `influxdb3 create file_index` command creates a new file index for a
database or table.
menu:
influxdb3_core:
parent: influxdb3 create
name: influxdb3 create file_index
weight: 400
source: /shared/influxdb3-cli/create/file_index.md
---
<!--
The content of this file is at content/shared/influxdb3-cli/create/file_index.md
-->

View File

@ -1,16 +0,0 @@
---
title: influxdb3 delete file_index
description: >
The `influxdb3 delete file_index` command deletes a file index for a
database or table.
menu:
influxdb3_core:
parent: influxdb3 delete
name: influxdb3 delete file_index
weight: 400
source: /shared/influxdb3-cli/delete/file_index.md
---
<!--
The content of this file is at content/shared/influxdb3-cli/delete/file_index.md
-->

View File

@ -1,21 +1,21 @@
---
title: Use the HTTP API and client libraries to write data
title: Use InfluxDB client libraries to write data
description: >
Use the `/api/v3/write_lp` HTTP API endpoint and InfluxDB API clients to write points as line protocol data to {{% product-name %}}.
Use InfluxDB API clients to write points as line protocol data to {{% product-name %}}.
menu:
influxdb3_core:
name: Use the API and client libraries
name: Use client libraries
parent: Write data
identifier: write-api-client-libs
identifier: write-client-libs
weight: 100
aliases:
- /influxdb3/core/write-data/client-libraries/
- /influxdb3/core/write-data/api-client-libraries/
related:
- /influxdb3/core/reference/syntax/line-protocol/
- /influxdb3/core/get-started/write/
- /influxdb3/core/reference/client-libraries/v3/
- /influxdb3/core/api/v3/#operation/PostWriteLP, /api/v3/write_lp endpoint
source: /shared/influxdb3-write-guides/api-client-libraries.md
source: /shared/influxdb3-write-guides/client-libraries.md
---
<!--

View File

@ -0,0 +1,22 @@
---
title: Use the InfluxDB HTTP API to write data
description: >
Use the `/api/v3/write_lp`, `/api/v2/write`, or `/write` HTTP API endpoints
to write data to {{% product-name %}}.
menu:
influxdb3_core:
name: Use the HTTP API
parent: Write data
identifier: write-http-api
weight: 100
related:
- /influxdb3/core/reference/syntax/line-protocol/
- /influxdb3/core/get-started/write/
- /influxdb3/core/api/v3/#operation/PostWriteLP, /api/v3/write_lp endpoint
source: /shared/influxdb3-write-guides/http-api/_index.md
---
<!--
The content for this page is at
// SOURCE content/shared/influxdb3-write-guides/http-api/_index.md
-->

View File

@ -6,21 +6,21 @@ description: >
menu:
influxdb3_core:
name: Use v1 and v2 compatibility APIs
parent: Write data
identifier: write-compatibility-client-libs
weight: 101
parent: write-http-api
weight: 202
aliases:
- /influxdb3/core/write-data/client-libraries/
- /influxdb3/core/write-data/compatibility-apis/
related:
- /influxdb3/core/reference/syntax/line-protocol/
- /influxdb3/core/get-started/write/
- /influxdb3/core/reference/client-libraries/v2/
- /influxdb3/core/api/v3/#operation/PostV2Write, /api/v2/write (v2-compatible) endpoint
- /influxdb3/core/api/v3/#operation/PostV1Write, /write (v1-compatible) endpoint
source: /shared/influxdb3-write-guides/compatibility-apis.md
source: /shared/influxdb3-write-guides/http-api/compatibility-apis.md
---
<!--
The content for this page is at
// SOURCE content/shared/influxdb3-write-guides/compatibility-apis.md
// SOURCE content/shared/influxdb3-write-guides/http-api/compatibility-apis.md
-->

View File

@ -0,0 +1,20 @@
---
title: Use the v3 write API to write data
description: >
Use the `/api/v3/write_lp` HTTP API endpoint to write data to {{% product-name %}}.
menu:
influxdb3_core:
name: Use the v3 write API
parent: write-http-api
weight: 201
related:
- /influxdb3/core/reference/syntax/line-protocol/
- /influxdb3/core/get-started/write/
- /influxdb3/core/api/v3/#operation/PostWriteLP, /api/v3/write_lp endpoint
source: /shared/influxdb3-write-guides/http-api/v3-write-lp.md
---
<!--
The content for this page is at
// SOURCE content/shared/influxdb3-write-guides/http-api/v3-write-lp.md
-->

View File

@ -0,0 +1,51 @@
---
title: Manage file indexes
seotitle: Manage file indexes in {{< product-name >}}
description: >
Customize the indexing strategy of a database or table in {{% product-name %}}
to optimize the performance of single-series queries.
menu:
influxdb3_enterprise:
parent: Administer InfluxDB
weight: 106
influxdb3/enterprise/tags: [indexing]
---
{{% product-name %}} lets you customize how your data is indexed to help
optimize query performance for your specific workload, especially workloads that
include single-series queries. Indexes help the InfluxDB query engine quickly
identify the physical location of files that contain the queried data.
By default, InfluxDB indexes on the primary key—`time` and tag columns. However,
if your schema includes tags that you don't specifically use when querying, you
can define a custom indexing strategy to only index on `time` and columns
important to your query workload.
For example, if your schema includes the following columns:
- country
- state_province
- county
- city
- postal_code
and your query workload only filters by country, state or province, and city,
you can create a custom file indexing strategy that indexes only on `time` and
those specific columns. This makes your index more efficient and improves the
performance of your single-series queries.
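For example, a sketch using the [`influxdb3 create file_index` command](#create-a-custom-file-index)
covered later in this section (the database and table names here are hypothetical):
```bash
influxdb3 create file_index \
--token AUTH_TOKEN \
--database weather-db \
--table weather \
country,state_province,city
```
This strategy indexes only `time`, `country`, `state_province`, and `city` for
the `weather` table.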
> [!Note]
> File indexes can use any string column, including both tags and fields.
- [Indexing life cycle](#indexing-life-cycle)
- [Create a custom file index](#create-a-custom-file-index)
- [Delete a custom file index](#delete-a-custom-file-index)
## Indexing life cycle
{{% product-name %}} builds indexes as it compacts data. Compaction is the
process that organizes and optimizes Parquet files in storage and occurs in
multiple phases or generations. Generation 1 (gen1) data is uncompacted and
not indexed. Generation 2 (gen2) data and beyond is indexed.
{{< children hlevel="h2" >}}

View File

@ -0,0 +1,62 @@
---
title: Create a custom file index
seotitle: Create a custom file index in {{< product-name >}}
description: >
Use the [`influxdb3 create file_index` command](/influxdb3/enterprise/reference/cli/influxdb3/create/file_index/)
to create a custom file indexing strategy for a database or a table.
menu:
influxdb3_enterprise:
parent: Manage file indexes
weight: 106
influxdb3/enterprise/tags: [indexing]
related:
- /influxdb3/enterprise/reference/cli/influxdb3/create/file_index/
list_code_example: |
<!--pytest.mark.skip-->
```bash
influxdb3 create file_index \
--database example-db \
--token 00xoXX0xXXx0000XxxxXx0Xx0xx0 \
--table wind_data \
country,city
```
---
Use the [`influxdb3 create file_index` command](/influxdb3/enterprise/reference/cli/influxdb3/create/file_index/)
to create a custom file indexing strategy for a database or table.
Provide the following:
- **Token** (`--token`): _({{< req >}})_ Your {{% token-link "admin" %}}.
You can also use the `INFLUXDB3_AUTH_TOKEN` environment variable to specify
the token.
- **Database** (`-d`, `--database`): _({{< req >}})_ The name of the database to
apply the index to. You can also use the `INFLUXDB3_DATABASE_NAME`
environment variable to specify the database.
- **Table** (`-t`, `--table`): The name of the table to apply the index to.
If no table is specified, the indexing strategy applies to all tables in the
specified database.
- **Columns**: _({{< req >}})_ A comma-separated list of string columns to
index on. These are typically tag columns but can also be string fields.
{{% code-placeholders "AUTH_TOKEN|DATABASE|TABLE|COLUMNS" %}}
<!--pytest.mark.skip-->
```bash
influxdb3 create file_index \
--token AUTH_TOKEN \
--database DATABASE_NAME \
--table TABLE_NAME \
COLUMNS
```
{{% /code-placeholders %}}
Replace the following placeholders with your values:
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}:
your {{% token-link "admin" %}}
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
the name of the database to create the file index in
- {{% code-placeholder-key %}}`TABLE_NAME`{{% /code-placeholder-key %}}:
the name of the table to create the file index in
- {{% code-placeholder-key %}}`COLUMNS`{{% /code-placeholder-key %}}:
a comma-separated list of columns to index on--for example: `host,application`

View File

@ -0,0 +1,58 @@
---
title: Delete a custom file index
seotitle: Delete a custom file index in {{< product-name >}}
description: >
Use the [`influxdb3 delete file_index` command](/influxdb3/enterprise/reference/cli/influxdb3/delete/file_index/)
to delete a custom file indexing strategy from a database or a table and revert
to the default indexing strategy.
menu:
influxdb3_enterprise:
parent: Manage file indexes
weight: 106
influxdb3/enterprise/tags: [indexing]
related:
- /influxdb3/enterprise/reference/cli/influxdb3/delete/file_index/
list_code_example: |
<!--pytest.mark.skip-->
```bash
influxdb3 delete file_index \
--database example-db \
--token 00xoXX0xXXx0000XxxxXx0Xx0xx0 \
--table wind_data
```
---
Use the [`influxdb3 delete file_index` command](/influxdb3/enterprise/reference/cli/influxdb3/delete/file_index/)
to delete a custom file indexing strategy from a database or a table and revert
to the default indexing strategy.
Provide the following:
- **Token** (`--token`): _({{< req >}})_ Your {{% token-link "admin" %}}.
You can also use the `INFLUXDB3_AUTH_TOKEN` environment variable to specify
the token.
- **Database** (`-d`, `--database`): _({{< req >}})_ The name of the database to
remove the custom index from. You can also use the `INFLUXDB3_DATABASE_NAME`
environment variable to specify the database.
- **Table** (`-t`, `--table`): The name of the table to remove the custom index from.
If no table is specified, the custom indexing strategy is removed from all
tables in the specified database.
{{% code-placeholders "AUTH_TOKEN|DATABASE|TABLE|COLUMNS" %}}
```bash
influxdb3 delete file_index \
--token AUTH_TOKEN \
--database DATABASE_NAME \
--table TABLE_NAME
```
{{% /code-placeholders %}}
Replace the following placeholders with your values:
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}:
your {{% token-link "admin" %}}
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
the name of the database to remove the custom file index from
- {{% code-placeholder-key %}}`TABLE_NAME`{{% /code-placeholder-key %}}:
the name of the table to remove the custom file index from

View File

@ -101,7 +101,7 @@ The license file is a JWT file that contains the license information.
> use one of the methods to [skip the email prompt](#skip-the-email-prompt).
> This ensures that the container can generate the license file after you
> verify your email address.
> See the [Docker Compose example](?t=Docker+compose#activate-a-trial-or-home-license-with-docker).
> See the [Docker Compose example](?t=Docker+compose#start-with-license-email-and-compose).
#### Skip the email prompt
@ -186,7 +186,7 @@ existing license if it's still valid.
{{% code-tabs %}}
[influxdb3 options](#)
[Environment variables](#)
[Docker compose](#example-activate-trial-or-home-with-compose)
[Docker compose](#start-with-license-email-and-compose)
{{% /code-tabs %}}
{{% code-tab-content %}}
<!------------------------ BEGIN INFLUXDB3 CLI OPTIONS ------------------------>
@ -215,6 +215,7 @@ influxdb3 serve \
{{% /code-tab-content %}}
{{% code-tab-content %}}
<!------------------------ BEGIN DOCKER COMPOSE ------------------------>
{{% code-placeholders "${EMAIL_ADDRESS}" %}}
```yaml
# compose.yaml
name: data-crunching-stack
@ -235,7 +236,8 @@ services:
- --object-store=file
- --data-dir=/var/lib/influxdb3
- --plugin-dir=/var/lib/influxdb3/plugins
- --license-email=INFLUXDB3_LICENSE_EMAIL
environment:
- INFLUXDB3_LICENSE_EMAIL=${EMAIL_ADDRESS}
volumes:
- type: bind
source: ~/.influxdb3/data
@ -244,6 +246,9 @@ services:
source: ~/.influxdb3/plugins
target: /var/lib/influxdb3/plugins
```
{{% /code-placeholders %}}
Replace {{% code-placeholder-key %}}`${EMAIL_ADDRESS}`{{% /code-placeholder-key %}} with your email address
or a variable from your Compose `.env` file.
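For example, a minimal `.env` file in the same directory as `compose.yaml` might
contain the following (the address shown is a placeholder):
```bash
# .env
EMAIL_ADDRESS=your.name@example.com
```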
<!------------------------- END DOCKER COMPOSE ------------------------->
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}

View File

@ -0,0 +1,528 @@
---
title: Create a multi-node cluster
seotitle: Create a multi-node InfluxDB 3 Enterprise cluster
description: >
Create a multi-node InfluxDB 3 Enterprise cluster for high availability,
performance, read replicas, and more to meet the specific needs of your use case.
menu:
influxdb3_enterprise:
name: Create a multi-node cluster
parent: Get started
identifier: gs-multi-node-cluster
weight: 102
influxdb3/enterprise/tags: [cluster, multi-node, multi-server]
---
Create a multi-node {{% product-name %}} cluster for high availability, performance, and workload isolation.
Configure nodes with specific _modes_ (ingest, query, process, compact) to optimize for your use case.
## Prerequisites
- Shared object store
- Network connectivity between nodes
## Basic multi-node setup
<!-- pytest.mark.skip -->
```bash
## NODE 1 handles writes, queries, and compaction
# Example variables
# node-id: 'host01'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve \
--node-id host01 \
--cluster-id cluster01 \
--mode ingest,query,compact \
--object-store s3 \
--bucket influxdb-3-enterprise-storage \
--http-bind {{< influxdb/host >}} \
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```
<!-- pytest.mark.skip -->
```bash
## NODE 2 handles writes and queries
# Example variables
# node-id: 'host02'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve \
--node-id host02 \
--cluster-id cluster01 \
--mode ingest,query \
--object-store s3 \
--bucket influxdb-3-enterprise-storage \
--http-bind localhost:8282 \
--aws-access-key-id AWS_ACCESS_KEY_ID \
--aws-secret-access-key AWS_SECRET_ACCESS_KEY
```
Learn how to set up a multi-node cluster for different use cases, including high availability, read replicas, processing data, and workload isolation.
- [Create an object store](#create-an-object-store)
- [Connect to your object store](#connect-to-your-object-store)
- [Server modes](#server-modes)
- [Cluster configuration examples](#cluster-configuration-examples)
- [Writing and querying in multi-node clusters](#writing-and-querying-in-multi-node-clusters)
## Create an object store
With the {{% product-name %}} diskless architecture, all data is stored in a common object store.
In a multi-node cluster, you connect all nodes to the same object store.
Enterprise supports the following object stores:
- AWS S3 (or S3-compatible)
- Azure Blob Storage
- Google Cloud Storage
> [!Note]
> Refer to your object storage provider's documentation for
> setting up an object store.
## Connect to your object store
When starting your {{% product-name %}} node, include provider-specific options for connecting to your object store--for example:
{{< tabs-wrapper >}}
{{% tabs %}}
[S3 or S3-compatible](#)
[Azure Blob Storage](#)
[Google Cloud Storage](#)
{{% /tabs %}}
{{% tab-content %}}
<!---------------------------------- BEGIN S3 --------------------------------->
To use an AWS S3 or S3-compatible object store, provide the following options
with your `influxdb3 serve` command:
- `--object-store`: `s3`
- `--bucket`: Your AWS S3 bucket name
- `--aws-access-key-id`: Your AWS access key ID
_(can also be defined using the `AWS_ACCESS_KEY_ID` environment variable)_
- `--aws-secret-access-key`: Your AWS secret access key
_(can also be defined using the `AWS_SECRET_ACCESS_KEY` environment variable)_
{{% code-placeholders "AWS_(BUCKET_NAME|ACCESS_KEY_ID|SECRET_ACCESS_KEY)" %}}
<!-- pytest.mark.skip -->
```bash
influxdb3 serve \
# ...
--object-store s3 \
--bucket AWS_BUCKET_NAME \
--aws-access-key-id AWS_ACCESS_KEY_ID \
--aws-secret-access-key AWS_SECRET_ACCESS_KEY
```
{{% /code-placeholders %}}
_For information about other S3-specific settings, see
[Configuration options - AWS](/influxdb3/enterprise/reference/config-options/#aws)._
<!----------------------------------- END S3 ---------------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!-------------------------- BEGIN AZURE BLOB STORAGE ------------------------->
To use Azure Blob Storage as your object store, provide the following options
with your `influxdb3 serve` command:
- `--object-store`: `azure`
- `--bucket`: Your Azure Blob Storage container name
- `--azure-storage-account`: Your Azure Blob Storage storage account name
_(can also be defined using the `AZURE_STORAGE_ACCOUNT` environment variable)_
- `--azure-storage-access-key`: Your Azure Blob Storage access key
_(can also be defined using the `AZURE_STORAGE_ACCESS_KEY` environment variable)_
{{% code-placeholders "AZURE_(CONTAINER_NAME|STORAGE_ACCOUNT|STORAGE_ACCESS_KEY)" %}}
<!-- pytest.mark.skip -->
```bash
influxdb3 serve \
# ...
--object-store azure \
--bucket AZURE_CONTAINER_NAME \
--azure-storage-account AZURE_STORAGE_ACCOUNT \
--azure-storage-access-key AZURE_STORAGE_ACCESS_KEY
```
{{% /code-placeholders %}}
<!--------------------------- END AZURE BLOB STORAGE -------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!------------------------- BEGIN GOOGLE CLOUD STORAGE ------------------------>
To use Google Cloud Storage as your object store, provide the following options
with your `influxdb3 serve` command:
- `--object-store`: `google`
- `--bucket`: Your Google Cloud Storage bucket name
- `--google-service-account`: The path to your Google credentials JSON file
_(can also be defined using the `GOOGLE_SERVICE_ACCOUNT` environment variable)_
{{% code-placeholders "GOOGLE_(BUCKET_NAME|SERVICE_ACCOUNT)" %}}
<!-- pytest.mark.skip -->
```bash
influxdb3 serve \
# ...
--object-store google \
--bucket GOOGLE_BUCKET_NAME \
--google-service-account GOOGLE_SERVICE_ACCOUNT
```
{{% /code-placeholders %}}
<!-------------------------- END GOOGLE CLOUD STORAGE ------------------------->
{{% /tab-content %}}
{{< /tabs-wrapper >}}
## Server modes
{{% product-name %}} _modes_ determine what subprocesses the Enterprise node runs.
These subprocesses fulfill required tasks including data ingestion, query
processing, compaction, and running the processing engine.
The `influxdb3 serve --mode` option defines what subprocesses a node runs.
Each node can run in one _or more_ of the following modes:
- **all** _(default)_: Runs all necessary subprocesses.
- **ingest**: Runs the data ingestion subprocess to handle writes.
- **query**: Runs the query processing subprocess to handle queries.
- **process**: Runs the processing engine subprocess to trigger and execute plugins.
- **compact**: Runs the compactor subprocess to optimize data in object storage.
> [!Important]
> Only _one_ node in your cluster can run in `compact` mode.
### Server mode examples
#### Configure a node to only handle write requests
<!-- pytest.mark.skip -->
```bash
influxdb3 serve \
# ...
--mode ingest
```
#### Configure a node to only run the Compactor
<!-- pytest.mark.skip -->
```bash
influxdb3 serve \
# ...
--mode compact
```
#### Configure a node to handle queries and run the processing engine
<!-- pytest.mark.skip -->
```bash
influxdb3 serve \
# ...
--mode query,process
```
## Cluster configuration examples
- [High availability cluster](#high-availability-cluster)
- [High availability with a dedicated Compactor](#high-availability-with-a-dedicated-compactor)
- [High availability with read replicas and a dedicated Compactor](#high-availability-with-read-replicas-and-a-dedicated-compactor)
### High availability cluster
A minimum of two nodes are required for basic high availability (HA), with both
nodes reading and writing data.
{{< img-hd src="/img/influxdb/influxdb-3-enterprise-high-availability.png" alt="Basic high availability setup" />}}
In a basic HA setup:
- Two nodes both write data to the same object store and both handle queries
- Node 1 and Node 2 are _read replicas_ that read from each other's object store directories
- One of the nodes is designated as the Compactor node
> [!Note]
> Only one node can be designated as the Compactor.
> Compacted data is meant for a single writer, and many readers.
The following examples show how to configure and start two nodes for a basic HA
setup.
- _Node 1_ handles ingest, queries, and compaction
- _Node 2_ handles ingest and queries
<!-- pytest.mark.skip -->
```bash
## NODE 1
# Example variables
# node-id: 'host01'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve \
--node-id host01 \
--cluster-id cluster01 \
--mode ingest,query,compact \
--object-store s3 \
--bucket influxdb-3-enterprise-storage \
--http-bind {{< influxdb/host >}} \
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```
<!-- pytest.mark.skip -->
```bash
## NODE 2
# Example variables
# node-id: 'host02'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve \
--node-id host02 \
--cluster-id cluster01 \
--mode ingest,query \
--object-store s3 \
--bucket influxdb-3-enterprise-storage \
--http-bind localhost:8282 \
--aws-access-key-id AWS_ACCESS_KEY_ID \
--aws-secret-access-key AWS_SECRET_ACCESS_KEY
```
After the nodes have started, querying either node returns data for both nodes,
and _NODE 1_ runs compaction.
To add nodes to this setup, start more read replicas with the same cluster ID.
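For example, a third read replica would reuse the same cluster ID and object
store, with its own node ID and port (the values shown are examples):
<!-- pytest.mark.skip -->
```bash
## Additional read replica
# node-id must be unique within the cluster
influxdb3 serve \
--node-id host03 \
--cluster-id cluster01 \
--mode ingest,query \
--object-store s3 \
--bucket influxdb-3-enterprise-storage \
--http-bind localhost:8383 \
--aws-access-key-id AWS_ACCESS_KEY_ID \
--aws-secret-access-key AWS_SECRET_ACCESS_KEY
```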
### High availability with a dedicated Compactor
Data compaction in {{% product-name %}} is one of the more computationally
demanding operations.
To ensure stable performance in ingest and query nodes, set up a
compactor-only node to isolate the compaction workload.
{{< img-hd src="/img/influxdb/influxdb-3-enterprise-dedicated-compactor.png" alt="Dedicated Compactor setup" />}}
The following example sets up high availability with a dedicated Compactor node:
1. Start two read-write nodes as read replicas, similar to the previous example.
<!-- pytest.mark.skip -->
```bash
## NODE 1 — Writer/Reader Node #1
# Example variables
# node-id: 'host01'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve \
--node-id host01 \
--cluster-id cluster01 \
--mode ingest,query \
--object-store s3 \
--bucket influxdb-3-enterprise-storage \
--http-bind {{< influxdb/host >}} \
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```
<!-- pytest.mark.skip -->
```bash
## NODE 2 — Writer/Reader Node #2
# Example variables
# node-id: 'host02'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve \
--node-id host02 \
--cluster-id cluster01 \
--mode ingest,query \
--object-store s3 \
--bucket influxdb-3-enterprise-storage \
--http-bind localhost:8282 \
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```
2. Start the dedicated compactor node with the `--mode=compact` option to ensure the node **only** runs compaction.
```bash
## NODE 3 — Compactor Node
# Example variables
# node-id: 'host03'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve \
--node-id host03 \
--cluster-id cluster01 \
--mode compact \
--object-store s3 \
--bucket influxdb-3-enterprise-storage \
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```
### High availability with read replicas and a dedicated Compactor
For a robust and effective setup for managing time-series data, you can run
ingest nodes alongside query nodes and a dedicated Compactor node.
{{< img-hd src="/img/influxdb/influxdb-3-enterprise-workload-isolation.png" alt="Workload Isolation Setup" />}}
1. Start ingest nodes with the **`ingest`** mode.
> [!Note]
> Send all write requests to only your ingest nodes.
```bash
## NODE 1 — Writer Node #1
# Example variables
# node-id: 'host01'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve \
--node-id host01 \
--cluster-id cluster01 \
--mode ingest \
--object-store s3 \
--bucket influxdb-3-enterprise-storage \
--http-bind {{< influxdb/host >}} \
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```
<!-- The following examples use different ports for different nodes. Don't use the influxdb/host shortcode below. -->
```bash
## NODE 2 — Writer Node #2
# Example variables
# node-id: 'host02'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve \
--node-id host02 \
--cluster-id cluster01 \
--mode ingest \
--object-store s3 \
--bucket influxdb-3-enterprise-storage \
--http-bind localhost:8282 \
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```
2. Start the dedicated Compactor node with the `compact` mode.
```bash
## NODE 3 — Compactor Node
# Example variables
# node-id: 'host03'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve \
--node-id host03 \
--cluster-id cluster01 \
--mode compact \
--object-store s3 \
--bucket influxdb-3-enterprise-storage \
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```
3. Finally, start the query nodes using the `query` mode.
> [!Note]
> Send all query requests to only your query nodes.
```bash
## NODE 4 — Read Node #1
# Example variables
# node-id: 'host04'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve \
--node-id host04 \
--cluster-id cluster01 \
--mode query \
--object-store s3 \
--bucket influxdb-3-enterprise-storage \
--http-bind localhost:8383 \
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```
```bash
## NODE 5 — Read Node #2
# Example variables
# node-id: 'host05'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve \
--node-id host05 \
--cluster-id cluster01 \
--mode query \
--object-store s3 \
--bucket influxdb-3-enterprise-storage \
--http-bind localhost:8484 \
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```
## Writing and querying in multi-node clusters
You can use the default port `8181` for any write or query request without
changing any of the commands.
> [!Note]
> #### Specify hosts for write and query requests
>
> To benefit from this multi-node, isolated architecture:
>
> - Send write requests to a node that you have designated as an ingester.
> - Send query requests to a node that you have designated as a querier.
>
> When running multiple local instances for testing or separate nodes in
> production, specifying the host ensures writes and queries are routed to the
> correct instance.
{{% code-placeholders "(http://localhost:8585)|AUTH_TOKEN|DATABASE_NAME|QUERY" %}}
```bash
# Example querying a specific host
# HTTP-bound Port: 8585
influxdb3 query \
--host http://localhost:8585 \
--token AUTH_TOKEN \
--database DATABASE_NAME \
"QUERY"
```
{{% /code-placeholders %}}
Replace the following placeholders with your values:
- {{% code-placeholder-key %}}`http://localhost:8585`{{% /code-placeholder-key %}}: the host and port of the node to query
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in "enterprise" %}} with permission to query the specified database{{% /show-in %}}
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database to query
- {{% code-placeholder-key %}}`QUERY`{{% /code-placeholder-key %}}: the SQL or InfluxQL query to run against the database
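Similarly, send write requests to a node you have designated as an ingester.
The following sketch writes a line of line protocol to the `/api/v3/write_lp`
endpoint on an ingest node bound to port 8181; replace `AUTH_TOKEN` and
`DATABASE_NAME` as in the query example above, and adjust the host and port for
your setup.
```bash
# Example writing to a specific ingest node
# HTTP-bound port: 8181
curl "http://localhost:8181/api/v3/write_lp?db=DATABASE_NAME" \
--header "Authorization: Bearer AUTH_TOKEN" \
--data-raw "home,room=Kitchen temp=23.5"
```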
{{% page-nav
prev="/influxdb3/enterprise/get-started/setup/"
prevText="Set up InfluxDB"
next="/influxdb3/enterprise/get-started/write/"
nextText="Write data"
%}}

View File

@ -0,0 +1,27 @@
---
title: Process data in {{% product-name %}}
seotitle: Process data | Get started with {{% product-name %}}
description: >
Learn how to use the {{% product-name %}} Processing Engine to process data and
perform various tasks like downsampling, alerting, forecasting, data
normalization, and more.
menu:
influxdb3_enterprise:
name: Process data
identifier: gs-process-data
parent: Get started
weight: 105
aliases:
- /influxdb3/enterprise/get-started/process-data/
- /influxdb3/enterprise/get-started/processing-engine/
related:
- /influxdb3/enterprise/plugins/
- /influxdb3/enterprise/reference/cli/influxdb3/create/plugin/
- /influxdb3/enterprise/reference/cli/influxdb3/create/trigger/
source: /shared/influxdb3-get-started/processing-engine.md
---
<!--
The content of this page is at
// SOURCE content/shared/influxdb3-get-started/processing-engine.md
-->

View File

@ -0,0 +1,24 @@
---
title: Query data in {{% product-name %}}
seotitle: Query data | Get started with {{% product-name %}}
description: >
Learn how to get started querying data in {{% product-name %}} using native
SQL or InfluxQL with the `influxdb3` CLI and other tools.
menu:
influxdb3_enterprise:
name: Query data
identifier: gs-query-data
parent: Get started
weight: 104
related:
- /influxdb3/enterprise/query-data/
- /influxdb3/enterprise/reference/sql/
- https://datafusion.apache.org/user-guide/sql/index.html, Apache DataFusion SQL reference
- /influxdb3/enterprise/reference/influxql/
source: /shared/influxdb3-get-started/query.md
---
<!--
The content of this page is at
// SOURCE content/shared/influxdb3-get-started/query.md
-->

View File

@ -0,0 +1,21 @@
---
title: Set up {{% product-name %}}
seotitle: Set up InfluxDB | Get started with {{% product-name %}}
description: >
Install, configure, and set up authorization for {{% product-name %}}.
menu:
influxdb3_enterprise:
name: Set up Enterprise
parent: Get started
weight: 101
related:
- /influxdb3/enterprise/install/
- /influxdb3/enterprise/admin/tokens/
- /influxdb3/enterprise/reference/config-options/
source: /shared/influxdb3-get-started/setup.md
---
<!--
The content of this page is at
// SOURCE content/shared/influxdb3-get-started/setup.md
-->

View File

@ -0,0 +1,22 @@
---
title: Write data to {{% product-name %}}
seotitle: Write data | Get started with {{% product-name %}}
description: >
Learn how to write time series data to {{% product-name %}} using the
`influxdb3` CLI and _line protocol_, an efficient, human-readable write syntax.
menu:
influxdb3_enterprise:
name: Write data
identifier: gs-write-data
parent: Get started
weight: 103
related:
- /influxdb3/enterprise/write-data/
- /influxdb3/enterprise/reference/line-protocol/
source: /shared/influxdb3-get-started/write.md
---
<!--
The content of this page is at
// SOURCE content/shared/influxdb3-get-started/write.md
-->

View File

@ -4,7 +4,7 @@ description: >
The `influxdb3 create token` command creates an admin token or a scoped resource token for authenticating and authorizing actions in an {{% product-name %}} instance.
menu:
influxdb3_enterprise:
parent: influxdb3
parent: influxdb3 create
name: influxdb3 create token
weight: 300
source: /shared/influxdb3-cli/create/token/_index.md

View File

@ -254,6 +254,8 @@ export DATABASE_NODE=node0 && influxdb3 serve \
--cluster-id cluster0 \
--object-store file \
--data-dir ~/.influxdb3/data
```
---
#### object-store
@ -318,7 +320,6 @@ The server processes all requests without requiring tokens or authentication.
Optionally disable authz by passing in a comma-separated list of resources.
Valid values are `health`, `ping`, and `metrics`.
---
### AWS

View File

@ -1,24 +1,24 @@
---
title: Use the HTTP API and client libraries to write data
title: Use InfluxDB client libraries to write data
description: >
Use the `/api/v3/write_lp` HTTP API endpoint and InfluxDB API clients to write points as line protocol data to {{% product-name %}}.
Use InfluxDB API clients to write points as line protocol data to {{% product-name %}}.
menu:
influxdb3_enterprise:
name: Use the API and client libraries
name: Use client libraries
parent: Write data
identifier: write-api-client-libs
identifier: write-client-libs
weight: 100
aliases:
- /influxdb3/enterprise/write-data/client-libraries/
- /influxdb3/enterprise/write-data/api-client-libraries/
related:
- /influxdb3/enterprise/reference/syntax/line-protocol/
- /influxdb3/enterprise/get-started/write/
- /influxdb3/enterprise/reference/client-libraries/v3/
- /influxdb3/enterprise/api/v3/#operation/PostWriteLP, /api/v3/write_lp endpoint
source: /shared/influxdb3-write-guides/api-client-libraries.md
source: /shared/influxdb3-write-guides/client-libraries.md
---
<!--
The content for this page is at
// SOURCE content/shared/influxdb3-write-guides/client-libraries.md
-->
-->

View File

@ -17,10 +17,10 @@ related:
- /influxdb3/enterprise/reference/client-libraries/v2/
- /influxdb3/enterprise/api/v3/#operation/PostV2Write, /api/v2/write (v2-compatible) endpoint
- /influxdb3/enterprise/api/v3/#operation/PostV1Write, /write (v1-compatible) endpoint
source: /shared/influxdb3-write-guides/compatibility-apis.md
source: /shared/influxdb3-write-guides/http-api/compatibility-apis.md
---
<!--
The content for this page is at
// SOURCE content/shared/influxdb3-write-guides/compatibility-apis.md
// SOURCE content/shared/influxdb3-write-guides/http-api/compatibility-apis.md
-->

View File

@ -0,0 +1,22 @@
---
title: Use the InfluxDB HTTP API to write data
description: >
Use the `/api/v3/write_lp`, `/api/v2/write`, or `/write` HTTP API endpoints
to write data to {{% product-name %}}.
menu:
influxdb3_enterprise:
name: Use the HTTP API
parent: Write data
identifier: write-http-api
weight: 100
related:
- /influxdb3/enterprise/reference/syntax/line-protocol/
- /influxdb3/enterprise/get-started/write/
- /influxdb3/enterprise/api/v3/#operation/PostWriteLP, /api/v3/write_lp endpoint
source: /shared/influxdb3-write-guides/http-api/_index.md
---
<!--
The content for this page is at
// SOURCE content/shared/influxdb3-write-guides/http-api/_index.md
-->

View File

@ -0,0 +1,26 @@
---
title: Use compatibility APIs and client libraries to write data
description: >
Use HTTP API endpoints compatible with InfluxDB v2 and v1 clients to write
points as line protocol data to {{% product-name %}}.
menu:
influxdb3_enterprise:
name: Use v1 and v2 compatibility APIs
parent: write-http-api
weight: 202
aliases:
- /influxdb3/enterprise/write-data/client-libraries/
- /influxdb3/enterprise/write-data/compatibility-apis/
related:
- /influxdb3/enterprise/reference/syntax/line-protocol/
- /influxdb3/enterprise/get-started/write/
- /influxdb3/enterprise/reference/client-libraries/v2/
- /influxdb3/enterprise/api/v3/#operation/PostV2Write, /api/v2/write (v2-compatible) endpoint
- /influxdb3/enterprise/api/v3/#operation/PostV1Write, /write (v1-compatible) endpoint
source: /shared/influxdb3-write-guides/http-api/compatibility-apis.md
---
<!--
The content for this page is at
// SOURCE content/shared/influxdb3-write-guides/http-api/compatibility-apis.md
-->

View File

@ -0,0 +1,20 @@
---
title: Use the v3 write API to write data
description: >
Use the `/api/v3/write_lp` HTTP API endpoint to write data to {{% product-name %}}.
menu:
influxdb3_enterprise:
name: Use the v3 write API
parent: write-http-api
weight: 201
related:
- /influxdb3/enterprise/reference/syntax/line-protocol/
- /influxdb3/enterprise/get-started/write/
- /influxdb3/enterprise/api/v3/#operation/PostWriteLP, /api/v3/write_lp endpoint
source: /shared/influxdb3-write-guides/http-api/v3-write-lp.md
---
<!--
The content for this page is at
// SOURCE content/shared/influxdb3-write-guides/http-api/v3-write-lp.md
-->

View File

@ -4,10 +4,7 @@ to create a database in {{< product-name >}}.
Provide the following:
- Database name _(see [Database naming restrictions](#database-naming-restrictions))_
- {{< product-name >}} authorization token
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization token.
- {{< product-name >}} {{% token-link "admin" "admin" %}}
<!--Allow fail for database create and delete: namespaces aren't reusable-->
<!--pytest.mark.skip-->

View File

@ -11,10 +11,7 @@ to delete a database from {{< product-name >}}.
Provide the following:
- Name of the database to delete
- {{< product-name >}} authorization token
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization token.
- {{< product-name >}} {{% token-link "admin" "admin" %}}
{{% code-placeholders "DATABASE_NAME" %}}
```sh

View File

@ -6,10 +6,7 @@ Provide the following:
- _(Optional)_ [Output format](#output-formats) with the `--format` option
- _(Optional)_ [Show deleted databases](#list-deleted-databases) with the
`--show-deleted` option
- {{< product-name >}} authorization token with the `-t`, `--token` option
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization token.
- {{< product-name >}} {{% token-link "admin" "admin" %}} with the `-t`, `--token` option
```sh
influxdb3 show databases

View File

@ -93,7 +93,7 @@ that surround field names._
```bash
curl "http://localhost:8181/api/v3/query_sql" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer AUTH_TOKEN" \
--json '{
"db": "mydb",
"q": "SELECT * FROM information_schema.columns WHERE table_schema = '"'iox'"' AND table_name = '"'system_swap'"'",
@ -120,7 +120,7 @@ To view recently executed queries, query the `queries` system table:
```bash
curl "http://localhost:8181/api/v3/query_sql" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer AUTH_TOKEN"
--json '{
"db": "mydb",
"q": "SELECT * FROM system.queries LIMIT 2",

View File

@ -12,7 +12,7 @@ The mechanism for providing your token depends on the client you use to interact
{{< tabs-wrapper >}}
{{% tabs %}}
[influxdb3 CLI](#influxdb3-cli-auth)
[cURL](#curl-auth)
[HTTP API](#http-api-auth)
{{% /tabs %}}
{{% tab-content %}}
@ -49,6 +49,12 @@ authorization token to all `influxdb3` commands.
{{% /tab-content %}}
{{% tab-content %}}
To authenticate directly to the HTTP API, you can include your authorization token in the HTTP Authorization header of your request.
The `Authorization: Bearer AUTH_TOKEN` scheme works with all HTTP API endpoints that require authentication.
The following examples use `curl` to show how to authenticate to the HTTP API.
{{% code-placeholders "YOUR_AUTH_TOKEN" %}}
```bash
# Add your token to the HTTP Authorization header
@ -57,14 +63,46 @@ curl "http://{{< influxdb/host >}}/api/v3/query_sql" \
--data-urlencode "db=DATABASE_NAME" \
--data-urlencode "q=SELECT * FROM 'DATABASE_NAME' WHERE time > now() - INTERVAL '10 minutes'"
```
{{% /code-placeholders %}}
### Authenticate using v1 and v2 compatibility
```bash
# Token scheme with v2 /api/v2/write
curl http://localhost:8181/api/v2/write\?bucket\=DATABASE_NAME \
--header "Authorization: Token YOUR_AUTH_TOKEN" \
--data-raw "home,room=Kitchen temp=23.5 1622547800"
```
```bash
# Basic scheme with v1 /write
# Username is ignored, but required for the request
# Password is your auth token (curl encodes the credentials in Base64)
curl "http://localhost:8181/write?db=DATABASE_NAME" \
--user "admin:YOUR_AUTH_TOKEN" \
--data-raw "home,room=Kitchen temp=23.5 1622547800"
```
```bash
# URL auth parameters with v1 /write
# Username is ignored, but required for the request
curl "http://localhost:8181/write?db=DATABASE_NAME&u=admin&p=YOUR_AUTH_TOKEN" \
--data-raw "home,room=Kitchen temp=23.5 1622547800"
```
{{% /code-placeholders %}}
{{% /tab-content %}}
{{< /tabs-wrapper >}}
Replace the following with your values:
- {{% code-placeholder-key %}}`YOUR_AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link %}}
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database you want to query
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the [database](/influxdb3/version/admin/databases) you want to query
To use tokens with other clients for {{< product-name >}},
see the client-specific documentation:
- [InfluxDB 3 Explorer](/influxdb3/explorer/)
- [InfluxDB client libraries](/influxdb3/version/reference/client-libraries/)
- [Telegraf](/telegraf/v1/)
- [Grafana](/influxdb3/version/visualize-data/grafana/)
{{< children hlevel="h2" readmore=true hr=true >}}

View File

@ -12,6 +12,7 @@ influxdb3 create <SUBCOMMAND>
## Subcommands
{{% show-in "enterprise" %}}
| Subcommand | Description |
| :---------------------------------------------------------------------------------- | :---------------------------------------------- |
| [database](/influxdb3/version/reference/cli/influxdb3/create/database/) | Create a new database |
@ -22,6 +23,19 @@ influxdb3 create <SUBCOMMAND>
| [token](/influxdb3/version/reference/cli/influxdb3/create/token/) | Create a new authentication token |
| [trigger](/influxdb3/version/reference/cli/influxdb3/create/trigger/) | Create a new trigger for the processing engine |
| help | Print command help or the help of a subcommand |
{{% /show-in %}}
{{% show-in "core" %}}
| Subcommand | Description |
| :---------------------------------------------------------------------------------- | :---------------------------------------------- |
| [database](/influxdb3/version/reference/cli/influxdb3/create/database/) | Create a new database |
| [last_cache](/influxdb3/version/reference/cli/influxdb3/create/last_cache/) | Create a new last value cache |
| [distinct_cache](/influxdb3/version/reference/cli/influxdb3/create/distinct_cache/) | Create a new distinct value cache |
| [table](/influxdb3/version/reference/cli/influxdb3/create/table/) | Create a new table in a database |
| [token](/influxdb3/version/reference/cli/influxdb3/create/token/) | Create a new authentication token |
| [trigger](/influxdb3/version/reference/cli/influxdb3/create/trigger/) | Create a new trigger for the processing engine |
| help | Print command help or the help of a subcommand |
{{% /show-in %}}
## Options

View File

@ -11,16 +11,28 @@ influxdb3 delete <SUBCOMMAND>
## Subcommands
| Subcommand | Description |
| :----------------------------------------------------------------------------- | :--------------------------------------------- |
| [database](/influxdb3/version/reference/cli/influxdb3/delete/database/) | Delete a database |
| [file_index](/influxdb3/version/reference/cli/influxdb3/delete/file_index/) | Delete a file index for a database or table |
| [last_cache](/influxdb3/version/reference/cli/influxdb3/delete/last_cache/) | Delete a last value cache |
{{% show-in "enterprise" %}}
| Subcommand | Description |
| :---------------------------------------------------------------------------------- | :--------------------------------------------- |
| [database](/influxdb3/version/reference/cli/influxdb3/delete/database/) | Delete a database |
| [file_index](/influxdb3/version/reference/cli/influxdb3/delete/file_index/) | Delete a file index for a database or table |
| [last_cache](/influxdb3/version/reference/cli/influxdb3/delete/last_cache/) | Delete a last value cache |
| [distinct_cache](/influxdb3/version/reference/cli/influxdb3/delete/distinct_cache/) | Delete a metadata cache |
| [plugin](/influxdb3/version/reference/cli/influxdb3/delete/plugin/) | Delete a processing engine plugin |
| [table](/influxdb3/version/reference/cli/influxdb3/delete/table/) | Delete a table from a database |
| [trigger](/influxdb3/version/reference/cli/influxdb3/delete/trigger/) | Delete a trigger for the processing engine |
| help | Print command help or the help of a subcommand |
| [table](/influxdb3/version/reference/cli/influxdb3/delete/table/) | Delete a table from a database |
| [trigger](/influxdb3/version/reference/cli/influxdb3/delete/trigger/) | Delete a trigger for the processing engine |
| help | Print command help or the help of a subcommand |
{{% /show-in %}}
{{% show-in "core" %}}
| Subcommand | Description |
| :---------------------------------------------------------------------------------- | :--------------------------------------------- |
| [database](/influxdb3/version/reference/cli/influxdb3/delete/database/) | Delete a database |
| [last_cache](/influxdb3/version/reference/cli/influxdb3/delete/last_cache/) | Delete a last value cache |
| [distinct_cache](/influxdb3/version/reference/cli/influxdb3/delete/distinct_cache/) | Delete a metadata cache |
| [table](/influxdb3/version/reference/cli/influxdb3/delete/table/) | Delete a table from a database |
| [trigger](/influxdb3/version/reference/cli/influxdb3/delete/trigger/) | Delete a trigger for the processing engine |
| help | Print command help or the help of a subcommand |
{{% /show-in %}}
## Options

File diff suppressed because it is too large

View File

@ -0,0 +1,262 @@
The {{% product-name %}} processing engine is an embedded Python virtual machine
(VM) that runs code inside the database to process and transform data.
Create processing engine [plugins](#plugin) that run when [triggered](#trigger)
by specific events.
- [Processing engine terminology](#processing-engine-terminology)
- [Plugin](#plugin)
- [Trigger](#trigger)
- [Trigger types](#trigger-types)
- [Activate the processing engine](#activate-the-processing-engine)
- [Create a plugin](#create-a-plugin)
- [Test a plugin on the server](#test-a-plugin-on-the-server)
- [Create a trigger](#create-a-trigger)
- [Enable the trigger](#enable-the-trigger)
## Processing engine terminology
### Plugin
A plugin is a Python function that has a signature compatible with a processing
engine [trigger](#trigger).
### Trigger
When you create a trigger, you specify a [plugin](#plugin), a database, optional
arguments, and a _trigger-spec_, which defines when the plugin is executed and
what data it receives.
#### Trigger types
InfluxDB 3 provides the following types of triggers, each with specific
trigger-specs:
- **On WAL flush**: Sends a batch of written data (for a specific table or all
tables) to a plugin (by default, every second).
- **On Schedule**: Executes a plugin on a user-configured schedule (using a
crontab or a duration). This trigger type is useful for data collection and
deadman monitoring.
- **On Request**: Binds a plugin to a custom HTTP API endpoint at
`/api/v3/engine/<ENDPOINT_PATH>`.
The plugin receives the HTTP request headers and content, and can parse,
process, and send the data into the database or to third-party services.
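For example, a sketch of registering a WAL flush plugin with the
`influxdb3 create trigger` command (the database, plugin file, and trigger names
are placeholders; see [Create a trigger](#create-a-trigger) below for details):
```bash
influxdb3 create trigger \
--database DATABASE_NAME \
--token AUTH_TOKEN \
--plugin-filename example_plugin.py \
--trigger-spec "table:TABLE_NAME" \
example_trigger
```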
## Activate the processing engine
To activate the processing engine, include the `--plugin-dir <PLUGIN_DIR>` option
when starting the {{% product-name %}} server.
`PLUGIN_DIR` is your file system location for storing [plugin](#plugin) files for
the processing engine to run.
{{% code-placeholders "PLUGIN_DIR" %}}
<!-- pytest.mark.skip -->
```bash
influxdb3 serve \
# ...
--plugin-dir PLUGIN_DIR
```
{{% /code-placeholders %}}
Replace {{% code-placeholder-key %}}`PLUGIN_DIR`{{% /code-placeholder-key %}}
with the path to your plugin directory. This path can be absolute or relative
to the current working directory of the `influxdb3` server.
## Create a plugin
To create a plugin, write and store a Python file in your configured `PLUGIN_DIR`.
The following example is a WAL flush plugin that processes data before it gets
persisted to the object store.
##### Example Python plugin for WAL rows
```python
# This is the basic structure for Python plugin code that runs in the
# InfluxDB 3 Processing engine.
# When creating a trigger, you can provide runtime arguments to your plugin,
# allowing you to write generic code that uses variables such as monitoring
# thresholds, environment variables, and host names.
#
# Use the following exact signature to define a function for the WAL flush
# trigger.
# When you create a trigger for a WAL flush plugin, you specify the database
# and tables that the plugin receives written data from on every WAL flush
# (default is once per second).
def process_writes(influxdb3_local, table_batches, args=None):
    # Example of logging through the API provided to the plugin.
    if args and "arg1" in args:
        influxdb3_local.info("arg1: " + args["arg1"])

    # Use arguments provided when the trigger was created to build
    # parameters for a query.
    query_params = {"room": "Kitchen"}

    # Execute a parameterized query (only SQL is supported) against the
    # database the trigger is configured for.
    query_result = influxdb3_local.query("SELECT * FROM home where room = '$room'", query_params)

    # The result is a list of dictionaries keyed by column name.
    influxdb3_local.info("query result: " + str(query_result))

    # table_batches contains the data written to the database (or table) of
    # interest since the last WAL flush--one batch per table.
    for table_batch in table_batches:
        # Each batch includes the table name.
        influxdb3_local.info("table: " + table_batch["table_name"])

        # Skip the table this plugin writes to so it doesn't process its own writes.
        if table_batch["table_name"] == "some_table":
            continue

        # Each row is a dictionary keyed by column name.
        for row in table_batch["rows"]:
            influxdb3_local.info("row: " + str(row))

    # Build a line of line protocol to write back to the database.
    # Tags must come first and in the same order for every line in a table,
    # followed by fields and, optionally, a timestamp.
    line = LineBuilder("some_table")\
        .tag("tag1", "tag1_value")\
        .tag("tag2", "tag2_value")\
        .int64_field("field1", 1)\
        .float64_field("field2", 2.0)\
        .string_field("field3", "number three")

    # Writes are buffered and persisted when the function returns.
    influxdb3_local.write(line)

    # Another example that sets an explicit nanosecond timestamp.
    other_line = LineBuilder("other_table")
    other_line.int64_field("other_field", 1)
    other_line.float64_field("other_field2", 3.14)
    other_line.time_ns(1302)

    # Plugins can write to any database on the server.
    influxdb3_local.write_to_db("mytestdb", other_line)

    influxdb3_local.info("done")
```
## Test a plugin on the server
Use the [`influxdb3 test wal_plugin`](/influxdb3/version/reference/cli/influxdb3/test/wal_plugin/)
CLI command to test your processing engine plugin safely without
affecting actual data. During a plugin test:
- Queries in the plugin run against the server that receives the test request.
- Writes aren't sent to the server; the command returns them so you can inspect them.
To test a plugin:
1. Save the [example plugin code](#example-python-plugin-for-wal-rows) to a
   plugin file inside your plugin directory. If you haven't yet written data
   to the table that the example queries, comment out the query lines.
2. To run the test, enter the following command with the following options:
- `--lp` or `--file`: The line protocol to test
- Optional: `--input-arguments`: A comma-delimited list of `<KEY>=<VALUE>` arguments for your plugin code
{{% code-placeholders "INPUT_LINE_PROTOCOL|INPUT_ARGS|DATABASE_NAME|AUTH_TOKEN|PLUGIN_FILENAME" %}}
```bash
influxdb3 test wal_plugin \
--database DATABASE_NAME \
--token AUTH_TOKEN \
--lp INPUT_LINE_PROTOCOL \
--input-arguments INPUT_ARGS \
PLUGIN_FILENAME
```
{{% /code-placeholders %}}
Replace the following:
- {{% code-placeholder-key %}}`INPUT_LINE_PROTOCOL`{{% /code-placeholder-key %}}: the line protocol to test
- Optional: {{% code-placeholder-key %}}`INPUT_ARGS`{{% /code-placeholder-key %}}: a comma-delimited list of `<KEY>=<VALUE>` arguments for your plugin code--for example, `arg1=hello,arg2=world`
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database to test against
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: the {{% token-link "admin" %}} for your {{% product-name %}} server
- {{% code-placeholder-key %}}`PLUGIN_FILENAME`{{% /code-placeholder-key %}}: the name of the plugin file to test
### Example: Test a plugin
<!-- pytest.mark.skip -->
```bash
# Test a plugin
# Requires:
# - A database named `sensors`
# - A Python plugin file named `test.py` in your plugin directory
influxdb3 test wal_plugin \
--lp "my_measure,tag1=asdf f1=1.0 123" \
--token apiv3_0xxx0o0XxXxx00Xxxx000xXXxoo0== \
--database sensors \
--input-arguments "arg1=hello,arg2=world" \
test.py
```
The command runs the plugin code against the test data and then responds with
the plugin result.
You can quickly see how the plugin behaves, what data it would have written to
the database, and any errors.
You can then edit your Python code in the plugins directory, and rerun the test.
The server reloads the file for every request to the `test` API.
For more information, see [`influxdb3 test wal_plugin`](/influxdb3/version/reference/cli/influxdb3/test/wal_plugin/)
or run `influxdb3 test wal_plugin -h`.
## Create a trigger
With the plugin code inside the server plugin directory, and a successful test,
you're ready to create a trigger to run the plugin. Use the
[`influxdb3 create trigger` command](/influxdb3/version/reference/cli/influxdb3/create/trigger/)
to create a trigger.
```bash
# Create a trigger that runs the plugin
influxdb3 create trigger \
--token apiv3_0xxx0o0XxXxx00Xxxx000xXXxoo0== \
--database sensors \
--plugin test_plugin \
--trigger-spec "table:foo" \
--trigger-arguments "arg1=hello,arg2=world" \
trigger1
```
## Enable the trigger
After you have created a plugin and trigger, enter the following command to
enable the trigger and have it run the plugin as you write data:
{{% code-placeholders "DATABASE_NAME|AUTH_TOKEN|TRIGGER_NAME" %}}
```bash
influxdb3 enable trigger \
--token AUTH_TOKEN \
--database DATABASE_NAME \
TRIGGER_NAME
```
{{% /code-placeholders %}}
Replace the following placeholders with your values:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database to enable the trigger in
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "admin" %}}
- {{% code-placeholder-key %}}`TRIGGER_NAME`{{% /code-placeholder-key %}}: the name of the trigger to enable
For example, to enable the trigger named `trigger1` in the `sensors` database:
```bash
influxdb3 enable trigger \
--token apiv3_0xxx0o0XxXxx00Xxxx000xXXxoo0== \
  --database sensors \
  trigger1
```
## Next steps
If you've completed this Get Started guide for {{% product-name %}},
learn more about tools and options for:
- [Writing data](/influxdb3/version/write-data/)
- [Querying data](/influxdb3/version/query-data/)
- [Processing data with plugins](/influxdb3/version/plugins/)
- [Visualizing data](/influxdb3/version/visualize-data/)

View File

@ -0,0 +1,503 @@
<!-- COMMENT TO ALLOW STARTING WITH SHORTCODE -->
{{% product-name %}} supports both native SQL and InfluxQL for querying data. InfluxQL is
an SQL-like query language designed for InfluxDB v1 and customized for time
series queries.
{{% show-in "core" %}}
{{< product-name >}} limits
query time ranges to approximately 72 hours (both recent and historical) to
ensure query performance. For more information about the 72-hour limitation, see
the [update on InfluxDB 3 Core's 72-hour limitation](https://www.influxdata.com/blog/influxdb3-open-source-public-alpha-jan-27/).
{{% /show-in %}}
> [!Note]
> Flux, the language introduced in InfluxDB v2, is **not** supported in InfluxDB 3.
<!-- TOC -->
- [Query data with the influxdb3 CLI](#query-data-with-the-influxdb3-cli)
- [Example queries](#example-queries)
- [Other tools for executing queries](#other-tools-for-executing-queries)
- [SQL vs InfluxQL](#sql-vs-influxql)
- [SQL](#sql)
- [InfluxQL](#influxql)
- [Optimize queries](#optimize-queries)
- [Last values cache](#last-values-cache)
- [Distinct values cache](#distinct-values-cache)
{{% show-in "enterprise" %}}- [File indexes](#file-indexes){{% /show-in %}}
<!-- /TOC -->
## Query data with the influxdb3 CLI
To get started querying data in {{% product-name %}}, use the
[`influxdb3 query` command](/influxdb3/version/reference/cli/influxdb3/query/)
and provide the following:
- `-H`, `--host`: The host URL of the server _(default is `http://127.0.0.1:8181`)_
- `-d`, `--database`: _({{% req %}})_ The name of the database to query
- `-l`, `--language`: The query language of the provided query string
- `sql` _(default)_
- `influxql`
- SQL or InfluxQL query as a string
> [!Important]
> If the `INFLUXDB3_AUTH_TOKEN` environment variable defined in
> [Set up {{% product-name %}}](/influxdb3/version/get-started/setup/#set-your-token-for-authorization)
> isn't set in your environment, set it or provide your token using
> the `-t, --token` option in your command.
To query the home sensor sample data you wrote in
[Write data to {{% product-name %}}](/influxdb3/version/get-started/write/#write-data-using-the-cli),
run the following command:
{{% code-placeholders "DATABASE_NAME|AUTH_TOKEN" %}}
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[SQL](#)
[InfluxQL](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
<!-- pytest.mark.skip -->
```bash
influxdb3 query \
--database DATABASE_NAME \
"SELECT * FROM home ORDER BY time"
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
<!-- pytest.mark.skip -->
```bash
influxdb3 query \
--database DATABASE_NAME \
--language influxql \
"SELECT * FROM home"
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
{{% /code-placeholders %}}
_Replace {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}
with the name of the database to query._
To query from a specific time range, use the `WHERE` clause to designate the
boundaries of your time range.
{{% code-placeholders "DATABASE_NAME|AUTH_TOKEN" %}}
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[SQL](#)
[InfluxQL](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
<!-- pytest.mark.skip -->
```bash
influxdb3 query \
--database DATABASE_NAME \
"SELECT * FROM home WHERE time >= now() - INTERVAL '7 days' ORDER BY time"
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
<!-- pytest.mark.skip -->
```bash
influxdb3 query \
--database DATABASE_NAME \
--language influxql \
"SELECT * FROM home WHERE time >= now() - 7d"
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
{{% /code-placeholders %}}
### Example queries
{{< expand-wrapper >}}
{{% expand "List tables in a database" %}}
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[SQL](#)
[InfluxQL](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sql
SHOW TABLES
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```sql
SHOW MEASUREMENTS
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
{{% /expand %}}
{{% expand "Return the average temperature of all rooms" %}}
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[SQL](#)
[InfluxQL](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sql
SELECT avg(temp) AS avg_temp FROM home
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```sql
SELECT MEAN(temp) AS avg_temp FROM home
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
{{% /expand %}}
{{% expand "Return the average temperature of the kitchen" %}}
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[SQL](#)
[InfluxQL](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sql
SELECT avg(temp) AS avg_temp FROM home WHERE room = 'Kitchen'
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```sql
SELECT MEAN(temp) AS avg_temp FROM home WHERE room = 'Kitchen'
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
{{% /expand %}}
{{% expand "Query data from an absolute time range" %}}
{{% influxdb/custom-timestamps %}}
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[SQL](#)
[InfluxQL](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sql
SELECT
*
FROM
home
WHERE
time >= '2022-01-01T12:00:00Z'
AND time <= '2022-01-01T18:00:00Z'
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```sql
SELECT
*
FROM
home
WHERE
time >= '2022-01-01T12:00:00Z'
AND time <= '2022-01-01T18:00:00Z'
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
{{% /influxdb/custom-timestamps %}}
{{% /expand %}}
{{% expand "Query data from a relative time range" %}}
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[SQL](#)
[InfluxQL](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sql
SELECT
*
FROM
home
WHERE
time >= now() - INTERVAL '7 days'
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```sql
SELECT
*
FROM
home
WHERE
time >= now() - 7d
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
{{% /expand %}}
{{% expand "Calculate average humidity in 3-hour windows per room" %}}
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[SQL](#)
[InfluxQL](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sql
SELECT
date_bin(INTERVAL '3 hours', time) AS time,
room,
avg(hum) AS avg_hum
FROM
home
GROUP BY
1,
room
ORDER BY
room,
1
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```sql
SELECT
MEAN(hum) AS avg_hum
FROM
home
WHERE
time >= '2022-01-01T08:00:00Z'
AND time <= '2022-01-01T20:00:00Z'
GROUP BY
time(3h),
room
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
{{% /expand %}}
{{< /expand-wrapper >}}
## Other tools for executing queries
Other tools are available for querying data in {{% product-name %}}, including
the following:
{{< expand-wrapper >}}
{{% expand "Query using the API" %}}
#### Query using the API
InfluxDB 3 supports Flight (gRPC) APIs and an HTTP API.
To query your database using the HTTP API, send a request to the `/api/v3/query_sql` or `/api/v3/query_influxql` endpoints.
In the request, specify the database name in the `db` parameter
and a query in the `q` parameter.
You can pass parameters in the query string or inside a JSON object.
Use the `format` parameter to specify the response format: `pretty`, `jsonl`, `parquet`, `csv`, or `json`. The default is `json`.
##### Example: Query passing URL-encoded parameters
The following example sends an HTTP `GET` request with a URL-encoded SQL query:
{{% code-placeholders "DATABASE_NAME|AUTH_TOKEN" %}}
```bash
curl -G "http://{{< influxdb/host >}}/api/v3/query_sql" \
--header 'Authorization: Bearer AUTH_TOKEN' \
--data-urlencode "db=DATABASE_NAME" \
--data-urlencode "q=select * from cpu limit 5"
```
{{% /code-placeholders %}}
Replace the following placeholders with your values:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database to query
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in "enterprise" %}} with permission to query the specified database{{% /show-in %}}
##### Example: Query passing JSON parameters
The following example sends an HTTP `POST` request with parameters in a JSON payload:
{{% code-placeholders "DATABASE_NAME|AUTH_TOKEN" %}}
```bash
curl http://{{< influxdb/host >}}/api/v3/query_sql \
  --header 'Authorization: Bearer AUTH_TOKEN' \
  --json '{"db": "DATABASE_NAME", "q": "select * from cpu limit 5"}'
```
{{% /code-placeholders %}}
Replace the following placeholders with your values:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database to query
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in "enterprise" %}} with permission to query the specified database{{% /show-in %}}
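
##### Example: Specify the response format

The following sketch uses the same placeholders as the previous examples and
adds the `format` parameter to return line-delimited JSON:

{{% code-placeholders "DATABASE_NAME|AUTH_TOKEN" %}}
```bash
curl -G "http://{{< influxdb/host >}}/api/v3/query_sql" \
  --header 'Authorization: Bearer AUTH_TOKEN' \
  --data-urlencode "db=DATABASE_NAME" \
  --data-urlencode "format=jsonl" \
  --data-urlencode "q=select * from cpu limit 5"
```
{{% /code-placeholders %}}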
{{% /expand %}}
{{% expand "Query using the Python client" %}}
#### Query using the Python client
Use the InfluxDB 3 Python library to interact with the database and integrate with your application.
We recommend installing the required packages in a Python virtual environment for your specific project.
To get started, install the `influxdb3-python` package.
```bash
pip install influxdb3-python
```
From here, you can connect to your database with the client library using the **host**, **database name**, and **token**:
{{% code-placeholders "DATABASE_NAME|AUTH_TOKEN" %}}
```python
from influxdb_client_3 import InfluxDBClient3
client = InfluxDBClient3(
token='AUTH_TOKEN',
host='http://{{< influxdb/host >}}',
database='DATABASE_NAME'
)
```
{{% /code-placeholders %}}
Replace the following placeholders with your values:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database to query
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in "enterprise" %}} with permission to query the specified database{{% /show-in %}}
The following example shows how to query using SQL, and then
use PyArrow to explore the schema and process results.
To authorize the query, the example retrieves the {{% token-link "database" %}}
from the `INFLUXDB3_AUTH_TOKEN` environment variable.
```python
from influxdb_client_3 import InfluxDBClient3
import os
client = InfluxDBClient3(
token=os.environ.get('INFLUXDB3_AUTH_TOKEN'),
host='http://{{< influxdb/host >}}',
database='servers'
)
# Execute the query and return an Arrow table
table = client.query(
query="SELECT * FROM cpu LIMIT 10",
language="sql"
)
print("\n#### View Schema information\n")
print(table.schema)
print("\n#### Use PyArrow to read the specified columns\n")
print(table.column('usage_active'))
print(table.select(['host', 'usage_active']))
print(table.select(['time', 'host', 'usage_active']))
print("\n#### Use PyArrow compute functions to aggregate data\n")
print(table.group_by('host').aggregate([]))
print(table.group_by('cpu').aggregate([('time_system', 'mean')]))
```
For more information about the Python client library, see the
[`influxdb3-python` repository](https://github.com/InfluxCommunity/influxdb3-python)
in GitHub.
{{% /expand %}}
{{% expand "Query using InfluxDB 3 Explorer" %}}
#### Query using InfluxDB 3 Explorer
You can use the InfluxDB 3 Explorer web-based interface to query and visualize data,
and administer your {{% product-name %}} instance.
For more information, see how to [install InfluxDB 3 Explorer](/influxdb3/explorer/install/)
using Docker and get started querying your data.
{{% /expand %}}
{{< /expand-wrapper >}}
## SQL vs InfluxQL
{{% product-name %}} supports two query languages--SQL and InfluxQL.
While these two query languages are similar, there are important differences to
consider.
### SQL
The InfluxDB 3 SQL implementation provides a full-featured SQL query engine
powered by [Apache DataFusion](https://datafusion.apache.org/). InfluxDB extends
DataFusion with additional time series-specific functionality and supports
complex SQL queries, including queries that use joins, unions, window functions,
and more.
- [SQL query guides](/influxdb3/version/query-data/sql/)
- [SQL reference](/influxdb3/version/reference/sql/)
- [Apache DataFusion SQL reference](https://datafusion.apache.org/user-guide/sql/index.html)
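
For example, the following is a sketch of a window-function query over the home
sensor sample data written earlier in this guide (table and column names assume
that sample data):

```sql
-- Compare each reading to the average temperature for its room
SELECT
  time,
  room,
  temp,
  temp - avg(temp) OVER (PARTITION BY room) AS diff_from_room_avg
FROM home
ORDER BY room, time
```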
### InfluxQL
InfluxQL is a SQL-like query language built for InfluxDB v1 and supported in
{{% product-name %}}. Its syntax and functionality are similar to SQL, but InfluxQL
is designed specifically for querying time series data. InfluxQL doesn't offer
the full range of query functionality that SQL does.
If you are migrating from previous versions of InfluxDB, you can continue to use
InfluxQL and the established InfluxQL-related APIs you have been using.
- [InfluxQL query guides](/influxdb3/version/query-data/influxql/)
- [InfluxQL reference](/influxdb3/version/reference/influxql/)
- [InfluxQL feature support](/influxdb3/version/reference/influxql/feature-support/)
## Optimize queries
{{% product-name %}} provides the following optimization options to improve
specific kinds of queries:
- [Last values cache](#last-values-cache)
- [Distinct values cache](#distinct-values-cache)
{{% show-in "enterprise" %}}- [File indexes](#file-indexes){{% /show-in %}}
### Last values cache
The {{% product-name %}} last values cache (LVC) stores the last N values in a
series or column hierarchy in memory. This lets the database answer queries for
the most recent values in under 10 milliseconds.
For information about configuring and using the LVC, see:
- [Manage a last values cache](/influxdb3/version/admin/last-value-cache/)
- [Query the last values cache](/influxdb3/version/admin/last-value-cache/query/)
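
As a sketch (the subcommand flags and the `last_cache()` function name are
assumptions--check the pages above or `influxdb3 create last_cache --help` for
exact syntax), creating and querying an LVC for the home sensor data might look
like this:

```bash
# Create a last values cache on the home table, keyed by room
# (assumed flag names)
influxdb3 create last_cache \
  --database sensors \
  --table home \
  --key-columns room \
  --value-columns temp,hum \
  homeLastCache

# Query the cache using the SQL last_cache() table function (assumed name)
influxdb3 query \
  --database sensors \
  "SELECT * FROM last_cache('home', 'homeLastCache')"
```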
### Distinct values cache
The {{% product-name %}} distinct values cache (DVC) stores distinct values for
specified columns in a series or column hierarchy in memory.
This is useful for fast metadata lookups, which can return in under 30 milliseconds.
For information about configuring and using the DVC, see:
- [Manage a distinct values cache](/influxdb3/version/admin/distinct-value-cache/)
- [Query the distinct values cache](/influxdb3/version/admin/distinct-value-cache/query/)
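
Similarly, a distinct values cache sketch (the flag names and the
`distinct_cache()` function name are assumptions--check the pages above for
exact syntax) might look like this:

```bash
# Create a distinct values cache for the room tag on the home table
# (assumed flag names)
influxdb3 create distinct_cache \
  --database sensors \
  --table home \
  --columns room \
  homeDistinctCache

# Query the cache using the SQL distinct_cache() table function (assumed name)
influxdb3 query \
  --database sensors \
  "SELECT * FROM distinct_cache('home', 'homeDistinctCache')"
```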
{{% show-in "enterprise" %}}
### File indexes
{{% product-name %}} lets you customize how your data is indexed to help
optimize query performance for your specific workload, especially workloads that
include single-series queries. Define custom indexing strategies for databases
or specific tables. For more information, see
[Manage file indexes](/influxdb3/enterprise/admin/file-index/).
{{% /show-in %}}
{{% page-nav
prev="/influxdb3/version/get-started/write/"
prevText="Write data"
next="/influxdb3/version/get-started/process/"
nextText="Processing engine"
%}}

View File

@ -0,0 +1,538 @@
<!-- TOC -->
- [Prerequisites](#prerequisites)
- [Start InfluxDB](#start-influxdb)
- [Object store examples](#object-store-examples)
{{% show-in "enterprise" %}}
- [Set up licensing](#set-up-licensing)
- [Available license types](#available-license-types)
{{% /show-in %}}
- [Set up authorization](#set-up-authorization)
- [Create an operator token](#create-an-operator-token)
- [Set your token for authorization](#set-your-token-for-authorization)
<!-- /TOC -->
## Prerequisites
To get started, you'll need:
- **{{% product-name %}}**: [Install and verify the latest version](/influxdb3/version/install/) on your system.
- If you want to persist data, have access to one of the following:
- A directory on your local disk where you can persist data (used by examples in this guide)
- S3-compatible object store and credentials
## Start InfluxDB
Use the [`influxdb3 serve` command](/influxdb3/version/reference/cli/influxdb3/serve/)
to start {{% product-name %}}.
Provide the following:
{{% show-in "enterprise" %}}
- `--node-id`: A string identifier that distinguishes individual server
instances within the cluster. This forms the final part of the storage path:
`<CONFIGURED_PATH>/<CLUSTER_ID>/<NODE_ID>`.
In a multi-node setup, this ID is used to reference specific nodes.
- `--cluster-id`: A string identifier that determines part of the storage path
hierarchy. All nodes within the same cluster share this identifier.
The storage path follows the pattern `<CONFIGURED_PATH>/<CLUSTER_ID>/<NODE_ID>`.
In a multi-node setup, this ID is used to reference the entire cluster.
{{% /show-in %}}
{{% show-in "core" %}}
- `--node-id`: A string identifier that distinguishes individual server instances.
This forms the final part of the storage path: `<CONFIGURED_PATH>/<NODE_ID>`.
{{% /show-in %}}
- `--object-store`: Specifies the type of object store to use.
InfluxDB supports the following:
- `file` _(default)_: local file system
- `memory`: in memory _(no object persistence)_
- `memory-throttled`: like `memory` but with latency and throughput that
somewhat resembles a cloud-based object store
- `s3`: AWS S3 and S3-compatible services like Ceph or Minio
- `google`: Google Cloud Storage
- `azure`: Azure Blob Storage
> [!Note]
> #### Diskless architecture
>
> InfluxDB 3 supports a diskless architecture that can operate with object
> storage alone, eliminating the need for locally attached disks.
> {{% product-name %}} can also work with only local disk storage when needed.
>
> {{% show-in "enterprise" %}}
> The combined path structure `<CONFIGURED_PATH>/<CLUSTER_ID>/<NODE_ID>` ensures
> proper organization of data in your object store, allowing for clean
> separation between clusters and individual nodes.
> {{% /show-in %}}
For this getting started guide, use the `file` object store to persist data to
your local disk.
{{% show-in "enterprise" %}}
```bash
# File system object store
# Provide the filesystem directory
influxdb3 serve \
--node-id host01 \
--cluster-id cluster01 \
--object-store file \
--data-dir ~/.influxdb3
```
{{% /show-in %}}
{{% show-in "core" %}}
```bash
# File system object store
# Provide the file system directory
influxdb3 serve \
--node-id host01 \
--object-store file \
--data-dir ~/.influxdb3
```
{{% /show-in %}}
### Object store examples
{{< expand-wrapper >}}
{{% expand "File system object store" %}}
Store data in a specified directory on the local filesystem.
This is the default object store type.
Replace the following with your values:
{{% show-in "enterprise" %}}
```bash
# Filesystem object store
# Provide the filesystem directory
influxdb3 serve \
--node-id host01 \
--cluster-id cluster01 \
--object-store file \
--data-dir ~/.influxdb3
```
{{% /show-in %}}
{{% show-in "core" %}}
```bash
# File system object store
# Provide the file system directory
influxdb3 serve \
--node-id host01 \
--object-store file \
--data-dir ~/.influxdb3
```
{{% /show-in %}}
{{% /expand %}}
{{% expand "Docker with a mounted file system object store" %}}
To run the [Docker image](/influxdb3/version/install/#docker-image) and persist
data to the local file system, mount a volume for the object store--for example,
provide the following options with your `docker run` command:
- `--volume /path/on/host:/path/in/container`: Mounts a directory from your file system to the container
- `--object-store file --data-dir /path/in/container`: Use the volume mount for object storage
{{% show-in "enterprise" %}}
<!--pytest.mark.skip-->
```bash
# File system object store with Docker
# Create a mount
# Provide the mount path
docker run -it \
--volume /path/on/host:/path/in/container \
influxdb:3-enterprise influxdb3 serve \
--node-id my_host \
--cluster-id my_cluster \
--object-store file \
--data-dir /path/in/container
```
{{% /show-in %}}
{{% show-in "core" %}}
<!--pytest.mark.skip-->
```bash
# File system object store with Docker
# Create a mount
# Provide the mount path
docker run -it \
--volume /path/on/host:/path/in/container \
influxdb:3-core influxdb3 serve \
--node-id my_host \
--object-store file \
--data-dir /path/in/container
```
{{% /show-in %}}
> [!Note]
>
> The {{% product-name %}} Docker image exposes port `8181`, the `influxdb3`
> server default for HTTP connections.
> To map the exposed port to a different port when running a container, see the
> Docker guide for [Publishing and exposing ports](https://docs.docker.com/get-started/docker-concepts/running-containers/publishing-ports/).
{{% /expand %}}
{{% expand "Docker compose with a mounted file system object store" %}}
{{% show-in "enterprise" %}}
1. Open `compose.yaml` for editing and add a `services` entry for {{% product-name %}}--for example:
```yaml
# compose.yaml
services:
influxdb3-{{< product-key >}}:
container_name: influxdb3-{{< product-key >}}
image: influxdb:3-{{< product-key >}}
ports:
- 8181:8181
command:
- influxdb3
- serve
- --node-id=node0
- --cluster-id=cluster0
- --object-store=file
- --data-dir=/var/lib/influxdb3
- --plugin-dir=/var/lib/influxdb3-plugins
environment:
- INFLUXDB3_LICENSE_EMAIL=EMAIL_ADDRESS
```
_Replace `EMAIL_ADDRESS` with your email address to bypass the email prompt
when generating a trial or at-home license. For more information, see [Manage your
{{% product-name %}} license](/influxdb3/version/admin/license/)_.
{{% /show-in %}}
{{% show-in "core" %}}
1. Open `compose.yaml` for editing and add a `services` entry for {{% product-name %}}--for example:
```yaml
# compose.yaml
services:
influxdb3-{{< product-key >}}:
container_name: influxdb3-{{< product-key >}}
image: influxdb:3-{{< product-key >}}
ports:
- 8181:8181
command:
- influxdb3
- serve
- --node-id=node0
- --object-store=file
- --data-dir=/var/lib/influxdb3
- --plugin-dir=/var/lib/influxdb3-plugins
```
{{% /show-in %}}
2. Use the Docker Compose CLI to start the server.
Optional: to make sure you have the latest version of the image before you
start the server, run `docker compose pull`.
<!--pytest.mark.skip-->
```bash
docker compose pull && docker compose run influxdb3-{{< product-key >}}
```
InfluxDB 3 starts in a container with host port `8181` mapped to container port
`8181`, the `influxdb3` server default for HTTP connections.
> [!Tip]
> #### Custom port mapping
>
> To customize your `influxdb3` server hostname and port, specify the
> [`--http-bind` option or the `INFLUXDB3_HTTP_BIND_ADDR` environment variable](/influxdb3/version/reference/config-options/#http-bind).
>
> For more information about mapping your container port to a specific host port, see the
> Docker guide for [Publishing and exposing ports](https://docs.docker.com/get-started/docker-concepts/running-containers/publishing-ports/).
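>
> For example, in the Compose file above, you could configure the container's
> listen address by adding an environment variable (the address value shown
> here is only an illustration; see the linked configuration option):
>
> ```yaml
> environment:
>   - INFLUXDB3_HTTP_BIND_ADDR=0.0.0.0:8282
> ```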
> [!Note]
> #### Stopping an InfluxDB 3 container
>
> To stop a running InfluxDB 3 container, find and terminate the process or container--for example:
>
> <!--pytest.mark.skip-->
> ```bash
> docker container ls --filter "name=influxdb3"
> docker kill <CONTAINER_ID>
> ```
>
> _Currently, a bug prevents using {{< keybind all="Ctrl+c" >}} in the terminal to stop an InfluxDB 3 container._
{{% /expand %}}
{{% expand "S3 object storage" %}}
Store data in an S3-compatible object store.
This is useful for production deployments that require high availability and durability.
Provide your bucket name and credentials to access the S3 object store.
{{% show-in "enterprise" %}}
```bash
# S3 object store (default is the us-east-1 region)
# Specify the object store type and associated options
influxdb3 serve \
--node-id host01 \
--cluster-id cluster01 \
--object-store s3 \
--bucket OBJECT_STORE_BUCKET \
  --aws-access-key-id AWS_ACCESS_KEY_ID \
--aws-secret-access-key AWS_SECRET_ACCESS_KEY
```
```bash
# Minio or other open source object store
# (using the AWS S3 API with additional parameters)
# Specify the object store type and associated options
influxdb3 serve \
--node-id host01 \
--cluster-id cluster01 \
--object-store s3 \
--bucket OBJECT_STORE_BUCKET \
--aws-access-key-id AWS_ACCESS_KEY_ID \
--aws-secret-access-key AWS_SECRET_ACCESS_KEY \
--aws-endpoint ENDPOINT \
--aws-allow-http
```
{{% /show-in %}}
{{% show-in "core" %}}
```bash
# S3 object store (default is the us-east-1 region)
# Specify the object store type and associated options
influxdb3 serve \
--node-id host01 \
--object-store s3 \
--bucket OBJECT_STORE_BUCKET \
  --aws-access-key-id AWS_ACCESS_KEY_ID \
--aws-secret-access-key AWS_SECRET_ACCESS_KEY
```
```bash
# Minio or other open source object store
# (using the AWS S3 API with additional parameters)
# Specify the object store type and associated options
influxdb3 serve \
--node-id host01 \
--object-store s3 \
--bucket OBJECT_STORE_BUCKET \
--aws-access-key-id AWS_ACCESS_KEY_ID \
--aws-secret-access-key AWS_SECRET_ACCESS_KEY \
--aws-endpoint ENDPOINT \
--aws-allow-http
```
{{% /show-in %}}
{{% /expand %}}
{{% expand "Memory-based object store" %}}
Store data in RAM without persisting it on shutdown.
It's useful for rapid testing and development.
{{% show-in "enterprise" %}}
```bash
# Memory object store
# Stores data in RAM; doesn't persist data
influxdb3 serve \
--node-id host01 \
--cluster-id cluster01 \
--object-store memory
```
{{% /show-in %}}
{{% show-in "core" %}}
```bash
# Memory object store
# Stores data in RAM; doesn't persist data
influxdb3 serve \
--node-id host01 \
--object-store memory
```
{{% /show-in %}}
{{% /expand %}}
{{< /expand-wrapper >}}
For more information about server options, use the CLI help or view the
[InfluxDB 3 CLI reference](/influxdb3/version/reference/cli/influxdb3/serve/):
```bash
influxdb3 serve --help
```
{{% show-in "enterprise" %}}
## Set up licensing
When you first start a new instance, {{% product-name %}} prompts you to select a
license type.
InfluxDB 3 Enterprise licenses:
- **Authorize** usage of InfluxDB 3 Enterprise software for a single cluster.
- **Apply per cluster**, with limits based primarily on CPU cores.
- **Vary by license type**, each offering different capabilities and restrictions.
### Available license types
- **Trial**: 30-day trial license with full access to InfluxDB 3 Enterprise capabilities.
- **At-Home**: For at-home hobbyist use with limited access to InfluxDB 3 Enterprise capabilities.
- **Commercial**: Commercial license with full access to InfluxDB 3 Enterprise capabilities.
> [!Important]
> #### Trial and at-home licenses with Docker
>
> To generate the trial or home license in Docker, bypass the email prompt.
> The first time you start a new instance, provide your email address with the
> `--license-email` option or the `INFLUXDB3_LICENSE_EMAIL` environment variable.
>
> _Currently, if you use Docker and enter your email address in the prompt, a bug may
> prevent the container from generating the license._
>
> For more information, see [the Docker Compose example](/influxdb3/enterprise/admin/license/?t=Docker+compose#start-the-server-with-your-license-email).
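
For example, a `docker run` command that provides the license email on first
startup might look like the following sketch (the email address, IDs, and paths
are placeholders):

<!--pytest.mark.skip-->
```bash
docker run -it \
  --volume /path/on/host:/path/in/container \
  influxdb:3-enterprise influxdb3 serve \
  --node-id my_host \
  --cluster-id my_cluster \
  --object-store file \
  --data-dir /path/in/container \
  --license-email EMAIL_ADDRESS
```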
{{% /show-in %}}
> [!Tip]
> #### Use the InfluxDB 3 Explorer query interface (beta)
>
> You can complete the remaining steps in this guide using InfluxDB 3 Explorer
> (currently in beta), the web-based query and administrative interface for InfluxDB 3.
> Explorer provides visual management of databases and tokens and an
> easy way to write and query your time series data.
>
> For more information, see the [InfluxDB 3 Explorer documentation](/influxdb3/explorer/).
## Set up authorization
{{% product-name %}} uses token-based authorization to authorize actions in the
database. Authorization is enabled by default when you start the server.
With authorization enabled, you must provide a token with `influxdb3` CLI
commands and HTTP API requests.
{{% show-in "enterprise" %}}
{{% product-name %}} supports the following types of tokens:
- **admin token**: Grants access to all CLI actions and API endpoints.
- **resource tokens**: Tokens that grant read and write access to specific
resources (databases and system information endpoints) on the server.
- A database token grants access to write and query data in a
database
- A system token grants read access to system information endpoints and
metrics for the server
{{% /show-in %}}
{{% show-in "core" %}}
{{% product-name %}} supports _admin_ tokens, which grant access to all CLI actions and API endpoints.
{{% /show-in %}}
For more information about tokens and authorization, see [Manage tokens](/influxdb3/version/admin/tokens/).
### Create an operator token
After you start the server, create your first admin token.
The first admin token you create is the _operator_ token for the server.
Use the [`influxdb3 create token` command](/influxdb3/version/reference/cli/influxdb3/create/token/)
with the `--admin` option to create your operator token:
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[CLI](#)
[Docker](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```bash
influxdb3 create token --admin
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
{{% code-placeholders "CONTAINER_NAME" %}}
```bash
# With Docker — in a new terminal:
docker exec -it CONTAINER_NAME influxdb3 create token --admin
```
{{% /code-placeholders %}}
Replace {{% code-placeholder-key %}}`CONTAINER_NAME`{{% /code-placeholder-key %}} with the name of your running Docker container.
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
The command returns a token string for authenticating CLI commands and API requests.
> [!Important]
> #### Store your token securely
>
> InfluxDB displays the token string only when you create it.
> Store your token securely—you cannot retrieve it from the database later.
### Set your token for authorization
Use your operator token to authenticate server actions in {{% product-name %}},
such as {{% show-in "enterprise" %}}creating additional tokens, {{% /show-in %}}
performing administrative tasks{{% show-in "enterprise" %}},{{% /show-in %}}
and writing and querying data.
Use one of the following methods to provide your token and authenticate `influxdb3` CLI commands.
In your command, replace {{% code-placeholder-key %}}`YOUR_AUTH_TOKEN`{{% /code-placeholder-key %}} with your token string (for example, the [operator token](#create-an-operator-token) from the previous step).
{{< tabs-wrapper >}}
{{% tabs %}}
[Environment variable (recommended)](#)
[Command option](#)
{{% /tabs %}}
{{% tab-content %}}
Set the `INFLUXDB3_AUTH_TOKEN` environment variable to have the CLI use your
token automatically:
{{% code-placeholders "YOUR_AUTH_TOKEN" %}}
```bash
export INFLUXDB3_AUTH_TOKEN=YOUR_AUTH_TOKEN
```
{{% /code-placeholders %}}
{{% /tab-content %}}
{{% tab-content %}}
Include the `--token` option with CLI commands:
{{% code-placeholders "YOUR_AUTH_TOKEN" %}}
```bash
influxdb3 show databases --token YOUR_AUTH_TOKEN
```
{{% /code-placeholders %}}
{{% /tab-content %}}
{{< /tabs-wrapper >}}
For HTTP API requests, include your token in the `Authorization` header--for example:
{{% code-placeholders "YOUR_AUTH_TOKEN" %}}
```bash
curl "http://{{< influxdb/host >}}/api/v3/configure/database" \
--header "Authorization: Bearer YOUR_AUTH_TOKEN"
```
{{% /code-placeholders %}}
#### Learn more about tokens and permissions
- [Manage admin tokens](/influxdb3/version/admin/tokens/admin/) - Understand and
manage operator and named admin tokens
{{% show-in "enterprise" %}}
- [Manage resource tokens](/influxdb3/version/admin/tokens/resource/) - Create,
list, and delete resource tokens
{{% /show-in %}}
- [Authentication](/influxdb3/version/reference/internals/authentication/) -
Understand authentication, authorizations, and permissions in {{% product-name %}}
<!-- //TODO - Authenticate with compatibility APIs -->
{{% show-in "core" %}}
{{% page-nav
prev="/influxdb3/version/get-started/"
prevText="Get started"
next="/influxdb3/version/get-started/write/"
nextText="Write data"
%}}
{{% /show-in %}}
{{% show-in "enterprise" %}}
{{% page-nav
prev="/influxdb3/version/get-started/"
prevText="Get started"
next="/influxdb3/version/get-started/multi-server/"
nextText="Create a multi-node cluster"
%}}
{{% /show-in %}}

View File

@ -0,0 +1,252 @@
<!-- ALLOW SHORTCODE -->
{{% product-name %}} is designed for high write-throughput and uses an efficient,
human-readable write syntax called _[line protocol](#line-protocol)_. InfluxDB
is a schema-on-write database, meaning you can start writing data and InfluxDB
creates the logical database, tables, and their schemas automatically, without
any required intervention. Once InfluxDB creates the schema, it validates future
write requests against the schema before accepting new data.
You can add new tags and fields later as your schema changes.
{{% show-in "core" %}}
> [!Note]
> #### InfluxDB 3 Core is optimized for recent data
>
> {{% product-name %}} is optimized for recent data but accepts writes from any time period.
> The system persists data to Parquet files for historical analysis with [InfluxDB 3 Enterprise](/influxdb3/enterprise/get-started/) or third-party tools.
> For extended historical queries and optimized data organization, consider using [InfluxDB 3 Enterprise](/influxdb3/enterprise/get-started/).
{{% /show-in %}}
<!-- TOC -->
- [Line protocol](#line-protocol)
- [Construct line protocol](#construct-line-protocol)
- [Write data using the CLI](#write-data-using-the-cli)
- [Other tools for writing data](#other-tools-for-writing-data)
<!-- /TOC -->
## Line protocol
{{% product-name %}} accepts data in
[line protocol](/influxdb3/version/reference/syntax/line-protocol/) syntax.
Line protocol consists of the following elements:
<!-- vale InfluxDataDocs.v3Schema = NO -->
{{< req type="key" >}}
- {{< req "\*" >}} **table**: A string that identifies the
[table](/influxdb3/version/reference/glossary/#table) to store the data in.
- **tag set**: Comma-delimited list of key value pairs, each representing a tag.
Tag keys and values are unquoted strings. _Spaces, commas, and equal characters
must be escaped._
- {{< req "\*" >}} **field set**: Comma-delimited list of key value pairs, each
representing a field.
Field keys are unquoted strings. _Spaces and commas must be escaped._
Field values can be one of the following types:
- [strings](/influxdb3/clustered/reference/syntax/line-protocol/#string) (quoted)
- [floats](/influxdb3/clustered/reference/syntax/line-protocol/#float)
- [integers](/influxdb3/clustered/reference/syntax/line-protocol/#integer)
- [unsigned integers](/influxdb3/clustered/reference/syntax/line-protocol/#uinteger)
- [booleans](/influxdb3/clustered/reference/syntax/line-protocol/#boolean)
- **timestamp**: [Unix timestamp](/influxdb3/clustered/reference/syntax/line-protocol/#unix-timestamp)
associated with the data. InfluxDB supports up to nanosecond precision.
<!-- vale InfluxDataDocs.v3Schema = YES -->
{{< expand-wrapper >}}
{{% expand "How are InfluxDB line protocol elements parsed?" %}}
<!-- vale InfluxDataDocs.v3Schema = YES -->
- **table**: Everything before the _first unescaped comma before the first
whitespace_.
- **tag set**: Key-value pairs between the _first unescaped comma_ and the _first
unescaped whitespace_.
- **field set**: Key-value pairs between the _first and second unescaped whitespaces_.
- **timestamp**: Integer value after the _second unescaped whitespace_.
- Lines are separated by the newline character (`\n`). Line protocol is
whitespace sensitive.
<!-- vale InfluxDataDocs.v3Schema = YES -->
{{% /expand %}}
{{< /expand-wrapper >}}
_For schema design recommendations, see
[InfluxDB schema design recommendations](/influxdb3/version/write-data/best-practices/schema-design/)._
---
{{< influxdb/line-protocol version="v3" >}}
---
## Construct line protocol
<!-- vale InfluxDataDocs.v3Schema = NO -->
With a basic understanding of line protocol, you can now construct line protocol
and write data to {{% product-name %}}.
Consider a use case where you collect data from sensors in your home.
Each sensor collects temperature, humidity, and carbon monoxide readings.
To collect this data, use the following schema:
- **table**: `home`
- **tags**
- `room`: Living Room or Kitchen
- **fields**
- `temp`: temperature in °C (float)
- `hum`: percent humidity (float)
- `co`: carbon monoxide in parts per million (integer)
- **timestamp**: Unix timestamp in _second_ precision
<!-- vale InfluxDataDocs.v3Schema = YES -->
The following line protocol sample represents data collected hourly beginning at
{{% influxdb/custom-timestamps-span %}}**2022-01-01T08:00:00Z (UTC)** until **2022-01-01T20:00:00Z (UTC)**{{% /influxdb/custom-timestamps-span %}}.
_These timestamps are dynamic and can be updated by clicking the {{% icon "clock" %}}
icon in the bottom right corner._
{{% influxdb/custom-timestamps %}}
##### Home sensor data line protocol
```text
home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000
home,room=Kitchen temp=21.0,hum=35.9,co=0i 1641024000
home,room=Living\ Room temp=21.4,hum=35.9,co=0i 1641027600
home,room=Kitchen temp=23.0,hum=36.2,co=0i 1641027600
home,room=Living\ Room temp=21.8,hum=36.0,co=0i 1641031200
home,room=Kitchen temp=22.7,hum=36.1,co=0i 1641031200
home,room=Living\ Room temp=22.2,hum=36.0,co=0i 1641034800
home,room=Kitchen temp=22.4,hum=36.0,co=0i 1641034800
home,room=Living\ Room temp=22.2,hum=35.9,co=0i 1641038400
home,room=Kitchen temp=22.5,hum=36.0,co=0i 1641038400
home,room=Living\ Room temp=22.4,hum=36.0,co=0i 1641042000
home,room=Kitchen temp=22.8,hum=36.5,co=1i 1641042000
home,room=Living\ Room temp=22.3,hum=36.1,co=0i 1641045600
home,room=Kitchen temp=22.8,hum=36.3,co=1i 1641045600
home,room=Living\ Room temp=22.3,hum=36.1,co=1i 1641049200
home,room=Kitchen temp=22.7,hum=36.2,co=3i 1641049200
home,room=Living\ Room temp=22.4,hum=36.0,co=4i 1641052800
home,room=Kitchen temp=22.4,hum=36.0,co=7i 1641052800
home,room=Living\ Room temp=22.6,hum=35.9,co=5i 1641056400
home,room=Kitchen temp=22.7,hum=36.0,co=9i 1641056400
home,room=Living\ Room temp=22.8,hum=36.2,co=9i 1641060000
home,room=Kitchen temp=23.3,hum=36.9,co=18i 1641060000
home,room=Living\ Room temp=22.5,hum=36.3,co=14i 1641063600
home,room=Kitchen temp=23.1,hum=36.6,co=22i 1641063600
home,room=Living\ Room temp=22.2,hum=36.4,co=17i 1641067200
home,room=Kitchen temp=22.7,hum=36.5,co=26i 1641067200
```
{{% /influxdb/custom-timestamps %}}
## Write data using the CLI
To quickly get started writing data, use the
[`influxdb3 write` command](/influxdb3/version/reference/cli/influxdb3/write/).
Include the following:
- `--database` option that identifies the target database
- `--token` option that specifies the token to use _(unless the `INFLUXDB3_AUTH_TOKEN`
environment variable is already set)_
- Quoted line protocol data via standard input (stdin)
{{% code-placeholders "DATABASE_NAME|AUTH_TOKEN" %}}
```bash
influxdb3 write \
--database DATABASE_NAME \
--token AUTH_TOKEN \
--precision s \
'home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1641024000
home,room=Kitchen temp=21.0,hum=35.9,co=0i 1641024000
home,room=Living\ Room temp=21.4,hum=35.9,co=0i 1641027600
home,room=Kitchen temp=23.0,hum=36.2,co=0i 1641027600
home,room=Living\ Room temp=21.8,hum=36.0,co=0i 1641031200
home,room=Kitchen temp=22.7,hum=36.1,co=0i 1641031200
home,room=Living\ Room temp=22.2,hum=36.0,co=0i 1641034800
home,room=Kitchen temp=22.4,hum=36.0,co=0i 1641034800
home,room=Living\ Room temp=22.2,hum=35.9,co=0i 1641038400
home,room=Kitchen temp=22.5,hum=36.0,co=0i 1641038400
home,room=Living\ Room temp=22.4,hum=36.0,co=0i 1641042000
home,room=Kitchen temp=22.8,hum=36.5,co=1i 1641042000
home,room=Living\ Room temp=22.3,hum=36.1,co=0i 1641045600
home,room=Kitchen temp=22.8,hum=36.3,co=1i 1641045600
home,room=Living\ Room temp=22.3,hum=36.1,co=1i 1641049200
home,room=Kitchen temp=22.7,hum=36.2,co=3i 1641049200
home,room=Living\ Room temp=22.4,hum=36.0,co=4i 1641052800
home,room=Kitchen temp=22.4,hum=36.0,co=7i 1641052800
home,room=Living\ Room temp=22.6,hum=35.9,co=5i 1641056400
home,room=Kitchen temp=22.7,hum=36.0,co=9i 1641056400
home,room=Living\ Room temp=22.8,hum=36.2,co=9i 1641060000
home,room=Kitchen temp=23.3,hum=36.9,co=18i 1641060000
home,room=Living\ Room temp=22.5,hum=36.3,co=14i 1641063600
home,room=Kitchen temp=23.1,hum=36.6,co=22i 1641063600
home,room=Living\ Room temp=22.2,hum=36.4,co=17i 1641067200
home,room=Kitchen temp=22.7,hum=36.5,co=26i 1641067200'
```
{{% /code-placeholders %}}
In the code samples, replace the following placeholders with your values:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
the name of the [database](/influxdb3/version/admin/databases/) to write to
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}:
your {{% token-link "database" %}}{{% show-in "enterprise" %}} with permission
to write to the specified database{{% /show-in %}}
### Write data from a file
To write line protocol you have saved to a file, pass the `--file` option--for example, save the
[sample line protocol](#home-sensor-data-line-protocol) to a file named `sensor_data`
and then enter the following command:
{{% code-placeholders "DATABASE_NAME|AUTH_TOKEN" %}}
```bash
influxdb3 write \
--database DATABASE_NAME \
--token AUTH_TOKEN \
--precision s \
--accept-partial \
--file path/to/sensor_data
```
{{% /code-placeholders %}}
Replace the following placeholders with your values:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the [database](/influxdb3/version/admin/databases/) to write to.
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in "enterprise" %}} with permission to write to the specified database{{% /show-in %}}
## Other tools for writing data
There are many ways to write data to your {{% product-name %}} database, including:
- [InfluxDB HTTP API](/influxdb3/version/write-data/http-api/): Recommended for
  batching and higher-volume write workloads; see the example sketch below.
- [InfluxDB client libraries](/influxdb3/version/write-data/client-libraries/):
Client libraries that integrate with your code to construct data as time
series points and write the data as line protocol to your
{{% product-name %}} database.
- [Telegraf](/telegraf/v1/): A data collection agent with over 300 plugins for
collecting, processing, and writing data.
For more information, see [Write data to {{% product-name %}}](/influxdb3/version/write-data/).
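
For example, a minimal HTTP API write to the `/api/v3/write_lp` endpoint might
look like the following sketch (the `db` query parameter name follows the query
examples in this guide and is an assumption for this endpoint; see the write
API documentation for exact parameters):

{{% code-placeholders "DATABASE_NAME|AUTH_TOKEN" %}}
```bash
curl "http://{{< influxdb/host >}}/api/v3/write_lp?db=DATABASE_NAME" \
  --header "Authorization: Bearer AUTH_TOKEN" \
  --data-raw 'home,room=Kitchen temp=22.7,hum=36.5,co=26i'
```
{{% /code-placeholders %}}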
{{% show-in "enterprise" %}}
{{% page-nav
prev="/influxdb3/version/get-started/multi-server/"
prevText="Create a multi-node cluster"
next="/influxdb3/version/get-started/query/"
nextText="Query data"
%}}
{{% /show-in %}}
{{% show-in "core" %}}
{{% page-nav
prev="/influxdb3/version/get-started/setup/"
prevText="Set up InfluxDB"
next="/influxdb3/version/get-started/query/"
nextText="Query data"
%}}
{{% /show-in %}}

View File

@ -21,12 +21,6 @@ Provide the following with your request:
- **Headers:**
- **Authorization:** `Bearer AUTH_TOKEN`
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization
> token. You can either omit this header or include it with an arbitrary
> token string.
- **Query parameters:**
- **db**: the database to query
- **rp**: Optional: the retention policy to query
@ -44,9 +38,9 @@ curl --get https://{{< influxdb/host >}}/query \
Replace the following configuration values:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
the name of the database to query
the name of the [database](/influxdb3/version/admin/databases/) to query
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}:
your authorization token
your {{< product-name >}} {{% token-link %}}{{% show-in "enterprise" %}} with read access to the database{{% /show-in %}}
## Return results as JSON or CSV
@ -57,7 +51,7 @@ with the `application/csv` or `text/csv` MIME type:
{{% code-placeholders "(DATABASE|AUTH)_(NAME|TOKEN)" %}}
```sh
curl --get https://{{< influxdb/host >}}/query \
--header "Authorization: BEARER AUTH_TOKEN" \
--header "Authorization: Bearer AUTH_TOKEN" \
--header "Accept: application/csv" \
--data-urlencode "db=DATABASE_NAME" \
--data-urlencode "q=SELECT * FROM home"

View File

@ -35,7 +35,8 @@ Include the following parameters:
The following example sends an HTTP `GET` request with a URL-encoded SQL query:
```bash
curl -v "http://{{< influxdb/host >}}/api/v3/query_sql?db=servers&q=select+*+from+cpu+limit+5"
curl "http://{{< influxdb/host >}}/api/v3/query_sql?db=servers&q=select+*+from+cpu+limit+5" \
--header "Authorization: Bearer AUTH_TOKEN"
```
### Example: Query passing JSON parameters
@ -44,7 +45,8 @@ The following example sends an HTTP `POST` request with parameters in a JSON pay
```bash
curl http://{{< influxdb/host >}}/api/v3/query_sql \
--data '{"db": "server", "q": "select * from cpu limit 5"}'
--header "Authorization: Bearer AUTH_TOKEN"
--json '{"db": "server", "q": "select * from cpu limit 5"}'
```
### Query system information
@ -71,7 +73,8 @@ tables (`"table_schema":"iox"`), system tables, and information schema tables
for a database:
```bash
curl "http://{{< influxdb/host >}}/api/v3/query_sql?db=mydb&format=jsonl&q=show%20tables"
curl "http://{{< influxdb/host >}}/api/v3/query_sql?db=mydb&format=jsonl&q=show%20tables" \
--header "Authorization: Bearer AUTH_TOKEN"
```
The response body contains the following JSONL:
@ -117,7 +120,7 @@ that surround field names._
```bash
curl "http://localhost:8181/api/v3/query_sql" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer AUTH_TOKEN" \
--json '{
"db": "mydb",
"q": "SELECT * FROM information_schema.columns WHERE table_schema = '"'iox'"' AND table_name = '"'system_swap'"'",
@ -144,7 +147,7 @@ To view recently executed queries, query the `queries` system table:
```bash
curl "http://localhost:8181/api/v3/query_sql" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer AUTH_TOKEN" \
--json '{
"db": "mydb",
"q": "SELECT * FROM system.queries LIMIT 2",
@ -180,7 +183,8 @@ Include the following parameters:
The following example sends an HTTP `GET` request with a URL-encoded InfluxQL query:
```bash
curl -v "http://{{< influxdb/host >}}/api/v3/query_influxql?db=servers&q=select+*+from+cpu+limit+5"
curl "http://{{< influxdb/host >}}/api/v3/query_influxql?db=servers&q=select+*+from+cpu+limit+5" \
--header "Authorization: Bearer AUTH_TOKEN"
```
### Example: Query passing JSON parameters
@ -189,5 +193,6 @@ The following example sends an HTTP `POST` request with parameters in a JSON pay
```bash
curl http://{{< influxdb/host >}}/api/v3/query_influxql \
--data '{"db": "server", "q": "select * from cpu limit 5"}'
--header "Authorization: Bearer AUTH_TOKEN" \
--json '{"db": "server", "q": "select * from cpu limit 5"}'
```

View File

@ -4,12 +4,12 @@ to query data in {{< product-name >}} with SQL or InfluxQL.
Provide the following with your command:
<!-- - **Authorization token**: A [authorization token](/influxdb3/version/admin/tokens/#database-tokens)
with read permissions on the queried database.
- **Authorization token**: Your {{< product-name >}} {{% token-link "admin" "admin" %}}
with read permissions on the database.
Provide this using one of the following:
- `--token` command option
- `INFLUXDB3_AUTH_TOKEN` environment variable -->
- `INFLUXDB3_AUTH_TOKEN` environment variable
- **Database name**: The name of the database to query.
Provide this using one of the following:
@ -53,6 +53,7 @@ Provide the following with your command:
```bash
influxdb3 query \
--token AUTH_TOKEN \
--database DATABASE_NAME \
"SELECT * FROM home"
```
@ -62,6 +63,7 @@ influxdb3 query \
```bash
influxdb3 query \
--token AUTH_TOKEN \
--database DATABASE_NAME \
--file ./query.sql
```
@ -70,7 +72,7 @@ influxdb3 query \
<!--pytest.mark.skip-->
```bash
cat ./query.sql | influxdb3 query --database DATABASE_NAME
cat ./query.sql | influxdb3 query --token AUTH_TOKEN --database DATABASE_NAME
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
@ -94,6 +96,7 @@ cat ./query.sql | influxdb3 query --database DATABASE_NAME
```bash
influxdb3 query \
--token AUTH_TOKEN \
--language influxql \
--database DATABASE_NAME \
"SELECT * FROM home"
@ -104,8 +107,8 @@ influxdb3 query \
```bash
influxdb3 query \
--token AUTH_TOKEN \
--language influxql \
--database DATABASE_NAME \
--file ./query.influxql
```
{{% /code-tab-content %}}
@ -114,6 +117,7 @@ influxdb3 query \
```bash
cat ./query.influxql | influxdb3 query \
--token AUTH_TOKEN \
--language influxql \
--database DATABASE_NAME
```
@ -150,6 +154,7 @@ Use the `--format` flag to specify the output format:
{{% influxdb/custom-timestamps %}}
```sh
influxdb3 query \
--token AUTH_TOKEN \
--database DATABASE_NAME \
--format json \
"SELECT * FROM home WHERE time >= '2022-01-01T08:00:00Z' LIMIT 5"
@ -217,6 +222,7 @@ the `influxdb3 query` command:
{{% influxdb/custom-timestamps %}}
```sh
influxdb3 query \
--token AUTH_TOKEN \
--database DATABASE_NAME \
--format parquet \
--output path/to/results.parquet \

View File

@ -216,21 +216,16 @@ home,room=Kitchen temp=22.7,hum=36.5,co=26i 1641067200
Replace the following in the sample script:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
the name of database to write to
the name of [database](/influxdb3/version/admin/databases/) to write to
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}:
your {{< product-name >}} authorization token
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization token.
> You can either omit the CLI `--token` option or the HTTP `Authorization` header or
> you can provide an arbitrary token string.
your {{< product-name >}} {{% token-link %}}
{{% /expand %}}
{{< /expand-wrapper >}}
## Home sensor actions data
Includes hypothetical actions triggered by data in the [Get started home sensor data](#get-started-home-sensor-data)
Includes hypothetical actions triggered by data in the [home sensor data](#home-sensor-data)
and is a companion dataset to that sample dataset.
To customize timestamps in the dataset, use the {{< icon "clock" >}} button in
the lower right corner of the page.
@ -371,12 +366,7 @@ Replace the following in the sample script:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
the name of database to write to
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}:
your {{< product-name >}} authorization token
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization token.
> You can either omit the CLI `--token` option or the HTTP `Authorization` header or
> you can provide an arbitrary token string.
your {{< product-name >}} {{% token-link %}}
{{% /expand %}}
{{< /expand-wrapper >}}
@ -478,12 +468,7 @@ Replace the following in the sample script:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
the name of database to write to
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}:
your {{< product-name >}} authorization token
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization token.
> You can either omit the CLI `--token` option or the HTTP `Authorization` header or
> you can provide an arbitrary token string.
your {{< product-name >}} {{% token-link %}}
{{% /expand %}}
{{< /expand-wrapper >}}
@ -575,12 +560,7 @@ Replace the following in the sample script:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
the name of database to write to
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}:
your {{< product-name >}} authorization token
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization token.
> You can either omit the CLI `--token` option or the HTTP `Authorization` header or
> you can provide an arbitrary token string.
your {{< product-name >}} {{% token-link %}}
{{% /expand %}}
{{< /expand-wrapper >}}
@ -674,12 +654,7 @@ Replace the following in the sample script:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
the name of the database to write to
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}:
your {{< product-name >}} authorization token
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization token.
> You can either omit the CLI `--token` option or the HTTP `Authorization` header or
> you can provide an arbitrary token string.
your {{< product-name >}} {{% token-link %}}
{{% /expand %}}
{{< /expand-wrapper >}}

View File

@ -69,13 +69,6 @@ When creating an InfluxDB data source that uses SQL to query data:
- **Database**: Provide a default database name to query.
- **Token**: Provide an arbitrary, non-empty string.
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization token.
> However, if you included a `--token` option or defined the
> `INFLUXDB3_AUTH_TOKEN` environment variable when starting your
> {{< product-name >}} server, provide that token.
- **Insecure Connection**: If _not_ using HTTPS, enable this option.
3. Click **Save & test**.
@ -103,11 +96,6 @@ When creating an InfluxDB data source that uses InfluxQL to query data:
- **User**: Provide an arbitrary string.
_This credential is ignored when querying {{% product-name %}}, but it cannot be empty._
- **Password**: Provide an arbitrary string.
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization
> token, but the **Password** field does require a value.
- **HTTP Method**: Choose one of the available HTTP request methods to use when querying data:
- **POST** ({{< req text="Recommended" >}})

View File

@ -211,11 +211,8 @@ a database connection.
**Query parameters**
- **`?database`**: URL-encoded InfluxDB database name
- **`?token`**: InfluxDB authorization token
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization token.
- **`?database`**: URL-encoded [database](/influxdb3/version/admin/databases/) name
- **`?token`**: {{< product-name >}} {{% token-link %}}
{{< code-callout "&lt;(domain|port|database-name|token)&gt;" >}}
{{< code-callout "localhost|8181|example-database|example-token" >}}

View File

@ -67,10 +67,6 @@ the **Flight SQL JDBC driver**.
- **Dialect**: PostgreSQL
- **Username**: _Leave empty_
- **Password**: _Leave empty_
> [!Note]
> While in beta, {{< product-name >}} does not require authorization tokens.
- **Properties File**: _Leave empty_
4. Click **Sign In**.

View File

@ -15,8 +15,9 @@ to line protocol.
>
> #### Choose the write endpoint for your workload
>
> When creating new write workloads, use the HTTP API
> [`/api/v3/write_lp` endpoint with client libraries](/influxdb3/version/write-data/api-client-libraries/).
> When creating new write workloads, use the
> [InfluxDB HTTP API `/api/v3/write_lp` endpoint](/influxdb3/version/write-data/http-api/v3-write-lp/)
> and [client libraries](/influxdb3/version/write-data/client-libraries/).
>
> When bringing existing v1 write workloads, use the {{% product-name %}}
> HTTP API [`/write` endpoint](/influxdb3/core/api/v3/#operation/PostV1Write).
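For example, a minimal v1-compatible write with `curl` might look like the following sketch (the host, database name, and token are placeholders, and bearer-token authentication is assumed):

```bash
# Write one line of line protocol through the v1-compatible /write endpoint
curl "http://localhost:8181/write?db=DATABASE_NAME&precision=s" \
  --header "Authorization: Bearer AUTH_TOKEN" \
  --data-raw "home,room=Kitchen temp=22.5 1735545600"
```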

View File

@ -162,14 +162,9 @@ Replace the following:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
the name of the database to write data to
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}:
your {{< product-name >}} authorization token.
your {{< product-name >}} {{% token-link %}}
_Store this in a secret store or environment variable to avoid exposing the raw token string._
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization token.
> You can either omit the `Authorization` header or you can provide an
> arbitrary token string.
{{% /tab-content %}}
{{< /tabs-wrapper >}}
@ -248,13 +243,9 @@ EOF
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
the name of the database to write data to
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}:
your {{< product-name >}} authorization token.
your {{< product-name >}} {{% token-link %}}
_Store this in a secret store or environment variable to avoid exposing the raw token string._
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization token.
> For the `token` option, provide an empty or arbitrary token string.
2. To test the input and processor, enter the following command:
<!--pytest-codeblocks:cont-->
@ -361,12 +352,9 @@ EOF
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
the name of the database to write data to
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}:
your {{< product-name >}} authorization token.
your {{< product-name >}} {{% token-link %}}
_Store this in a secret store or environment variable to avoid exposing the raw token string._
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization token.
> For the `token` option, provide an empty or arbitrary token string.
3. To test the input and processor, enter the following command:
@ -463,12 +451,9 @@ table, tag set, and timestamp), and then merges points in each series:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
the name of the database to write data to
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}:
your {{< product-name >}} authorization token.
your {{< product-name >}} {{% token-link %}}
_Store this in a secret store or environment variable to avoid exposing the raw token string._
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization token.
> For the `token` option, provide an empty or arbitrary token string.
3. To test the input and aggregator, enter the following command:
@ -566,12 +551,9 @@ field values, and then write the data to InfluxDB:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
the name of the database to write data to
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}:
your {{< product-name >}} authorization token.
your {{< product-name >}} {{% token-link %}}
_Store this in a secret store or environment variable to avoid exposing the raw token string._
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization token.
> For the `token` option, provide an empty or arbitrary token string.
3. To test the input and processor, enter the following command:
@ -805,12 +787,9 @@ EOF
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
the name of the database to write data to
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}:
your {{< product-name >}} authorization token.
your {{< product-name >}} {{% token-link %}}
_Store this in a secret store or environment variable to avoid exposing the raw token string._
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization token.
> For the `token` option, provide an empty or arbitrary token string.
5. To test the input and processor, enter the following command:

View File

@ -1,193 +1,42 @@
Use the `/api/v3/write_lp` HTTP API endpoint and InfluxDB v3 API clients to write points as line protocol data to {{% product-name %}}.
- [Use the /api/v3/write\_lp endpoint](#use-the-apiv3write_lp-endpoint)
- [Example: write data using the /api/v3 HTTP API](#example-write-data-using-the-apiv3-http-api)
- [Write responses](#write-responses)
- [Use no\_sync for immediate write responses](#use-no_sync-for-immediate-write-responses)
- [Use API client libraries](#use-api-client-libraries)
- [Construct line protocol](#construct-line-protocol)
- [Set up your project](#set-up-your-project)
## Use the /api/v3/write_lp endpoint
{{% product-name %}} adds the `/api/v3/write_lp` endpoint.
{{<api-endpoint endpoint="/api/v3/write_lp?db=mydb&precision=nanosecond&accept_partial=true&no_sync=false" method="post" >}}
This endpoint accepts the same line protocol syntax as [previous versions](/influxdb3/version/write-data/compatibility-apis/),
and supports the following parameters:
- `?accept_partial=<BOOLEAN>`: Accept or reject partial writes (default is `true`).
- `?no_sync=<BOOLEAN>`: Control when writes are acknowledged:
- `no_sync=true`: Acknowledges writes before WAL persistence completes.
- `no_sync=false`: Acknowledges writes after WAL persistence completes (default).
- `?precision=<PRECISION>`: Specify the precision of the timestamp. The default is nanosecond precision.
For more information about the parameters, see [Write data](/influxdb3/version/write-data/).
InfluxData provides supported InfluxDB 3 client libraries that you can integrate with your code
to construct data as time series points, and then write them as line protocol to an {{% product-name %}} database.
For more information, see how to [use InfluxDB client libraries to write data](/influxdb3/version/write-data/client-libraries/).
### Example: write data using the /api/v3 HTTP API
The following examples show how to write data using `curl` and the `/api/v3/write_lp` HTTP endpoint.
To show the difference between accepting and rejecting partial writes, line `2` in the example contains a string value (`"hi"`) for a float field (`temp`).
#### Partial write of line protocol occurred
With `accept_partial=true` (default):
```bash
curl -v "http://{{< influxdb/host >}}/api/v3/write_lp?db=sensors&precision=auto" \
--data-raw 'home,room=Sunroom temp=96
home,room=Sunroom temp="hi"'
```
The response is the following:
```
< HTTP/1.1 400 Bad Request
...
{
"error": "partial write of line protocol occurred",
"data": [
{
"original_line": "home,room=Sunroom temp=hi",
"line_number": 2,
"error_message": "invalid column type for column 'temp', expected iox::column_type::field::float, got iox::column_type::field::string"
}
]
}
```
Line `1` is written and queryable.
Line `2` is rejected.
The response is an HTTP error (`400`) status, and the response body contains the error message `partial write of line protocol occurred` with details about the problem line.
#### Parsing failed for write_lp endpoint
With `accept_partial=false`:
```bash
curl -v "http://{{< influxdb/host >}}/api/v3/write_lp?db=sensors&precision=auto&accept_partial=false" \
--data-raw 'home,room=Sunroom temp=96
home,room=Sunroom temp="hi"'
```
The response is the following:
```
< HTTP/1.1 400 Bad Request
...
{
"error": "parsing failed for write_lp endpoint",
"data": {
"original_line": "home,room=Sunroom temp=hi",
"line_number": 2,
"error_message": "invalid column type for column 'temp', expected iox::column_type::field::float, got iox::column_type::field::string"
}
}
```
InfluxDB rejects all points in the batch.
The response is an HTTP error (`400`) status, and the response body contains `parsing failed for write_lp endpoint` and details about the problem line.
For more information about the ingest path and data flow, see [Data durability](/influxdb3/version/reference/internals/durability/).
### Write responses
By default, InfluxDB acknowledges writes after flushing the WAL file to the Object store (occurring every second).
For high write throughput, you can send multiple concurrent write requests.
### Use no_sync for immediate write responses
To reduce the latency of writes, use the `no_sync` write option, which acknowledges writes _before_ WAL persistence completes.
When `no_sync=true`, InfluxDB validates the data, writes the data to the WAL, and then immediately responds to the client, without waiting for persistence to the Object store.
Using `no_sync=true` is best when prioritizing high-throughput writes over absolute durability.
- Default behavior (`no_sync=false`): Waits for data to be written to the Object store before acknowledging the write. Reduces the risk of data loss, but increases the latency of the response.
- With `no_sync=true`: Reduces write latency, but increases the risk of data loss in case of a crash before WAL persistence.
#### Immediate write using the HTTP API
The `no_sync` parameter controls when writes are acknowledged--for example:
```bash
curl "http://localhost:8181/api/v3/write_lp?db=sensors&precision=auto&no_sync=true" \
--data-raw "home,room=Sunroom temp=96"
```
## Use API client libraries
Use InfluxDB 3 client libraries that integrate with your code to construct data
as time series points, and
then write them as line protocol to an {{% product-name %}} database.
as time series points, and then write them as line protocol to an
{{% product-name %}} database.
- [Set up your project](#set-up-your-project)
- [Initialize a project directory](#initialize-a-project-directory)
- [Install the client library](#install-the-client-library)
- [Construct line protocol](#construct-line-protocol)
- [Example home schema](#example-home-schema)
- [Set up your project](#set-up-your-project)
- [Construct points and write line protocol](#construct-points-and-write-line-protocol)
### Construct line protocol
## Set up your project
With a [basic understanding of line protocol](/influxdb3/version/write-data/#line-protocol),
you can construct line protocol data and write it to {{% product-name %}}.
Set up your {{< product-name >}} project and credentials
to write data using the InfluxDB 3 client library for your programming language
of choice.
All InfluxDB client libraries write data in line protocol format to InfluxDB.
Client library `write` methods let you provide data as raw line protocol or as
`Point` objects that the client library converts to line protocol. If your
program creates the data you write to InfluxDB, use the client library `Point`
interface to take advantage of type safety in your program.
#### Example home schema
Consider a use case where you collect data from sensors in your home. Each
sensor collects temperature, humidity, and carbon monoxide readings.
To collect this data, use the following schema:
<!-- vale InfluxDataDocs.v3Schema = NO -->
- **table**: `home`
- **tags**
- `room`: Living Room or Kitchen
- **fields**
- `temp`: temperature in °C (float)
- `hum`: percent humidity (float)
- `co`: carbon monoxide in parts per million (integer)
- **timestamp**: Unix timestamp in _second_ precision
<!-- vale InfluxDataDocs.v3Schema = YES -->
The following example shows how to construct and write points that follow the
`home` schema.
### Set up your project
1. [Install {{< product-name >}}](/influxdb3/version/install/)
2. [Set up {{< product-name >}}](/influxdb3/version/get-started/setup/)
3. Create a project directory and store your
{{< product-name >}} credentials as environment variables or in a project
configuration file, such as a `.env` ("dotenv") file.
After setting up {{< product-name >}} and your project, you should have the following:
- {{< product-name >}} credentials:
- [Database](/influxdb3/version/admin/databases/)
- Authorization token
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization token.
- [Authorization token](/influxdb3/version/admin/tokens/)
- {{% product-name %}} URL
- A directory for your project.
- Credentials stored as environment variables or in a project configuration
file--for example, a `.env` ("dotenv") file.
- Client libraries installed for writing data to {{< product-name >}}.
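For example, a minimal sketch of storing these credentials as environment variables from a shell (the variable names and values are placeholders for your own project configuration):

```bash
# Store connection details for your project; adjust names to match your code
export INFLUXDB3_HOST_URL="http://localhost:8181"
export INFLUXDB3_DATABASE_NAME="DATABASE_NAME"
export INFLUXDB3_AUTH_TOKEN="AUTH_TOKEN"
```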
### Initialize a project directory
The following examples use InfluxDB 3 client libraries to show how to construct
`Point` objects that follow the [example `home` schema](#example-home-schema),
and then write the data as line protocol to an {{% product-name %}} database.
Create a project directory and initialize it for your programming language.
<!-- vale InfluxDataDocs.v3Schema = YES -->
{{< tabs-wrapper >}}
{{% tabs %}}
@ -196,86 +45,61 @@ and then write the data as line protocol to an {{% product-name %}} database.
[Python](#)
{{% /tabs %}}
{{% tab-content %}}
The following steps set up a Go project using the
[InfluxDB 3 Go client](https://github.com/InfluxCommunity/influxdb3-go/):
<!-- BEGIN GO PROJECT SETUP -->
1. Install [Go 1.13 or later](https://golang.org/doc/install).
1. Create a directory for your Go module and change to the directory--for
2. Create a directory for your Go module and change to the directory--for
example:
```sh
mkdir iot-starter-go && cd $_
```
1. Initialize a Go module--for example:
3. Initialize a Go module--for example:
```sh
go mod init iot-starter
```
1. Install [`influxdb3-go`](https://github.com/InfluxCommunity/influxdb3-go/),
which provides the InfluxDB `influxdb3` Go client library module.
```sh
go get github.com/InfluxCommunity/influxdb3-go/v2
```
<!-- END GO SETUP PROJECT -->
{{% /tab-content %}} {{% tab-content %}}
<!-- BEGIN NODE.JS PROJECT SETUP -->
The following steps set up a JavaScript project using the
[InfluxDB 3 JavaScript client](https://github.com/InfluxCommunity/influxdb3-js/).
{{% /tab-content %}}
{{% tab-content %}}
<!-- BEGIN JAVASCRIPT PROJECT SETUP -->
1. Install [Node.js](https://nodejs.org/en/download/).
1. Create a directory for your JavaScript project and change to the
2. Create a directory for your JavaScript project and change to the
directory--for example:
```sh
mkdir -p iot-starter-js && cd $_
```
1. Initialize a project--for example, using `npm`:
3. Initialize a project--for example, using `npm`:
<!-- pytest.mark.skip -->
```sh
npm init
```
<!-- END JAVASCRIPT SETUP PROJECT -->
1. Install the `@influxdata/influxdb3-client` InfluxDB 3 JavaScript client
library.
```sh
npm install @influxdata/influxdb3-client
```
<!-- END NODE.JS SETUP PROJECT -->
{{% /tab-content %}} {{% tab-content %}}
{{% /tab-content %}}
{{% tab-content %}}
<!-- BEGIN PYTHON SETUP PROJECT -->
The following steps set up a Python project using the
[InfluxDB 3 Python client](https://github.com/InfluxCommunity/influxdb3-python/):
1. Install [Python](https://www.python.org/downloads/)
1. Inside of your project directory, create a directory for your Python module
2. Inside of your project directory, create a directory for your Python module
and change to the module directory--for example:
```sh
mkdir -p iot-starter-py && cd $_
```
1. **Optional, but recommended**: Use
3. **Optional, but recommended**: Use
[`venv`](https://docs.python.org/3/library/venv.html) or
[`conda`](https://docs.continuum.io/anaconda/install/) to activate a virtual
environment for installing and executing code--for example, enter the
@ -285,29 +109,134 @@ The following steps set up a Python project using the
```bash
python3 -m venv envs/iot-starter && source ./envs/iot-starter/bin/activate
```
1. Install
[`influxdb3-python`](https://github.com/InfluxCommunity/influxdb3-python),
which provides the InfluxDB `influxdb_client_3` Python client library module
and also installs the
[`pyarrow` package](https://arrow.apache.org/docs/python/index.html) for
working with Arrow data.
```sh
pip install influxdb3-python
```
<!-- END PYTHON SETUP PROJECT -->
{{% /tab-content %}}
{{< /tabs-wrapper >}}
#### Construct points and write line protocol
### Install the client library
Install the InfluxDB 3 client library for your programming language of choice.
{{< tabs-wrapper >}}
{{% tabs %}}
[C#](#)
[Go](#)
[Java](#)
[Node.js](#)
[Python](#)
{{% /tabs %}}
{{% tab-content %}}
<!-- BEGIN C# INSTALL CLIENT LIBRARY -->
Add the [InfluxDB 3 C# client library](https://github.com/InfluxCommunity/influxdb3-csharp) to your project using the
[`dotnet` CLI](https://docs.microsoft.com/dotnet/core/tools/dotnet) or
by adding the package to your project file--for example:
```bash
dotnet add package InfluxDB3.Client
```
{{% /tab-content %}}
{{% tab-content %}}
<!-- BEGIN GO INSTALL CLIENT LIBRARY -->
Add the
[InfluxDB 3 Go client library](https://github.com/InfluxCommunity/influxdb3-go)
to your project using the
[`go get` command](https://golang.org/cmd/go/#hdr-Add_dependencies_to_current_module_and_install_them)--for example:
```bash
go mod init path/to/project/dir && cd $_
go get github.com/InfluxCommunity/influxdb3-go/v2/influxdb3
```
{{% /tab-content %}}
{{% tab-content %}}
<!-- BEGIN JAVA INSTALL CLIENT LIBRARY -->
Add the [InfluxDB 3 Java client library](https://github.com/InfluxCommunity/influxdb3-java) to your project dependencies using
the [Maven](https://maven.apache.org/) or
[Gradle](https://gradle.org/) build tools.
For example, to add the library to a Maven project, add the following dependency
to your `pom.xml` file:
```xml
<dependency>
<groupId>com.influxdb</groupId>
<artifactId>influxdb3-java</artifactId>
<version>1.1.0</version>
</dependency>
```
To add the library to a Gradle project, add the following dependency to your `build.gradle` file:
```groovy
dependencies {
implementation 'com.influxdb:influxdb3-java:1.1.0'
}
```
{{% /tab-content %}}
{{% tab-content %}}
<!-- BEGIN NODE.JS INSTALL CLIENT LIBRARY -->
For a Node.js project, use `@influxdata/influxdb3-client`, which provides main (CommonJS),
module (ESM), and browser (UMD) exports.
Add the [InfluxDB 3 JavaScript client library](https://github.com/InfluxCommunity/influxdb3-js) using your preferred package manager--for example, using [`npm`](https://www.npmjs.com/):
```bash
npm install --save @influxdata/influxdb3-client
```
{{% /tab-content %}}
{{% tab-content %}}
<!-- BEGIN PYTHON INSTALL CLIENT LIBRARY -->
Install the [InfluxDB 3 Python client library](https://github.com/InfluxCommunity/influxdb3-python) using
[`pip`](https://pypi.org/project/pip/).
To use Pandas features, such as `to_pandas()`, provided by the Python
client library, you must also install the
[`pandas` package](https://pandas.pydata.org/).
```bash
pip install influxdb3-python pandas
```
{{% /tab-content %}}
{{< /tabs-wrapper >}}
## Construct line protocol
With a [basic understanding of line protocol](/influxdb3/version/write-data/#line-protocol),
you can construct line protocol data and write it to {{% product-name %}}.
Use client library write methods to provide data as raw line protocol
or as `Point` objects that the client library converts to line protocol.
If your program creates the data you write to InfluxDB, use the `Point`
interface to take advantage of type safety in your program.
Client libraries provide one or more `Point` constructor methods. Some libraries
support language-native data structures, such as Go's `struct`, for creating
points.
Examples in this guide show how to construct `Point` objects that follow the [example `home` schema](#example-home-schema),
and then write the points as line protocol data to an {{% product-name %}} database.
### Example home schema
Consider a use case where you collect data from sensors in your home. Each
sensor collects temperature, humidity, and carbon monoxide readings.
To collect this data, use the following schema:
<!-- vale InfluxDataDocs.v3Schema = YES -->
- **table**: `home`
- **tags**
- `room`: Living Room or Kitchen
- **fields**
- `temp`: temperature in °C (float)
- `hum`: percent humidity (float)
- `co`: carbon monoxide in parts per million (integer)
- **timestamp**: Unix timestamp in _second_ precision
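For reference, a single point that follows this schema looks like the following line protocol (taken from the home sensor sample data):

```
home,room=Kitchen temp=22.7,hum=36.5,co=26i 1641067200
```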
{{< tabs-wrapper >}}
{{% tabs %}}
[Go](#)

View File

@ -0,0 +1,4 @@
Use the InfluxDB HTTP API to write data to {{< product-name >}}.
There are different APIs you can use depending on your integration method.
{{< children >}}

View File

@ -15,14 +15,15 @@ to write points as line protocol data to {{% product-name %}}.
## InfluxDB v2 compatibility
The `/api/v2/write` InfluxDB v2 compatibility endpoint provides backwards compatibility with clients that can write data to InfluxDB OSS v2.x and Cloud 2 (TSM).
The `/api/v2/write` InfluxDB v2 compatibility endpoint provides backwards
compatibility with clients that can write data to InfluxDB OSS v2.x and Cloud 2 (TSM).
{{<api-endpoint endpoint="/api/v2/write?bucket=mydb&precision=ns" method="post" >}}
{{<api-endpoint endpoint="/api/v2/write?bucket=mydb&precision=ns" method="post" api-ref="/influxdb3/version/api/v3/#operation/PostV1Write" >}}
## InfluxDB v1 compatibility
The `/write` InfluxDB v1 compatibility endpoint provides backwards compatibility with clients that can write data to InfluxDB v1.x.
{{<api-endpoint endpoint="/write?db=mydb&precision=ns" method="post" >}}
{{<api-endpoint endpoint="/write?db=mydb&precision=ns" method="post" api-ref="/influxdb3/version/api/v3/#operation/PostV2Write" >}}

View File

@ -0,0 +1,162 @@
Use the `/api/v3/write_lp` endpoint to write data to {{% product-name %}}.
This endpoint accepts the same [line protocol](/influxdb3/version/reference/line-protocol/)
syntax as previous versions of InfluxDB, and supports the following:
##### Query parameters
- `?accept_partial=<BOOLEAN>`: Accept or reject partial writes (default is `true`).
- `?no_sync=<BOOLEAN>`: Control when writes are acknowledged:
- `no_sync=true`: Acknowledges writes before WAL persistence completes.
- `no_sync=false`: Acknowledges writes after WAL persistence completes (default).
- `?precision=<PRECISION>`: Specify the precision of the timestamp.
The default is `ns` (nanosecond) precision.
You can also use `auto` to let InfluxDB automatically determine the timestamp
precision by identifying which precision resolves most closely to _now_.
##### Request body
- Line protocol
{{<api-endpoint endpoint="/api/v3/write_lp?db=mydb&precision=nanosecond&accept_partial=true&no_sync=false" method="post" >}}
_The following example uses [cURL](https://curl.se/) to send a write request using
the {{< influxdb3/home-sample-link >}}, but you can use any HTTP client._
{{% influxdb/custom-timestamps %}}
```bash
curl -v "http://{{< influxdb/host >}}/api/v3/write_lp?db=sensors&precision=auto" \
--data-raw "home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1735545600
home,room=Kitchen temp=21.0,hum=35.9,co=0i 1735545600
home,room=Living\ Room temp=21.4,hum=35.9,co=0i 1735549200
home,room=Kitchen temp=23.0,hum=36.2,co=0i 1735549200
home,room=Living\ Room temp=21.8,hum=36.0,co=0i 1735552800
home,room=Kitchen temp=22.7,hum=36.1,co=0i 1735552800
home,room=Living\ Room temp=22.2,hum=36.0,co=0i 1735556400
home,room=Kitchen temp=22.4,hum=36.0,co=0i 1735556400
home,room=Living\ Room temp=22.2,hum=35.9,co=0i 1735560000
home,room=Kitchen temp=22.5,hum=36.0,co=0i 1735560000
home,room=Living\ Room temp=22.4,hum=36.0,co=0i 1735563600
home,room=Kitchen temp=22.8,hum=36.5,co=1i 1735563600
home,room=Living\ Room temp=22.3,hum=36.1,co=0i 1735567200
home,room=Kitchen temp=22.8,hum=36.3,co=1i 1735567200
home,room=Living\ Room temp=22.3,hum=36.1,co=1i 1735570800
home,room=Kitchen temp=22.7,hum=36.2,co=3i 1735570800
home,room=Living\ Room temp=22.4,hum=36.0,co=4i 1735574400
home,room=Kitchen temp=22.4,hum=36.0,co=7i 1735574400
home,room=Living\ Room temp=22.6,hum=35.9,co=5i 1735578000
home,room=Kitchen temp=22.7,hum=36.0,co=9i 1735578000
home,room=Living\ Room temp=22.8,hum=36.2,co=9i 1735581600
home,room=Kitchen temp=23.3,hum=36.9,co=18i 1735581600
home,room=Living\ Room temp=22.5,hum=36.3,co=14i 1735585200
home,room=Kitchen temp=23.1,hum=36.6,co=22i 1735585200
home,room=Living\ Room temp=22.2,hum=36.4,co=17i 1735588800
home,room=Kitchen temp=22.7,hum=36.5,co=26i 1735588800"
```
{{% /influxdb/custom-timestamps %}}
- [Partial writes](#partial-writes)
- [Accept partial writes](#accept-partial-writes)
- [Do not accept partial writes](#do-not-accept-partial-writes)
- [Write responses](#write-responses)
- [Use no_sync for immediate write responses](#use-no_sync-for-immediate-write-responses)
> [!Note]
> #### InfluxDB client libraries
>
> InfluxData provides supported InfluxDB 3 client libraries that you can
> integrate with your code to construct data as time series points, and then
> write them as line protocol to an {{% product-name %}} database.
> For more information, see how to [use InfluxDB client libraries to write data](/influxdb3/version/write-data/client-libraries/).
## Partial writes
The `/api/v3/write_lp` endpoint lets you accept or reject partial writes using
the `accept_partial` parameter. This parameter changes the behavior of the API
when the write request contains invalid line protocol or schema conflicts.
For example, the following line protocol contains two points, each using a
different datatype for the `temp` field, which causes a schema conflict:
```
home,room=Sunroom temp=96 1735545600
home,room=Sunroom temp="hi" 1735549200
```
### Accept partial writes
With `accept_partial=true` (default), InfluxDB:
- Accepts and writes line `1`
- Rejects line `2`
- Returns a `400 Bad Request` status code and the following response body:
```
< HTTP/1.1 400 Bad Request
...
{
"error": "partial write of line protocol occurred",
"data": [
{
"original_line": "home,room=Sunroom temp=hi 1735549200",
"line_number": 2,
"error_message": "invalid column type for column 'temp', expected iox::column_type::field::float, got iox::column_type::field::string"
}
]
}
```
### Do not accept partial writes
With `accept_partial=false`, InfluxDB:
- Rejects _all_ points in the batch
- Returns a `400 Bad Request` status code and the following response body:
```
< HTTP/1.1 400 Bad Request
...
{
"error": "parsing failed for write_lp endpoint",
"data": {
"original_line": "home,room=Sunroom temp=hi 1735549200",
"line_number": 2,
"error_message": "invalid column type for column 'temp', expected iox::column_type::field::float, got iox::column_type::field::string"
}
}
```
_For more information about the ingest path and data flow, see
[Data durability](/influxdb3/version/reference/internals/durability/)._
## Write responses
By default, {{% product-name %}} acknowledges writes after flushing the WAL file
to the Object store (occurring every second).
For high write throughput, you can send multiple concurrent write requests.
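For example, one way to send concurrent write requests from a shell is to background multiple `curl` calls (a sketch; the host, database, token, and line protocol files are placeholders):

```bash
# Send three write requests in parallel, then wait for all of them to finish
for file in batch1.lp batch2.lp batch3.lp; do
  curl "http://localhost:8181/api/v3/write_lp?db=DATABASE_NAME" \
    --header "Authorization: Bearer AUTH_TOKEN" \
    --data-binary @"$file" &
done
wait
```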
### Use no_sync for immediate write responses
To reduce the latency of writes, use the `no_sync` write option, which
acknowledges writes _before_ WAL persistence completes.
When `no_sync=true`, InfluxDB validates the data, writes the data to the WAL,
and then immediately responds to the client, without waiting for persistence to
the Object store.
> [!Tip]
> Using `no_sync=true` is best when prioritizing high-throughput writes over
> absolute durability.
- Default behavior (`no_sync=false`): Waits for data to be written to the Object
store before acknowledging the write. Reduces the risk of data loss, but
increases the latency of the response.
- With `no_sync=true`: Reduces write latency, but increases the risk of data
loss in case of a crash before WAL persistence.
The following example immediately returns a response without waiting for WAL
persistence:
```bash
curl "http://localhost:8181/api/v3/write_lp?db=sensors&no_sync=true" \
--data-raw "home,room=Sunroom temp=96"
```

View File

@ -9,8 +9,9 @@ to write line protocol data to {{< product-name >}}.
> #### Use the API for batching and higher-volume writes
>
> The `influxdb3` CLI lets you quickly get started writing data to {{< product-name >}}.
> For batching and higher-volume write workloads, use
> [API client libraries](/influxdb3/version/write-data/api/#use-api-client-libraries)
> For batching and higher-volume write workloads, use the
> [InfluxDB HTTP API](/influxdb3/version/write-data/http-api/),
> [API client libraries](/influxdb3/version/write-data/client-libraries/),
> or [Telegraf](/influxdb3/version/write-data/use-telegraf/).
## Construct line protocol
@ -63,7 +64,7 @@ Provide the following:
- The [database](/influxdb3/version/admin/databases/) name using the
`--database` option
- Your {{< product-name >}} authorization token using the `-t`, `--token` option
- Your {{< product-name >}} {{% token-link %}} using the `-t`, `--token` option
- [Line protocol](#construct-line-protocol).
Provide the line protocol in one of the following ways:
@ -195,7 +196,4 @@ Replace the following:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
the name of the database to write to
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}:
your {{< product-name >}} authorization token
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization token.
your {{< product-name >}} {{% token-link %}}

View File

@ -41,7 +41,7 @@ Write requests return the following status codes:
| :-------------------------------| :--------------------------------------------------------------- | :------------- |
| `204 "Success"` | | If InfluxDB ingested the data |
| `400 "Bad request"` | error details about rejected points, up to 100 points: `line` contains the first rejected line, `message` describes rejections | If some or all request data isn't allowed (for example, if it is malformed or falls outside of the bucket's retention period)--the response body indicates whether a partial write has occurred or if all data has been rejected |
| `401 "Unauthorized"` | | If the `Authorization` header is missing or malformed or if the [token](/influxdb3/version/admin/tokens/) doesn't have [permission](/influxdb3/version/reference/cli/influxctl/token/create/#examples) to write to the database. See [examples using credentials](/influxdb3/version/write-data/api-client-libraries/) in write requests. |
| `401 "Unauthorized"` | | If the `Authorization` header is missing or malformed or if the [token](/influxdb3/version/admin/tokens/) doesn't have permission to write to the database. See [write API examples](/influxdb3/enterprise/write-data/http-api/) using credentials. |
| `404 "Not found"` | requested **resource type** (for example, "organization" or "database"), and **resource name** | If a requested resource (for example, organization or database) wasn't found |
| `500 "Internal server error"` | | Default status for an error |
| `503` "Service unavailable" | | If the server is temporarily unavailable to accept writes. The `Retry-After` header describes when to try the write again.
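As a quick way to see which status code a write returns, you can have `curl` print only the response code (a sketch; the host, database, and token are placeholders):

```bash
# Print the HTTP status code returned by a write request
curl --silent --output /dev/null --write-out "%{http_code}\n" \
  "http://localhost:8181/api/v3/write_lp?db=DATABASE_NAME" \
  --header "Authorization: Bearer AUTH_TOKEN" \
  --data-raw "home,room=Kitchen temp=22.5"
```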

View File

@ -46,13 +46,9 @@ Replace the following:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
the name of the database to write data to
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}:
your {{< product-name >}} authorization token.
your {{< product-name >}} {{% token-link %}}.
_Store this in a secret store or environment variable to avoid exposing the raw token string._
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization token.
> For the `token` option, provide an arbitrary, non-empty token string.
_See how to [Configure Telegraf to write to {{% product-name %}}](/influxdb3/version/write-data/use-telegraf/configure/)._
## Use Telegraf with InfluxDB

View File

@ -65,13 +65,9 @@ Replace the following:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
the name of the database to write data to
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}:
your {{< product-name >}} authorization token.
your {{< product-name >}} {{% token-link %}}.
_Store this in a secret store or environment variable to avoid exposing the raw token string._
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization token.
> For the `token` option, provide an arbitrary, non-empty token string.
The InfluxDB output plugin configuration contains the following options:
#### urls
@ -87,10 +83,6 @@ To write to {{% product-name %}}, include your {{% product-name %}} URL:
Your {{% product-name %}} authorization token.
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization token.
> For the `token` option, provide an arbitrary, non-empty token string.
> [!Tip]
>
> ##### Store your authorization token as an environment variable

View File

@ -95,13 +95,9 @@ Replace the following:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
the name of the database to write data to
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}:
your {{< product-name >}} authorization token.
your {{< product-name >}} {{% token-link %}}.
_Store this in a secret store or environment variable to avoid exposing the raw token string._
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization token.
> For the `token` option, provide an arbitrary, non-empty token string.
> [!Tip]
>
> ##### Store your authorization token as an environment variable

View File

@ -5,7 +5,7 @@ to a separate instance or for migrating from other versions of InfluxDB to
{{< product-name >}}.
The following example configures Telegraf for dual writing to {{% product-name %}} and an InfluxDB v2 OSS instance.
Specifically, it uses the following:
- The [InfluxDB v2 output plugin](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/influxdb_v2)
twice--the first pointing to {{< product-name >}} and the other to an
@ -14,11 +14,6 @@ The following example configures Telegraf for dual writing to {{% product-name %
Configure both tokens as environment variables and use string interpolation
in your Telegraf configuration file to reference each environment variable.
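For example, you might export both tokens before starting Telegraf (a sketch; the variable names are placeholders and must match the names referenced in your Telegraf configuration):

```bash
# Tokens for each output; reference these from the Telegraf configuration file
export INFLUXDB3_AUTH_TOKEN="AUTH_TOKEN"
export INFLUXDB_V2_AUTH_TOKEN="AUTH_TOKEN_V2"

telegraf --config telegraf.conf
```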
> [!Note]
> While in beta, {{< product-name >}} does not require an authorization token.
> For the `token` option, provide an arbitrary, non-empty token string.
## Sample configuration
```toml

View File

@ -28,7 +28,7 @@ Core's feature highlights include:
- Compatibility with InfluxDB 1.x and 2.x write APIs
{{% show-in "core" %}}
[Get started with Core](/influxdb3/version/get-started/)
<a href="/influxdb3/version/get-started/" class="btn">Get started with {{% product-name %}}</a>
{{% /show-in %}}
The Enterprise version adds the following features to Core:
@ -41,5 +41,8 @@ The Enterprise version adds the following features to Core:
- Integrated admin UI (coming soon)
{{% show-in "core" %}}
For more information, see how to [get started with Enterprise](/influxdb3/enterprise/get-started/).
For more information, see how to [get started with InfluxDB 3 Enterprise](/influxdb3/enterprise/get-started/).
{{% /show-in %}}
{{% show-in "enterprise" %}}
<a href="/influxdb3/version/get-started/" class="btn">Get started with {{% product-name %}}</a>
{{% /show-in %}}

View File

@ -1,8 +1,16 @@
<!-- Comment: This file is used to generate the InfluxDB 3 install page. -->
- [System Requirements](#system-requirements)
- [Quick install](#quick-install)
- [Download {{% product-name %}} binaries](#download-influxdb-3-{{< product-key >}}-binaries)
- [Docker image](#docker-image)
- [Install](#install)
- [Quick install for Linux and macOS](#quick-install-for-linux-and-macos)
- [Download and install the latest build artifacts](#download-and-install-the-latest-build-artifacts)
- [Pull the Docker image](#pull-the-docker-image)
- [Verify the installation](#verify-the-installation)
{{% show-in "enterprise" %}}
> [!Note]
> For information about setting up a multi-node {{% product-name %}} cluster,
> see [Create a multi-node cluster](/influxdb3/enterprise/get-started/multi-server/) in the Get started guide.
{{% /show-in %}}
## System Requirements
@ -21,119 +29,69 @@ Azure Blob Storage, and Google Cloud Storage.
You can also use many local object storage implementations that provide an
S3-compatible API, such as [Minio](https://min.io/).
## Quick install
## Install
Use the InfluxDB 3 quick install script to install {{< product-name >}} on
**Linux** and **macOS**.
{{% product-name %}} runs on **Linux**, **macOS**, and **Windows**.
> [!Important]
> If using Windows, [download the {{% product-name %}} Windows binary](?t=Windows#download-influxdb-3-{{< product-key >}}-binaries).
Choose one of the following methods to install {{% product-name %}}:
1. Use the following command to download and install the appropriate
{{< product-name >}} package on your local machine:
{{% show-in "enterprise" %}}
<!--pytest.mark.skip-->
```bash
curl -O https://www.influxdata.com/d/install_influxdb3.sh \
&& sh install_influxdb3.sh {{% product-key %}}
```
{{% /show-in %}}
{{% show-in "core" %}}
<!--pytest.mark.skip-->
```bash
curl -O https://www.influxdata.com/d/install_influxdb3.sh \
&& sh install_influxdb3.sh
```
{{% /show-in %}}
- [Quick install for Linux and macOS](#quick-install-for-linux-and-macos)
- [Download and install the latest build artifacts](#download-and-install-the-latest-build-artifacts)
- [Pull the Docker image](#pull-the-docker-image)
2. Verify that installation completed successfully:
### Quick install for Linux and macOS
```bash
influxdb3 --version
```
To install {{% product-name %}} on **Linux** or **macOS**, download and run the quick
installer script for {{% product-name %}}--for example, using [`curl`](https://curl.se/)
to download the script:
> [!Note]
>
> #### influxdb3 not found
>
> If your system can't locate your `influxdb3` binary, `source` your
> current shell configuration file (`.bashrc`, `.zshrc`, etc.).
>
> {{< code-tabs-wrapper >}}
{{% code-tabs %}}
[.bashrc](#)
[.zshrc](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```bash
source ~/.bashrc
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
<!--pytest.mark.skip-->
```bash
source ~/.zshrc
curl -O https://www.influxdata.com/d/install_influxdb3.sh \
&& sh install_influxdb3.sh {{% show-in "enterprise" %}}enterprise{{% /show-in %}}
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
## Download {{% product-name %}} binaries
> [!Note]
> The quick installer script is updated with each {{% product-name %}} release,
> so it always installs the latest version.
{{< tabs-wrapper >}}
{{% tabs %}}
[Linux](#)
[macOS](#)
[Windows](#)
{{% /tabs %}}
{{% tab-content %}}
### Download and install the latest build artifacts
<!-------------------------------- BEGIN LINUX -------------------------------->
You can also download and install [{{% product-name %}} build artifacts](/influxdb3/enterprise/install/#download-influxdb-3-enterprise-binaries) directly:
- [{{< product-name >}} • Linux (AMD64, x86_64) • GNU](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_linux_amd64.tar.gz)
{{< expand-wrapper >}}
{{% expand "Linux binaries" %}}
- [Linux | AMD64 (x86_64) | GNU](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_linux_amd64.tar.gz)
[sha256](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_linux_amd64.tar.gz.sha256)
- [{{< product-name >}} • Linux (ARM64, AArch64) • GNU](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_linux_arm64.tar.gz)
- [Linux | ARM64 (AArch64) | GNU](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_linux_arm64.tar.gz)
[sha256](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_linux_arm64.tar.gz.sha256)
<!--------------------------------- END LINUX --------------------------------->
{{% /expand %}}
{{% expand "macOS binaries" %}}
{{% /tab-content %}}
{{% tab-content %}}
<!-------------------------------- BEGIN MACOS -------------------------------->
- [{{< product-name >}} • macOS (Silicon, ARM64)](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_darwin_arm64.tar.gz)
- [macOS | Silicon (ARM64)](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_darwin_arm64.tar.gz)
[sha256](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_darwin_arm64.tar.gz.sha256)
> [!Note]
> macOS Intel builds are coming soon.
<!--------------------------------- END MACOS --------------------------------->
{{% /expand %}}
{{% expand "Windows binaries" %}}
{{% /tab-content %}}
{{% tab-content %}}
- [Windows (AMD64, x86_64) binary](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}-windows_amd64.zip)
[sha256](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}-windows_amd64.zip.sha256)
<!------------------------------- BEGIN WINDOWS ------------------------------->
{{% /expand %}}
{{< /expand-wrapper >}}
- [{{< product-name >}} • Windows (AMD64, x86_64)](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}-windows_amd64.zip)
[sha256](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}-windows_amd64.zip.sha256)
### Pull the Docker image
<!-------------------------------- END WINDOWS -------------------------------->
{{% /tab-content %}}
{{< /tabs-wrapper >}}
## Docker image
Use the `influxdb:3-{{< product-key >}}` Docker image to deploy {{< product-name >}} in a
Docker container.
The image is available for x86_64 (AMD64) and ARM64 architectures.
### Use Docker CLI
Run the following command to pull the [`influxdb:3-{{< product-key >}}` image](https://hub.docker.com/_/influxdb/tags?tag=3-{{< product-key >}}&name=3-{{< product-key >}}), available for x86_64 (AMD64) and ARM64 architectures:
<!--pytest.mark.skip-->
```bash
@ -142,6 +100,8 @@ docker pull influxdb:3-{{< product-key >}}
Docker automatically pulls the appropriate image for your system architecture.
{{< expand-wrapper >}}
{{% expand "Pull for a specific system architecture" %}}
To specify the system architecture, use platform-specific tags--for example:
```bash
@ -157,79 +117,31 @@ docker pull \
--platform linux/arm64 \
influxdb:3-{{< product-key >}}
```
{{% /expand %}}
{{< /expand-wrapper >}}
> [!Note]
> The {{% product-name %}} Docker image exposes port `8181`, the `influxdb3` server default for HTTP connections.
> To map the exposed port to a different port when running a container, see the Docker guide for [Publishing and exposing ports](https://docs.docker.com/get-started/docker-concepts/running-containers/publishing-ports/).
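For example, a sketch of mapping the container's port `8181` to host port `8282` with `docker run` (the serve options shown are the minimal Core options used in this guide; Enterprise additionally requires `--cluster-id` and license options):

```bash
# Map host port 8282 to the container's default HTTP port 8181
docker run -it --rm \
  -p 8282:8181 \
  influxdb:3-{{< product-key >}} \
  influxdb3 serve \
  --node-id=node0 \
  --object-store=file \
  --data-dir=/var/lib/influxdb3
```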
### Use Docker Compose
### Verify the installation
After installing {{% product-name %}}, enter the following command to verify
that it installed successfully:
```bash
influxdb3 --version
```
If your system can't locate `influxdb3`, `source` your shell configuration file (`.bashrc`, `.zshrc`, etc.)--for example:
<!--pytest.mark.skip-->
```zsh
source ~/.zshrc
```
{{% show-in "enterprise" %}}
1. Open `compose.yaml` for editing and add a `services` entry for {{% product-name %}}.
To generate a trial or at-home license for {{% product-name %}} when using Docker, you must pass the `--license-email` option or the `INFLUXDB3_LICENSE_EMAIL` environment variable the first time you start the server--for example:
```yaml
# compose.yaml
services:
influxdb3-{{< product-key >}}:
container_name: influxdb3-{{< product-key >}}
image: influxdb:3-{{< product-key >}}
ports:
- 8181:8181
command:
- influxdb3
- serve
- --node-id=node0
- --cluster-id=cluster0
- --object-store=file
- --data-dir=/var/lib/influxdb3
- --plugins-dir=/var/lib/influxdb3-plugins
- --license-email=${INFLUXDB3_LICENSE_EMAIL}
```
{{% /show-in %}}
{{% show-in "core" %}}
1. Open `compose.yaml` for editing and add a `services` entry for {{% product-name %}}--for example:
```yaml
# compose.yaml
services:
influxdb3-{{< product-key >}}:
container_name: influxdb3-{{< product-key >}}
image: influxdb:3-{{< product-key >}}
ports:
- 8181:8181
command:
- influxdb3
- serve
- --node-id=node0
- --object-store=file
- --data-dir=/var/lib/influxdb3
- --plugins-dir=/var/lib/influxdb3-plugins
```
{{% /show-in %}}
2. Use the Docker Compose CLI to start the server.
Optional: to make sure you have the latest version of the image before you
start the server, run `docker compose pull`.
<!--pytest.mark.skip-->
```bash
docker compose pull && docker compose run influxdb3-{{< product-key >}}
```
> [!Note]
> #### Stopping an InfluxDB 3 container
>
> To stop a running InfluxDB 3 container, find and terminate the process or container--for example:
>
> <!--pytest.mark.skip-->
> ```bash
> docker container ls --filter "name=influxdb3"
> docker kill <CONTAINER_ID>
> ```
>
> Currently, a bug prevents using {{< keybind all="Ctrl+c" >}} in the terminal to stop an InfluxDB 3 container.
> For information about setting up a multi-node {{% product-name %}} cluster,
> see [Create a multi-node cluster](/influxdb3/enterprise/get-started/multi-server/) in the Get started guide.
{{% /show-in %}}
{{% show-in "enterprise" %}}
{{< page-nav next="/influxdb3/enterprise/get-started/" nextText="Get started with InfluxDB 3 Enterprise" >}}

Binary file not shown.

View File

@ -1,5 +1,5 @@
## Test dependencies
pytest>=7.4.1
pytest>=8.4.1
pytest-cov>=2.12.1
pytest-codeblocks>=0.16.1
python-dotenv>=1.0.0