WIP monolith get started, enterprise multi-node, file index docs
parent 481d5b818f
commit 4d22388bd9
@@ -0,0 +1,24 @@
---
title: Process data in {{% product-name %}}
seotitle: Process data | Get started with {{% product-name %}}
description: >
  Learn how to use the {{% product-name %}} Processing Engine to process data and
  perform various tasks like downsampling, alerting, forecasting, data
  normalization, and more.
menu:
  influxdb3_core:
    name: Process data
    identifier: gs-process-data
    parent: Get started
weight: 104
related:
  - /influxdb3/core/plugins/
  - /influxdb3/core/reference/cli/influxdb3/create/plugin/
  - /influxdb3/core/reference/cli/influxdb3/create/trigger/
source: /shared/influxdb3-get-started/processing-engine.md
---

<!--
The content of this page is at
// SOURCE content/shared/influxdb3-get-started/processing-engine.md
-->
@@ -0,0 +1,24 @@
---
title: Query data in {{% product-name %}}
seotitle: Query data | Get started with {{% product-name %}}
description: >
  Learn how to get started querying data in {{% product-name %}} using native
  SQL or InfluxQL with the `influxdb3` CLI and other tools.
menu:
  influxdb3_core:
    name: Query data
    identifier: gs-query-data
    parent: Get started
weight: 103
related:
  - /influxdb3/core/query-data/
  - /influxdb3/core/reference/sql/
  - https://datafusion.apache.org/user-guide/sql/index.html, Apache DataFusion SQL reference
  - /influxdb3/core/reference/influxql/
source: /shared/influxdb3-get-started/query.md
---

<!--
The content of this page is at
// SOURCE content/shared/influxdb3-get-started/query.md
-->
@@ -2,7 +2,7 @@
title: Set up {{% product-name %}}
seotitle: Set up InfluxDB | Get started with {{% product-name %}}
description: >
  Install, configure, and set up authorization for {{% product-name %}}.
menu:
  influxdb3_core:
    name: Set up Core
@@ -2,7 +2,8 @@
title: Write data to {{% product-name %}}
seotitle: Write data | Get started with {{% product-name %}}
description: >
  Learn how to write time series data to {{% product-name %}} using the
  `influxdb3` CLI and _line protocol_, an efficient, human-readable write syntax.
menu:
  influxdb3_core:
    name: Write data
@@ -1,16 +0,0 @@
---
title: influxdb3 create file_index
description: >
  The `influxdb3 create file_index` command creates a new file index for a
  database or table.
menu:
  influxdb3_core:
    parent: influxdb3 create
    name: influxdb3 create file_index
weight: 400
source: /shared/influxdb3-cli/create/file_index.md
---

<!--
The content of this file is at content/shared/influxdb3-cli/create/file_index.md
-->
@@ -1,16 +0,0 @@
---
title: influxdb3 delete file_index
description: >
  The `influxdb3 delete file_index` command deletes a file index for a
  database or table.
menu:
  influxdb3_core:
    parent: influxdb3 delete
    name: influxdb3 delete file_index
weight: 400
source: /shared/influxdb3-cli/delete/file_index.md
---

<!--
The content of this file is at content/shared/influxdb3-cli/delete/file_index.md
-->
@@ -0,0 +1,51 @@
---
title: Manage file indexes
seotitle: Manage file indexes in {{< product-name >}}
description: >
  Customize the indexing strategy of a database or table in {{% product-name %}}
  to optimize the performance of single-series queries.
menu:
  influxdb3_enterprise:
    parent: Administer InfluxDB
weight: 106
influxdb3/enterprise/tags: [indexing]
---

{{% product-name %}} lets you customize how your data is indexed to help
optimize query performance for your specific workload, especially workloads that
include single-series queries. Indexes help the InfluxDB query engine quickly
identify the physical location of files that contain the queried data.

By default, InfluxDB indexes on the primary key—`time` and tag columns. However,
if your schema includes tags that you don't specifically use when querying, you
can define a custom indexing strategy that indexes only on `time` and the
columns important to your query workload.

For example, if your schema includes the following columns:

- country
- state_province
- county
- city
- postal_code

and your query workload only queries based on country, state or province, and
city, you can create a custom file indexing strategy that indexes only on
`time` and these specific columns. This makes your index more efficient and
improves the performance of your single-series queries.

> [!Note]
> File indexes can use any string column, including both tags and fields.

- [Indexing life cycle](#indexing-life-cycle)
- [Create a custom file index](#create-a-custom-file-index)
- [Delete a custom file index](#delete-a-custom-file-index)

## Indexing life cycle

{{% product-name %}} builds indexes as it compacts data. Compaction is the
process that organizes and optimizes Parquet files in storage and occurs in
multiple phases or generations. Generation 1 (gen1) data is uncompacted and
is not indexed. All generation 2 (gen2) and later data is indexed.

{{< children hlevel="h2" >}}
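As a sketch of the strategy described above, the following command creates a file index on only the location columns from the example schema. The database name (`location-data`) and table name (`addresses`) are hypothetical; substitute your own values, and supply your token with `--token` or the `INFLUXDB3_AUTH_TOKEN` environment variable.

```bash
# Hypothetical sketch: index only on time plus the location columns you
# actually query. "location-data" and "addresses" are example names.
influxdb3 create file_index \
  --database location-data \
  --table addresses \
  country,state_province,city
```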
@@ -0,0 +1,62 @@
---
title: Create a custom file index
seotitle: Create a custom file index in {{< product-name >}}
description: >
  Use the [`influxdb3 create file_index` command](/influxdb3/enterprise/reference/cli/influxdb3/create/file_index/)
  to create a custom file indexing strategy for a database or a table.
menu:
  influxdb3_enterprise:
    parent: Manage file indexes
weight: 106
influxdb3/enterprise/tags: [indexing]
related:
  - /influxdb3/enterprise/reference/cli/influxdb3/create/file_index/
list_code_example: |
  <!--pytest.mark.skip-->
  ```bash
  influxdb3 create file_index \
    --database example-db \
    --token 00xoXX0xXXx0000XxxxXx0Xx0xx0 \
    --table wind_data \
    country,city
  ```
---

Use the [`influxdb3 create file_index` command](/influxdb3/enterprise/reference/cli/influxdb3/create/file_index/)
to create a custom file indexing strategy for a database or table.

Provide the following:

- **Token** (`--token`): _({{< req >}})_ Your {{% token-link "admin" %}}.
  You can also use the `INFLUXDB3_AUTH_TOKEN` environment variable to specify
  the token.
- **Database** (`-d`, `--database`): _({{< req >}})_ The name of the database to
  apply the index to. You can also use the `INFLUXDB3_DATABASE_NAME`
  environment variable to specify the database.
- **Table** (`-t`, `--table`): The name of the table to apply the index to.
  If no table is specified, the indexing strategy applies to all tables in the
  specified database.
- **Columns**: _({{< req >}})_ A comma-separated list of string columns to
  index on. These are typically tag columns but can also be string fields.

{{% code-placeholders "AUTH_TOKEN|DATABASE|TABLE|COLUMNS" %}}
<!--pytest.mark.skip-->
```bash
influxdb3 create file_index \
  --token AUTH_TOKEN \
  --database DATABASE_NAME \
  --table TABLE_NAME \
  COLUMNS
```
{{% /code-placeholders %}}

Replace the following placeholders with your values:

- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}:
  your {{% token-link "admin" %}}
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
  the name of the database to create the file index in
- {{% code-placeholder-key %}}`TABLE_NAME`{{% /code-placeholder-key %}}:
  the name of the table to create the file index in
- {{% code-placeholder-key %}}`COLUMNS`{{% /code-placeholder-key %}}:
  a comma-separated list of columns to index on. For example: `host,application`
@@ -0,0 +1,58 @@
---
title: Delete a custom file index
seotitle: Delete a custom file index in {{< product-name >}}
description: >
  Use the [`influxdb3 delete file_index` command](/influxdb3/enterprise/reference/cli/influxdb3/delete/file_index/)
  to delete a custom file indexing strategy from a database or a table and revert
  to the default indexing strategy.
menu:
  influxdb3_enterprise:
    parent: Manage file indexes
weight: 106
influxdb3/enterprise/tags: [indexing]
related:
  - /influxdb3/enterprise/reference/cli/influxdb3/delete/file_index/
list_code_example: |
  <!--pytest.mark.skip-->
  ```bash
  influxdb3 delete file_index \
    --database example-db \
    --token 00xoXX0xXXx0000XxxxXx0Xx0xx0 \
    --table wind_data
  ```
---

Use the [`influxdb3 delete file_index` command](/influxdb3/enterprise/reference/cli/influxdb3/delete/file_index/)
to delete a custom file indexing strategy from a database or a table and revert
to the default indexing strategy.

Provide the following:

- **Token** (`--token`): _({{< req >}})_ Your {{% token-link "admin" %}}.
  You can also use the `INFLUXDB3_AUTH_TOKEN` environment variable to specify
  the token.
- **Database** (`-d`, `--database`): _({{< req >}})_ The name of the database to
  remove the custom index from. You can also use the `INFLUXDB3_DATABASE_NAME`
  environment variable to specify the database.
- **Table** (`-t`, `--table`): The name of the table to remove the custom index from.
  If no table is specified, the custom indexing strategy is removed from all
  tables in the specified database.

{{% code-placeholders "AUTH_TOKEN|DATABASE|TABLE" %}}
<!--pytest.mark.skip-->
```bash
influxdb3 delete file_index \
  --token AUTH_TOKEN \
  --database DATABASE_NAME \
  --table TABLE_NAME
```
{{% /code-placeholders %}}

Replace the following placeholders with your values:

- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}:
  your {{% token-link "admin" %}}
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}:
  the name of the database to remove the custom file index from
- {{% code-placeholder-key %}}`TABLE_NAME`{{% /code-placeholder-key %}}:
  the name of the table to remove the custom file index from
@@ -1,293 +0,0 @@
---
title: Use a multi-server setup
seotitle: Use a multi-server InfluxDB 3 Enterprise setup
menu:
  influxdb3_enterprise:
    name: Multi-server
    parent: Get started
weight: 4
influxdb3/enterprise/tags: [cluster, multi-node, multi-server]
draft: true
---

### Multi-server setup

{{% product-name %}} is built to support multi-node setups for high availability, read replicas, and flexible implementations depending on your use case.

### High availability

Enterprise is architecturally flexible, giving you options for configuring multiple servers that work together for high availability (HA) and high performance.
Built on top of the diskless engine and leveraging the object store, an HA setup ensures that if a node fails, you can continue reading from and writing to a secondary node.

A two-node setup is the minimum for basic high availability, with both nodes having read-write permissions.

{{< img-hd src="/img/influxdb/influxdb-3-enterprise-high-availability.png" alt="Basic high availability setup" />}}

In a basic HA setup:

- Two nodes both write data to the same object store and both handle queries
- Node 1 and Node 2 are _read replicas_ that read from each other’s object store directories
- One of the nodes is designated as the Compactor node

> [!Note]
> Only one node can be designated as the Compactor.
> Compacted data is meant for a single writer, and many readers.

The following examples show how to configure and start two nodes
for a basic HA setup.

- _Node 1_ handles ingest, queries, and compaction (includes `compact` in its `--mode` list)
- _Node 2_ handles ingest and queries

```bash
## NODE 1

# Example variables
# node-id: 'host01'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'

influxdb3 serve \
  --node-id host01 \
  --cluster-id cluster01 \
  --mode ingest,query,compact \
  --object-store s3 \
  --bucket influxdb-3-enterprise-storage \
  --http-bind {{< influxdb/host >}} \
  --aws-access-key-id <AWS_ACCESS_KEY_ID> \
  --aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```

```bash
## NODE 2

# Example variables
# node-id: 'host02'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'

influxdb3 serve \
  --node-id host02 \
  --cluster-id cluster01 \
  --mode ingest,query \
  --object-store s3 \
  --bucket influxdb-3-enterprise-storage \
  --http-bind localhost:8282 \
  --aws-access-key-id <AWS_ACCESS_KEY_ID> \
  --aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```

After the nodes have started, querying either node returns data for both nodes, and _Node 1_ runs compaction.
To add nodes to this setup, start more read replicas with the same cluster ID.

### High availability with a dedicated Compactor

Data compaction in InfluxDB 3 is one of the more computationally expensive operations.
To ensure that your read-write nodes don't slow down due to compaction work, set up a compactor-only node for consistent and high performance across all nodes.

{{< img-hd src="/img/influxdb/influxdb-3-enterprise-dedicated-compactor.png" alt="Dedicated Compactor setup" />}}

The following examples show how to set up high availability with a dedicated Compactor node:

1.  Start two read-write nodes as read replicas, similar to the previous example.

    ```bash
    ## NODE 1 — Writer/Reader Node #1

    # Example variables
    # node-id: 'host01'
    # cluster-id: 'cluster01'
    # bucket: 'influxdb-3-enterprise-storage'

    influxdb3 serve \
      --node-id host01 \
      --cluster-id cluster01 \
      --mode ingest,query \
      --object-store s3 \
      --bucket influxdb-3-enterprise-storage \
      --http-bind {{< influxdb/host >}} \
      --aws-access-key-id <AWS_ACCESS_KEY_ID> \
      --aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
    ```

    ```bash
    ## NODE 2 — Writer/Reader Node #2

    # Example variables
    # node-id: 'host02'
    # cluster-id: 'cluster01'
    # bucket: 'influxdb-3-enterprise-storage'

    influxdb3 serve \
      --node-id host02 \
      --cluster-id cluster01 \
      --mode ingest,query \
      --object-store s3 \
      --bucket influxdb-3-enterprise-storage \
      --http-bind localhost:8282 \
      --aws-access-key-id <AWS_ACCESS_KEY_ID> \
      --aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
    ```

2.  Start the dedicated compactor node with the `--mode=compact` option to ensure the node **only** runs compaction.

    ```bash
    ## NODE 3 — Compactor Node

    # Example variables
    # node-id: 'host03'
    # cluster-id: 'cluster01'
    # bucket: 'influxdb-3-enterprise-storage'

    influxdb3 serve \
      --node-id host03 \
      --cluster-id cluster01 \
      --mode compact \
      --object-store s3 \
      --bucket influxdb-3-enterprise-storage \
      --aws-access-key-id <AWS_ACCESS_KEY_ID> \
      --aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
    ```

### High availability with read replicas and a dedicated Compactor

For a robust and effective setup for managing time-series data, you can run ingest nodes alongside read-only nodes and a dedicated Compactor node.

{{< img-hd src="/img/influxdb/influxdb-3-enterprise-workload-isolation.png" alt="Workload Isolation Setup" />}}

1.  Start ingest nodes by assigning them the **`ingest`** mode.
    To achieve the benefits of workload isolation, you'll send _only write requests_ to these ingest nodes. Later, you'll configure the _read-only_ nodes.

    ```bash
    ## NODE 1 — Writer Node #1

    # Example variables
    # node-id: 'host01'
    # cluster-id: 'cluster01'
    # bucket: 'influxdb-3-enterprise-storage'

    influxdb3 serve \
      --node-id host01 \
      --cluster-id cluster01 \
      --mode ingest \
      --object-store s3 \
      --bucket influxdb-3-enterprise-storage \
      --http-bind {{< influxdb/host >}} \
      --aws-access-key-id <AWS_ACCESS_KEY_ID> \
      --aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
    ```

    <!-- The following examples use different ports for different nodes. Don't use the influxdb/host shortcode below. -->

    ```bash
    ## NODE 2 — Writer Node #2

    # Example variables
    # node-id: 'host02'
    # cluster-id: 'cluster01'
    # bucket: 'influxdb-3-enterprise-storage'

    influxdb3 serve \
      --node-id host02 \
      --cluster-id cluster01 \
      --mode ingest \
      --object-store s3 \
      --bucket influxdb-3-enterprise-storage \
      --http-bind localhost:8282 \
      --aws-access-key-id <AWS_ACCESS_KEY_ID> \
      --aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
    ```

2.  Start the dedicated Compactor node with `--mode compact`.

    ```bash
    ## NODE 3 — Compactor Node

    # Example variables
    # node-id: 'host03'
    # cluster-id: 'cluster01'
    # bucket: 'influxdb-3-enterprise-storage'

    influxdb3 serve \
      --node-id host03 \
      --cluster-id cluster01 \
      --mode compact \
      --object-store s3 \
      --bucket influxdb-3-enterprise-storage \
      --aws-access-key-id <AWS_ACCESS_KEY_ID> \
      --aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
    ```

3.  Finally, start the query nodes as _read-only_ with `--mode query`.

    ```bash
    ## NODE 4 — Read Node #1

    # Example variables
    # node-id: 'host04'
    # cluster-id: 'cluster01'
    # bucket: 'influxdb-3-enterprise-storage'

    influxdb3 serve \
      --node-id host04 \
      --cluster-id cluster01 \
      --mode query \
      --object-store s3 \
      --bucket influxdb-3-enterprise-storage \
      --http-bind localhost:8383 \
      --aws-access-key-id <AWS_ACCESS_KEY_ID> \
      --aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
    ```

    ```bash
    ## NODE 5 — Read Node #2

    # Example variables
    # node-id: 'host05'
    # cluster-id: 'cluster01'
    # bucket: 'influxdb-3-enterprise-storage'

    influxdb3 serve \
      --node-id host05 \
      --cluster-id cluster01 \
      --mode query \
      --object-store s3 \
      --bucket influxdb-3-enterprise-storage \
      --http-bind localhost:8484 \
      --aws-access-key-id <AWS_ACCESS_KEY_ID> \
      --aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
    ```

Congratulations, you have a robust setup for workload isolation using {{% product-name %}}.

### Writing and querying for multi-node setups

You can use the default port `8181` for any write or query, without changing any of the commands.

> [!Note]
> #### Specify hosts for writes and queries
>
> To benefit from this multi-node, isolated architecture, specify hosts:
>
> - In write requests, specify a host that you have designated as _write-only_.
> - In query requests, specify a host that you have designated as _read-only_.
>
> When running multiple local instances for testing or separate nodes in production, specifying the host ensures writes and queries are routed to the correct instance.

{{% code-placeholders "(http://localhost:8585)|AUTH_TOKEN|DATABASE_NAME|QUERY" %}}
```bash
# Example querying a specific host
# HTTP-bound Port: 8585
influxdb3 query \
  --host http://localhost:8585 \
  --token AUTH_TOKEN \
  --database DATABASE_NAME "QUERY"
```
{{% /code-placeholders %}}

Replace the following placeholders with your values:

- {{% code-placeholder-key %}}`http://localhost:8585`{{% /code-placeholder-key %}}: the host and port of the node to query
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in "enterprise" %}} with permission to query the specified database{{% /show-in %}}
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database to query
- {{% code-placeholder-key %}}`QUERY`{{% /code-placeholder-key %}}: the SQL or InfluxQL query to run against the database
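A write-side counterpart to the query example above can look like the following sketch. The port (`8282`) and placeholder values are examples, and it assumes your write-only node is bound to that port and that `influxdb3 write` reads line protocol from stdin:

```bash
# Hypothetical sketch: route a write to a node designated as write-only.
# Port 8282 and the placeholder values are examples only.
echo "home,room=Kitchen temp=23.3" | influxdb3 write \
  --host http://localhost:8282 \
  --token AUTH_TOKEN \
  --database DATABASE_NAME
```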
@@ -0,0 +1,24 @@
---
title: Process data in {{% product-name %}}
seotitle: Process data | Get started with {{% product-name %}}
description: >
  Learn how to use the {{% product-name %}} Processing Engine to process data and
  perform various tasks like downsampling, alerting, forecasting, data
  normalization, and more.
menu:
  influxdb3_enterprise:
    name: Process data
    identifier: gs-process-data
    parent: Get started
weight: 104
related:
  - /influxdb3/enterprise/plugins/
  - /influxdb3/enterprise/reference/cli/influxdb3/create/plugin/
  - /influxdb3/enterprise/reference/cli/influxdb3/create/trigger/
source: /shared/influxdb3-get-started/processing-engine.md
---

<!--
The content of this page is at
// SOURCE content/shared/influxdb3-get-started/processing-engine.md
-->
@@ -2,7 +2,8 @@
title: Query data in {{% product-name %}}
seotitle: Query data | Get started with {{% product-name %}}
description: >
  Learn how to get started querying data in {{% product-name %}} using native
  SQL or InfluxQL with the `influxdb3` CLI and other tools.
menu:
  influxdb3_enterprise:
    name: Query data
@@ -12,6 +13,7 @@ weight: 103
related:
  - /influxdb3/enterprise/query-data/
  - /influxdb3/enterprise/reference/sql/
  - https://datafusion.apache.org/user-guide/sql/index.html, Apache DataFusion SQL reference
  - /influxdb3/enterprise/reference/influxql/
source: /shared/influxdb3-get-started/query.md
---
@@ -2,7 +2,7 @@
title: Set up {{% product-name %}}
seotitle: Set up InfluxDB | Get started with {{% product-name %}}
description: >
  Install, configure, and set up authorization for {{% product-name %}}.
menu:
  influxdb3_enterprise:
    name: Set up Enterprise
@@ -2,7 +2,8 @@
title: Write data to {{% product-name %}}
seotitle: Write data | Get started with {{% product-name %}}
description: >
  Learn how to write time series data to {{% product-name %}} using the
  `influxdb3` CLI and _line protocol_, an efficient, human-readable write syntax.
menu:
  influxdb3_enterprise:
    name: Write data
@@ -14,6 +14,7 @@ alt_links:
- [Quick install](#quick-install)
- [Download {{% product-name %}} binaries](#download-influxdb-3-{{< product-key >}}-binaries)
- [Docker image](#docker-image)
- [Create a multi-node cluster](#create-a-multi-node-cluster)

## System requirements
@@ -208,4 +209,6 @@ influxdb:3-{{< product-key >}}
>
> Currently, a bug prevents using {{< keybind all="Ctrl+c" >}} in the terminal to stop an InfluxDB 3 container.

{{< children hlevel="h2" >}}

{{< page-nav next="/influxdb3/enterprise/get-started/" nextText="Get started with InfluxDB 3 Enterprise" >}}
@@ -0,0 +1,481 @@
---
title: Create a multi-node cluster
seotitle: Create a multi-node InfluxDB 3 Enterprise cluster
description: >
  Create a multi-node InfluxDB 3 Enterprise cluster for high availability,
  performance, read replicas, and more to meet the specific needs of your use case.
menu:
  influxdb3_enterprise:
    name: Create a multi-node cluster
    parent: Install InfluxDB 3 Enterprise
weight: 101
influxdb3/enterprise/tags: [cluster, multi-node, multi-server]
---

{{% product-name %}} supports flexible, multi-node configurations for high
availability, performance, read replicas, and more to meet the specific needs
of your use case.
The {{% product-name %}} server can run in different _modes_ to fulfill specific
roles in your multi-node cluster.
With the diskless architecture, all nodes in the cluster share a common
object store.

- [Create an object store](#create-an-object-store)
- [Connect to your object store](#connect-to-your-object-store)
- [Server modes](#server-modes)
  - [Server mode examples](#server-mode-examples)
    - [Configure a node to only handle write requests](#configure-a-node-to-only-handle-write-requests)
    - [Configure a node to only run the Compactor](#configure-a-node-to-only-run-the-compactor)
    - [Configure a node to handle query requests and run the processing engine](#configure-a-node-to-handle-query-requests-and-run-the-processing-engine)
- [InfluxDB 3 Enterprise cluster configuration examples](#influxdb-3-enterprise-cluster-configuration-examples)
  - [High availability cluster](#high-availability-cluster)
  - [High availability with a dedicated Compactor](#high-availability-with-a-dedicated-compactor)
  - [High availability with read replicas and a dedicated Compactor](#high-availability-with-read-replicas-and-a-dedicated-compactor)
  - [Writing and querying in multi-node clusters](#writing-and-querying-in-multi-node-clusters)

## Create an object store

To run a multi-node {{% product-name %}} cluster, nodes must connect to a
common object store. Enterprise supports the following object stores:

- AWS S3 (or S3-compatible)
- Azure Blob Storage
- Google Cloud Storage

> [!Note]
> Refer to your object storage provider's documentation for information about
> setting up an object store.

## Connect to your object store

Depending on your object storage provider, connect nodes in your cluster to the
object store by including provider-specific options when starting each node.

{{< tabs-wrapper >}}
{{% tabs %}}
[S3 or S3-compatible](#)
[Azure Blob Storage](#)
[Google Cloud Storage](#)
{{% /tabs %}}
{{% tab-content %}}
<!---------------------------------- BEGIN S3 --------------------------------->

To use an AWS S3 or S3-compatible object store, provide the following options
with your `influxdb3 serve` command:

- `--object-store`: `s3`
- `--bucket`: Your AWS S3 bucket name
- `--aws-access-key-id`: Your AWS access key ID
  _(can also be defined using the `AWS_ACCESS_KEY_ID` environment variable)_
- `--aws-secret-access-key`: Your AWS secret access key
  _(can also be defined using the `AWS_SECRET_ACCESS_KEY` environment variable)_

{{% code-placeholders "AWS_(BUCKET_NAME|ACCESS_KEY_ID|SECRET_ACCESS_KEY)" %}}
<!-- pytest.mark.skip -->
```bash
influxdb3 serve \
  # ...
  --object-store s3 \
  --bucket AWS_BUCKET_NAME \
  --aws-access-key-id AWS_ACCESS_KEY_ID \
  --aws-secret-access-key AWS_SECRET_ACCESS_KEY
```
{{% /code-placeholders %}}

_For information about other S3-specific settings, see
[Configuration options - AWS](/influxdb3/enterprise/reference/config-options/#aws)._

<!----------------------------------- END S3 ---------------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!-------------------------- BEGIN AZURE BLOB STORAGE ------------------------->

To use Azure Blob Storage as your object store, provide the following options
with your `influxdb3 serve` command:

- `--object-store`: `azure`
- `--bucket`: Your Azure Blob Storage container name
- `--azure-storage-account`: Your Azure Blob Storage storage account name
  _(can also be defined using the `AZURE_STORAGE_ACCOUNT` environment variable)_
- `--azure-storage-access-key`: Your Azure Blob Storage access key
  _(can also be defined using the `AZURE_STORAGE_ACCESS_KEY` environment variable)_

{{% code-placeholders "AZURE_(CONTAINER_NAME|STORAGE_ACCOUNT|STORAGE_ACCESS_KEY)" %}}
<!-- pytest.mark.skip -->
```bash
influxdb3 serve \
  # ...
  --object-store azure \
  --bucket AZURE_CONTAINER_NAME \
  --azure-storage-account AZURE_STORAGE_ACCOUNT \
  --azure-storage-access-key AZURE_STORAGE_ACCESS_KEY
```
{{% /code-placeholders %}}

<!--------------------------- END AZURE BLOB STORAGE -------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!------------------------- BEGIN GOOGLE CLOUD STORAGE ------------------------>

To use Google Cloud Storage as your object store, provide the following options
with your `influxdb3 serve` command:

- `--object-store`: `google`
- `--bucket`: Your Google Cloud Storage bucket name
- `--google-service-account`: The path to your Google credentials JSON file
  _(can also be defined using the `GOOGLE_SERVICE_ACCOUNT` environment variable)_

{{% code-placeholders "GOOGLE_(BUCKET_NAME|SERVICE_ACCOUNT)" %}}
<!-- pytest.mark.skip -->
```bash
influxdb3 serve \
  # ...
  --object-store google \
  --bucket GOOGLE_BUCKET_NAME \
  --google-service-account GOOGLE_SERVICE_ACCOUNT
```
{{% /code-placeholders %}}

<!-------------------------- END GOOGLE CLOUD STORAGE ------------------------->
{{% /tab-content %}}
{{< /tabs-wrapper >}}
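If you prefer to keep credentials off the command line, the environment variables noted in the tabs above can supply them instead. The following is a sketch for S3; the node ID, cluster ID, and placeholder values are examples only:

```bash
# Hypothetical sketch: supply S3 credentials through environment variables
# instead of command-line flags. All values shown are placeholders.
export AWS_ACCESS_KEY_ID="AWS_ACCESS_KEY_ID"
export AWS_SECRET_ACCESS_KEY="AWS_SECRET_ACCESS_KEY"
influxdb3 serve \
  --node-id host01 \
  --cluster-id cluster01 \
  --object-store s3 \
  --bucket AWS_BUCKET_NAME
```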

## Server modes

{{% product-name %}} _modes_ determine what subprocesses the Enterprise node runs.
These subprocesses fulfill required tasks including data ingestion, query
processing, compaction, and running the processing engine.

The `influxdb3 serve --mode` option defines what subprocesses a node runs.
Each node can run in one _or more_ of the following modes:

- **all** _(default)_: Runs all necessary subprocesses.
- **ingest**: Runs the data ingestion subprocess to handle writes.
- **query**: Runs the query processing subprocess to handle queries.
- **process**: Runs the processing engine subprocess to trigger and execute plugins.
- **compact**: Runs the compactor subprocess to optimize data in object storage.

> [!Important]
> Only _one_ node in your cluster can run in `compact` mode.

### Server mode examples
|
||||
|
||||
#### Configure a node to only handle write requests
|
||||
<!-- pytest.mark.skip -->
|
||||
```bash
|
||||
influxdb3 serve \
|
||||
# ...
|
||||
--mode ingest
|
||||
```
|
||||
|
||||
#### Configure a node to only run the Compactor
|
||||
<!-- pytest.mark.skip -->
|
||||
```bash
|
||||
influxdb3 serve \
|
||||
# ...
|
||||
--mode compact
|
||||
```
|
||||
|
||||
#### Configure a node to handle query requests and run the processing engine
|
||||
<!-- pytest.mark.skip -->
|
||||
```bash
|
||||
influxdb3 serve \
|
||||
# ...
|
||||
--mode query,process
|
||||
```
|
||||
|
||||
|
||||
## {{% product-name %}} cluster configuration examples
|
||||
|
||||
<!-- Placeholder for links -->
|
||||
|
||||
### High availability cluster
|
||||
|
||||
A minimum of two nodes is required for basic high availability (HA), with both
|
||||
nodes reading and writing data.
|
||||
|
||||
{{< img-hd src="/img/influxdb/influxdb-3-enterprise-high-availability.png" alt="Basic high availability setup" />}}
|
||||
|
||||
In a basic HA setup:
|
||||
|
||||
- Two nodes both write data to the same object store and both handle queries
|
||||
- Node 1 and Node 2 are _read replicas_ that read from each other’s object store directories
|
||||
- One of the nodes is designated as the Compactor node
|
||||
|
||||
> [!Note]
|
||||
> Only one node can be designated as the Compactor.
|
||||
> Compacted data is meant for a single writer, and many readers.
|
||||
|
||||
The following examples show how to configure and start two nodes for a basic HA
|
||||
setup.
|
||||
|
||||
- _Node 1_ is for compaction
|
||||
- _Node 2_ is for ingest and query
|
||||
|
||||
<!-- pytest.mark.skip -->
|
||||
```bash
|
||||
## NODE 1
|
||||
|
||||
# Example variables
|
||||
# node-id: 'host01'
|
||||
# cluster-id: 'cluster01'
|
||||
# bucket: 'influxdb-3-enterprise-storage'
|
||||
|
||||
influxdb3 serve \
|
||||
--node-id host01 \
|
||||
--cluster-id cluster01 \
|
||||
--mode ingest,query,compact \
|
||||
--object-store s3 \
|
||||
--bucket influxdb-3-enterprise-storage \
|
||||
--http-bind {{< influxdb/host >}} \
|
||||
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
|
||||
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
|
||||
```
|
||||
|
||||
<!-- pytest.mark.skip -->
|
||||
```bash
|
||||
## NODE 2
|
||||
|
||||
# Example variables
|
||||
# node-id: 'host02'
|
||||
# cluster-id: 'cluster01'
|
||||
# bucket: 'influxdb-3-enterprise-storage'
|
||||
|
||||
influxdb3 serve \
|
||||
--node-id host02 \
|
||||
--cluster-id cluster01 \
|
||||
--mode ingest,query \
|
||||
--object-store s3 \
|
||||
--bucket influxdb-3-enterprise-storage \
|
||||
--http-bind localhost:8282 \
|
||||
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
|
||||
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
|
||||
```
|
||||
|
||||
After the nodes have started, querying either node returns data for both nodes,
|
||||
and _NODE 1_ runs compaction.
|
||||
To add nodes to this setup, start more read replicas with the same cluster ID.
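For example, a third read replica might look like the following (the node ID, port, and credential placeholders are illustrative):

<!-- pytest.mark.skip -->
```bash
## NODE 3 — additional read replica

# Example variables
# node-id: 'host03'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'

influxdb3 serve \
  --node-id host03 \
  --cluster-id cluster01 \
  --mode ingest,query \
  --object-store s3 \
  --bucket influxdb-3-enterprise-storage \
  --http-bind localhost:8383 \
  --aws-access-key-id <AWS_ACCESS_KEY_ID> \
  --aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```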
|
||||
|
||||
### High availability with a dedicated Compactor
|
||||
|
||||
Data compaction in {{% product-name %}} is one of the more computationally
|
||||
demanding operations.
|
||||
To ensure stable performance in ingest and query nodes, set up a
|
||||
compactor-only node to isolate the compaction workload.
|
||||
|
||||
{{< img-hd src="/img/influxdb/influxdb-3-enterprise-dedicated-compactor.png" alt="Dedicated Compactor setup" />}}
|
||||
|
||||
The following example sets up high availability with a dedicated Compactor node:
|
||||
|
||||
1. Start two read-write nodes as read replicas, similar to the previous example.
|
||||
|
||||
<!-- pytest.mark.skip -->
|
||||
```bash
|
||||
## NODE 1 — Writer/Reader Node #1
|
||||
|
||||
# Example variables
|
||||
# node-id: 'host01'
|
||||
# cluster-id: 'cluster01'
|
||||
# bucket: 'influxdb-3-enterprise-storage'
|
||||
|
||||
influxdb3 serve \
|
||||
--node-id host01 \
|
||||
--cluster-id cluster01 \
|
||||
--mode ingest,query \
|
||||
--object-store s3 \
|
||||
--bucket influxdb-3-enterprise-storage \
|
||||
--http-bind {{< influxdb/host >}} \
|
||||
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
|
||||
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
|
||||
```
|
||||
|
||||
<!-- pytest.mark.skip -->
|
||||
```bash
|
||||
## NODE 2 — Writer/Reader Node #2
|
||||
|
||||
# Example variables
|
||||
# node-id: 'host02'
|
||||
# cluster-id: 'cluster01'
|
||||
# bucket: 'influxdb-3-enterprise-storage'
|
||||
|
||||
influxdb3 serve \
|
||||
--node-id host02 \
|
||||
--cluster-id cluster01 \
|
||||
--mode ingest,query \
|
||||
--object-store s3 \
|
||||
--bucket influxdb-3-enterprise-storage \
|
||||
--http-bind localhost:8282 \
|
||||
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
|
||||
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
|
||||
```
|
||||
|
||||
2. Start the dedicated compactor node with the `--mode=compact` option to ensure the node **only** runs compaction.
|
||||
|
||||
```bash
|
||||
## NODE 3 — Compactor Node
|
||||
|
||||
# Example variables
|
||||
# node-id: 'host03'
|
||||
# cluster-id: 'cluster01'
|
||||
# bucket: 'influxdb-3-enterprise-storage'
|
||||
|
||||
influxdb3 serve \
|
||||
--node-id host03 \
|
||||
--cluster-id cluster01 \
|
||||
--mode compact \
|
||||
--object-store s3 \
|
||||
--bucket influxdb-3-enterprise-storage \
|
||||
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
|
||||
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
|
||||
```
|
||||
|
||||
### High availability with read replicas and a dedicated Compactor
|
||||
|
||||
For a robust setup that fully isolates workloads when managing time-series data, you can run
|
||||
ingest nodes alongside query nodes and a dedicated Compactor node.
|
||||
|
||||
{{< img-hd src="/img/influxdb/influxdb-3-enterprise-workload-isolation.png" alt="Workload Isolation Setup" />}}
|
||||
|
||||
1. Start ingest nodes with the **`ingest`** mode.
|
||||
|
||||
> [!Note]
|
||||
> Send all write requests to only your ingest nodes.
|
||||
|
||||
```bash
|
||||
## NODE 1 — Writer Node #1
|
||||
|
||||
# Example variables
|
||||
# node-id: 'host01'
|
||||
# cluster-id: 'cluster01'
|
||||
# bucket: 'influxdb-3-enterprise-storage'
|
||||
|
||||
influxdb3 serve \
|
||||
--node-id host01 \
|
||||
--cluster-id cluster01 \
|
||||
--mode ingest \
|
||||
--object-store s3 \
|
||||
--bucket influxdb-3-enterprise-storage \
|
||||
--http-bind {{< influxdb/host >}} \
|
||||
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
|
||||
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
|
||||
```
|
||||
|
||||
<!-- The following examples use different ports for different nodes. Don't use the influxdb/host shortcode below. -->
|
||||
|
||||
```bash
|
||||
## NODE 2 — Writer Node #2
|
||||
|
||||
# Example variables
|
||||
# node-id: 'host02'
|
||||
# cluster-id: 'cluster01'
|
||||
# bucket: 'influxdb-3-enterprise-storage'
|
||||
|
||||
influxdb3 serve \
|
||||
--node-id host02 \
|
||||
--cluster-id cluster01 \
|
||||
--mode ingest \
|
||||
--object-store s3 \
|
||||
--bucket influxdb-3-enterprise-storage \
|
||||
--http-bind localhost:8282 \
|
||||
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
|
||||
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
|
||||
```
|
||||
|
||||
2. Start the dedicated Compactor node with the `compact` mode.
|
||||
|
||||
```bash
|
||||
## NODE 3 — Compactor Node
|
||||
|
||||
# Example variables
|
||||
# node-id: 'host03'
|
||||
# cluster-id: 'cluster01'
|
||||
# bucket: 'influxdb-3-enterprise-storage'
|
||||
|
||||
influxdb3 serve \
|
||||
--node-id host03 \
|
||||
--cluster-id cluster01 \
|
||||
--mode compact \
|
||||
--object-store s3 \
|
||||
--bucket influxdb-3-enterprise-storage \
|
||||
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
|
||||
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
|
||||
```
|
||||
|
||||
3. Finally, start the query nodes using the `query` mode.
|
||||
|
||||
> [!Note]
|
||||
> Send all query requests to only your query nodes.
|
||||
|
||||
```bash
|
||||
## NODE 4 — Read Node #1
|
||||
|
||||
# Example variables
|
||||
# node-id: 'host04'
|
||||
# cluster-id: 'cluster01'
|
||||
# bucket: 'influxdb-3-enterprise-storage'
|
||||
|
||||
influxdb3 serve \
|
||||
--node-id host04 \
|
||||
--cluster-id cluster01 \
|
||||
--mode query \
|
||||
--object-store s3 \
|
||||
--bucket influxdb-3-enterprise-storage \
|
||||
--http-bind localhost:8383 \
|
||||
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
|
||||
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
|
||||
```
|
||||
|
||||
```bash
|
||||
## NODE 5 — Read Node #2
|
||||
|
||||
# Example variables
|
||||
# node-id: 'host05'
|
||||
# cluster-id: 'cluster01'
|
||||
# bucket: 'influxdb-3-enterprise-storage'
|
||||
|
||||
influxdb3 serve \
|
||||
--node-id host05 \
|
||||
--cluster-id cluster01 \
|
||||
--mode query \
|
||||
--object-store s3 \
|
||||
--bucket influxdb-3-enterprise-storage \
|
||||
--http-bind localhost:8484 \
|
||||
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
|
||||
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
|
||||
```
|
||||
|
||||
## Writing and querying in multi-node clusters
|
||||
|
||||
You can use the default port `8181` for any write or query request without
|
||||
changing any of the commands.
|
||||
|
||||
> [!Note]
|
||||
> #### Specify hosts for write and query requests
|
||||
>
|
||||
> To benefit from this multi-node, isolated architecture:
|
||||
>
|
||||
> - Send write requests to a node that you have designated as an ingester.
|
||||
> - Send query requests to a node that you have designated as a querier.
|
||||
>
|
||||
> When running multiple local instances for testing or separate nodes in
|
||||
> production, specifying the host ensures writes and queries are routed to the
|
||||
> correct instance.
|
||||
|
||||
{{% code-placeholders "(http://localhost:8585)|AUTH_TOKEN|DATABASE_NAME|QUERY" %}}
|
||||
```bash
|
||||
# Example querying a specific host
|
||||
# HTTP-bound Port: 8585
|
||||
influxdb3 query \
|
||||
--host http://localhost:8585 \
|
||||
--token AUTH_TOKEN \
|
||||
--database DATABASE_NAME \
|
||||
"QUERY"
|
||||
```
|
||||
{{% /code-placeholders %}}
|
||||
|
||||
Replace the following placeholders with your values:
|
||||
|
||||
- {{% code-placeholder-key %}}`http://localhost:8585`{{% /code-placeholder-key %}}: the host and port of the node to query
|
||||
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in "enterprise" %}} with permission to query the specified database{{% /show-in %}}
|
||||
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database to query
|
||||
- {{% code-placeholder-key %}}`QUERY`{{% /code-placeholder-key %}}: the SQL or InfluxQL query to run against the database
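Writes follow the same pattern: direct them at an ingest node's bind address. The following sketch uses the `influxdb3 write` command; the host, port, and line protocol are illustrative:

<!-- pytest.mark.skip -->
```bash
# Example writing to a specific host
# HTTP-bound Port: 8181
influxdb3 write \
  --host http://localhost:8181 \
  --token AUTH_TOKEN \
  --database DATABASE_NAME \
  'home,room=Kitchen temp=23.3'
```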
|
||||
|
|
@ -254,6 +254,8 @@ export DATABASE_NODE=node0 && influxdb3 serve \
|
|||
--cluster-id cluster0 \
|
||||
--object-store file \
|
||||
--data-dir ~/.influxdb3/data
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
#### object-store
|
||||
|
|
|
|||
|
|
@ -12,6 +12,7 @@ influxdb3 create <SUBCOMMAND>
|
|||
|
||||
## Subcommands
|
||||
|
||||
{{% show-in "enterprise" %}}
|
||||
| Subcommand | Description |
|
||||
| :---------------------------------------------------------------------------------- | :---------------------------------------------- |
|
||||
| [database](/influxdb3/version/reference/cli/influxdb3/create/database/) | Create a new database |
|
||||
|
|
@ -22,6 +23,19 @@ influxdb3 create <SUBCOMMAND>
|
|||
| [token](/influxdb3/version/reference/cli/influxdb3/create/token/) | Create a new authentication token |
|
||||
| [trigger](/influxdb3/version/reference/cli/influxdb3/create/trigger/) | Create a new trigger for the processing engine |
|
||||
| help | Print command help or the help of a subcommand |
|
||||
{{% /show-in %}}
|
||||
|
||||
{{% show-in "core" %}}
|
||||
| Subcommand | Description |
|
||||
| :---------------------------------------------------------------------------------- | :---------------------------------------------- |
|
||||
| [database](/influxdb3/version/reference/cli/influxdb3/create/database/) | Create a new database |
|
||||
| [last_cache](/influxdb3/version/reference/cli/influxdb3/create/last_cache/) | Create a new last value cache |
|
||||
| [distinct_cache](/influxdb3/version/reference/cli/influxdb3/create/distinct_cache/) | Create a new distinct value cache |
|
||||
| [table](/influxdb3/version/reference/cli/influxdb3/create/table/) | Create a new table in a database |
|
||||
| [token](/influxdb3/version/reference/cli/influxdb3/create/token/) | Create a new authentication token |
|
||||
| [trigger](/influxdb3/version/reference/cli/influxdb3/create/trigger/) | Create a new trigger for the processing engine |
|
||||
| help | Print command help or the help of a subcommand |
|
||||
{{% /show-in %}}
|
||||
|
||||
## Options
|
||||
|
||||
|
|
|
|||
|
|
@ -11,16 +11,30 @@ influxdb3 delete <SUBCOMMAND>
|
|||
|
||||
## Subcommands
|
||||
|
||||
| Subcommand | Description |
|
||||
| :----------------------------------------------------------------------------- | :--------------------------------------------- |
|
||||
| [database](/influxdb3/version/reference/cli/influxdb3/delete/database/) | Delete a database |
|
||||
| [file_index](/influxdb3/version/reference/cli/influxdb3/delete/file_index/) | Delete a file index for a database or table |
|
||||
| [last_cache](/influxdb3/version/reference/cli/influxdb3/delete/last_cache/) | Delete a last value cache |
|
||||
{{% show-in "enterprise" %}}
|
||||
| Subcommand | Description |
|
||||
| :---------------------------------------------------------------------------------- | :--------------------------------------------- |
|
||||
| [database](/influxdb3/version/reference/cli/influxdb3/delete/database/) | Delete a database |
|
||||
| [file_index](/influxdb3/version/reference/cli/influxdb3/delete/file_index/) | Delete a file index for a database or table |
|
||||
| [last_cache](/influxdb3/version/reference/cli/influxdb3/delete/last_cache/) | Delete a last value cache |
|
||||
| [distinct_cache](/influxdb3/version/reference/cli/influxdb3/delete/distinct_cache/) | Delete a distinct value cache |
|
||||
| [plugin](/influxdb3/version/reference/cli/influxdb3/delete/plugin/) | Delete a processing engine plugin |
|
||||
| [table](/influxdb3/version/reference/cli/influxdb3/delete/table/) | Delete a table from a database |
|
||||
| [trigger](/influxdb3/version/reference/cli/influxdb3/delete/trigger/) | Delete a trigger for the processing engine |
|
||||
| help | Print command help or the help of a subcommand |
|
||||
|
||||
{{% /show-in %}}
|
||||
|
||||
{{% show-in "core" %}}
|
||||
| Subcommand | Description |
|
||||
| :---------------------------------------------------------------------------------- | :--------------------------------------------- |
|
||||
| [database](/influxdb3/version/reference/cli/influxdb3/delete/database/) | Delete a database |
|
||||
| [last_cache](/influxdb3/version/reference/cli/influxdb3/delete/last_cache/) | Delete a last value cache |
|
||||
| [distinct_cache](/influxdb3/version/reference/cli/influxdb3/delete/distinct_cache/) | Delete a distinct value cache |
|
||||
| [plugin](/influxdb3/version/reference/cli/influxdb3/delete/plugin/) | Delete a processing engine plugin |
|
||||
| [table](/influxdb3/version/reference/cli/influxdb3/delete/table/) | Delete a table from a database |
|
||||
| [trigger](/influxdb3/version/reference/cli/influxdb3/delete/trigger/) | Delete a trigger for the processing engine |
|
||||
| help | Print command help or the help of a subcommand |
|
||||
{{% /show-in %}}
|
||||
|
||||
## Options
|
||||
|
||||
|
|
|
|||
|
|
@ -72,49 +72,3 @@ This tutorial covers many of the recommended tools.
|
|||
{{< show-in "enterprise" >}}
|
||||
{{< page-nav next="/influxdb3/enterprise/get-started/setup/" nextText="Set up InfluxDB 3 Enterprise" >}}
|
||||
{{< /show-in >}}
|
||||
|
||||
<!--
|
||||
TO-DOs
|
||||
- Move this to its own management section
|
||||
- Learn exactly how file indexes work
|
||||
- Add this content to optimizing queries
|
||||
|
||||
### File index settings
|
||||
|
||||
To accelerate performance on specific queries, you can define non-primary keys to index on, which helps improve performance for single-series queries.
|
||||
This feature is only available in {{% product-name %}} and is not available in Core.
|
||||
|
||||
#### Create a file index
|
||||
|
||||
{{% code-placeholders "AUTH_TOKEN|DATABASE_NAME|TABLE_NAME|COLUMNS" %}}
|
||||
|
||||
```bash
|
||||
# Example variables on a query
|
||||
# HTTP-bound Port: 8585
|
||||
|
||||
influxdb3 create file_index \
|
||||
--host http://localhost:8585 \
|
||||
--token AUTH_TOKEN \
|
||||
--database DATABASE_NAME \
|
||||
--table TABLE_NAME \
|
||||
COLUMNS
|
||||
```
|
||||
|
||||
#### Delete a file index
|
||||
|
||||
```bash
|
||||
influxdb3 delete file_index \
|
||||
--host http://localhost:8585 \
|
||||
--database DATABASE_NAME \
|
||||
--table TABLE_NAME
|
||||
```
|
||||
{{% /code-placeholders %}}
|
||||
|
||||
Replace the following placeholders with your values:
|
||||
|
||||
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "admin" %}}
|
||||
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database to create the file index in
|
||||
- {{% code-placeholder-key %}}`TABLE_NAME`{{% /code-placeholder-key %}}: the name of the table to create the file index in
|
||||
- {{% code-placeholder-key %}}`COLUMNS`{{% /code-placeholder-key %}}: a comma-separated list of columns to index on, for example, `host,application`
|
||||
-->
|
||||
|
||||
|
|
|
|||
|
|
@ -1,31 +1,73 @@
|
|||
### Python plugins and the processing engine
|
||||
The {{% product-name %}} processing engine is an embedded Python virtual machine
|
||||
(VM) that runs code inside the database to process and transform data.
|
||||
Create processing engine [plugins](#plugin) that run when [triggered](#trigger)
|
||||
by specific events.
|
||||
|
||||
The InfluxDB 3 processing engine is an embedded Python VM for running code inside the database to process and transform data.
|
||||
- [Processing engine terminology](#processing-engine-terminology)
|
||||
- [Plugin](#plugin)
|
||||
- [Trigger](#trigger)
|
||||
- [Trigger types](#trigger-types)
|
||||
- [Activate the processing engine](#activate-the-processing-engine)
|
||||
- [Create a plugin](#create-a-plugin)
|
||||
- [Test a plugin on the server](#test-a-plugin-on-the-server)
|
||||
- [Create a trigger](#create-a-trigger)
|
||||
- [Enable the trigger](#enable-the-trigger)
|
||||
|
||||
To activate the processing engine, pass the `--plugin-dir <PLUGIN_DIR>` option when starting the {{% product-name %}} server.
|
||||
`PLUGIN_DIR` is your filesystem location for storing [plugin](#plugin) files for the processing engine to run.
|
||||
## Processing engine terminology
|
||||
|
||||
#### Plugin
|
||||
### Plugin
|
||||
|
||||
A plugin is a Python function that has a signature compatible with a Processing engine [trigger](#trigger).
|
||||
A plugin is a Python function that has a signature compatible with a processing
|
||||
engine [trigger](#trigger).
|
||||
|
||||
#### Trigger
|
||||
### Trigger
|
||||
|
||||
When you create a trigger, you specify a [plugin](#plugin), a database, optional arguments,
|
||||
and a _trigger-spec_, which defines when the plugin is executed and what data it receives.
|
||||
When you create a trigger, you specify a [plugin](#plugin), a database, optional
|
||||
arguments, and a _trigger-spec_, which defines when the plugin is executed and
|
||||
what data it receives.
|
||||
|
||||
##### Trigger types
|
||||
#### Trigger types
|
||||
|
||||
InfluxDB 3 provides the following types of triggers, each with specific trigger-specs:
|
||||
InfluxDB 3 provides the following types of triggers, each with specific
|
||||
trigger-specs:
|
||||
|
||||
- **On WAL flush**: Sends a batch of written data (for a specific table or all tables) to a plugin (by default, every second).
|
||||
- **On Schedule**: Executes a plugin on a user-configured schedule (using a crontab or a duration); useful for data collection and deadman monitoring.
|
||||
- **On Request**: Binds a plugin to a custom HTTP API endpoint at `/api/v3/engine/<ENDPOINT_PATH>`.
|
||||
The plugin receives the HTTP request headers and content, and can then parse, process, and send the data into the database or to third-party services.
|
||||
- **On WAL flush**: Sends a batch of written data (for a specific table or all
|
||||
tables) to a plugin (by default, every second).
|
||||
- **On Schedule**: Executes a plugin on a user-configured schedule (using a
|
||||
crontab or a duration). This trigger type is useful for data collection and
|
||||
deadman monitoring.
|
||||
- **On Request**: Binds a plugin to a custom HTTP API endpoint at
|
||||
`/api/v3/engine/<ENDPOINT_PATH>`.
|
||||
The plugin receives the HTTP request headers and content, and can parse,
|
||||
process, and send the data into the database or to third-party services.
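Each trigger type maps to a different trigger-spec string passed to `influxdb3 create trigger`. The spec formats and flag names below are assumptions based on the trigger CLI reference; run `influxdb3 create trigger -h` to confirm:

<!-- pytest.mark.skip -->
```bash
# Hypothetical trigger-spec values (formats assumed from the CLI reference):
#   WAL flush, single table:  table:home
#   WAL flush, all tables:    all_tables
#   Schedule, duration:       every:10s
#   Schedule, cron:           cron:0 * * * *
#   HTTP request endpoint:    request:my_endpoint  (serves /api/v3/engine/my_endpoint)
influxdb3 create trigger \
  --database DATABASE_NAME \
  --plugin-filename my_plugin.py \
  --trigger-spec "every:10s" \
  my_trigger
```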
|
||||
|
||||
### Test, create, and trigger plugin code
|
||||
## Activate the processing engine
|
||||
|
||||
##### Example: Python plugin for WAL rows
|
||||
To activate the processing engine, include the `--plugin-dir <PLUGIN_DIR>` option
|
||||
when starting the {{% product-name %}} server.
|
||||
`PLUGIN_DIR` is your file system location for storing [plugin](#plugin) files for
|
||||
the processing engine to run.
|
||||
|
||||
{{% code-placeholders "PLUGIN_DIR" %}}
|
||||
<!-- pytest.mark.skip -->
|
||||
```bash
|
||||
influxdb3 serve \
|
||||
# ...
|
||||
--plugin-dir PLUGIN_DIR
|
||||
```
|
||||
{{% /code-placeholders %}}
|
||||
|
||||
Replace {{% code-placeholder-key %}}`PLUGIN_DIR`{{% /code-placeholder-key %}}
|
||||
with the path to your plugin directory. This path can be absolute or relative
|
||||
to the current working directory of the `influxdb3` server.
|
||||
|
||||
## Create a plugin
|
||||
|
||||
To create a plugin, write and store a Python file in your configured `PLUGIN_DIR`.
|
||||
The following example is a WAL flush plugin that processes data before it gets
|
||||
persisted to the object store.
|
||||
|
||||
##### Example Python plugin for WAL rows
|
||||
|
||||
```python
|
||||
# This is the basic structure for Python plugin code that runs in the
|
||||
|
|
@ -33,7 +75,7 @@ InfluxDB 3 provides the following types of triggers, each with specific trigger-
|
|||
|
||||
# When creating a trigger, you can provide runtime arguments to your plugin,
|
||||
# allowing you to write generic code that uses variables such as monitoring
|
||||
thresholds, environment variables, and host names.
|
||||
# thresholds, environment variables, and host names.
|
||||
#
|
||||
# Use the following exact signature to define a function for the WAL flush
|
||||
# trigger.
|
||||
|
|
@ -48,11 +90,11 @@ def process_writes(influxdb3_local, table_batches, args=None):
|
|||
|
||||
# here we're using arguments provided at the time the trigger was set up
|
||||
    # to feed into parameters that we'll put into a query
|
||||
query_params = {"host": "foo"}
|
||||
query_params = {"room": "Kitchen"}
|
||||
# here's an example of executing a parameterized query. Only SQL is supported.
|
||||
# It will query the database that the trigger is attached to by default. We'll
|
||||
# soon have support for querying other DBs.
|
||||
query_result = influxdb3_local.query("SELECT * FROM cpu where host = '$host'", query_params)
|
||||
    query_result = influxdb3_local.query("SELECT * FROM home where room = '$room'", query_params)
|
||||
# the result is a list of Dict that have the column name as key and value as
|
||||
# value. If you run the WAL test plugin with your plugin against a DB that
|
||||
# you've written data into, you'll be able to see some results
|
||||
|
|
@ -100,19 +142,20 @@ def process_writes(influxdb3_local, table_batches, args=None):
|
|||
influxdb3_local.info("done")
|
||||
```
|
||||
|
||||
##### Test a plugin on the server
|
||||
## Test a plugin on the server
|
||||
|
||||
Test your InfluxDB 3 plugin safely without affecting written data. During a plugin test:
|
||||
{{% product-name %}} lets you test your processing engine plugin safely without
|
||||
affecting actual data. During a plugin test:
|
||||
|
||||
- A query executed by the plugin runs against the server that receives the test request.
|
||||
- Writes aren't sent to the server but are returned to you.
|
||||
|
||||
To test a plugin, do the following:
|
||||
To test a plugin:
|
||||
|
||||
1. Create a _plugin directory_--for example, `/path/to/.influxdb/plugins`
|
||||
2. [Start the InfluxDB server](#start-influxdb) and include the `--plugin-dir <PATH>` option.
|
||||
3. Save the [example plugin code](#example-python-plugin-for-wal-rows) to a plugin file inside of the plugin directory. If you haven't yet written data to the table in the example, comment out the lines where it queries.
|
||||
4. To run the test, enter the following command with the following options:
|
||||
1. Save the [example plugin code](#example-python-plugin-for-wal-rows) to a
|
||||
   plugin file inside the plugin directory. If you haven't yet written data
|
||||
to the table in the example, comment out the lines where it queries.
|
||||
2. To run the test, enter the following command with these options:
|
||||
|
||||
- `--lp` or `--file`: The line protocol to test
|
||||
- Optional: `--input-arguments`: A comma-delimited list of `<KEY>=<VALUE>` arguments for your plugin code
|
||||
|
|
@ -120,15 +163,15 @@ To test a plugin, do the following:
|
|||
{{% code-placeholders "INPUT_LINE_PROTOCOL|INPUT_ARGS|DATABASE_NAME|AUTH_TOKEN|PLUGIN_FILENAME" %}}
|
||||
```bash
|
||||
influxdb3 test wal_plugin \
|
||||
--lp INPUT_LINE_PROTOCOL \
|
||||
--input-arguments INPUT_ARGS \
|
||||
--database DATABASE_NAME \
|
||||
--token AUTH_TOKEN \
|
||||
PLUGIN_FILENAME
|
||||
--database DATABASE_NAME \
|
||||
--token AUTH_TOKEN \
|
||||
--lp INPUT_LINE_PROTOCOL \
|
||||
--input-arguments INPUT_ARGS \
|
||||
PLUGIN_FILENAME
|
||||
```
|
||||
{{% /code-placeholders %}}
|
||||
|
||||
Replace the following placeholders with your values:
|
||||
Replace the following:
|
||||
|
||||
- {{% code-placeholder-key %}}`INPUT_LINE_PROTOCOL`{{% /code-placeholder-key %}}: the line protocol to test
|
||||
- Optional: {{% code-placeholder-key %}}`INPUT_ARGS`{{% /code-placeholder-key %}}: a comma-delimited list of `<KEY>=<VALUE>` arguments for your plugin code--for example, `arg1=hello,arg2=world`
|
||||
|
|
@ -136,21 +179,16 @@ Replace the following placeholders with your values:
|
|||
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: the {{% token-link "admin" %}} for your {{% product-name %}} server
|
||||
- {{% code-placeholder-key %}}`PLUGIN_FILENAME`{{% /code-placeholder-key %}}: the name of the plugin file to test
|
||||
|
||||
The command runs the plugin code with the test data, yields the data to the plugin code, and then responds with the plugin result.
|
||||
You can quickly see how the plugin behaves, what data it would have written to the database, and any errors.
|
||||
The command runs the plugin code with the test data, yields the data to the
|
||||
plugin code, and then responds with the plugin result.
|
||||
You can quickly see how the plugin behaves, what data it would have written to
|
||||
the database, and any errors.
|
||||
You can then edit your Python code in the plugins directory, and rerun the test.
|
||||
The server reloads the file for every request to the `test` API.
|
||||
|
||||
For more information, see [`influxdb3 test wal_plugin`](/influxdb3/version/reference/cli/influxdb3/test/wal_plugin/) or run `influxdb3 test wal_plugin -h`.
|
||||
|
||||
With the plugin code inside the server plugin directory, and a successful test,
|
||||
you're ready to create a plugin and a trigger to run on the server.
|
||||
|
||||
##### Example: Test, create, and run a plugin
|
||||
|
||||
The following example shows how to test a plugin, and then create the plugin and
|
||||
trigger:
|
||||
|
||||
<!-- pytest.mark.skip -->
|
||||
```bash
|
||||
# Test and create a plugin
|
||||
# Requires:
|
||||
|
|
@ -165,6 +203,16 @@ influxdb3 test wal_plugin \
|
|||
test.py
|
||||
```
|
||||
|
||||
For more information, see [`influxdb3 test wal_plugin`](/influxdb3/version/reference/cli/influxdb3/test/wal_plugin/)
|
||||
or run `influxdb3 test wal_plugin -h`.
|
||||
|
||||
## Create a trigger
|
||||
|
||||
With the plugin code inside the server plugin directory, and a successful test,
|
||||
you're ready to create a trigger to run the plugin. Use the
|
||||
[`influxdb3 create trigger` command](/influxdb3/version/reference/cli/influxdb3/create/trigger/)
|
||||
to create a trigger.
|
||||
|
||||
```bash
|
||||
# Create a trigger that runs the plugin
|
||||
influxdb3 create trigger \
|
||||
|
|
@ -176,6 +224,8 @@ influxdb3 create trigger \
|
|||
trigger1
|
||||
```
|
||||
|
||||
## Enable the trigger
|
||||
|
||||
After you have created a plugin and trigger, enter the following command to
|
||||
enable the trigger and have it run the plugin as you write data:
|
||||
|
||||
|
|
|
|||
|
|
@ -13,7 +13,24 @@ the [update on InfluxDB 3 Core’s 72-hour limitation](https://www.influxdata.co
|
|||
> [!Note]
|
||||
> Flux, the language introduced in InfluxDB v2, is **not** supported in InfluxDB 3.
|
||||
|
||||
|
||||
<!-- TOC -->
|
||||
|
||||
- [Query data with the influxdb3 CLI](#query-data-with-the-influxdb3-cli)
|
||||
- [Example queries](#example-queries)
|
||||
- [Other tools for executing queries](#other-tools-for-executing-queries)
|
||||
- [SQL vs InfluxQL](#sql-vs-influxql)
|
||||
- [SQL](#sql)
|
||||
- [InfluxQL](#influxql)
|
||||
- [Optimize queries](#optimize-queries)
|
||||
- [Last values cache](#last-values-cache)
|
||||
- [Distinct values cache](#distinct-values-cache)
|
||||
{{% show-in "enterprise" %}}- [File indexes](#file-indexes){{% /show-in %}}
|
||||
|
||||
<!-- /TOC -->
|
||||
|
||||
## Query data with the influxdb3 CLI
|
||||
|
||||
To get started querying data in {{% product-name %}}, use the
|
||||
[`influxdb3 query` command](/influxdb3/version/reference/cli/influxdb3/query/)
|
||||
and provide the following:
|
||||
|
||||
|
|
@ -98,7 +115,7 @@ influxdb3 query \
|
|||
|
||||
{{% /code-placeholders %}}
|
||||
|
||||
## Example queries
|
||||
### Example queries
|
||||
|
||||
{{< expand-wrapper >}}
|
||||
{{% expand "List tables in a database" %}}
@@ -269,12 +286,14 @@ GROUP BY
{{% /expand %}}
{{< /expand-wrapper >}}

## Other tools for executing queries

## SQL vs InfluxQL
Other tools are available for querying data in {{% product-name %}}, including
the following:

## LVC, DVC

### Query using the API
{{< expand-wrapper >}}
{{% expand "Query using the API" %}}
#### Query using the API

InfluxDB 3 supports Flight (gRPC) APIs and an HTTP API.
To query your database using the HTTP API, send a request to the `/api/v3/query_sql` or `/api/v3/query_influxql` endpoints.
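To sketch what an HTTP API query looks like from code, the following Python example builds a request for the `/api/v3/query_sql` endpoint. The host, port (`8181` is the default for a local instance), database name, and token shown here are placeholders for illustration; see the HTTP API reference for the full set of request parameters.

```python
import json
import urllib.request


def build_query_request(host, database, query, token, language="sql"):
    """Build a POST request for the /api/v3/query_sql or
    /api/v3/query_influxql endpoint."""
    payload = json.dumps({
        "db": database,       # database to query
        "q": query,           # SQL or InfluxQL query text
        "format": "jsonl",    # response format
    }).encode()
    return urllib.request.Request(
        url=f"{host}/api/v3/query_{language}",
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Example: query a hypothetical "servers" database on a local instance.
req = build_query_request(
    "http://localhost:8181", "servers", "SELECT * FROM cpu LIMIT 10", "AUTH_TOKEN"
)
# Send with: urllib.request.urlopen(req)
```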
@@ -318,7 +337,11 @@ Replace the following placeholders with your values:

- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database to query
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in "enterprise" %}} with permission to query the specified database{{% /show-in %}}

### Query using the Python client
{{% /expand %}}

{{% expand "Query using the Python client" %}}

#### Query using the Python client

Use the InfluxDB 3 Python library to interact with the database and integrate with your application.
We recommend installing the required packages in a Python virtual environment for your specific project.
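For example, a typical setup might look like the following (the environment name `.venv` is arbitrary):

```shell
# Create and activate a virtual environment for the project
python -m venv .venv
source .venv/bin/activate

# Install the InfluxDB 3 Python client
pip install influxdb3-python
```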
@@ -382,177 +405,99 @@ print(table.group_by('host').aggregate([]))
print(table.group_by('cpu').aggregate([('time_system', 'mean')]))
```

For more information about the Python client library, see the [`influxdb3-python` repository](https://github.com/InfluxCommunity/influxdb3-python) on GitHub.
For more information about the Python client library, see the
[`influxdb3-python` repository](https://github.com/InfluxCommunity/influxdb3-python)
on GitHub.

### Query using InfluxDB 3 Explorer (Beta)
{{% /expand %}}

{{% expand "Query using InfluxDB 3 Explorer" %}}

#### Query using InfluxDB 3 Explorer

You can use the InfluxDB 3 Explorer web-based interface to query and visualize data,
and administer your {{% product-name %}} instance.
For more information, see how to [install InfluxDB 3 Explorer (Beta)](/influxdb3/explorer/install/) using Docker
and get started querying your data.
For more information, see how to [install InfluxDB 3 Explorer](/influxdb3/explorer/install/)
using Docker and get started querying your data.
{{% /expand %}}
{{< /expand-wrapper >}}

## SQL vs InfluxQL

{{% product-name %}} supports two query languages--SQL and InfluxQL.
While these two query languages are similar, there are important differences to
consider.

### SQL

The InfluxDB 3 SQL implementation provides a full-featured SQL query engine
powered by [Apache DataFusion](https://datafusion.apache.org/). InfluxDB extends
DataFusion with additional time series-specific functionality and supports
complex SQL queries, including queries that use joins, unions, window functions,
and more.

- [SQL query guides](/influxdb3/version/query-data/sql/)
- [SQL reference](/influxdb3/version/reference/sql/)
- [Apache DataFusion SQL reference](https://datafusion.apache.org/user-guide/sql/index.html)
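For example, the following sketch runs a window-function query with the `influxdb3 query` CLI, assuming the `servers` database and `cpu` table used elsewhere in this guide:

```shell
# Average CPU usage per host alongside each raw reading
influxdb3 query \
  --token AUTH_TOKEN \
  --database servers \
  "SELECT host, usage_percent, avg(usage_percent) OVER (PARTITION BY host) AS host_avg FROM cpu"
```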
### InfluxQL

InfluxQL is a SQL-like query language built for InfluxDB v1 and supported in
{{% product-name %}}. Its syntax and functionality are similar to SQL, but
specifically designed for querying time series data. InfluxQL does not offer
the full range of query functionality that SQL does.

If you are migrating from previous versions of InfluxDB, you can continue to use
InfluxQL and the established InfluxQL-related APIs you have been using.

- [InfluxQL query guides](/influxdb3/version/query-data/influxql/)
- [InfluxQL reference](/influxdb3/version/reference/influxql/)
- [InfluxQL feature support](/influxdb3/version/reference/influxql/feature-support/)
## Optimize queries

{{% product-name %}} provides the following optimization options to improve
specific kinds of queries:

- [Last values cache](#last-values-cache)
- [Distinct values cache](#distinct-values-cache)
{{% show-in "enterprise" %}}- [File indexes](#file-indexes){{% /show-in %}}

### Last values cache

{{% product-name %}} supports a **last-n values cache** which stores the last N values in a series or column hierarchy in memory. This gives the database the ability to answer these kinds of queries in under 10 milliseconds.
The {{% product-name %}} last values cache (LVC) stores the last N values in a
series or column hierarchy in memory. This gives the database the ability to
answer queries for the most recent values in under 10 milliseconds.
For information about configuring and using the LVC, see:

You can use the `influxdb3` CLI to [create a last values cache](/influxdb3/version/reference/cli/influxdb3/create/last_cache/).

{{% code-placeholders "DATABASE_NAME|AUTH_TOKEN|TABLE_NAME|CACHE_NAME" %}}
```bash
influxdb3 create last_cache \
  --token AUTH_TOKEN \
  --database DATABASE_NAME \
  --table TABLE_NAME \
  CACHE_NAME
```
{{% /code-placeholders %}}

Replace the following placeholders with your values:

- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database to create the last values cache in
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "admin" %}}
- {{% code-placeholder-key %}}`TABLE_NAME`{{% /code-placeholder-key %}}: the name of the table to create the last values cache in
- {{% code-placeholder-key %}}`CACHE_NAME`{{% /code-placeholder-key %}}: optionally, a name for the new cache

Consider the following `cpu` sample table:

| host    | application | time                | usage\_percent | status |
| ------- | ----------- | ------------------- | -------------- | ------ |
| Bravo   | database    | 2024-12-11T10:00:00 | 55.2           | OK     |
| Charlie | cache       | 2024-12-11T10:00:00 | 65.4           | OK     |
| Bravo   | database    | 2024-12-11T10:01:00 | 70.1           | Warn   |
| Bravo   | database    | 2024-12-11T10:01:00 | 80.5           | OK     |
| Alpha   | webserver   | 2024-12-11T10:02:00 | 25.3           | Warn   |

The following command creates a last values cache named `cpuCache`:

```bash
influxdb3 create last_cache \
  --token apiv3_0xxx0o0XxXxx00Xxxx000xXXxoo0== \
  --database servers \
  --table cpu \
  --key-columns host,application \
  --value-columns usage_percent,status \
  --count 5 cpuCache
```

_You can create a last values cache per time series, but be mindful of high-cardinality tables that could take excessive memory._

#### Query a last values cache

To query data from the LVC, use the [`last_cache()`](/influxdb3/version/reference/sql/functions/cache/#last_cache) function in your query--for example:

```bash
influxdb3 query \
  --token apiv3_0xxx0o0XxXxx00Xxxx000xXXxoo0== \
  --database servers \
  "SELECT * FROM last_cache('cpu', 'cpuCache') WHERE host = 'Bravo';"
```

> [!Note]
> #### Only works with SQL
>
> The last values cache only works with SQL, not InfluxQL; SQL is the default language.

#### Delete a last values cache

Use the `influxdb3` CLI to [delete a last values cache](/influxdb3/version/reference/cli/influxdb3/delete/last_cache/).

{{% code-placeholders "DATABASE_NAME|AUTH_TOKEN|TABLE_NAME|CACHE_NAME" %}}
```bash
influxdb3 delete last_cache \
  --token AUTH_TOKEN \
  --database DATABASE_NAME \
  --table TABLE_NAME \
  --cache-name CACHE_NAME
```
{{% /code-placeholders %}}

Replace the following placeholders with your values:

- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "admin" %}}
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database to delete the last values cache from
- {{% code-placeholder-key %}}`TABLE_NAME`{{% /code-placeholder-key %}}: the name of the table to delete the last values cache from
- {{% code-placeholder-key %}}`CACHE_NAME`{{% /code-placeholder-key %}}: the name of the last values cache to delete

- [Manage a last values cache](/influxdb3/version/admin/last-value-cache/)
- [Query the last values cache](/influxdb3/version/admin/last-value-cache/query/)

### Distinct values cache

Similar to the [last values cache](#last-values-cache), the database can cache in RAM the distinct values for a single column in a table or a hierarchy of columns.
The {{% product-name %}} distinct values cache (DVC) stores distinct values for
specified columns in a series or column hierarchy in memory.
This is useful for fast metadata lookups, which can return in under 30 milliseconds.
Many of the options are similar to the last values cache.
For information about configuring and using the DVC, see:

You can use the `influxdb3` CLI to [create a distinct values cache](/influxdb3/version/reference/cli/influxdb3/create/distinct_cache/).

- [Manage a distinct values cache](/influxdb3/version/admin/distinct-value-cache/)
- [Query the distinct values cache](/influxdb3/version/admin/distinct-value-cache/query/)

{{% code-placeholders "DATABASE_NAME|AUTH_TOKEN|TABLE_NAME|COLUMNS|CACHE_NAME" %}}
```bash
influxdb3 create distinct_cache \
  --token AUTH_TOKEN \
  --database DATABASE_NAME \
  --table TABLE_NAME \
  --columns COLUMNS \
  CACHE_NAME
```
{{% /code-placeholders %}}

Replace the following placeholders with your values:

{{% show-in "enterprise" %}}
### File indexes

- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database to create the distinct values cache in
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "admin" %}}
- {{% code-placeholder-key %}}`TABLE_NAME`{{% /code-placeholder-key %}}: the name of the table to create the distinct values cache in
- {{% code-placeholder-key %}}`COLUMNS`{{% /code-placeholder-key %}}: the columns to cache distinct values for
- {{% code-placeholder-key %}}`CACHE_NAME`{{% /code-placeholder-key %}}: optionally, a name for the new cache

{{% product-name %}} lets you customize how your data is indexed to help
optimize query performance for your specific workload, especially workloads that
include single-series queries. Define custom indexing strategies for databases
or specific tables. For more information, see
[Manage file indexes](/influxdb3/enterprise/admin/file-index/).
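As a sketch, the following command defines a custom file index on the `host` and `application` columns of the `cpu` table; the exact flags and column-list syntax may differ, so verify them against the `influxdb3 create file_index` CLI reference:

```shell
# Index Parquet files on the host and application columns
# (column syntax is an assumption -- check the CLI reference)
influxdb3 create file_index \
  --token AUTH_TOKEN \
  --database servers \
  --table cpu \
  host,application
```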

Consider the following `cpu` sample table:
{{% /show-in %}}

| host    | application | time                | usage\_percent | status |
| ------- | ----------- | ------------------- | -------------- | ------ |
| Bravo   | database    | 2024-12-11T10:00:00 | 55.2           | OK     |
| Charlie | cache       | 2024-12-11T10:00:00 | 65.4           | OK     |
| Bravo   | database    | 2024-12-11T10:01:00 | 70.1           | Warn   |
| Bravo   | database    | 2024-12-11T10:01:00 | 80.5           | OK     |
| Alpha   | webserver   | 2024-12-11T10:02:00 | 25.3           | Warn   |

The following command creates a distinct values cache named `cpuDistinctCache`:

```bash
influxdb3 create distinct_cache \
  --token apiv3_0xxx0o0XxXxx00Xxxx000xXXxoo0== \
  --database servers \
  --table cpu \
  --columns host,application \
  cpuDistinctCache
```

#### Query a distinct values cache

To query data from the distinct values cache, use the [`distinct_cache()`](/influxdb3/version/reference/sql/functions/cache/#distinct_cache) function in your query--for example:

```bash
influxdb3 query \
  --token apiv3_0xxx0o0XxXxx00Xxxx000xXXxoo0== \
  --database servers \
  "SELECT * FROM distinct_cache('cpu', 'cpuDistinctCache')"
```

> [!Note]
> #### Only works with SQL
>
> The distinct values cache only works with SQL, not InfluxQL; SQL is the default language.

#### Delete a distinct values cache

Use the `influxdb3` CLI to [delete a distinct values cache](/influxdb3/version/reference/cli/influxdb3/delete/distinct_cache/).

{{% code-placeholders "DATABASE_NAME|AUTH_TOKEN|TABLE_NAME|CACHE_NAME" %}}
```bash
influxdb3 delete distinct_cache \
  --token AUTH_TOKEN \
  --database DATABASE_NAME \
  --table TABLE_NAME \
  --cache-name CACHE_NAME
```
{{% /code-placeholders %}}

Replace the following placeholders with your values:

- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "admin" %}}
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database to delete the distinct values cache from
- {{% code-placeholder-key %}}`TABLE_NAME`{{% /code-placeholder-key %}}: the name of the table to delete the distinct values cache from
- {{% code-placeholder-key %}}`CACHE_NAME`{{% /code-placeholder-key %}}: the name of the distinct values cache to delete

{{% page-nav
  prev="/influxdb3/version/get-started/write/"
  prevText="Write data"
  next="/influxdb3/version/get-started/processing-engine/"
  nextText="Processing engine"
%}}

@@ -1,4 +1,5 @@
<!-- -->
<!-- TOC -->

- [Install {{% product-name %}}](#install-influxdb-3-{{% product-key %}})
  - [Verify the installation](#verify-the-installation)
- [Start InfluxDB](#start-influxdb)

@@ -8,6 +9,8 @@
- [Create an operator token](#create-an-operator-token)
- [Set your token for authorization](#set-your-token-for-authorization)

<!-- /TOC -->

## Install {{% product-name %}}

{{% product-name %}} runs on **Linux**, **macOS**, and **Windows**.

@@ -17,7 +17,14 @@ Both new tags and fields can be added later as your schema changes.
> For extended historical queries and optimized data organization, consider using [InfluxDB 3 Enterprise](/influxdb3/enterprise/get-started/).
{{% /show-in %}}

<!-- TOC PLACEHOLDER -->
<!-- TOC -->

- [Line protocol](#line-protocol)
  - [Construct line protocol](#construct-line-protocol)
- [Write data using the CLI](#write-data-using-the-cli)
- [Other tools for writing data](#other-tools-for-writing-data)

<!-- /TOC -->

## Line protocol

@@ -210,21 +217,20 @@ Replace the following placeholders with your values:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the [database](/influxdb3/version/admin/databases/) to write to.
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in "enterprise" %}} with permission to write to the specified database{{% /show-in %}}

> [!Note]
> #### Other write methods
>
> There are many ways to write data to your {{% product-name %}} database, including:
>
> - [InfluxDB HTTP API](/influxdb3/version/write-data/http-api/): Recommended for
>   batching and higher-volume write workloads.
> - [InfluxDB client libraries](/influxdb3/version/write-data/client-libraries/):
>   Client libraries that integrate with your code to construct data as time
>   series points and write the data as line protocol to your
>   {{% product-name %}} database.
> - [Telegraf](/telegraf/v1/): A data collection agent with over 300 plugins for
>   collecting, processing, and writing data.
>
> For more information, see [Write data to {{% product-name %}}](/influxdb3/version/write-data/).

## Other tools for writing data

There are many ways to write data to your {{% product-name %}} database, including:

- [InfluxDB HTTP API](/influxdb3/version/write-data/http-api/): Recommended for
  batching and higher-volume write workloads.
- [InfluxDB client libraries](/influxdb3/version/write-data/client-libraries/):
  Client libraries that integrate with your code to construct data as time
  series points and write the data as line protocol to your
  {{% product-name %}} database.
- [Telegraf](/telegraf/v1/): A data collection agent with over 300 plugins for
  collecting, processing, and writing data.

For more information, see [Write data to {{% product-name %}}](/influxdb3/version/write-data/).
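As a minimal illustration of the client-library approach, the following Python sketch constructs a line protocol point by hand and builds a POST request for the v3 write endpoint. The local host and port (`8181`), the `servers` database, and the token are assumptions for illustration:

```python
import urllib.parse
import urllib.request


def to_line_protocol(table, tags, fields, timestamp_ns=None):
    """Construct a single line protocol point:
    table,tag=value field=value timestamp"""
    tag_set = ",".join(f"{k}={v}" for k, v in tags.items())
    field_set = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"  # quote string fields
        for k, v in fields.items()
    )
    line = f"{table},{tag_set} {field_set}"
    if timestamp_ns is not None:
        line += f" {timestamp_ns}"
    return line


def write_request(host, database, lines, token):
    """Build a POST request for the /api/v3/write_lp endpoint."""
    params = urllib.parse.urlencode({"db": database})
    return urllib.request.Request(
        url=f"{host}/api/v3/write_lp?{params}",
        data="\n".join(lines).encode(),
        headers={"Authorization": f"Bearer {token}"},
        method="POST",
    )


line = to_line_protocol(
    "cpu",
    {"host": "Alpha", "application": "webserver"},
    {"usage_percent": 25.3, "status": "Warn"},
)
# Send with:
# urllib.request.urlopen(write_request("http://localhost:8181", "servers", [line], "AUTH_TOKEN"))
```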

{{% page-nav
  prev="/influxdb3/version/get-started/setup/"