WIP monolith getting started restructure

pull/6148/head
Scott Anderson 2025-06-05 16:11:42 -06:00 committed by Jason Stirnaman
parent e3e76b46d5
commit c3b5458314
6 changed files with 392 additions and 350 deletions

View File

@ -0,0 +1,292 @@
---
title: Use a multi-server setup
seotitle: Use a multi-server InfluxDB 3 Enterprise setup
menu:
  influxdb3_enterprise:
    name: Multi-server
    parent: Get started
weight: 4
influxdb3/enterprise/tags: [cluster, multi-node, multi-server]
---
### Multi-server setup
{{% product-name %}} is built to support multi-node setups for high availability, read replicas, and flexible implementations depending on use case.
### High availability
Enterprise is architecturally flexible, giving you options on how to configure multiple servers that work together for high availability (HA) and high performance.
Built on top of the diskless engine and leveraging the Object store, an HA setup ensures that if a node fails, you can still continue reading from, and writing to, a secondary node.
A two-node setup is the minimum for basic high availability, with both nodes having read-write permissions.
{{< img-hd src="/img/influxdb/influxdb-3-enterprise-high-availability.png" alt="Basic high availability setup" />}}
In a basic HA setup:
- Two nodes both write data to the same Object store and both handle queries
- Node 1 and Node 2 are _read replicas_ that read from each other's Object store directories
- One of the nodes is designated as the Compactor node
> [!Note]
> Only one node can be designated as the Compactor.
> Compacted data is meant for a single writer and many readers.
The following examples show how to configure and start two nodes
for a basic HA setup.
- _Node 1_ handles ingest, query, and compaction (includes `compact` in its `--mode` list)
- _Node 2_ is for ingest and query
```bash
## NODE 1
# Example variables
# node-id: 'host01'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve \
--node-id host01 \
--cluster-id cluster01 \
--mode ingest,query,compact \
--object-store s3 \
--bucket influxdb-3-enterprise-storage \
--http-bind {{< influxdb/host >}} \
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```
```bash
## NODE 2
# Example variables
# node-id: 'host02'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve \
--node-id host02 \
--cluster-id cluster01 \
--mode ingest,query \
--object-store s3 \
--bucket influxdb-3-enterprise-storage \
--http-bind localhost:8282 \
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```
After the nodes have started, querying either node returns data for both nodes, and _NODE 1_ runs compaction.
To add nodes to this setup, start more read replicas with the same cluster ID.
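For example, a third read replica might be started like the following sketch; the node ID, port, and credentials are illustrative:

```bash
## NODE 3 — additional read replica (illustrative values)
# node-id: 'host03'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve \
--node-id host03 \
--cluster-id cluster01 \
--mode ingest,query \
--object-store s3 \
--bucket influxdb-3-enterprise-storage \
--http-bind localhost:8383 \
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```

Because the node shares the same `--cluster-id` and Object store bucket, it joins the cluster as another read replica.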
### High availability with a dedicated Compactor
Data compaction in InfluxDB 3 is one of the more computationally expensive operations.
To ensure that your read-write nodes don't slow down due to compaction work, set up a compactor-only node for consistent and high performance across all nodes.
{{< img-hd src="/img/influxdb/influxdb-3-enterprise-dedicated-compactor.png" alt="Dedicated Compactor setup" />}}
The following examples show how to set up high availability with a dedicated Compactor node:
1. Start two read-write nodes as read replicas, similar to the previous example.
```bash
## NODE 1 — Writer/Reader Node #1
# Example variables
# node-id: 'host01'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve \
--node-id host01 \
--cluster-id cluster01 \
--mode ingest,query \
--object-store s3 \
--bucket influxdb-3-enterprise-storage \
--http-bind {{< influxdb/host >}} \
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```
```bash
## NODE 2 — Writer/Reader Node #2
# Example variables
# node-id: 'host02'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve \
--node-id host02 \
--cluster-id cluster01 \
--mode ingest,query \
--object-store s3 \
--bucket influxdb-3-enterprise-storage \
--http-bind localhost:8282 \
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```
2. Start the dedicated compactor node with the `--mode compact` option to ensure the node **only** runs compaction.
```bash
## NODE 3 — Compactor Node
# Example variables
# node-id: 'host03'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve \
--node-id host03 \
--cluster-id cluster01 \
--mode compact \
--object-store s3 \
--bucket influxdb-3-enterprise-storage \
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```
### High availability with read replicas and a dedicated Compactor
For a robust and effective setup for managing time-series data, you can run ingest nodes alongside read-only nodes and a dedicated Compactor node.
{{< img-hd src="/img/influxdb/influxdb-3-enterprise-workload-isolation.png" alt="Workload Isolation Setup" />}}
1. Start ingest nodes by assigning them the **`ingest`** mode.
To achieve the benefits of workload isolation, you'll send _only write requests_ to these ingest nodes. Later, you'll configure the _read-only_ nodes.
```bash
## NODE 1 — Writer Node #1
# Example variables
# node-id: 'host01'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve \
--node-id host01 \
--cluster-id cluster01 \
--mode ingest \
--object-store s3 \
--bucket influxdb-3-enterprise-storage \
--http-bind {{< influxdb/host >}} \
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```
<!-- The following examples use different ports for different nodes. Don't use the influxdb/host shortcode below. -->
```bash
## NODE 2 — Writer Node #2
# Example variables
# node-id: 'host02'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve \
--node-id host02 \
--cluster-id cluster01 \
--mode ingest \
--object-store s3 \
--bucket influxdb-3-enterprise-storage \
--http-bind localhost:8282 \
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```
2. Start the dedicated Compactor node with `--mode compact`.
```bash
## NODE 3 — Compactor Node
# Example variables
# node-id: 'host03'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve \
--node-id host03 \
--cluster-id cluster01 \
--mode compact \
--object-store s3 \
--bucket influxdb-3-enterprise-storage \
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```
3. Finally, start the query nodes as _read-only_ with `--mode query`.
```bash
## NODE 4 — Read Node #1
# Example variables
# node-id: 'host04'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve \
--node-id host04 \
--cluster-id cluster01 \
--mode query \
--object-store s3 \
--bucket influxdb-3-enterprise-storage \
--http-bind localhost:8383 \
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```
```bash
## NODE 5 — Read Node #2
# Example variables
# node-id: 'host05'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve \
--node-id host05 \
--cluster-id cluster01 \
--mode query \
--object-store s3 \
--bucket influxdb-3-enterprise-storage \
--http-bind localhost:8484 \
--aws-access-key-id <AWS_ACCESS_KEY_ID> \
--aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```
Congratulations, you have a robust setup for workload isolation using {{% product-name %}}.
### Writing and querying for multi-node setups
By default, CLI commands use port `8181`, so you can write to or query a node bound to the default port without changing any of the commands.
> [!Note]
> #### Specify hosts for writes and queries
>
> To benefit from this multi-node, isolated architecture, specify hosts:
>
> - In write requests, specify a host that you have designated as _write-only_.
> - In query requests, specify a host that you have designated as _read-only_.
>
> When running multiple local instances for testing or separate nodes in production, specifying the host ensures writes and queries are routed to the correct instance.
{{% code-placeholders "(http://localhost:8585)|AUTH_TOKEN|DATABASE_NAME|QUERY" %}}
```bash
# Example querying a specific host
# HTTP-bound Port: 8585
influxdb3 query \
--host http://localhost:8585 \
--token AUTH_TOKEN \
--database DATABASE_NAME "QUERY"
```
{{% /code-placeholders %}}
Replace the following placeholders with your values:
- {{% code-placeholder-key %}}`http://localhost:8585`{{% /code-placeholder-key %}}: the host and port of the node to query
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in "enterprise" %}} with permission to query the specified database{{% /show-in %}}
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database to query
- {{% code-placeholder-key %}}`QUERY`{{% /code-placeholder-key %}}: the SQL or InfluxQL query to run against the database
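Writes work the same way: point the CLI at a node you run in an ingest mode. The following is a sketch, assuming an ingest node bound to port `8181` and that your `influxdb3 write` version reads line protocol from stdin; the database name and token are placeholders:

```bash
# Example writing to a specific host
# HTTP-bound port: 8181
echo 'home,room=Kitchen temp=23.2' | influxdb3 write \
--host http://localhost:8181 \
--token AUTH_TOKEN \
--database DATABASE_NAME
```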

View File

@ -10,38 +10,17 @@ including the following:
> The [InfluxDB Discord server](https://discord.gg/9zaNCW2PRT) is the best place to find support for {{% product-name %}}.
> For other InfluxDB versions, see the [Support and feedback](#bug-reports-and-feedback) options.
## Data model
The {{% product-name %}} server contains logical databases; databases contain
tables; and tables are comprised of columns.
Compared to previous versions of InfluxDB, you can think of a database as an
InfluxDB v2 `bucket` or an InfluxDB v1 `db/retention_policy`.
A `table` is equivalent to an InfluxDB v1 and v2 `measurement`.
Columns in a table represent time, tags, and fields. Columns can be one of the
following types:
- String dictionary (tag)
- `int64` (field)
- `float64` (field)
- `uint64` (field)
- `bool` (field)
- `string` (field)
- `time` (time with nanosecond precision)
In {{% product-name %}}, every table has a primary key--the ordered set of tags and the time--for its data.
The primary key uniquely identifies each row and determines the sort order for all
Parquet files related to the table. When you create a table, either through an
explicit call or by writing data into a table for the first time, InfluxDB sets the
primary key to the tags in the order they arrived.
Although InfluxDB is still a _schema-on-write_ database, the tag column
definitions for a table are immutable.
Tags should hold unique identifying information like `sensor_id`, `building_id`,
or `trace_id`. All other data should be stored as fields.
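For example, a single (hypothetical) line of line protocol produces one tag column, two field columns, and the `time` column:

```
sensors,sensor_id=s1 temp=21.5,hum=40.1 1704067200000000000
```

Here `sensors` is the table, `sensor_id` is a tag (string dictionary), `temp` and `hum` are `float64` fields, and the trailing value is the nanosecond timestamp.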
### Tools to use
The following table compares tools that you can use to interact with {{% product-name %}}.
This tutorial covers many of the recommended tools.

View File

@ -90,11 +90,11 @@ def process_writes(influxdb3_local, table_batches, args=None):
# here we're using arguments provided at the time the trigger was set up
# to feed into parameters that we'll put into a query
query_params = {"host": "foo"}
# here's an example of executing a parameterized query. Only SQL is supported.
# It will query the database that the trigger is attached to by default. We'll
# soon have support for querying other DBs.
query_result = influxdb3_local.query("SELECT * FROM cpu where host = '$host'", query_params)
# the result is a list of Dict that have the column name as key and value as
# value. If you run the WAL test plugin with your plugin against a DB that
# you've written data into, you'll be able to see some results
@ -142,20 +142,19 @@ def process_writes(influxdb3_local, table_batches, args=None):
influxdb3_local.info("done")
```
##### Test a plugin on the server
Test your InfluxDB 3 plugin safely without affecting written data. During a plugin test:
- Queries executed by the plugin run against the server that receives the test request.
- Writes aren't sent to the server but are returned to you.
To test a plugin, do the following:
1. Create a _plugin directory_--for example, `/path/to/.influxdb/plugins`
2. [Start the InfluxDB server](#start-influxdb) and include the `--plugin-dir <PATH>` option.
3. Save the [example plugin code](#example-python-plugin-for-wal-rows) to a plugin file inside of the plugin directory. If you haven't yet written data to the table in the example, comment out the lines where it queries.
4. To run the test, enter the following command with the following options:
- `--lp` or `--file`: The line protocol to test
- Optional: `--input-arguments`: A comma-delimited list of `<KEY>=<VALUE>` arguments for your plugin code
@ -163,15 +162,15 @@ To test a plugin:
{{% code-placeholders "INPUT_LINE_PROTOCOL|INPUT_ARGS|DATABASE_NAME|AUTH_TOKEN|PLUGIN_FILENAME" %}}
```bash
influxdb3 test wal_plugin \
--lp INPUT_LINE_PROTOCOL \
--input-arguments INPUT_ARGS \
--database DATABASE_NAME \
--token AUTH_TOKEN \
PLUGIN_FILENAME
```
{{% /code-placeholders %}}
Replace the following placeholders with your values:
- {{% code-placeholder-key %}}`INPUT_LINE_PROTOCOL`{{% /code-placeholder-key %}}: the line protocol to test
- Optional: {{% code-placeholder-key %}}`INPUT_ARGS`{{% /code-placeholder-key %}}: a comma-delimited list of `<KEY>=<VALUE>` arguments for your plugin code--for example, `arg1=hello,arg2=world`
@ -179,18 +178,23 @@ Replace the following:
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: the {{% token-link "admin" %}} for your {{% product-name %}} server
- {{% code-placeholder-key %}}`PLUGIN_FILENAME`{{% /code-placeholder-key %}}: the name of the plugin file to test
The command runs the plugin code with the test data, yields the data to the plugin code, and then responds with the plugin result.
You can quickly see how the plugin behaves, what data it would have written to the database, and any errors.
You can then edit your Python code in the plugins directory, and rerun the test.
The server reloads the file for every request to the `test` API.
For more information, see [`influxdb3 test wal_plugin`](/influxdb3/version/reference/cli/influxdb3/test/wal_plugin/) or run `influxdb3 test wal_plugin -h`.
With the plugin code inside the server plugin directory, and a successful test,
you're ready to create a trigger for your server to run the plugin.
##### Example: Test and run a plugin
The following example shows how to test a plugin:
<!-- pytest.mark.skip -->
```bash
# Test a plugin
# Requires:
# - A database named `mydb` with a table named `foo`
# - A Python plugin file named `test.py`
@ -203,16 +207,6 @@ influxdb3 test wal_plugin \
test.py
```
## Create a trigger
Use the
[`influxdb3 create trigger` command](/influxdb3/version/reference/cli/influxdb3/create/trigger/)
to create a trigger that runs the plugin.
```bash
# Create a trigger that runs the plugin
influxdb3 create trigger \
@ -224,8 +218,6 @@ influxdb3 create trigger \
trigger1
```
## Enable the trigger
After you have created a plugin and trigger, enter the following command to
enable the trigger and have it run the plugin as you write data:

View File

@ -1,299 +1,89 @@
<!-- COMMENT TO ALLOW STARTING WITH SHORTCODE -->
### Query data
InfluxDB 3 supports native SQL for querying, in addition to InfluxQL, an
SQL-like language customized for time series queries.
{{% show-in "core" %}}
{{< product-name >}} limits
query time ranges to 72 hours (both recent and historical) to ensure query performance.
For more information about the 72-hour limitation, see the
[update on InfluxDB 3 Core's 72-hour limitation](https://www.influxdata.com/blog/influxdb3-open-source-public-alpha-jan-27/).
{{% /show-in %}}
> [!Note]
> Flux, the language introduced in InfluxDB 2.0, is **not** supported in InfluxDB 3.
The quickest way to get started querying is to use the `influxdb3` CLI (which uses the Flight SQL API over HTTP2).
The `query` subcommand includes options to help ensure that the right database is queried with the correct permissions. Only the `--database` option is required, but depending on your specific setup, you may need to pass other options, such as host, port, and token.
| Option | Description | Required |
|---------|-------------|--------------|
| `--host` | The host URL of the server to query [default: `http://127.0.0.1:8181`] | No |
| `--database` | The name of the database to operate on | Yes |
| `--token` | The authentication token for the {{% product-name %}} server | No |
| `--language` | The query language of the provided query string [default: `sql`] [possible values: `sql`, `influxql`] | No |
| `--format` | The format in which to output the query [default: `pretty`] [possible values: `pretty`, `json`, `jsonl`, `csv`, `parquet`] | No |
| `--output` | The path to output data to | No |
#### Example: query `SHOW TABLES` on the `servers` database:
```console
$ influxdb3 query --database servers "SHOW TABLES"
+---------------+--------------------+--------------+------------+
| table_catalog | table_schema | table_name | table_type |
+---------------+--------------------+--------------+------------+
| public | iox | cpu | BASE TABLE |
| public | information_schema | tables | VIEW |
| public | information_schema | views | VIEW |
| public | information_schema | columns | VIEW |
| public | information_schema | df_settings | VIEW |
| public | information_schema | schemata | VIEW |
+---------------+--------------------+--------------+------------+
```
#### Example: query the `cpu` table, limiting to 10 rows:
> [!Important]
> If the `INFLUXDB3_AUTH_TOKEN` environment variable defined in
> [Set up {{% product-name %}}](/influxdb3/version/get-started/setup/#set-your-token-for-authorization)
> is no longer set, reset the environment variable or provide your token using
> the `-t, --token` option in your command.
```console
$ influxdb3 query --database servers "SELECT DISTINCT usage_percent, time FROM cpu LIMIT 10"
+---------------+---------------------+
| usage_percent | time |
+---------------+---------------------+
| 63.4 | 2024-02-21T19:25:00 |
| 25.3 | 2024-02-21T19:06:40 |
| 26.5 | 2024-02-21T19:31:40 |
| 70.1 | 2024-02-21T19:03:20 |
| 83.7 | 2024-02-21T19:30:00 |
| 55.2 | 2024-02-21T19:00:00 |
| 80.5 | 2024-02-21T19:05:00 |
| 60.2 | 2024-02-21T19:33:20 |
| 20.5 | 2024-02-21T18:58:20 |
| 85.2 | 2024-02-21T19:28:20 |
+---------------+---------------------+
```
### Query using the CLI for InfluxQL
[InfluxQL](/influxdb3/version/reference/influxql/) is an SQL-like language developed by InfluxData with features tailored for working with InfluxDB. It's compatible with all versions of InfluxDB, making it a good choice for interoperability across different InfluxDB installations.
To query using InfluxQL, enter the `influxdb3 query` subcommand and specify `influxql` in the language option--for example:
{{% code-placeholders "DATABASE_NAME|AUTH_TOKEN" %}}
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[SQL](#)
[InfluxQL](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
<!-- pytest.mark.skip -->
```bash
influxdb3 query \
--database DATABASE_NAME \
"SELECT * FROM home ORDER BY time"
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
<!-- pytest.mark.skip -->
```bash
influxdb3 query \
--database DATABASE_NAME \
--token AUTH_TOKEN \
--language influxql \
"SELECT DISTINCT usage_percent FROM cpu WHERE time >= now() - 1d"
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
{{% /code-placeholders %}}
Replace the following placeholders with your values:

- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database to query
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in "enterprise" %}} with permission to query the specified database{{% /show-in %}}

To query from a specific time range, use the `WHERE` clause to designate the
boundaries of your time range.
{{% code-placeholders "DATABASE_NAME|AUTH_TOKEN" %}}
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[SQL](#)
[InfluxQL](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
<!-- pytest.mark.skip -->
```bash
influxdb3 query \
--database DATABASE_NAME \
"SELECT * FROM home WHERE time >= now() - INTERVAL '7 days' ORDER BY time"
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
<!-- pytest.mark.skip -->
```bash
influxdb3 query \
--database DATABASE_NAME \
--language influxql \
"SELECT * FROM home WHERE time >= now() - 7d"
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
{{% /code-placeholders %}}
### Example queries
{{< expand-wrapper >}}
{{% expand "List tables in a database" %}}
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[SQL](#)
[InfluxQL](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sql
SHOW TABLES
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```sql
SHOW MEASUREMENTS
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
{{% /expand %}}
{{% expand "Return the average temperature of all rooms" %}}
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[SQL](#)
[InfluxQL](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sql
SELECT avg(temp) AS avg_temp FROM home
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```sql
SELECT MEAN(temp) AS avg_temp FROM home
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
{{% /expand %}}
{{% expand "Return the average temperature of the kitchen" %}}
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[SQL](#)
[InfluxQL](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sql
SELECT avg(temp) AS avg_temp FROM home WHERE room = 'Kitchen'
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```sql
SELECT MEAN(temp) AS avg_temp FROM home WHERE room = 'Kitchen'
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
{{% /expand %}}
{{% expand "Query data from an absolute time range" %}}
{{% influxdb/custom-timestamps %}}
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[SQL](#)
[InfluxQL](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sql
SELECT
*
FROM
home
WHERE
time >= '2022-01-01T12:00:00Z'
AND time <= '2022-01-01T18:00:00Z'
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```sql
SELECT
*
FROM
home
WHERE
time >= '2022-01-01T12:00:00Z'
AND time <= '2022-01-01T18:00:00Z'
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
{{% /influxdb/custom-timestamps %}}
{{% /expand %}}
{{% expand "Query data from a relative time range" %}}
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[SQL](#)
[InfluxQL](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sql
SELECT
*
FROM
home
WHERE
time >= now() - INTERVAL '7 days'
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```sql
SELECT
*
FROM
home
WHERE
time >= now() - 7d
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
{{% /expand %}}
{{% expand "Calculate average humidity in 3-hour windows per room" %}}
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[SQL](#)
[InfluxQL](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```sql
SELECT
date_bin(INTERVAL '3 hours', time) AS time,
room,
avg(hum) AS avg_hum
FROM
home
GROUP BY
1,
room
ORDER BY
room,
1
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```sql
SELECT
MEAN(hum) AS avg_hum
FROM
home
WHERE
time >= '2022-01-01T08:00:00Z'
AND time <= '2022-01-01T20:00:00Z'
GROUP BY
time(3h),
room
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
{{% /expand %}}
{{< /expand-wrapper >}}
## Other tools for executing queries
Other tools are available for querying data in {{% product-name %}}, including
the following:
{{< expand-wrapper >}}
{{% expand "Query using the API" %}}
### Query using the API
InfluxDB 3 supports Flight (gRPC) APIs and an HTTP API.
To query your database using the HTTP API, send a request to the `/api/v3/query_sql` or `/api/v3/query_influxql` endpoints.
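For example, a SQL query over HTTP might look like the following sketch, which assumes a server on the default port; check the API reference for the exact parameter names supported by your version:

```bash
# Query using the HTTP API (SQL)
curl --get "http://localhost:8181/api/v3/query_sql" \
--header "Authorization: Bearer AUTH_TOKEN" \
--data-urlencode "db=DATABASE_NAME" \
--data-urlencode "q=SELECT * FROM home ORDER BY time" \
--data-urlencode "format=json"
```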
@ -337,11 +127,7 @@ Replace the following placeholders with your values:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database to query
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in "enterprise" %}} with permission to query the specified database{{% /show-in %}}
{{% /expand %}}
{{% expand "Query using the Python client" %}}
### Query using the Python client
Use the InfluxDB 3 Python library to interact with the database and integrate with your application.
We recommend installing the required packages in a Python virtual environment for your specific project.
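A minimal sketch using the `influxdb3-python` package (`pip install influxdb3-python`); the host, token, and database are placeholders, and the exact client API may differ between library versions:

```python
from influxdb_client_3 import InfluxDBClient3

# Connect to an InfluxDB 3 server (placeholder values)
client = InfluxDBClient3(
    host="http://localhost:8181",
    token="AUTH_TOKEN",
    database="DATABASE_NAME",
)

# Write a point as line protocol
client.write("home,room=Kitchen temp=23.2")

# Query with SQL; the result is returned as a PyArrow Table
table = client.query("SELECT * FROM home ORDER BY time")
print(table)
```

This sketch requires a running server and a valid token, so treat it as a starting point rather than a copy-paste recipe.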

View File

@ -283,7 +283,6 @@ influxdb3 serve \
--aws-allow-http
```
{{% /show-in %}}
{{% show-in "core" %}}
```bash
# S3 object store (default is the us-east-1 region)
@ -364,8 +363,6 @@ InfluxDB 3 Enterprise licenses:
- **At-Home**: For at-home hobbyist use with limited access to InfluxDB 3 Enterprise capabilities.
- **Commercial**: Commercial license with full access to InfluxDB 3 Enterprise capabilities.
You can obtain a license key from the [InfluxData pricing page](https://www.influxdata.com/pricing/).
### Start InfluxDB 3 Enterprise with your license
Use the following `docker run` command to start an InfluxDB 3 Enterprise container using your email address to activate a trial or at-home license.

View File

@ -1,16 +1,12 @@
<!-- ALLOW SHORTCODE -->
### Write data
{{% product-name %}} is designed for high write throughput and uses an efficient,
human-readable write syntax called _[line protocol](#line-protocol)_.
InfluxDB is a schema-on-write database. You can start writing data and InfluxDB creates the logical database, tables, and their schemas on the fly.
After a schema is created, InfluxDB validates future write requests against it before accepting the data.
Subsequent requests can add new fields on-the-fly, but can't add new tags.
{{% show-in "core" %}}
> [!Note]
> #### Core is optimized for recent data
>
> {{% product-name %}} is optimized for recent data but accepts writes from any time period.
> The system persists data to Parquet files for historical analysis with [InfluxDB 3 Enterprise](/influxdb3/enterprise/get-started/) or third-party tools.