Merge branch 'master' into feature/cloud-dedicated-user-management-docs
commit 6af3c72043
@ -0,0 +1,20 @@
{
  "$schema": "https://raw.githubusercontent.com/modelcontextprotocol/modelcontextprotocol/refs/heads/main/schema/2025-06-18/schema.json",
  "description": "InfluxData documentation assistance via MCP server - Node.js execution",
  "mcpServers": {
    "influxdata": {
      "comment": "Use Node to run Docs MCP. To install and setup, see https://github.com/influxdata/docs-mcp-server",
      "type": "stdio",
      "command": "node",
      "args": [
        "${DOCS_MCP_SERVER_PATH}/dist/index.js"
      ],
      "env": {
        "DOCS_API_KEY_FILE": "${DOCS_API_KEY_FILE:-$HOME/.env.docs-kapa-api-key}",
        "DOCS_MODE": "external-only",
        "MCP_LOG_LEVEL": "${MCP_LOG_LEVEL:-info}",
        "NODE_ENV": "${NODE_ENV:-production}"
      }
    }
  }
}
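
For this configuration to resolve, `DOCS_MCP_SERVER_PATH` must be set in the environment that launches the MCP client. A minimal sketch, assuming a local clone of the linked repository (the clone path is hypothetical):

```sh
# Point the config at a local clone of docs-mcp-server (path is hypothetical)
export DOCS_MCP_SERVER_PATH="$HOME/github/docs-mcp-server"

# Optional: override where the Kapa API key is read from; otherwise the
# config falls back to $HOME/.env.docs-kapa-api-key
export DOCS_API_KEY_FILE="$HOME/.env.docs-kapa-api-key"
```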

@ -40,11 +40,20 @@ cluster, and they use the
[`influxd-ctl` tool](/enterprise_influxdb/v1/tools/influxd-ctl/) available on
all meta nodes.

> [!Warning]
> #### Stop writing data before rebalancing
>
> Before you begin, stop writing historical data to InfluxDB.
> Historical data have timestamps that occur at any time in the past.
> Performing a rebalance while writing historical data can lead to data loss.

> [!Caution]
> #### Risks of rebalancing with future data
>
> Truncating shards that contain data with future timestamps (such as forecast or prediction data)
> can lead to overlapping shards and data duplication.
> For more information, see [`truncate-shards` and future data](/enterprise_influxdb/v1/tools/influxd-ctl/truncate-shards/#understand-the-risks-with-future-data)
> or [contact InfluxData support](https://support.influxdata.com).

## Rebalance Procedure 1: Rebalance a cluster to create space

@ -61,18 +70,23 @@ data node to expand the total disk capacity of the cluster.
In the next steps, you will safely move shards from one of the two original data
nodes to the new data node.

### Step 1: Truncate hot shards

Hot shards are shards that currently receive writes.
Performing any action on a hot shard can lead to data inconsistency within the
cluster, which requires manual intervention from the user.

> [!Caution]
> #### Risks of rebalancing with future data
>
> Truncating shards that contain data with future timestamps (such as forecast or prediction data)
> can lead to overlapping shards and data duplication.
> For more information, see [`truncate-shards` and future data](/enterprise_influxdb/v1/tools/influxd-ctl/truncate-shards/#understand-the-risks-with-future-data)
> or [contact InfluxData support](https://support.influxdata.com).

To prevent data inconsistency, truncate shards before moving any shards
across data nodes.
The following command truncates all hot shards and creates new shards to write data to:

```
influxd-ctl truncate-shards
```

@ -84,10 +98,11 @@ The expected output of this command is:

```
Truncated shards.
```

New shards are automatically distributed across all data nodes, and InfluxDB writes new points to them.
Previous writes are stored in cold shards.

After truncating shards, you can redistribute cold shards without data inconsistency.
Hot and new shards are evenly distributed and require no further intervention.
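
To verify the truncation, you can list the cluster's shards and check their end times (a sketch; `influxd-ctl show-shards` output columns vary by version):

```
influxd-ctl show-shards
```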

### Step 2: Identify Cold Shards

@ -292,18 +307,23 @@ name duration shardGroupDuration replicaN default
autogen 0s       1h0m0s             3        true
```

### Step 2: Truncate hot shards

Hot shards are shards that currently receive writes.
Performing any action on a hot shard can lead to data inconsistency within the
cluster, which requires manual intervention from the user.

> [!Caution]
> #### Risks of rebalancing with future data
>
> Truncating shards that contain data with future timestamps (such as forecast or prediction data)
> can lead to overlapping shards and data duplication.
> For more information, see [`truncate-shards` and future data](/enterprise_influxdb/v1/tools/influxd-ctl/truncate-shards/#understand-the-risks-with-future-data)
> or [contact InfluxData support](https://support.influxdata.com).

To prevent data inconsistency, truncate shards before copying any shards
to the new data node.
The following command truncates all hot shards and creates new shards to write data to:

```
influxd-ctl truncate-shards
```

@ -315,10 +335,11 @@ The expected output of this command is:

```
Truncated shards.
```

New shards are automatically distributed across all data nodes, and InfluxDB writes new points to them.
Previous writes are stored in cold shards.

After truncating shards, you can redistribute cold shards without data inconsistency.
Hot and new shards are evenly distributed and require no further intervention.

### Step 3: Identify Cold Shards

@ -16,6 +16,7 @@ We recommend the following design guidelines for most use cases:
- [Where to store data (tag or field)](#where-to-store-data-tag-or-field)
- [Avoid too many series](#avoid-too-many-series)
- [Use recommended naming conventions](#use-recommended-naming-conventions)
- [Writing data with future timestamps](#writing-data-with-future-timestamps)
- [Shard Group Duration Management](#shard-group-duration-management)

## Where to store data (tag or field)

@ -209,6 +210,38 @@ from(bucket:"<database>/<retention_policy>")
> SELECT mean("temp") FROM "weather_sensor" WHERE region = 'north'
```

## Writing data with future timestamps

When designing schemas for applications that write data with future timestamps--such as forecast data from machine learning models, predictions, or scheduled events--consider the following implications for InfluxDB Enterprise v1 cluster operations and data integrity.

### Understanding future data behavior

InfluxDB Enterprise v1 creates shards based on time ranges.
When you write data with future timestamps, InfluxDB creates shards that cover future time periods.

> [!Caution]
> #### Risks of rebalancing with future data
>
> Truncating shards that contain data with future timestamps (such as forecast or prediction data)
> can lead to overlapping shards and data duplication.
> For more information, see [`truncate-shards` and future data](/enterprise_influxdb/v1/tools/influxd-ctl/truncate-shards/#understand-the-risks-with-future-data)
> or [contact InfluxData support](https://support.influxdata.com).

### Use separate databases for future data

When planning for data that contains future timestamps, consider isolating it in dedicated databases to:

- Minimize impact on real-time data operations
- Allow targeted maintenance operations on current vs. future data
- Simplify backup and recovery strategies for different data types

```sql
-- Example: Separate databases for different data types
CREATE DATABASE "realtime_metrics"
CREATE DATABASE "ml_forecasts"
CREATE DATABASE "scheduled_predictions"
```
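
With the databases separated, forecast writes target their dedicated database. A minimal sketch using the v1 HTTP write API (the host, measurement, and future timestamp are hypothetical):

```sh
# Write a forecast point with a future timestamp (nanosecond precision)
# to the dedicated ml_forecasts database
curl -XPOST "http://localhost:8086/write?db=ml_forecasts" \
  --data-binary 'temp_forecast,region=north value=21.5 1767225600000000000'
```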

## Shard group duration management

### Shard group duration overview

@ -17,6 +17,14 @@ The `influxd-ctl truncate-shards` command truncates all shards that are currently
being written to (also known as "hot" shards) and creates new shards to write
new data to.

> [!Caution]
> #### Overlapping shards with forecast and future data
>
> Running `truncate-shards` on shards containing future timestamps can create
> overlapping shards with duplicate data points.
>
> [Understand the risks with future data](#understand-the-risks-with-future-data).

## Usage

```sh

@ -40,3 +48,34 @@ _Also see [`influxd-ctl` global flags](/enterprise_influxdb/v1/tools/influxd-ctl

```bash
influxd-ctl truncate-shards -delay 3m
```

## Understand the risks with future data

> [!Important]
> If you need to rebalance shards that contain future data, contact [InfluxData support](https://www.influxdata.com/contact/) for assistance.

When you write data points with timestamps in the future (for example, forecast data from machine learning models),
the `truncate-shards` command behaves differently and can cause data duplication issues.

### How truncate-shards normally works

For shards containing current data:

1. The command creates an artificial stop point in the shard at the truncation timestamp.
2. Creates a new shard starting from the truncation point.
3. Example: A one-week shard (Sunday to Saturday) becomes:
   - Shard A: Sunday to truncation point (Wednesday 2pm)
   - Shard B: Truncation point (Wednesday 2pm) to Saturday

This works correctly because the meta nodes understand the boundaries and route queries appropriately.

### The problem with future data

For shards containing future timestamps:

1. The truncation doesn't cleanly split the shard at a point in time.
2. Instead, it creates overlapping shards that cover the same time period.
3. Example: If you're writing September forecast data in August:
   - Original shard: September 1-7
   - After truncation:
     - Shard A: September 1-7 (with data up to truncation)
     - Shard B: September 1-7 (for new data after truncation)
   - **Result**: Duplicate data points for the same timestamps

@ -9,7 +9,7 @@ menu:
  influxdb3_cloud_dedicated:
    name: Use Grafana
    parent: Visualize data
influxdb3/cloud-dedicated/tags: [query, visualization, Grafana]
aliases:
  - /influxdb3/cloud-dedicated/query-data/tools/grafana/
  - /influxdb3/cloud-dedicated/query-data/sql/execute-queries/grafana/

@ -20,199 +20,7 @@ alt_links:
  cloud: /influxdb/cloud/tools/grafana/
  core: /influxdb3/core/visualize-data/grafana/
  enterprise: /influxdb3/enterprise/visualize-data/grafana/
source: /content/shared/v3-process-data/visualize/grafana.md
---

Use [Grafana](https://grafana.com/) to query and visualize data stored in
{{% product-name %}}.

> [Grafana] enables you to query, visualize, alert on, and explore your metrics,
> logs, and traces wherever they are stored.
> [Grafana] provides you with tools to turn your time-series database (TSDB)
> data into insightful graphs and visualizations.
>
> {{% cite %}}-- [Grafana documentation](https://grafana.com/docs/grafana/latest/introduction/){{% /cite %}}

<!-- TOC -->

- [Install Grafana or login to Grafana Cloud](#install-grafana-or-login-to-grafana-cloud)
- [InfluxDB data source](#influxdb-data-source)
- [Create an InfluxDB data source](#create-an-influxdb-data-source)
- [Query InfluxDB with Grafana](#query-influxdb-with-grafana)
- [Build visualizations with Grafana](#build-visualizations-with-grafana)

<!-- /TOC -->

## Install Grafana or login to Grafana Cloud

If using the open source version of **Grafana**, follow the
[Grafana installation instructions](https://grafana.com/docs/grafana/latest/setup-grafana/installation/)
to install Grafana for your operating system.
If using **Grafana Cloud**, log in to your Grafana Cloud instance.

## InfluxDB data source

The InfluxDB data source plugin is included in the Grafana core distribution.
Use the plugin to query and visualize data stored in {{< product-name >}} with
both InfluxQL and SQL.

> [!Note]
> #### Grafana 10.3+
>
> The instructions below are for **Grafana 10.3+**, which introduced the newest
> version of the InfluxDB core plugin.
> The updated plugin includes **SQL support** for InfluxDB 3-based products such
> as {{< product-name >}}.

## Create an InfluxDB data source

1. In your Grafana user interface (UI), navigate to **Data Sources**.
2. Click **Add new data source**.
3. Search for and select the **InfluxDB** plugin.
4. Provide a name for your data source.
5. Under **Query Language**, select either **SQL** or **InfluxQL**:

{{< tabs-wrapper >}}
{{% tabs %}}
[SQL](#)
[InfluxQL](#)
{{% /tabs %}}
{{% tab-content %}}
<!--------------------------------- BEGIN SQL --------------------------------->

When creating an InfluxDB data source that uses SQL to query data:

1. Under **HTTP**:

   - **URL**: Provide your {{% product-name omit=" Clustered" %}} cluster URL
     using the HTTPS protocol:

     ```
     https://{{< influxdb/host >}}
     ```

2. Under **InfluxDB Details**:

   - **Database**: Provide a default database name to query.
   - **Token**: Provide a [database token](/influxdb3/cloud-dedicated/admin/tokens/#database-tokens)
     with read access to the databases you want to query.

3. Click **Save & test**.

{{< img-hd src="/img/influxdb3/cloud-dedicated-grafana-influxdb-data-source-sql.png" alt="Grafana InfluxDB data source for InfluxDB Cloud Dedicated that uses SQL" />}}

<!---------------------------------- END SQL ---------------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!------------------------------- BEGIN INFLUXQL ------------------------------>

When creating an InfluxDB data source that uses InfluxQL to query data:

1. Under **HTTP**:

   - **URL**: Provide your {{% product-name %}} cluster URL
     using the HTTPS protocol:

     ```
     https://{{< influxdb/host >}}
     ```

2. Under **InfluxDB Details**:

   - **Database**: Provide a default database name to query.
   - **User**: Provide an arbitrary string.
     _This credential is ignored when querying {{% product-name %}}, but it cannot be empty._
   - **Password**: Provide a [database token](/influxdb3/cloud-dedicated/admin/tokens/#database-tokens)
     with read access to the databases you want to query.
   - **HTTP Method**: Choose one of the available HTTP request methods to use when querying data:

     - **POST** ({{< req text="Recommended" >}})
     - **GET**

3. Click **Save & test**.

{{< img-hd src="/img/influxdb3/cloud-dedicated-grafana-influxdb-data-source-influxql.png" alt="Grafana InfluxDB data source for InfluxDB Cloud Dedicated using InfluxQL" />}}

<!-------------------------------- END INFLUXQL ------------------------------->
{{% /tab-content %}}
{{< /tabs-wrapper >}}

## Query InfluxDB with Grafana

After you [configure and save an InfluxDB data source](#create-an-influxdb-data-source),
use Grafana to build, run, and inspect queries against your InfluxDB database.

{{< tabs-wrapper >}}
{{% tabs %}}
[SQL](#)
[InfluxQL](#)
{{% /tabs %}}
{{% tab-content %}}
<!--------------------------------- BEGIN SQL --------------------------------->

> [!Note]
> {{% sql/sql-schema-intro %}}
> To learn more, see [Query Data](/influxdb3/cloud-dedicated/query-data/sql/).

1. Click **Explore**.
2. In the dropdown, select the saved InfluxDB data source to query.
3. Use the SQL query form to build your query:

   - **Table**: Select the measurement to query.
   - **Column**: Select one or more fields and tags to return as columns in query results.

     With SQL, select the `time` column to include timestamps with the data.
     Grafana relies on the `time` column to correctly graph time series data.

   - _**Optional:**_ Toggle **filter** to generate **WHERE** clause statements.

     - **WHERE**: Configure condition expressions to include in the `WHERE` clause.

   - _**Optional:**_ Toggle **group** to generate **GROUP BY** clause statements.

     - **GROUP BY**: Select columns to group by.
       If you include an aggregation function in the **SELECT** list,
       you must group by one or more of the queried columns.
       SQL returns the aggregation for each group.

   - {{< req text="Recommended" color="green" >}}:
     Toggle **order** to generate **ORDER BY** clause statements.

     - **ORDER BY**: Select columns to sort by.
       You can sort by time and multiple fields or tags.
       To sort in descending order, select **DESC**.

4. {{< req text="Recommended" color="green" >}}: Change format to **Time series**.

   - Use the **Format** dropdown to change the format of the query results.
     For example, to visualize the query results as a time series, select **Time series**.

5. Click **Run query** to execute the query.
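
For reference, the query form builds standard SQL like the following (a sketch; the `home` measurement and `co` field are hypothetical):

```sql
SELECT time, co, room
FROM home
WHERE time >= now() - INTERVAL '24 hours'
ORDER BY time
```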

<!---------------------------------- END SQL ---------------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!------------------------------- BEGIN INFLUXQL ------------------------------>

1. Click **Explore**.
2. In the dropdown, select the **InfluxDB** data source that you want to query.
3. Use the InfluxQL query form to build your query:

   - **FROM**: Select the measurement that you want to query.
   - **WHERE**: To filter the query results, enter a conditional expression.
   - **SELECT**: Select fields to query and an aggregate function to apply to each.
     The aggregate function is applied to each time interval defined in the
     `GROUP BY` clause.
   - **GROUP BY**: By default, Grafana groups data by time to downsample results
     and improve query performance.
     You can also add other tags to group by.

4. Click **Run query** to execute the query.

<!-------------------------------- END INFLUXQL ------------------------------->
{{% /tab-content %}}
{{< /tabs-wrapper >}}

{{< youtube "rSsouoNsNDs" >}}

To learn about query management and inspection in Grafana, see the
[Grafana Explore documentation](https://grafana.com/docs/grafana/latest/explore/).

## Build visualizations with Grafana

For a comprehensive walk-through of creating visualizations with
Grafana, see the [Grafana documentation](https://grafana.com/docs/grafana/latest/).
<!-- SOURCE: /content/shared/v3-process-data/visualize/grafana.md -->

@ -1,7 +1,8 @@
---
title: InfluxDB Cloud Dedicated data durability
description: >
  Data written to {{% product-name %}} progresses through multiple stages to ensure durability, optimized performance and storage, and efficient querying. Configuration options at each stage affect system behavior, balancing reliability and resource usage.
  {{% product-name %}} replicates all time series data in the storage tier across
  multiple availability zones within a cloud region and automatically creates backups
  that can be used to restore data in the event of a node failure or data corruption.
weight: 102

@ -13,73 +14,7 @@ influxdb3/cloud-dedicated/tags: [backups, internals]
related:
  - https://docs.aws.amazon.com/AmazonS3/latest/userguide/DataDurability.html, AWS S3 Data Durability
  - /influxdb3/cloud-dedicated/reference/internals/storage-engine/
source: /shared/v3-distributed-internals-reference/durability.md
---

{{< product-name >}} writes data to multiple Write-Ahead-Log (WAL) files on local
storage and retains WALs until the data is persisted to Parquet files in object storage.
Parquet data files in object storage are redundantly stored on multiple devices
across a minimum of three availability zones in a cloud region.

## Data storage

In {{< product-name >}}, all measurements are stored in
[Apache Parquet](https://parquet.apache.org/) files that represent a
point-in-time snapshot of the data. The Parquet files are immutable and are
never replaced nor modified. Parquet files are stored in object storage and
referenced in the [Catalog](/influxdb3/cloud-dedicated/reference/internals/storage-engine/#catalog), which InfluxDB uses to find the appropriate Parquet files for a particular set of data.

### Data deletion

When data is deleted or expires (reaches the database's [retention period](/influxdb3/cloud-dedicated/reference/internals/data-retention/#database-retention-period)), InfluxDB performs the following steps:

1. Marks the associated Parquet files as deleted in the catalog.
2. Filters out data marked for deletion from all queries.
3. Retains Parquet files marked for deletion in object storage for approximately 30 days after the youngest data in the file ages out of retention.

## Data ingest

When data is written to {{< product-name >}}, InfluxDB first writes the data to a
Write-Ahead-Log (WAL) on locally attached storage on the [Ingester](/influxdb3/cloud-dedicated/reference/internals/storage-engine/#ingester) node before
acknowledging the write request. After acknowledging the write request, the
Ingester holds the data in memory temporarily and then writes the contents of
the WAL to Parquet files in object storage and updates the [Catalog](/influxdb3/cloud-dedicated/reference/internals/storage-engine/#catalog) to
reference the newly created Parquet files. If an Ingester node is gracefully shut
down (for example, during a new software deployment), it flushes the contents of
the WAL to the Parquet files before shutting down.

## Backups

{{< product-name >}} implements the following data backup strategies:

- **Backup of WAL file**: The WAL file is written on locally attached storage.
  If an ingester process fails, the new ingester simply reads the WAL file on
  startup and continues normal operation. WAL files are maintained until their
  contents have been written to the Parquet files in object storage.
  For added protection, ingesters can be configured for write replication, where
  each measurement is written to two different WAL files before acknowledging
  the write.

- **Backup of Parquet files**: Parquet files are stored in object storage where
  they are redundantly stored on multiple devices across a minimum of three
  availability zones in a cloud region. Parquet files associated with each
  database are kept in object storage for the duration of the database retention period
  plus an additional time period (approximately 30 days).

- **Backup of catalog**: InfluxData keeps a transaction log of all recent updates
  to the [InfluxDB catalog](/influxdb3/cloud-dedicated/reference/internals/storage-engine/#catalog) and generates a daily backup of
  the catalog. Backups are preserved for at least 30 days in object storage across a minimum
  of three availability zones.

## Recovery

InfluxData can perform the following recovery operations:

- **Recovery after ingester failure**: If an ingester fails, a new ingester is
  started up and reads from the WAL file for the recently ingested data.

- **Recovery of Parquet files**: {{< product-name >}} uses the provided object
  storage data durability to recover Parquet files.

- **Recovery of the catalog**: InfluxData can restore the [Catalog](/influxdb3/cloud-dedicated/reference/internals/storage-engine/#catalog) to
  the most recent daily backup and then reapply any transactions
  that occurred since the interruption.
<!--// SOURCE - content/shared/v3-distributed-internals-reference/durability.md -->

@ -21,211 +21,7 @@ alt_links:
  cloud: /influxdb/cloud/tools/grafana/
  core: /influxdb3/core/visualize-data/grafana/
  enterprise: /influxdb3/enterprise/visualize-data/grafana/
source: /content/shared/v3-process-data/visualize/grafana.md
---

Use [Grafana](https://grafana.com/) to query and visualize data stored in
{{% product-name %}}.

> [Grafana] enables you to query, visualize, alert on, and explore your metrics,
> logs, and traces wherever they are stored.
> [Grafana] provides you with tools to turn your time-series database (TSDB)
> data into insightful graphs and visualizations.
>
> {{% cite %}}-- [Grafana documentation](https://grafana.com/docs/grafana/latest/introduction/){{% /cite %}}

<!-- TOC -->

- [Install Grafana or login to Grafana Cloud](#install-grafana-or-login-to-grafana-cloud)
- [InfluxDB data source](#influxdb-data-source)
- [Create an InfluxDB data source](#create-an-influxdb-data-source)
- [Query InfluxDB with Grafana](#query-influxdb-with-grafana)
- [Build visualizations with Grafana](#build-visualizations-with-grafana)

<!-- /TOC -->

## Install Grafana or login to Grafana Cloud

If using the open source version of **Grafana**, follow the
[Grafana installation instructions](https://grafana.com/docs/grafana/latest/setup-grafana/installation/)
to install Grafana for your operating system.
If using **Grafana Cloud**, log in to your Grafana Cloud instance.

## InfluxDB data source

The InfluxDB data source plugin is included in the Grafana core distribution.
Use the plugin to query and visualize data stored in {{< product-name >}} with
both InfluxQL and SQL.

> [!Note]
> #### Grafana 10.3+
>
> The instructions below are for **Grafana 10.3+**, which introduced the newest
> version of the InfluxDB core plugin.
> The updated plugin includes **SQL support** for InfluxDB 3-based products such
> as {{< product-name >}}.

## Create an InfluxDB data source

Which data source you create depends on which query language you want to use to
query {{% product-name %}}:

1. In your Grafana user interface (UI), navigate to **Data Sources**.
2. Click **Add new data source**.
3. Search for and select the **InfluxDB** plugin.
4. Provide a name for your data source.
5. Under **Query Language**, select either **SQL** or **InfluxQL**:

{{< tabs-wrapper >}}
{{% tabs %}}
[SQL](#)
[InfluxQL](#)
{{% /tabs %}}
{{% tab-content %}}
<!--------------------------------- BEGIN SQL --------------------------------->

When creating an InfluxDB data source that uses SQL to query data:

1. Under **HTTP**:

   - **URL**: Provide your [{{% product-name %}} region URL](/influxdb3/cloud-serverless/reference/regions/)
     using the HTTPS protocol:

     ```
     https://{{< influxdb/host >}}
     ```

2. Under **InfluxDB Details**:

   - **Database**: Provide a default bucket name to query.
     In {{< product-name >}}, a bucket functions as a database.
   - **Token**: Provide an [API token](/influxdb3/cloud-serverless/admin/tokens/)
     with read access to the buckets you want to query.

3. Click **Save & test**.

{{< img-hd src="/img/influxdb3/cloud-serverless-grafana-influxdb-data-source-sql.png" alt="Grafana InfluxDB data source for InfluxDB Cloud Serverless that uses SQL" />}}

<!---------------------------------- END SQL ---------------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!------------------------------- BEGIN INFLUXQL ------------------------------>

When creating an InfluxDB data source that uses InfluxQL to query data:

> [!Note]
> #### Map databases and retention policies to buckets
>
> To query {{% product-name %}} with InfluxQL, first map database and retention policy
> (DBRP) combinations to your InfluxDB Cloud buckets. For more information, see
> [Map databases and retention policies to buckets](/influxdb3/cloud-serverless/query-data/influxql/dbrp/).

1. Under **HTTP**:

   - **URL**: Provide your [{{% product-name %}} region URL](/influxdb3/cloud-serverless/reference/regions/)
     using the HTTPS protocol:

     ```
     https://{{< influxdb/host >}}
     ```

2. Under **InfluxDB Details**:

   - **Database**: Provide a database name to query.
     Use the database name that is mapped to your InfluxDB bucket.
   - **User**: Provide an arbitrary string.
     _This credential is ignored when querying {{% product-name %}}, but it cannot be empty._
   - **Password**: Provide an [API token](/influxdb3/cloud-serverless/admin/tokens/)
     with read access to the buckets you want to query.
   - **HTTP Method**: Choose one of the available HTTP request methods to use when querying data:

     - **POST** ({{< req text="Recommended" >}})
     - **GET**

3. Click **Save & test**.

{{< img-hd src="/img/influxdb3/cloud-serverless-grafana-influxdb-data-source-influxql.png" alt="Grafana InfluxDB data source for InfluxDB Cloud Serverless using InfluxQL" />}}

<!-------------------------------- END INFLUXQL ------------------------------->
{{% /tab-content %}}
{{< /tabs-wrapper >}}

## Query InfluxDB with Grafana

After you [configure and save a FlightSQL or InfluxDB data source](#create-an-influxdb-data-source),
use Grafana to build, run, and inspect queries against your InfluxDB bucket.

{{< tabs-wrapper >}}
{{% tabs %}}
[SQL](#)
[InfluxQL](#)
{{% /tabs %}}
{{% tab-content %}}
<!--------------------------------- BEGIN SQL --------------------------------->

> [!Note]
> {{% sql/sql-schema-intro %}}
> To learn more, see [Query Data](/influxdb3/cloud-serverless/query-data/sql/).

1. Click **Explore**.
2. In the dropdown, select the saved InfluxDB data source to query.
3. Use the SQL query form to build your query:

   - **Table**: Select the measurement to query.
   - **Column**: Select one or more fields and tags to return as columns in query results.

     With SQL, select the `time` column to include timestamps with the data.
     Grafana relies on the `time` column to correctly graph time series data.

   - _**Optional:**_ Toggle **filter** to generate **WHERE** clause statements.

     - **WHERE**: Configure condition expressions to include in the `WHERE` clause.

   - _**Optional:**_ Toggle **group** to generate **GROUP BY** clause statements.

     - **GROUP BY**: Select columns to group by.
       If you include an aggregation function in the **SELECT** list,
       you must group by one or more of the queried columns.
       SQL returns the aggregation for each group.

   - {{< req text="Recommended" color="green" >}}:
     Toggle **order** to generate **ORDER BY** clause statements.

     - **ORDER BY**: Select columns to sort by.
       You can sort by time and multiple fields or tags.
       To sort in descending order, select **DESC**.

4. {{< req text="Recommended" color="green" >}}: Change format to **Time series**.

   - Use the **Format** dropdown to change the format of the query results.
     For example, to visualize the query results as a time series, select **Time series**.

5. Click **Run query** to execute the query.

<!---------------------------------- END SQL ---------------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!------------------------------- BEGIN INFLUXQL ------------------------------>

1. Click **Explore**.
2. In the dropdown, select the **InfluxDB** data source that you want to query.
3. Use the InfluxQL query form to build your query:

   - **FROM**: Select the measurement that you want to query.
   - **WHERE**: To filter the query results, enter a conditional expression.
   - **SELECT**: Select fields to query and an aggregate function to apply to each.
     The aggregate function is applied to each time interval defined in the
     `GROUP BY` clause.
   - **GROUP BY**: By default, Grafana groups data by time to downsample results
     and improve query performance.
     You can also add other tags to group by.

4. Click **Run query** to execute the query.
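
For reference, the form builds InfluxQL like the following (a sketch; the measurement and field names are hypothetical):

```sql
SELECT mean("water_level")
FROM "h2o_feet"
WHERE time >= now() - 1h
GROUP BY time(10m)
```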

<!-------------------------------- END INFLUXQL ------------------------------->
{{% /tab-content %}}
{{< /tabs-wrapper >}}

{{< youtube "rSsouoNsNDs" >}}

To learn about query management and inspection in Grafana, see the
[Grafana Explore documentation](https://grafana.com/docs/grafana/latest/explore/).

## Build visualizations with Grafana

For a comprehensive walk-through of creating visualizations with
Grafana, see the [Grafana documentation](https://grafana.com/docs/grafana/latest/).
<!-- SOURCE: /content/shared/v3-process-data/visualize/grafana.md -->

@ -27,7 +27,7 @@ point-in-time snapshot of the data. The Parquet files are immutable and are
never replaced nor modified. Parquet files are stored in object storage.

<span id="influxdb-catalog"></span>
The _InfluxDB catalog_ is a relational, PostgreSQL-compatible database that
contains references to all Parquet files in object storage and is used as an
index to find the appropriate Parquet files for a particular set of data.

@ -9,7 +9,7 @@ menu:
  influxdb3_clustered:
    name: Use Grafana
    parent: Visualize data
influxdb3/clustered/tags: [query, visualization, Grafana]
aliases:
  - /influxdb3/clustered/query-data/tools/grafana/
  - /influxdb3/clustered/query-data/sql/execute-queries/grafana/

@ -20,195 +20,7 @@ alt_links:
  cloud: /influxdb/cloud/tools/grafana/
  core: /influxdb3/core/visualize-data/grafana/
  enterprise: /influxdb3/enterprise/visualize-data/grafana/
source: /content/shared/v3-process-data/visualize/grafana.md
---

Use [Grafana](https://grafana.com/) to query and visualize data stored in
{{% product-name %}}.

> [Grafana] enables you to query, visualize, alert on, and explore your metrics,
> logs, and traces wherever they are stored.
> [Grafana] provides you with tools to turn your time-series database (TSDB)
> data into insightful graphs and visualizations.
>
> {{% cite %}}-- [Grafana documentation](https://grafana.com/docs/grafana/latest/introduction/){{% /cite %}}

- [Install Grafana or login to Grafana Cloud](#install-grafana-or-login-to-grafana-cloud)
- [InfluxDB data source](#influxdb-data-source)
- [Create an InfluxDB data source](#create-an-influxdb-data-source)
- [Query InfluxDB with Grafana](#query-influxdb-with-grafana)
- [Build visualizations with Grafana](#build-visualizations-with-grafana)

## Install Grafana or login to Grafana Cloud

If using the open source version of **Grafana**, follow the
[Grafana installation instructions](https://grafana.com/docs/grafana/latest/setup-grafana/installation/)
to install Grafana for your operating system.
If using **Grafana Cloud**, log in to your Grafana Cloud instance.

## InfluxDB data source

The InfluxDB data source plugin is included in the Grafana core distribution.
Use the plugin to query and visualize data stored in {{< product-name >}} with
both InfluxQL and SQL.

> [!Note]
> #### Grafana 10.3+
>
> The instructions below are for **Grafana 10.3+**, which introduced the newest
> version of the InfluxDB core plugin.
> The updated plugin includes **SQL support** for InfluxDB 3-based products such
> as {{< product-name >}}.

## Create an InfluxDB data source

1. In your Grafana user interface (UI), navigate to **Data Sources**.
2. Click **Add new data source**.
3. Search for and select the **InfluxDB** plugin.
4. Provide a name for your data source.
5. Under **Query Language**, select either **SQL** or **InfluxQL**:

{{< tabs-wrapper >}}
{{% tabs %}}
[SQL](#)
[InfluxQL](#)
{{% /tabs %}}
{{% tab-content %}}
<!--------------------------------- BEGIN SQL --------------------------------->

When creating an InfluxDB data source that uses SQL to query data:

1. Under **HTTP**:

   - **URL**: Provide your {{% product-name omit=" Clustered" %}} cluster URL
     using the HTTPS protocol:

     ```
     https://{{< influxdb/host >}}
     ```

2. Under **InfluxDB Details**:

   - **Database**: Provide a default [database](/influxdb3/clustered/admin/databases/) name to query.
   - **Token**: Provide a [database token](/influxdb3/clustered/admin/tokens/#database-tokens)
     with read access to the databases you want to query.

3. Click **Save & test**.

{{< img-hd src="/img/influxdb3/clustered-grafana-influxdb-data-source-sql.png" alt="Grafana InfluxDB data source for InfluxDB Clustered that uses SQL" />}}

<!---------------------------------- END SQL ---------------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!------------------------------- BEGIN INFLUXQL ------------------------------>

When creating an InfluxDB data source that uses InfluxQL to query data:

1. Under **HTTP**:

   - **URL**: Provide your [{{% product-name %}} region URL](/influxdb3/clustered/reference/regions/)
     using the HTTPS protocol:

     ```
     https://{{< influxdb/host >}}
     ```

2. Under **InfluxDB Details**:

   - **Database**: Provide a default [database](/influxdb3/clustered/admin/databases/) name to query.
   - **User**: Provide an arbitrary string.
     _This credential is ignored when querying {{% product-name %}}, but it cannot be empty._
   - **Password**: Provide a [database token](/influxdb3/clustered/admin/tokens/#database-tokens)
     with read access to the databases you want to query.
   - **HTTP Method**: Choose one of the available HTTP request methods to use when querying data:

     - **POST** ({{< req text="Recommended" >}})
     - **GET**

3. Click **Save & test**.

{{< img-hd src="/img/influxdb3/clustered-grafana-influxdb-data-source-influxql.png" alt="Grafana InfluxDB data source for InfluxDB Clustered using InfluxQL" />}}

<!-------------------------------- END INFLUXQL ------------------------------->
{{% /tab-content %}}
{{< /tabs-wrapper >}}

## Query InfluxDB with Grafana

After you [configure and save an InfluxDB data source](#create-an-influxdb-data-source),
use Grafana to build, run, and inspect queries against your InfluxDB database.

{{< tabs-wrapper >}}
{{% tabs %}}
[SQL](#)
[InfluxQL](#)
{{% /tabs %}}
{{% tab-content %}}
<!--------------------------------- BEGIN SQL --------------------------------->

> [!Note]
> {{% sql/sql-schema-intro %}}
> To learn more, see [Query Data](/influxdb3/clustered/query-data/sql/).

1. Click **Explore**.
2. In the dropdown, select the saved InfluxDB data source to query.
3. Use the SQL query form to build your query:

   - **Table**: Select the measurement to query.
   - **Column**: Select one or more fields and tags to return as columns in query results.

     With SQL, select the `time` column to include timestamps with the data.
     Grafana relies on the `time` column to correctly graph time series data.

   - _**Optional:**_ Toggle **filter** to generate **WHERE** clause statements.

     - **WHERE**: Configure condition expressions to include in the `WHERE` clause.

   - _**Optional:**_ Toggle **group** to generate **GROUP BY** clause statements.

     - **GROUP BY**: Select columns to group by.
       If you include an aggregation function in the **SELECT** list,
       you must group by one or more of the queried columns.
       SQL returns the aggregation for each group.

   - {{< req text="Recommended" color="green" >}}:
     Toggle **order** to generate **ORDER BY** clause statements.

     - **ORDER BY**: Select columns to sort by.
       You can sort by time and multiple fields or tags.
       To sort in descending order, select **DESC**.

4. {{< req text="Recommended" color="green" >}}: Change format to **Time series**.

   - Use the **Format** dropdown to change the format of the query results.
     For example, to visualize the query results as a time series, select **Time series**.

5. Click **Run query** to execute the query.

<!---------------------------------- END SQL ---------------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!------------------------------- BEGIN INFLUXQL ------------------------------>

1. Click **Explore**.
2. In the dropdown, select the **InfluxDB** data source that you want to query.
3. Use the InfluxQL query form to build your query:

   - **FROM**: Select the measurement that you want to query.
   - **WHERE**: To filter the query results, enter a conditional expression.
   - **SELECT**: Select fields to query and an aggregate function to apply to each.
     The aggregate function is applied to each time interval defined in the
     `GROUP BY` clause.
   - **GROUP BY**: By default, Grafana groups data by time to downsample results
     and improve query performance.
     You can also add other tags to group by.

4. Click **Run query** to execute the query.

<!-------------------------------- END INFLUXQL ------------------------------->
{{% /tab-content %}}
{{< /tabs-wrapper >}}

{{< youtube "rSsouoNsNDs" >}}

To learn about query management and inspection in Grafana, see the
[Grafana Explore documentation](https://grafana.com/docs/grafana/latest/explore/).

## Build visualizations with Grafana

For a comprehensive walk-through of creating visualizations with
Grafana, see the [Grafana documentation](https://grafana.com/docs/grafana/latest/).
<!-- SOURCE: /content/shared/v3-process-data/visualize/grafana.md -->

@ -0,0 +1,17 @@
---
title: InfluxDB Clustered data durability
description: >
  Data written to {{% product-name %}} progresses through multiple stages to ensure durability, optimized performance and storage, and efficient querying. Configuration options at each stage affect system behavior, balancing reliability and resource usage.
weight: 102
menu:
  influxdb3_clustered:
    name: Data durability
    parent: InfluxDB internals
influxdb3/clustered/tags: [backups, internals]
related:
  - https://docs.aws.amazon.com/AmazonS3/latest/userguide/DataDurability.html, AWS S3 Data Durability
  - /influxdb3/clustered/reference/internals/storage-engine/
source: /shared/v3-distributed-internals-reference/durability.md
---

<!--// SOURCE - content/shared/v3-distributed-internals-reference/durability.md -->

@ -390,6 +390,43 @@ spec:
  # ...[remaining configuration]
```

### `clustered-auth` service routes to removed `gateway` service instead of `core` service

If you have the `clusteredAuth` feature flag enabled, the `clustered-auth` service will be deployed.
The service currently routes to the recently removed `gateway` service instead of the new `core` service.

#### Temporary workaround for service routing

Until you upgrade to release `20250805-1812019`, you need to override the `clustered-auth`
service to point to the new `core` service by adding the following `env` overrides to your `AppInstance`:

```yaml
apiVersion: kubecfg.dev/v1alpha1
kind: AppInstance
metadata:
  name: influxdb
  namespace: influxdb
spec:
  package:
    image: us-docker.pkg.dev/influxdb2-artifacts/clustered/influxdb:20241024-1354148
    apiVersion: influxdata.com/v1alpha1
    spec:
      components:
        querier:
          template:
            containers:
              clustered-auth:
                env:
                  AUTHZ_TOKEN_SVC_ADDRESS: 'http://core:8091/'
        router:
          template:
            containers:
              clustered-auth:
                env:
                  AUTHZ_TOKEN_SVC_ADDRESS: 'http://core:8091/'
      # ...remaining configuration...
```
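
After adding the overrides, apply the updated `AppInstance` manifest (the filename is an example):

```sh
kubectl apply --filename myinfluxdb.yml --namespace influxdb
```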

### Highlights

#### AppInstance image override bug fix
@ -1241,7 +1278,7 @@ We now expose a `google` object within the `objectStore` configuration, which
|
|||
enables support for using Google Cloud's GCS as a backing object store for IOx
|
||||
components. This supports both
|
||||
[GKE workload identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity)
|
||||
and [IAM Service Account](https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform#step_3_create_service_account_credentials)
|
||||
and [IAM Service Account](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#kubernetes-sa-to-iam)
|
||||
authentication methods.
|
||||
|
||||
#### Support for bypassing identity provider configuration for database/token management
|
||||
|
|

@ -9,6 +9,13 @@ aliases:
- /kapacitor/v1/about_the_project/releasenotes-changelog/
---

## v1.8.1 {date="2025-09-08"}

### Dependency updates

- Upgrade golang.org/x/oauth2 from 0.23.0 to 0.27.0
- Upgrade Go to 1.24.6

## v1.8.0 {date="2025-06-26"}

> [!Warning]

@ -1278,7 +1278,7 @@ Defines the address on which InfluxDB serves HTTP API requests.

Specifies the size of the memory pool used during query execution.
Can be given as an absolute value in bytes or as a percentage of the total available memory--for
example: `8000000000` or `10%`.

{{% show-in "core" %}}**Default:** `8589934592`{{% /show-in %}}
{{% show-in "enterprise" %}}**Default:** `20%`{{% /show-in %}}
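
For example, a sketch that assumes this option maps to a serve flag of the same name (the other flags shown are illustrative):

```sh
# Cap the query-execution memory pool at 20% of available memory
influxdb3 serve \
  --node-id node0 \
  --object-store file \
  --data-dir ~/.influxdb3 \
  --exec-mem-pool-bytes 20%
```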

@ -1316,6 +1316,7 @@ percentage (portion of available memory) or absolute value in MB--for example: `

Specifies the interval to flush buffered data to a WAL file. Writes that wait
for WAL confirmation take up to this interval to complete.
Use `s` for seconds or `ms` for milliseconds. For local disks, `100ms` is recommended.

**Default:** `1s`
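
A sketch that assumes this option maps to a serve flag of the same name (the other flags shown are illustrative):

```sh
# Flush buffered writes to the WAL every 100 milliseconds
influxdb3 serve \
  --node-id node0 \
  --object-store file \
  --data-dir ~/.influxdb3 \
  --wal-flush-interval 100ms
```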

@ -329,8 +329,8 @@ each frame that the window function operates on.

- [UNBOUNDED PRECEDING](#unbounded-preceding)
- [offset PRECEDING](#offset-preceding)
- [CURRENT_ROW](#current-row)
- [offset FOLLOWING](#offset-following)
- [UNBOUNDED FOLLOWING](#unbounded-following)
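
For example, a sketch of a frame that spans the two preceding rows through the current row (the table and column names are hypothetical):

```sql
SELECT
  time,
  water_level,
  AVG(water_level) OVER (
    ORDER BY time
    ROWS BETWEEN 2 PRECEDING AND CURRENT ROW
  ) AS moving_avg
FROM h2o_feet
```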

##### UNBOUNDED PRECEDING

@ -369,18 +369,6 @@ For example, `3 FOLLOWING` includes 3 rows after the current row.

##### UNBOUNDED FOLLOWING

Starts at the current row and ends at the last row of the partition.

```sql

@ -96,6 +96,91 @@ less than or equal to `08-19-2019T13:00:00Z`.
{{% /expand %}}
{{< /expand-wrapper >}}

### Filter data by dynamic date ranges

Use date and time functions to filter data by relative time periods that automatically update.

#### Get data from yesterday

```sql
SELECT *
FROM h2o_feet
WHERE "location" = 'santa_monica'
  AND time >= DATE_TRUNC('day', NOW() - INTERVAL '1 day')
  AND time < DATE_TRUNC('day', NOW())
```

{{< expand-wrapper >}}
{{% expand "View example results" %}}

This query filters data to include only records from the previous calendar day:

- `NOW() - INTERVAL '1 day'` calculates yesterday's timestamp
- `DATE_TRUNC('day', ...)` truncates to the start of that day (00:00:00)
- The range spans from yesterday at 00:00:00 to today at 00:00:00

| level description | location     | time                     | water_level |
| :---------------- | :----------- | :----------------------- | :---------- |
| below 3 feet      | santa_monica | 2019-08-18T12:00:00.000Z | 2.533       |
| below 3 feet      | santa_monica | 2019-08-18T12:06:00.000Z | 2.543       |
| below 3 feet      | santa_monica | 2019-08-18T12:12:00.000Z | 2.385       |
| below 3 feet      | santa_monica | 2019-08-18T12:18:00.000Z | 2.362       |
| below 3 feet      | santa_monica | 2019-08-18T12:24:00.000Z | 2.405       |
| below 3 feet      | santa_monica | 2019-08-18T12:30:00.000Z | 2.398       |

{{% /expand %}}
{{< /expand-wrapper >}}

#### Get data from the last 24 hours

```sql
SELECT *
FROM h2o_feet
WHERE time >= NOW() - INTERVAL '1 day' AND location = 'santa_monica'
```

{{< expand-wrapper >}}
{{% expand "View example results" %}}

This query returns data from the last 24 hours, measured from the current time. Unlike the "yesterday" example, this creates a rolling 24-hour window that moves with the current time.

| level description | location     | time                     | water_level |
| :---------------- | :----------- | :----------------------- | :---------- |
| below 3 feet      | santa_monica | 2019-08-18T18:00:00.000Z | 2.120       |
| below 3 feet      | santa_monica | 2019-08-18T18:06:00.000Z | 2.028       |
| below 3 feet      | santa_monica | 2019-08-18T18:12:00.000Z | 1.982       |
| below 3 feet      | santa_monica | 2019-08-19T06:00:00.000Z | 1.825       |
| below 3 feet      | santa_monica | 2019-08-19T06:06:00.000Z | 1.753       |
| below 3 feet      | santa_monica | 2019-08-19T06:12:00.000Z | 1.691       |

{{% /expand %}}
{{< /expand-wrapper >}}

#### Get data from the current week

```sql
SELECT *
FROM h2o_feet
WHERE time >= DATE_TRUNC('week', NOW()) AND location = 'santa_monica'
```

{{< expand-wrapper >}}
{{% expand "View example results" %}}

This query returns all data from the start of the current week (Monday at 00:00:00) to the current time. The `DATE_TRUNC('week', NOW())` function truncates the current timestamp to the beginning of the week.

| level description | location     | time                     | water_level |
| :---------------- | :----------- | :----------------------- | :---------- |
| below 3 feet      | santa_monica | 2019-08-12T00:00:00.000Z | 2.064       |
| below 3 feet      | santa_monica | 2019-08-14T09:30:00.000Z | 2.116       |
| below 3 feet      | santa_monica | 2019-08-16T15:45:00.000Z | 1.952       |
| below 3 feet      | santa_monica | 2019-08-18T12:00:00.000Z | 2.533       |
| below 3 feet      | santa_monica | 2019-08-18T18:00:00.000Z | 2.385       |
| below 3 feet      | santa_monica | 2019-08-19T10:30:00.000Z | 1.691       |

{{% /expand %}}
{{< /expand-wrapper >}}
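
The same pattern extends to other calendar units. For example, a sketch for the current month:

```sql
SELECT *
FROM h2o_feet
WHERE time >= DATE_TRUNC('month', NOW()) AND location = 'santa_monica'
```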

### Filter data using the OR operator

```sql
|
|
@ -0,0 +1,92 @@

## How data flows through {{% product-name %}}

When data is written to {{% product-name %}}, it progresses through multiple stages to ensure durability, optimized performance and storage, and efficient querying. Configuration options at each stage affect system behavior, balancing reliability and resource usage.

{{< svg "/static/svgs/v3-storage-architecture.svg" >}}

<span class="caption">Figure: Write request, response, and ingest flow for {{% product-name %}}</span>

- [Data ingest](#data-ingest)
- [Data storage](#data-storage)
- [Data deletion](#data-deletion)
- [Backups](#backups)
{{% hide-in "clustered" %}}- [Recovery](#recovery){{% /hide-in %}}

## Data ingest

1. [Write validation and memory buffer](#write-validation-and-memory-buffer)
2. [Write-ahead log (WAL) persistence](#write-ahead-log-wal-persistence)

### Write validation and memory buffer

The [Router](/influxdb3/version/reference/internals/storage-engine/#router) validates incoming data to prevent malformed or unsupported data from entering the system.
{{% product-name %}} writes accepted data to multiple write-ahead log (WAL) files on [Ingester](/influxdb3/version/reference/internals/storage-engine/#ingester) pods' local storage (two by default, for redundancy) before acknowledging the write request.
The Ingester holds the data in memory to ensure leading-edge data is available for querying.
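
For reference, each write request carries points in line protocol. The following is a hypothetical sketch (the measurement, tag, field, and timestamp values are illustrative only) of the kind of payload the Router validates before it is buffered and logged to the WAL:

```
# measurement,tag_set field_set timestamp (nanoseconds)
home,room=Kitchen temp=22.5,hum=36.1 1724935546000000000
home,room=Living\ Room temp=21.8,hum=35.9 1724935546000000000
```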

### Write-ahead log (WAL) persistence

Ingesters persist the contents of the WAL to Parquet files in object storage and update the [Catalog](/influxdb3/version/reference/internals/storage-engine/#catalog) to reference the newly created Parquet files.
{{% product-name %}} retains WALs until the data is persisted.

If an Ingester node is gracefully shut down (for example, during a new software deployment), it flushes the contents of the WAL to Parquet files before shutting down.

## Data storage

In {{< product-name >}}, all measurements are stored in
[Apache Parquet](https://parquet.apache.org/) files that represent a
point-in-time snapshot of the data. The Parquet files are immutable and are
never replaced nor modified. Parquet files are stored in object storage and
referenced in the [Catalog](/influxdb3/version/reference/internals/storage-engine/#catalog), which InfluxDB uses to find the appropriate Parquet files for a particular set of data.

{{% hide-in "clustered" %}}
Parquet data files in object storage are redundantly stored on multiple devices
across a minimum of three availability zones in a cloud region.
{{% /hide-in %}}

## Data deletion

When data is deleted or expires (reaches the database's [retention period](/influxdb3/version/reference/internals/data-retention/#database-retention-period)), InfluxDB performs the following steps:

1. Marks the associated Parquet files as deleted in the catalog.
2. Filters out data marked for deletion from all queries.
{{% hide-in "clustered" %}}3. Retains Parquet files marked for deletion in object storage for approximately 30 days after the youngest data in the file ages out of retention.{{% /hide-in %}}

## Backups

{{< product-name >}} implements the following data backup strategies:

- **Backup of WAL file**: The WAL file is written on locally attached storage.
  If an ingester process fails, a new ingester reads the WAL file on
  startup and continues normal operation. WAL files are maintained until their
  contents have been written to the Parquet files in object storage.
  For added protection, ingesters can be configured for write replication, where
  each measurement is written to two different WAL files before acknowledging
  the write.

- **Backup of Parquet files**: Parquet files are stored in object storage{{% hide-in "clustered" %}}, where
  they are redundantly stored on multiple devices across a minimum of three
  availability zones in a cloud region. Parquet files associated with each
  database are kept in object storage for the duration of the database retention
  period plus an additional time period (approximately 30 days){{% /hide-in %}}.

- **Backup of catalog**: InfluxData keeps a transaction log of all recent updates
  to the [InfluxDB catalog](/influxdb3/version/reference/internals/storage-engine/#catalog) and generates a daily backup of
  the catalog. {{% hide-in "clustered" %}}Backups are preserved for at least 30 days in object storage across a minimum of three availability zones.{{% /hide-in %}}

{{% hide-in "clustered" %}}
## Recovery

InfluxData can perform the following recovery operations:

- **Recovery after ingester failure**: If an ingester fails, a new ingester is
  started and reads from the WAL file for the recently ingested data.

- **Recovery of Parquet files**: {{< product-name >}} uses the provided object
  storage data durability to recover Parquet files.

- **Recovery of the catalog**: InfluxData can restore the [Catalog](/influxdb3/version/reference/internals/storage-engine/#catalog) to
  the most recent daily backup and then reapply any transactions
  that occurred since the interruption.
{{% /hide-in %}}
@ -0,0 +1,201 @@

Use [Grafana](https://grafana.com/) to query and visualize data stored in
{{% product-name %}}.

> [Grafana] enables you to query, visualize, alert on, and explore your metrics,
> logs, and traces wherever they are stored.
> [Grafana] provides you with tools to turn your time-series database (TSDB)
> data into insightful graphs and visualizations.
>
> {{% cite %}}-- [Grafana documentation](https://grafana.com/docs/grafana/latest/introduction/){{% /cite %}}

- [Install Grafana or log in to Grafana Cloud](#install-grafana-or-log-in-to-grafana-cloud)
- [InfluxDB data source](#influxdb-data-source)
- [Create an InfluxDB data source](#create-an-influxdb-data-source)
- [Query InfluxDB with Grafana](#query-influxdb-with-grafana)
- [Build visualizations with Grafana](#build-visualizations-with-grafana)

## Install Grafana or log in to Grafana Cloud

If using the open source version of **Grafana**, follow the
[Grafana installation instructions](https://grafana.com/docs/grafana/latest/setup-grafana/installation/)
to install Grafana for your operating system.
If using **Grafana Cloud**, log in to your Grafana Cloud instance.

## InfluxDB data source

The InfluxDB data source plugin is included in the Grafana core distribution.
Use the plugin to query and visualize data stored in {{< product-name >}} with
both InfluxQL and SQL.

> [!Note]
> #### Grafana 10.3+
>
> The instructions below are for **Grafana 10.3+**, which introduced the newest
> version of the InfluxDB core plugin.
> The updated plugin includes **SQL support** for InfluxDB 3-based products such
> as {{< product-name >}}.

## Create an InfluxDB data source

Which data source you create depends on which query language you want to use to
query {{% product-name %}}:

1. In your Grafana user interface (UI), navigate to **Data Sources**.
2. Click **Add new data source**.
3. Search for and select the **InfluxDB** plugin.
4. Provide a name for your data source.
5. Under **Query Language**, select either **SQL** or **InfluxQL**:

{{< tabs-wrapper >}}
{{% tabs %}}
[SQL](#)
[InfluxQL](#)
{{% /tabs %}}
{{% tab-content %}}
<!--------------------------------- BEGIN SQL --------------------------------->

When creating an InfluxDB data source that uses SQL to query data:

1. Under **HTTP**:

   - **URL**: Provide your {{% show-in "cloud-serverless" %}}[{{< product-name >}} region URL](/influxdb3/version/reference/regions/){{% /show-in %}}
     {{% hide-in "cloud-serverless" %}}{{% product-name omit=" Clustered" %}} cluster URL{{% /hide-in %}} using the HTTPS protocol:

     ```
     https://{{< influxdb/host >}}
     ```

2. Under **InfluxDB Details**:

   - **Database**: Provide a default {{% show-in "cloud-serverless" %}}[bucket](/influxdb3/version/admin/buckets/) name to query. In {{< product-name >}}, a bucket functions as a database.{{% /show-in %}}{{% hide-in "cloud-serverless" %}}[database](/influxdb3/version/admin/databases/) name to query.{{% /hide-in %}}
   - **Token**: Provide {{% show-in "cloud-serverless" %}}an [API token](/influxdb3/version/admin/tokens/) with read access to the buckets you want to query.{{% /show-in %}}{{% hide-in "cloud-serverless" %}}a [database token](/influxdb3/version/admin/tokens/#database-tokens) with read access to the databases you want to query.{{% /hide-in %}}

3. Click **Save & test**.

{{% show-in "cloud-serverless" %}}{{< img-hd src="/img/influxdb3/cloud-serverless-grafana-influxdb-data-source-sql.png" alt="Grafana InfluxDB data source for InfluxDB Cloud Serverless that uses SQL" />}}{{% /show-in %}}
{{% show-in "cloud-dedicated" %}}{{< img-hd src="/img/influxdb/cloud-dedicated-grafana-influxdb-data-source-sql.png" alt="Grafana InfluxDB data source for InfluxDB Cloud Dedicated that uses SQL" />}}{{% /show-in %}}
{{% show-in "clustered" %}}{{< img-hd src="/img/influxdb3/clustered-grafana-influxdb-data-source-sql.png" alt="Grafana InfluxDB data source for InfluxDB Clustered that uses SQL" />}}{{% /show-in %}}

<!---------------------------------- END SQL ---------------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!------------------------------- BEGIN INFLUXQL ------------------------------>

When creating an InfluxDB data source that uses InfluxQL to query data:

{{% show-in "cloud-serverless" %}}
> [!Note]
> #### Map databases and retention policies to buckets
>
> To query {{% product-name %}} with InfluxQL, first map database and retention policy
> (DBRP) combinations to your InfluxDB Cloud buckets. For more information, see
> [Map databases and retention policies to buckets](/influxdb3/version/query-data/influxql/dbrp/).
{{% /show-in %}}

1. Under **HTTP**:

   - **URL**: Provide your {{% show-in "cloud-serverless" %}}[{{< product-name >}} region URL](/influxdb3/version/reference/regions/){{% /show-in %}}{{% hide-in "cloud-serverless" %}}{{% product-name omit=" Clustered" %}} cluster URL{{% /hide-in %}}
     using the HTTPS protocol:

     ```
     https://{{< influxdb/host >}}
     ```

2. Under **InfluxDB Details**:

   - **Database**: Provide a {{% show-in "cloud-serverless" %}}database name to query.
     Use the database name that is mapped to your InfluxDB bucket{{% /show-in %}}{{% hide-in "cloud-serverless" %}}default [database](/influxdb3/version/admin/databases/) name to query{{% /hide-in %}}.
   - **User**: Provide an arbitrary string.
     _This credential is ignored when querying {{% product-name %}}, but it cannot be empty._
   - **Password**: Provide {{% show-in "cloud-serverless" %}}an [API token](/influxdb3/version/admin/tokens/) with read access to the buckets you want to query{{% /show-in %}}{{% hide-in "cloud-serverless" %}}a [database token](/influxdb3/version/admin/tokens/#database-tokens) with read access to the databases you want to query{{% /hide-in %}}.
   - **HTTP Method**: Choose one of the available HTTP request methods to use when querying data:

     - **POST** ({{< req text="Recommended" >}})
     - **GET**

3. Click **Save & test**.

{{% show-in "cloud-dedicated" %}}{{< img-hd src="/img/influxdb/cloud-dedicated-grafana-influxdb-data-source-influxql.png" alt="Grafana InfluxDB data source for InfluxDB Cloud Dedicated using InfluxQL" />}}{{% /show-in %}}
{{% show-in "cloud-serverless" %}}{{< img-hd src="/img/influxdb3/cloud-serverless-grafana-influxdb-data-source-influxql.png" alt="Grafana InfluxDB data source for InfluxDB Cloud Serverless using InfluxQL" />}}{{% /show-in %}}
{{% show-in "clustered" %}}{{< img-hd src="/img/influxdb3/clustered-grafana-influxdb-data-source-influxql.png" alt="Grafana InfluxDB data source for InfluxDB Clustered using InfluxQL" />}}{{% /show-in %}}

<!-------------------------------- END INFLUXQL ------------------------------->
{{% /tab-content %}}
{{< /tabs-wrapper >}}

## Query InfluxDB with Grafana

After you [configure and save an InfluxDB data source](#create-an-influxdb-data-source),
use Grafana to build, run, and inspect queries against your InfluxDB {{% show-in "cloud-serverless" %}}bucket{{% /show-in %}}{{% hide-in "cloud-serverless" %}}database{{% /hide-in %}}.

{{< tabs-wrapper >}}
{{% tabs %}}
[SQL](#)
[InfluxQL](#)
{{% /tabs %}}
{{% tab-content %}}
<!--------------------------------- BEGIN SQL --------------------------------->

> [!Note]
> {{% sql/sql-schema-intro %}}
> To learn more, see [Query Data](/influxdb3/version/query-data/sql/).

1. Click **Explore**.
2. In the dropdown, select the saved InfluxDB data source to query.
3. Use the SQL query form to build your query:
   - **Table**: Select the measurement to query.
   - **Column**: Select one or more fields and tags to return as columns in query results.

     With SQL, select the `time` column to include timestamps with the data.
     Grafana relies on the `time` column to correctly graph time series data.

   - _**Optional:**_ Toggle **filter** to generate **WHERE** clause statements.
     - **WHERE**: Configure condition expressions to include in the `WHERE` clause.

   - _**Optional:**_ Toggle **group** to generate **GROUP BY** clause statements.
     - **GROUP BY**: Select columns to group by.
       If you include an aggregation function in the **SELECT** list,
       you must group by one or more of the queried columns.
       SQL returns the aggregation for each group.

   - {{< req text="Recommended" color="green" >}}:
     Toggle **order** to generate **ORDER BY** clause statements.
     - **ORDER BY**: Select columns to sort by.
       You can sort by time and multiple fields or tags.
       To sort in descending order, select **DESC**.

4. {{< req text="Recommended" color="green" >}}: Change the format to **Time series**.
   - Use the **Format** dropdown to change the format of the query results.
     For example, to visualize the query results as a time series, select **Time series**.

5. Click **Run query** to execute the query.
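
For reference, the query form generates standard SQL. A form configured as described above might produce something like the following sketch (the `h2o_feet` measurement and column names are illustrative assumptions, not required names):

```sql
SELECT time, location, water_level
FROM h2o_feet
WHERE time >= NOW() - INTERVAL '1 hour'
ORDER BY time DESC
```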

<!---------------------------------- END SQL ---------------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!------------------------------- BEGIN INFLUXQL ------------------------------>

1. Click **Explore**.
2. In the dropdown, select the **InfluxDB** data source that you want to query.
3. Use the InfluxQL query form to build your query:
   - **FROM**: Select the measurement that you want to query.
   - **WHERE**: To filter the query results, enter a conditional expression.
   - **SELECT**: Select fields to query and an aggregate function to apply to each.
     The aggregate function is applied to each time interval defined in the
     `GROUP BY` clause.
   - **GROUP BY**: By default, Grafana groups data by time to downsample results
     and improve query performance.
     You can also add other tags to group by.
4. Click **Run query** to execute the query.
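
The form builds an InfluxQL statement behind the scenes. As a rough sketch (again assuming a hypothetical `h2o_feet` measurement), the selections above translate to a query like:

```sql
SELECT MEAN("water_level") FROM "h2o_feet"
WHERE time > now() - 1h
GROUP BY time(10s)
```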

<!-------------------------------- END INFLUXQL ------------------------------->
{{% /tab-content %}}
{{< /tabs-wrapper >}}

{{< youtube "rSsouoNsNDs" >}}

To learn about query management and inspection in Grafana, see the
[Grafana Explore documentation](https://grafana.com/docs/grafana/latest/explore/).

## Build visualizations with Grafana

For a comprehensive walk-through of creating visualizations with
Grafana, see the [Grafana documentation](https://grafana.com/docs/grafana/latest/).

@ -11,6 +11,151 @@ menu:
weight: 60
---

## v1.36.0 {date="2025-09-08"}

### Important Changes

- Pull request [#17355](https://github.com/influxdata/telegraf/pull/17355) updates `profiles` support in `inputs.opentelemetry` from v1 experimental to v1 development, following upstream changes to the experimental API. This update modifies metric output. For example, the `frame_type`, `stack_trace_id`, `build_id`, and `build_id_type` fields are no longer reported. The value format of other fields or tags might also have changed. For more information, see the [OpenTelemetry documentation](https://opentelemetry.io/docs/).

### New Plugins

- [#17368](https://github.com/influxdata/telegraf/pull/17368) `inputs.turbostat` Add plugin
- [#17078](https://github.com/influxdata/telegraf/pull/17078) `processors.round` Add plugin

### Features

- [#16705](https://github.com/influxdata/telegraf/pull/16705) `agent` Introduce labels and selectors to enable and disable plugins
- [#17547](https://github.com/influxdata/telegraf/pull/17547) `inputs.influxdb_v2_listener` Add `/health` route
- [#17312](https://github.com/influxdata/telegraf/pull/17312) `inputs.internal` Allow collecting statistics per plugin instance
- [#17024](https://github.com/influxdata/telegraf/pull/17024) `inputs.lvm` Add sync_percent for lvm_logical_vol
- [#17355](https://github.com/influxdata/telegraf/pull/17355) `inputs.opentelemetry` Upgrade otlp proto module
- [#17156](https://github.com/influxdata/telegraf/pull/17156) `inputs.syslog` Add support for RFC3164 over TCP
- [#17543](https://github.com/influxdata/telegraf/pull/17543) `inputs.syslog` Allow limiting message size in octet counting mode
- [#17539](https://github.com/influxdata/telegraf/pull/17539) `inputs.x509_cert` Add support for Windows certificate stores
- [#17244](https://github.com/influxdata/telegraf/pull/17244) `outputs.nats` Allow disabling stream creation for externally managed streams
- [#17474](https://github.com/influxdata/telegraf/pull/17474) `outputs.elasticsearch` Support array headers and preserve commas in values
- [#17548](https://github.com/influxdata/telegraf/pull/17548) `outputs.influxdb` Add internal statistics for written bytes
- [#17213](https://github.com/influxdata/telegraf/pull/17213) `outputs.nats` Allow providing a subject layout
- [#17346](https://github.com/influxdata/telegraf/pull/17346) `outputs.nats` Enable batch serialization with use_batch_format
- [#17249](https://github.com/influxdata/telegraf/pull/17249) `outputs.sql` Allow sending batches of metrics in transactions
- [#17510](https://github.com/influxdata/telegraf/pull/17510) `parsers.avro` Support record arrays at root level
- [#17365](https://github.com/influxdata/telegraf/pull/17365) `plugins.snmp` Allow debug logging in gosnmp
- [#17345](https://github.com/influxdata/telegraf/pull/17345) `selfstat` Implement collection of plugin-internal statistics

### Bugfixes

- [#17411](https://github.com/influxdata/telegraf/pull/17411) `inputs.diskio` Handle counter wrapping in io fields
- [#17551](https://github.com/influxdata/telegraf/pull/17551) `inputs.s7comm` Use correct value for string length with 'extra' parameter
- [#17579](https://github.com/influxdata/telegraf/pull/17579) `internal` Extract go version more robustly
- [#17566](https://github.com/influxdata/telegraf/pull/17566) `outputs` Retrigger batch-available-events only if at least one metric was written successfully
- [#17381](https://github.com/influxdata/telegraf/pull/17381) `packaging` Rename rpm from loong64 to loongarch64

### Dependency Updates

- [#17519](https://github.com/influxdata/telegraf/pull/17519) `deps` Bump cloud.google.com/go/storage from 1.56.0 to 1.56.1
- [#17532](https://github.com/influxdata/telegraf/pull/17532) `deps` Bump github.com/Azure/azure-sdk-for-go/sdk/azcore from 1.18.2 to 1.19.0
- [#17494](https://github.com/influxdata/telegraf/pull/17494) `deps` Bump github.com/SAP/go-hdb from 1.13.12 to 1.14.0
- [#17488](https://github.com/influxdata/telegraf/pull/17488) `deps` Bump github.com/antchfx/xpath from 1.3.4 to 1.3.5
- [#17540](https://github.com/influxdata/telegraf/pull/17540) `deps` Bump github.com/aws/aws-sdk-go-v2/config from 1.31.0 to 1.31.2
- [#17538](https://github.com/influxdata/telegraf/pull/17538) `deps` Bump github.com/aws/aws-sdk-go-v2/credentials from 1.18.4 to 1.18.6
- [#17517](https://github.com/influxdata/telegraf/pull/17517) `deps` Bump github.com/aws/aws-sdk-go-v2/feature/ec2/imds from 1.18.3 to 1.18.4
- [#17528](https://github.com/influxdata/telegraf/pull/17528) `deps` Bump github.com/aws/aws-sdk-go-v2/service/cloudwatch from 1.48.0 to 1.48.2
- [#17536](https://github.com/influxdata/telegraf/pull/17536) `deps` Bump github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs from 1.56.0 to 1.57.0
- [#17524](https://github.com/influxdata/telegraf/pull/17524) `deps` Bump github.com/aws/aws-sdk-go-v2/service/dynamodb from 1.46.0 to 1.49.1
- [#17493](https://github.com/influxdata/telegraf/pull/17493) `deps` Bump github.com/aws/aws-sdk-go-v2/service/ec2 from 1.242.0 to 1.244.0
- [#17527](https://github.com/influxdata/telegraf/pull/17527) `deps` Bump github.com/aws/aws-sdk-go-v2/service/ec2 from 1.244.0 to 1.246.0
- [#17530](https://github.com/influxdata/telegraf/pull/17530) `deps` Bump github.com/aws/aws-sdk-go-v2/service/kinesis from 1.38.0 to 1.39.1
- [#17534](https://github.com/influxdata/telegraf/pull/17534) `deps` Bump github.com/aws/aws-sdk-go-v2/service/sts from 1.37.0 to 1.38.0
- [#17513](https://github.com/influxdata/telegraf/pull/17513) `deps` Bump github.com/aws/aws-sdk-go-v2/service/timestreamwrite from 1.34.0 to 1.34.2
- [#17514](https://github.com/influxdata/telegraf/pull/17514) `deps` Bump github.com/coreos/go-systemd/v22 from 22.5.0 to 22.6.0
- [#17563](https://github.com/influxdata/telegraf/pull/17563) `deps` Bump github.com/facebook/time from 0.0.0-20240626113945-18207c5d8ddc to 0.0.0-20250903103710-a5911c32cdb9
- [#17526](https://github.com/influxdata/telegraf/pull/17526) `deps` Bump github.com/gophercloud/gophercloud/v2 from 2.7.0 to 2.8.0
- [#17537](https://github.com/influxdata/telegraf/pull/17537) `deps` Bump github.com/microsoft/go-mssqldb from 1.9.2 to 1.9.3
- [#17490](https://github.com/influxdata/telegraf/pull/17490) `deps` Bump github.com/nats-io/nats-server/v2 from 2.11.7 to 2.11.8
- [#17523](https://github.com/influxdata/telegraf/pull/17523) `deps` Bump github.com/nats-io/nats.go from 1.44.0 to 1.45.0
- [#17492](https://github.com/influxdata/telegraf/pull/17492) `deps` Bump github.com/safchain/ethtool from 0.5.10 to 0.6.2
- [#17486](https://github.com/influxdata/telegraf/pull/17486) `deps` Bump github.com/snowflakedb/gosnowflake from 1.15.0 to 1.16.0
- [#17541](https://github.com/influxdata/telegraf/pull/17541) `deps` Bump github.com/tidwall/wal from 1.1.8 to 1.2.0
- [#17529](https://github.com/influxdata/telegraf/pull/17529) `deps` Bump github.com/vmware/govmomi from 0.51.0 to 0.52.0
- [#17496](https://github.com/influxdata/telegraf/pull/17496) `deps` Bump go.opentelemetry.io/collector/pdata from 1.36.1 to 1.38.0
- [#17533](https://github.com/influxdata/telegraf/pull/17533) `deps` Bump go.opentelemetry.io/collector/pdata from 1.38.0 to 1.39.0
- [#17516](https://github.com/influxdata/telegraf/pull/17516) `deps` Bump go.step.sm/crypto from 0.69.0 to 0.70.0
- [#17499](https://github.com/influxdata/telegraf/pull/17499) `deps` Bump golang.org/x/mod from 0.26.0 to 0.27.0
- [#17497](https://github.com/influxdata/telegraf/pull/17497) `deps` Bump golang.org/x/net from 0.42.0 to 0.43.0
- [#17487](https://github.com/influxdata/telegraf/pull/17487) `deps` Bump google.golang.org/api from 0.246.0 to 0.247.0
- [#17531](https://github.com/influxdata/telegraf/pull/17531) `deps` Bump google.golang.org/api from 0.247.0 to 0.248.0
- [#17520](https://github.com/influxdata/telegraf/pull/17520) `deps` Bump google.golang.org/grpc from 1.74.2 to 1.75.0
- [#17518](https://github.com/influxdata/telegraf/pull/17518) `deps` Bump google.golang.org/protobuf from 1.36.7 to 1.36.8
- [#17498](https://github.com/influxdata/telegraf/pull/17498) `deps` Bump k8s.io/client-go from 0.33.3 to 0.33.4
- [#17515](https://github.com/influxdata/telegraf/pull/17515) `deps` Bump super-linter/super-linter from 8.0.0 to 8.1.0

## v1.35.4 {date="2025-08-18"}

### Bugfixes

- [#17451](https://github.com/influxdata/telegraf/pull/17451) `agent` Update help message for `--test` CLI flag
- [#17413](https://github.com/influxdata/telegraf/pull/17413) `inputs.gnmi` Handle empty updates in gnmi notification response
- [#17445](https://github.com/influxdata/telegraf/pull/17445) `inputs.redfish` Log correct address on HTTP error

### Dependency Updates

- [#17454](https://github.com/influxdata/telegraf/pull/17454) `deps` Bump actions/checkout from 4 to 5
- [#17404](https://github.com/influxdata/telegraf/pull/17404) `deps` Bump cloud.google.com/go/storage from 1.55.0 to 1.56.0
- [#17428](https://github.com/influxdata/telegraf/pull/17428) `deps` Bump github.com/Azure/azure-sdk-for-go/sdk/azcore from 1.18.1 to 1.18.2
- [#17455](https://github.com/influxdata/telegraf/pull/17455) `deps` Bump github.com/Azure/azure-sdk-for-go/sdk/azidentity from 1.10.1 to 1.11.0
- [#17383](https://github.com/influxdata/telegraf/pull/17383) `deps` Bump github.com/ClickHouse/clickhouse-go/v2 from 2.37.2 to 2.39.0
- [#17435](https://github.com/influxdata/telegraf/pull/17435) `deps` Bump github.com/ClickHouse/clickhouse-go/v2 from 2.39.0 to 2.40.1
- [#17393](https://github.com/influxdata/telegraf/pull/17393) `deps` Bump github.com/apache/arrow-go/v18 from 18.3.1 to 18.4.0
- [#17439](https://github.com/influxdata/telegraf/pull/17439) `deps` Bump github.com/apache/inlong/inlong-sdk/dataproxy-sdk-twins/dataproxy-sdk-golang from 1.0.3 to 1.0.5
- [#17437](https://github.com/influxdata/telegraf/pull/17437) `deps` Bump github.com/aws/aws-sdk-go-v2 from 1.37.0 to 1.37.2
- [#17402](https://github.com/influxdata/telegraf/pull/17402) `deps` Bump github.com/aws/aws-sdk-go-v2/config from 1.29.17 to 1.30.0
- [#17458](https://github.com/influxdata/telegraf/pull/17458) `deps` Bump github.com/aws/aws-sdk-go-v2/config from 1.30.1 to 1.31.0
- [#17391](https://github.com/influxdata/telegraf/pull/17391) `deps` Bump github.com/aws/aws-sdk-go-v2/credentials from 1.17.70 to 1.18.0
- [#17436](https://github.com/influxdata/telegraf/pull/17436) `deps` Bump github.com/aws/aws-sdk-go-v2/credentials from 1.18.1 to 1.18.3
- [#17434](https://github.com/influxdata/telegraf/pull/17434) `deps` Bump github.com/aws/aws-sdk-go-v2/feature/ec2/imds from 1.18.0 to 1.18.2
- [#17461](https://github.com/influxdata/telegraf/pull/17461) `deps` Bump github.com/aws/aws-sdk-go-v2/service/cloudwatch from 1.45.3 to 1.48.0
- [#17392](https://github.com/influxdata/telegraf/pull/17392) `deps` Bump github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs from 1.51.0 to 1.54.0
- [#17440](https://github.com/influxdata/telegraf/pull/17440) `deps` Bump github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs from 1.54.0 to 1.55.0
- [#17473](https://github.com/influxdata/telegraf/pull/17473) `deps` Bump github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs from 1.55.0 to 1.56.0
- [#17431](https://github.com/influxdata/telegraf/pull/17431) `deps` Bump github.com/aws/aws-sdk-go-v2/service/dynamodb from 1.44.0 to 1.46.0
- [#17470](https://github.com/influxdata/telegraf/pull/17470) `deps` Bump github.com/aws/aws-sdk-go-v2/service/ec2 from 1.231.0 to 1.242.0
- [#17397](https://github.com/influxdata/telegraf/pull/17397) `deps` Bump github.com/aws/aws-sdk-go-v2/service/kinesis from 1.35.3 to 1.36.0
- [#17430](https://github.com/influxdata/telegraf/pull/17430) `deps` Bump github.com/aws/aws-sdk-go-v2/service/kinesis from 1.36.0 to 1.37.0
- [#17469](https://github.com/influxdata/telegraf/pull/17469) `deps` Bump github.com/aws/aws-sdk-go-v2/service/kinesis from 1.37.0 to 1.38.0
- [#17432](https://github.com/influxdata/telegraf/pull/17432) `deps` Bump github.com/aws/aws-sdk-go-v2/service/sts from 1.35.0 to 1.36.0
- [#17401](https://github.com/influxdata/telegraf/pull/17401) `deps` Bump github.com/aws/aws-sdk-go-v2/service/timestreamwrite from 1.31.2 to 1.32.0
- [#17421](https://github.com/influxdata/telegraf/pull/17421) `deps` Bump github.com/aws/aws-sdk-go-v2/service/timestreamwrite from 1.32.0 to 1.33.0
- [#17464](https://github.com/influxdata/telegraf/pull/17464) `deps` Bump github.com/aws/aws-sdk-go-v2/service/timestreamwrite from 1.33.0 to 1.34.0
- [#17457](https://github.com/influxdata/telegraf/pull/17457) `deps` Bump github.com/clarify/clarify-go from 0.4.0 to 0.4.1
- [#17407](https://github.com/influxdata/telegraf/pull/17407) `deps` Bump github.com/docker/docker from 28.3.2+incompatible to 28.3.3+incompatible
- [#17463](https://github.com/influxdata/telegraf/pull/17463) `deps` Bump github.com/docker/go-connections from 0.5.0 to 0.6.0
- [#17394](https://github.com/influxdata/telegraf/pull/17394) `deps` Bump github.com/golang-jwt/jwt/v5 from 5.2.2 to 5.2.3
- [#17423](https://github.com/influxdata/telegraf/pull/17423) `deps` Bump github.com/gopacket/gopacket from 1.3.1 to 1.4.0
- [#17399](https://github.com/influxdata/telegraf/pull/17399) `deps` Bump github.com/jedib0t/go-pretty/v6 from 6.6.7 to 6.6.8
- [#17422](https://github.com/influxdata/telegraf/pull/17422) `deps` Bump github.com/lxc/incus/v6 from 6.14.0 to 6.15.0
- [#17429](https://github.com/influxdata/telegraf/pull/17429) `deps` Bump github.com/miekg/dns from 1.1.67 to 1.1.68
- [#17433](https://github.com/influxdata/telegraf/pull/17433) `deps` Bump github.com/nats-io/nats-server/v2 from 2.11.6 to 2.11.7
- [#17426](https://github.com/influxdata/telegraf/pull/17426) `deps` Bump github.com/nats-io/nats.go from 1.43.0 to 1.44.0
- [#17456](https://github.com/influxdata/telegraf/pull/17456) `deps` Bump github.com/redis/go-redis/v9 from 9.11.0 to 9.12.1
- [#17420](https://github.com/influxdata/telegraf/pull/17420) `deps` Bump github.com/shirou/gopsutil/v4 from 4.25.6 to 4.25.7
- [#17388](https://github.com/influxdata/telegraf/pull/17388) `deps` Bump github.com/testcontainers/testcontainers-go/modules/azure from 0.37.0 to 0.38.0
- [#17382](https://github.com/influxdata/telegraf/pull/17382) `deps` Bump github.com/testcontainers/testcontainers-go/modules/kafka from 0.37.0 to 0.38.0
- [#17427](https://github.com/influxdata/telegraf/pull/17427) `deps` Bump github.com/yuin/goldmark from 1.7.12 to 1.7.13
- [#17386](https://github.com/influxdata/telegraf/pull/17386) `deps` Bump go.opentelemetry.io/collector/pdata from 1.36.0 to 1.36.1
- [#17425](https://github.com/influxdata/telegraf/pull/17425) `deps` Bump go.step.sm/crypto from 0.67.0 to 0.68.0
- [#17462](https://github.com/influxdata/telegraf/pull/17462) `deps` Bump go.step.sm/crypto from 0.68.0 to 0.69.0
- [#17460](https://github.com/influxdata/telegraf/pull/17460) `deps` Bump golang.org/x/crypto from 0.40.0 to 0.41.0
- [#17424](https://github.com/influxdata/telegraf/pull/17424) `deps` Bump google.golang.org/api from 0.243.0 to 0.244.0
- [#17459](https://github.com/influxdata/telegraf/pull/17459) `deps` Bump google.golang.org/api from 0.244.0 to 0.246.0
- [#17465](https://github.com/influxdata/telegraf/pull/17465) `deps` Bump google.golang.org/protobuf from 1.36.6 to 1.36.7
- [#17384](https://github.com/influxdata/telegraf/pull/17384) `deps` Bump k8s.io/apimachinery from 0.33.2 to 0.33.3
- [#17389](https://github.com/influxdata/telegraf/pull/17389) `deps` Bump k8s.io/client-go from 0.33.2 to 0.33.3
- [#17396](https://github.com/influxdata/telegraf/pull/17396) `deps` Bump modernc.org/sqlite from 1.38.0 to 1.38.1
- [#17385](https://github.com/influxdata/telegraf/pull/17385) `deps` Bump software.sslmate.com/src/go-pkcs12 from 0.5.0 to 0.6.0
- [#17390](https://github.com/influxdata/telegraf/pull/17390) `deps` Bump super-linter/super-linter from 7.4.0 to 8.0.0
- [#17448](https://github.com/influxdata/telegraf/pull/17448) `deps` Fix collectd dependency not resolving
- [#17410](https://github.com/influxdata/telegraf/pull/17410) `deps` Migrate from cloud.google.com/go/pubsub to v2

## v1.35.3 {date="2025-07-28"}

### Bug fixes

@ -141,9 +141,9 @@ telegraf:
  menu_category: other
  list_order: 6
  versions: [v1]
  latest: v1.35
  latest: v1.36
  latest_patches:
    v1: 1.35.3
    v1: 1.36.0
  ai_sample_questions:
    - How do I install and configure Telegraf?
    - How do I write a custom Telegraf plugin?
@ -171,7 +171,7 @@ kapacitor:
  versions: [v1]
  latest: v1.8
  latest_patches:
    v1: 1.8.0
    v1: 1.8.1
  ai_sample_questions:
    - How do I configure Kapacitor for InfluxDB v1?
    - How do I write a custom Kapacitor task?
@ -502,8 +502,8 @@ input:
      Docker containers.

      > [!NOTE]
      > Make sure Telegraf has sufficient permissions to access the
      > configured endpoint.
      > Make sure Telegraf has sufficient permissions to access the configured
      > endpoint.
    introduced: v0.1.9
    os_support: [freebsd, linux, macos, solaris, windows]
    tags: [containers]
@ -515,9 +515,9 @@ input:
      Docker containers.

      > [!NOTE]
      > This plugin works only for containers with the `local` or `json-file` or
      > `journald` logging driver. Please make sure Telegraf has sufficient
      > permissions to access the configured endpoint.
      > This plugin works only for containers with the `local`, `json-file`, or
      > `journald` logging driver. Make sure Telegraf has sufficient permissions
      > to access the configured endpoint.
    introduced: v1.12.0
    os_support: [freebsd, linux, macos, solaris, windows]
    tags: [containers, logging]
@ -1970,6 +1970,11 @@ input:
      This service plugin receives traces, metrics, logs and profiles from
      [OpenTelemetry](https://opentelemetry.io) clients and compatible agents
      via gRPC.

      > [!NOTE]
      > Telegraf v1.32 through v1.35 support the Profiles signal using the **v1
      > experimental API**. Telegraf v1.36+ supports the Profiles signal using the
      > **v1 development API**.
    introduced: v1.19.0
    os_support: [freebsd, linux, macos, solaris, windows]
    tags: [logging, messaging]
@ -2672,6 +2677,19 @@ input:
    introduced: v0.3.0
    os_support: [freebsd, linux, macos, solaris, windows]
    tags: [testing]
  - name: Turbostat
    id: turbostat
    description: |
      This service plugin monitors system performance using the
      [turbostat](https://github.com/torvalds/linux/tree/master/tools/power/x86/turbostat)
      command.

      > [!IMPORTANT]
      > This plugin requires the `turbostat` executable to be installed on the
      > system.
    introduced: v1.36.0
    os_support: [linux]
    tags: [hardware, system]
  - name: Twemproxy
    id: twemproxy
    description: |
@ -2835,7 +2853,8 @@ input:
    description: |
      This plugin provides information about
      [X.509](https://en.wikipedia.org/wiki/X.509) certificates accessible e.g.
      via local file, tcp, udp, https or smtp protocols.
      via local file, tcp, udp, https or smtp protocols and the Windows
      Certificate Store.

      > [!NOTE]
      > When using a UDP address as a certificate source, the server must
@ -2940,8 +2959,8 @@ output:
      Explorer](https://docs.microsoft.com/en-us/azure/data-explorer), [Azure
      Synapse Data
      Explorer](https://docs.microsoft.com/en-us/azure/synapse-analytics/data-explorer/data-explorer-overview),
      and [Real-Time Intelligence in
      Fabric](https://learn.microsoft.com/fabric/real-time-intelligence/overview)
      and [Real time analytics in
      Fabric](https://learn.microsoft.com/en-us/fabric/real-time-analytics/overview)
      services.

      Azure Data Explorer is a distributed, columnar store, purpose built for
@ -3299,9 +3318,17 @@ output:
  - name: Microsoft Fabric
    id: microsoft_fabric
    description: |
      This plugin writes metrics to [Real time analytics in
      Fabric](https://learn.microsoft.com/en-us/fabric/real-time-analytics/overview)
      services.
      This plugin writes metrics to [Fabric
      Eventhouse](https://learn.microsoft.com/fabric/real-time-intelligence/eventhouse)
      and [Fabric
      Eventstream](https://learn.microsoft.com/fabric/real-time-intelligence/event-streams/overview?tabs=enhancedcapabilities)
      artifacts of [Real-Time Intelligence in Microsoft
      Fabric](https://learn.microsoft.com/fabric/real-time-intelligence/overview).

      Real-Time Intelligence is a SaaS service in Microsoft Fabric that allows
      you to extract insights and visualize data in motion. It offers an
      end-to-end solution for event-driven scenarios, streaming data, and data
      logs.
    introduced: v1.35.0
    os_support: [freebsd, linux, macos, solaris, windows]
    tags: [datastore]
@ -4026,6 +4053,17 @@ processor:
    introduced: v1.15.0
    os_support: [freebsd, linux, macos, solaris, windows]
    tags: [annotation]
  - name: Round
    id: round
    description: |
      This plugin rounds numerical field values to the configured
      precision. This is particularly useful in combination with the [dedup
      processor](/telegraf/v1/plugins/#processor-dedup) to reduce the number of
      metrics sent to the output when a lower precision is required for the
      values.
    introduced: v1.36.0
    os_support: [freebsd, linux, macos, solaris, windows]
    tags: [transformation]
  - name: S2 Geo
    id: s2geo
    description: |
@ -4122,7 +4160,7 @@ processor:
  - name: Template
    id: template
    description: |
      This plugin applies templates to metrics for generatuing a new tag. The
      This plugin applies templates to metrics for generating a new tag. The
      primary use case of this plugin is to create a tag that can be used for
      dynamic routing to multiple output plugins or using an output specific
      routing option.