From 09b98c0195b054caf2f8c270c2d0ccde239c57d9 Mon Sep 17 00:00:00 2001 From: meelahme Date: Thu, 31 Jul 2025 17:39:43 -0700 Subject: [PATCH 01/31] docs: add dynamic date range filtering examples to WHERE clause --- content/shared/sql-reference/where.md | 58 +++++++++++++++++++++++++++ 1 file changed, 58 insertions(+) diff --git a/content/shared/sql-reference/where.md b/content/shared/sql-reference/where.md index 79ac64cee..bebdd7efc 100644 --- a/content/shared/sql-reference/where.md +++ b/content/shared/sql-reference/where.md @@ -96,6 +96,64 @@ less than or equal to `08-19-2019T13:00:00Z`. {{% /expand %}} {{< /expand-wrapper >}} +### Filter data by dynamic date ranges + +Use date and time functions to filter data by relative time periods that automatically update. + +#### Get data from yesterday + +```sql +SELECT * +FROM h2o_feet +WHERE "location" = 'santa_monica' + AND time >= DATE_TRUNC('day', NOW() - INTERVAL '1 day') + AND time < DATE_TRUNC('day', NOW()) +``` + +{{< expand-wrapper >}} +{{% expand "View query explanation" %}} + +This query filters data to include only records from the previous calendar day: + +- `NOW() - INTERVAL '1 day'` calculates yesterday's timestamp +- `DATE_TRUNC('day', ...)` truncates to the start of that day (00:00:00) +- The range spans from yesterday at 00:00:00 to today at 00:00:00 + +{{% /expand %}} +{{< /expand-wrapper >}} + +#### Get data from the last 24 hour + +```sql +SELECT * +FROM h2o_feet +WHERE time >= NOW() - INTERVAL '1 day' +``` + +{{< expand-wrapper >}} +{{% expand "View query explanation" %}} + +This query returns data from exactly 24 hours before the current time. Unlike the "yesterday" example, this creates a rolling 24-hour window that moves with the current time. + +{{% /expand %}} +{{< /expand-wrapper >}} + +#### Get data from the current week + +```sql +SELECT * +FROM h2o_feet +WHERE time >= DATE_TRUNC('week', NOW()) +``` + +{{< expand-wrapper >}} +{{% expand "View query explanation" %}} + +This query returns all data from the start of the current week (Monday at 00:00:00) to the current time. The DATE_TRUNC('week', NOW()) function truncates the current timestamp to the beginning of the week. 
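+
+To confirm the boundary the query uses, you can evaluate the truncation on its own. For example, the following query (the alias `week_start` is only illustrative) returns the timestamp the filter compares against:
+
+```sql
+-- Returns the most recent Monday at 00:00:00
+SELECT DATE_TRUNC('week', NOW()) AS week_start
+```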
+ +{{% /expand %}} +{{< /expand-wrapper >}} + ### Filter data using the OR operator ```sql From 5e9e0cc97bee9f63490e4b3b460565e379cfce44 Mon Sep 17 00:00:00 2001 From: Jameelah Mercer <36314199+MeelahMe@users.noreply.github.com> Date: Thu, 31 Jul 2025 17:42:01 -0700 Subject: [PATCH 02/31] Update content/shared/sql-reference/where.md Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> --- content/shared/sql-reference/where.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/shared/sql-reference/where.md b/content/shared/sql-reference/where.md index bebdd7efc..f1163eae0 100644 --- a/content/shared/sql-reference/where.md +++ b/content/shared/sql-reference/where.md @@ -122,7 +122,7 @@ This query filters data to include only records from the previous calendar day: {{% /expand %}} {{< /expand-wrapper >}} -#### Get data from the last 24 hour +#### Get data from the last 24 hours ```sql SELECT * From 77aed8468ffee13ec7f58ca1303a5a0eb8782a31 Mon Sep 17 00:00:00 2001 From: meelahme Date: Thu, 31 Jul 2025 22:26:53 -0700 Subject: [PATCH 03/31] docs: tested and updated examples --- content/shared/sql-reference/where.md | 28 +++++++++++++++++++++++---- 1 file changed, 24 insertions(+), 4 deletions(-) diff --git a/content/shared/sql-reference/where.md b/content/shared/sql-reference/where.md index bebdd7efc..40bbfb81e 100644 --- a/content/shared/sql-reference/where.md +++ b/content/shared/sql-reference/where.md @@ -111,7 +111,7 @@ WHERE "location" = 'santa_monica' ``` {{< expand-wrapper >}} -{{% expand "View query explanation" %}} +{{% expand "View example results" %}} This query filters data to include only records from the previous calendar day: @@ -119,10 +119,19 @@ This query filters data to include only records from the previous calendar day: - `DATE_TRUNC('day', ...)` truncates to the start of that day (00:00:00) - The range spans from yesterday at 00:00:00 to today at 00:00:00 +| level description | location | time | water_level | +| :---------------- | :----------- | :----------------------- | :---------- | +| below 3 feet | santa_monica | 2019-08-18T12:00:00.000Z | 2.533 | +| below 3 feet | santa_monica | 2019-08-18T12:06:00.000Z | 2.543 | +| below 3 feet | santa_monica | 2019-08-18T12:12:00.000Z | 2.385 | +| below 3 feet | santa_monica | 2019-08-18T12:18:00.000Z | 2.362 | +| below 3 feet | santa_monica | 2019-08-18T12:24:00.000Z | 2.405 | +| below 3 feet | santa_monica | 2019-08-18T12:30:00.000Z | 2.398 | + {{% /expand %}} {{< /expand-wrapper >}} -#### Get data from the last 24 hour +#### Get data from the last 24 hours ```sql SELECT * @@ -131,10 +140,19 @@ WHERE time >= NOW() - INTERVAL '1 day' ``` {{< expand-wrapper >}} -{{% expand "View query explanation" %}} +{{% expand "View example results" %}} This query returns data from exactly 24 hours before the current time. Unlike the "yesterday" example, this creates a rolling 24-hour window that moves with the current time. 
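+The length of the window follows the interval. For example, changing the filter to `time >= NOW() - INTERVAL '6 hours'` produces a rolling six-hour window using the same pattern.
+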
+| level description | location | time | water_level | +| :---------------- | :----------- | :----------------------- | :---------- | +| below 3 feet | santa_monica | 2019-08-18T18:00:00.000Z | 2.120 | +| below 3 feet | santa_monica | 2019-08-18T18:06:00.000Z | 2.028 | +| below 3 feet | santa_monica | 2019-08-18T18:12:00.000Z | 1.982 | +| below 3 feet | santa_monica | 2019-08-19T06:00:00.000Z | 1.825 | +| below 3 feet | santa_monica | 2019-08-19T06:06:00.000Z | 1.753 | +| below 3 feet | santa_monica | 2019-08-19T06:12:00.000Z | 1.691 | + {{% /expand %}} {{< /expand-wrapper >}} @@ -147,10 +165,12 @@ WHERE time >= DATE_TRUNC('week', NOW()) ``` {{< expand-wrapper >}} -{{% expand "View query explanation" %}} +{{% expand "View example results" %}} This query returns all data from the start of the current week (Monday at 00:00:00) to the current time. The DATE_TRUNC('week', NOW()) function truncates the current timestamp to the beginning of the week. +level description location timew ater_levelbelow 3 feetsanta_monica2019-08-12T00:00:00.000Z2.064below 3 feetsanta_monica2019-08-14T09:30:00.000Z2.116below 3 feetsanta_monica2019-08-16T15:45:00.000Z1.952below 3 feetsanta_monica2019-08-18T12:00:00.000Z2.533below 3 feetsanta_monica2019-08-18T18:00:00.000Z2.385below 3 feetsanta_monica2019-08-19T10:30:00.000Z1.691 + {{% /expand %}} {{< /expand-wrapper >}} From 77c43889e032ac13041f0d444d29bbb9ab276ab2 Mon Sep 17 00:00:00 2001 From: meelahme Date: Thu, 31 Jul 2025 22:28:33 -0700 Subject: [PATCH 04/31] docs: updated current week example --- content/shared/sql-reference/where.md | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/content/shared/sql-reference/where.md b/content/shared/sql-reference/where.md index 40bbfb81e..add7b35de 100644 --- a/content/shared/sql-reference/where.md +++ b/content/shared/sql-reference/where.md @@ -169,7 +169,14 @@ WHERE time >= DATE_TRUNC('week', NOW()) This query returns all data from the start of the current week (Monday at 00:00:00) to the current time. The DATE_TRUNC('week', NOW()) function truncates the current timestamp to the beginning of the week. 
-level description location timew ater_levelbelow 3 feetsanta_monica2019-08-12T00:00:00.000Z2.064below 3 feetsanta_monica2019-08-14T09:30:00.000Z2.116below 3 feetsanta_monica2019-08-16T15:45:00.000Z1.952below 3 feetsanta_monica2019-08-18T12:00:00.000Z2.533below 3 feetsanta_monica2019-08-18T18:00:00.000Z2.385below 3 feetsanta_monica2019-08-19T10:30:00.000Z1.691
+| level description | location | time | water_level |
+| :---------------- | :----------- | :----------------------- | :---------- |
+| below 3 feet | santa_monica | 2019-08-12T00:00:00.000Z | 2.064 |
+| below 3 feet | santa_monica | 2019-08-14T09:30:00.000Z | 2.116 |
+| below 3 feet | santa_monica | 2019-08-16T15:45:00.000Z | 1.952 |
+| below 3 feet | santa_monica | 2019-08-18T12:00:00.000Z | 2.533 |
+| below 3 feet | santa_monica | 2019-08-18T18:00:00.000Z | 2.385 |
+| below 3 feet | santa_monica | 2019-08-19T10:30:00.000Z | 1.691 |
 
 {{% /expand %}}
 {{< /expand-wrapper >}}

From c0aff8f47580c61e7f2a9390c4b4ec3361e35eb1 Mon Sep 17 00:00:00 2001
From: Jameelah Mercer <36314199+MeelahMe@users.noreply.github.com>
Date: Thu, 31 Jul 2025 22:31:52 -0700
Subject: [PATCH 05/31] Update content/shared/sql-reference/where.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---
 content/shared/sql-reference/where.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/shared/sql-reference/where.md b/content/shared/sql-reference/where.md
index add7b35de..f7dc25246 100644
--- a/content/shared/sql-reference/where.md
+++ b/content/shared/sql-reference/where.md
@@ -161,7 +161,7 @@ This query returns data from exactly 24 hours before the current time. Unlike th
 ```sql
 SELECT *
 FROM h2o_feet
-WHERE time >= DATE_TRUNC('week', NOW())
+WHERE time >= DATE_TRUNC('week', NOW()) AND location = 'santa_monica'
 ```
 
 {{< expand-wrapper >}}

From 45bdbb409cbca3fe1aba20e0927d166d9f3d5d50 Mon Sep 17 00:00:00 2001
From: Jameelah Mercer <36314199+MeelahMe@users.noreply.github.com>
Date: Thu, 31 Jul 2025 22:31:58 -0700
Subject: [PATCH 06/31] Update content/shared/sql-reference/where.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---
 content/shared/sql-reference/where.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/shared/sql-reference/where.md b/content/shared/sql-reference/where.md
index f7dc25246..b28e3aa97 100644
--- a/content/shared/sql-reference/where.md
+++ b/content/shared/sql-reference/where.md
@@ -136,7 +136,7 @@ This query filters data to include only records from the previous calendar day:
 ```sql
 SELECT *
 FROM h2o_feet
-WHERE time >= NOW() - INTERVAL '1 day'
+WHERE time >= NOW() - INTERVAL '1 day' AND location = 'santa_monica'
 ```
 
 {{< expand-wrapper >}}

From 632b99fafcab5043c9fe9f2edc630bf7274f4458 Mon Sep 17 00:00:00 2001
From: karel rehor
Date: Mon, 8 Sep 2025 15:10:40 +0200
Subject: [PATCH 07/31] chore: update release notes and data for kapacitor-1.8.1

---
 .../v1/reference/about_the_project/release-notes.md | 7 +++++++
 data/products.yml                                   | 2 +-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/content/kapacitor/v1/reference/about_the_project/release-notes.md b/content/kapacitor/v1/reference/about_the_project/release-notes.md
index 2ae546029..f9ca94aa4 100644
--- a/content/kapacitor/v1/reference/about_the_project/release-notes.md
+++ b/content/kapacitor/v1/reference/about_the_project/release-notes.md
@@ -9,6 +9,13 @@ aliases:
   - /kapacitor/v1/about_the_project/releasenotes-changelog/
 ---
 
+## v1.8.1 {date="2025-09-08"}
+
+### Dependency updates
+
+1. 
Upgrade golang.org/x/oauth2 from 0.23.0 to 0.27.0 +1. Upgrade Go to 1.24.6 + ## v1.8.0 {date="2025-06-26"} > [!Warning] diff --git a/data/products.yml b/data/products.yml index 9d9403904..307aea9cc 100644 --- a/data/products.yml +++ b/data/products.yml @@ -171,7 +171,7 @@ kapacitor: versions: [v1] latest: v1.8 latest_patches: - v1: 1.8.0 + v1: 1.8.1 ai_sample_questions: - How do I configure Kapacitor for InfluxDB v1? - How do I write a custom Kapacitor task? From 93eb70d3777f48dd29628123edc7f5fac025a12b Mon Sep 17 00:00:00 2001 From: karel rehor Date: Mon, 8 Sep 2025 16:26:48 +0200 Subject: [PATCH 08/31] chore: switch list from ordered to unordered. --- .../kapacitor/v1/reference/about_the_project/release-notes.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/kapacitor/v1/reference/about_the_project/release-notes.md b/content/kapacitor/v1/reference/about_the_project/release-notes.md index f9ca94aa4..ac71c9586 100644 --- a/content/kapacitor/v1/reference/about_the_project/release-notes.md +++ b/content/kapacitor/v1/reference/about_the_project/release-notes.md @@ -13,8 +13,8 @@ aliases: ### Dependency updates -1. Upgrade golang.org/x/oauth2 from 0.23.0 to 0.27.0 -1. Upgrade Go to 1.24.6 +- Upgrade golang.org/x/oauth2 from 0.23.0 to 0.27.0 +- Upgrade Go to 1.24.6 ## v1.8.0 {date="2025-06-26"} From be2974cea2749d45fbae92f18d03cb65d6d4e02f Mon Sep 17 00:00:00 2001 From: Sven Rebhan Date: Mon, 8 Sep 2025 23:53:16 +0200 Subject: [PATCH 09/31] Add Telegraf v1.35.4 release notes --- content/telegraf/v1/release-notes.md | 67 ++++++++++++++++++++++++++++ 1 file changed, 67 insertions(+) diff --git a/content/telegraf/v1/release-notes.md b/content/telegraf/v1/release-notes.md index b387c589c..2b4da71a1 100644 --- a/content/telegraf/v1/release-notes.md +++ b/content/telegraf/v1/release-notes.md @@ -11,6 +11,73 @@ menu: weight: 60 --- +## v1.35.4 {date="2025-08-18"} + +### Bugfixes + +- [#17451](https://github.com/influxdata/telegraf/pull/17451) `agent` Update help message for CLI flag --test +- [#17413](https://github.com/influxdata/telegraf/pull/17413) `inputs.gnmi` Handle empty updates in gnmi notification response +- [#17445](https://github.com/influxdata/telegraf/pull/17445) `inputs.redfish` Log correct address on HTTP error + +### Dependency Updates + +- [#17454](https://github.com/influxdata/telegraf/pull/17454) `deps` Bump actions/checkout from 4 to 5 +- [#17404](https://github.com/influxdata/telegraf/pull/17404) `deps` Bump cloud.google.com/go/storage from 1.55.0 to 1.56.0 +- [#17428](https://github.com/influxdata/telegraf/pull/17428) `deps` Bump github.com/Azure/azure-sdk-for-go/sdk/azcore from 1.18.1 to 1.18.2 +- [#17455](https://github.com/influxdata/telegraf/pull/17455) `deps` Bump github.com/Azure/azure-sdk-for-go/sdk/azidentity from 1.10.1 to 1.11.0 +- [#17383](https://github.com/influxdata/telegraf/pull/17383) `deps` Bump github.com/ClickHouse/clickhouse-go/v2 from 2.37.2 to 2.39.0 +- [#17435](https://github.com/influxdata/telegraf/pull/17435) `deps` Bump github.com/ClickHouse/clickhouse-go/v2 from 2.39.0 to 2.40.1 +- [#17393](https://github.com/influxdata/telegraf/pull/17393) `deps` Bump github.com/apache/arrow-go/v18 from 18.3.1 to 18.4.0 +- [#17439](https://github.com/influxdata/telegraf/pull/17439) `deps` Bump github.com/apache/inlong/inlong-sdk/dataproxy-sdk-twins/dataproxy-sdk-golang from 1.0.3 to 1.0.5 +- [#17437](https://github.com/influxdata/telegraf/pull/17437) `deps` Bump github.com/aws/aws-sdk-go-v2 from 1.37.0 to 1.37.2 +- 
[#17402](https://github.com/influxdata/telegraf/pull/17402) `deps` Bump github.com/aws/aws-sdk-go-v2/config from 1.29.17 to 1.30.0 +- [#17458](https://github.com/influxdata/telegraf/pull/17458) `deps` Bump github.com/aws/aws-sdk-go-v2/config from 1.30.1 to 1.31.0 +- [#17391](https://github.com/influxdata/telegraf/pull/17391) `deps` Bump github.com/aws/aws-sdk-go-v2/credentials from 1.17.70 to 1.18.0 +- [#17436](https://github.com/influxdata/telegraf/pull/17436) `deps` Bump github.com/aws/aws-sdk-go-v2/credentials from 1.18.1 to 1.18.3 +- [#17434](https://github.com/influxdata/telegraf/pull/17434) `deps` Bump github.com/aws/aws-sdk-go-v2/feature/ec2/imds from 1.18.0 to 1.18.2 +- [#17461](https://github.com/influxdata/telegraf/pull/17461) `deps` Bump github.com/aws/aws-sdk-go-v2/service/cloudwatch from 1.45.3 to 1.48.0 +- [#17392](https://github.com/influxdata/telegraf/pull/17392) `deps` Bump github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs from 1.51.0 to 1.54.0 +- [#17440](https://github.com/influxdata/telegraf/pull/17440) `deps` Bump github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs from 1.54.0 to 1.55.0 +- [#17473](https://github.com/influxdata/telegraf/pull/17473) `deps` Bump github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs from 1.55.0 to 1.56.0 +- [#17431](https://github.com/influxdata/telegraf/pull/17431) `deps` Bump github.com/aws/aws-sdk-go-v2/service/dynamodb from 1.44.0 to 1.46.0 +- [#17470](https://github.com/influxdata/telegraf/pull/17470) `deps` Bump github.com/aws/aws-sdk-go-v2/service/ec2 from 1.231.0 to 1.242.0 +- [#17397](https://github.com/influxdata/telegraf/pull/17397) `deps` Bump github.com/aws/aws-sdk-go-v2/service/kinesis from 1.35.3 to 1.36.0 +- [#17430](https://github.com/influxdata/telegraf/pull/17430) `deps` Bump github.com/aws/aws-sdk-go-v2/service/kinesis from 1.36.0 to 1.37.0 +- [#17469](https://github.com/influxdata/telegraf/pull/17469) `deps` Bump github.com/aws/aws-sdk-go-v2/service/kinesis from 1.37.0 to 1.38.0 +- [#17432](https://github.com/influxdata/telegraf/pull/17432) `deps` Bump github.com/aws/aws-sdk-go-v2/service/sts from 1.35.0 to 1.36.0 +- [#17401](https://github.com/influxdata/telegraf/pull/17401) `deps` Bump github.com/aws/aws-sdk-go-v2/service/timestreamwrite from 1.31.2 to 1.32.0 +- [#17421](https://github.com/influxdata/telegraf/pull/17421) `deps` Bump github.com/aws/aws-sdk-go-v2/service/timestreamwrite from 1.32.0 to 1.33.0 +- [#17464](https://github.com/influxdata/telegraf/pull/17464) `deps` Bump github.com/aws/aws-sdk-go-v2/service/timestreamwrite from 1.33.0 to 1.34.0 +- [#17457](https://github.com/influxdata/telegraf/pull/17457) `deps` Bump github.com/clarify/clarify-go from 0.4.0 to 0.4.1 +- [#17407](https://github.com/influxdata/telegraf/pull/17407) `deps` Bump github.com/docker/docker from 28.3.2+incompatible to 28.3.3+incompatible +- [#17463](https://github.com/influxdata/telegraf/pull/17463) `deps` Bump github.com/docker/go-connections from 0.5.0 to 0.6.0 +- [#17394](https://github.com/influxdata/telegraf/pull/17394) `deps` Bump github.com/golang-jwt/jwt/v5 from 5.2.2 to 5.2.3 +- [#17423](https://github.com/influxdata/telegraf/pull/17423) `deps` Bump github.com/gopacket/gopacket from 1.3.1 to 1.4.0 +- [#17399](https://github.com/influxdata/telegraf/pull/17399) `deps` Bump github.com/jedib0t/go-pretty/v6 from 6.6.7 to 6.6.8 +- [#17422](https://github.com/influxdata/telegraf/pull/17422) `deps` Bump github.com/lxc/incus/v6 from 6.14.0 to 6.15.0 +- [#17429](https://github.com/influxdata/telegraf/pull/17429) `deps` Bump 
github.com/miekg/dns from 1.1.67 to 1.1.68 +- [#17433](https://github.com/influxdata/telegraf/pull/17433) `deps` Bump github.com/nats-io/nats-server/v2 from 2.11.6 to 2.11.7 +- [#17426](https://github.com/influxdata/telegraf/pull/17426) `deps` Bump github.com/nats-io/nats.go from 1.43.0 to 1.44.0 +- [#17456](https://github.com/influxdata/telegraf/pull/17456) `deps` Bump github.com/redis/go-redis/v9 from 9.11.0 to 9.12.1 +- [#17420](https://github.com/influxdata/telegraf/pull/17420) `deps` Bump github.com/shirou/gopsutil/v4 from 4.25.6 to 4.25.7 +- [#17388](https://github.com/influxdata/telegraf/pull/17388) `deps` Bump github.com/testcontainers/testcontainers-go/modules/azure from 0.37.0 to 0.38.0 +- [#17382](https://github.com/influxdata/telegraf/pull/17382) `deps` Bump github.com/testcontainers/testcontainers-go/modules/kafka from 0.37.0 to 0.38.0 +- [#17427](https://github.com/influxdata/telegraf/pull/17427) `deps` Bump github.com/yuin/goldmark from 1.7.12 to 1.7.13 +- [#17386](https://github.com/influxdata/telegraf/pull/17386) `deps` Bump go.opentelemetry.io/collector/pdata from 1.36.0 to 1.36.1 +- [#17425](https://github.com/influxdata/telegraf/pull/17425) `deps` Bump go.step.sm/crypto from 0.67.0 to 0.68.0 +- [#17462](https://github.com/influxdata/telegraf/pull/17462) `deps` Bump go.step.sm/crypto from 0.68.0 to 0.69.0 +- [#17460](https://github.com/influxdata/telegraf/pull/17460) `deps` Bump golang.org/x/crypto from 0.40.0 to 0.41.0 +- [#17424](https://github.com/influxdata/telegraf/pull/17424) `deps` Bump google.golang.org/api from 0.243.0 to 0.244.0 +- [#17459](https://github.com/influxdata/telegraf/pull/17459) `deps` Bump google.golang.org/api from 0.244.0 to 0.246.0 +- [#17465](https://github.com/influxdata/telegraf/pull/17465) `deps` Bump google.golang.org/protobuf from 1.36.6 to 1.36.7 +- [#17384](https://github.com/influxdata/telegraf/pull/17384) `deps` Bump k8s.io/apimachinery from 0.33.2 to 0.33.3 +- [#17389](https://github.com/influxdata/telegraf/pull/17389) `deps` Bump k8s.io/client-go from 0.33.2 to 0.33.3 +- [#17396](https://github.com/influxdata/telegraf/pull/17396) `deps` Bump modernc.org/sqlite from 1.38.0 to 1.38.1 +- [#17385](https://github.com/influxdata/telegraf/pull/17385) `deps` Bump software.sslmate.com/src/go-pkcs12 from 0.5.0 to 0.6.0 +- [#17390](https://github.com/influxdata/telegraf/pull/17390) `deps` Bump super-linter/super-linter from 7.4.0 to 8.0.0 +- [#17448](https://github.com/influxdata/telegraf/pull/17448) `deps` Fix collectd dependency not resolving +- [#17410](https://github.com/influxdata/telegraf/pull/17410) `deps` Migrate from cloud.google.com/go/pubsub to v2 + ## v1.35.3 {date="2025-07-28"} ### Bug fixes From e94bd563fd50b72a6581cb1bc66e98591407cc2b Mon Sep 17 00:00:00 2001 From: Jason Stirnaman Date: Mon, 8 Sep 2025 23:42:16 -0500 Subject: [PATCH 10/31] Update content/telegraf/v1/release-notes.md --- content/telegraf/v1/release-notes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/telegraf/v1/release-notes.md b/content/telegraf/v1/release-notes.md index 2b4da71a1..0fd5dba61 100644 --- a/content/telegraf/v1/release-notes.md +++ b/content/telegraf/v1/release-notes.md @@ -15,7 +15,7 @@ menu: ### Bugfixes -- [#17451](https://github.com/influxdata/telegraf/pull/17451) `agent` Update help message for CLI flag --test +- [#17451](https://github.com/influxdata/telegraf/pull/17451) `agent` Update help message for `--test` CLI flag - [#17413](https://github.com/influxdata/telegraf/pull/17413) `inputs.gnmi` Handle empty 
updates in gnmi notification response - [#17445](https://github.com/influxdata/telegraf/pull/17445) `inputs.redfish` Log correct address on HTTP error From 767dcaeafbbfae4bdd5ea03dd54d1a3a3015d6cc Mon Sep 17 00:00:00 2001 From: Jason Stirnaman Date: Mon, 8 Sep 2025 23:45:52 -0500 Subject: [PATCH 11/31] Update products.yml for Telegraph 1.35.4 --- data/products.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/data/products.yml b/data/products.yml index 307aea9cc..ec0014361 100644 --- a/data/products.yml +++ b/data/products.yml @@ -143,7 +143,7 @@ telegraf: versions: [v1] latest: v1.35 latest_patches: - v1: 1.35.3 + v1: 1.35.4 ai_sample_questions: - How do I install and configure Telegraf? - How do I write a custom Telegraf plugin? From 74900a0cc995cb3ae455639f6b798c0d4b32f068 Mon Sep 17 00:00:00 2001 From: Jason Stirnaman Date: Tue, 9 Sep 2025 11:16:39 -0500 Subject: [PATCH 12/31] docs: Add .mcp.json config file for Docs MCP --- .mcp.json | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) create mode 100644 .mcp.json diff --git a/.mcp.json b/.mcp.json new file mode 100644 index 000000000..f600dfa67 --- /dev/null +++ b/.mcp.json @@ -0,0 +1,20 @@ +{ + "$schema": "https://raw.githubusercontent.com/modelcontextprotocol/modelcontextprotocol/refs/heads/main/schema/2025-06-18/schema.json", + "description": "InfluxData documentation assistance via MCP server - Node.js execution", + "mcpServers": { + "influxdata": { + "comment": "Use Node to run Docs MCP. To install and setup, see https://github.com/influxdata/docs-mcp-server", + "type": "stdio", + "command": "node", + "args": [ + "${DOCS_MCP_SERVER_PATH}/dist/index.js" + ], + "env": { + "DOCS_API_KEY_FILE": "${DOCS_API_KEY_FILE:-$HOME/.env.docs-kapa-api-key}", + "DOCS_MODE": "external-only", + "MCP_LOG_LEVEL": "${MCP_LOG_LEVEL:-info}", + "NODE_ENV": "${NODE_ENV:-production}" + } + } + } +} \ No newline at end of file From ddb36d1a39a2eff469021447f51398f5095257a2 Mon Sep 17 00:00:00 2001 From: Jason Stirnaman Date: Mon, 8 Sep 2025 11:30:58 -0500 Subject: [PATCH 13/31] Closes DAR #535 - Adds Clustered reference/internals/durability/\ - Migrates Cloud Dedicated durability page to shared for Dedicated and Clustered.\ - Adds diagram (also used in storage-engine) to illustrate data flow. - Fixes typo in Serverless --- .../reference/internals/durability.md | 73 +------------- .../reference/internals/durability.md | 2 +- .../reference/internals/durability.md | 17 ++++ .../durability.md | 98 +++++++++++++++++++ 4 files changed, 120 insertions(+), 70 deletions(-) create mode 100644 content/influxdb3/clustered/reference/internals/durability.md create mode 100644 content/shared/v3-distributed-internals-reference/durability.md diff --git a/content/influxdb3/cloud-dedicated/reference/internals/durability.md b/content/influxdb3/cloud-dedicated/reference/internals/durability.md index f90b0de44..7c50ef019 100644 --- a/content/influxdb3/cloud-dedicated/reference/internals/durability.md +++ b/content/influxdb3/cloud-dedicated/reference/internals/durability.md @@ -1,7 +1,8 @@ --- title: InfluxDB Cloud Dedicated data durability description: > - InfluxDB Cloud Dedicated replicates all time series data in the storage tier across + Data written to {{% product-name %}} progresses through multiple stages to ensure durability, optimized performance and storage, and efficient querying. Configuration options at each stage affect system behavior, balancing reliability and resource usage. 
+ {{% product-name %}} replicates all time series data in the storage tier across multiple availability zones within a cloud region and automatically creates backups that can be used to restore data in the event of a node failure or data corruption. weight: 102 @@ -13,73 +14,7 @@ influxdb3/cloud-dedicated/tags: [backups, internals] related: - https://docs.aws.amazon.com/AmazonS3/latest/userguide/DataDurability.html, AWS S3 Data Durabililty - /influxdb3/cloud-dedicated/reference/internals/storage-engine/ +source: /shared/v3-distributed-internals-reference/durability.md --- -{{< product-name >}} writes data to multiple Write-Ahead-Log (WAL) files on local -storage and retains WALs until the data is persisted to Parquet files in object storage. -Parquet data files in object storage are redundantly stored on multiple devices -across a minimum of three availability zones in a cloud region. - -## Data storage - -In {{< product-name >}}, all measurements are stored in -[Apache Parquet](https://parquet.apache.org/) files that represent a -point-in-time snapshot of the data. The Parquet files are immutable and are -never replaced nor modified. Parquet files are stored in object storage and -referenced in the [Catalog](/influxdb3/cloud-dedicated/reference/internals/storage-engine/#catalog), which InfluxDB uses to find the appropriate Parquet files for a particular set of data. - -### Data deletion - -When data is deleted or expires (reaches the database's [retention period](/influxdb3/cloud-dedicated/reference/internals/data-retention/#database-retention-period)), InfluxDB performs the following steps: - -1. Marks the associated Parquet files as deleted in the catalog. -2. Filters out data marked for deletion from all queries. -3. Retains Parquet files marked for deletion in object storage for approximately 30 days after the youngest data in the file ages out of retention. - -## Data ingest - -When data is written to {{< product-name >}}, InfluxDB first writes the data to a -Write-Ahead-Log (WAL) on locally attached storage on the [Ingester](/influxdb3/cloud-dedicated/reference/internals/storage-engine/#ingester) node before -acknowledging the write request. After acknowledging the write request, the -Ingester holds the data in memory temporarily and then writes the contents of -the WAL to Parquet files in object storage and updates the [Catalog](/influxdb3/cloud-dedicated/reference/internals/storage-engine/#catalog) to -reference the newly created Parquet files. If an Ingester node is gracefully shut -down (for example, during a new software deployment), it flushes the contents of -the WAL to the Parquet files before shutting down. - -## Backups - -{{< product-name >}} implements the following data backup strategies: - -- **Backup of WAL file**: The WAL file is written on locally attached storage. - If an ingester process fails, the new ingester simply reads the WAL file on - startup and continues normal operation. WAL files are maintained until their - contents have been written to the Parquet files in object storage. - For added protection, ingesters can be configured for write replication, where - each measurement is written to two different WAL files before acknowledging - the write. - -- **Backup of Parquet files**: Parquet files are stored in object storage where - they are redundantly stored on multiple devices across a minimum of three - availability zones in a cloud region. 
Parquet files associated with each
-  database are kept in object storage for the duration of database retention period
-  plus an additional time period (approximately 30 days).
-
-- **Backup of catalog**: InfluxData keeps a transaction log of all recent updates
-  to the [InfluxDB catalog](/influxdb3/cloud-dedicated/reference/internals/storage-engine/#catalog) and generates a daily backup of
-  the catalog. Backups are preserved for at least 30 days in object storage across a minimum
-  of three availability zones.
-
-## Recovery
-
-InfluxData can perform the following recovery operations:
-
-- **Recovery after ingester failure**: If an ingester fails, a new ingester is
-  started up and reads from the WAL file for the recently ingested data.
-
-- **Recovery of Parquet files**: {{< product-name >}} uses the provided object
-  storage data durability to recover Parquet files.
-
-- **Recovery of the catalog**: InfluxData can restore the [Catalog](/influxdb3/cloud-dedicated/reference/internals/storage-engine/#catalog) to
-  the most recent daily backup and then reapply any transactions
-  that occurred since the interruption.
+
diff --git a/content/influxdb3/cloud-serverless/reference/internals/durability.md b/content/influxdb3/cloud-serverless/reference/internals/durability.md
index d43d4bb8a..903fa3132 100644
--- a/content/influxdb3/cloud-serverless/reference/internals/durability.md
+++ b/content/influxdb3/cloud-serverless/reference/internals/durability.md
@@ -27,7 +27,7 @@ point-in-time snapshot of the data. The Parquet files are immutable and are
 never replaced nor modified.
 Parquet files are stored in object storage.
 
-The _InfluxDB catalog_ is a relational, PostreSQL-compatible database that
+The _InfluxDB catalog_ is a relational, PostgreSQL-compatible database that
 contains references to all Parquet files in object storage and is used as an
 index to find the appropriate Parquet files for a particular set of data.
 
diff --git a/content/influxdb3/clustered/reference/internals/durability.md b/content/influxdb3/clustered/reference/internals/durability.md
new file mode 100644
index 000000000..d9e674451
--- /dev/null
+++ b/content/influxdb3/clustered/reference/internals/durability.md
@@ -0,0 +1,17 @@
+---
+title: InfluxDB Clustered data durability
+description: >
+  Data written to {{% product-name %}} progresses through multiple stages to ensure durability, optimized performance and storage, and efficient querying. Configuration options at each stage affect system behavior, balancing reliability and resource usage.
+weight: 102
+menu:
+  influxdb3_clustered:
+    name: Data durability
+    parent: InfluxDB internals
+influxdb3/clustered/tags: [backups, internals]
+related:
+  - https://docs.aws.amazon.com/AmazonS3/latest/userguide/DataDurability.html, AWS S3 Data Durability
+  - /influxdb3/clustered/reference/internals/storage-engine/
+source: /shared/v3-distributed-internals-reference/durability.md
+---
+
+
\ No newline at end of file
diff --git a/content/shared/v3-distributed-internals-reference/durability.md b/content/shared/v3-distributed-internals-reference/durability.md
new file mode 100644
index 000000000..70bd3c7c2
--- /dev/null
+++ b/content/shared/v3-distributed-internals-reference/durability.md
@@ -0,0 +1,98 @@
+## How data flows through {{% product-name %}}
+
+When data is written to {{% product-name %}}, it progresses through multiple stages to ensure durability, optimized performance and storage, and efficient querying. 
Configuration options at each stage affect system behavior, balancing reliability and resource usage. + +{{< svg "/static/svgs/v3-storage-architecture.svg" >}} + +Figure: Write request, response, and ingest flow for {{% product-name %}} + +- [How data flows through {{% product-name %}}](#how-data-flows-through--product-name-) +- [Data ingest](#data-ingest) + 1. [Write validation](#write-validation) + 2. [Write-ahead log (WAL) persistence](#write-ahead-log-wal-persistence) +- [Data storage](#data-storage) +- [Data deletion](#data-deletion) +- [Backups](#backups) +- [Recovery](#recovery) + +## Data ingest + +1. [Write validation and memory buffer](#write-validation-and-memory-buffer) +2. [Write-ahead log (WAL) persistence](#write-ahead-log-wal-persistence) + +### Write validation + +The [Router](/influxdb3/version/reference/internals/storage-engine/#router) validates incoming data to prevent malformed or unsupported data from entering the system. +{{% product-name %}} writes accepted data to multiple write-ahead-log (WAL) files on local +storage on the [Ingester](/influxdb3/version/reference/internals/storage-engine/#ingester) node before acknowledging the write request. +The Ingester holds the data in memory to ensure leading edge data is available for querying. + +### Write-ahead log (WAL) persistence + +InfluxDB writes yet-to-be persisted data to multiple Write-Ahead-Log (WAL) files on local +storage on the [Ingester](/influxdb3/version/reference/internals/storage-engine/#ingester) node before acknowledging the write request. +{{% hide-in "clustered" %}} +Parquet data files in object storage are redundantly stored on multiple devices +across a minimum of three availability zones in a cloud region. +{{% /hide-in %}} + +The Ingester then writes the contents of +the WAL to Parquet files in object storage and updates the [Catalog](/influxdb3/version/reference/internals/storage-engine/#catalog) to +reference the newly created Parquet files. + +If an Ingester node is gracefully shut down (for example, during a new software deployment), it flushes the contents of the WAL to the Parquet files before shutting down. +{{% product-name %}} retains WALs until the data is persisted to Parquet files in object storage. + +## Data storage + +In {{< product-name >}}, all measurements are stored in +[Apache Parquet](https://parquet.apache.org/) files that represent a +point-in-time snapshot of the data. The Parquet files are immutable and are +never replaced nor modified. Parquet files are stored in object storage and +referenced in the [Catalog](/influxdb3/version/reference/internals/storage-engine/#catalog), which InfluxDB uses to find the appropriate Parquet files for a particular set of data. + +## Data deletion + +When data is deleted or expires (reaches the database's [retention period](/influxdb3/version/reference/internals/data-retention/#database-retention-period)), InfluxDB performs the following steps: + +1. Marks the associated Parquet files as deleted in the catalog. +2. Filters out data marked for deletion from all queries. +{{% hide-in "clustered" %}}3. Retains Parquet files marked for deletion in object storage for approximately 30 days after the youngest data in the file ages out of retention.{{% /hide-in %}} + +## Backups + +{{< product-name >}} implements the following data backup strategies: + +- **Backup of WAL file**: The WAL file is written on locally attached storage. 
+ If an ingester process fails, the new ingester simply reads the WAL file on + startup and continues normal operation. WAL files are maintained until their + contents have been written to the Parquet files in object storage. + For added protection, ingesters can be configured for write replication, where + each measurement is written to two different WAL files before acknowledging + the write. + +- **Backup of Parquet files**: Parquet files are stored in object storage {{% hide-in "clustered" %}}where + they are redundantly stored on multiple devices across a minimum of three + availability zones in a cloud region. Parquet files associated with each + database are kept in object storage for the duration of database retention period + plus an additional time period (approximately 30 days).{{% /hide-in %}} + +- **Backup of catalog**: InfluxData keeps a transaction log of all recent updates + to the [InfluxDB catalog](/influxdb3/version/reference/internals/storage-engine/#catalog) and generates a daily backup of + the catalog. {{% hide-in "clustered" %}}Backups are preserved for at least 30 days in object storage across a minimum of three availability zones.{{% /hide-in %}} + +{{% hide-in "clustered" %}} +## Recovery + +InfluxData can perform the following recovery operations: + +- **Recovery after ingester failure**: If an ingester fails, a new ingester is + started up and reads from the WAL file for the recently ingested data. + +- **Recovery of Parquet files**: {{< product-name >}} uses the provided object + storage data durability to recover Parquet files. + +- **Recovery of the catalog**: InfluxData can restore the [Catalog](/influxdb3/version/reference/internals/storage-engine/#catalog) to + the most recent daily backup and then reapply any transactions + that occurred since the interruption. +{{% /hide-in %}} From c4974d4a3d6072ed0384d09f0d96c613a27c9389 Mon Sep 17 00:00:00 2001 From: Jason Stirnaman Date: Mon, 8 Sep 2025 12:01:55 -0500 Subject: [PATCH 14/31] fix(v3): DAR-535 resolve duplication --- .../v3-distributed-internals-reference/durability.md | 9 +++------ 1 file changed, 3 insertions(+), 6 deletions(-) diff --git a/content/shared/v3-distributed-internals-reference/durability.md b/content/shared/v3-distributed-internals-reference/durability.md index 70bd3c7c2..3d8fbaf9a 100644 --- a/content/shared/v3-distributed-internals-reference/durability.md +++ b/content/shared/v3-distributed-internals-reference/durability.md @@ -29,17 +29,14 @@ The Ingester holds the data in memory to ensure leading edge data is available f ### Write-ahead log (WAL) persistence -InfluxDB writes yet-to-be persisted data to multiple Write-Ahead-Log (WAL) files on local -storage on the [Ingester](/influxdb3/version/reference/internals/storage-engine/#ingester) node before acknowledging the write request. +The Ingester persists the contents of +the WAL to Parquet files in object storage and updates the [Catalog](/influxdb3/version/reference/internals/storage-engine/#catalog) to +reference the newly created Parquet files. {{% hide-in "clustered" %}} Parquet data files in object storage are redundantly stored on multiple devices across a minimum of three availability zones in a cloud region. {{% /hide-in %}} -The Ingester then writes the contents of -the WAL to Parquet files in object storage and updates the [Catalog](/influxdb3/version/reference/internals/storage-engine/#catalog) to -reference the newly created Parquet files. 
- If an Ingester node is gracefully shut down (for example, during a new software deployment), it flushes the contents of the WAL to the Parquet files before shutting down. {{% product-name %}} retains WALs until the data is persisted to Parquet files in object storage. From 85b89e353e219f54d4432e6fac609a686fc64dbf Mon Sep 17 00:00:00 2001 From: Jason Stirnaman Date: Mon, 8 Sep 2025 12:03:20 -0500 Subject: [PATCH 15/31] fix(v3): remove top-level TOC link, hide recovery in Clustered --- .../v3-distributed-internals-reference/durability.md | 7 ++----- 1 file changed, 2 insertions(+), 5 deletions(-) diff --git a/content/shared/v3-distributed-internals-reference/durability.md b/content/shared/v3-distributed-internals-reference/durability.md index 3d8fbaf9a..dd0bf903b 100644 --- a/content/shared/v3-distributed-internals-reference/durability.md +++ b/content/shared/v3-distributed-internals-reference/durability.md @@ -6,21 +6,18 @@ When data is written to {{% product-name %}}, it progresses through multiple sta Figure: Write request, response, and ingest flow for {{% product-name %}} -- [How data flows through {{% product-name %}}](#how-data-flows-through--product-name-) - [Data ingest](#data-ingest) - 1. [Write validation](#write-validation) - 2. [Write-ahead log (WAL) persistence](#write-ahead-log-wal-persistence) - [Data storage](#data-storage) - [Data deletion](#data-deletion) - [Backups](#backups) -- [Recovery](#recovery) +{{% hide-in "clustered" %}}- [Recovery](#recovery){{% /hide-in %}} ## Data ingest 1. [Write validation and memory buffer](#write-validation-and-memory-buffer) 2. [Write-ahead log (WAL) persistence](#write-ahead-log-wal-persistence) -### Write validation +### Write validation and memory buffer The [Router](/influxdb3/version/reference/internals/storage-engine/#router) validates incoming data to prevent malformed or unsupported data from entering the system. {{% product-name %}} writes accepted data to multiple write-ahead-log (WAL) files on local From 2bc9e1736d92b01022239d5457c5fcc42efbcba6 Mon Sep 17 00:00:00 2001 From: Jason Stirnaman Date: Tue, 9 Sep 2025 12:15:29 -0500 Subject: [PATCH 16/31] fix(v3): Apply code review suggestions\ Co-authored-by: reidkauffman@users.noreply.github.com --- .../durability.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/content/shared/v3-distributed-internals-reference/durability.md b/content/shared/v3-distributed-internals-reference/durability.md index dd0bf903b..4036dcc07 100644 --- a/content/shared/v3-distributed-internals-reference/durability.md +++ b/content/shared/v3-distributed-internals-reference/durability.md @@ -20,22 +20,17 @@ When data is written to {{% product-name %}}, it progresses through multiple sta ### Write validation and memory buffer The [Router](/influxdb3/version/reference/internals/storage-engine/#router) validates incoming data to prevent malformed or unsupported data from entering the system. -{{% product-name %}} writes accepted data to multiple write-ahead-log (WAL) files on local -storage on the [Ingester](/influxdb3/version/reference/internals/storage-engine/#ingester) node before acknowledging the write request. -The Ingester holds the data in memory to ensure leading edge data is available for querying. +{{% product-name %}} writes accepted data to multiple write-ahead log (WAL) files on [Ingester](/influxdb3/version/reference/internals/storage-engine/#ingester) pods' local storage (default is 2 for redundancy) before acknowledging the write request. 
+The Ingester holds the data in memory to ensure leading-edge data is available for querying. ### Write-ahead log (WAL) persistence -The Ingester persists the contents of +Ingesters persist the contents of the WAL to Parquet files in object storage and updates the [Catalog](/influxdb3/version/reference/internals/storage-engine/#catalog) to reference the newly created Parquet files. -{{% hide-in "clustered" %}} -Parquet data files in object storage are redundantly stored on multiple devices -across a minimum of three availability zones in a cloud region. -{{% /hide-in %}} +{{% product-name %}} retains WALs until the data is persisted. If an Ingester node is gracefully shut down (for example, during a new software deployment), it flushes the contents of the WAL to the Parquet files before shutting down. -{{% product-name %}} retains WALs until the data is persisted to Parquet files in object storage. ## Data storage @@ -45,6 +40,11 @@ point-in-time snapshot of the data. The Parquet files are immutable and are never replaced nor modified. Parquet files are stored in object storage and referenced in the [Catalog](/influxdb3/version/reference/internals/storage-engine/#catalog), which InfluxDB uses to find the appropriate Parquet files for a particular set of data. +{{% hide-in "clustered" %}} +Parquet data files in object storage are redundantly stored on multiple devices +across a minimum of three availability zones in a cloud region. +{{% /hide-in %}} + ## Data deletion When data is deleted or expires (reaches the database's [retention period](/influxdb3/version/reference/internals/data-retention/#database-retention-period)), InfluxDB performs the following steps: From bba78ea40b24be778236a84150db2be51a62c605 Mon Sep 17 00:00:00 2001 From: Jason Stirnaman Date: Mon, 8 Sep 2025 17:04:56 -0500 Subject: [PATCH 17/31] chore(v1): Cautions, risks, and mitigations for using truncate-shards with future data - Closes influxdata/DAR/issues/534 - Contact Support for assistance - Add risks and technical details to truncate-shard command - Add cautions to rebalance guide - Add planning guidance for future data in schema_and_data_layout --- .../manage/clusters/rebalance.md | 35 ++++++++++++++--- .../v1/concepts/schema_and_data_layout.md | 33 ++++++++++++++++ .../v1/tools/influxd-ctl/truncate-shards.md | 39 +++++++++++++++++++ 3 files changed, 102 insertions(+), 5 deletions(-) diff --git a/content/enterprise_influxdb/v1/administration/manage/clusters/rebalance.md b/content/enterprise_influxdb/v1/administration/manage/clusters/rebalance.md index db0b48532..b6652e831 100644 --- a/content/enterprise_influxdb/v1/administration/manage/clusters/rebalance.md +++ b/content/enterprise_influxdb/v1/administration/manage/clusters/rebalance.md @@ -40,11 +40,20 @@ cluster, and they use the [`influxd-ctl` tool](/enterprise_influxdb/v1/tools/influxd-ctl/) available on all meta nodes. -{{% warn %}} -Before you begin, stop writing historical data to InfluxDB. -Historical data have timestamps that occur at anytime in the past. -Performing a rebalance while writing historical data can lead to data loss. -{{% /warn %}} +> [!Warning] +> #### Stop writing data before rebalancing +> +> Before you begin, stop writing historical data to InfluxDB. +> Historical data have timestamps that occur at anytime in the past. +> Performing a rebalance while writing historical data can lead to data loss. 
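+
+Before rebalancing, you can check whether a database already contains future-timestamped points. For example, the following InfluxQL query (the regular expression matches all measurements) returns at most one point with a timestamp later than `now()`:
+
+```sql
+-- Returns a point only if future-timestamped data exists
+SELECT * FROM /.*/ WHERE time > now() LIMIT 1
+```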
+ +> [!Caution] +> #### Risks of rebalancing with future data +> +> Truncating shards that contain data with future timestamps (such as forecast or prediction data) +> can lead to overlapping shards and data duplication. +> For more information, see [`truncate-shards` and future data](/enterprise_influxdb/v1/tools/influxd-ctl/truncate-shards/#understand-the-risks-with-future-data) +> or [contact InfluxData support](https://support.influxdata.com). ## Rebalance Procedure 1: Rebalance a cluster to create space @@ -67,6 +76,14 @@ Hot shards are shards that are currently receiving writes. Performing any action on a hot shard can lead to data inconsistency within the cluster which requires manual intervention from the user. +> [!Caution] +> #### Risks of rebalancing with future data +> +> Truncating shards that contain data with future timestamps (such as forecast or prediction data) +> can lead to overlapping shards and data duplication. +> For more information, see [`truncate-shards` and future data](/enterprise_influxdb/v1/tools/influxd-ctl/truncate-shards/#understand-the-risks-with-future-data) +> or [contact InfluxData support](https://support.influxdata.com). + To prevent data inconsistency, truncate hot shards before moving any shards across data nodes. The command below creates a new hot shard which is automatically distributed @@ -298,6 +315,14 @@ Hot shards are shards that are currently receiving writes. Performing any action on a hot shard can lead to data inconsistency within the cluster which requires manual intervention from the user. +> [!Caution] +> #### Risks of rebalancing with future data +> +> Truncating shards that contain data with future timestamps (such as forecast or prediction data) +> can lead to overlapping shards and data duplication. +> For more information, see [`truncate-shards` and future data](/enterprise_influxdb/v1/tools/influxd-ctl/truncate-shards/#understand-the-risks-with-future-data) +> or [contact InfluxData support](https://support.influxdata.com). + To prevent data inconsistency, truncate hot shards before copying any shards to the new data node. The command below creates a new hot shard which is automatically distributed diff --git a/content/enterprise_influxdb/v1/concepts/schema_and_data_layout.md b/content/enterprise_influxdb/v1/concepts/schema_and_data_layout.md index c60e8ccef..febf8c2dc 100644 --- a/content/enterprise_influxdb/v1/concepts/schema_and_data_layout.md +++ b/content/enterprise_influxdb/v1/concepts/schema_and_data_layout.md @@ -16,6 +16,7 @@ We recommend the following design guidelines for most use cases: - [Where to store data (tag or field)](#where-to-store-data-tag-or-field) - [Avoid too many series](#avoid-too-many-series) - [Use recommended naming conventions](#use-recommended-naming-conventions) + - [Writing data with future timestamps](#writing-data-with-future-timestamps) - [Shard Group Duration Management](#shard-group-duration-management) ## Where to store data (tag or field) @@ -209,6 +210,38 @@ from(bucket:"/") > SELECT mean("temp") FROM "weather_sensor" WHERE region = 'north' ``` +## Writing data with future timestamps + +When designing schemas for applications that write data with future timestamps--such as forecast data from machine learning models, predictions, or scheduled events--consider the following implications for InfluxDB Enterprise v1 cluster operations and data integrity. + +### Understanding future data behavior + +InfluxDB Enterprise v1 creates shards based on time ranges. 
+When you write data with future timestamps, InfluxDB creates shards that cover future time periods.
+
+> [!Caution]
+> #### Risks of rebalancing with future data
+>
+> Truncating shards that contain data with future timestamps (such as forecast or prediction data)
+> can lead to overlapping shards and data duplication.
+> For more information, see [`truncate-shards` and future data](/enterprise_influxdb/v1/tools/influxd-ctl/truncate-shards/#understand-the-risks-with-future-data)
+> or [contact InfluxData support](https://support.influxdata.com).
+
+### Use separate databases for future data
+
+When planning for data that contains future timestamps, consider isolating it in dedicated databases to:
+
+- Minimize impact on real-time data operations
+- Allow targeted maintenance operations on current vs. future data
+- Simplify backup and recovery strategies for different data types
+
+```sql
+-- Example: Separate databases for different data types
+CREATE DATABASE "realtime_metrics"
+CREATE DATABASE "ml_forecasts"
+CREATE DATABASE "scheduled_predictions"
+```
+
 ## Shard group duration management
 
 ### Shard group duration overview
diff --git a/content/enterprise_influxdb/v1/tools/influxd-ctl/truncate-shards.md b/content/enterprise_influxdb/v1/tools/influxd-ctl/truncate-shards.md
index f7dffef50..fce401ac2 100644
--- a/content/enterprise_influxdb/v1/tools/influxd-ctl/truncate-shards.md
+++ b/content/enterprise_influxdb/v1/tools/influxd-ctl/truncate-shards.md
@@ -17,6 +17,14 @@ The `influxd-ctl truncate-shards` command truncates all shards that are currentl
 being written to (also known as "hot" shards) and creates new shards to write
 new data to.
 
+> [!Caution]
+> #### Overlapping shards with forecast and future data
+>
+> Running `truncate-shards` on shards containing future timestamps can create
+> overlapping shards with duplicate data points.
+>
+> [Understand the risks with future data](#understand-the-risks-with-future-data).
+
 ## Usage
 
 ```sh
@@ -40,3 +48,34 @@ _Also see [`influxd-ctl` global flags](/enterprise_influxdb/v1/tools/influxd-ctl
 ```bash
 influxd-ctl truncate-shards -delay 3m
 ```
+
+## Understand the risks with future data
+
+> [!Important]
+> If you need to rebalance shards that contain future data, contact [InfluxData support](https://www.influxdata.com/contact/) for assistance.
+
+When you write data points with timestamps in the future (for example, forecast data from machine learning models),
+the `truncate-shards` command behaves differently and can cause data duplication issues.
+
+### How truncate-shards normally works
+
+For shards containing current data:
+1. The command creates an artificial stop point in the shard at the truncation timestamp
+2. Creates a new shard starting from the truncation point
+3. Example: A one-week shard (Sunday to Saturday) becomes:
+   - Shard A: Sunday to truncation point (Wednesday 2pm)
+   - Shard B: Truncation point (Wednesday 2pm) to Saturday
+
+This works correctly because the meta nodes understand the boundaries and route queries appropriately.
+
+### The problem with future data
+
+For shards containing future timestamps:
+1. The truncation doesn't cleanly split the shard at a point in time
+2. Instead, it creates overlapping shards that cover the same time period
+3. 
Example: If you're writing September forecast data in August: + - Original shard: September 1-7 + - After truncation: + - Shard A: September 1-7 (with data up to truncation) + - Shard B: September 1-7 (for new data after truncation) + - **Result**: Duplicate data points for the same timestamps From de115edf8978fb33a064f95798f7150aff1ae08f Mon Sep 17 00:00:00 2001 From: Jason Stirnaman Date: Tue, 9 Sep 2025 15:58:43 -0500 Subject: [PATCH 18/31] fix(v1): clarify truncate-shards operates on hot shards --- .../manage/clusters/rebalance.md | 40 +++++++++---------- 1 file changed, 18 insertions(+), 22 deletions(-) diff --git a/content/enterprise_influxdb/v1/administration/manage/clusters/rebalance.md b/content/enterprise_influxdb/v1/administration/manage/clusters/rebalance.md index b6652e831..9715aed7d 100644 --- a/content/enterprise_influxdb/v1/administration/manage/clusters/rebalance.md +++ b/content/enterprise_influxdb/v1/administration/manage/clusters/rebalance.md @@ -70,9 +70,9 @@ data node to expand the total disk capacity of the cluster. In the next steps, you will safely move shards from one of the two original data nodes to the new data node. -### Step 1: Truncate Hot Shards +### Step 1: Truncate hot shards -Hot shards are shards that are currently receiving writes. +Hot shards are shards that currently receive writes. Performing any action on a hot shard can lead to data inconsistency within the cluster which requires manual intervention from the user. @@ -84,12 +84,9 @@ cluster which requires manual intervention from the user. > For more information, see [`truncate-shards` and future data](/enterprise_influxdb/v1/tools/influxd-ctl/truncate-shards/#understand-the-risks-with-future-data) > or [contact InfluxData support](https://support.influxdata.com). -To prevent data inconsistency, truncate hot shards before moving any shards +To prevent data inconsistency, truncate shards before moving any shards across data nodes. -The command below creates a new hot shard which is automatically distributed -across all data nodes in the cluster, and the system writes all new points to -that shard. -All previous writes are now stored in cold shards. +The following command truncates all hot shards and creates new shards to write data to: ``` influxd-ctl truncate-shards @@ -101,10 +98,11 @@ The expected output of this command is: Truncated shards. ``` -Once you truncate the shards, you can work on redistributing the cold shards -without the threat of data inconsistency in the cluster. -Any hot or new shards are now evenly distributed across the cluster and require -no further intervention. +New shards are automatically distributed across all data nodes, and InfluxDB writes new points to them. +Previous writes are stored in cold shards. + +After truncating shards, you can redistribute cold shards without data inconsistency. +Hot and new shards are evenly distributed and require no further intervention. ### Step 2: Identify Cold Shards @@ -309,9 +307,9 @@ name duration shardGroupDuration replicaN default autogen 0s 1h0m0s 3 #👍 true ``` -### Step 2: Truncate Hot Shards +### Step 2: Truncate hot shards -Hot shards are shards that are currently receiving writes. +Hot shards are shards that currently receive writes. Performing any action on a hot shard can lead to data inconsistency within the cluster which requires manual intervention from the user. @@ -323,12 +321,9 @@ cluster which requires manual intervention from the user. 
> For more information, see [`truncate-shards` and future data](/enterprise_influxdb/v1/tools/influxd-ctl/truncate-shards/#understand-the-risks-with-future-data) > or [contact InfluxData support](https://support.influxdata.com). -To prevent data inconsistency, truncate hot shards before copying any shards +To prevent data inconsistency, truncate shards before copying any shards to the new data node. -The command below creates a new hot shard which is automatically distributed -across the three data nodes in the cluster, and the system writes all new points -to that shard. -All previous writes are now stored in cold shards. +The following command truncates all hot shards and creates new shards to write data to: ``` influxd-ctl truncate-shards @@ -340,10 +335,11 @@ The expected output of this command is: Truncated shards. ``` -Once you truncate the shards, you can work on distributing the cold shards -without the threat of data inconsistency in the cluster. -Any hot or new shards are now automatically distributed across the cluster and -require no further intervention. +New shards are automatically distributed across all data nodes, and InfluxDB writes new points to them. +Previous writes are stored in cold shards. + +After truncating shards, you can redistribute cold shards without data inconsistency. +Hot and new shards are evenly distributed and require no further intervention. ### Step 3: Identify Cold Shards From 6fcd870555fad43b76aa3fbfb9fd303b3aefcaef Mon Sep 17 00:00:00 2001 From: Jason Stirnaman Date: Tue, 9 Sep 2025 14:12:52 -0500 Subject: [PATCH 19/31] fix(clustered): Add known bug and clustered-auth override to release notes --- .../reference/release-notes/clustered.md | 37 +++++++++++++++++++ 1 file changed, 37 insertions(+) diff --git a/content/influxdb3/clustered/reference/release-notes/clustered.md b/content/influxdb3/clustered/reference/release-notes/clustered.md index 3b382e8c2..54629f761 100644 --- a/content/influxdb3/clustered/reference/release-notes/clustered.md +++ b/content/influxdb3/clustered/reference/release-notes/clustered.md @@ -390,6 +390,43 @@ spec: # ...[remaining configuration] ``` +### `clustered-auth` service routes to removed `gateway` service instead of `core` service + +If you have the `clusteredAuth` feature flag enabled, the `clustered-auth` service will be deployed. +The service currently routes to the recently removed `gateway` service instead of the new `core` service. + +#### Temporary workaround for service routing + +Until you upgrade to release `20250805-1812019`, you will need to override the `clustered-auth` +service to point to the new `core` service by adding the following `env` overrides to your `AppInstance`: + +```yaml +apiVersion: kubecfg.dev/v1alpha1 +kind: AppInstance +metadata: + name: influxdb + namespace: influxdb +spec: + package: + image: us-docker.pkg.dev/influxdb2-artifacts/clustered/influxdb:20241024-1354148 + apiVersion: influxdata.com/v1alpha1 + spec: + components: + querier: + template: + containers: + clustered-auth: + env: + AUTHZ_TOKEN_SVC_ADDRESS: 'http://core:8091/' + router: + template: + containers: + clustered-auth: + env: + AUTHZ_TOKEN_SVC_ADDRESS: 'http://core:8091/' +# ...remaining configuration... 
+``` + ### Highlights #### AppInstance image override bug fix From f0117ed3996797dbcfc44049b3e8e7f82a68b9d0 Mon Sep 17 00:00:00 2001 From: Jason Stirnaman Date: Tue, 9 Sep 2025 14:22:15 -0500 Subject: [PATCH 20/31] Update content/influxdb3/clustered/reference/release-notes/clustered.md Co-authored-by: Scott Anderson --- .../influxdb3/clustered/reference/release-notes/clustered.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/influxdb3/clustered/reference/release-notes/clustered.md b/content/influxdb3/clustered/reference/release-notes/clustered.md index 54629f761..dc4a7d405 100644 --- a/content/influxdb3/clustered/reference/release-notes/clustered.md +++ b/content/influxdb3/clustered/reference/release-notes/clustered.md @@ -397,7 +397,7 @@ The service currently routes to the recently removed `gateway` service instead o #### Temporary workaround for service routing -Until you upgrade to release `20250805-1812019`, you will need to override the `clustered-auth` +Until you upgrade to release `20250805-1812019`, you need to override the `clustered-auth` service to point to the new `core` service by adding the following `env` overrides to your `AppInstance`: ```yaml From c748819f6db2344127802049a6c2bfa62ea14c4b Mon Sep 17 00:00:00 2001 From: Jason Stirnaman Date: Tue, 9 Sep 2025 16:03:52 -0500 Subject: [PATCH 21/31] fix(clustered): Google Cloud IAM link --- .../influxdb3/clustered/reference/release-notes/clustered.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/influxdb3/clustered/reference/release-notes/clustered.md b/content/influxdb3/clustered/reference/release-notes/clustered.md index dc4a7d405..0899575f0 100644 --- a/content/influxdb3/clustered/reference/release-notes/clustered.md +++ b/content/influxdb3/clustered/reference/release-notes/clustered.md @@ -1278,7 +1278,7 @@ We now expose a `google` object within the `objectStore` configuration, which enables support for using Google Cloud's GCS as a backing object store for IOx components. This supports both [GKE workload identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) -and [IAM Service Account](https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform#step_3_create_service_account_credentials) +and [IAM Service Account](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#kubernetes-sa-to-iam) authentication methods. #### Support for bypassing identity provider configuration for database/token management From fed9e49f045558aee2173ae3690ebd0441b359e4 Mon Sep 17 00:00:00 2001 From: mdevy-influxdata <53542066+mdevy-influxdata@users.noreply.github.com> Date: Tue, 9 Sep 2025 17:21:38 -0700 Subject: [PATCH 22/31] Update grafana.md typo --- .../cloud-serverless/process-data/visualize/grafana.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/influxdb3/cloud-serverless/process-data/visualize/grafana.md b/content/influxdb3/cloud-serverless/process-data/visualize/grafana.md index 959570f23..afeed288d 100644 --- a/content/influxdb3/cloud-serverless/process-data/visualize/grafana.md +++ b/content/influxdb3/cloud-serverless/process-data/visualize/grafana.md @@ -131,7 +131,7 @@ When creating an InfluxDB data source that uses InfluxQL to query data: 2. Under **InfluxDB Details**: - **Database**: Provide a database name to query. - Use the database name that is mapped to your InfluxBD bucket. + Use the database name that is mapped to your InfluxDB bucket. 
- **User**: Provide an arbitrary string. _This credential is ignored when querying {{% product-name %}}, but it cannot be empty._ - **Password**: Provide an [API token](/influxdb3/cloud-serverless/admin/tokens/) From e8350a39950a6258267ff35e65ede1f9d2674074 Mon Sep 17 00:00:00 2001 From: Jason Stirnaman Date: Wed, 10 Sep 2025 15:52:54 -0500 Subject: [PATCH 23/31] chore(v3-dist): Consolidate to shared Grafana guide - Fix broken link to fragment --- .../process-data/visualize/grafana.md | 198 +---------------- .../process-data/visualize/grafana.md | 208 +---------------- .../process-data/visualize/grafana.md | 194 +--------------- .../v3-process-data/visualize/grafana.md | 209 ++++++++++++++++++ 4 files changed, 217 insertions(+), 592 deletions(-) create mode 100644 content/shared/v3-process-data/visualize/grafana.md diff --git a/content/influxdb3/cloud-dedicated/process-data/visualize/grafana.md b/content/influxdb3/cloud-dedicated/process-data/visualize/grafana.md index e5b168e5e..4fc0cb47e 100644 --- a/content/influxdb3/cloud-dedicated/process-data/visualize/grafana.md +++ b/content/influxdb3/cloud-dedicated/process-data/visualize/grafana.md @@ -9,7 +9,7 @@ menu: influxdb3_cloud_dedicated: name: Use Grafana parent: Visualize data -influxdb3/cloud-dedicated/tags: [Flight client, query, visualization] +influxdb3/cloud-dedicated/tags: [query, visualization, Grafana] aliases: - /influxdb3/cloud-dedicated/query-data/tools/grafana/ - /influxdb3/cloud-dedicated/query-data/sql/execute-queries/grafana/ @@ -20,199 +20,7 @@ alt_links: cloud: /influxdb/cloud/tools/grafana/ core: /influxdb3/core/visualize-data/grafana/ enterprise: /influxdb3/enterprise/visualize-data/grafana/ +source: /content/shared/v3-process-data/visualize/grafana.md --- -Use [Grafana](https://grafana.com/) to query and visualize data stored in -{{% product-name %}}. - -> [Grafana] enables you to query, visualize, alert on, and explore your metrics, -> logs, and traces wherever they are stored. -> [Grafana] provides you with tools to turn your time-series database (TSDB) -> data into insightful graphs and visualizations. -> -> {{% cite %}}-- [Grafana documentation](https://grafana.com/docs/grafana/latest/introduction/){{% /cite %}} - - - -- [Install Grafana or login to Grafana Cloud](#install-grafana-or-login-to-grafana-cloud) -- [InfluxDB data source](#influxdb-data-source) -- [Create an InfluxDB data source](#create-an-influxdb-data-source) -- [Query InfluxDB with Grafana](#query-influxdb-with-grafana) -- [Build visualizations with Grafana](#build-visualizations-with-grafana) - - - -## Install Grafana or login to Grafana Cloud - -If using the open source version of **Grafana**, follow the -[Grafana installation instructions](https://grafana.com/docs/grafana/latest/setup-grafana/installation/) -to install Grafana for your operating system. -If using **Grafana Cloud**, login to your Grafana Cloud instance. - -## InfluxDB data source - -The InfluxDB data source plugin is included in the Grafana core distribution. -Use the plugin to query and visualize data stored in {{< product-name >}} with -both InfluxQL and SQL. - -> [!Note] -> #### Grafana 10.3+ -> -> The instructions below are for **Grafana 10.3+** which introduced the newest -> version of the InfluxDB core plugin. -> The updated plugin includes **SQL support** for InfluxDB 3-based products such -> as {{< product-name >}}. - -## Create an InfluxDB data source - -1. In your Grafana user interface (UI), navigate to **Data Sources**. -2. Click **Add new data source**. -3. 
Search for and select the **InfluxDB** plugin. -4. Provide a name for your data source. -5. Under **Query Language**, select either **SQL** or **InfluxQL**: - -{{< tabs-wrapper >}} -{{% tabs %}} -[SQL](#) -[InfluxQL](#) -{{% /tabs %}} -{{% tab-content %}} - - -When creating an InfluxDB data source that uses SQL to query data: - -1. Under **HTTP**: - - - **URL**: Provide your {{% product-name omit=" Clustered" %}} cluster URL - using the HTTPS protocol: - - ``` - https://{{< influxdb/host >}} - ``` - -2. Under **InfluxDB Details**: - - - **Database**: Provide a default database name to query. - - **Token**: Provide a [database token](/influxdb3/cloud-dedicated/admin/tokens/#database-tokens) - with read access to the databases you want to query. - -3. Click **Save & test**. - - {{< img-hd src="/img/influxdb3/cloud-dedicated-grafana-influxdb-data-source-sql.png" alt="Grafana InfluxDB data source for InfluxDB Cloud Serverless that uses SQL" />}} - - -{{% /tab-content %}} -{{% tab-content %}} - - -When creating an InfluxDB data source that uses InfluxQL to query data: - -1. Under **HTTP**: - - - **URL**: Provide your {{% product-name %}} cluster URL - using the HTTPS protocol: - - ``` - https://{{< influxdb/host >}} - ``` - -2. Under **InfluxDB Details**: - - - **Database**: Provide a default database name to query. - - **User**: Provide an arbitrary string. - _This credential is ignored when querying {{% product-name %}}, but it cannot be empty._ - - **Password**: Provide a [database token](/influxdb3/cloud-dedicated/admin/tokens/#database-tokens) - with read access to the databases you want to query. - - **HTTP Method**: Choose one of the available HTTP request methods to use when querying data: - - - **POST** ({{< req text="Recommended" >}}) - - **GET** - -3. Click **Save & test**. - - {{< img-hd src="/img/influxdb3/cloud-dedicated-grafana-influxdb-data-source-influxql.png" alt="Grafana InfluxDB data source for InfluxDB Cloud Serverless using InfluxQL" />}} - - -{{% /tab-content %}} -{{< /tabs-wrapper >}} - -## Query InfluxDB with Grafana - -After you [configure and save an InfluxDB datasource](#create-a-datasource), -use Grafana to build, run, and inspect queries against your InfluxDB database. - -{{< tabs-wrapper >}} -{{% tabs %}} -[SQL](#) -[InfluxQL](#) -{{% /tabs %}} -{{% tab-content %}} - - -> [!Note] -> {{% sql/sql-schema-intro %}} -> To learn more, see [Query Data](/influxdb3/cloud-dedicated/query-data/sql/). - -1. Click **Explore**. -2. In the dropdown, select the saved InfluxDB data source to query. -3. Use the SQL query form to build your query: - - **Table**: Select the measurement to query. - - **Column**: Select one or more fields and tags to return as columns in query results. - - With SQL, select the `time` column to include timestamps with the data. - Grafana relies on the `time` column to correctly graph time series data. - - - _**Optional:**_ Toggle **filter** to generate **WHERE** clause statements. - - **WHERE**: Configure condition expressions to include in the `WHERE` clause. - - - _**Optional:**_ Toggle **group** to generate **GROUP BY** clause statements. - - - **GROUP BY**: Select columns to group by. - If you include an aggregation function in the **SELECT** list, - you must group by one or more of the queried columns. - SQL returns the aggregation for each group. - - - {{< req text="Recommended" color="green" >}}: - Toggle **order** to generate **ORDER BY** clause statements. - - - **ORDER BY**: Select columns to sort by. 
- You can sort by time and multiple fields or tags. - To sort in descending order, select **DESC**. - -4. {{< req text="Recommended" color="green" >}}: Change format to **Time series**. - - Use the **Format** dropdown to change the format of the query results. - For example, to visualize the query results as a time series, select **Time series**. - -5. Click **Run query** to execute the query. - - -{{% /tab-content %}} -{{% tab-content %}} - - -1. Click **Explore**. -2. In the dropdown, select the **InfluxDB** data source that you want to query. -3. Use the InfluxQL query form to build your query: - - **FROM**: Select the measurement that you want to query. - - **WHERE**: To filter the query results, enter a conditional expression. - - **SELECT**: Select fields to query and an aggregate function to apply to each. - The aggregate function is applied to each time interval defined in the - `GROUP BY` clause. - - **GROUP BY**: By default, Grafana groups data by time to downsample results - and improve query performance. - You can also add other tags to group by. -4. Click **Run query** to execute the query. - - -{{% /tab-content %}} -{{< /tabs-wrapper >}} - -{{< youtube "rSsouoNsNDs" >}} - -To learn about query management and inspection in Grafana, see the -[Grafana Explore documentation](https://grafana.com/docs/grafana/latest/explore/). - -## Build visualizations with Grafana - -For a comprehensive walk-through of creating visualizations with -Grafana, see the [Grafana documentation](https://grafana.com/docs/grafana/latest/). + diff --git a/content/influxdb3/cloud-serverless/process-data/visualize/grafana.md b/content/influxdb3/cloud-serverless/process-data/visualize/grafana.md index afeed288d..b2ca5c372 100644 --- a/content/influxdb3/cloud-serverless/process-data/visualize/grafana.md +++ b/content/influxdb3/cloud-serverless/process-data/visualize/grafana.md @@ -21,211 +21,7 @@ alt_links: cloud: /influxdb/cloud/tools/grafana/ core: /influxdb3/core/visualize-data/grafana/ enterprise: /influxdb3/enterprise/visualize-data/grafana/ +source: /content/shared/v3-process-data/visualize/grafana.md --- -Use [Grafana](https://grafana.com/) to query and visualize data stored in -{{% product-name %}}. - -> [Grafana] enables you to query, visualize, alert on, and explore your metrics, -> logs, and traces wherever they are stored. -> [Grafana] provides you with tools to turn your time-series database (TSDB) -> data into insightful graphs and visualizations. -> -> {{% cite %}}-- [Grafana documentation](https://grafana.com/docs/grafana/latest/introduction/){{% /cite %}} - - - -- [Install Grafana or login to Grafana Cloud](#install-grafana-or-login-to-grafana-cloud) -- [InfluxDB data source](#influxdb-data-source) -- [Create an InfluxDB data source](#create-an-influxdb-data-source) -- [Query InfluxDB with Grafana](#query-influxdb-with-grafana) -- [Build visualizations with Grafana](#build-visualizations-with-grafana) - - - -## Install Grafana or login to Grafana Cloud - -If using the open source version of **Grafana**, follow the -[Grafana installation instructions](https://grafana.com/docs/grafana/latest/setup-grafana/installation/) -to install Grafana for your operating system. -If using **Grafana Cloud**, login to your Grafana Cloud instance. - -## InfluxDB data source - -The InfluxDB data source plugin is included in the Grafana core distribution. -Use the plugin to query and visualize data stored in {{< product-name >}} with -both InfluxQL and SQL. 
- -> [!Note] -> #### Grafana 10.3+ -> -> The instructions below are for **Grafana 10.3+** which introduced the newest -> version of the InfluxDB core plugin. -> The updated plugin includes **SQL support** for InfluxDB 3-based products such -> as {{< product-name >}}. - -## Create an InfluxDB data source - -Which data source you create depends on which query language you want to use to -query {{% product-name %}}: - -1. In your Grafana user interface (UI), navigate to **Data Sources**. -2. Click **Add new data source**. -3. Search for and select the **InfluxDB** plugin. -4. Provide a name for your data source. -5. Under **Query Language**, select either **SQL** or **InfluxQL**: - -{{< tabs-wrapper >}} -{{% tabs %}} -[SQL](#) -[InfluxQL](#) -{{% /tabs %}} -{{% tab-content %}} - - -When creating an InfluxDB data source that uses SQL to query data: - -1. Under **HTTP**: - - - **URL**: Provide your [{{% product-name %}} region URL](/influxdb3/cloud-serverless/reference/regions/) - using the HTTPS protocol: - - ``` - https://{{< influxdb/host >}} - ``` - -2. Under **InfluxDB Details**: - - - **Database**: Provide a default bucket name to query. - In {{< product-name >}}, a bucket functions as a database. - - **Token**: Provide an [API token](/influxdb3/cloud-serverless/admin/tokens/) - with read access to the buckets you want to query. - -3. Click **Save & test**. - - {{< img-hd src="/img/influxdb3/cloud-serverless-grafana-influxdb-data-source-sql.png" alt="Grafana InfluxDB data source for InfluxDB Cloud Serverless that uses SQL" />}} - - -{{% /tab-content %}} -{{% tab-content %}} - - -When creating an InfluxDB data source that uses InfluxQL to query data: - -> [!Note] -> #### Map databases and retention policies to buckets -> -> To query {{% product-name %}} with InfluxQL, first map database and retention policy -> (DBRP) combinations to your InfluxDB Cloud buckets. For more information, see -> [Map databases and retention policies to buckets](/influxdb3/cloud-serverless/query-data/influxql/dbrp/). - -1. Under **HTTP**: - - - **URL**: Provide your [{{% product-name %}} region URL](/influxdb3/cloud-serverless/reference/regions/) - using the HTTPS protocol: - - ``` - https://{{< influxdb/host >}} - ``` - -2. Under **InfluxDB Details**: - - - **Database**: Provide a database name to query. - Use the database name that is mapped to your InfluxDB bucket. - - **User**: Provide an arbitrary string. - _This credential is ignored when querying {{% product-name %}}, but it cannot be empty._ - - **Password**: Provide an [API token](/influxdb3/cloud-serverless/admin/tokens/) - with read access to the buckets you want to query. - - **HTTP Method**: Choose one of the available HTTP request methods to use when querying data: - - - **POST** ({{< req text="Recommended" >}}) - - **GET** - -3. Click **Save & test**. - - {{< img-hd src="/img/influxdb3/cloud-serverless-grafana-influxdb-data-source-influxql.png" alt="Grafana InfluxDB data source for InfluxDB Cloud Serverless using InfluxQL" />}} - - -{{% /tab-content %}} -{{< /tabs-wrapper >}} - -## Query InfluxDB with Grafana - -After you [configure and save a FlightSQL or InfluxDB datasource](#create-a-datasource), -use Grafana to build, run, and inspect queries against your InfluxDB bucket. - -{{< tabs-wrapper >}} -{{% tabs %}} -[SQL](#) -[InfluxQL](#) -{{% /tabs %}} -{{% tab-content %}} - - -> [!Note] -> {{% sql/sql-schema-intro %}} -> To learn more, see [Query Data](/influxdb3/cloud-serverless/query-data/sql/). - -1. Click **Explore**. -2. 
In the dropdown, select the saved InfluxDB data source to query. -3. Use the SQL query form to build your query: - - **Table**: Select the measurement to query. - - **Column**: Select one or more fields and tags to return as columns in query results. - - With SQL, select the `time` column to include timestamps with the data. - Grafana relies on the `time` column to correctly graph time series data. - - - _**Optional:**_ Toggle **filter** to generate **WHERE** clause statements. - - **WHERE**: Configure condition expressions to include in the `WHERE` clause. - - - _**Optional:**_ Toggle **group** to generate **GROUP BY** clause statements. - - - **GROUP BY**: Select columns to group by. - If you include an aggregation function in the **SELECT** list, - you must group by one or more of the queried columns. - SQL returns the aggregation for each group. - - - {{< req text="Recommended" color="green" >}}: - Toggle **order** to generate **ORDER BY** clause statements. - - - **ORDER BY**: Select columns to sort by. - You can sort by time and multiple fields or tags. - To sort in descending order, select **DESC**. - -4. {{< req text="Recommended" color="green" >}}: Change format to **Time series**. - - Use the **Format** dropdown to change the format of the query results. - For example, to visualize the query results as a time series, select **Time series**. - -5. Click **Run query** to execute the query. - - -{{% /tab-content %}} -{{% tab-content %}} - - -1. Click **Explore**. -2. In the dropdown, select the **InfluxDB** data source that you want to query. -3. Use the InfluxQL query form to build your query: - - **FROM**: Select the measurement that you want to query. - - **WHERE**: To filter the query results, enter a conditional expression. - - **SELECT**: Select fields to query and an aggregate function to apply to each. - The aggregate function is applied to each time interval defined in the - `GROUP BY` clause. - - **GROUP BY**: By default, Grafana groups data by time to downsample results - and improve query performance. - You can also add other tags to group by. -4. Click **Run query** to execute the query. - - -{{% /tab-content %}} -{{< /tabs-wrapper >}} - -{{< youtube "rSsouoNsNDs" >}} - -To learn about query management and inspection in Grafana, see the -[Grafana Explore documentation](https://grafana.com/docs/grafana/latest/explore/). - -## Build visualizations with Grafana - -For a comprehensive walk-through of creating visualizations with -Grafana, see the [Grafana documentation](https://grafana.com/docs/grafana/latest/). + diff --git a/content/influxdb3/clustered/process-data/visualize/grafana.md b/content/influxdb3/clustered/process-data/visualize/grafana.md index 3bf6a952e..4818070bf 100644 --- a/content/influxdb3/clustered/process-data/visualize/grafana.md +++ b/content/influxdb3/clustered/process-data/visualize/grafana.md @@ -9,7 +9,7 @@ menu: influxdb3_clustered: name: Use Grafana parent: Visualize data -influxdb3/clustered/tags: [query, visualization] +influxdb3/clustered/tags: [query, visualization, Grafana] aliases: - /influxdb3/clustered/query-data/tools/grafana/ - /influxdb3/clustered/query-data/sql/execute-queries/grafana/ @@ -20,195 +20,7 @@ alt_links: cloud: /influxdb/cloud/tools/grafana/ core: /influxdb3/core/visualize-data/grafana/ enterprise: /influxdb3/enterprise/visualize-data/grafana/ +source: /content/shared/v3-process-data/visualize/grafana.md --- -Use [Grafana](https://grafana.com/) to query and visualize data stored in -{{% product-name %}}. 
- -> [Grafana] enables you to query, visualize, alert on, and explore your metrics, -> logs, and traces wherever they are stored. -> [Grafana] provides you with tools to turn your time-series database (TSDB) -> data into insightful graphs and visualizations. -> -> {{% cite %}}-- [Grafana documentation](https://grafana.com/docs/grafana/latest/introduction/){{% /cite %}} - -- [Install Grafana or login to Grafana Cloud](#install-grafana-or-login-to-grafana-cloud) -- [InfluxDB data source](#influxdb-data-source) -- [Create an InfluxDB data source](#create-an-influxdb-data-source) -- [Query InfluxDB with Grafana](#query-influxdb-with-grafana) -- [Build visualizations with Grafana](#build-visualizations-with-grafana) - -## Install Grafana or login to Grafana Cloud - -If using the open source version of **Grafana**, follow the -[Grafana installation instructions](https://grafana.com/docs/grafana/latest/setup-grafana/installation/) -to install Grafana for your operating system. -If using **Grafana Cloud**, login to your Grafana Cloud instance. - -## InfluxDB data source - -The InfluxDB data source plugin is included in the Grafana core distribution. -Use the plugin to query and visualize data stored in {{< product-name >}} with -both InfluxQL and SQL. - -> [!Note] -> #### Grafana 10.3+ -> -> The instructions below are for **Grafana 10.3+** which introduced the newest -> version of the InfluxDB core plugin. -> The updated plugin includes **SQL support** for InfluxDB 3-based products such -> as {{< product-name >}}. - -## Create an InfluxDB data source - -1. In your Grafana user interface (UI), navigate to **Data Sources**. -2. Click **Add new data source**. -3. Search for and select the **InfluxDB** plugin. -4. Provide a name for your data source. -5. Under **Query Language**, select either **SQL** or **InfluxQL**: - -{{< tabs-wrapper >}} -{{% tabs %}} -[SQL](#) -[InfluxQL](#) -{{% /tabs %}} -{{% tab-content %}} - - -When creating an InfluxDB data source that uses SQL to query data: - -1. Under **HTTP**: - - - **URL**: Provide your {{% product-name omit=" Clustered" %}} cluster URL - using the HTTPS protocol: - - ``` - https://{{< influxdb/host >}} - ``` - -2. Under **InfluxDB Details**: - - - **Database**: Provide a default [database](/influxdb3/clustered/admin/databases/) name to query. - - **Token**: Provide a [database token](/influxdb3/clustered/admin/tokens/#database-tokens) - with read access to the databases you want to query. - -3. Click **Save & test**. - - {{< img-hd src="/img/influxdb3/clustered-grafana-influxdb-data-source-sql.png" alt="Grafana InfluxDB data source for InfluxDB Cloud Serverless that uses SQL" />}} - - -{{% /tab-content %}} -{{% tab-content %}} - - -When creating an InfluxDB data source that uses InfluxQL to query data: - -1. Under **HTTP**: - - - **URL**: Provide your [{{% product-name %}} region URL](/influxdb3/clustered/reference/regions/) - using the HTTPS protocol: - - ``` - https://{{< influxdb/host >}} - ``` - -2. Under **InfluxDB Details**: - - - **Database**: Provide a default [database](/influxdb3/clustered/admin/databases/) name to query. - - **User**: Provide an arbitrary string. - _This credential is ignored when querying {{% product-name %}}, but it cannot be empty._ - - **Password**: Provide a [database token](/influxdb3/clustered/admin/tokens/#database-tokens) - with read access to the databases you want to query. 
- - **HTTP Method**: Choose one of the available HTTP request methods to use when querying data: - - - **POST** ({{< req text="Recommended" >}}) - - **GET** - -3. Click **Save & test**. - - {{< img-hd src="/img/influxdb3/clustered-grafana-influxdb-data-source-influxql.png" alt="Grafana InfluxDB data source for InfluxDB Cloud Serverless using InfluxQL" />}} - - -{{% /tab-content %}} -{{< /tabs-wrapper >}} - -## Query InfluxDB with Grafana - -After you [configure and save an InfluxDB datasource](#create-a-datasource), -use Grafana to build, run, and inspect queries against your InfluxDB database. - -{{< tabs-wrapper >}} -{{% tabs %}} -[SQL](#) -[InfluxQL](#) -{{% /tabs %}} -{{% tab-content %}} - - -> [!Note] -> {{% sql/sql-schema-intro %}} -> To learn more, see [Query Data](/influxdb3/clustered/query-data/sql/). - -1. Click **Explore**. -2. In the dropdown, select the saved InfluxDB data source to query. -3. Use the SQL query form to build your query: - - **Table**: Select the measurement to query. - - **Column**: Select one or more fields and tags to return as columns in query results. - - With SQL, select the `time` column to include timestamps with the data. - Grafana relies on the `time` column to correctly graph time series data. - - - _**Optional:**_ Toggle **filter** to generate **WHERE** clause statements. - - **WHERE**: Configure condition expressions to include in the `WHERE` clause. - - - _**Optional:**_ Toggle **group** to generate **GROUP BY** clause statements. - - - **GROUP BY**: Select columns to group by. - If you include an aggregation function in the **SELECT** list, - you must group by one or more of the queried columns. - SQL returns the aggregation for each group. - - - {{< req text="Recommended" color="green" >}}: - Toggle **order** to generate **ORDER BY** clause statements. - - - **ORDER BY**: Select columns to sort by. - You can sort by time and multiple fields or tags. - To sort in descending order, select **DESC**. - -4. {{< req text="Recommended" color="green" >}}: Change format to **Time series**. - - Use the **Format** dropdown to change the format of the query results. - For example, to visualize the query results as a time series, select **Time series**. - -5. Click **Run query** to execute the query. - - -{{% /tab-content %}} -{{% tab-content %}} - - -1. Click **Explore**. -2. In the dropdown, select the **InfluxDB** data source that you want to query. -3. Use the InfluxQL query form to build your query: - - **FROM**: Select the measurement that you want to query. - - **WHERE**: To filter the query results, enter a conditional expression. - - **SELECT**: Select fields to query and an aggregate function to apply to each. - The aggregate function is applied to each time interval defined in the - `GROUP BY` clause. - - **GROUP BY**: By default, Grafana groups data by time to downsample results - and improve query performance. - You can also add other tags to group by. -4. Click **Run query** to execute the query. - - -{{% /tab-content %}} -{{< /tabs-wrapper >}} - -{{< youtube "rSsouoNsNDs" >}} - -To learn about query management and inspection in Grafana, see the -[Grafana Explore documentation](https://grafana.com/docs/grafana/latest/explore/). - -## Build visualizations with Grafana - -For a comprehensive walk-through of creating visualizations with -Grafana, see the [Grafana documentation](https://grafana.com/docs/grafana/latest/). 
+ diff --git a/content/shared/v3-process-data/visualize/grafana.md b/content/shared/v3-process-data/visualize/grafana.md new file mode 100644 index 000000000..baf2a753c --- /dev/null +++ b/content/shared/v3-process-data/visualize/grafana.md @@ -0,0 +1,209 @@ +Use [Grafana](https://grafana.com/) to query and visualize data stored in +{{% product-name %}}. + +> [Grafana] enables you to query, visualize, alert on, and explore your metrics, +> logs, and traces wherever they are stored. +> [Grafana] provides you with tools to turn your time-series database (TSDB) +> data into insightful graphs and visualizations. +> +> {{% cite %}}-- [Grafana documentation](https://grafana.com/docs/grafana/latest/introduction/){{% /cite %}} + +- [Install Grafana or login to Grafana Cloud](#install-grafana-or-login-to-grafana-cloud) +- [InfluxDB data source](#influxdb-data-source) +- [Create an InfluxDB data source](#create-an-influxdb-data-source) +- [Query InfluxDB with Grafana](#query-influxdb-with-grafana) +- [Build visualizations with Grafana](#build-visualizations-with-grafana) + +## Install Grafana or login to Grafana Cloud + +If using the open source version of **Grafana**, follow the +[Grafana installation instructions](https://grafana.com/docs/grafana/latest/setup-grafana/installation/) +to install Grafana for your operating system. +If using **Grafana Cloud**, login to your Grafana Cloud instance. + +## InfluxDB data source + +The InfluxDB data source plugin is included in the Grafana core distribution. +Use the plugin to query and visualize data stored in {{< product-name >}} with +both InfluxQL and SQL. + +> [!Note] +> #### Grafana 10.3+ +> +> The instructions below are for **Grafana 10.3+** which introduced the newest +> version of the InfluxDB core plugin. +> The updated plugin includes **SQL support** for InfluxDB 3-based products such +> as {{< product-name >}}. + +## Create an InfluxDB data source + +Which data source you create depends on which query language you want to use to +query {{% product-name %}}: + +1. In your Grafana user interface (UI), navigate to **Data Sources**. +2. Click **Add new data source**. +3. Search for and select the **InfluxDB** plugin. +4. Provide a name for your data source. +5. Under **Query Language**, select either **SQL** or **InfluxQL**: + +{{< tabs-wrapper >}} +{{% tabs %}} +[SQL](#) +[InfluxQL](#) +{{% /tabs %}} +{{% tab-content %}} + + +When creating an InfluxDB data source that uses SQL to query data: + +1. Under **HTTP**: + + - **URL**: Provide your {{% show-in "cloud-serverless" %}}[{{< product-name >}} region URL](/influxdb3/version/reference/regions/){{% /show-in %}} + {{% hide-in "cloud-serverless" %}}{{% product-name omit=" Clustered" %}} cluster URL{{% /hide-in %}} using the HTTPS protocol: + + ``` + https://{{< influxdb/host >}} + ``` +2. Under **InfluxDB Details**: + + - **Database**: Provide a default {{% show-in "cloud-serverless" %}}[bucket](/influxdb3/version/admin/buckets/) name to query. In {{< product-name >}}, a bucket functions as a database.{{% /show-in %}}{{% hide-in "cloud-serverless" %}}[database](/influxdb3/version/admin/databases/) name to query.{{% /hide-in %}} + - **Token**: Provide {{% show-in "cloud-serverless" %}}an [API token](/influxdb3/version/admin/tokens/) with read access to the buckets you want to query.{{% /show-in %}}{{% hide-in "cloud-serverless" %}}a [database token](/influxdb3/version/admin/tokens/#database-tokens) with read access to the databases you want to query.{{% /hide-in %}} +3. Click **Save & test**. 
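+
+> [!Note]
+> #### Verify the connection outside Grafana
+>
+> If **Save & test** fails, you can check the URL and token with a direct API
+> request. The following is a minimal sketch that uses the InfluxDB v1-compatible
+> query API; `DATABASE_NAME` and `AUTH_TOKEN` are placeholders for your own values:
+>
+> ```sh
+> curl --get "https://{{< influxdb/host >}}/query" \
+>   --header "Authorization: Token AUTH_TOKEN" \
+>   --data-urlencode "db=DATABASE_NAME" \
+>   --data-urlencode "q=SHOW MEASUREMENTS"
+> ```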
+
+{{% show-in "cloud-serverless" %}}{{< img-hd src="/img/influxdb3/cloud-serverless-grafana-influxdb-data-source-sql.png" alt="Grafana InfluxDB data source for InfluxDB Cloud Serverless that uses SQL" />}}{{% /show-in %}}
+{{% show-in "cloud-dedicated" %}}{{< img-hd src="/img/influxdb3/cloud-dedicated-grafana-influxdb-data-source-sql.png" alt="Grafana InfluxDB data source for InfluxDB Cloud Dedicated that uses SQL" />}}{{% /show-in %}}
+{{% show-in "clustered" %}}{{< img-hd src="/img/influxdb3/clustered-grafana-influxdb-data-source-sql.png" alt="Grafana InfluxDB data source for InfluxDB Clustered that uses SQL" />}}{{% /show-in %}}
+
+{{% /tab-content %}}
+{{% tab-content %}}
+
+When creating an InfluxDB data source that uses InfluxQL to query data:
+
+{{% show-in "cloud-serverless" %}}
+> [!Note]
+> #### Map databases and retention policies to buckets
+>
+> To query {{% product-name %}} with InfluxQL, first map database and retention policy
+> (DBRP) combinations to your InfluxDB Cloud buckets. For more information, see
+> [Map databases and retention policies to buckets](/influxdb3/version/query-data/influxql/dbrp/).
+{{% /show-in %}}
+
+1. Under **HTTP**:
+
+   - **URL**: Provide your {{% show-in "cloud-serverless" %}}[{{< product-name >}} region URL](/influxdb3/version/reference/regions/){{% /show-in %}}{{% hide-in "cloud-serverless" %}}{{% product-name omit=" Clustered" %}} cluster URL{{% /hide-in %}}
+     using the HTTPS protocol:
+
+     ```
+     https://{{< influxdb/host >}}
+     ```
+2. Under **InfluxDB Details**:
+
+   - **Database**: Provide a {{% show-in "cloud-serverless" %}}database name to query.
+     Use the database name that is mapped to your InfluxDB bucket{{% /show-in %}}{{% hide-in "cloud-serverless" %}}default [database](/influxdb3/version/admin/databases/) name to query{{% /hide-in %}}.
+   - **User**: Provide an arbitrary string.
+     _This credential is ignored when querying {{% product-name %}}, but it cannot be empty._
+   - **Password**: Provide {{% show-in "cloud-serverless" %}}an [API token](/influxdb3/version/admin/tokens/) with read access to the buckets you want to query{{% /show-in %}}{{% hide-in "cloud-serverless" %}}a [database token](/influxdb3/version/admin/tokens/#database-tokens) with read access to the databases you want to query{{% /hide-in %}}.
+   - **HTTP Method**: Choose one of the available HTTP request methods to use when querying data:
+
+     - **POST** ({{< req text="Recommended" >}})
+     - **GET**
+3. Click **Save & test**.
+
+{{% show-in "cloud-dedicated" %}}{{< img-hd src="/img/influxdb3/cloud-dedicated-grafana-influxdb-data-source-influxql.png" alt="Grafana InfluxDB data source for InfluxDB Cloud Dedicated using InfluxQL" />}}{{% /show-in %}}
+{{% show-in "cloud-serverless" %}}{{< img-hd src="/img/influxdb3/cloud-serverless-grafana-influxdb-data-source-influxql.png" alt="Grafana InfluxDB data source for InfluxDB Cloud Serverless using InfluxQL" />}}{{% /show-in %}}
+{{% show-in "clustered" %}}{{< img-hd src="/img/influxdb3/clustered-grafana-influxdb-data-source-influxql.png" alt="Grafana InfluxDB data source for InfluxDB Clustered using InfluxQL" />}}{{% /show-in %}}
+
+{{% /tab-content %}}
+{{< /tabs-wrapper >}}
+
+## Query InfluxDB with Grafana
+
+After you [configure and save an InfluxDB datasource](#create-an-influxdb-data-source),
+use Grafana to build, run, and inspect queries against your InfluxDB {{% show-in "cloud-serverless" %}}bucket{{% /show-in %}}{{% hide-in "cloud-serverless" %}}database{{% /hide-in %}}.
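+
+For example, the following is a minimal sketch of a time-series-friendly SQL
+query to run in **Explore** (the `home` measurement and its `room` and `temp`
+columns are hypothetical; substitute your own schema):
+
+```sql
+SELECT time, room, temp
+FROM home
+WHERE time >= now() - INTERVAL '1 hour'
+ORDER BY time
+```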
+ +{{< tabs-wrapper >}} +{{% tabs %}} +[SQL](#) +[InfluxQL](#) +{{% /tabs %}} +{{% tab-content %}} + + +> [!Note] +> {{% sql/sql-schema-intro %}} +{{% show-in "cloud-serverless" %}} +> To learn more, see [Query Data](/influxdb3/version/query-data/sql/). +{{% /show-in %}} +{{% show-in "cloud-dedicated" %}} +> To learn more, see [Query Data](/influxdb3/version/query-data/sql/). +{{% /show-in %}} +{{% show-in "clustered" %}} +> To learn more, see [Query Data](/influxdb3/version/query-data/sql/). +{{% /show-in %}} + +1. Click **Explore**. +2. In the dropdown, select the saved InfluxDB data source to query. +3. Use the SQL query form to build your query: + - **Table**: Select the measurement to query. + - **Column**: Select one or more fields and tags to return as columns in query results. + + With SQL, select the `time` column to include timestamps with the data. + Grafana relies on the `time` column to correctly graph time series data. + + - _**Optional:**_ Toggle **filter** to generate **WHERE** clause statements. + - **WHERE**: Configure condition expressions to include in the `WHERE` clause. + + - _**Optional:**_ Toggle **group** to generate **GROUP BY** clause statements. + + - **GROUP BY**: Select columns to group by. + If you include an aggregation function in the **SELECT** list, + you must group by one or more of the queried columns. + SQL returns the aggregation for each group. + + - {{< req text="Recommended" color="green" >}}: + Toggle **order** to generate **ORDER BY** clause statements. + + - **ORDER BY**: Select columns to sort by. + You can sort by time and multiple fields or tags. + To sort in descending order, select **DESC**. + +4. {{< req text="Recommended" color="green" >}}: Change format to **Time series**. + - Use the **Format** dropdown to change the format of the query results. + For example, to visualize the query results as a time series, select **Time series**. + +5. Click **Run query** to execute the query. + + +{{% /tab-content %}} +{{% tab-content %}} + + +1. Click **Explore**. +2. In the dropdown, select the **InfluxDB** data source that you want to query. +3. Use the InfluxQL query form to build your query: + - **FROM**: Select the measurement that you want to query. + - **WHERE**: To filter the query results, enter a conditional expression. + - **SELECT**: Select fields to query and an aggregate function to apply to each. + The aggregate function is applied to each time interval defined in the + `GROUP BY` clause. + - **GROUP BY**: By default, Grafana groups data by time to downsample results + and improve query performance. + You can also add other tags to group by. +4. Click **Run query** to execute the query. + + +{{% /tab-content %}} +{{< /tabs-wrapper >}} + +{{< youtube "rSsouoNsNDs" >}} + +To learn about query management and inspection in Grafana, see the +[Grafana Explore documentation](https://grafana.com/docs/grafana/latest/explore/). + +## Build visualizations with Grafana + +For a comprehensive walk-through of creating visualizations with +Grafana, see the [Grafana documentation](https://grafana.com/docs/grafana/latest/). \ No newline at end of file From 903c16e50a6c02bc43776386de460f8ae3a2a3a2 Mon Sep 17 00:00:00 2001 From: jaal2001 Date: Thu, 11 Sep 2025 16:46:36 +0200 Subject: [PATCH 24/31] Update config-options.md Removed ")" in --exec-mem-pool-bytes as typo. Added format information for --wal-flush-interval and 100ms suggestion from @peterbarnett03 in discord chat. 
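
For reference, a minimal sketch of how the two options touched here are passed
to `influxdb3 serve` (the node ID, object store, and data directory values are
placeholders; the flag names come from the documentation below):

```sh
influxdb3 serve \
  --node-id node0 \
  --object-store file \
  --data-dir ~/.influxdb3/data \
  --exec-mem-pool-bytes 10% \
  --wal-flush-interval 100ms
```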
---
 content/shared/influxdb3-cli/config-options.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/content/shared/influxdb3-cli/config-options.md b/content/shared/influxdb3-cli/config-options.md
index 5f91d50a0..32c545778 100644
--- a/content/shared/influxdb3-cli/config-options.md
+++ b/content/shared/influxdb3-cli/config-options.md
@@ -1278,7 +1278,7 @@ Defines the address on which InfluxDB serves HTTP API requests.
 
 Specifies the size of memory pool used during query execution.
 Can be given as absolute value in bytes or as a percentage of the total
 available memory--for
-example: `8000000000` or `10%`).
+example: `8000000000` or `10%`.
 
 {{% show-in "core" %}}**Default:** `8589934592`{{% /show-in %}}
 {{% show-in "enterprise" %}}**Default:** `20%`{{% /show-in %}}
@@ -1316,6 +1316,7 @@ percentage (portion of available memory) or absolute value in MB--for example: `
 
 Specifies the interval to flush buffered data to a WAL file.
 Writes that wait for WAL confirmation take up to this interval to complete.
+Can be `s` for seconds or `ms` for miliseconds. 100ms is suggested for local disks.
 
 **Default:** `1s`
 

From 988aef7e071c2a1172c3c3804f9afbacdbddc8d0 Mon Sep 17 00:00:00 2001
From: Scott Anderson
Date: Thu, 11 Sep 2025 08:58:47 -0600
Subject: [PATCH 25/31] fix(sql): hotfix typos in sql window functions doc

---
 content/shared/sql-reference/functions/window.md | 16 ++--------------
 1 file changed, 2 insertions(+), 14 deletions(-)

diff --git a/content/shared/sql-reference/functions/window.md b/content/shared/sql-reference/functions/window.md
index 17693c8c0..ca980dbca 100644
--- a/content/shared/sql-reference/functions/window.md
+++ b/content/shared/sql-reference/functions/window.md
@@ -329,8 +329,8 @@ each frame that the window function operates on.
 
 - [UNBOUNDED PRECEDING](#unbounded-preceding)
 - [offset PRECEDING](#offset-preceding)
-- CURRENT_ROW](#current-row)
-- [offset> FOLLOWING](#offset-following)
+- [CURRENT_ROW](#current-row)
+- [offset FOLLOWING](#offset-following)
 - [UNBOUNDED FOLLOWING](#unbounded-following)
 
 ##### UNBOUNDED PRECEDING
@@ -369,18 +369,6 @@ For example, `3 FOLLOWING` includes 3 rows after the current row.
 
 ##### UNBOUNDED FOLLOWING
 
-Starts at the current row and ends at the last row of the partition.
-##### offset FOLLOWING
-
-Use a specified offset of [frame units](#frame-units) _after_ the current row
-as a frame boundary.
-
-```sql
-offset FOLLOWING
-```
-
-##### UNBOUNDED FOLLOWING
-
 Use the current row to the end of the current partition as the frame boundary.
 
 ```sql

From 74a1cc45df881603abc0d454c71cdb91ecf0e9dd Mon Sep 17 00:00:00 2001
From: Jason Stirnaman
Date: Thu, 11 Sep 2025 11:12:16 -0500
Subject: [PATCH 26/31] Apply suggestions from code review

Co-authored-by: Scott Anderson
---
 content/shared/v3-process-data/visualize/grafana.md | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/content/shared/v3-process-data/visualize/grafana.md b/content/shared/v3-process-data/visualize/grafana.md
index baf2a753c..c809b1c4d 100644
--- a/content/shared/v3-process-data/visualize/grafana.md
+++ b/content/shared/v3-process-data/visualize/grafana.md
@@ -19,7 +19,7 @@ Use [Grafana](https://grafana.com/) to query and visualize data stored in
 If using the open source version of **Grafana**, follow the
 [Grafana installation instructions](https://grafana.com/docs/grafana/latest/setup-grafana/installation/)
 to install Grafana for your operating system.
+If using **Grafana Cloud**, log in to your Grafana Cloud instance. ## InfluxDB data source @@ -134,15 +134,7 @@ use Grafana to build, run, and inspect queries against your InfluxDB {{% show-in > [!Note] > {{% sql/sql-schema-intro %}} -{{% show-in "cloud-serverless" %}} > To learn more, see [Query Data](/influxdb3/version/query-data/sql/). -{{% /show-in %}} -{{% show-in "cloud-dedicated" %}} -> To learn more, see [Query Data](/influxdb3/version/query-data/sql/). -{{% /show-in %}} -{{% show-in "clustered" %}} -> To learn more, see [Query Data](/influxdb3/version/query-data/sql/). -{{% /show-in %}} 1. Click **Explore**. 2. In the dropdown, select the saved InfluxDB data source to query. From 78de3407a1a15ca2bb38795d1ffcfd763deb1283 Mon Sep 17 00:00:00 2001 From: Sven Rebhan Date: Thu, 11 Sep 2025 20:41:36 +0200 Subject: [PATCH 27/31] Updating changelog --- content/telegraf/v1/release-notes.md | 83 ++++++++++++++++++++++++++++ 1 file changed, 83 insertions(+) diff --git a/content/telegraf/v1/release-notes.md b/content/telegraf/v1/release-notes.md index 0fd5dba61..2c96b275d 100644 --- a/content/telegraf/v1/release-notes.md +++ b/content/telegraf/v1/release-notes.md @@ -11,6 +11,89 @@ menu: weight: 60 --- +## v1.36.0 {date="2025-09-08"} + +### Important Changes + +- PR [#17355](https://github.com/influxdata/telegraf/pull/17355) changes the `profiles` support + of `inputs.opentelemetry` from the `v1 experimental` to the `v1 development` as this experimental API + is updated upstream. This will change the metric by for example removing the no-longer reported + `frame_type`, `stack_trace_id`, `build_id`, and `build_id_type` fields. Also, the value format of other fields + or tags might have changed. Please refer to the + [OpenTelemetry documentation](https://opentelemetry.io/docs/) for more details. 
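+
+  A minimal sketch of the receiver configuration for reference (the
+  `service_address` option and the OTLP gRPC default port follow the plugin
+  README; adjust the address for your deployment):
+
+  ```toml
+  [[inputs.opentelemetry]]
+    ## Listen for OTLP traces, metrics, logs, and profiles over gRPC
+    service_address = "0.0.0.0:4317"
+  ```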
+
+### New Plugins
+
+- [#17368](https://github.com/influxdata/telegraf/pull/17368) `inputs.turbostat` Add plugin
+- [#17078](https://github.com/influxdata/telegraf/pull/17078) `processors.round` Add plugin
+
+### Features
+
+- [#16705](https://github.com/influxdata/telegraf/pull/16705) `agent` Introduce labels and selectors to enable and disable plugins
+- [#17547](https://github.com/influxdata/telegraf/pull/17547) `inputs.influxdb_v2_listener` Add `/health` route
+- [#17312](https://github.com/influxdata/telegraf/pull/17312) `inputs.internal` Allow collecting statistics per plugin instance
+- [#17024](https://github.com/influxdata/telegraf/pull/17024) `inputs.lvm` Add sync_percent for lvm_logical_vol
+- [#17355](https://github.com/influxdata/telegraf/pull/17355) `inputs.opentelemetry` Upgrade otlp proto module
+- [#17156](https://github.com/influxdata/telegraf/pull/17156) `inputs.syslog` Add support for RFC3164 over TCP
+- [#17543](https://github.com/influxdata/telegraf/pull/17543) `inputs.syslog` Allow limiting message size in octet counting mode
+- [#17539](https://github.com/influxdata/telegraf/pull/17539) `inputs.x509_cert` Add support for Windows certificate stores
+- [#17244](https://github.com/influxdata/telegraf/pull/17244) `outputs.nats` Allow disabling stream creation for externally managed streams
+- [#17474](https://github.com/influxdata/telegraf/pull/17474) `outputs.elasticsearch` Support array headers and preserve commas in values
+- [#17548](https://github.com/influxdata/telegraf/pull/17548) `outputs.influxdb` Add internal statistics for written bytes
+- [#17213](https://github.com/influxdata/telegraf/pull/17213) `outputs.nats` Allow providing a subject layout
+- [#17346](https://github.com/influxdata/telegraf/pull/17346) `outputs.nats` Enable batch serialization with use_batch_format
+- [#17249](https://github.com/influxdata/telegraf/pull/17249) `outputs.sql` Allow sending batches of metrics in transactions
+- [#17510](https://github.com/influxdata/telegraf/pull/17510) `parsers.avro` Support record arrays at root level
+- [#17365](https://github.com/influxdata/telegraf/pull/17365) `plugins.snmp` Allow debug logging in gosnmp
+- [#17345](https://github.com/influxdata/telegraf/pull/17345) `selfstat` Implement collection of plugin-internal statistics
+
+### Bugfixes
+
+- [#17411](https://github.com/influxdata/telegraf/pull/17411) `inputs.diskio` Handle counter wrapping in io fields
+- [#17551](https://github.com/influxdata/telegraf/pull/17551) `inputs.s7comm` Use correct value for string length with 'extra' parameter
+- [#17579](https://github.com/influxdata/telegraf/pull/17579) `internal` Extract go version more robustly
+- [#17566](https://github.com/influxdata/telegraf/pull/17566) `outputs` Retrigger batch-available-events only if at least one metric was written successfully
+- [#17381](https://github.com/influxdata/telegraf/pull/17381) `packaging` Rename rpm from loong64 to loongarch64
+
+### Dependency Updates
+
+- [#17519](https://github.com/influxdata/telegraf/pull/17519) `deps` Bump cloud.google.com/go/storage from 1.56.0 to 1.56.1
+- [#17532](https://github.com/influxdata/telegraf/pull/17532) `deps` Bump github.com/Azure/azure-sdk-for-go/sdk/azcore from 1.18.2 to 1.19.0
+- [#17494](https://github.com/influxdata/telegraf/pull/17494) `deps` Bump github.com/SAP/go-hdb from 1.13.12 to 1.14.0
+- [#17488](https://github.com/influxdata/telegraf/pull/17488) `deps` Bump github.com/antchfx/xpath from 1.3.4 to 1.3.5
+- [#17540](https://github.com/influxdata/telegraf/pull/17540)
`deps` Bump github.com/aws/aws-sdk-go-v2/config from 1.31.0 to 1.31.2 +- [#17538](https://github.com/influxdata/telegraf/pull/17538) `deps` Bump github.com/aws/aws-sdk-go-v2/credentials from 1.18.4 to 1.18.6 +- [#17517](https://github.com/influxdata/telegraf/pull/17517) `deps` Bump github.com/aws/aws-sdk-go-v2/feature/ec2/imds from 1.18.3 to 1.18.4 +- [#17528](https://github.com/influxdata/telegraf/pull/17528) `deps` Bump github.com/aws/aws-sdk-go-v2/service/cloudwatch from 1.48.0 to 1.48.2 +- [#17536](https://github.com/influxdata/telegraf/pull/17536) `deps` Bump github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs from 1.56.0 to 1.57.0 +- [#17524](https://github.com/influxdata/telegraf/pull/17524) `deps` Bump github.com/aws/aws-sdk-go-v2/service/dynamodb from 1.46.0 to 1.49.1 +- [#17493](https://github.com/influxdata/telegraf/pull/17493) `deps` Bump github.com/aws/aws-sdk-go-v2/service/ec2 from 1.242.0 to 1.244.0 +- [#17527](https://github.com/influxdata/telegraf/pull/17527) `deps` Bump github.com/aws/aws-sdk-go-v2/service/ec2 from 1.244.0 to 1.246.0 +- [#17530](https://github.com/influxdata/telegraf/pull/17530) `deps` Bump github.com/aws/aws-sdk-go-v2/service/kinesis from 1.38.0 to 1.39.1 +- [#17534](https://github.com/influxdata/telegraf/pull/17534) `deps` Bump github.com/aws/aws-sdk-go-v2/service/sts from 1.37.0 to 1.38.0 +- [#17513](https://github.com/influxdata/telegraf/pull/17513) `deps` Bump github.com/aws/aws-sdk-go-v2/service/timestreamwrite from 1.34.0 to 1.34.2 +- [#17514](https://github.com/influxdata/telegraf/pull/17514) `deps` Bump github.com/coreos/go-systemd/v22 from 22.5.0 to 22.6.0 +- [#17563](https://github.com/influxdata/telegraf/pull/17563) `deps` Bump github.com/facebook/time from 0.0.0-20240626113945-18207c5d8ddc to 0.0.0-20250903103710-a5911c32cdb9 +- [#17526](https://github.com/influxdata/telegraf/pull/17526) `deps` Bump github.com/gophercloud/gophercloud/v2 from 2.7.0 to 2.8.0 +- [#17537](https://github.com/influxdata/telegraf/pull/17537) `deps` Bump github.com/microsoft/go-mssqldb from 1.9.2 to 1.9.3 +- [#17490](https://github.com/influxdata/telegraf/pull/17490) `deps` Bump github.com/nats-io/nats-server/v2 from 2.11.7 to 2.11.8 +- [#17523](https://github.com/influxdata/telegraf/pull/17523) `deps` Bump github.com/nats-io/nats.go from 1.44.0 to 1.45.0 +- [#17492](https://github.com/influxdata/telegraf/pull/17492) `deps` Bump github.com/safchain/ethtool from 0.5.10 to 0.6.2 +- [#17486](https://github.com/influxdata/telegraf/pull/17486) `deps` Bump github.com/snowflakedb/gosnowflake from 1.15.0 to 1.16.0 +- [#17541](https://github.com/influxdata/telegraf/pull/17541) `deps` Bump github.com/tidwall/wal from 1.1.8 to 1.2.0 +- [#17529](https://github.com/influxdata/telegraf/pull/17529) `deps` Bump github.com/vmware/govmomi from 0.51.0 to 0.52.0 +- [#17496](https://github.com/influxdata/telegraf/pull/17496) `deps` Bump go.opentelemetry.io/collector/pdata from 1.36.1 to 1.38.0 +- [#17533](https://github.com/influxdata/telegraf/pull/17533) `deps` Bump go.opentelemetry.io/collector/pdata from 1.38.0 to 1.39.0 +- [#17516](https://github.com/influxdata/telegraf/pull/17516) `deps` Bump go.step.sm/crypto from 0.69.0 to 0.70.0 +- [#17499](https://github.com/influxdata/telegraf/pull/17499) `deps` Bump golang.org/x/mod from 0.26.0 to 0.27.0 +- [#17497](https://github.com/influxdata/telegraf/pull/17497) `deps` Bump golang.org/x/net from 0.42.0 to 0.43.0 +- [#17487](https://github.com/influxdata/telegraf/pull/17487) `deps` Bump google.golang.org/api from 0.246.0 to 0.247.0 +- 
[#17531](https://github.com/influxdata/telegraf/pull/17531) `deps` Bump google.golang.org/api from 0.247.0 to 0.248.0 +- [#17520](https://github.com/influxdata/telegraf/pull/17520) `deps` Bump google.golang.org/grpc from 1.74.2 to 1.75.0 +- [#17518](https://github.com/influxdata/telegraf/pull/17518) `deps` Bump google.golang.org/protobuf from 1.36.7 to 1.36.8 +- [#17498](https://github.com/influxdata/telegraf/pull/17498) `deps` Bump k8s.io/client-go from 0.33.3 to 0.33.4 +- [#17515](https://github.com/influxdata/telegraf/pull/17515) `deps` Bump super-linter/super-linter from 8.0.0 to 8.1.0 + ## v1.35.4 {date="2025-08-18"} ### Bugfixes From e0b58c3e4c486fcaa65ce3488fd622f96d536980 Mon Sep 17 00:00:00 2001 From: Sven Rebhan Date: Thu, 11 Sep 2025 20:41:36 +0200 Subject: [PATCH 28/31] Updating plugin list --- data/telegraf_plugins.yml | 60 ++++++++++++++++++++++++++++++++------- 1 file changed, 49 insertions(+), 11 deletions(-) diff --git a/data/telegraf_plugins.yml b/data/telegraf_plugins.yml index ac89b5be7..369410e48 100644 --- a/data/telegraf_plugins.yml +++ b/data/telegraf_plugins.yml @@ -502,8 +502,8 @@ input: Docker containers. > [!NOTE] - > Make sure Telegraf has sufficient permissions to access the - > configured endpoint. + > Make sure Telegraf has sufficient permissions to access the configured + > endpoint. introduced: v0.1.9 os_support: [freebsd, linux, macos, solaris, windows] tags: [containers] @@ -516,8 +516,8 @@ input: > [!NOTE] > This plugin works only for containers with the `local` or `json-file` or - > `journald` logging driver. Please make sure Telegraf has sufficient - > permissions to access the configured endpoint. + > `journald` logging driver. Make sure Telegraf has sufficient permissions + > to access the configured endpoint. introduced: v1.12.0 os_support: [freebsd, linux, macos, solaris, windows] tags: [containers, logging] @@ -1970,6 +1970,11 @@ input: This service plugin receives traces, metrics, logs and profiles from [OpenTelemetry](https://opentelemetry.io) clients and compatible agents via gRPC. + + > [!NOTE] + > Telegraf v1.32 through v1.35 support the Profiles signal using the v1 + > experimental API. Telegraf v1.36+ supports the Profiles signal using the + > v1 development API. introduced: v1.19.0 os_support: [freebsd, linux, macos, solaris, windows] tags: [logging, messaging] @@ -2672,6 +2677,19 @@ input: introduced: v0.3.0 os_support: [freebsd, linux, macos, solaris, windows] tags: [testing] + - name: Turbostat + id: turbostat + description: | + This service plugin monitors system performance using the + [turbostat](https://github.com/torvalds/linux/tree/master/tools/power/x86/turbostat) + command. + + > [!IMPORTANT] + > This plugin requires the `turbostat` executable to be installed on the + > system. + introduced: v1.36.0 + os_support: [linux] + tags: [hardware, system] - name: Twemproxy id: twemproxy description: | @@ -2835,7 +2853,8 @@ input: description: | This plugin provides information about [X.509](https://en.wikipedia.org/wiki/X.509) certificates accessible e.g. - via local file, tcp, udp, https or smtp protocols. + via local file, tcp, udp, https or smtp protocols and the Windows + Certificate Store. 
> [!NOTE] > When using a UDP address as a certificate source, the server must @@ -2940,8 +2959,8 @@ output: Explorer](https://docs.microsoft.com/en-us/azure/data-explorer), [Azure Synapse Data Explorer](https://docs.microsoft.com/en-us/azure/synapse-analytics/data-explorer/data-explorer-overview), - and [Real-Time Intelligence in - Fabric](https://learn.microsoft.com/fabric/real-time-intelligence/overview) + and [Real time analytics in + Fabric](https://learn.microsoft.com/en-us/fabric/real-time-analytics/overview) services. Azure Data Explorer is a distributed, columnar store, purpose built for @@ -3299,9 +3318,17 @@ output: - name: Microsoft Fabric id: microsoft_fabric description: | - This plugin writes metrics to [Real time analytics in - Fabric](https://learn.microsoft.com/en-us/fabric/real-time-analytics/overview) - services. + This plugin writes metrics to [Fabric + Eventhouse](https://learn.microsoft.com/fabric/real-time-intelligence/eventhouse) + and [Fabric + Eventstream](https://learn.microsoft.com/fabric/real-time-intelligence/event-streams/overview?tabs=enhancedcapabilities) + artifacts of [Real-Time Intelligence in Microsoft + Fabric](https://learn.microsoft.com/fabric/real-time-intelligence/overview). + + Real-Time Intelligence is a SaaS service in Microsoft Fabric that allows + you to extract insights and visualize data in motion. It offers an + end-to-end solution for event-driven scenarios, streaming data, and data + logs. introduced: v1.35.0 os_support: [freebsd, linux, macos, solaris, windows] tags: [datastore] @@ -4026,6 +4053,17 @@ processor: introduced: v1.15.0 os_support: [freebsd, linux, macos, solaris, windows] tags: [annotation] + - name: Round + id: round + description: | + This plugin allows to round numerical field values to the configured + precision. This is particularly useful in combination with the [dedup + processor](/telegraf/v1/plugins/#processor-dedup) to reduce the number of + metrics sent to the output if only a lower precision is required for the + values. + introduced: v1.36.0 + os_support: [freebsd, linux, macos, solaris, windows] + tags: [transformation] - name: S2 Geo id: s2geo description: | @@ -4122,7 +4160,7 @@ processor: - name: Template id: template description: | - This plugin applies templates to metrics for generatuing a new tag. The + This plugin applies templates to metrics for generating a new tag. The primary use case of this plugin is to create a tag that can be used for dynamic routing to multiple output plugins or using an output specific routing option. From d018cdedcb27c35f5aa7e80d1f4675fbfc507d3f Mon Sep 17 00:00:00 2001 From: Sven Rebhan Date: Thu, 11 Sep 2025 20:41:37 +0200 Subject: [PATCH 29/31] Updating product version --- data/products.yml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/data/products.yml b/data/products.yml index ec0014361..0be800a1b 100644 --- a/data/products.yml +++ b/data/products.yml @@ -141,9 +141,9 @@ telegraf: menu_category: other list_order: 6 versions: [v1] - latest: v1.35 + latest: v1.36 latest_patches: - v1: 1.35.4 + v1: 1.36.0 ai_sample_questions: - How do I install and configure Telegraf? - How do I write a custom Telegraf plugin? 
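
After upgrading, you can confirm the packaged version and exercise the new
v1.36 plugins without writing to any output. A minimal sketch: `telegraf config`
with filters generates a trimmed sample configuration, and `--test` gathers once
and prints metrics to stdout (`inputs.turbostat` additionally requires Linux and
the turbostat executable):

```sh
telegraf --version
telegraf --input-filter turbostat --processor-filter round config > telegraf.conf
telegraf --config telegraf.conf --test
```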
From 62880c9834e4267d05b6ba117d42acc309a4ff42 Mon Sep 17 00:00:00 2001
From: Jason Stirnaman
Date: Thu, 11 Sep 2025 16:01:18 -0500
Subject: [PATCH 30/31] Update content/shared/influxdb3-cli/config-options.md

---
 content/shared/influxdb3-cli/config-options.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/shared/influxdb3-cli/config-options.md b/content/shared/influxdb3-cli/config-options.md
index 32c545778..84af67883 100644
--- a/content/shared/influxdb3-cli/config-options.md
+++ b/content/shared/influxdb3-cli/config-options.md
@@ -1316,7 +1316,7 @@ percentage (portion of available memory) or absolute value in MB--for example: `
 Specifies the interval to flush buffered data to a WAL file. Writes that wait
 for WAL confirmation take up to this interval to complete.
 
-Can be `s` for seconds or `ms` for miliseconds. 100ms is suggested for local disks.
+Use `s` for seconds or `ms` for milliseconds. For local disks, `100 ms` is recommended.
 
 **Default:** `1s`

From e087bc5aaede0c99f1e78254f224d7047fe3693b Mon Sep 17 00:00:00 2001
From: Jason Stirnaman
Date: Fri, 12 Sep 2025 10:00:44 -0500
Subject: [PATCH 31/31] Apply suggestions from code review

---
 content/telegraf/v1/release-notes.md |  7 +------
 data/telegraf_plugins.yml            | 12 ++++++------
 2 files changed, 7 insertions(+), 12 deletions(-)

diff --git a/content/telegraf/v1/release-notes.md b/content/telegraf/v1/release-notes.md
index 2c96b275d..fadfe0ef0 100644
--- a/content/telegraf/v1/release-notes.md
+++ b/content/telegraf/v1/release-notes.md
@@ -15,12 +15,7 @@ menu:
 
 ### Important Changes
 
-- PR [#17355](https://github.com/influxdata/telegraf/pull/17355) changes the `profiles` support
-  of `inputs.opentelemetry` from the `v1 experimental` to the `v1 development` as this experimental API
-  is updated upstream. This will change the metric by for example removing the no-longer reported
-  `frame_type`, `stack_trace_id`, `build_id`, and `build_id_type` fields. Also, the value format of other fields
-  or tags might have changed. Please refer to the
-  [OpenTelemetry documentation](https://opentelemetry.io/docs/) for more details.
+- Pull request [#17355](https://github.com/influxdata/telegraf/pull/17355) updates `profiles` support in `inputs.opentelemetry` from v1 experimental to v1 development, following upstream changes to the experimental API. This update modifies metric output. For example, the `frame_type`, `stack_trace_id`, `build_id`, and `build_id_type` fields are no longer reported. The value format of other fields or tags might also have changed. For more information, see the [OpenTelemetry documentation](https://opentelemetry.io/docs/).
 
 ### New Plugins

diff --git a/data/telegraf_plugins.yml b/data/telegraf_plugins.yml
index 369410e48..7d43bcdeb 100644
--- a/data/telegraf_plugins.yml
+++ b/data/telegraf_plugins.yml
@@ -515,7 +515,7 @@ input:
       Docker containers.
 
       > [!NOTE]
-      > This plugin works only for containers with the `local` or `json-file` or
+      > This plugin works only for containers with the `local`, `json-file`, or
       > `journald` logging driver. Make sure Telegraf has sufficient permissions
       > to access the configured endpoint.
     introduced: v1.12.0
@@ -1972,9 +1972,9 @@ input:
       via gRPC.
 
       > [!NOTE]
-      > Telegraf v1.32 through v1.35 support the Profiles signal using the v1
-      > experimental API. Telegraf v1.36+ supports the Profiles signal using the
-      > v1 development API.
+      > Telegraf v1.32 through v1.35 support the Profiles signal using the **v1
+      > experimental API**. Telegraf v1.36+ supports the
+      > **v1 development API**.
     introduced: v1.19.0
     os_support: [freebsd, linux, macos, solaris, windows]
     tags: [logging, messaging]
@@ -4056,10 +4056,10 @@ processor:
   - name: Round
     id: round
     description: |
-      This plugin allows to round numerical field values to the configured
+      This plugin rounds numerical field values to the configured
       precision. This is particularly useful in combination with the [dedup
       processor](/telegraf/v1/plugins/#processor-dedup) to reduce the number of
-      metrics sent to the output if only a lower precision is required for the
+      metrics sent to the output when a lower precision is required for the
       values.
     introduced: v1.36.0
     os_support: [freebsd, linux, macos, solaris, windows]
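A usage sketch for the `inputs.opentelemetry` note revised in PATCH 31 above: the plugin needs only a gRPC listener address to start receiving OTLP data, including the Profiles signal on Telegraf v1.36+. This assumes the plugin's `service_address` option and the conventional OTLP/gRPC port; treat it as illustrative and confirm against the plugin README.

```toml
# Sketch: receive OTLP traces, metrics, logs, and profiles over gRPC.
# `service_address` and port 4317 follow OTLP/gRPC convention; confirm the
# option name in the inputs.opentelemetry README.
[[inputs.opentelemetry]]
  service_address = "0.0.0.0:4317"
```

To verify data arrives, point an OpenTelemetry SDK or Collector exporter at the configured host and port before enabling profile collection.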