feat(influxdb3): add plugin library with comprehensive plugin documentation:

- Add plugin library structure for Core and Enterprise products
- Create shared content directory for plugin documentation
- Port 12 plugin documentation files from influxdb3_plugins repository
- Add GitHub repository links in related frontmatter for all plugins
- Remove emoji metadata tags and clean up content structure
- Implement standardized support sections with product-specific links
- Reorganize plugins navigation with dedicated library section
- Include 2 example plugins and 10 official InfluxData plugins
parent b0de478972
commit 2b8e769697
@@ -11,9 +11,9 @@ influxdb3/core/tags: [processing engine, python]
related:
  - /influxdb3/core/reference/cli/influxdb3/test/wal_plugin/
  - /influxdb3/core/reference/cli/influxdb3/create/trigger/
source: /shared/v3-core-plugins/_index.md
source: /shared/influxdb3-plugins/_index.md
---

<!--
//SOURCE - content/shared/v3-core-plugins/_index.md
//SOURCE - content/shared/influxdb3-plugins/_index.md
-->
@@ -8,10 +8,14 @@ menu:
    parent: Processing engine and Python plugins
weight: 4
influxdb3/core/tags: [processing engine, plugins, API, python]
source: /shared/extended-plugin-api.md
aliases:
  - /influxdb3/core/extend-plugin/
related:
  - /influxdb3/core/reference/cli/influxdb3/create/trigger/
  - /influxdb3/core/reference/cli/influxdb3/test/
  - /influxdb3/core/reference/processing-engine/
source: /shared/influxdb3-plugins/extended-plugin-api.md
---

<!--
// SOURCE content/shared/extended-plugin-api.md
-->
<!-- //SOURCE content/shared/influxdb3-plugins/extended-plugin-api.md -->
@@ -0,0 +1,13 @@
---
title: Plugin library
description: Browse available plugins for {{% product-name %}} to extend your database functionality with custom Python code.
menu:
  influxdb3_core:
    name: Plugin library
    parent: Processing engine and Python plugins
weight: 5
influxdb3/core/tags: [plugins, processing engine, python]
source: /shared/influxdb3-plugins/plugins-library/_index.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/_index.md -->
@@ -0,0 +1,13 @@
---
title: Example plugins
description: Start with example plugins that demonstrate common use cases.
menu:
  influxdb3_core:
    name: Example plugins
    parent: Plugin library
weight: 1
influxdb3/core/tags: [plugins, processing engine, python, examples]
source: /shared/influxdb3-plugins/plugins-library/examples/_index.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/examples/_index.md -->
@@ -0,0 +1,15 @@
---
title: System metrics plugin
description: Collects comprehensive system performance metrics including CPU, memory, disk, and network statistics.
menu:
  influxdb3_core:
    name: System metrics
    parent: Example plugins
weight: 1
influxdb3/core/tags: [plugins, processing engine, python, monitoring, system-metrics, performance]
related:
  - https://github.com/influxdata/influxdb3_plugins/tree/main/examples/system-metrics, System metrics plugin on GitHub
source: /shared/influxdb3-plugins/plugins-library/examples/system-metrics.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/examples/system-metrics.md -->
@@ -0,0 +1,15 @@
---
title: WAL plugin
description: Example Write-Ahead Log (WAL) plugin that demonstrates processing data as it's written to the database.
menu:
  influxdb3_core:
    name: WAL plugin
    parent: Example plugins
weight: 2
influxdb3/core/tags: [plugins, processing engine, python, wal, data-write]
related:
  - https://github.com/influxdata/influxdb3_plugins/tree/main/examples/wal-plugin, WAL plugin on GitHub
source: /shared/influxdb3-plugins/plugins-library/examples/wal-plugin.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/examples/wal-plugin.md -->
@@ -0,0 +1,13 @@
---
title: Official plugins
description: Production-ready plugins developed and maintained by InfluxData.
menu:
  influxdb3_core:
    name: Official plugins
    parent: Plugin library
weight: 2
influxdb3/core/tags: [plugins, processing engine, python, official]
source: /shared/influxdb3-plugins/plugins-library/official/_index.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/official/_index.md -->
@@ -0,0 +1,15 @@
---
title: Basic transformation plugin
description: Provides common data transformation functions for modifying and enriching time series data.
menu:
  influxdb3_core:
    name: Basic transformation
    parent: Official plugins
weight: 1
influxdb3/core/tags: [plugins, processing engine, python, transformation, data-processing]
related:
  - https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/basic_transformation, Basic transformation plugin on GitHub
source: /shared/influxdb3-plugins/plugins-library/official/basic-transformation.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/official/basic-transformation.md -->
@@ -0,0 +1,15 @@
---
title: Downsampler plugin
description: Automatically downsample and aggregate time series data at configurable intervals.
menu:
  influxdb3_core:
    name: Downsampler
    parent: Official plugins
weight: 2
influxdb3/core/tags: [plugins, processing engine, python, downsampling, aggregation, performance]
related:
  - https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/downsampler, Downsampler plugin on GitHub
source: /shared/influxdb3-plugins/plugins-library/official/downsampler.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/official/downsampler.md -->
@@ -0,0 +1,15 @@
---
title: Forecast error evaluator plugin
description: Evaluate forecast accuracy by comparing predicted values against actual measurements.
menu:
  influxdb3_core:
    name: Forecast error evaluator
    parent: Official plugins
weight: 3
influxdb3/core/tags: [plugins, processing engine, python, forecasting, evaluation, analytics]
related:
  - https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/forecast_error_evaluator, Forecast error evaluator plugin on GitHub
source: /shared/influxdb3-plugins/plugins-library/official/forecast-error-evaluator.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/official/forecast-error-evaluator.md -->
@@ -0,0 +1,15 @@
---
title: InfluxDB to Iceberg plugin
description: Export time series data from InfluxDB to Apache Iceberg table format for data lake integration.
menu:
  influxdb3_core:
    name: InfluxDB to Iceberg
    parent: Official plugins
weight: 4
influxdb3/core/tags: [plugins, processing engine, python, iceberg, export, data-lake]
related:
  - https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/influxdb_to_iceberg, InfluxDB to Iceberg plugin on GitHub
source: /shared/influxdb3-plugins/plugins-library/official/influxdb-to-iceberg.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/official/influxdb-to-iceberg.md -->
@@ -0,0 +1,15 @@
---
title: MAD-based anomaly detection plugin
description: Detect anomalies using Median Absolute Deviation (MAD) statistical analysis.
menu:
  influxdb3_core:
    name: MAD anomaly detection
    parent: Official plugins
weight: 5
influxdb3/core/tags: [plugins, processing engine, python, anomaly-detection, statistics, monitoring]
related:
  - https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/mad_check, MAD-based anomaly detection plugin on GitHub
source: /shared/influxdb3-plugins/plugins-library/official/mad-check.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/official/mad-check.md -->
@@ -0,0 +1,15 @@
---
title: Notifier plugin
description: Send notifications and alerts to various channels including email, Slack, and webhooks.
menu:
  influxdb3_core:
    name: Notifier
    parent: Official plugins
weight: 6
influxdb3/core/tags: [plugins, processing engine, python, notifications, alerting, integration]
related:
  - https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/notifier, Notifier plugin on GitHub
source: /shared/influxdb3-plugins/plugins-library/official/notifier.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/official/notifier.md -->
@@ -0,0 +1,15 @@
---
title: Prophet forecasting plugin
description: Generate time series forecasts using Facebook Prophet for predictive analytics.
menu:
  influxdb3_core:
    name: Prophet forecasting
    parent: Official plugins
weight: 7
influxdb3/core/tags: [plugins, processing engine, python, forecasting, prophet, machine-learning]
related:
  - https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/prophet_forecasting, Prophet forecasting plugin on GitHub
source: /shared/influxdb3-plugins/plugins-library/official/prophet-forecasting.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/official/prophet-forecasting.md -->
@@ -0,0 +1,15 @@
---
title: State change plugin
description: Detect and track state changes in time series data for event monitoring.
menu:
  influxdb3_core:
    name: State change
    parent: Official plugins
weight: 8
influxdb3/core/tags: [plugins, processing engine, python, state-tracking, event-detection, monitoring]
related:
  - https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/state_change, State change plugin on GitHub
source: /shared/influxdb3-plugins/plugins-library/official/state-change.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/official/state-change.md -->
@@ -0,0 +1,15 @@
---
title: Stateless ADTK detector plugin
description: Perform anomaly detection using the Anomaly Detection Toolkit (ADTK) without maintaining state.
menu:
  influxdb3_core:
    name: Stateless ADTK detector
    parent: Official plugins
weight: 9
influxdb3/core/tags: [plugins, processing engine, python, anomaly-detection, adtk, stateless]
related:
  - https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/stateless_adtk_detector, Stateless ADTK detector plugin on GitHub
source: /shared/influxdb3-plugins/plugins-library/official/stateless-adtk-detector.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/official/stateless-adtk-detector.md -->
@@ -0,0 +1,15 @@
---
title: Threshold deadman checks plugin
description: Monitor data thresholds and detect missing data with deadman checks for alerting.
menu:
  influxdb3_core:
    name: Threshold deadman checks
    parent: Official plugins
weight: 10
influxdb3/core/tags: [plugins, processing engine, python, monitoring, thresholds, deadman, alerting]
related:
  - https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/threshold_deadman_checks, Threshold deadman checks plugin on GitHub
source: /shared/influxdb3-plugins/plugins-library/official/threshold-deadman-checks.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/official/threshold-deadman-checks.md -->
@@ -11,9 +11,9 @@ influxdb3/enterprise/tags: [processing engine, python]
related:
  - /influxdb3/enterprise/reference/cli/influxdb3/test/wal_plugin/
  - /influxdb3/enterprise/reference/cli/influxdb3/create/trigger/
source: /shared/v3-core-plugins/_index.md
source: /shared/influxdb3-plugins/_index.md
---

<!--
//SOURCE - content/shared/v3-core-plugins/_index.md
//SOURCE - content/shared/influxdb3-plugins/_index.md
-->
@@ -8,9 +8,15 @@ menu:
    parent: Processing engine and Python plugins
weight: 4
influxdb3/enterprise/tags: [processing engine, plugins, API, python]
source: /shared/extended-plugin-api.md
aliases:
  - /influxdb3/enterprise/extend-plugin/
related:
  - /influxdb3/enterprise/reference/cli/influxdb3/create/trigger/
  - /influxdb3/enterprise/reference/cli/influxdb3/test/
  - /influxdb3/enterprise/reference/processing-engine/
source: /shared/influxdb3-plugins/extended-plugin-api.md
---

<!--
// SOURCE content/shared/extended-plugin-api.md
// SOURCE content/shared/influxdb3-plugins/extended-plugin-api.md
-->
@@ -0,0 +1,13 @@
---
title: Plugin library
description: Browse available plugins for {{% product-name %}} to extend your database functionality with custom Python code.
menu:
  influxdb3_enterprise:
    name: Plugin library
    parent: Processing engine and Python plugins
weight: 5
influxdb3/enterprise/tags: [plugins, processing engine, python]
source: /shared/influxdb3-plugins/plugins-library/_index.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/_index.md -->
@@ -0,0 +1,13 @@
---
title: Example plugins
description: Start with example plugins that demonstrate common use cases.
menu:
  influxdb3_enterprise:
    name: Example plugins
    parent: Plugin library
weight: 1
influxdb3/enterprise/tags: [plugins, processing engine, python, examples]
source: /shared/influxdb3-plugins/plugins-library/examples/_index.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/examples/_index.md -->
@@ -0,0 +1,15 @@
---
title: System metrics plugin
description: Collects comprehensive system performance metrics including CPU, memory, disk, and network statistics.
menu:
  influxdb3_enterprise:
    name: System metrics
    parent: Example plugins
weight: 1
influxdb3/enterprise/tags: [plugins, processing engine, python, monitoring, system-metrics, performance]
related:
  - https://github.com/influxdata/influxdb3_plugins/tree/main/examples/system-metrics, System metrics plugin on GitHub
source: /shared/influxdb3-plugins/plugins-library/examples/system-metrics.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/examples/system-metrics.md -->
@@ -0,0 +1,15 @@
---
title: WAL plugin
description: Example Write-Ahead Log (WAL) plugin that demonstrates processing data as it's written to the database.
menu:
  influxdb3_enterprise:
    name: WAL plugin
    parent: Example plugins
weight: 2
influxdb3/enterprise/tags: [plugins, processing engine, python, wal, data-write]
related:
  - https://github.com/influxdata/influxdb3_plugins/tree/main/examples/wal-plugin, WAL plugin on GitHub
source: /shared/influxdb3-plugins/plugins-library/examples/wal-plugin.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/examples/wal-plugin.md -->
@@ -0,0 +1,13 @@
---
title: Official plugins
description: Production-ready plugins developed and maintained by InfluxData.
menu:
  influxdb3_enterprise:
    name: Official plugins
    parent: Plugin library
weight: 2
influxdb3/enterprise/tags: [plugins, processing engine, python, official]
source: /shared/influxdb3-plugins/plugins-library/official/_index.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/official/_index.md -->
@@ -0,0 +1,15 @@
---
title: Basic transformation plugin
description: Provides common data transformation functions for modifying and enriching time series data.
menu:
  influxdb3_enterprise:
    name: Basic transformation
    parent: Official plugins
weight: 1
influxdb3/enterprise/tags: [plugins, processing engine, python, transformation, data-processing]
related:
  - https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/basic_transformation, Basic transformation plugin on GitHub
source: /shared/influxdb3-plugins/plugins-library/official/basic-transformation.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/official/basic-transformation.md -->
@@ -0,0 +1,15 @@
---
title: Downsampler plugin
description: Automatically downsample and aggregate time series data at configurable intervals.
menu:
  influxdb3_enterprise:
    name: Downsampler
    parent: Official plugins
weight: 2
influxdb3/enterprise/tags: [plugins, processing engine, python, downsampling, aggregation, performance]
related:
  - https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/downsampler, Downsampler plugin on GitHub
source: /shared/influxdb3-plugins/plugins-library/official/downsampler.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/official/downsampler.md -->
@@ -0,0 +1,15 @@
---
title: Forecast error evaluator plugin
description: Evaluate forecast accuracy by comparing predicted values against actual measurements.
menu:
  influxdb3_enterprise:
    name: Forecast error evaluator
    parent: Official plugins
weight: 3
influxdb3/enterprise/tags: [plugins, processing engine, python, forecasting, evaluation, analytics]
related:
  - https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/forecast_error_evaluator, Forecast error evaluator plugin on GitHub
source: /shared/influxdb3-plugins/plugins-library/official/forecast-error-evaluator.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/official/forecast-error-evaluator.md -->
@@ -0,0 +1,15 @@
---
title: InfluxDB to Iceberg plugin
description: Export time series data from InfluxDB to Apache Iceberg table format for data lake integration.
menu:
  influxdb3_enterprise:
    name: InfluxDB to Iceberg
    parent: Official plugins
weight: 4
influxdb3/enterprise/tags: [plugins, processing engine, python, iceberg, export, data-lake]
related:
  - https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/influxdb_to_iceberg, InfluxDB to Iceberg plugin on GitHub
source: /shared/influxdb3-plugins/plugins-library/official/influxdb-to-iceberg.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/official/influxdb-to-iceberg.md -->
@@ -0,0 +1,15 @@
---
title: MAD anomaly detection plugin
description: Detect anomalies using Median Absolute Deviation (MAD) statistical analysis.
menu:
  influxdb3_enterprise:
    name: MAD anomaly detection
    parent: Official plugins
weight: 5
influxdb3/enterprise/tags: [plugins, processing engine, python, anomaly-detection, statistics, monitoring]
related:
  - https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/mad_check, MAD-based anomaly detection plugin on GitHub
source: /shared/influxdb3-plugins/plugins-library/official/mad-check.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/official/mad-check.md -->
@@ -0,0 +1,15 @@
---
title: Notifier plugin
description: Send notifications and alerts to various channels including email, Slack, and webhooks.
menu:
  influxdb3_enterprise:
    name: Notifier
    parent: Official plugins
weight: 6
influxdb3/enterprise/tags: [plugins, processing engine, python, notifications, alerting, integration]
related:
  - https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/notifier, Notifier plugin on GitHub
source: /shared/influxdb3-plugins/plugins-library/official/notifier.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/official/notifier.md -->
@@ -0,0 +1,15 @@
---
title: Prophet forecasting plugin
description: Generate time series forecasts using Facebook Prophet for predictive analytics.
menu:
  influxdb3_enterprise:
    name: Prophet forecasting
    parent: Official plugins
weight: 7
influxdb3/enterprise/tags: [plugins, processing engine, python, forecasting, prophet, machine-learning]
related:
  - https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/prophet_forecasting, Prophet forecasting plugin on GitHub
source: /shared/influxdb3-plugins/plugins-library/official/prophet-forecasting.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/official/prophet-forecasting.md -->
@@ -0,0 +1,15 @@
---
title: State change plugin
description: Detect and track state changes in time series data for event monitoring.
menu:
  influxdb3_enterprise:
    name: State change
    parent: Official plugins
weight: 8
influxdb3/enterprise/tags: [plugins, processing engine, python, state-tracking, event-detection, monitoring]
related:
  - https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/state_change, State change plugin on GitHub
source: /shared/influxdb3-plugins/plugins-library/official/state-change.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/official/state-change.md -->
@@ -0,0 +1,15 @@
---
title: Stateless ADTK detector plugin
description: Perform anomaly detection using the Anomaly Detection Toolkit (ADTK) without maintaining state.
menu:
  influxdb3_enterprise:
    name: Stateless ADTK detector
    parent: Official plugins
weight: 9
influxdb3/enterprise/tags: [plugins, processing engine, python, anomaly-detection, adtk, stateless]
related:
  - https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/stateless_adtk_detector, Stateless ADTK detector plugin on GitHub
source: /shared/influxdb3-plugins/plugins-library/official/stateless-adtk-detector.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/official/stateless-adtk-detector.md -->
@@ -0,0 +1,15 @@
---
title: Threshold deadman checks plugin
description: Monitor data thresholds and detect missing data with deadman checks for alerting.
menu:
  influxdb3_enterprise:
    name: Threshold deadman checks
    parent: Official plugins
weight: 10
influxdb3/enterprise/tags: [plugins, processing engine, python, monitoring, thresholds, deadman, alerting]
related:
  - https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/threshold_deadman_checks, Threshold deadman checks plugin on GitHub
source: /shared/influxdb3-plugins/plugins-library/official/threshold-deadman-checks.md
---

<!-- //SOURCE - content/shared/influxdb3-plugins/plugins-library/official/threshold-deadman-checks.md -->
@@ -7,7 +7,7 @@ including the following:
> [!Tip]
> #### Find support for {{% product-name %}}
>
> The [InfluxDB Discord server](https://discord.gg/9zaNCW2PRT) is the best place to find support for {{% product-name %}}.
> The [InfluxDB Discord server](https://discord.gg/9zaNCW2PRT) is the best place to find support for InfluxDB 3 Core and InfluxDB 3 Enterprise.
> For other InfluxDB versions, see the [Support and feedback](#bug-reports-and-feedback) options.

## Data model
@@ -85,17 +85,19 @@ You have two main options for adding plugins to your InfluxDB instance:

### Use example plugins

InfluxData provides a public repository of example plugins that you can use immediately.
InfluxData provides official plugins that you can use immediately in your Processing Engine setup.

#### Browse plugin examples

Visit the [influxdb3_plugins repository](https://github.com/influxdata/influxdb3_plugins) to find examples for:
Visit the [plugin library](/influxdb3/version/plugins/library/) to find starter examples and official plugins for:

- **Data transformation**: Process and transform incoming data
- **Alerting**: Send notifications based on data thresholds
- **Aggregation**: Calculate statistics on time series data
- **Integration**: Connect to external services and APIs
- **System monitoring**: Track resource usage and health metrics

For more examples and community contributions, see the [influxdb3_plugins repository](https://github.com/influxdata/influxdb3_plugins) on GitHub.

#### Add example plugins
@@ -0,0 +1,17 @@
Browse plugins for {{% product-name %}}. Use these plugins to extend your database functionality with custom Python code that runs on write events, schedules, or HTTP requests.

{{< children show="sections" >}}

## Requirements

All plugins require:

- InfluxDB 3 Core or InfluxDB 3 Enterprise with Processing Engine enabled
- Python environment (managed automatically by InfluxDB 3)
- Appropriate trigger configuration
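
For example, each plugin guide in this library enables the Processing Engine by starting the server with a plugin directory (the same command appears in each plugin's installation steps):

```bash
influxdb3 serve \
  --node-id node0 \
  --object-store file \
  --data-dir ~/.influxdb3 \
  --plugin-dir ~/.plugins
```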

## Plugin metadata

Plugins in this library include a JSON metadata schema in a docstring header that defines supported trigger types and configuration parameters. This metadata enables:

- the [InfluxDB 3 Explorer UI](/influxdb3/explorer/) to display and configure the plugin
- automated testing and validation of plugins in the repository
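
As a rough sketch, such a docstring header might look like the following. The key names here are illustrative assumptions, not the repository's authoritative schema; check any plugin in the influxdb3_plugins repository for the real format:

```python
# Hypothetical example of a plugin docstring metadata header.
# The key names below are illustrative assumptions, not the
# authoritative schema used by the influxdb3_plugins repository.
"""
{
    "plugin_type": ["scheduled"],
    "scheduled_args_config": [
        {
            "name": "hostname",
            "example": "server01",
            "description": "Hostname to tag metrics with",
            "required": false
        }
    ]
}
"""
```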

@@ -0,0 +1,3 @@
Example plugins demonstrate common use cases and patterns for extending {{% product-name %}} with custom Python code.

{{< children >}}
@@ -0,0 +1,248 @@
The System Metrics Plugin collects comprehensive system performance metrics, including CPU, memory, disk, and network statistics.
This plugin runs on a schedule to provide regular monitoring of your server infrastructure.

## Configuration

### Optional parameters

| Parameter  | Type   | Default       | Description                          |
|------------|--------|---------------|--------------------------------------|
| `hostname` | string | `"localhost"` | Hostname to tag system metrics with  |

## Requirements

### Software requirements

- InfluxDB 3 Core or InfluxDB 3 Enterprise with Processing Engine enabled
- Python packages:
  - `psutil` (for system metrics collection)

### Installation steps

1. Start InfluxDB 3 with plugin support:

   ```bash
   influxdb3 serve \
     --node-id node0 \
     --object-store file \
     --data-dir ~/.influxdb3 \
     --plugin-dir ~/.plugins
   ```

2. Install required Python packages:

   ```bash
   influxdb3 install package psutil
   ```

## Trigger setup

### Scheduled collection

Collect system metrics every 30 seconds:

```bash
influxdb3 create trigger \
  --database monitoring \
  --plugin-filename examples/schedule/system_metrics/system_metrics.py \
  --trigger-spec "every:30s" \
  --trigger-arguments 'hostname=server01' \
  system_monitoring
```

## Example usage

### Example: Basic system monitoring

Set up comprehensive system monitoring for a server:

```bash
# Create the monitoring trigger
influxdb3 create trigger \
  --database monitoring \
  --plugin-filename examples/schedule/system_metrics/system_metrics.py \
  --trigger-spec "every:60s" \
  --trigger-arguments 'hostname=web-server-01' \
  web_server_monitoring

# Query CPU metrics
influxdb3 query \
  --database monitoring \
  "SELECT host, cpu, user, system, idle FROM system_cpu WHERE time >= now() - interval '5 minutes'"

# Query memory metrics
influxdb3 query \
  --database monitoring \
  "SELECT host, total, used, available, percent FROM system_memory WHERE time >= now() - interval '5 minutes'"
```

### Expected output

**CPU metrics (`system_cpu`)**:

```
host          | cpu   | user | system | idle | time
--------------|-------|------|--------|------|---------------------
web-server-01 | total | 15.2 | 8.1    | 76.7 | 2024-01-01T12:00:00Z
```

**Memory metrics (`system_memory`)**:

```
host          | total      | used       | available  | percent | time
--------------|------------|------------|------------|---------|---------------------
web-server-01 | 8589934592 | 4294967296 | 4294967296 | 50.0    | 2024-01-01T12:00:00Z
```

## Collected measurements

### system_cpu

Overall CPU statistics and performance metrics.

**Tags:**

- `host`: Hostname (from configuration)
- `cpu`: Always "total" for aggregate metrics

**Fields:**

- `user`, `system`, `idle`, `iowait`, `nice`, `irq`, `softirq`, `steal`, `guest`, `guest_nice`: CPU time percentages
- `frequency_current`, `frequency_min`, `frequency_max`: CPU frequency in MHz
- `ctx_switches`, `interrupts`, `soft_interrupts`, `syscalls`: Context switch, interrupt, and system call counters
- `load1`, `load5`, `load15`: System load averages

### system_cpu_cores

Per-core CPU metrics for detailed monitoring.

**Tags:**

- `host`: Hostname
- `core`: CPU core number (0, 1, 2, etc.)

**Fields:**

- `usage`: CPU usage percentage for this core
- `user`, `system`, `idle`: CPU time breakdowns per core
- `frequency_current`, `frequency_min`, `frequency_max`: Per-core frequency

### system_memory

System memory and virtual memory statistics.

**Tags:**

- `host`: Hostname

**Fields:**

- `total`, `available`, `used`, `free`: Memory amounts in bytes
- `active`, `inactive`, `buffers`, `cached`, `shared`, `slab`: Memory usage breakdown
- `percent`: Memory usage percentage

### system_swap

Swap memory usage and statistics.

**Tags:**

- `host`: Hostname

**Fields:**

- `total`, `used`, `free`: Swap amounts in bytes
- `percent`: Swap usage percentage
- `sin`, `sout`: Swap in/out operations

### system_disk_usage

Disk space usage for each mounted filesystem.

**Tags:**

- `host`: Hostname
- `device`: Device name (for example, /dev/sda1)
- `mountpoint`: Mount point path
- `fstype`: Filesystem type

**Fields:**

- `total`, `used`, `free`: Disk space in bytes
- `percent`: Disk usage percentage

### system_disk_io

Disk I/O statistics for each disk device.

**Tags:**

- `host`: Hostname
- `device`: Device name

**Fields:**

- `reads`, `writes`: Number of read/write operations
- `read_bytes`, `write_bytes`: Bytes read/written
- `read_time`, `write_time`: Time spent on I/O operations
- `busy_time`: Time disk was busy
- `read_merged_count`, `write_merged_count`: Merged I/O operations

### system_network

Network interface statistics.

**Tags:**

- `host`: Hostname
- `interface`: Network interface name (eth0, wlan0, etc.)

**Fields:**

- `bytes_sent`, `bytes_recv`: Network traffic in bytes
- `packets_sent`, `packets_recv`: Network packets
- `errin`, `errout`: Input/output errors
- `dropin`, `dropout`: Dropped packets

## Code overview

### Files

- `system_metrics.py`: Main plugin code that collects and writes system metrics

### Main functions

#### `process_scheduled_call(influxdb3_local, time, args)`

Entry point for scheduled metric collection. Orchestrates collection of all metric types.

#### `collect_cpu_metrics(influxdb3_local, hostname)`

Collects CPU statistics, including overall and per-core metrics.

#### `collect_memory_metrics(influxdb3_local, hostname)`

Collects virtual memory, swap, and memory fault statistics.

#### `collect_disk_metrics(influxdb3_local, hostname)`

Collects disk usage and I/O performance metrics.

#### `collect_network_metrics(influxdb3_local, hostname)`

Collects network interface statistics.
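
To make the flow concrete, here is a minimal sketch of what the scheduled entry point might look like. It assumes the Processing Engine's injected `LineBuilder` and logging helpers, collects only a few CPU fields, and is an illustration rather than the plugin's actual source:

```python
import psutil

# LineBuilder is assumed to be injected into the plugin's namespace by
# the Processing Engine at runtime; it is not imported here.

def process_scheduled_call(influxdb3_local, time, args):
    """Collect a small subset of CPU metrics on each scheduled run."""
    hostname = args.get("hostname", "localhost") if args else "localhost"

    cpu = psutil.cpu_times_percent(interval=None)

    line = LineBuilder("system_cpu")
    line.tag("host", hostname)
    line.tag("cpu", "total")
    line.float64_field("user", cpu.user)
    line.float64_field("system", cpu.system)
    line.float64_field("idle", cpu.idle)
    influxdb3_local.write(line)

    influxdb3_local.info(f"Collected CPU metrics for {hostname}")
```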

### Logging

Logs are stored in the `_internal` database in the `system.processing_engine_logs` table. To view logs:

{{% code-placeholders "AUTH_TOKEN" %}}
```bash
influxdb3 query \
  --database _internal \
  --token AUTH_TOKEN \
  "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'system_monitoring'"
```
{{% /code-placeholders %}}

Replace {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}} with your {{% token-link "admin" %}}.

## Troubleshooting

### Common issues

#### Issue: Missing psutil module

**Solution**: Install the `psutil` package:

```bash
influxdb3 install package psutil
```

#### Issue: Permission denied errors for disk metrics

**Solution**: This is normal for system partitions that require elevated permissions. The plugin skips these and continues collecting other metrics.

#### Issue: No per-core CPU metrics

**Solution**: This can happen on systems where per-core data isn't available. The overall CPU metrics are still collected.

### Performance considerations

- Collection frequency: 30-60 second intervals are recommended for most use cases
- The plugin handles errors gracefully and continues collecting available metrics
- Some metrics may not be available on all operating systems (the plugin handles this automatically)

## Report an issue

For plugin issues, see the Plugins repository [issues page](https://github.com/influxdata/influxdb3_plugins/issues).

## Find support for {{% product-name %}}

The [InfluxDB Discord server](https://discord.gg/9zaNCW2PRT) is the best place to find support for InfluxDB 3 Core and InfluxDB 3 Enterprise.
For other InfluxDB versions, see the [Support and feedback](#bug-reports-and-feedback) options.
@@ -0,0 +1,246 @@
The example WAL plugin monitors data write operations in InfluxDB 3 by tracking row counts for each table during WAL (Write-Ahead Log) flush events.
It creates summary reports in a `write_reports` table to help analyze data ingestion patterns and rates.
The plugin can optionally double-count rows for a specified table to demonstrate configurable behavior.

## Configuration

### Optional parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `double_count_table` | string | none | Table name for which to double the row count in write reports (for testing/demonstration) |

## Requirements

### Software requirements

- InfluxDB 3 Core or InfluxDB 3 Enterprise with Processing Engine enabled
- No additional Python packages required (uses built-in libraries)

### Installation steps

1. Start InfluxDB 3 with plugin support:

   ```bash
   influxdb3 serve \
     --node-id node0 \
     --object-store file \
     --data-dir ~/.influxdb3 \
     --plugin-dir ~/.plugins
   ```

## Trigger setup

### Write monitoring

Monitor all table writes and generate write reports:

```bash
influxdb3 create trigger \
  --database monitoring \
  --plugin-filename examples/wal_plugin/wal_plugin.py \
  --trigger-spec "all_tables" \
  wal_monitoring
```

### Write monitoring with special handling

Monitor writes with special handling for a specific table:

```bash
influxdb3 create trigger \
  --database monitoring \
  --plugin-filename examples/wal_plugin/wal_plugin.py \
  --trigger-spec "all_tables" \
  --trigger-arguments 'double_count_table=temperature' \
  wal_monitoring_special
```

## Example usage

### Example: Basic write monitoring

Set up write monitoring to track data ingestion:

```bash
# Create the monitoring trigger
influxdb3 create trigger \
  --database testdb \
  --plugin-filename examples/wal_plugin/wal_plugin.py \
  --trigger-spec "all_tables" \
  write_monitor

# Write test data to various tables
influxdb3 write \
  --database testdb \
  "temperature,location=office value=22.5"

influxdb3 write \
  --database testdb \
  "humidity,location=office value=45.2"

influxdb3 write \
  --database testdb \
  "pressure,location=office value=1013.25"

# The plugin automatically generates write reports in the write_reports measurement.

# Query the write reports
influxdb3 query \
  --database testdb \
  "SELECT * FROM write_reports ORDER BY time DESC"
```

### Expected output

```
table_name  | row_count | time
------------|-----------|---------------------
pressure    | 1         | 2024-01-01T12:02:00Z
humidity    | 1         | 2024-01-01T12:01:00Z
temperature | 1         | 2024-01-01T12:00:00Z
```

### Example: Monitoring with special table handling

Monitor writes with doubled counting for temperature data:

```bash
# Create trigger with special handling
influxdb3 create trigger \
  --database testdb \
  --plugin-filename examples/wal_plugin/wal_plugin.py \
  --trigger-spec "all_tables" \
  --trigger-arguments 'double_count_table=temperature' \
  write_monitor_special

# Write test data
influxdb3 write \
  --database testdb \
  "temperature,location=office value=22.5"

influxdb3 write \
  --database testdb \
  "humidity,location=office value=45.2"

# Query the write reports
influxdb3 query \
  --database testdb \
  "SELECT * FROM write_reports ORDER BY time DESC"
```

### Expected output

```
table_name  | row_count | time
------------|-----------|---------------------
humidity    | 1         | 2024-01-01T12:01:00Z
temperature | 2         | 2024-01-01T12:00:00Z
```

**Note**: The temperature table shows a row count of 2 even though only 1 row was written, demonstrating the `double_count_table` parameter.

## Generated measurements

### write_reports

Tracks the number of rows written to each table during WAL flush events.

**Tags:**

- `table_name`: Name of the table that received writes

**Fields:**

- `row_count`: Number of rows written in this WAL flush (integer)

**Special behavior:**

- If the `double_count_table` parameter matches the table name, the row count is doubled
- The plugin automatically skips the `write_reports` table to avoid infinite recursion

## Code overview

### Files

- `wal_plugin.py`: Main plugin code that processes write batches and generates reports

### Main functions

#### `process_writes(influxdb3_local, table_batches, args)`

Entry point for processing write batches. Called each time data is written to the database.

**Parameters:**

- `influxdb3_local`: InfluxDB client for writing and logging
- `table_batches`: List of table batches containing written data
- `args`: Configuration arguments from trigger setup

**Processing logic** (see the sketch after this list):

1. Iterates through each table batch in the write operation
2. Skips the `write_reports` table to prevent recursion
3. Counts rows in each batch
4. Applies special handling if `double_count_table` matches
5. Writes a report record to the `write_reports` measurement
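
A minimal sketch of that logic follows, assuming the engine passes each table batch as a dictionary with `table_name` and `rows` keys and injects `LineBuilder` into the plugin's namespace; it is an illustration, not the plugin's exact source:

```python
def process_writes(influxdb3_local, table_batches, args):
    """Write one report row per table batch in this WAL flush."""
    double_count_table = args.get("double_count_table") if args else None

    for table_batch in table_batches:
        table_name = table_batch["table_name"]

        # Skip our own output table to avoid infinite recursion.
        if table_name == "write_reports":
            continue

        row_count = len(table_batch["rows"])
        if table_name == double_count_table:
            row_count *= 2  # Demonstration behavior from the config table above

        line = LineBuilder("write_reports")
        line.tag("table_name", table_name)
        line.int64_field("row_count", row_count)
        influxdb3_local.write(line)
```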

### Logging

Logs are stored in the `_internal` database in the `system.processing_engine_logs` table. To view logs:

{{% code-placeholders "AUTH_TOKEN" %}}
```bash
influxdb3 query \
  --database _internal \
  --token AUTH_TOKEN \
  "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'wal_monitoring'"
```
{{% /code-placeholders %}}

Replace {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}} with your {{% token-link "admin" %}}.

## Troubleshooting

### Common issues

#### Issue: No write reports appearing

**Solution**:

1. Verify the trigger was created successfully:

   {{% code-placeholders "DATABASE_NAME|AUTH_TOKEN" %}}
   ```bash
   influxdb3 show summary --database DATABASE_NAME --token AUTH_TOKEN
   ```
   {{% /code-placeholders %}}

   Replace the following:

   - {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database
   - {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in "enterprise" %}} with read permissions on the specified database{{% /show-in %}}

2. Check that data is actually being written to tables other than `write_reports`
3. Review logs for errors

#### Issue: Infinite recursion with write_reports

**Solution**: This shouldn't happen because the plugin automatically skips the `write_reports` table, but if you see it:

1. Check that you haven't modified the plugin to remove the skip logic
2. Verify the table name comparison is working correctly

#### Issue: Row counts seem incorrect

**Solution**:

1. Remember that row counts represent WAL flush batches, not individual write operations
2. Multiple write operations may be batched together before the plugin processes them
3. Check whether `double_count_table` is set and affecting specific tables

### Performance considerations

- The plugin processes every write operation but adds minimal overhead
- The plugin generates one additional write per table per WAL flush batch
- Consider the storage impact of write reports on high-volume systems

### Use cases

- **Write monitoring**: Track data ingestion patterns and volumes
- **Debugging**: Identify which tables are receiving writes
- **Performance analysis**: Monitor write batch sizes and patterns
- **Data validation**: Verify expected write volumes
- **Testing**: Use the `double_count_table` parameter for testing scenarios

## Report an issue

For plugin issues, see the Plugins repository [issues page](https://github.com/influxdata/influxdb3_plugins/issues).

## Find support for {{% product-name %}}

The [InfluxDB Discord server](https://discord.gg/9zaNCW2PRT) is the best place to find support for InfluxDB 3 Core and InfluxDB 3 Enterprise.
For other InfluxDB versions, see the [Support and feedback](#bug-reports-and-feedback) options.
@@ -0,0 +1,3 @@
Official plugins developed and maintained by InfluxData for production use with {{% product-name %}}.

{{< children >}}
@@ -0,0 +1,343 @@
The Basic Transformation Plugin enables real-time and scheduled transformation of time series data in InfluxDB 3.
Transform field and tag names, convert values between units, and apply custom string replacements to standardize or clean your data.
The plugin supports both scheduled batch processing of historical data and real-time transformation as data is written.

## Configuration

### General parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `measurement` | string | required | Source measurement containing data to transform |
| `target_measurement` | string | required | Destination measurement for transformed data |
| `target_database` | string | current database | Database for storing transformed data |
| `dry_run` | string | `"false"` | When `"true"`, logs transformations without writing |

### Transformation parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `names_transformations` | string | none | Field/tag name transformation rules. Format: `'field1:"transform1 transform2".field2:"transform3"'` |
| `values_transformations` | string | none | Field value transformation rules. Format: `'field1:"transform1".field2:"transform2"'` |
| `custom_replacements` | string | none | Custom string replacements. Format: `'rule_name:"find=replace"'` |
| `custom_regex` | string | none | Regex patterns for field matching. Format: `'pattern_name:"temp%"'` |

### Data selection parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `window` | string | required (scheduled only) | Historical data window. Format: `<number><unit>` (for example, `"30d"`, `"1h"`) |
| `included_fields` | string | all fields | Dot-separated list of fields to include (for example, `"temp.humidity"`) |
| `excluded_fields` | string | none | Dot-separated list of fields to exclude |
| `filters` | string | none | Query filters. Format: `'field:"operator value"'` |

### Advanced parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `config_file_path` | string | none | Path to TOML config file relative to `PLUGIN_DIR` |

## Requirements

### Software requirements

- InfluxDB 3 Core or InfluxDB 3 Enterprise with Processing Engine enabled
- Python packages:
  - `pint` (for unit conversions)

### Installation steps

1. Start InfluxDB 3 with plugin support:

   ```bash
   influxdb3 serve \
     --node-id node0 \
     --object-store file \
     --data-dir ~/.influxdb3 \
     --plugin-dir ~/.plugins
   ```

2. Install required Python packages:

   ```bash
   influxdb3 install package pint
   ```

## Trigger setup

### Scheduled transformation

Run transformations periodically on historical data:

```bash
influxdb3 create trigger \
  --database mydb \
  --plugin-filename gh:influxdata/basic_transformation/basic_transformation.py \
  --trigger-spec "every:1h" \
  --trigger-arguments 'measurement=temperature,window=24h,target_measurement=temperature_normalized,names_transformations=temp:"snake",values_transformations=temp:"convert_degC_to_degF"' \
  hourly_temp_transform
```

### Real-time transformation

Transform data as it's written:

```bash
influxdb3 create trigger \
  --database mydb \
  --plugin-filename gh:influxdata/basic_transformation/basic_transformation.py \
  --trigger-spec "all_tables" \
  --trigger-arguments 'measurement=sensor_data,target_measurement=sensor_data_clean,names_transformations=.*:"snake alnum_underscore_only"' \
  realtime_clean
```

## Example usage

### Example 1: Temperature unit conversion

Convert temperature readings from Celsius to Fahrenheit while standardizing field names:

```bash
# Create the trigger
influxdb3 create trigger \
  --database weather \
  --plugin-filename gh:influxdata/basic_transformation/basic_transformation.py \
  --trigger-spec "every:30m" \
  --trigger-arguments 'measurement=raw_temps,window=1h,target_measurement=temps_fahrenheit,names_transformations=Temperature:"snake",values_transformations=temperature:"convert_degC_to_degF"' \
  temp_converter

# Write test data
influxdb3 write \
  --database weather \
  "raw_temps,location=office Temperature=22.5"

# Query transformed data (after trigger runs)
influxdb3 query \
  --database weather \
  "SELECT * FROM temps_fahrenheit"
```

### Expected output

```
location | temperature | time
---------|-------------|---------------------
office   | 72.5        | 2024-01-01T00:00:00Z
```

**Transformation details:**

- Before: `Temperature=22.5` (Celsius)
- After: `temperature=72.5` (Fahrenheit, field name converted to snake_case)

### Example 2: Field name standardization

Clean and standardize field names from various sensors:

```bash
# Create trigger with multiple transformations
influxdb3 create trigger \
  --database sensors \
  --plugin-filename gh:influxdata/basic_transformation/basic_transformation.py \
  --trigger-spec "all_tables" \
  --trigger-arguments 'measurement=raw_sensors,target_measurement=clean_sensors,names_transformations=.*:"snake alnum_underscore_only collapse_underscore trim_underscore"' \
  field_cleaner

# Write data with inconsistent field names
# (line protocol escapes spaces in field keys with a backslash)
influxdb3 write \
  --database sensors \
  "raw_sensors,device=sensor1 Room\ Temperature=20.1,__Humidity_%=45.2"

# Query cleaned data
influxdb3 query \
  --database sensors \
  "SELECT * FROM clean_sensors"
```

### Expected output

```
device  | room_temperature | humidity | time
--------|------------------|----------|---------------------
sensor1 | 20.1             | 45.2     | 2024-01-01T00:00:00Z
```

**Transformation details:**

- Before: `Room\ Temperature=20.1`, `__Humidity_%=45.2`
- After: `room_temperature=20.1`, `humidity=45.2` (field names standardized)

### Example 3: Custom string replacements

Replace specific strings in field values:

```bash
# Create trigger with custom replacements
influxdb3 create trigger \
  --database inventory \
  --plugin-filename gh:influxdata/basic_transformation/basic_transformation.py \
  --trigger-spec "every:1d" \
  --trigger-arguments 'measurement=products,window=7d,target_measurement=products_updated,values_transformations=status:"status_replace",custom_replacements=status_replace:"In Stock=available.Out of Stock=unavailable"' \
  status_updater
```

## Using TOML configuration files

This plugin supports using TOML configuration files to specify all plugin arguments. This is useful for complex configurations or when you want to version control your plugin settings.

### Important requirements

**To use TOML configuration files, you must set the `PLUGIN_DIR` environment variable in the InfluxDB 3 host environment.** This is required in addition to the `--plugin-dir` flag when starting InfluxDB 3:

- `--plugin-dir` tells InfluxDB 3 where to find plugin Python files
- The `PLUGIN_DIR` environment variable tells the plugins where to find TOML configuration files

### Setting up TOML configuration

1. **Start InfluxDB 3 with the PLUGIN_DIR environment variable set**:

   ```bash
   PLUGIN_DIR=~/.plugins influxdb3 serve --node-id node0 --object-store file --data-dir ~/.influxdb3 --plugin-dir ~/.plugins
   ```

2. **Copy the example TOML configuration file to your plugin directory**:

   ```bash
   cp basic_transformation_config_scheduler.toml ~/.plugins/
   # or for data writes:
   cp basic_transformation_config_data_writes.toml ~/.plugins/
   ```

3. **Edit the TOML file** to match your requirements (a sketch of what such a file might contain follows these steps). The TOML file contains all the arguments defined in the plugin's argument schema (see the JSON schema in the docstring at the top of `basic_transformation.py`).

4. **Create a trigger using the `config_file_path` argument**:

   ```bash
   influxdb3 create trigger \
     --database mydb \
     --plugin-filename basic_transformation.py \
     --trigger-spec "every:1d" \
     --trigger-arguments config_file_path=basic_transformation_config_scheduler.toml \
     basic_transform_trigger
   ```
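
As a rough illustration, a scheduler configuration might map the trigger arguments from Example 1 into TOML like this. The key names are assumed to mirror the argument names above; the example files shipped in the influxdb3_plugins repository are authoritative:

```toml
# Hypothetical sketch of basic_transformation_config_scheduler.toml.
# Key names are assumed to mirror the plugin's trigger arguments;
# check the example file in the repository for the real schema.
measurement = "raw_temps"
window = "1h"
target_measurement = "temps_fahrenheit"
names_transformations = 'Temperature:"snake"'
values_transformations = 'temperature:"convert_degC_to_degF"'
dry_run = "false"
```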

### Important notes

- The `PLUGIN_DIR` environment variable must be set when starting InfluxDB 3 for TOML configuration to work
- When using `config_file_path`, specify only the filename (not the full path)
- The TOML file must be located in the directory specified by `PLUGIN_DIR`
- All parameters in the TOML file override any command-line arguments
- Example TOML configuration files are provided:
  - `basic_transformation_config_scheduler.toml` - for scheduled triggers
  - `basic_transformation_config_data_writes.toml` - for data write triggers

## Code overview

### Files

- `basic_transformation.py`: The main plugin code containing handlers for scheduled tasks and data write transformations
- `basic_transformation_config_data_writes.toml`: Example TOML configuration file for data write triggers
- `basic_transformation_config_scheduler.toml`: Example TOML configuration file for scheduled triggers

### Logging

Logs are stored in the `_internal` database (or the database where the trigger is created) in the `system.processing_engine_logs` table. To view logs:

```bash
influxdb3 query --database _internal "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'your_trigger_name'"
```

Log columns:

- **event_time**: Timestamp of the log event
- **trigger_name**: Name of the trigger that generated the log
- **log_level**: Severity level (INFO, WARN, ERROR)
- **log_text**: Message describing the action or error
|
||||
|
||||
### Main functions
|
||||
|
||||
#### `process_scheduled_call(influxdb3_local, call_time, args)`
|
||||
Handles scheduled transformation tasks.
|
||||
Queries historical data within the specified window and applies transformations.
|
||||
|
||||
Key operations:
|
||||
1. Parses configuration from arguments
|
||||
2. Queries source measurement with filters
|
||||
3. Applies name and value transformations
|
||||
4. Writes transformed data to target measurement
|
||||
|
||||
#### `process_writes(influxdb3_local, table_batches, args)`
|
||||
Handles real-time transformation during data writes.
|
||||
Processes incoming data batches and applies transformations before writing.
|
||||
|
||||
Key operations:
|
||||
1. Filters relevant table batches
|
||||
2. Applies transformations to each row
|
||||
3. Writes to target measurement immediately
|
||||
|
||||
#### `apply_transformations(value, transformations)`
|
||||
Core transformation engine that applies a chain of transformations to a value.
|
||||
|
||||
Supported transformations:
|
||||
- String operations: `lower`, `upper`, `snake`
|
||||
- Space handling: `space_to_underscore`, `remove_space`
|
||||
- Character filtering: `alnum_underscore_only`
|
||||
- Underscore management: `collapse_underscore`, `trim_underscore`
|
||||
- Unit conversions: `convert_<from>_to_<to>`
|
||||
- Custom replacements: User-defined string substitutions
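
For illustration, here is a minimal, hypothetical Python sketch of such a transformation chain. It is not the plugin's actual implementation (that lives in `basic_transformation.py`) and assumes the `pint` package for the `convert_<from>_to_<to>` case:

```python
import re
import pint

ureg = pint.UnitRegistry()

def apply_transformations(value, transformations):
    # Hypothetical re-implementation for illustration only.
    # String operations assume a string input; conversions assume a number.
    for name in transformations:
        if name == "lower":
            value = value.lower()
        elif name == "space_to_underscore":
            value = value.replace(" ", "_")
        elif name == "collapse_underscore":
            value = re.sub(r"_+", "_", value)
        elif name.startswith("convert_"):
            # convert_degC_to_degF -> Pint conversion; offset units such as
            # degC require Quantity() rather than multiplication.
            _, src, _, dst = name.split("_")
            value = ureg.Quantity(float(value), src).to(dst).magnitude
    return value

print(apply_transformations("Room  Temp", ["lower", "space_to_underscore", "collapse_underscore"]))  # room_temp
print(apply_transformations(22.5, ["convert_degC_to_degF"]))  # 72.5
```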

## Troubleshooting

### Common issues

#### Issue: Transformations not applying
**Solution**: Check that field names match exactly (case-sensitive).
Use regex patterns for flexible matching:
```bash
--trigger-arguments 'custom_regex=temp_fields:"temp%",values_transformations=temp_fields:"convert_degC_to_degF"'
```

#### Issue: "Permission denied" errors in logs
**Solution**: Ensure the plugin file has execute permissions:
```bash
chmod +x ~/.plugins/basic_transformation.py
```

#### Issue: Unit conversion failing
**Solution**: Verify unit names are valid `pint` units.
Common units:
- Temperature: `degC`, `degF`, `degK`
- Length: `meter`, `foot`, `inch`
- Time: `second`, `minute`, `hour`

#### Issue: No data in target measurement
**Solution**:
1. Check that `dry_run` is not set to `"true"`
2. Verify the source measurement contains data
3. Check logs for errors:
```bash
influxdb3 query \
  --database _internal \
  "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'your_trigger_name'"
```

### Debugging tips

1. **Enable dry run** to test transformations:
```bash
--trigger-arguments 'dry_run=true,...'
```

2. **Use specific time windows** for testing:
```bash
--trigger-arguments 'window=1h,...'
```

3. **Check field names** in source data:
```bash
influxdb3 query --database mydb "SHOW FIELD KEYS FROM measurement"
```

### Performance considerations

- Field name caching reduces query overhead (1-hour cache)
- Batch processing for scheduled tasks improves throughput
- Retry mechanism (3 attempts) handles transient write failures
- Use filters to process only relevant data

## Report an issue

For plugin issues, see the Plugins repository [issues page](https://github.com/influxdata/influxdb3_plugins/issues).

## Find support for {{% product-name %}}

The [InfluxDB Discord server](https://discord.gg/9zaNCW2PRT) is the best place to find support for InfluxDB 3 Core and InfluxDB 3 Enterprise.
For other InfluxDB versions, see the [Support and feedback](#bug-reports-and-feedback) options.

@ -0,0 +1,339 @@

The Downsampler Plugin enables time-based data aggregation and downsampling in InfluxDB 3.
Reduce data volume by aggregating measurements over specified time intervals using functions like avg, sum, min, max, derivative, or median.
The plugin supports both scheduled batch processing of historical data and on-demand downsampling through HTTP requests.
Each downsampled record includes metadata about the original data points it compresses.

## Configuration

### Required parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `source_measurement` | string | required | Source measurement containing data to downsample |
| `target_measurement` | string | required | Destination measurement for downsampled data |
| `window` | string | required (scheduled only) | Time window for each downsampling job. Format: `<number><unit>` (for example, `"1h"`, `"1d"`) |

### Aggregation parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `interval` | string | `"10min"` | Time interval for downsampling. Format: `<number><unit>` (for example, `"10min"`, `"2h"`, `"1d"`) |
| `calculations` | string | `"avg"` | Aggregation functions. Single function or dot-separated `field:aggregation` pairs |
| `specific_fields` | string | all fields | Dot-separated list of fields to downsample (for example, `"co.temperature"`) |
| `excluded_fields` | string | none | Dot-separated list of fields to exclude from downsampling |

### Filtering parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `tag_values` | string | none | Tag filters. Format: `tag:value1@value2@value3` for multiple values |
| `offset` | string | `"0"` | Time offset to apply to the window |

### Advanced parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `target_database` | string | `"default"` | Database for storing downsampled data |
| `max_retries` | integer | 5 | Maximum number of retries for write operations |
| `batch_size` | string | `"30d"` | Time interval for batch processing (HTTP mode only) |
| `config_file_path` | string | none | Path to TOML config file relative to `PLUGIN_DIR` |

### Metadata columns

Each downsampled record includes three additional metadata columns (see the sketch after this list):
- `record_count`—the number of original points compressed into this single downsampled row
- `time_from`—the minimum timestamp among the original points in the interval
- `time_to`—the maximum timestamp among the original points in the interval
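
To make the relationship concrete, here is a small, hypothetical Python sketch deriving one downsampled row (an `avg` of `usage_user`) plus these metadata columns from raw points. Field names are examples, not plugin internals:

```python
from datetime import datetime, timezone

# Three raw points that fall into a single downsampling interval.
points = [
    {"time": datetime(2024, 1, 1, 0, m, tzinfo=timezone.utc), "usage_user": v}
    for m, v in [(0, 44.0), (1, 45.5), (2, 44.9)]
]

# One downsampled row: the aggregate plus the three metadata columns.
row = {
    "usage_user": sum(p["usage_user"] for p in points) / len(points),  # avg
    "record_count": len(points),
    "time_from": min(p["time"] for p in points),
    "time_to": max(p["time"] for p in points),
}
print(row)
```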

## Requirements

### Software requirements
- InfluxDB 3 Core or Enterprise with Processing Engine enabled
- Python packages: No additional packages required

### Installation steps

1. Start InfluxDB 3 with plugin support:
```bash
influxdb3 serve \
  --node-id node0 \
  --object-store file \
  --data-dir ~/.influxdb3 \
  --plugin-dir ~/.plugins
```

2. No additional Python packages are required for this plugin.

## Trigger setup

### Scheduled downsampling

Run downsampling periodically on historical data:

```bash
influxdb3 create trigger \
  --database mydb \
  --plugin-filename gh:influxdata/downsampler/downsampler.py \
  --trigger-spec "every:1h" \
  --trigger-arguments 'source_measurement=cpu_metrics,target_measurement=cpu_hourly,interval=1h,window=6h,calculations=avg,specific_fields=usage_user.usage_system' \
  cpu_hourly_downsample
```

### On-demand downsampling

Trigger downsampling via HTTP requests:

```bash
influxdb3 create trigger \
  --database mydb \
  --plugin-filename gh:influxdata/downsampler/downsampler.py \
  --trigger-spec "request:downsample" \
  downsample_api
```

## Example usage

### Example 1: CPU metrics hourly aggregation

Downsample CPU usage data from 1-minute intervals to hourly averages:

```bash
# Create the trigger
influxdb3 create trigger \
  --database system_metrics \
  --plugin-filename gh:influxdata/downsampler/downsampler.py \
  --trigger-spec "every:1h" \
  --trigger-arguments 'source_measurement=cpu,target_measurement=cpu_hourly,interval=1h,window=6h,calculations=avg,specific_fields=usage_user.usage_system.usage_idle' \
  cpu_hourly_downsample

# Write test data
influxdb3 write \
  --database system_metrics \
  "cpu,host=server1 usage_user=45.2,usage_system=12.1,usage_idle=42.7"

# Query downsampled data (after trigger runs)
influxdb3 query \
  --database system_metrics \
  "SELECT * FROM cpu_hourly WHERE time >= now() - 1d"
```

### Expected output
```
host    | usage_user | usage_system | usage_idle | record_count | time_from            | time_to              | time
--------|------------|--------------|------------|--------------|----------------------|----------------------|-----
server1 | 44.8       | 11.9         | 43.3       | 60           | 2024-01-01T00:00:00Z | 2024-01-01T00:59:59Z | 2024-01-01T01:00:00Z
```

**Aggregation details:**
- Before: 60 individual CPU measurements over 1 hour
- After: 1 aggregated measurement with averages and metadata
- Metadata shows the original record count and time range

### Example 2: Multi-field aggregation with different functions

Apply different aggregation functions to different fields:

```bash
# Create trigger with field-specific aggregations
influxdb3 create trigger \
  --database sensors \
  --plugin-filename gh:influxdata/downsampler/downsampler.py \
  --trigger-spec "every:10min" \
  --trigger-arguments 'source_measurement=environment,target_measurement=environment_10min,interval=10min,window=30min,calculations=temperature:avg.humidity:avg.pressure:max' \
  env_multi_agg

# Write data with various sensor readings
influxdb3 write \
  --database sensors \
  "environment,location=office temperature=22.5,humidity=45.2,pressure=1013.25"

# Query aggregated data
influxdb3 query \
  --database sensors \
  "SELECT * FROM environment_10min WHERE time >= now() - 1h"
```

### Expected output
```
location | temperature | humidity | pressure | record_count | time
---------|-------------|----------|----------|--------------|-----
office   | 22.3        | 44.8     | 1015.1   | 10           | 2024-01-01T00:10:00Z
```

### Example 3: HTTP API downsampling with backfill

Use the HTTP API for on-demand downsampling of historical data:

```bash
# Send HTTP request for backfill downsampling
curl -X POST http://localhost:8181/api/v3/engine/downsample \
  --header "Authorization: Bearer YOUR_TOKEN" \
  --data '{
    "source_measurement": "metrics",
    "target_measurement": "metrics_daily",
    "target_database": "analytics",
    "interval": "1d",
    "batch_size": "7d",
    "calculations": [["cpu_usage", "avg"], ["memory_usage", "max"], ["disk_usage", "avg"]],
    "backfill_start": "2024-01-01T00:00:00Z",
    "backfill_end": "2024-01-31T00:00:00Z",
    "max_retries": 3
  }'
```

## Using TOML Configuration Files

This plugin supports using TOML configuration files for complex configurations.

### Important Requirements

**To use TOML configuration files, you must set the `PLUGIN_DIR` environment variable in the InfluxDB 3 host environment:**

```bash
PLUGIN_DIR=~/.plugins influxdb3 serve --node-id node0 --object-store file --data-dir ~/.influxdb3 --plugin-dir ~/.plugins
```

### Example TOML Configuration

```toml
# downsampling_config_scheduler.toml
source_measurement = "cpu"
target_measurement = "cpu_hourly"
target_database = "analytics"
interval = "1h"
window = "6h"
calculations = "avg"
specific_fields = "usage_user.usage_system.usage_idle"
max_retries = 3

[tag_values]
host = ["server1", "server2", "server3"]
```

### Create trigger using TOML config

```bash
influxdb3 create trigger \
  --database mydb \
  --plugin-filename downsampler.py \
  --trigger-spec "every:1h" \
  --trigger-arguments config_file_path=downsampling_config_scheduler.toml \
  downsample_trigger
```

## Code overview

### Files

- `downsampler.py`: The main plugin code containing handlers for scheduled and HTTP-triggered downsampling
- `downsampling_config_scheduler.toml`: Example TOML configuration file for scheduled triggers

### Logging

Logs are stored in the `_internal` database (or the database where the trigger is created) in the `system.processing_engine_logs` table. To view logs:

```bash
influxdb3 query --database _internal "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'your_trigger_name'"
```

Log columns:
- **event_time**: Timestamp of the log event (with nanosecond precision)
- **trigger_name**: Name of the trigger that generated the log
- **log_level**: Severity level (INFO, WARN, ERROR)
- **log_text**: Message describing the action or error, with a unique task_id for traceability

### Main functions

#### `process_scheduled_call(influxdb3_local, call_time, args)`
Handles scheduled downsampling tasks.
Queries historical data within the specified window and applies aggregation functions.

Key operations:
1. Parses configuration from arguments or TOML file
2. Queries source measurement with optional tag filters
3. Applies time-based aggregation with specified functions
4. Writes downsampled data with metadata columns

#### `process_http_request(influxdb3_local, request_body, args)`
Handles HTTP-triggered on-demand downsampling.
Processes batch downsampling with configurable time ranges for backfill scenarios.

Key operations:
1. Parses JSON request body parameters
2. Processes data in configurable time batches
3. Applies aggregation functions to historical data
4. Returns processing statistics and results

#### `aggregate_data(data, interval, calculations)`
Core aggregation engine that applies statistical functions to time-series data.

Supported aggregation functions:
- `avg`: Average value
- `sum`: Sum of values
- `min`: Minimum value
- `max`: Maximum value
- `derivative`: Rate of change
- `median`: Median value

## Troubleshooting

### Common issues

#### Issue: No data in target measurement
**Solution**: Check that the source measurement exists and contains data in the specified time window:
```bash
influxdb3 query --database mydb "SELECT COUNT(*) FROM source_measurement WHERE time >= now() - 1h"
```

#### Issue: Aggregation function not working
**Solution**: Verify field names and aggregation syntax. Use `SHOW FIELD KEYS` to check available fields:
```bash
influxdb3 query --database mydb "SHOW FIELD KEYS FROM source_measurement"
```

#### Issue: Tag filters not applied
**Solution**: Check the tag value format. Use the `@` separator for multiple values:
```bash
--trigger-arguments 'tag_values=host:server1@server2@server3'
```

#### Issue: HTTP endpoint not accessible
**Solution**: Verify the trigger was created with the correct request specification:
```bash
influxdb3 list triggers --database mydb
```

### Debugging tips

1. **Check execution logs** with task ID filtering:
```bash
influxdb3 query --database _internal \
  "SELECT * FROM system.processing_engine_logs WHERE log_text LIKE '%task_id%' ORDER BY event_time DESC LIMIT 10"
```

2. **Test with smaller time windows** for debugging:
```bash
--trigger-arguments 'window=5min,interval=1min'
```

3. **Verify field types** before aggregation:
```bash
influxdb3 query --database mydb "SELECT * FROM source_measurement LIMIT 1"
```

### Performance considerations

- **Batch processing**: Use an appropriate `batch_size` for HTTP requests to balance memory usage and performance
- **Field filtering**: Use `specific_fields` to process only necessary data
- **Retry logic**: Configure `max_retries` based on network reliability
- **Metadata overhead**: Metadata columns add ~20% storage overhead but provide valuable debugging information
- **Index optimization**: Tag filters are more efficient than field filters for large datasets

## Report an issue

For plugin issues, see the Plugins repository [issues page](https://github.com/influxdata/influxdb3_plugins/issues).

## Find support for {{% product-name %}}

The [InfluxDB Discord server](https://discord.gg/9zaNCW2PRT) is the best place to find support for InfluxDB 3 Core and InfluxDB 3 Enterprise.
For other InfluxDB versions, see the [Support and feedback](#bug-reports-and-feedback) options.

@ -0,0 +1,361 @@

The Forecast Error Evaluator Plugin validates forecast model accuracy for time series data in InfluxDB 3 by comparing predicted values with actual observations.
The plugin periodically computes error metrics (MSE, MAE, or RMSE), detects anomalies based on error thresholds, and sends notifications when forecast accuracy degrades.
It includes debounce logic to suppress transient anomalies and supports multi-channel notifications via the Notification Sender Plugin.

## Configuration

### Required parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `forecast_measurement` | string | required | Measurement containing forecasted values |
| `actual_measurement` | string | required | Measurement containing actual (ground truth) values |
| `forecast_field` | string | required | Field name for forecasted values |
| `actual_field` | string | required | Field name for actual values |
| `error_metric` | string | required | Error metric to compute: `"mse"`, `"mae"`, or `"rmse"` |
| `error_thresholds` | string | required | Threshold levels. Format: `INFO-"0.5":WARN-"0.9":ERROR-"1.2":CRITICAL-"1.5"` |
| `window` | string | required | Time window for data analysis. Format: `<number><unit>` (for example, `"1h"`) |
| `senders` | string | required | Dot-separated list of notification channels (for example, `"slack.discord"`) |

### Notification parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `notification_text` | string | default template | Template for notification message with variables `$measurement`, `$level`, `$field`, `$error`, `$metric`, `$tags` |
| `notification_path` | string | `"notify"` | URL path for the notification sending plugin |
| `port_override` | integer | 8181 | Port number where InfluxDB accepts requests |
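
The `$variable` placeholders behave like Python `string.Template` substitution. Here is a minimal sketch of that interpolation (illustrative only; the plugin's own rendering may differ, and the CLI examples later in this document write `$$` where a literal `$` placeholder must survive argument parsing):

```python
from string import Template

# Hypothetical values; the plugin fills these from the evaluated data.
template = Template("[$level] Forecast error alert in $measurement.$field: $metric=$error. Tags: $tags")
message = template.safe_substitute(
    level="WARN",
    measurement="temp_forecast",
    field="predicted",
    metric="rmse",
    error="1.2",
    tags="location=station1",
)
print(message)
# [WARN] Forecast error alert in temp_forecast.predicted: rmse=1.2. Tags: location=station1
```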

### Timing parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `min_condition_duration` | string | none | Minimum duration for the anomaly condition to persist before triggering a notification |
| `rounding_freq` | string | `"1s"` | Frequency to round timestamps to for alignment |

### Authentication parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `influxdb3_auth_token` | string | env variable | API token for InfluxDB 3. Can be set via `INFLUXDB3_AUTH_TOKEN` |
| `config_file_path` | string | none | Path to TOML config file relative to `PLUGIN_DIR` |

### Sender-specific parameters

#### Slack notifications
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `slack_webhook_url` | string | required | Webhook URL from Slack |
| `slack_headers` | string | none | Base64-encoded HTTP headers |

#### Discord notifications
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `discord_webhook_url` | string | required | Webhook URL from Discord |
| `discord_headers` | string | none | Base64-encoded HTTP headers |

#### HTTP notifications
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `http_webhook_url` | string | required | Custom webhook URL for POST requests |
| `http_headers` | string | none | Base64-encoded HTTP headers |

#### SMS notifications (via Twilio)
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `twilio_sid` | string | env variable | Twilio Account SID (or `TWILIO_SID` env var) |
| `twilio_token` | string | env variable | Twilio Auth Token (or `TWILIO_TOKEN` env var) |
| `twilio_from_number` | string | required | Twilio sender number (for example, `"+1234567890"`) |
| `twilio_to_number` | string | required | Recipient number (for example, `"+0987654321"`) |

## Requirements

### Software requirements
- InfluxDB 3 Core or Enterprise with Processing Engine enabled
- Notification Sender Plugin for InfluxDB 3 (required for notifications)
- Python packages:
  - `pandas` (for data processing)
  - `requests` (for HTTP notifications)

### Installation steps

1. Start InfluxDB 3 with plugin support:
```bash
influxdb3 serve \
  --node-id node0 \
  --object-store file \
  --data-dir ~/.influxdb3 \
  --plugin-dir ~/.plugins
```

2. Install required Python packages:
```bash
influxdb3 install package pandas
influxdb3 install package requests
```

3. Install the Notification Sender Plugin (required):
```bash
# Ensure the notifier plugin is available in ~/.plugins/
```

## Trigger setup

### Scheduled forecast validation

Run forecast error evaluation periodically:

```bash
influxdb3 create trigger \
  --database weather_forecasts \
  --plugin-filename gh:influxdata/forecast_error_evaluator/forecast_error_evaluator.py \
  --trigger-spec "every:30m" \
  --trigger-arguments 'forecast_measurement=temperature_forecast,actual_measurement=temperature_actual,forecast_field=predicted_temp,actual_field=temp,error_metric=rmse,error_thresholds=INFO-"0.5":WARN-"1.0":ERROR-"2.0",window=1h,senders=slack,slack_webhook_url="https://hooks.slack.com/services/YOUR/WEBHOOK/URL"' \
  forecast_validation
```

## Example usage

### Example 1: Temperature forecast validation with Slack alerts

Validate temperature forecast accuracy and send Slack notifications:

```bash
# Create the trigger
influxdb3 create trigger \
  --database weather_db \
  --plugin-filename gh:influxdata/forecast_error_evaluator/forecast_error_evaluator.py \
  --trigger-spec "every:15m" \
  --trigger-arguments 'forecast_measurement=temp_forecast,actual_measurement=temp_actual,forecast_field=predicted,actual_field=temperature,error_metric=rmse,error_thresholds=INFO-"0.5":WARN-"1.0":ERROR-"2.0":CRITICAL-"3.0",window=30m,senders=slack,slack_webhook_url="https://hooks.slack.com/services/YOUR/WEBHOOK/URL",min_condition_duration=10m' \
  temp_forecast_check

# Write forecast data
influxdb3 write \
  --database weather_db \
  "temp_forecast,location=station1 predicted=22.5"

# Write actual data
influxdb3 write \
  --database weather_db \
  "temp_actual,location=station1 temperature=21.8"

# Check logs after trigger runs
influxdb3 query \
  --database _internal \
  "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'temp_forecast_check'"
```

### Expected behavior
- Plugin computes RMSE between forecast and actual values
- If RMSE > 0.5, sends an INFO-level notification
- If RMSE > 1.0, sends a WARN-level notification
- Only triggers if the condition persists for 10+ minutes (debounce)

**Notification example:**
```
[WARN] Forecast error alert in temp_forecast.predicted: rmse=1.2. Tags: location=station1
```

### Example 2: Multi-metric validation with multiple channels

Monitor multiple forecast metrics with different notification channels:

```bash
# Create trigger with Discord and HTTP notifications
influxdb3 create trigger \
  --database analytics \
  --plugin-filename gh:influxdata/forecast_error_evaluator/forecast_error_evaluator.py \
  --trigger-spec "every:1h" \
  --trigger-arguments 'forecast_measurement=sales_forecast,actual_measurement=sales_actual,forecast_field=predicted_sales,actual_field=sales_amount,error_metric=mae,error_thresholds=WARN-"1000":ERROR-"5000":CRITICAL-"10000",window=6h,senders=discord.http,discord_webhook_url="https://discord.com/api/webhooks/YOUR/WEBHOOK",http_webhook_url="https://your-api.com/alerts",notification_text="[$$level] Sales forecast error: $$metric=$$error (threshold exceeded)",rounding_freq=5min' \
  sales_forecast_monitor
```

### Example 3: SMS alerts for critical forecast failures

Set up SMS notifications for critical forecast accuracy issues:

```bash
# Set environment variables (recommended for sensitive data)
export TWILIO_SID="your_twilio_sid"
export TWILIO_TOKEN="your_twilio_token"

# Create trigger with SMS notifications
influxdb3 create trigger \
  --database production_forecasts \
  --plugin-filename gh:influxdata/forecast_error_evaluator/forecast_error_evaluator.py \
  --trigger-spec "every:5m" \
  --trigger-arguments 'forecast_measurement=demand_forecast,actual_measurement=demand_actual,forecast_field=predicted_demand,actual_field=actual_demand,error_metric=mse,error_thresholds=CRITICAL-"100000",window=15m,senders=sms,twilio_from_number="+1234567890",twilio_to_number="+0987654321",notification_text="CRITICAL: Production demand forecast error exceeded threshold. MSE: $$error",min_condition_duration=2m' \
  critical_forecast_alert
```

## Using TOML Configuration Files

This plugin supports using TOML configuration files for complex configurations.

### Important Requirements

**To use TOML configuration files, you must set the `PLUGIN_DIR` environment variable in the InfluxDB 3 host environment:**

```bash
PLUGIN_DIR=~/.plugins influxdb3 serve --node-id node0 --object-store file --data-dir ~/.influxdb3 --plugin-dir ~/.plugins
```

### Example TOML Configuration

```toml
# forecast_error_config_scheduler.toml
forecast_measurement = "temperature_forecast"
actual_measurement = "temperature_actual"
forecast_field = "predicted_temp"
actual_field = "temperature"
error_metric = "rmse"
error_thresholds = 'INFO-"0.5":WARN-"1.0":ERROR-"2.0":CRITICAL-"3.0"'
window = "1h"
senders = "slack"
slack_webhook_url = "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
min_condition_duration = "10m"
rounding_freq = "1min"
notification_text = "[$$level] Forecast validation alert: $$metric=$$error in $$measurement.$$field"

# Authentication (use environment variables instead when possible)
influxdb3_auth_token = "your_token_here"
```

### Create trigger using TOML config

```bash
influxdb3 create trigger \
  --database weather_db \
  --plugin-filename forecast_error_evaluator.py \
  --trigger-spec "every:30m" \
  --trigger-arguments config_file_path=forecast_error_config_scheduler.toml \
  forecast_validation_trigger
```

## Code overview

### Files

- `forecast_error_evaluator.py`: The main plugin code containing the scheduler handler for forecast validation
- `forecast_error_config_scheduler.toml`: Example TOML configuration file

### Logging

Logs are stored in the `_internal` database (or the database where the trigger is created) in the `system.processing_engine_logs` table. To view logs:

```bash
influxdb3 query --database _internal "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'your_trigger_name'"
```

Log columns:
- **event_time**: Timestamp of the log event
- **trigger_name**: Name of the trigger that generated the log
- **log_level**: Severity level (INFO, WARN, ERROR)
- **log_text**: Message describing validation results or errors

### Main functions

#### `process_scheduled_call(influxdb3_local, call_time, args)`
Handles scheduled forecast validation tasks.
Queries forecast and actual measurements, computes error metrics, and triggers notifications.

Key operations:
1. Parses configuration from arguments or TOML file
2. Queries forecast and actual measurements within the time window
3. Aligns timestamps using the rounding frequency
4. Computes the specified error metric (MSE, MAE, or RMSE)
5. Evaluates thresholds and applies debounce logic
6. Sends notifications via configured channels

#### `compute_error_metric(forecast_values, actual_values, metric_type)`
Core error computation engine that calculates forecast accuracy metrics (see the sketch after this list).

Supported error metrics:
- `mse`: Mean Squared Error
- `mae`: Mean Absolute Error
- `rmse`: Root Mean Squared Error (square root of MSE)
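
As a reference for what these metrics mean, here is a minimal Python sketch (illustrative; the plugin computes them over the timestamp-aligned forecast/actual series):

```python
# Reference implementations of the three supported error metrics.
def mse(forecast, actual):
    return sum((f - a) ** 2 for f, a in zip(forecast, actual)) / len(actual)

def mae(forecast, actual):
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

def rmse(forecast, actual):
    return mse(forecast, actual) ** 0.5

forecast = [22.5, 23.0, 21.7]
actual = [21.8, 23.4, 22.0]
print(round(mse(forecast, actual), 3))   # 0.247
print(round(mae(forecast, actual), 3))   # 0.467
print(round(rmse(forecast, actual), 3))  # 0.497
```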

#### `evaluate_thresholds(error_value, threshold_config)`
Evaluates the computed error against configured thresholds to determine the alert level (see the sketch after this list).

Returns an alert level based on the threshold ranges:
- `INFO`: Informational threshold exceeded
- `WARN`: Warning threshold exceeded
- `ERROR`: Error threshold exceeded
- `CRITICAL`: Critical threshold exceeded
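
A hypothetical sketch of how a threshold string like `INFO-"0.5":WARN-"1.0":ERROR-"2.0":CRITICAL-"3.0"` could be parsed and evaluated (illustrative only; the plugin's own parsing may differ):

```python
def parse_thresholds(spec):
    # 'INFO-"0.5":WARN-"1.0"' -> {"INFO": 0.5, "WARN": 1.0}
    levels = {}
    for part in spec.split(":"):
        level, raw = part.split("-", 1)
        levels[level] = float(raw.strip('"'))
    return levels

def evaluate_thresholds(error_value, thresholds):
    matched = None
    # Iterate from lowest to highest severity so the last match wins.
    for level in ("INFO", "WARN", "ERROR", "CRITICAL"):
        if level in thresholds and error_value > thresholds[level]:
            matched = level
    return matched

thresholds = parse_thresholds('INFO-"0.5":WARN-"1.0":ERROR-"2.0":CRITICAL-"3.0"')
print(evaluate_thresholds(1.2, thresholds))  # WARN
```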

## Troubleshooting

### Common issues

#### Issue: No overlapping timestamps between forecast and actual data
**Solution**: Check that both measurements have data in the specified time window and use `rounding_freq` for alignment:
```bash
influxdb3 query --database mydb "SELECT time, field_value FROM forecast_measurement WHERE time >= now() - 1h"
influxdb3 query --database mydb "SELECT time, field_value FROM actual_measurement WHERE time >= now() - 1h"
```

#### Issue: Notifications not being sent
**Solution**: Verify the Notification Sender Plugin is installed and webhook URLs are correct:
```bash
# Check if the notifier plugin exists
ls ~/.plugins/notifier_plugin.py

# Test the webhook URL manually
curl -X POST "your_webhook_url" -d '{"text": "test message"}'
```

#### Issue: Error threshold format not recognized
**Solution**: Use the proper threshold format with level prefixes:
```bash
--trigger-arguments 'error_thresholds=INFO-"0.5":WARN-"1.0":ERROR-"2.0":CRITICAL-"3.0"'
```

#### Issue: Environment variables not loaded
**Solution**: Set environment variables before starting InfluxDB:
```bash
export INFLUXDB3_AUTH_TOKEN="your_token"
export TWILIO_SID="your_sid"
influxdb3 serve --plugin-dir ~/.plugins
```

### Debugging tips

1. **Check data availability** in both measurements:
```bash
influxdb3 query --database mydb \
  "SELECT COUNT(*) FROM forecast_measurement WHERE time >= now() - window"
```

2. **Verify timestamp alignment** with the rounding frequency:
```bash
--trigger-arguments 'rounding_freq=5min'
```

3. **Test with shorter windows** for faster debugging:
```bash
--trigger-arguments 'window=10m,min_condition_duration=1m'
```

4. **Monitor notification delivery** in logs:
```bash
influxdb3 query --database _internal \
  "SELECT * FROM system.processing_engine_logs WHERE log_text LIKE '%notification%'"
```

### Performance considerations

- **Data alignment**: Use an appropriate `rounding_freq` to balance accuracy and performance
- **Window size**: Larger windows increase computation time but provide more robust error estimates
- **Debounce duration**: Balance between noise suppression and alert responsiveness
- **Notification throttling**: Built-in retry logic prevents notification spam
- **Memory usage**: The plugin processes data in pandas DataFrames; consider memory for large datasets

## Report an issue

For plugin issues, see the Plugins repository [issues page](https://github.com/influxdata/influxdb3_plugins/issues).

## Find support for {{% product-name %}}

The [InfluxDB Discord server](https://discord.gg/9zaNCW2PRT) is the best place to find support for InfluxDB 3 Core and InfluxDB 3 Enterprise.
For other InfluxDB versions, see the [Support and feedback](#bug-reports-and-feedback) options.

@ -0,0 +1,389 @@

The InfluxDB to Iceberg Plugin enables data transfer from InfluxDB 3 to Apache Iceberg tables.
Transfer time series data to Iceberg for long-term storage, analytics, or integration with data lake architectures.
The plugin supports both scheduled batch transfers of historical data and on-demand transfers via HTTP API.

## Configuration

### Scheduler trigger parameters

#### Required parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `measurement` | string | required | Source measurement containing data to transfer |
| `window` | string | required | Time window for data transfer. Format: `<number><unit>` (for example, `"1h"`, `"30d"`) |
| `catalog_configs` | string | required | Base64-encoded JSON string containing the Iceberg catalog configuration |

#### Optional parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `included_fields` | string | all fields | Dot-separated list of fields to include (for example, `"usage_user.usage_idle"`) |
| `excluded_fields` | string | none | Dot-separated list of fields to exclude |
| `namespace` | string | `"default"` | Iceberg namespace for the target table |
| `table_name` | string | measurement name | Iceberg table name |
| `config_file_path` | string | none | Path to TOML config file relative to `PLUGIN_DIR` |

### HTTP trigger parameters

#### Request body structure

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `measurement` | string | Yes | Source measurement containing data to transfer |
| `catalog_configs` | object | Yes | Iceberg catalog configuration dictionary. See the [PyIceberg catalog documentation](https://py.iceberg.apache.org/configuration/) |
| `included_fields` | array | No | List of field names to include in replication |
| `excluded_fields` | array | No | List of field names to exclude from replication |
| `namespace` | string | No | Target Iceberg namespace (default: `"default"`) |
| `table_name` | string | No | Target Iceberg table name (default: measurement name) |
| `batch_size` | string | No | Batch size duration for processing (default: `"1d"`). Format: `<number><unit>` |
| `backfill_start` | string | No | ISO 8601 datetime with timezone for the backfill start |
| `backfill_end` | string | No | ISO 8601 datetime with timezone for the backfill end |

### Schema management

- Automatically creates the Iceberg table schema from the first batch of data
- Maps pandas data types to Iceberg types (see the sketch after this list):
  - `int64` → `IntegerType`
  - `float64` → `FloatType`
  - `datetime64[us]` → `TimestampType`
  - `object` → `StringType`
- Fields with no null values are marked as `required`
- The `time` column is converted to `datetime64[us]` for Iceberg compatibility
- Tables are created in the format `<namespace>.<table_name>`
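
A rough sketch of how such a dtype mapping could be expressed with PyIceberg and pandas; `schema_from_dataframe` is a hypothetical helper for illustration, not the plugin's actual code:

```python
import pandas as pd
from pyiceberg.schema import Schema
from pyiceberg.types import (
    FloatType,
    IntegerType,
    NestedField,
    StringType,
    TimestampType,
)

# Mapping from pandas dtype names to Iceberg field types.
DTYPE_MAP = {
    "int64": IntegerType(),
    "float64": FloatType(),
    "datetime64[us]": TimestampType(),
    "object": StringType(),
}

def schema_from_dataframe(df: pd.DataFrame) -> Schema:
    # Hypothetical: build one NestedField per DataFrame column.
    fields = []
    for i, (name, dtype) in enumerate(df.dtypes.items(), start=1):
        iceberg_type = DTYPE_MAP.get(str(dtype), StringType())
        required = bool(df[name].notna().all())  # no nulls -> required
        fields.append(
            NestedField(field_id=i, name=name, field_type=iceberg_type, required=required)
        )
    return Schema(*fields)
```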

## Requirements

### Software requirements
- InfluxDB 3 Core or Enterprise with Processing Engine enabled
- Python packages:
  - `pandas` (for data manipulation)
  - `pyarrow` (for Parquet support)
  - `pyiceberg[catalog-options]` (for Iceberg integration)

### Installation steps

1. Start InfluxDB 3 with plugin support:
```bash
influxdb3 serve \
  --node-id node0 \
  --object-store file \
  --data-dir ~/.influxdb3 \
  --plugin-dir ~/.plugins
```

2. Install required Python packages:
```bash
influxdb3 install package pandas
influxdb3 install package pyarrow
influxdb3 install package "pyiceberg[s3fs,hive,sql-sqlite]"
```

**Note:** Include the appropriate PyIceberg extras based on your catalog type:
- `[s3fs]` for S3 storage
- `[hive]` for Hive metastore
- `[sql-sqlite]` for SQL catalog with SQLite
- See the [PyIceberg documentation](https://py.iceberg.apache.org/#installation) for all options

## Trigger setup

### Scheduled data transfer

Periodically transfer data from InfluxDB 3 to Iceberg:

```bash
influxdb3 create trigger \
  --database mydb \
  --plugin-filename gh:influxdata/influxdb_to_iceberg/influxdb_to_iceberg.py \
  --trigger-spec "every:1h" \
  --trigger-arguments 'measurement=cpu,window=1h,catalog_configs="eyJ1cmkiOiAiaHR0cDovL25lc3NpZTo5MDAwIn0=",namespace=monitoring,table_name=cpu_metrics' \
  hourly_iceberg_transfer
```

### HTTP API endpoint

Create an on-demand transfer endpoint:

```bash
influxdb3 create trigger \
  --database mydb \
  --plugin-filename gh:influxdata/influxdb_to_iceberg/influxdb_to_iceberg.py \
  --trigger-spec "request:replicate" \
  iceberg_http_transfer
```

Enable the trigger:
```bash
influxdb3 enable trigger --database mydb iceberg_http_transfer
```

The endpoint is registered at `/api/v3/engine/replicate`.

## Example usage

### Example 1: Basic scheduled transfer

Transfer CPU metrics to Iceberg every hour:

```bash
# Create trigger with base64-encoded catalog config
# Original JSON: {"uri": "http://nessie:9000"}
# Base64: eyJ1cmkiOiAiaHR0cDovL25lc3NpZTo5MDAwIn0=
influxdb3 create trigger \
  --database metrics \
  --plugin-filename gh:influxdata/influxdb_to_iceberg/influxdb_to_iceberg.py \
  --trigger-spec "every:1h" \
  --trigger-arguments 'measurement=cpu,window=24h,catalog_configs="eyJ1cmkiOiAiaHR0cDovL25lc3NpZTo5MDAwIn0="' \
  cpu_to_iceberg

# Write test data
influxdb3 write \
  --database metrics \
  "cpu,host=server1 usage_user=45.2,usage_system=12.1"

# After the trigger runs, data is available in the Iceberg table "default.cpu"
```

### Expected results
- Creates the Iceberg table `default.cpu` with a schema matching the measurement
- Transfers all CPU data from the last 24 hours
- Appends new data on each hourly run

### Example 2: HTTP backfill with field filtering

Backfill specific fields from historical data:

```bash
# Create and enable HTTP trigger
influxdb3 create trigger \
  --database metrics \
  --plugin-filename gh:influxdata/influxdb_to_iceberg/influxdb_to_iceberg.py \
  --trigger-spec "request:replicate" \
  iceberg_backfill

influxdb3 enable trigger --database metrics iceberg_backfill

# Request backfill via HTTP
curl -X POST http://localhost:8181/api/v3/engine/replicate \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{
    "measurement": "temperature",
    "catalog_configs": {
      "type": "sql",
      "uri": "sqlite:///path/to/catalog.db"
    },
    "included_fields": ["temp_celsius", "humidity"],
    "namespace": "weather",
    "table_name": "temperature_history",
    "batch_size": "12h",
    "backfill_start": "2024-01-01T00:00:00+00:00",
    "backfill_end": "2024-01-07T00:00:00+00:00"
  }'
```

### Expected results
- Creates the Iceberg table `weather.temperature_history`
- Transfers only the `temp_celsius` and `humidity` fields
- Processes data in 12-hour batches for the specified week
- Returns the status of the backfill operation

### Example 3: S3-backed Iceberg catalog

Transfer data to Iceberg tables stored in S3:

```bash
# Create catalog config JSON
cat > catalog_config.json << EOF
{
  "type": "sql",
  "uri": "sqlite:///iceberg/catalog.db",
  "warehouse": "s3://my-bucket/iceberg-warehouse/",
  "s3.endpoint": "http://minio:9000",
  "s3.access-key-id": "minioadmin",
  "s3.secret-access-key": "minioadmin",
  "s3.path-style-access": true
}
EOF

# Encode to base64
CATALOG_CONFIG=$(base64 < catalog_config.json)

# Create trigger
influxdb3 create trigger \
  --database metrics \
  --plugin-filename gh:influxdata/influxdb_to_iceberg/influxdb_to_iceberg.py \
  --trigger-spec "every:30m" \
  --trigger-arguments "measurement=sensor_data,window=1h,catalog_configs=\"$CATALOG_CONFIG\",namespace=iot,table_name=sensors" \
  s3_iceberg_transfer
```

## Using TOML Configuration Files

This plugin supports using TOML configuration files to specify all plugin arguments. This is useful for complex configurations or when you want to version control your plugin settings.

### Important Requirements

**To use TOML configuration files, you must set the `PLUGIN_DIR` environment variable in the InfluxDB 3 host environment.** This is required in addition to the `--plugin-dir` flag when starting InfluxDB 3:

- `--plugin-dir` tells InfluxDB 3 where to find plugin Python files
- `PLUGIN_DIR` environment variable tells the plugins where to find TOML configuration files

### Setting Up TOML Configuration

1. **Start InfluxDB 3 with the PLUGIN_DIR environment variable set**:
```bash
PLUGIN_DIR=~/.plugins influxdb3 serve --node-id node0 --object-store file --data-dir ~/.influxdb3 --plugin-dir ~/.plugins
```

2. **Copy the example TOML configuration file to your plugin directory**:
```bash
cp influxdb_to_iceberg_config_scheduler.toml ~/.plugins/
```

3. **Edit the TOML file** to match your requirements:
```toml
# Required parameters
measurement = "cpu"
window = "1h"

# Optional parameters
namespace = "monitoring"
table_name = "cpu_metrics"

# Iceberg catalog configuration
[catalog_configs]
type = "sql"
uri = "http://nessie:9000"
warehouse = "s3://iceberg-warehouse/"
```

4. **Create a trigger using the `config_file_path` argument**:
```bash
influxdb3 create trigger \
  --database mydb \
  --plugin-filename influxdb_to_iceberg.py \
  --trigger-spec "every:1h" \
  --trigger-arguments config_file_path=influxdb_to_iceberg_config_scheduler.toml \
  iceberg_toml_trigger
```

## Code overview

### Files

- `influxdb_to_iceberg.py`: The main plugin code containing handlers for scheduled and HTTP triggers
- `influxdb_to_iceberg_config_scheduler.toml`: Example TOML configuration file for scheduled triggers

### Logging

Logs are stored in the `_internal` database (or the database where the trigger is created) in the `system.processing_engine_logs` table. To view logs:

```bash
influxdb3 query --database _internal "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'your_trigger_name'"
```

Log columns:
- **event_time**: Timestamp of the log event
- **trigger_name**: Name of the trigger that generated the log
- **log_level**: Severity level (INFO, WARN, ERROR)
- **log_text**: Message describing the action or error

### Main functions

#### `process_scheduled_call(influxdb3_local, call_time, args)`
Handles scheduled data transfers.
Queries data within the specified window and appends it to Iceberg tables.

Key operations:
1. Parses configuration and decodes catalog settings
2. Queries source measurement with optional field filtering
3. Creates the Iceberg table if needed
4. Appends data to the Iceberg table

#### `process_http_request(influxdb3_local, request_body, args)`
Handles on-demand data transfers via HTTP.
Supports backfill operations with configurable batch sizes.

Key operations:
1. Validates request body parameters
2. Determines the backfill time range
3. Processes data in batches
4. Returns transfer status

## Troubleshooting

### Common issues

#### Issue: "Failed to decode catalog_configs" error
**Solution**: Ensure the catalog configuration is properly base64-encoded:
```bash
# Create JSON file
echo '{"uri": "http://nessie:9000"}' > config.json
# Encode to base64
base64 config.json
```

#### Issue: "Failed to create Iceberg table" error
**Solution**:
1. Verify the catalog configuration is correct
2. Check warehouse path permissions
3. Ensure the required PyIceberg extras are installed:
```bash
influxdb3 install package "pyiceberg[s3fs]"
```

#### Issue: No data in Iceberg table after transfer
**Solution**:
1. Check if the source measurement contains data:
```bash
influxdb3 query --database mydb "SELECT COUNT(*) FROM measurement"
```
2. Verify the time window covers the data:
```bash
influxdb3 query --database mydb "SELECT MIN(time), MAX(time) FROM measurement"
```
3. Check logs for errors:
```bash
influxdb3 query --database _internal "SELECT * FROM system.processing_engine_logs WHERE log_level = 'ERROR'"
```

#### Issue: "Schema evolution not supported" error
**Solution**: The plugin doesn't handle schema changes. If fields change:
1. Create a new table with a different name
2. Or manually update the Iceberg table schema

### Debugging tips

1. **Test catalog connectivity**:
```python
from pyiceberg.catalog import load_catalog
catalog = load_catalog("my_catalog", **catalog_configs)
print(catalog.list_namespaces())
```

2. **Verify field names**:
```bash
influxdb3 query --database mydb "SHOW FIELD KEYS FROM measurement"
```

3. **Use smaller windows** for initial testing:
```bash
--trigger-arguments 'window=5m,...'
```

### Performance considerations

- **File sizing**: Each scheduled run creates new Parquet files. Use appropriate window sizes to balance file count and size
- **Batch processing**: For HTTP transfers, adjust `batch_size` based on available memory
- **Field filtering**: Use `included_fields` to reduce data volume when only specific fields are needed
- **Catalog choice**: SQL catalogs (SQLite) are simpler, but REST catalogs scale better

## Report an issue

For plugin issues, see the Plugins repository [issues page](https://github.com/influxdata/influxdb3_plugins/issues).

## Find support for {{% product-name %}}

The [InfluxDB Discord server](https://discord.gg/9zaNCW2PRT) is the best place to find support for InfluxDB 3 Core and InfluxDB 3 Enterprise.
For other InfluxDB versions, see the [Support and feedback](#bug-reports-and-feedback) options.

@ -0,0 +1,333 @@

The MAD-Based Anomaly Detection Plugin provides real-time anomaly detection for time series data in InfluxDB 3 using Median Absolute Deviation (MAD).
Detect outliers in your field values as data is written, with configurable thresholds for both count-based and duration-based alerts.
The plugin maintains in-memory deques for efficient computation and integrates with the Notification Sender Plugin to deliver alerts via multiple channels.

## Configuration

### Required parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `measurement` | string | required | Source measurement to monitor for anomalies |
| `mad_thresholds` | string | required | MAD threshold conditions. Format: `field:k:window_count:threshold` |
| `senders` | string | required | Dot-separated list of notification channels (for example, `"slack.discord"`) |

### MAD threshold parameters

| Component | Description | Example |
|-----------|-------------|---------|
| `field_name` | The numeric field to monitor | `temp` |
| `k` | MAD multiplier for anomaly threshold | `2.5` |
| `window_count` | Number of recent points for MAD computation | `20` |
| `threshold` | Count (integer) or duration (for example, `"2m"`, `"1h"`) | `5` or `2m` |

Multiple thresholds are separated by `@`: `temp:2.5:20:5@load:3:10:2m`
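
To make the `k` multiplier concrete, here is a minimal Python sketch of the MAD outlier test (illustrative; the plugin maintains rolling windows per field rather than a static list):

```python
from statistics import median

# A window of recent values for one field, plus the MAD multiplier k.
window = [22.1, 22.3, 22.0, 22.4, 22.2, 22.3, 22.1, 22.2]
k = 2.5

med = median(window)
mad = median(abs(x - med) for x in window)

def is_outlier(value):
    # Anomalous if the value lies outside median +/- k * MAD.
    return abs(value - med) > k * mad

print(is_outlier(22.25))  # False: close to the median
print(is_outlier(45.8))   # True: far outside median +/- k*MAD
```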

### Optional parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `influxdb3_auth_token` | string | env var | API token for InfluxDB 3 (or use the `INFLUXDB3_AUTH_TOKEN` env var) |
| `state_change_count` | string | `"0"` | Maximum allowed value flips before suppressing notifications |
| `notification_count_text` | string | see below | Template for count-based alerts with variables `$table`, `$field`, `$threshold_count`, `$tags` |
| `notification_time_text` | string | see below | Template for duration-based alerts with variables `$table`, `$field`, `$threshold_time`, `$tags` |
| `notification_path` | string | `"notify"` | URL path for the notification sending plugin |
| `port_override` | string | `"8181"` | Port number where InfluxDB accepts requests |
| `config_file_path` | string | none | Path to TOML config file relative to `PLUGIN_DIR` |

Default notification templates:
- Count: `"MAD count alert: Field $field in $table outlier for $threshold_count consecutive points. Tags: $tags"`
- Time: `"MAD duration alert: Field $field in $table outlier for $threshold_time. Tags: $tags"`

### Notification channel parameters

#### Slack
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `slack_webhook_url` | string | Yes | Webhook URL from Slack |
| `slack_headers` | string | No | Base64-encoded HTTP headers |

#### Discord
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `discord_webhook_url` | string | Yes | Webhook URL from Discord |
| `discord_headers` | string | No | Base64-encoded HTTP headers |

#### HTTP
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `http_webhook_url` | string | Yes | Custom webhook URL for POST requests |
| `http_headers` | string | No | Base64-encoded HTTP headers |

#### SMS/WhatsApp (via Twilio)
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `twilio_sid` | string | Yes | Twilio Account SID (or use the `TWILIO_SID` env var) |
| `twilio_token` | string | Yes | Twilio Auth Token (or use the `TWILIO_TOKEN` env var) |
| `twilio_from_number` | string | Yes | Sender phone number |
| `twilio_to_number` | string | Yes | Recipient phone number |

## Requirements

### Software requirements
- InfluxDB 3 Core or Enterprise with Processing Engine enabled
- Python packages:
  - `requests` (for notification delivery)
- Notification Sender Plugin (required for sending alerts)

### Installation steps

1. Start InfluxDB 3 with plugin support:
```bash
influxdb3 serve \
  --node-id node0 \
  --object-store file \
  --data-dir ~/.influxdb3 \
  --plugin-dir ~/.plugins
```

2. Install required Python packages:
```bash
influxdb3 install package requests
```

3. Install and configure the [Notification Sender Plugin](https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/notifier)

## Trigger setup

### Real-time anomaly detection

Detect anomalies as data is written:

```bash
influxdb3 create trigger \
  --database mydb \
  --plugin-filename gh:influxdata/mad_check/mad_check_plugin.py \
  --trigger-spec "all_tables" \
  --trigger-arguments 'measurement=cpu,mad_thresholds="temp:2.5:20:5@load:3:10:2m",senders=slack,slack_webhook_url="https://hooks.slack.com/services/..."' \
  mad_anomaly_detector
```

## Example usage

### Example 1: Basic count-based anomaly detection

Detect when temperature exceeds 2.5 MADs from the median for 5 consecutive points:

```bash
# Create trigger for count-based detection
influxdb3 create trigger \
  --database sensors \
  --plugin-filename gh:influxdata/mad_check/mad_check_plugin.py \
  --trigger-spec "all_tables" \
  --trigger-arguments 'measurement=environment,mad_thresholds="temperature:2.5:20:5",senders=slack,slack_webhook_url="https://hooks.slack.com/services/YOUR/WEBHOOK/URL"' \
  temp_anomaly_detector

# Write test data with an anomaly
influxdb3 write \
  --database sensors \
  "environment,room=office temperature=22.1"
influxdb3 write \
  --database sensors \
  "environment,room=office temperature=22.3"
influxdb3 write \
  --database sensors \
  "environment,room=office temperature=45.8" # Anomaly
# Continue writing anomalous values...
```

### Expected results
- Plugin maintains a 20-point window of recent temperature values
- Computes the median and MAD from this window
- When temperature exceeds median ± 2.5*MAD for 5 consecutive points, sends a Slack notification
- Notification includes: "MAD count alert: Field temperature in environment outlier for 5 consecutive points. Tags: room=office"

### Example 2: Duration-based anomaly detection with multiple fields

Monitor CPU load and memory usage with different thresholds:

```bash
# Create trigger with multiple thresholds
influxdb3 create trigger \
  --database monitoring \
  --plugin-filename gh:influxdata/mad_check/mad_check_plugin.py \
  --trigger-spec "all_tables" \
  --trigger-arguments 'measurement=system_metrics,mad_thresholds="cpu_load:3:30:2m@memory_used:2.5:30:5m",senders=slack.discord,slack_webhook_url="https://hooks.slack.com/...",discord_webhook_url="https://discord.com/api/webhooks/..."' \
  system_anomaly_detector
```

### Expected results
- Monitors two fields independently:
  - `cpu_load`: Alerts when it exceeds 3 MADs for 2 minutes
  - `memory_used`: Alerts when it exceeds 2.5 MADs for 5 minutes
- Sends notifications to both Slack and Discord

### Example 3: Anomaly detection with flip suppression

Prevent alert fatigue from rapidly fluctuating values:

```bash
# Create trigger with flip suppression
influxdb3 create trigger \
  --database iot \
  --plugin-filename gh:influxdata/mad_check/mad_check_plugin.py \
  --trigger-spec "all_tables" \
  --trigger-arguments 'measurement=sensor_data,mad_thresholds="vibration:2:50:10",state_change_count=3,senders=http,http_webhook_url="https://api.example.com/alerts",notification_count_text="Vibration anomaly detected on $table. Field: $field, Tags: $tags"' \
  vibration_monitor
```

### Expected results
- Detects vibration anomalies exceeding 2 MADs for 10 consecutive points
- If values flip between normal/anomalous more than 3 times in the 50-point window, suppresses notifications (see the sketch after this list)
- Sends a custom formatted message to the HTTP endpoint
|
||||
|
||||
## Using TOML configuration files

This plugin supports using TOML configuration files to specify all plugin arguments.

### Important requirements

**To use TOML configuration files, you must set the `PLUGIN_DIR` environment variable in the InfluxDB 3 host environment.**

### Setting up TOML configuration

1. **Start InfluxDB 3 with the `PLUGIN_DIR` environment variable set**:

   ```bash
   PLUGIN_DIR=~/.plugins influxdb3 serve --node-id node0 --object-store file --data-dir ~/.influxdb3 --plugin-dir ~/.plugins
   ```

2. **Copy the example TOML configuration file to your plugin directory**:

   ```bash
   cp mad_anomaly_config_data_writes.toml ~/.plugins/
   ```

3. **Edit the TOML file** to match your requirements:

   ```toml
   # Required parameters
   measurement = "cpu"
   mad_thresholds = "temp:2.5:20:5@load:3:10:2m"
   senders = "slack"

   # Notification settings
   slack_webhook_url = "https://hooks.slack.com/services/..."
   notification_count_text = "Custom alert: $field anomaly detected"
   ```

4. **Create a trigger using the `config_file_path` argument**:

   ```bash
   influxdb3 create trigger \
     --database mydb \
     --plugin-filename mad_check_plugin.py \
     --trigger-spec "all_tables" \
     --trigger-arguments config_file_path=mad_anomaly_config_data_writes.toml \
     mad_toml_trigger
   ```

## Code overview

### Files

- `mad_check_plugin.py`: The main plugin code containing the handler for data write triggers
- `mad_anomaly_config_data_writes.toml`: Example TOML configuration file

### Logging

Logs are stored in the `_internal` database in the `system.processing_engine_logs` table:

```bash
influxdb3 query --database _internal "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'your_trigger_name'"
```

Log columns:

- **event_time**: Timestamp of the log event
- **trigger_name**: Name of the trigger that generated the log
- **log_level**: Severity level (INFO, WARN, ERROR)
- **log_text**: Message describing the action or error

### Main functions

#### `process_writes(influxdb3_local, table_batches, args)`

Handles real-time anomaly detection on incoming data.

Key operations:

1. Filters table batches for the specified measurement
2. Maintains in-memory deques of recent values per field
3. Computes MAD for each monitored field
4. Tracks consecutive outliers and duration
5. Sends notifications when thresholds are met

### Key algorithms

#### MAD (Median Absolute Deviation) calculation

```python
import statistics

# values: the sliding window of recent points for one monitored field
median = statistics.median(values)
# MAD is the median of absolute deviations from the window median
mad = statistics.median([abs(x - median) for x in values])
# k is the multiplier from the threshold configuration (for example, 2.5)
threshold = k * mad
is_anomaly = abs(value - median) > threshold
```

#### Flip detection

Counts transitions between normal and anomalous states within the window to prevent alert fatigue from rapidly changing values.
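A minimal sketch of the idea, assuming a list of boolean anomaly flags for the window (the plugin's actual state tracking may differ):

```python
def count_flips(anomaly_flags):
    """Count normal<->anomalous transitions in a window of boolean flags."""
    return sum(
        1 for prev, curr in zip(anomaly_flags, anomaly_flags[1:]) if prev != curr
    )

# Suppress notifications when the signal is unstable, for example:
# flags = [False, True, False, True, True, False]
# if count_flips(flags) > state_change_count: skip alerting
```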

## Troubleshooting

### Common issues

#### Issue: No notifications being sent

**Solution**:

1. Verify the Notification Sender Plugin is installed and running.
2. Check that webhook URLs are correct:

   ```bash
   influxdb3 query --database _internal "SELECT * FROM system.processing_engine_logs WHERE log_text LIKE '%notification%'"
   ```

3. Ensure notification channel parameters are provided for the selected senders.

#### Issue: "Invalid MAD thresholds format" error

**Solution**: Check that the threshold format is correct (a parsing sketch follows this list):

- Count-based: `field:k:window:count` (for example, `temp:2.5:20:5`)
- Duration-based: `field:k:window:duration` (for example, `temp:2.5:20:2m`)
- Multiple thresholds separated by `@`
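A minimal sketch of how such a specification can be parsed (an illustration only, not the plugin's actual parser):

```python
def parse_mad_thresholds(spec):
    """Parse 'field:k:window:count_or_duration@...' into threshold dicts."""
    thresholds = []
    for part in spec.split("@"):
        field, k, window, trigger = part.split(":")
        thresholds.append({
            "field": field,
            "k": float(k),          # MAD multiplier
            "window": int(window),  # points kept in the sliding window
            # Duration-based if the last token ends with a time unit,
            # count-based otherwise
            "duration": trigger if trigger[-1].isalpha() else None,
            "count": None if trigger[-1].isalpha() else int(trigger),
        })
    return thresholds

print(parse_mad_thresholds("temp:2.5:20:5@load:3:10:2m"))
```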

#### Issue: Too many false positive alerts

**Solution**:

1. Increase the k multiplier (for example, from 2.5 to 3.0).
2. Increase the threshold count or duration.
3. Enable flip suppression with `state_change_count`.
4. Increase the window size for more stable statistics.

#### Issue: Missing anomalies (false negatives)

**Solution**:

1. Decrease the k multiplier.
2. Decrease the threshold count or duration.
3. Check if data has seasonal patterns that affect the median.

### Debugging tips

1. **Monitor deque sizes**:

   ```bash
   influxdb3 query --database _internal "SELECT * FROM system.processing_engine_logs WHERE log_text LIKE '%Deque%'"
   ```

2. **Check MAD calculations**:

   ```bash
   influxdb3 query --database _internal "SELECT * FROM system.processing_engine_logs WHERE log_text LIKE '%MAD:%'"
   ```

3. **Test with known anomalies**:

   Write test data with obvious outliers to verify detection.

### Performance considerations

- **Memory usage**: Each field maintains a deque of `window_count` values
- **Computation**: MAD is computed on every data write for monitored fields
- **Caching**: Measurement and tag names are cached for 1 hour
- **Notification retries**: Failed notifications retry up to 3 times with exponential backoff

## Report an issue

For plugin issues, see the Plugins repository [issues page](https://github.com/influxdata/influxdb3_plugins/issues).

## Find support for {{% product-name %}}

The [InfluxDB Discord server](https://discord.gg/9zaNCW2PRT) is the best place to find support for InfluxDB 3 Core and InfluxDB 3 Enterprise.
For other InfluxDB versions, see the [Support and feedback](#bug-reports-and-feedback) options.

@@ -0,0 +1,196 @@

The Notifier Plugin provides multi-channel notification capabilities for InfluxDB 3, enabling real-time alert delivery through various communication channels.
Send notifications via Slack, Discord, HTTP webhooks, SMS, or WhatsApp based on incoming HTTP requests.
Acts as a centralized notification dispatcher that receives data from other plugins or external systems and routes notifications to the appropriate channels.

## Configuration

### Request body parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `notification_text` | string | required | Text content of the notification message |
| `senders_config` | object | required | Configuration for each notification channel |

### Sender-specific configuration

The `senders_config` parameter accepts channel configurations where keys are sender names and values contain channel-specific settings:

#### Slack notifications

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `slack_webhook_url` | string | required | Slack webhook URL |
| `slack_headers` | string | none | Base64-encoded JSON headers |

#### Discord notifications

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `discord_webhook_url` | string | required | Discord webhook URL |
| `discord_headers` | string | none | Base64-encoded JSON headers |

#### HTTP webhook notifications

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `http_webhook_url` | string | required | Custom webhook URL for HTTP POST |
| `http_headers` | string | none | Base64-encoded JSON headers |

#### SMS notifications (via Twilio)

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `twilio_sid` | string | required | Twilio Account SID (or use `TWILIO_SID` env var) |
| `twilio_token` | string | required | Twilio Auth Token (or use `TWILIO_TOKEN` env var) |
| `twilio_from_number` | string | required | Sender phone number in E.164 format |
| `twilio_to_number` | string | required | Recipient phone number in E.164 format |

#### WhatsApp notifications (via Twilio)

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `twilio_sid` | string | required | Twilio Account SID (or use `TWILIO_SID` env var) |
| `twilio_token` | string | required | Twilio Auth Token (or use `TWILIO_TOKEN` env var) |
| `twilio_from_number` | string | required | Sender WhatsApp number in E.164 format |
| `twilio_to_number` | string | required | Recipient WhatsApp number in E.164 format |

## Installation

### Install dependencies

Install required Python packages:

```bash
influxdb3 install package httpx
influxdb3 install package twilio
```

### Create trigger

Create an HTTP trigger to handle notification requests:

```bash
influxdb3 create trigger \
  --database mydb \
  --plugin-filename notifier_plugin.py \
  --trigger-spec "request:notify" \
  notification_trigger
```

This registers an HTTP endpoint at `/api/v3/engine/notify`.

### Enable trigger

```bash
influxdb3 enable trigger --database mydb notification_trigger
```

## Examples

### Slack notification

Send a notification to Slack:

```bash
curl -X POST http://localhost:8181/api/v3/engine/notify \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "notification_text": "Database alert: High CPU usage detected",
    "senders_config": {
      "slack": {
        "slack_webhook_url": "https://hooks.slack.com/services/..."
      }
    }
  }'
```

### SMS notification

Send an SMS via Twilio:

```bash
curl -X POST http://localhost:8181/api/v3/engine/notify \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "notification_text": "Critical alert: System down",
    "senders_config": {
      "sms": {
        "twilio_from_number": "+1234567890",
        "twilio_to_number": "+0987654321"
      }
    }
  }'
```

### Multi-channel notification

Send notifications via multiple channels simultaneously:

```bash
curl -X POST http://localhost:8181/api/v3/engine/notify \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "notification_text": "Performance warning: Memory usage above threshold",
    "senders_config": {
      "slack": {
        "slack_webhook_url": "https://hooks.slack.com/services/..."
      },
      "discord": {
        "discord_webhook_url": "https://discord.com/api/webhooks/..."
      },
      "whatsapp": {
        "twilio_from_number": "+1234567890",
        "twilio_to_number": "+0987654321"
      }
    }
  }'
```

## Features

- **Multi-channel delivery**: Support for Slack, Discord, HTTP webhooks, SMS, and WhatsApp
- **Retry logic**: Automatic retry with exponential backoff for failed notifications (sketched below)
- **Environment variables**: Credential management via environment variables
- **Asynchronous processing**: Non-blocking HTTP notifications for better performance
- **Flexible configuration**: Channel-specific settings and optional headers support
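A minimal sketch of retry with exponential backoff, using `httpx` (which this plugin installs); the plugin's actual implementation may differ:

```python
import time

import httpx

def send_with_retry(url, payload, max_retries=3, base_delay=1.0):
    """POST payload to url, retrying with exponential backoff on failure."""
    for attempt in range(max_retries):
        try:
            response = httpx.post(url, json=payload, timeout=10.0)
            response.raise_for_status()
            return True
        except httpx.HTTPError:
            # Back off 1s, 2s, 4s, ... before the next attempt
            time.sleep(base_delay * (2 ** attempt))
    return False
```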

## Troubleshooting

### Common issues

**Notification not delivered**

- Verify webhook URLs are correct and accessible
- Check Twilio credentials and phone number formats
- Review logs for specific error messages

**Authentication errors**

- Ensure Twilio credentials are set via environment variables or request parameters
- Verify webhook URLs have proper authentication if required

**Rate limiting**

- The plugin includes built-in retry logic with exponential backoff
- Consider implementing client-side rate limiting for high-frequency notifications

### Environment variables

For security, set Twilio credentials as environment variables:

```bash
export TWILIO_SID=your_account_sid
export TWILIO_TOKEN=your_auth_token
```

### Viewing logs

Check processing logs in the InfluxDB system tables:

```bash
influxdb3 query --database _internal "SELECT * FROM system.processing_engine_logs WHERE log_text LIKE '%notifier%' ORDER BY event_time DESC LIMIT 10"
```

## Report an issue

For plugin issues, see the Plugins repository [issues page](https://github.com/influxdata/influxdb3_plugins/issues).

## Find support for {{% product-name %}}

The [InfluxDB Discord server](https://discord.gg/9zaNCW2PRT) is the best place to find support for {{% product-name %}}.
For other InfluxDB versions, see the [Support and feedback](#bug-reports-and-feedback) options.

@@ -0,0 +1,234 @@

The Prophet Forecasting Plugin enables time series forecasting for data in InfluxDB 3 using Facebook's Prophet library.
Generate predictions for future data points based on historical patterns, including seasonality, trends, and custom events.
Supports both scheduled batch forecasting and on-demand HTTP-triggered forecasts with model persistence and validation capabilities.

## Configuration

### Scheduled trigger parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `measurement` | string | required | Source measurement containing historical data |
| `field` | string | required | Field name to forecast |
| `window` | string | required | Historical data window. Format: `<number><unit>` (for example, `"30d"`) |
| `forecast_horizont` | string | required | Forecast duration. Format: `<number><unit>` (for example, `"2d"`) |
| `tag_values` | string | required | Dot-separated tag filters (for example, `"region:us-west.device:sensor1"`) |
| `target_measurement` | string | required | Destination measurement for forecast results |
| `model_mode` | string | required | Operation mode: "train" or "predict" |
| `unique_suffix` | string | required | Unique model identifier for versioning |

### HTTP trigger parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `measurement` | string | required | Source measurement containing historical data |
| `field` | string | required | Field name to forecast |
| `forecast_horizont` | string | required | Forecast duration. Format: `<number><unit>` (for example, `"7d"`) |
| `tag_values` | object | required | Tag filters as a JSON object (for example, `{"region":"us-west"}`) |
| `target_measurement` | string | required | Destination measurement for forecast results |
| `unique_suffix` | string | required | Unique model identifier for versioning |
| `start_time` | string | required | Historical window start (ISO 8601 format) |
| `end_time` | string | required | Historical window end (ISO 8601 format) |

### Advanced parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `seasonality_mode` | string | "additive" | Prophet seasonality mode: "additive" or "multiplicative" |
| `changepoint_prior_scale` | number | 0.05 | Flexibility of trend changepoints |
| `changepoints` | string/array | none | Changepoint dates (ISO format) |
| `holiday_date_list` | string/array | none | Custom holiday dates (ISO format) |
| `holiday_names` | string/array | none | Holiday names corresponding to dates |
| `holiday_country_names` | string/array | none | Country codes for built-in holidays |
| `inferred_freq` | string | auto | Manual frequency specification (for example, `"1D"`, `"1H"`) |
| `validation_window` | string | "0s" | Validation period duration |
| `msre_threshold` | number | infinity | Maximum acceptable Mean Squared Relative Error |
| `target_database` | string | current | Database for forecast storage |
| `save_mode` | string | "false" | Whether to save/load models (HTTP only) |

### Notification parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `is_sending_alert` | string | "false" | Enable alerts on validation failure |
| `notification_text` | string | template | Custom alert message template |
| `senders` | string | none | Dot-separated notification channels |
| `notification_path` | string | "notify" | Notification endpoint path |
| `influxdb3_auth_token` | string | env var | Authentication token |
| `config_file_path` | string | none | TOML config file path relative to `PLUGIN_DIR` |

## Installation

### Install dependencies

Install required Python packages:

```bash
influxdb3 install package pandas
influxdb3 install package numpy
influxdb3 install package requests
influxdb3 install package prophet
```

### Create scheduled trigger

Create a trigger for periodic forecasting:

```bash
influxdb3 create trigger \
  --database mydb \
  --plugin-filename prophet_forecasting.py \
  --trigger-spec "every:1d" \
  --trigger-arguments "measurement=temperature,field=value,window=30d,forecast_horizont=2d,tag_values=region:us-west.device:sensor1,target_measurement=temperature_forecast,model_mode=train,unique_suffix=20250619_v1" \
  prophet_forecast_trigger
```

### Create HTTP trigger

Create a trigger for on-demand forecasting:

```bash
influxdb3 create trigger \
  --database mydb \
  --plugin-filename prophet_forecasting.py \
  --trigger-spec "request:forecast" \
  prophet_forecast_http_trigger
```

### Enable triggers

```bash
influxdb3 enable trigger --database mydb prophet_forecast_trigger
influxdb3 enable trigger --database mydb prophet_forecast_http_trigger
```

## Examples

### On-demand forecasting

Example HTTP request for on-demand forecasting:

```bash
curl -X POST http://localhost:8181/api/v3/engine/forecast \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "measurement": "temperature",
    "field": "value",
    "forecast_horizont": "7d",
    "tag_values": {"region":"us-west","device":"sensor1"},
    "target_measurement": "temperature_forecast",
    "unique_suffix": "model_v1_20250722",
    "start_time": "2025-05-20T00:00:00Z",
    "end_time": "2025-06-19T00:00:00Z",
    "seasonality_mode": "additive",
    "changepoint_prior_scale": 0.05,
    "validation_window": "3d",
    "msre_threshold": 0.05
  }'
```

### Advanced forecasting with holidays

```bash
curl -X POST http://localhost:8181/api/v3/engine/forecast \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "measurement": "sales",
    "field": "revenue",
    "forecast_horizont": "30d",
    "tag_values": {"store":"main_branch"},
    "target_measurement": "revenue_forecast",
    "unique_suffix": "retail_model_v2",
    "start_time": "2024-01-01T00:00:00Z",
    "end_time": "2025-06-01T00:00:00Z",
    "holiday_country_names": ["US"],
    "holiday_date_list": ["2025-07-04"],
    "holiday_names": ["Independence Day"],
    "changepoints": ["2025-01-01", "2025-03-01"],
    "inferred_freq": "1D"
  }'
```

## Features

- **Dual trigger modes**: Support for both scheduled batch forecasting and on-demand HTTP requests
- **Model persistence**: Save and reuse trained models for consistent predictions
- **Forecast validation**: Built-in accuracy assessment using Mean Squared Relative Error (MSRE)
- **Holiday support**: Built-in holiday calendars and custom holiday configuration
- **Advanced seasonality**: Configurable seasonality modes and changepoint detection
- **Notification integration**: Alert delivery for validation failures via multiple channels
- **Flexible time intervals**: Support for seconds, minutes, hours, days, weeks, months, quarters, and years

## Output data structure

Forecast results are written to the target measurement with the following structure:

### Tags

- `model_version`: Model identifier from the `unique_suffix` parameter
- Additional tags from original measurement query filters

### Fields

- `forecast`: Predicted value (`yhat` from the Prophet model)
- `yhat_lower`: Lower bound of the confidence interval
- `yhat_upper`: Upper bound of the confidence interval
- `run_time`: Forecast execution timestamp (ISO 8601 format)

### Timestamp

- `time`: Forecast timestamp in nanoseconds

## Troubleshooting

### Common issues

**Model training failures**

- Ensure sufficient historical data points for the specified window
- Verify data contains the required time column and forecast field
- Check for data gaps that might affect frequency inference
- Set `inferred_freq` manually if automatic detection fails

**Validation failures**

- Review MSRE threshold settings: values that are too low can cause frequent failures
- Ensure the validation window provides sufficient data for comparison
- Check that validation data aligns temporally with the forecast period

**HTTP trigger issues**

- Verify the JSON request body format matches the expected schema
- Check authentication tokens and database permissions
- Ensure `start_time` and `end_time` are in valid ISO 8601 format with timezone

**Model persistence problems**

- Verify plugin directory permissions for model storage
- Check disk space availability in the plugin directory
- Ensure `unique_suffix` values don't conflict between different model versions

### Model storage

- **Location**: Models are stored in the `prophet_models/` directory within the plugin directory
- **Naming**: Files are named `prophet_model_{unique_suffix}.json`
- **Versioning**: Use descriptive `unique_suffix` values for model management

### Time format support

Supported time units for `window`, `forecast_horizont`, and `validation_window`:

- `s` (seconds), `min` (minutes), `h` (hours)
- `d` (days), `w` (weeks)
- `m` (months ≈30.42 days), `q` (quarters ≈91.25 days), `y` (years = 365 days)
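A minimal sketch of converting such a string into a `timedelta`, using the approximate month/quarter lengths listed above (an illustration only, not the plugin's parser):

```python
import re
from datetime import timedelta

# Approximate unit lengths in days, per the list above
UNIT_DAYS = {"d": 1, "w": 7, "m": 30.42, "q": 91.25, "y": 365}
UNIT_SECONDS = {"s": 1, "min": 60, "h": 3600}

def parse_window(spec):
    """Convert '30d', '5min', '2q', ... into a timedelta."""
    match = re.fullmatch(r"(\d+)(s|min|h|d|w|m|q|y)", spec)
    if not match:
        raise ValueError(f"Invalid window format: {spec}")
    value, unit = int(match.group(1)), match.group(2)
    if unit in UNIT_SECONDS:
        return timedelta(seconds=value * UNIT_SECONDS[unit])
    return timedelta(days=value * UNIT_DAYS[unit])

print(parse_window("30d"), parse_window("2q"))
```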

### Validation process

When `validation_window` is set:

1. Training data: `current_time - window` to `current_time - validation_window`
2. Validation data: `current_time - validation_window` to `current_time`
3. MSRE calculation: `mean((actual - predicted)² / actual²)` (sketched below)
4. Threshold comparison and optional alert dispatch
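A minimal sketch of the MSRE computation from step 3 (an illustration only; it assumes the actual values are nonzero):

```python
def msre(actual, predicted):
    """Mean Squared Relative Error: mean((actual - predicted)^2 / actual^2)."""
    if len(actual) != len(predicted) or not actual:
        raise ValueError("Series must be non-empty and the same length")
    # Assumes no zeros in `actual`; zero values would divide by zero
    return sum((a - p) ** 2 / a ** 2 for a, p in zip(actual, predicted)) / len(actual)

# Compare against msre_threshold to decide whether to alert, for example:
# if msre(validation_actuals, validation_forecast) > msre_threshold: alert()
```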

## Report an issue

For plugin issues, see the Plugins repository [issues page](https://github.com/influxdata/influxdb3_plugins/issues).

## Find support for {{% product-name %}}

The [InfluxDB Discord server](https://discord.gg/9zaNCW2PRT) is the best place to find support for {{% product-name %}}.
For other InfluxDB versions, see the [Support and feedback](#bug-reports-and-feedback) options.

@@ -0,0 +1,198 @@

The State Change Plugin provides comprehensive field monitoring and threshold detection for InfluxDB 3 data streams.
Detect field value changes, monitor threshold conditions, and trigger notifications when specified criteria are met.
Supports both scheduled batch monitoring and real-time data write monitoring with configurable stability checks and multi-channel alerts.

## Configuration

### Scheduled trigger parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `measurement` | string | required | Measurement to monitor for field changes |
| `field_change_count` | string | required | Dot-separated field thresholds (for example, `"temp:3.load:2"`) |
| `senders` | string | required | Dot-separated notification channels |
| `window` | string | required | Time window for analysis. Format: `<number><unit>` |

### Data write trigger parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `measurement` | string | required | Measurement to monitor for threshold conditions |
| `field_thresholds` | string | required | Threshold conditions (for example, `"temp:30:10@status:ok:1h"`) |
| `senders` | string | required | Dot-separated notification channels |

### Notification parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `influxdb3_auth_token` | string | env var | InfluxDB 3 API token |
| `notification_text` | string | template | Message template for scheduled notifications |
| `notification_count_text` | string | template | Message template for count-based notifications |
| `notification_time_text` | string | template | Message template for time-based notifications |
| `notification_path` | string | "notify" | Notification endpoint path |
| `port_override` | number | 8181 | InfluxDB port override |

### Advanced parameters

The stability check defined by `state_change_window` and `state_change_count` is sketched after this table.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `state_change_window` | number | 1 | Recent values to check for stability |
| `state_change_count` | number | 1 | Max changes allowed within stability window |
| `config_file_path` | string | none | TOML config file path relative to `PLUGIN_DIR` |
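A minimal sketch of this stability filter, assuming a list of the most recent field values (an illustration only, not the plugin's source):

```python
def is_stable(recent_values, state_change_window=1, state_change_count=1):
    """Return True if the last `state_change_window` values changed
    at most `state_change_count` times (that is, the signal is stable)."""
    window = recent_values[-state_change_window:]
    changes = sum(
        1 for prev, curr in zip(window, window[1:]) if prev != curr
    )
    return changes <= state_change_count

# Only alert on values that are not flapping, for example:
# if is_stable(values, state_change_window=5, state_change_count=2): notify()
```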

### Channel-specific configuration

Notification channels require additional parameters based on the sender type (same as the [Notifier Plugin](../notifier/README.md)).

## Installation

### Install dependencies

Install required Python packages:

```bash
influxdb3 install package requests
```

### Create scheduled trigger

Create a trigger for periodic field change monitoring:

```bash
influxdb3 create trigger \
  --database mydb \
  --plugin-filename state_change_check_plugin.py \
  --trigger-spec "every:10m" \
  --trigger-arguments "measurement=cpu,field_change_count=temp:3.load:2,window=10m,senders=slack,slack_webhook_url=https://hooks.slack.com/services/..." \
  state_change_scheduler
```

### Create data write trigger

Create a trigger for real-time threshold monitoring:

```bash
influxdb3 create trigger \
  --database mydb \
  --plugin-filename state_change_check_plugin.py \
  --trigger-spec "all_tables" \
  --trigger-arguments "measurement=cpu,field_thresholds=temp:30:10@status:ok:1h,senders=slack,slack_webhook_url=https://hooks.slack.com/services/..." \
  state_change_datawrite
```

### Enable triggers

```bash
influxdb3 enable trigger --database mydb state_change_scheduler
influxdb3 enable trigger --database mydb state_change_datawrite
```

## Examples

### Scheduled field change monitoring

Monitor field changes over a time window and alert when thresholds are exceeded:

```bash
influxdb3 create trigger \
  --database sensors \
  --plugin-filename state_change_check_plugin.py \
  --trigger-spec "every:15m" \
  --trigger-arguments "measurement=temperature,field_change_count=value:5,window=1h,senders=slack,slack_webhook_url=https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX,notification_text=Temperature sensor $field changed $changes times in $window for tags $tags" \
  temp_change_monitor
```

### Real-time threshold detection

Monitor data writes for threshold conditions:

```bash
influxdb3 create trigger \
  --database monitoring \
  --plugin-filename state_change_check_plugin.py \
  --trigger-spec "all_tables" \
  --trigger-arguments "measurement=system_metrics,field_thresholds=cpu_usage:80:5@memory_usage:90:10min,senders=discord,discord_webhook_url=https://discord.com/api/webhooks/..." \
  system_threshold_monitor
```

### Multi-condition monitoring

Monitor multiple fields with different threshold types:

```bash
influxdb3 create trigger \
  --database application \
  --plugin-filename state_change_check_plugin.py \
  --trigger-spec "all_tables" \
  --trigger-arguments "measurement=app_health,field_thresholds=error_rate:0.05:3@response_time:500:30s@status:down:1,senders=slack.sms,slack_webhook_url=https://hooks.slack.com/services/...,twilio_from_number=+1234567890,twilio_to_number=+0987654321" \
  app_health_monitor
```

## Features

- **Dual monitoring modes**: Scheduled batch monitoring and real-time data write monitoring
- **Flexible thresholds**: Support for count-based and duration-based conditions
- **Stability checks**: Configurable state change detection to reduce noise
- **Multi-channel alerts**: Integration with Slack, Discord, HTTP, SMS, and WhatsApp
- **Template notifications**: Customizable message templates with dynamic variables
- **Caching optimization**: Measurement and tag name caching for improved performance
- **Environment variable support**: Credential management via environment variables

## Troubleshooting

### Common issues

**No notifications triggered**

- Verify notification channel configuration (webhook URLs, credentials)
- Check that threshold values are appropriate for your data
- Ensure the Notifier Plugin is installed and configured
- Review plugin logs for error messages

**Too many notifications**

- Adjust `state_change_window` and `state_change_count` for stability filtering
- Increase threshold values to reduce sensitivity
- Consider longer monitoring windows for scheduled triggers

**Authentication errors**

- Set the `INFLUXDB3_AUTH_TOKEN` environment variable
- Verify the token has appropriate database permissions
- Check Twilio credentials for SMS/WhatsApp notifications

### Field threshold formats

**Count-based thresholds**

- Format: `field_name:"value":count`
- Example: `temp:"30.5":10` (10 occurrences of temperature = 30.5)

**Time-based thresholds**

- Format: `field_name:"value":duration`
- Example: `status:"error":5min` (status = error for 5 minutes)
- Supported units: `s`, `min`, `h`, `d`, `w`

**Multiple conditions**

- Separate with `@`: `temp:"30":5@humidity:"high":10min`

### Message template variables

These `$variable` placeholders are substituted when a notification fires; a rendering sketch follows the lists.

**Scheduled notifications**

- `$table`: Measurement name
- `$field`: Field name
- `$changes`: Number of changes detected
- `$window`: Time window
- `$tags`: Tag values

**Data write notifications**

- `$table`: Measurement name
- `$field`: Field name
- `$value`: Threshold value
- `$duration`: Time duration or count
- `$row`: Unique row identifier
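The placeholders follow the same convention as Python's `string.Template`; a minimal sketch of how such a template can be rendered (the plugin's own implementation may differ):

```python
from string import Template

template = Template(
    "Temperature sensor $field changed $changes times in $window for tags $tags"
)
# safe_substitute leaves unknown placeholders untouched instead of raising
message = template.safe_substitute(
    field="value",
    changes=5,
    window="1h",
    tags="room=office",
)
print(message)
```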

## Report an issue

For plugin issues, see the Plugins repository [issues page](https://github.com/influxdata/influxdb3_plugins/issues).

## Find support for {{% product-name %}}

The [InfluxDB Discord server](https://discord.gg/9zaNCW2PRT) is the best place to find support for {{% product-name %}}.
For other InfluxDB versions, see the [Support and feedback](#bug-reports-and-feedback) options.

@@ -0,0 +1,196 @@

The ADTK Anomaly Detector Plugin provides advanced time series anomaly detection for InfluxDB 3 using the ADTK (Anomaly Detection Toolkit) library.
Apply statistical and machine learning-based detection methods to identify outliers, level shifts, volatility changes, and seasonal anomalies in your data.
Features consensus-based detection requiring multiple detectors to agree before triggering alerts, reducing false positives.

## Configuration

### Required parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `measurement` | string | required | Measurement to analyze for anomalies |
| `field` | string | required | Numeric field to evaluate |
| `detectors` | string | required | Dot-separated list of ADTK detectors |
| `detector_params` | string | required | Base64-encoded JSON parameters for each detector |
| `window` | string | required | Data analysis window. Format: `<number><unit>` |
| `senders` | string | required | Dot-separated notification channels |

### Advanced parameters

The consensus logic controlled by `min_consensus` is sketched after this table.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `min_consensus` | number | 1 | Minimum detectors required to agree for anomaly flagging |
| `min_condition_duration` | string | "0s" | Minimum duration for anomaly persistence |
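A minimal sketch of consensus voting, assuming each detector's output is a boolean pandas Series on a shared timestamp index (an illustration only, not the plugin's source):

```python
import pandas as pd

def consensus_anomalies(detector_results, min_consensus=2):
    """Flag timestamps where at least `min_consensus` detectors agree.

    detector_results: list of boolean pandas Series (one per detector),
    sharing the same DatetimeIndex, True where that detector flags a point.
    """
    votes = sum(series.fillna(False).astype(int) for series in detector_results)
    return votes >= min_consensus

# Usage, assuming quantile_flags and levelshift_flags are detector outputs:
# anomalies = consensus_anomalies([quantile_flags, levelshift_flags], 2)
```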

### Notification parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `influxdb3_auth_token` | string | env var | InfluxDB 3 API token |
| `notification_text` | string | template | Custom notification message template |
| `notification_path` | string | "notify" | Notification endpoint path |
| `port_override` | number | 8181 | InfluxDB port override |
| `config_file_path` | string | none | TOML config file path relative to `PLUGIN_DIR` |

### Supported ADTK detectors

| Detector | Description | Required Parameters |
|----------|-------------|---------------------|
| `InterQuartileRangeAD` | Detects outliers using the IQR method | None |
| `ThresholdAD` | Detects values above/below thresholds | `high`, `low` (optional) |
| `QuantileAD` | Detects outliers based on quantiles | `low`, `high` (optional) |
| `LevelShiftAD` | Detects sudden level changes | `window` (int) |
| `VolatilityShiftAD` | Detects volatility changes | `window` (int) |
| `PersistAD` | Detects persistent anomalous values | None |
| `SeasonalAD` | Detects seasonal pattern deviations | None |

## Installation

### Install dependencies

Install required Python packages:

```bash
influxdb3 install package requests
influxdb3 install package adtk
influxdb3 install package pandas
```

### Create trigger

Create a scheduled trigger for anomaly detection:

```bash
influxdb3 create trigger \
  --database mydb \
  --plugin-filename adtk_anomaly_detection_plugin.py \
  --trigger-spec "every:10m" \
  --trigger-arguments "measurement=cpu,field=usage,detectors=QuantileAD.LevelShiftAD,detector_params=eyJRdWFudGlsZUFEIjogeyJsb3ciOiAwLjA1LCAiaGlnaCI6IDAuOTV9LCAiTGV2ZWxTaGlmdEFEIjogeyJ3aW5kb3ciOiA1fX0=,window=10m,senders=slack,slack_webhook_url=https://hooks.slack.com/services/..." \
  anomaly_detector
```

### Enable trigger

```bash
influxdb3 enable trigger --database mydb anomaly_detector
```

## Examples

### Basic anomaly detection

Detect outliers using quantile-based detection:

```bash
# Base64 encode detector parameters: {"QuantileAD": {"low": 0.05, "high": 0.95}}
echo '{"QuantileAD": {"low": 0.05, "high": 0.95}}' | base64

influxdb3 create trigger \
  --database sensors \
  --plugin-filename adtk_anomaly_detection_plugin.py \
  --trigger-spec "every:5m" \
  --trigger-arguments "measurement=temperature,field=value,detectors=QuantileAD,detector_params=eyJRdWFudGlsZUFEIjogeyJsb3ciOiAwLjA1LCAiaGlnaCI6IDAuOTV9fQ==,window=1h,senders=slack,slack_webhook_url=https://hooks.slack.com/services/..." \
  temp_anomaly_detector
```

### Multi-detector consensus

Use multiple detectors with a consensus requirement:

```bash
# Base64 encode: {"QuantileAD": {"low": 0.1, "high": 0.9}, "LevelShiftAD": {"window": 10}}
echo '{"QuantileAD": {"low": 0.1, "high": 0.9}, "LevelShiftAD": {"window": 10}}' | base64

influxdb3 create trigger \
  --database monitoring \
  --plugin-filename adtk_anomaly_detection_plugin.py \
  --trigger-spec "every:15m" \
  --trigger-arguments "measurement=cpu_metrics,field=utilization,detectors=QuantileAD.LevelShiftAD,detector_params=eyJRdWFudGlsZUFEIjogeyJsb3ciOiAwLjEsICJoaWdoIjogMC45fSwgIkxldmVsU2hpZnRBRCI6IHsid2luZG93IjogMTB9fQ==,min_consensus=2,window=30m,senders=discord,discord_webhook_url=https://discord.com/api/webhooks/..." \
  cpu_consensus_detector
```

### Volatility shift detection

Monitor for sudden changes in data volatility:

```bash
# Base64 encode: {"VolatilityShiftAD": {"window": 20}}
echo '{"VolatilityShiftAD": {"window": 20}}' | base64

influxdb3 create trigger \
  --database trading \
  --plugin-filename adtk_anomaly_detection_plugin.py \
  --trigger-spec "every:1m" \
  --trigger-arguments "measurement=stock_prices,field=price,detectors=VolatilityShiftAD,detector_params=eyJWb2xhdGlsaXR5U2hpZnRBRCI6IHsid2luZG93IjogMjB9fQ==,window=1h,min_condition_duration=5m,senders=sms,twilio_from_number=+1234567890,twilio_to_number=+0987654321" \
  volatility_detector
```

## Features

- **Advanced detection methods**: Multiple ADTK detectors for different anomaly types
- **Consensus-based filtering**: Reduce false positives with multi-detector agreement
- **Configurable persistence**: Require anomalies to persist before alerting
- **Multi-channel notifications**: Integration with various notification channels
- **Template messages**: Customizable notification templates with dynamic variables
- **Flexible scheduling**: Configurable detection intervals and time windows

## Troubleshooting

### Common issues

**Detector parameter encoding**

- Ensure `detector_params` is valid Base64-encoded JSON
- Use command-line Base64 encoding: `echo '{"QuantileAD": {"low": 0.05}}' | base64`
- Verify the JSON structure matches detector requirements

**False positive notifications**

- Increase `min_consensus` to require more detectors to agree
- Add `min_condition_duration` to require anomalies to persist
- Adjust detector-specific thresholds in `detector_params`

**Missing dependencies**

- Install required packages: `adtk`, `pandas`, `requests`
- Ensure the Notifier Plugin is installed for notifications

**Data quality issues**

- Verify sufficient data points in the specified window
- Check for null values or data gaps that affect detection
- Ensure the field contains numeric data suitable for analysis

### Base64 parameter encoding

Generate properly encoded detector parameters:

```bash
# Single detector
echo '{"QuantileAD": {"low": 0.05, "high": 0.95}}' | base64 -w 0

# Multiple detectors
echo '{"QuantileAD": {"low": 0.1, "high": 0.9}, "LevelShiftAD": {"window": 15}}' | base64 -w 0

# Threshold detector
echo '{"ThresholdAD": {"high": 100, "low": 10}}' | base64 -w 0
```

### Message template variables

Available variables for notification templates:

- `$table`: Measurement name
- `$field`: Field name with the anomaly
- `$value`: Anomalous value
- `$detectors`: List of detecting methods
- `$tags`: Tag values
- `$timestamp`: Anomaly timestamp

### Detector configuration reference

For detailed detector parameters and options, see the [ADTK documentation](https://adtk.readthedocs.io/en/stable/api/detectors.html).

## Report an issue

For plugin issues, see the Plugins repository [issues page](https://github.com/influxdata/influxdb3_plugins/issues).

## Find support for {{% product-name %}}

The [InfluxDB Discord server](https://discord.gg/9zaNCW2PRT) is the best place to find support for {{% product-name %}}.
For other InfluxDB versions, see the [Support and feedback](#bug-reports-and-feedback) options.

@@ -0,0 +1,241 @@

The Threshold Deadman Checks Plugin provides comprehensive monitoring capabilities for time series data in InfluxDB 3, combining real-time threshold detection with deadman monitoring.
Monitor field values against configurable thresholds, detect data absence patterns, and trigger multi-level alerts based on aggregated metrics.
Features both scheduled batch monitoring and real-time data write monitoring with configurable trigger counts and severity levels.

## Configuration

### Scheduled trigger parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `measurement` | string | required | Measurement to monitor |
| `senders` | string | required | Dot-separated notification channels |
| `window` | string | required | Time window for data checking |

### Data write trigger parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `measurement` | string | required | Measurement to monitor for threshold conditions |
| `field_conditions` | string | required | Threshold conditions (for example, `"temp>30-WARN:status==ok-INFO"`) |
| `senders` | string | required | Dot-separated notification channels |

### Threshold check parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `field_aggregation_values` | string | none | Aggregation conditions for scheduled checks |
| `deadman_check` | boolean | false | Enable deadman data presence checking |
| `interval` | string | "5min" | Aggregation time interval |
| `trigger_count` | number | 1 | Consecutive failures before alerting |

### Notification parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `influxdb3_auth_token` | string | env var | InfluxDB 3 API token |
| `notification_deadman_text` | string | template | Deadman alert message template |
| `notification_threshold_text` | string | template | Threshold alert message template |
| `notification_text` | string | template | General notification template (data write) |
| `notification_path` | string | "notify" | Notification endpoint path |
| `port_override` | number | 8181 | InfluxDB port override |
| `config_file_path` | string | none | TOML config file path relative to `PLUGIN_DIR` |

### Channel-specific configuration

Notification channels require additional parameters based on the sender type (same as the [Notifier Plugin](../notifier/README.md)).

## Installation

### Install dependencies

Install required Python packages:

```bash
influxdb3 install package requests
```

### Create scheduled trigger

Create a trigger for periodic threshold and deadman checks:

```bash
influxdb3 create trigger \
  --database mydb \
  --plugin-filename threshold_deadman_checks_plugin.py \
  --trigger-spec "every:10m" \
  --trigger-arguments "measurement=cpu,senders=slack,field_aggregation_values=temp:avg@>=30-ERROR,window=10m,trigger_count=3,deadman_check=true,slack_webhook_url=https://hooks.slack.com/services/..." \
  threshold_scheduler
```

### Create data write trigger

Create a trigger for real-time threshold monitoring:

```bash
influxdb3 create trigger \
  --database mydb \
  --plugin-filename threshold_deadman_checks_plugin.py \
  --trigger-spec "all_tables" \
  --trigger-arguments "measurement=cpu,field_conditions=temp>30-WARN:status==ok-INFO,senders=slack,trigger_count=2,slack_webhook_url=https://hooks.slack.com/services/..." \
  threshold_datawrite
```

### Enable triggers

```bash
influxdb3 enable trigger --database mydb threshold_scheduler
influxdb3 enable trigger --database mydb threshold_datawrite
```

## Examples

### Deadman monitoring

Monitor for data absence and alert when no data is received:

```bash
influxdb3 create trigger \
  --database sensors \
  --plugin-filename threshold_deadman_checks_plugin.py \
  --trigger-spec "every:15m" \
  --trigger-arguments "measurement=heartbeat,senders=sms,window=10m,deadman_check=true,trigger_count=2,twilio_from_number=+1234567890,twilio_to_number=+0987654321,notification_deadman_text=CRITICAL: No heartbeat data from \$table between \$time_from and \$time_to" \
  heartbeat_monitor
```

### Multi-level threshold monitoring

Monitor aggregated values with different severity levels:

```bash
influxdb3 create trigger \
  --database monitoring \
  --plugin-filename threshold_deadman_checks_plugin.py \
  --trigger-spec "every:5m" \
  --trigger-arguments "measurement=system_metrics,senders=slack.discord,field_aggregation_values=cpu_usage:avg@>=80-WARN\$cpu_usage:avg@>=95-ERROR\$memory_usage:max@>=90-WARN,window=5m,interval=1min,trigger_count=3,slack_webhook_url=https://hooks.slack.com/services/...,discord_webhook_url=https://discord.com/api/webhooks/..." \
  system_threshold_monitor
```

### Real-time field condition monitoring

Monitor data writes for immediate threshold violations:

```bash
influxdb3 create trigger \
  --database applications \
  --plugin-filename threshold_deadman_checks_plugin.py \
  --trigger-spec "all_tables" \
  --trigger-arguments "measurement=response_times,field_conditions=latency>500-WARN:latency>1000-ERROR:error_rate>0.05-CRITICAL,senders=http,trigger_count=1,http_webhook_url=https://alertmanager.example.com/webhook,notification_text=[\$level] Application alert: \$field \$op_sym \$compare_val (actual: \$actual)" \
  app_performance_monitor
```

### Combined monitoring

Monitor both aggregation thresholds and deadman conditions:

```bash
influxdb3 create trigger \
  --database comprehensive \
  --plugin-filename threshold_deadman_checks_plugin.py \
  --trigger-spec "every:10m" \
  --trigger-arguments "measurement=temperature_sensors,senders=whatsapp,field_aggregation_values=temperature:avg@>=35-WARN\$temperature:max@>=40-ERROR,window=15m,deadman_check=true,trigger_count=2,twilio_from_number=+1234567890,twilio_to_number=+0987654321" \
  comprehensive_sensor_monitor
```

## Features

- **Dual monitoring modes**: Scheduled aggregation checks and real-time data write monitoring
- **Deadman detection**: Monitor for data absence and missing data streams
- **Multi-level alerting**: Support for INFO, WARN, ERROR, and CRITICAL severity levels
- **Aggregation support**: Monitor avg, min, max, count, sum, derivative, and median values
- **Configurable triggers**: Require multiple consecutive failures before alerting
- **Multi-channel notifications**: Integration with various notification systems
- **Template messages**: Customizable alert templates with dynamic variables
- **Performance optimization**: Measurement and tag caching for improved efficiency

## Troubleshooting

### Common issues

**No alerts triggered**

- Verify threshold values are appropriate for your data ranges
- Check that notification channels are properly configured
- Ensure the Notifier Plugin is installed and accessible
- Review plugin logs for configuration errors

**False positive alerts**

- Increase `trigger_count` to require more consecutive failures
- Adjust threshold values to be less sensitive
- Consider longer aggregation intervals for noisy data

**Missing deadman alerts**

- Verify `deadman_check=true` is set in the configuration
- Check that the measurement name matches existing data
- Ensure the time window is appropriate for your data frequency

**Authentication issues**

- Set the `INFLUXDB3_AUTH_TOKEN` environment variable
- Verify the API token has the required database permissions
- Check Twilio credentials for SMS/WhatsApp notifications

### Configuration formats

**Aggregation conditions (scheduled)**

- Format: `field:aggregation@"operator value-level"`
- Example: `temp:avg@">=30-ERROR"`
- Multiple conditions: `temp:avg@">=30-WARN"$humidity:min@"<40-INFO"`

**Field conditions (data write)**

- Format: `field operator value-level`
- Example: `temp>30-WARN:status==ok-INFO`
- Supported operators: `>`, `<`, `>=`, `<=`, `==`, `!=`

**Supported aggregations**

- `avg`: Average value
- `min`: Minimum value
- `max`: Maximum value
- `count`: Count of records
- `sum`: Sum of values
- `derivative`: Rate of change
- `median`: Median value

### Message template variables

**Deadman notifications**

- `$table`: Measurement name
- `$time_from`: Start of the checked period
- `$time_to`: End of the checked period

**Threshold notifications (scheduled)**

- `$level`: Alert severity level
- `$table`: Measurement name
- `$field`: Field name
- `$aggregation`: Aggregation type
- `$op_sym`: Operator symbol
- `$compare_val`: Threshold value
- `$actual`: Actual measured value
- `$row`: Unique identifier

**Threshold notifications (data write)**

- `$level`: Alert severity level
- `$field`: Field name
- `$op_sym`: Operator symbol
- `$compare_val`: Threshold value
- `$actual`: Actual field value

### Row identification

The `$row` variable uniquely identifies alert contexts using the format:
`measurement:level:tag1=value1:tag2=value2`

This ensures trigger counts are maintained independently for each unique combination of measurement, severity level, and tag values, as sketched below.
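A minimal sketch of building such a row key and tracking consecutive failures against `trigger_count` (an illustration only, not the plugin's source):

```python
def row_key(measurement, level, tags):
    """Build 'measurement:level:tag1=value1:tag2=value2' from sorted tags."""
    tag_part = ":".join(f"{k}={v}" for k, v in sorted(tags.items()))
    return f"{measurement}:{level}:{tag_part}"

failure_counts = {}

def record_failure(key, trigger_count):
    """Increment the per-row failure count; alert once it reaches trigger_count."""
    failure_counts[key] = failure_counts.get(key, 0) + 1
    return failure_counts[key] >= trigger_count

key = row_key("cpu", "WARN", {"host": "server1", "region": "us-west"})
if record_failure(key, trigger_count=3):
    print(f"alert for {key}")
```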

## Report an issue

For plugin issues, see the Plugins repository [issues page](https://github.com/influxdata/influxdb3_plugins/issues).

## Find support for {{% product-name %}}

The [InfluxDB Discord server](https://discord.gg/9zaNCW2PRT) is the best place to find support for {{% product-name %}}.
For other InfluxDB versions, see the [Support and feedback](#bug-reports-and-feedback) options.