fix(plugins): porting README details to docs (#6806)

- Handle multi-line emoji metadata ( on line 1, 🔧 on line 2)

    Content updates:
    - Remove residual emoji metadata from 5 plugins
    - Clarify HTTP request body parameters in notifier plugin
    - Update CLI examples (--plugin-filename → --path) from source
    - Preserve "InfluxDB 3 Explorer" product name
    - Fix "Pandas" → "pandas" capitalization

Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>
pull/6790/head^2
github-actions[bot] 2026-02-09 14:55:23 -06:00 committed by GitHub
parent 322b77e280
commit 0c2f9e8dbc
12 changed files with 1394 additions and 1046 deletions


@ -9,7 +9,7 @@ If a plugin supports multiple trigger specifications, some parameters may depend
### Plugin metadata
This plugin includes a JSON metadata schema in its docstring that defines supported trigger types and configuration parameters.
This plugin includes a JSON metadata schema in its docstring that defines supported trigger types and configuration parameters. This metadata enables the [InfluxDB 3 Explorer](https://docs.influxdata.com/influxdb3/explorer/) UI to display and configure the plugin.
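Because the metadata lives in the docstring as plain JSON, a tool such as the Explorer UI can read it programmatically. A minimal sketch of that idea, assuming a hypothetical schema with keys like `plugin_type` (the actual key names are defined by the plugin metadata spec, not shown here):

```python
import json

# Hypothetical docstring content; the real schema's keys may differ.
PLUGIN_DOCSTRING = '''
{
    "plugin_type": ["scheduled", "onwrite"],
    "scheduled_args_config": [
        {"name": "measurement", "example": "temperature", "required": true}
    ]
}
'''

def parse_plugin_metadata(docstring: str) -> dict:
    """Extract the JSON metadata schema embedded in a plugin docstring."""
    return json.loads(docstring)

metadata = parse_plugin_metadata(PLUGIN_DOCSTRING)
```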
### Required parameters
@ -97,7 +97,7 @@ Run transformations periodically on historical data:
```bash
influxdb3 create trigger \
--database mydb \
--plugin-filename gh:influxdata/basic_transformation/basic_transformation.py \
--path "gh:influxdata/basic_transformation/basic_transformation.py" \
--trigger-spec "every:1h" \
--trigger-arguments 'measurement=temperature,window=24h,target_measurement=temperature_normalized,names_transformations=temp:"snake",values_transformations=temp:"convert_degC_to_degF"' \
hourly_temp_transform
@ -109,7 +109,7 @@ Transform data as it's written:
```bash
influxdb3 create trigger \
--database mydb \
--plugin-filename gh:influxdata/basic_transformation/basic_transformation.py \
--path "gh:influxdata/basic_transformation/basic_transformation.py" \
--trigger-spec "all_tables" \
--trigger-arguments 'measurement=sensor_data,target_measurement=sensor_data_clean,names_transformations=.*:"snake remove_special_chars normalize_underscores"' \
realtime_clean
@ -124,7 +124,7 @@ Convert temperature readings from Celsius to Fahrenheit while standardizing fiel
# Create the trigger
influxdb3 create trigger \
--database weather \
--plugin-filename gh:influxdata/basic_transformation/basic_transformation.py \
--path "gh:influxdata/basic_transformation/basic_transformation.py" \
--trigger-spec "every:30m" \
--trigger-arguments 'measurement=raw_temps,window=1h,target_measurement=temps_fahrenheit,names_transformations=Temperature:"snake",values_transformations=temperature:"convert_degC_to_degF"' \
temp_converter
@ -158,7 +158,7 @@ Clean and standardize field names from various sensors:
# Create trigger with multiple transformations
influxdb3 create trigger \
--database sensors \
--plugin-filename gh:influxdata/basic_transformation/basic_transformation.py \
--path "gh:influxdata/basic_transformation/basic_transformation.py" \
--trigger-spec "all_tables" \
--trigger-arguments 'measurement=raw_sensors,target_measurement=clean_sensors,names_transformations=.*:"remove_special_chars snake collapse_underscore trim_underscore"' \
field_cleaner
@ -192,7 +192,7 @@ Replace specific strings in field values:
# Create trigger with custom replacements
influxdb3 create trigger \
--database inventory \
--plugin-filename gh:influxdata/basic_transformation/basic_transformation.py \
--path "gh:influxdata/basic_transformation/basic_transformation.py" \
--trigger-spec "every:1d" \
--trigger-arguments 'measurement=products,window=7d,target_measurement=products_updated,values_transformations=status:"status_replace",custom_replacements=status_replace:"In Stock=available.Out of Stock=unavailable"' \
status_updater
@ -233,7 +233,7 @@ This plugin supports using TOML configuration files to specify all plugin argume
```bash
influxdb3 create trigger \
--database mydb \
--plugin-filename basic_transformation.py \
--path "gh:influxdata/basic_transformation/basic_transformation.py" \
--trigger-spec "every:1d" \
--trigger-arguments config_file_path=basic_transformation_config_scheduler.toml \
basic_transform_trigger
@ -248,10 +248,10 @@ This plugin supports using TOML configuration files to specify all plugin argume
### Logging
Logs are stored in the `_internal` database (or the database where the trigger is created) in the `system.processing_engine_logs` table. To view logs:
Logs are stored in the trigger's database in the `system.processing_engine_logs` table. To view logs:
```bash
influxdb3 query --database _internal "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'your_trigger_name'"
influxdb3 query --database YOUR_DATABASE "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'your_trigger_name'"
```
Log columns:
@ -376,7 +376,7 @@ chmod +x ~/.plugins/basic_transformation.py
```bash
influxdb3 query \
--database _internal \
--database YOUR_DATABASE \
"SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'your_trigger_name'"
```
### Debugging tips


@ -9,7 +9,7 @@ If a plugin supports multiple trigger specifications, some parameters may depend
### Plugin metadata
This plugin includes a JSON metadata schema in its docstring that defines supported trigger types and configuration parameters.
This plugin includes a JSON metadata schema in its docstring that defines supported trigger types and configuration parameters. This metadata enables the [InfluxDB 3 Explorer](https://docs.influxdata.com/influxdb3/explorer/) UI to display and configure the plugin.
### Required parameters
@ -61,9 +61,9 @@ For more information on using TOML configuration files, see the Using TOML Confi
Each downsampled record includes three additional metadata columns:
- `record_count` the number of original points compressed into this single downsampled row
- `time_from` the minimum timestamp among the original points in the interval
- `time_to` the maximum timestamp among the original points in the interval
- `record_count`: the number of original points compressed into this single downsampled row
- `time_from`: the minimum timestamp among the original points in the interval
- `time_to`: the maximum timestamp among the original points in the interval
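The three metadata columns above can be derived from the original points in each interval. A sketch of that derivation (illustrative only; `summarize_interval` is a hypothetical helper, not the plugin's actual implementation):

```python
from datetime import datetime

def summarize_interval(points):
    """Given (timestamp, value) pairs from one downsampling interval,
    derive the three metadata columns (illustrative sketch)."""
    timestamps = [t for t, _ in points]
    return {
        "record_count": len(points),   # original points compressed into this row
        "time_from": min(timestamps),  # earliest original timestamp
        "time_to": max(timestamps),    # latest original timestamp
    }

points = [
    (datetime(2025, 1, 1, 10, 0), 21.0),
    (datetime(2025, 1, 1, 10, 1), 21.5),
    (datetime(2025, 1, 1, 10, 2), 22.0),
]
row = summarize_interval(points)
```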
## Installation steps
@ -87,7 +87,7 @@ Run downsampling periodically on historical data:
```bash
influxdb3 create trigger \
--database mydb \
--plugin-filename gh:influxdata/downsampler/downsampler.py \
--path "gh:influxdata/downsampler/downsampler.py" \
--trigger-spec "every:1h" \
--trigger-arguments 'source_measurement=cpu_metrics,target_measurement=cpu_hourly,interval=1h,window=6h,calculations=avg,specific_fields=usage_user.usage_system' \
cpu_hourly_downsample
@ -99,7 +99,7 @@ Trigger downsampling via HTTP requests:
```bash
influxdb3 create trigger \
--database mydb \
--plugin-filename gh:influxdata/downsampler/downsampler.py \
--path "gh:influxdata/downsampler/downsampler.py" \
--trigger-spec "request:downsample" \
downsample_api
```
@ -113,7 +113,7 @@ Downsample CPU usage data from 1-minute intervals to hourly averages:
# Create the trigger
influxdb3 create trigger \
--database system_metrics \
--plugin-filename gh:influxdata/downsampler/downsampler.py \
--path "gh:influxdata/downsampler/downsampler.py" \
--trigger-spec "every:1h" \
--trigger-arguments 'source_measurement=cpu,target_measurement=cpu_hourly,interval=1h,window=6h,calculations=avg,specific_fields=usage_user.usage_system.usage_idle' \
cpu_hourly_downsample
@ -148,7 +148,7 @@ Apply different aggregation functions to different fields:
# Create trigger with field-specific aggregations
influxdb3 create trigger \
--database sensors \
--plugin-filename gh:influxdata/downsampler/downsampler.py \
--path "gh:influxdata/downsampler/downsampler.py" \
--trigger-spec "every:10min" \
--trigger-arguments 'source_measurement=environment,target_measurement=environment_10min,interval=10min,window=30min,calculations=temperature:avg.humidity:avg.pressure:max' \
env_multi_agg
@ -199,10 +199,10 @@ curl -X POST http://localhost:8181/api/v3/engine/downsample \
### Logging
Logs are stored in the `_internal` database (or the database where the trigger is created) in the `system.processing_engine_logs` table. To view logs:
Logs are stored in the trigger's database in the `system.processing_engine_logs` table. To view logs:
```bash
influxdb3 query --database _internal "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'your_trigger_name'"
influxdb3 query --database YOUR_DATABASE "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'your_trigger_name'"
```
Log columns:
@ -290,7 +290,7 @@ influxdb3 list triggers --database mydb
1. **Check execution logs** with task ID filtering:
```bash
influxdb3 query --database _internal \
influxdb3 query --database YOUR_DATABASE \
"SELECT * FROM system.processing_engine_logs WHERE log_text LIKE '%task_id%' ORDER BY event_time DESC LIMIT 10"
```
2. **Test with smaller time windows** for debugging:
@ -321,7 +321,7 @@ Combine all field calculations for a measurement in one trigger:
```bash
influxdb3 create trigger \
--database mydb \
--plugin-filename gh:influxdata/downsampler/downsampler.py \
--path "gh:influxdata/downsampler/downsampler.py" \
--trigger-spec "every:1h" \
--trigger-arguments 'source_measurement=temperature,target_measurement=temperature_hourly,interval=1h,window=6h,calculations=temp:avg.temp:max.temp:min,specific_fields=temp' \
temperature_hourly_downsample


@ -1,72 +1,81 @@
The Forecast Error Evaluator Plugin validates forecast model accuracy for time series data
in InfluxDB 3 by comparing predicted values with actual observations.
The plugin periodically computes error metrics (MSE, MAE, or RMSE), detects anomalies based on error thresholds, and sends notifications when forecast accuracy degrades.
It includes debounce logic to suppress transient anomalies and supports multi-channel notifications via the Notification Sender Plugin.
The Forecast Error Evaluator Plugin validates forecast model accuracy for time series data in {{% product-name %}} by comparing predicted values with actual observations. The plugin periodically computes error metrics (MSE, MAE, RMSE, MAPE, or SMAPE), detects anomalies based on error thresholds, and sends notifications when forecast accuracy degrades. It includes debounce logic to suppress transient anomalies and supports multi-channel notifications via the Notification Sender Plugin.
## Configuration
Plugin parameters may be specified as key-value pairs in the `--trigger-arguments` flag (CLI) or in the `trigger_arguments` field (API) when creating a trigger. Some plugins support TOML configuration files, which can be specified using the plugin's `config_file_path` parameter.
If a plugin supports multiple trigger specifications, some parameters may depend on the trigger specification that you use.
### Plugin metadata
This plugin includes a JSON metadata schema in its docstring that defines supported trigger types and configuration parameters. This metadata enables the [InfluxDB 3 Explorer](https://docs.influxdata.com/influxdb3/explorer/) UI to display and configure the plugin.
### Required parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `forecast_measurement` | string | required | Measurement containing forecasted values |
| `actual_measurement` | string | required | Measurement containing actual (ground truth) values |
| `forecast_field` | string | required | Field name for forecasted values |
| `actual_field` | string | required | Field name for actual values |
| `error_metric` | string | required | Error metric to compute: `"mse"`, `"mae"`, or `"rmse"` |
| `error_thresholds` | string | required | Threshold levels. Format: `INFO-"0.5":WARN-"0.9":ERROR-"1.2":CRITICAL-"1.5"` |
| `window` | string | required | Time window for data analysis. Format: `<number><unit>` (for example, `"1h"`) |
| `senders` | string | required | Dot-separated list of notification channels (for example, `"slack.discord"`) |
| Parameter | Type | Default | Description |
|------------------------|--------|----------|------------------------------------------------------------------------------|
| `forecast_measurement` | string | required | Measurement containing forecasted values |
| `actual_measurement` | string | required | Measurement containing actual (ground truth) values |
| `forecast_field` | string | required | Field name for forecasted values |
| `actual_field` | string | required | Field name for actual values |
| `error_metric` | string | required | Error metric to compute: "mse", "mae", "rmse", "mape", or "smape" |
| `error_thresholds` | string | required | Threshold levels. Format: `INFO-"0.5":WARN-"0.9":ERROR-"1.2":CRITICAL-"1.5"` |
| `window` | string | required | Time window for data analysis. Format: `<number><unit>` (for example, "1h") |
| `senders` | string | required | Dot-separated list of notification channels (for example, "slack.discord") |
### Notification parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `notification_text` | string | default template | Template for notification message with variables `$measurement`, `$level`, `$field`, `$error`, `$metric`, `$tags` |
| `notification_path` | string | "notify" | URL path for the notification sending plugin |
| `port_override` | integer | 8181 | Port number where InfluxDB accepts requests |
| Parameter | Type | Default | Description |
|---------------------|---------|------------------|-------------------------------------------------------------------------------------------------------------------|
| `notification_text` | string | default template | Template for notification message with variables `$measurement`, `$level`, `$field`, `$error`, `$metric`, `$tags` |
| `notification_path` | string | "notify" | URL path for the notification sending plugin |
| `port_override` | integer | 8181 | Port number where InfluxDB accepts requests |
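The `notification_text` variables use `$`-prefixed placeholders (the doubled `$$` seen in the CLI examples appears to escape the `$` at the argument-parsing layer). A sketch of how such a template could expand, using Python's `string.Template` as a stand-in for the plugin's actual rendering:

```python
from string import Template

DEFAULT_TEMPLATE = ("[$level] Forecast error alert in $measurement.$field: "
                    "$metric=$error. Tags: $tags")

def render_notification(template: str, **values) -> str:
    """Fill the $-prefixed variables (sketch; the plugin's rendering may differ)."""
    return Template(template).safe_substitute(values)

message = render_notification(
    DEFAULT_TEMPLATE,
    level="WARN", measurement="temp_forecast", field="predicted",
    metric="rmse", error="1.2", tags="location=station1",
)
```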
### Timing parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `min_condition_duration` | string | none | Minimum duration for anomaly condition to persist before triggering notification |
| `rounding_freq` | string | "1s" | Frequency to round timestamps for alignment |
| Parameter | Type | Default | Description |
|--------------------------|--------|---------|----------------------------------------------------------------------------------|
| `min_condition_duration` | string | none | Minimum duration for anomaly condition to persist before triggering notification |
| `rounding_freq` | string | "1s" | Frequency to round timestamps for alignment |
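The idea behind `rounding_freq` is that forecast and actual points rarely share exact timestamps, so both series are rounded to a common frequency before pairing. A pure-Python sketch of that alignment (the plugin itself lists pandas as a dependency and may round differently):

```python
from datetime import datetime, timedelta

def round_timestamp(ts: datetime, freq: timedelta) -> datetime:
    """Round ts to the nearest multiple of freq (illustrative sketch)."""
    epoch = datetime(1970, 1, 1)
    step = freq.total_seconds()
    rounded = round((ts - epoch).total_seconds() / step) * step
    return epoch + timedelta(seconds=rounded)

# A forecast point at 10:00:00.7 and an actual point at 10:00:01.2
# both land on 10:00:01 with rounding_freq=1s, so they can be paired.
f = round_timestamp(datetime(2025, 1, 1, 10, 0, 0, 700000), timedelta(seconds=1))
a = round_timestamp(datetime(2025, 1, 1, 10, 0, 1, 200000), timedelta(seconds=1))
```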
### Authentication parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `influxdb3_auth_token` | string | env variable | API token for InfluxDB 3. Can be set via `INFLUXDB3_AUTH_TOKEN` |
| `config_file_path` | string | none | Path to TOML config file relative to PLUGIN_DIR |
| Parameter | Type | Default | Description |
|------------------------|--------|--------------|-----------------------------------------------------------------|
| `influxdb3_auth_token` | string | env variable | API token for {{% product-name %}}. Can be set via `INFLUXDB3_AUTH_TOKEN` |
### Sender-specific parameters
#### Slack notifications
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `slack_webhook_url` | string | required | Webhook URL from Slack |
| `slack_headers` | string | none | Base64-encoded HTTP headers |
| Parameter | Type | Default | Description |
|---------------------|--------|----------|-----------------------------|
| `slack_webhook_url` | string | required | Webhook URL from Slack |
| `slack_headers` | string | none | Base64-encoded HTTP headers |
#### Discord notifications
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `discord_webhook_url` | string | required | Webhook URL from Discord |
| `discord_headers` | string | none | Base64-encoded HTTP headers |
| Parameter | Type | Default | Description |
|-----------------------|--------|----------|-----------------------------|
| `discord_webhook_url` | string | required | Webhook URL from Discord |
| `discord_headers` | string | none | Base64-encoded HTTP headers |
#### HTTP notifications
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| Parameter | Type | Default | Description |
|--------------------|--------|----------|--------------------------------------|
| `http_webhook_url` | string | required | Custom webhook URL for POST requests |
| `http_headers` | string | none | Base64-encoded HTTP headers |
| `http_headers` | string | none | Base64-encoded HTTP headers |
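The `*_headers` parameters above take base64-encoded HTTP headers. A sketch of producing such a value, assuming the decoded form is a JSON object mapping header names to values (an assumption; verify the expected encoding against the plugin docs):

```python
import base64
import json

def encode_headers(headers: dict) -> str:
    """Base64-encode an HTTP header map as JSON (assumed format)."""
    return base64.b64encode(json.dumps(headers).encode()).decode()

# Pass the result as, e.g., http_headers=<encoded> in --trigger-arguments
encoded = encode_headers({"Authorization": "Bearer MY_TOKEN"})
```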
#### SMS notifications (via Twilio)
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `twilio_sid` | string | env variable | Twilio Account SID (or `TWILIO_SID` env var) |
| `twilio_token` | string | env variable | Twilio Auth Token (or `TWILIO_TOKEN` env var) |
| `twilio_from_number` | string | required | Twilio sender number (for example, `"+1234567890"`) |
| `twilio_to_number` | string | required | Recipient number (for example, `"+0987654321"`) |
| Parameter | Type | Default | Description |
|----------------------|--------|--------------|-----------------------------------------------|
| `twilio_sid` | string | env variable | Twilio Account SID (or `TWILIO_SID` env var) |
| `twilio_token` | string | env variable | Twilio Auth Token (or `TWILIO_TOKEN` env var) |
| `twilio_from_number` | string | required | Twilio sender number (for example, "+1234567890") |
| `twilio_to_number` | string | required | Recipient number (for example, "+0987654321") |
### TOML configuration
@ -74,18 +83,26 @@ It includes debounce logic to suppress transient anomalies and supports multi-ch
|--------------------|--------|---------|----------------------------------------------------------------------------------|
| `config_file_path` | string | none | TOML config file path relative to `PLUGIN_DIR` (required for TOML configuration) |
*To use a TOML configuration file, set the `PLUGIN_DIR` environment variable and specify the `config_file_path` in the trigger arguments.* This is in addition to the `--plugin-dir` flag when starting InfluxDB 3.
*To use a TOML configuration file, set the `PLUGIN_DIR` environment variable and specify the `config_file_path` in the trigger arguments.* This is in addition to the `--plugin-dir` flag when starting {{% product-name %}}.
#### Example TOML configuration
[forecast_error_config_scheduler.toml](https://github.com/influxdata/influxdb3_plugins/blob/master/influxdata/forecast_error_evaluator/forecast_error_config_scheduler.toml)
For more information on using TOML configuration files, see the Using TOML Configuration Files section in the [influxdb3_plugins
/README.md](https://github.com/influxdata/influxdb3_plugins/blob/master/README.md).
For more information on using TOML configuration files, see the Using TOML Configuration Files section in the [influxdb3_plugins/README.md](https://github.com/influxdata/influxdb3_plugins/blob/master/README.md).
## Installation steps
## Software Requirements
- **{{% product-name %}}**: with the Processing Engine enabled.
- **Notification Sender Plugin for {{% product-name %}}**: Required for sending notifications. See the [influxdata/notifier plugin](../notifier/README.md).
- **Python packages**:
- `pandas` (for data processing)
- `requests` (for HTTP notifications)
### Installation steps
1. Start {{% product-name %}} with the Processing Engine enabled (`--plugin-dir /path/to/plugins`):
1. Start {{% product-name %}} with the Processing Engine enabled (`--plugin-dir /path/to/plugins`)
```bash
influxdb3 serve \
--node-id node0 \
@ -93,22 +110,13 @@ For more information on using TOML configuration files, see the Using TOML Confi
--data-dir ~/.influxdb3 \
--plugin-dir ~/.plugins
```
2. Install required Python packages:
- `pandas` (for data processing)
- `requests` (for HTTP notifications)
```bash
influxdb3 install package pandas
influxdb3 install package requests
```
3. Install the Notification Sender Plugin (required):
```bash
# Ensure notifier plugin is available in ~/.plugins/
```
3. Install the [influxdata/notifier plugin](../notifier/README.md) (required)
## Trigger setup
@ -119,11 +127,12 @@ Run forecast error evaluation periodically:
```bash
influxdb3 create trigger \
--database weather_forecasts \
--plugin-filename gh:influxdata/forecast_error_evaluator/forecast_error_evaluator.py \
--path "gh:influxdata/forecast_error_evaluator/forecast_error_evaluator.py" \
--trigger-spec "every:30m" \
--trigger-arguments 'forecast_measurement=temperature_forecast,actual_measurement=temperature_actual,forecast_field=predicted_temp,actual_field=temp,error_metric=rmse,error_thresholds=INFO-"0.5":WARN-"1.0":ERROR-"2.0",window=1h,senders=slack,slack_webhook_url="https://hooks.slack.com/services/YOUR/WEBHOOK/URL"' \
--trigger-arguments 'forecast_measurement=temperature_forecast,actual_measurement=temperature_actual,forecast_field=predicted_temp,actual_field=temp,error_metric=rmse,error_thresholds=INFO-"0.5":WARN-"1.0":ERROR-"2.0",window=1h,senders=slack,slack_webhook_url="$SLACK_WEBHOOK_URL"' \
forecast_validation
```
Set `SLACK_WEBHOOK_URL` to your Slack incoming webhook URL.
## Example usage
@ -135,9 +144,9 @@ Validate temperature forecast accuracy and send Slack notifications:
# Create the trigger
influxdb3 create trigger \
--database weather_db \
--plugin-filename gh:influxdata/forecast_error_evaluator/forecast_error_evaluator.py \
--path "gh:influxdata/forecast_error_evaluator/forecast_error_evaluator.py" \
--trigger-spec "every:15m" \
--trigger-arguments 'forecast_measurement=temp_forecast,actual_measurement=temp_actual,forecast_field=predicted,actual_field=temperature,error_metric=rmse,error_thresholds=INFO-"0.5":WARN-"1.0":ERROR-"2.0":CRITICAL-"3.0",window=30m,senders=slack,slack_webhook_url="https://hooks.slack.com/services/YOUR/WEBHOOK/URL",min_condition_duration=10m' \
--trigger-arguments 'forecast_measurement=temp_forecast,actual_measurement=temp_actual,forecast_field=predicted,actual_field=temperature,error_metric=rmse,error_thresholds=INFO-"0.5":WARN-"1.0":ERROR-"2.0":CRITICAL-"3.0",window=30m,senders=slack,slack_webhook_url="$SLACK_WEBHOOK_URL",min_condition_duration=10m' \
temp_forecast_check
# Write forecast data
@ -152,20 +161,21 @@ influxdb3 write \
# Check logs after trigger runs
influxdb3 query \
--database _internal \
--database YOUR_DATABASE \
"SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'temp_forecast_check'"
```
**Expected output**
### Expected behavior
- Plugin computes RMSE between forecast and actual values
- If RMSE > 0.5, sends INFO-level notification
- If RMSE > 1.0, sends WARN-level notification
- Only triggers if condition persists for 10+ minutes (debounce)
Set `SLACK_WEBHOOK_URL` to your Slack incoming webhook URL.
**Notification example:**
```
[WARN] Forecast error alert in temp_forecast.predicted: rmse=1.2. Tags: location=station1
```
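The debounce behavior described above (`min_condition_duration=10m`) can be sketched as a small state machine that only fires once a condition has held continuously for the minimum duration. Illustrative only; `Debouncer` is a hypothetical name, not the plugin's implementation:

```python
from datetime import datetime, timedelta

class Debouncer:
    """Fire only after a condition persists for min_duration (sketch)."""

    def __init__(self, min_duration: timedelta):
        self.min_duration = min_duration
        self.condition_since = None

    def update(self, condition_active: bool, now: datetime) -> bool:
        if not condition_active:
            self.condition_since = None   # condition cleared; reset the timer
            return False
        if self.condition_since is None:
            self.condition_since = now    # condition first observed
        return now - self.condition_since >= self.min_duration

d = Debouncer(timedelta(minutes=10))
t0 = datetime(2025, 1, 1, 12, 0)
```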
### Example 2: Multi-metric validation with multiple channels
@ -175,11 +185,12 @@ Monitor multiple forecast metrics with different notification channels:
# Create trigger with Discord and HTTP notifications
influxdb3 create trigger \
--database analytics \
--plugin-filename gh:influxdata/forecast_error_evaluator/forecast_error_evaluator.py \
--path "gh:influxdata/forecast_error_evaluator/forecast_error_evaluator.py" \
--trigger-spec "every:1h" \
--trigger-arguments 'forecast_measurement=sales_forecast,actual_measurement=sales_actual,forecast_field=predicted_sales,actual_field=sales_amount,error_metric=mae,error_thresholds=WARN-"1000":ERROR-"5000":CRITICAL-"10000",window=6h,senders=discord.http,discord_webhook_url="https://discord.com/api/webhooks/YOUR/WEBHOOK",http_webhook_url="https://your-api.com/alerts",notification_text="[$$level] Sales forecast error: $$metric=$$error (threshold exceeded)",rounding_freq=5min' \
--trigger-arguments 'forecast_measurement=sales_forecast,actual_measurement=sales_actual,forecast_field=predicted_sales,actual_field=sales_amount,error_metric=mae,error_thresholds=WARN-"1000":ERROR-"5000":CRITICAL-"10000",window=6h,senders=discord.http,discord_webhook_url="$DISCORD_WEBHOOK_URL",http_webhook_url="$HTTP_WEBHOOK_URL",notification_text="[$$level] Sales forecast error: $$metric=$$error (threshold exceeded)",rounding_freq=5min' \
sales_forecast_monitor
```
Set `DISCORD_WEBHOOK_URL` and `HTTP_WEBHOOK_URL` to your webhook URLs.
### Example 3: SMS alerts for critical forecast failures
@ -193,24 +204,26 @@ export TWILIO_TOKEN="your_twilio_token"
# Create trigger with SMS notifications
influxdb3 create trigger \
--database production_forecasts \
--plugin-filename gh:influxdata/forecast_error_evaluator/forecast_error_evaluator.py \
--path "gh:influxdata/forecast_error_evaluator/forecast_error_evaluator.py" \
--trigger-spec "every:5m" \
--trigger-arguments 'forecast_measurement=demand_forecast,actual_measurement=demand_actual,forecast_field=predicted_demand,actual_field=actual_demand,error_metric=mse,error_thresholds=CRITICAL-"100000",window=15m,senders=sms,twilio_from_number="+1234567890",twilio_to_number="+0987654321",notification_text="CRITICAL: Production demand forecast error exceeded threshold. MSE: $$error",min_condition_duration=2m' \
critical_forecast_alert
```
## Using TOML Configuration Files
This plugin supports using TOML configuration files for complex configurations.
### Important Requirements
**To use TOML configuration files, you must set the `PLUGIN_DIR` environment variable in the InfluxDB 3 host environment:**
**To use TOML configuration files, you must set the `PLUGIN_DIR` environment variable in the {{% product-name %}} host environment:**
```bash
PLUGIN_DIR=~/.plugins influxdb3 serve --node-id node0 --object-store file --data-dir ~/.influxdb3 --plugin-dir ~/.plugins
PLUGIN_DIR=~/.plugins influxdb3 serve \
--node-id node0 \
--object-store file \
--data-dir ~/.influxdb3 \
--plugin-dir ~/.plugins
```
### Example TOML Configuration
```toml
@ -223,7 +236,7 @@ error_metric = "rmse"
error_thresholds = 'INFO-"0.5":WARN-"1.0":ERROR-"2.0":CRITICAL-"3.0"'
window = "1h"
senders = "slack"
slack_webhook_url = "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
slack_webhook_url = "$SLACK_WEBHOOK_URL"
min_condition_duration = "10m"
rounding_freq = "1min"
notification_text = "[$$level] Forecast validation alert: $$metric=$$error in $$measurement.$$field"
@ -231,18 +244,18 @@ notification_text = "[$$level] Forecast validation alert: $$metric=$$error in $$
# Authentication (use environment variables instead when possible)
influxdb3_auth_token = "your_token_here"
```
Set `SLACK_WEBHOOK_URL` to your Slack incoming webhook URL.
### Create trigger using TOML config
```bash
influxdb3 create trigger \
--database weather_db \
--plugin-filename forecast_error_evaluator.py \
--path "gh:influxdata/forecast_error_evaluator/forecast_error_evaluator.py" \
--trigger-spec "every:30m" \
--trigger-arguments config_file_path=forecast_error_config_scheduler.toml \
forecast_validation_trigger
```
## Code overview
### Files
@ -252,13 +265,13 @@ influxdb3 create trigger \
### Logging
Logs are stored in the `_internal` database (or the database where the trigger is created) in the `system.processing_engine_logs` table. To view logs:
Logs are stored in the trigger's database in the `system.processing_engine_logs` table. To view logs:
```bash
influxdb3 query --database _internal "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'your_trigger_name'"
influxdb3 query --database YOUR_DATABASE "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'your_trigger_name'"
```
Log columns:
- **event_time**: Timestamp of the log event
- **trigger_name**: Name of the trigger that generated the log
- **log_level**: Severity level (INFO, WARN, ERROR)
@ -267,29 +280,36 @@ Log columns:
### Main functions
#### `process_scheduled_call(influxdb3_local, call_time, args)`
Handles scheduled forecast validation tasks.
Queries forecast and actual measurements, computes error metrics, and triggers notifications.
Handles scheduled forecast validation tasks. Queries forecast and actual measurements, computes error metrics, and triggers notifications.
Key operations:
1. Parses configuration from arguments or TOML file
2. Queries forecast and actual measurements within time window
3. Aligns timestamps using rounding frequency
4. Computes specified error metric (MSE, MAE, or RMSE)
4. Computes specified error metric (MSE, MAE, RMSE, MAPE, or SMAPE)
5. Evaluates thresholds and applies debounce logic
6. Sends notifications via configured channels
#### `compute_error_metric(forecast_values, actual_values, metric_type)`
Core error computation engine that calculates forecast accuracy metrics.
Supported error metrics:
- `mse`: Mean Squared Error
- `mae`: Mean Absolute Error
- `rmse`: Root Mean Squared Error (square root of MSE)
- `mse`: Mean Squared Error - measures average squared differences
- `mae`: Mean Absolute Error - measures average absolute differences
- `rmse`: Root Mean Squared Error - square root of MSE, same units as original data
- `mape`: Mean Absolute Percentage Error - percentage-based error
- `smape`: Symmetric Mean Absolute Percentage Error - bounded 0-200%, handles over/under-estimation symmetrically
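The five metrics above can be sketched directly from their definitions. This is an illustrative implementation, not the plugin's source; following the zero-handling described in Troubleshooting, MAPE skips rows where the actual value is zero and SMAPE skips rows where both values are zero:

```python
import math

def forecast_errors(forecast, actual):
    """Compute MSE, MAE, RMSE, MAPE, and SMAPE for paired series (sketch)."""
    diffs = [f - a for f, a in zip(forecast, actual)]
    mse = sum(d * d for d in diffs) / len(diffs)
    mae = sum(abs(d) for d in diffs) / len(diffs)
    mape_terms = [abs(f - a) / abs(a)
                  for f, a in zip(forecast, actual) if a != 0]
    smape_terms = [abs(f - a) / ((abs(f) + abs(a)) / 2)
                   for f, a in zip(forecast, actual) if abs(f) + abs(a) != 0]
    return {
        "mse": mse,
        "mae": mae,
        "rmse": math.sqrt(mse),                           # same units as data
        "mape": 100 * sum(mape_terms) / len(mape_terms),  # percentage
        "smape": 100 * sum(smape_terms) / len(smape_terms),  # bounded 0-200%
    }

metrics = forecast_errors([10.0, 12.0], [10.0, 10.0])
```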
#### `evaluate_thresholds(error_value, threshold_config)`
Evaluates computed error against configured thresholds to determine alert level.
Returns alert level based on threshold ranges:
- `INFO`: Informational threshold exceeded
- `WARN`: Warning threshold exceeded
- `ERROR`: Error threshold exceeded
@ -300,14 +320,17 @@ Returns alert level based on threshold ranges:
### Common issues
#### Issue: No overlapping timestamps between forecast and actual data
**Solution**: Check that both measurements have data in the specified time window and use `rounding_freq` for alignment:
```bash
influxdb3 query --database mydb "SELECT time, field_value FROM forecast_measurement WHERE time >= now() - 1h"
influxdb3 query --database mydb "SELECT time, field_value FROM actual_measurement WHERE time >= now() - 1h"
```
#### Issue: Notifications not being sent
**Solution**: Verify the Notification Sender Plugin is installed and webhook URLs are correct:
```bash
# Check if notifier plugin exists
ls ~/.plugins/notifier_plugin.py
@ -315,45 +338,54 @@ ls ~/.plugins/notifier_plugin.py
# Test webhook URL manually
curl -X POST "your_webhook_url" -d '{"text": "test message"}'
```
#### Issue: Error threshold format not recognized
**Solution**: Use proper threshold format with level prefixes:
**Solution**: Use proper threshold format with level prefixes. Note that MAPE and SMAPE thresholds are in percentages:
```bash
# For absolute metrics (MSE, MAE, RMSE)
--trigger-arguments 'error_thresholds=INFO-"0.5":WARN-"1.0":ERROR-"2.0":CRITICAL-"3.0"'
# For percentage metrics (MAPE, SMAPE)
--trigger-arguments 'error_thresholds=INFO-"5.0":WARN-"10.0":ERROR-"20.0":CRITICAL-"30.0"'
```
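The threshold format above can be parsed into level/value pairs and evaluated against a computed error. A sketch under the assumption that the highest exceeded level wins (function names here are illustrative, not the plugin's):

```python
def parse_thresholds(spec: str) -> dict:
    """Parse a spec like INFO-"0.5":WARN-"1.0" into {level: value} (sketch)."""
    levels = {}
    for part in spec.split(":"):
        level, raw = part.split("-", 1)
        levels[level] = float(raw.strip('"'))
    return levels

def alert_level(error: float, thresholds: dict):
    """Return the most severe level whose threshold the error exceeds."""
    triggered = None
    for level in ("INFO", "WARN", "ERROR", "CRITICAL"):
        if level in thresholds and error > thresholds[level]:
            triggered = level
    return triggered

levels = parse_thresholds('INFO-"0.5":WARN-"1.0":ERROR-"2.0":CRITICAL-"3.0"')
```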
#### Issue: MAPE/SMAPE calculation errors with zero values
**Solution**: MAPE cannot be calculated when actual values are zero, and SMAPE cannot be calculated when both forecast and actual are zero. The plugin automatically skips such rows and logs warnings. For datasets with frequent zero values, consider using MAE or RMSE instead.
#### Issue: Environment variables not loaded
**Solution**: Set environment variables before starting InfluxDB:
```bash
export INFLUXDB3_AUTH_TOKEN="your_token"
export TWILIO_SID="your_sid"
influxdb3 serve --plugin-dir ~/.plugins
```
### Debugging tips
1. **Check data availability** in both measurements:
```bash
influxdb3 query --database mydb \
"SELECT COUNT(*) FROM forecast_measurement WHERE time >= now() - window"
```
2. **Verify timestamp alignment** with rounding frequency:
```bash
--trigger-arguments 'rounding_freq=5min'
```
3. **Test with shorter windows** for faster debugging:
```bash
--trigger-arguments 'window=10m,min_condition_duration=1m'
```
4. **Monitor notification delivery** in logs:
```bash
influxdb3 query --database YOUR_DATABASE \
"SELECT * FROM system.processing_engine_logs WHERE log_text LIKE '%notification%'"
```
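The alignment behind tip 2 can be sketched in Python (assuming `rounding_freq` floors timestamps into fixed buckets; the helper name is illustrative):

```python
from datetime import datetime, timedelta

def floor_timestamp(ts: datetime, freq: timedelta) -> datetime:
    """Floor a timestamp into a fixed-size bucket so forecast and
    actual rows land on comparable timestamps."""
    epoch = datetime(1970, 1, 1)
    return epoch + ((ts - epoch) // freq) * freq

freq = timedelta(minutes=5)
forecast_ts = datetime(2024, 1, 1, 12, 2, 17)
actual_ts = datetime(2024, 1, 1, 12, 4, 59)
# Both readings fall into the same 12:00 bucket after flooring
print(floor_timestamp(forecast_ts, freq) == floor_timestamp(actual_ts, freq))  # True
```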
### Performance considerations
- **Data alignment**: Use appropriate `rounding_freq` to balance accuracy and performance


@ -1,66 +1,35 @@
The InfluxDB to Iceberg Plugin enables data transfer from {{% product-name %}} to Apache Iceberg tables. Transfer time series data to Iceberg for long-term storage, analytics, or integration with data lake architectures. The plugin supports both scheduled batch transfers of historical data and on-demand transfers via HTTP API.
## Configuration
Plugin parameters may be specified as key-value pairs in the `--trigger-arguments` flag (CLI) or in the `trigger_arguments` field (API) when creating a trigger. Some plugins support TOML configuration files, which can be specified using the plugin's `config_file_path` parameter.
If a plugin supports multiple trigger specifications, some parameters may depend on the trigger specification that you use.
### Plugin metadata
This plugin includes a JSON metadata schema in its docstring that defines supported trigger types and configuration parameters. This metadata enables the [InfluxDB 3 Explorer](https://docs.influxdata.com/influxdb3/explorer/) UI to display and configure the plugin.
### Scheduler trigger parameters
#### Required parameters
| Parameter | Type | Default | Description |
|-------------------|--------|----------|-----------------------------------------------------------------------------|
| `measurement` | string | required | Source measurement containing data to transfer |
| `window` | string | required | Time window for data transfer. Format: `<number><unit>` (for example, "1h", "30d") |
| `catalog_configs` | string | required | Base64-encoded JSON string containing Iceberg catalog configuration |
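The `window` value can be read as a timedelta; a sketch under the assumption that the supported units are `s`, `min`, `h`, `d`, and `w` (check the plugin source for the authoritative list):

```python
import re
from datetime import timedelta

# Assumed unit set for illustration
UNITS = {"s": "seconds", "min": "minutes", "h": "hours", "d": "days", "w": "weeks"}

def parse_window(window: str) -> timedelta:
    """Parse a <number><unit> string such as "1h" or "30d"."""
    m = re.fullmatch(r"(\d+)(s|min|h|d|w)", window)
    if m is None:
        raise ValueError(f"unrecognized window: {window!r}")
    return timedelta(**{UNITS[m.group(2)]: int(m.group(1))})

print(parse_window("1h"), parse_window("30d"))
```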
#### Optional parameters
| Parameter | Type | Default | Description |
|----------------------|--------|-------------------|-----------------------------------------------------------------------------------|
| `included_fields` | string | all fields/tags | Dot-separated list of fields and tags to include (for example, "usage_user.host") |
| `excluded_fields` | string | none | Dot-separated list of fields and tags to exclude |
| `namespace` | string | "default" | Iceberg namespace for the target table |
| `table_name` | string | measurement name | Iceberg table name |
| `auto_update_schema` | string | false | Automatically update Iceberg table schema when data doesn't match existing schema |
### TOML configuration
@ -68,52 +37,103 @@ The plugin assumes that the Iceberg table schema is already defined in the datab
|--------------------|--------|---------|----------------------------------------------------------------------------------|
| `config_file_path` | string | none | TOML config file path relative to `PLUGIN_DIR` (required for TOML configuration) |
*To use a TOML configuration file, set the `PLUGIN_DIR` environment variable and specify the `config_file_path` in the trigger arguments.* This is in addition to the `--plugin-dir` flag when starting {{% product-name %}}.
#### Example TOML configuration
[influxdb_to_iceberg_config_scheduler.toml](https://github.com/influxdata/influxdb3_plugins/blob/master/influxdata/influxdb_to_iceberg/influxdb_to_iceberg_config_scheduler.toml)
For more information on using TOML configuration files, see the Using TOML Configuration Files section in the [influxdb3_plugins/README.md](https://github.com/influxdata/influxdb3_plugins/blob/master/README.md).
### HTTP trigger parameters
#### Request body structure
| Parameter | Type | Required | Description |
|----------------------|---------|----------|--------------------------------------------------------------------------------------------------------------------------------|
| `measurement` | string | Yes | Source measurement containing data to transfer |
| `catalog_configs` | object | Yes | Iceberg catalog configuration dictionary. See [PyIceberg catalog documentation](https://py.iceberg.apache.org/configuration/) |
| `included_fields` | array | No | List of field and tag names to include in replication |
| `excluded_fields` | array | No | List of field and tag names to exclude from replication |
| `namespace` | string | No | Target Iceberg namespace (default: "default") |
| `table_name` | string | No | Target Iceberg table name (default: measurement name) |
| `batch_size` | string | No | Batch size duration for processing (default: "1d"). Format: `<number><unit>` |
| `backfill_start` | string | No | ISO 8601 datetime with timezone for backfill start |
| `backfill_end` | string | No | ISO 8601 datetime with timezone for backfill end |
| `auto_update_schema` | boolean | No | Automatically update Iceberg table schema when data doesn't match existing schema (default: false) |
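How `batch_size`, `backfill_start`, and `backfill_end` interact can be sketched as follows (a hypothetical helper, not the plugin's code):

```python
from datetime import datetime, timedelta, timezone

def batch_ranges(start: datetime, end: datetime, batch: timedelta):
    """Split [start, end) into consecutive windows of at most `batch`,
    the granularity at which a backfill request is processed."""
    ranges = []
    cursor = start
    while cursor < end:
        ranges.append((cursor, min(cursor + batch, end)))
        cursor += batch
    return ranges

start = datetime(2024, 1, 1, tzinfo=timezone.utc)
end = datetime(2024, 1, 7, tzinfo=timezone.utc)
batches = batch_ranges(start, end, timedelta(hours=12))
print(len(batches))  # 12
```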
## Schema management
- Automatically creates Iceberg table schema from the first batch of data
- Maps pandas data types to Iceberg types:
- `int64` → `IntegerType`
- `float64` → `FloatType`
- `datetime64[us]` → `TimestampType`
- `object` → `StringType`
- Fields with no null values are marked as `required`
- The `time` column is converted to `datetime64[us]` for Iceberg compatibility
- Tables are created in format: `<namespace>.<table_name>`
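The mapping above can be expressed as a simple lookup (type names shown as strings here; the plugin itself constructs PyIceberg type objects):

```python
# String stand-ins for the PyIceberg types listed above
DTYPE_TO_ICEBERG = {
    "int64": "IntegerType",
    "float64": "FloatType",
    "datetime64[us]": "TimestampType",
    "object": "StringType",
}

def column_spec(name: str, dtype: str, has_nulls: bool) -> dict:
    """Mirror the rules above: map the pandas dtype and mark columns
    with no null values as required."""
    return {
        "name": name,
        "type": DTYPE_TO_ICEBERG[dtype],
        "required": not has_nulls,
    }

print(column_spec("usage_user", "float64", has_nulls=False))
# {'name': 'usage_user', 'type': 'FloatType', 'required': True}
```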
### Automatic schema updates
When `auto_update_schema=true`:
- **New fields**: Automatically added to Iceberg table schema as optional (nullable) columns
- **Missing fields**: Added to DataFrame with null values based on existing schema types
- **Schema evolution**: Ensures data compatibility between InfluxDB and Iceberg without manual intervention
- **Backward compatibility**: Existing data remains valid as new columns are always optional
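In outline, those evolution rules might look like this (a simplified sketch; field ordering and type handling are omitted):

```python
def reconcile(schema_columns: set, row: dict):
    """Sketch of auto_update_schema=true: keys not yet in the schema
    become new optional columns, and columns missing from the row are
    filled with None so the append stays schema-compatible."""
    new_columns = set(row) - schema_columns
    updated = schema_columns | new_columns
    full_row = {col: row.get(col) for col in updated}
    return updated, full_row

schema, row = reconcile({"time", "temp"}, {"time": 1700000000, "humidity": 40})
print(sorted(schema))  # ['humidity', 'temp', 'time']
print(row["temp"])     # None
```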
## Software Requirements
- **{{% product-name %}}**: with the Processing Engine enabled
- **Python packages**:
- `pandas` (for data manipulation)
- `pyarrow` (for Parquet support)
- `pyiceberg[catalog-options]` (for Iceberg integration)
### Installation steps
1. Start {{% product-name %}} with the Processing Engine enabled (`--plugin-dir /path/to/plugins`):
```bash
influxdb3 serve \
--node-id node0 \
--object-store file \
--data-dir ~/.influxdb3 \
--plugin-dir ~/.plugins
```
2. Install required Python packages:
- `pandas` (for data manipulation)
- `pyarrow` (for Parquet support)
- `pyiceberg[catalog-options]` (for Iceberg integration)
```bash
influxdb3 install package pandas
influxdb3 install package pyarrow
influxdb3 install package "pyiceberg[s3fs,hive,sql-sqlite]"
```
**Note:** Include the appropriate PyIceberg extras based on your catalog type:
- `[s3fs]` for S3 storage
- `[hive]` for Hive metastore
- `[sql-sqlite]` for SQL catalog with SQLite
- See [PyIceberg documentation](https://py.iceberg.apache.org/#installation) for all options
## Schema requirement
The plugin assumes that the table schema is already defined in the database, as it relies on this schema to retrieve field and tag names required for processing.
## Trigger setup
### Scheduled data transfer
Periodically transfer data from {{% product-name %}} to Iceberg:
```bash
influxdb3 create trigger \
--database mydb \
--path "gh:influxdata/influxdb_to_iceberg/influxdb_to_iceberg.py" \
--trigger-spec "every:1h" \
--trigger-arguments 'measurement=cpu,window=1h,catalog_configs="eyJ1cmkiOiAiaHR0cDovL25lc3NpZTo5MDAwIn0=",namespace=monitoring,table_name=cpu_metrics' \
hourly_iceberg_transfer
```
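The `catalog_configs` value above is just base64-encoded JSON; the string used in this trigger encodes a minimal catalog configuration:

```python
import base64
import json

catalog = {"uri": "http://nessie:9000"}
encoded = base64.b64encode(json.dumps(catalog).encode()).decode()
print(encoded)  # eyJ1cmkiOiAiaHR0cDovL25lc3NpZTo5MDAwIn0=

# Round-trip back to the original configuration
assert json.loads(base64.b64decode(encoded)) == catalog
```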
### HTTP API endpoint
Create an on-demand transfer endpoint:
@ -121,16 +141,15 @@ Create an on-demand transfer endpoint:
```bash
influxdb3 create trigger \
--database mydb \
--path "gh:influxdata/influxdb_to_iceberg/influxdb_to_iceberg.py" \
--trigger-spec "request:replicate" \
iceberg_http_transfer
```
Enable the trigger:
```bash
influxdb3 enable trigger --database mydb iceberg_http_transfer
```
The endpoint is registered at `/api/v3/engine/replicate`.
## Example usage
@ -145,7 +164,7 @@ Transfer CPU metrics to Iceberg every hour:
# Base64: eyJ1cmkiOiAiaHR0cDovL25lc3NpZTo5MDAwIn0=
influxdb3 create trigger \
--database metrics \
--path "gh:influxdata/influxdb_to_iceberg/influxdb_to_iceberg.py" \
--trigger-spec "every:1h" \
--trigger-arguments 'measurement=cpu,window=24h,catalog_configs="eyJ1cmkiOiAiaHR0cDovL25lc3NpZTo5MDAwIn0="' \
cpu_to_iceberg
@ -157,8 +176,8 @@ influxdb3 write \
# After trigger runs, data is available in Iceberg table "default.cpu"
```
### Expected results
- Creates Iceberg table `default.cpu` with schema matching the measurement
- Transfers all CPU data from the last 24 hours
- Appends new data on each hourly run
@ -171,7 +190,7 @@ Backfill specific fields from historical data:
# Create and enable HTTP trigger
influxdb3 create trigger \
--database metrics \
--path "gh:influxdata/influxdb_to_iceberg/influxdb_to_iceberg.py" \
--trigger-spec "request:replicate" \
iceberg_backfill
@ -186,7 +205,7 @@ curl -X POST http://localhost:8181/api/v3/engine/replicate \
"type": "sql",
"uri": "sqlite:///path/to/catalog.db"
},
"included_fields": ["temp_celsius", "humidity"],
"included_fields": ["temp_celsius", "humidity", "sensor_id"],
"namespace": "weather",
"table_name": "temperature_history",
"batch_size": "12h",
@ -194,8 +213,8 @@ curl -X POST http://localhost:8181/api/v3/engine/replicate \
"backfill_end": "2024-01-07T00:00:00+00:00"
}'
```
### Expected results
- Creates Iceberg table `weather.temperature_history`
- Transfers the `temp_celsius` and `humidity` fields along with the `sensor_id` tag
- Processes data in 12-hour batches for the specified week
@ -225,62 +244,12 @@ CATALOG_CONFIG=$(base64 < catalog_config.json)
# Create trigger
influxdb3 create trigger \
--database metrics \
--path "gh:influxdata/influxdb_to_iceberg/influxdb_to_iceberg.py" \
--trigger-spec "every:30m" \
--trigger-arguments "measurement=sensor_data,window=1h,catalog_configs=\"$CATALOG_CONFIG\",namespace=iot,table_name=sensors" \
s3_iceberg_transfer
```
## Code overview
### Files
@ -290,13 +259,13 @@ This plugin supports using TOML configuration files to specify all plugin argume
### Logging
Logs are stored in the trigger's database in the `system.processing_engine_logs` table. To view logs:
```bash
influxdb3 query --database YOUR_DATABASE "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'your_trigger_name'"
```
Log columns:
- **event_time**: Timestamp of the log event
- **trigger_name**: Name of the trigger that generated the log
- **log_level**: Severity level (INFO, WARN, ERROR)
@ -305,20 +274,22 @@ Log columns:
### Main functions
#### `process_scheduled_call(influxdb3_local, call_time, args)`
Handles scheduled data transfers. Queries data within the specified window and appends to Iceberg tables.
Key operations:
1. Parses configuration and decodes catalog settings
2. Queries source measurement with optional field filtering
3. Creates Iceberg table if needed
4. Appends data to Iceberg table
#### `process_http_request(influxdb3_local, request_body, args)`
Handles on-demand data transfers via HTTP. Supports backfill operations with configurable batch sizes.
Key operations:
1. Validates request body parameters
2. Determines backfill time range
3. Processes data in batches
@ -329,67 +300,70 @@ Key operations:
### Common issues
#### Issue: "Failed to decode catalog_configs" error
**Solution**: Ensure the catalog configuration is properly base64-encoded:
```bash
# Create JSON file
echo '{"uri": "http://nessie:9000"}' > config.json
# Encode to base64
base64 config.json
```
#### Issue: "Failed to create Iceberg table" error
**Solution**:
1. Verify catalog configuration is correct
2. Check warehouse path permissions
3. Ensure required PyIceberg extras are installed:
```bash
influxdb3 install package "pyiceberg[s3fs]"
```
#### Issue: No data in Iceberg table after transfer
**Solution**:
1. Check if source measurement contains data:
   ```bash
   influxdb3 query --database mydb "SELECT COUNT(*) FROM measurement"
   ```
2. Verify time window covers data:
   ```bash
   influxdb3 query --database mydb "SELECT MIN(time), MAX(time) FROM measurement"
   ```
3. Check logs for errors:
   ```bash
   influxdb3 query --database YOUR_DATABASE "SELECT * FROM system.processing_engine_logs WHERE log_level = 'ERROR'"
   ```
#### Issue: "Incompatible change: cannot add required column" error
**Solution**: This occurs when trying to add a required (non-nullable) column to an existing table. With `auto_update_schema=true`, new columns are automatically added as optional. If you encounter this error:
1. Ensure `auto_update_schema=true` in your configuration
2. Check that you're using the latest version of the plugin
### Debugging tips
1. **Test catalog connectivity**:
```python
from pyiceberg.catalog import load_catalog
catalog = load_catalog("my_catalog", **catalog_configs)
print(catalog.list_namespaces())
```
2. **Verify field names**:
```bash
influxdb3 query --database mydb "SHOW FIELD KEYS FROM measurement"
```
3. **Use smaller windows** for initial testing:
```bash
--trigger-arguments 'window=5m,...'
```
### Performance considerations
- **File sizing**: Each scheduled run creates new Parquet files. Use appropriate window sizes to balance file count and size
- **Batch processing**: For HTTP transfers, adjust `batch_size` based on available memory
- **Field and tag filtering**: Use `included_fields` to reduce data volume when only specific fields and tags are needed
- **Catalog choice**: SQL catalogs (SQLite) are simpler but REST catalogs scale better
## Report an issue

View File

@ -1,80 +1,82 @@
The MAD-Based Anomaly Detection Plugin provides real-time anomaly detection for time series data in {{% product-name %}} using Median Absolute Deviation (MAD). Detect outliers in your field values as data is written, with configurable thresholds for both count-based and duration-based alerts. The plugin maintains in-memory deques for efficient computation and integrates with the Notification Sender Plugin to deliver alerts via multiple channels.
## Configuration
Plugin parameters may be specified as key-value pairs in the `--trigger-arguments` flag (CLI) or in the `trigger_arguments` field (API) when creating a trigger. Some plugins support TOML configuration files, which can be specified using the plugin's `config_file_path` parameter.
If a plugin supports multiple trigger specifications, some parameters may depend on the trigger specification that you use.
### Plugin metadata
This plugin includes a JSON metadata schema in its docstring that defines supported trigger types and configuration parameters. This metadata enables the [InfluxDB 3 Explorer](https://docs.influxdata.com/influxdb3/explorer/) UI to display and configure the plugin.
### Required parameters
| Parameter | Type | Default | Description |
|------------------|--------|----------|---------------------------------------------------------------------|
| `measurement` | string | required | Source measurement to monitor for anomalies |
| `mad_thresholds` | string | required | MAD threshold conditions. Format: `field:k:window_count:threshold` |
| `senders` | string | required | Dot-separated list of notification channels (for example, "slack.discord") |
### MAD threshold parameters
| Component | Description | Example |
|----------------|------------------------------------------------|-------------|
| `field_name` | The numeric field to monitor | `temp` |
| `k` | MAD multiplier for anomaly threshold | `2.5` |
| `window_count` | Number of recent points for MAD computation | `20` |
| `threshold` | Count (integer) or duration (for example, "2m", "1h") | `5` or `2m` |
Multiple thresholds are separated by `@`: `temp:2.5:20:5@load:3:10:2m`
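A sketch of how such a spec decomposes (hypothetical helper; the plugin's own parser may differ in detail):

```python
def parse_mad_thresholds(spec: str):
    """Split "temp:2.5:20:5@load:3:10:2m" into per-field rules. A
    threshold that is all digits is count-based; anything else is
    treated as a duration string such as "2m"."""
    rules = []
    for item in spec.split("@"):
        field, k, window_count, threshold = item.split(":")
        rules.append({
            "field": field,
            "k": float(k),
            "window_count": int(window_count),
            "threshold": threshold,
            "kind": "count" if threshold.isdigit() else "duration",
        })
    return rules

rules = parse_mad_thresholds("temp:2.5:20:5@load:3:10:2m")
print(rules[0]["kind"], rules[1]["kind"])  # count duration
```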
### Optional parameters
| Parameter | Type | Default | Description |
|---------------------------|--------|--------------------------------------|-------------------------------------------------------------------------------------------|
| `influxdb3_auth_token` | string | env var | API token for {{% product-name %}} (or use INFLUXDB3_AUTH_TOKEN env var) |
| `state_change_count` | string | "0" | Maximum allowed value flips before suppressing notifications |
| `notification_count_text` | string | see *Default notification templates* | Template for count-based alerts with variables: $table, $field, $threshold_count, $tags |
| `notification_time_text` | string | see *Default notification templates* | Template for duration-based alerts with variables: $table, $field, $threshold_time, $tags |
| `notification_path` | string | "notify" | URL path for the notification sending plugin |
| `port_override` | string | "8181" | Port number where InfluxDB accepts requests |
#### Default notification templates
- Count: `"MAD count alert: Field $field in $table outlier for $threshold_count consecutive points. Tags: $tags"`
- Time: `"MAD duration alert: Field $field in $table outlier for $threshold_time. Tags: $tags"`
### Notification channel parameters
#### Slack
| Parameter | Type | Required | Description |
|---------------------|--------|----------|-----------------------------|
| `slack_webhook_url` | string | Yes | Webhook URL from Slack |
| `slack_headers` | string | No | Base64-encoded HTTP headers |
#### Discord
| Parameter | Type | Required | Description |
|-----------------------|--------|----------|-----------------------------|
| `discord_webhook_url` | string | Yes | Webhook URL from Discord |
| `discord_headers` | string | No | Base64-encoded HTTP headers |
#### HTTP
| Parameter | Type | Required | Description |
|--------------------|--------|----------|--------------------------------------|
| `http_webhook_url` | string | Yes | Custom webhook URL for POST requests |
| `http_headers` | string | No | Base64-encoded HTTP headers |
#### SMS/WhatsApp (via Twilio)
| Parameter | Type | Required | Description |
|----------------------|--------|----------|-------------------------------------------------|
| `twilio_sid` | string | Yes | Twilio Account SID (or use TWILIO_SID env var) |
| `twilio_token` | string | Yes | Twilio Auth Token (or use TWILIO_TOKEN env var) |
| `twilio_from_number` | string | Yes | Sender phone number |
| `twilio_to_number` | string | Yes | Recipient phone number |
### TOML configuration
@ -82,28 +84,42 @@ The plugin assumes that the table schema is already defined in the database, as
|--------------------|--------|---------|----------------------------------------------------------------------------------|
| `config_file_path` | string | none | TOML config file path relative to `PLUGIN_DIR` (required for TOML configuration) |
*To use a TOML configuration file, set the `PLUGIN_DIR` environment variable and specify the `config_file_path` in the trigger arguments.* This is in addition to the `--plugin-dir` flag when starting {{% product-name %}}.
#### Example TOML configuration
[mad_anomaly_config_data_writes.toml](https://github.com/influxdata/influxdb3_plugins/blob/master/influxdata/mad_check/mad_anomaly_config_data_writes.toml)
For more information on using TOML configuration files, see the Using TOML Configuration Files section in the [influxdb3_plugins/README.md](https://github.com/influxdata/influxdb3_plugins/blob/master/README.md).
## Software Requirements
- **{{% product-name %}}**: with the Processing Engine enabled.
- **Python packages**:
- `requests` (for notification delivery)
- **Notification Sender Plugin** *(optional)*: Required if using the `senders` parameter. See the [influxdata/notifier plugin](../notifier/README.md).
### Installation steps
1. Start {{% product-name %}} with the Processing Engine enabled (`--plugin-dir /path/to/plugins`):
```bash
influxdb3 serve \
--node-id node0 \
--object-store file \
--data-dir ~/.influxdb3 \
--plugin-dir ~/.plugins
```
2. Install required Python packages:
- `requests` (for notification delivery)
```bash
influxdb3 install package requests
```
## Schema requirement
The plugin assumes that the table schema is already defined in the database, as it relies on this schema to retrieve field and tag names required for processing.
## Trigger setup
### Data write trigger

Detect anomalies as data is written:
```bash
influxdb3 create trigger \
--database mydb \
--path "gh:influxdata/mad_check/mad_check_plugin.py" \
--trigger-spec "all_tables" \
--trigger-arguments 'measurement=cpu,mad_thresholds="temp:2.5:20:5@load:3:10:2m",senders=slack,slack_webhook_url="$SLACK_WEBHOOK_URL"' \
mad_anomaly_detector
```
Set `SLACK_WEBHOOK_URL` to your Slack incoming webhook URL.
## Example usage
### Example 1: Count-based temperature anomaly detection

Detect when temperature exceeds 2.5 MADs from the median for 5 consecutive points:

```bash
# Create trigger for count-based detection
influxdb3 create trigger \
--database sensors \
--path "gh:influxdata/mad_check/mad_check_plugin.py" \
--trigger-spec "all_tables" \
--trigger-arguments 'measurement=environment,mad_thresholds="temperature:2.5:20:5",senders=slack,slack_webhook_url="$SLACK_WEBHOOK_URL"' \
temp_anomaly_detector
# Write test data with an anomaly
influxdb3 write \
  --database sensors \
"environment,room=office temperature=45.8" # Anomaly
# Continue writing anomalous values...
```
Set `SLACK_WEBHOOK_URL` to your Slack incoming webhook URL.
### Expected results
- Plugin maintains a 20-point window of recent temperature values
- Computes median and MAD from this window
- When temperature exceeds median ± 2.5*MAD for 5 consecutive points, sends Slack notification
### Example 2: Multi-threshold system monitoring

Monitor CPU load and memory usage with different thresholds:

```bash
# Create trigger with multiple thresholds
influxdb3 create trigger \
--database monitoring \
--path "gh:influxdata/mad_check/mad_check_plugin.py" \
--trigger-spec "all_tables" \
--trigger-arguments 'measurement=system_metrics,mad_thresholds="cpu_load:3:30:2m@memory_used:2.5:30:5m",senders=slack.discord,slack_webhook_url="$SLACK_WEBHOOK_URL",discord_webhook_url="$DISCORD_WEBHOOK_URL"' \
system_anomaly_detector
```
Set `SLACK_WEBHOOK_URL` and `DISCORD_WEBHOOK_URL` to your webhook URLs.
### Expected results
- Monitors two fields independently:
  - `cpu_load`: Alerts when the value exceeds 3 MADs for 2 minutes
  - `memory_used`: Alerts when the value exceeds 2.5 MADs for 5 minutes
- Sends notifications to both Slack and Discord
### Example 3: Anomaly detection with flip suppression
Prevent alert fatigue from rapidly fluctuating values:

```bash
# Create trigger with flip suppression
influxdb3 create trigger \
--database iot \
--path "gh:influxdata/mad_check/mad_check_plugin.py" \
--trigger-spec "all_tables" \
--trigger-arguments 'measurement=sensor_data,mad_thresholds="vibration:2:50:10",state_change_count=3,senders=http,http_webhook_url="$HTTP_WEBHOOK_URL",notification_count_text="Vibration anomaly detected on $table. Field: $field, Tags: $tags"' \
vibration_monitor
```
Set `HTTP_WEBHOOK_URL` to your HTTP webhook endpoint.
### Expected results
- Detects vibration anomalies exceeding 2 MADs for 10 consecutive points
- If values flip between normal and anomalous states more than 3 times in the 50-point window, notifications are suppressed
- Sends custom formatted message to HTTP endpoint
## Using TOML Configuration Files

This plugin supports using TOML configuration files to specify all plugin arguments.
### Important Requirements
**To use TOML configuration files, you must set the `PLUGIN_DIR` environment variable in the {{% product-name %}} host environment.**
### Setting Up TOML Configuration
1. **Start {{% product-name %}} with the PLUGIN_DIR environment variable set**:
   ```bash
   PLUGIN_DIR=~/.plugins influxdb3 serve \
     --node-id node0 \
     --object-store file \
     --data-dir ~/.influxdb3 \
     --plugin-dir ~/.plugins
   ```
2. **Copy the example TOML configuration file to your plugin directory**:
   ```bash
   cp mad_anomaly_config_data_writes.toml ~/.plugins/
   ```
3. **Edit the TOML file** to match your requirements:
   ```toml
   # Required parameters
   measurement = "cpu"
   mad_thresholds = "temp:2.5:20:5@load:3:10:2m"
   senders = "slack"

   # Notification settings
   slack_webhook_url = "$SLACK_WEBHOOK_URL"
   notification_count_text = "Custom alert: $field anomaly detected"
   ```

   Set `SLACK_WEBHOOK_URL` to your Slack incoming webhook URL.
4. **Create a trigger using the `config_file_path` argument**:
   ```bash
   influxdb3 create trigger \
     --database mydb \
     --path "gh:influxdata/mad_check/mad_check_plugin.py" \
     --trigger-spec "all_tables" \
     --trigger-arguments config_file_path=mad_anomaly_config_data_writes.toml \
     mad_toml_trigger
   ```
## Code overview
### Files
- `mad_check_plugin.py`: The main plugin code containing the anomaly detection logic
### Logging
Logs are stored in the trigger's database in the `system.processing_engine_logs` table:
```bash
influxdb3 query --database YOUR_DATABASE "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'your_trigger_name'"
```
Log columns:
- **event_time**: Timestamp of the log event
- **trigger_name**: Name of the trigger that generated the log
- **log_level**: Severity level (INFO, WARN, ERROR)
- **log_text**: Log message content
### Main functions
#### `process_writes(influxdb3_local, table_batches, args)`
Handles real-time anomaly detection on incoming data.
Key operations:
1. Filters table batches for the specified measurement
2. Maintains in-memory deques of recent values per field
3. Computes MAD for each monitored field
4. Sends notifications through configured senders when thresholds are exceeded
### Key algorithms
#### MAD (Median Absolute Deviation) Calculation
```python
median = statistics.median(values)
mad = statistics.median([abs(x - median) for x in values])
threshold = k * mad
is_anomaly = abs(value - median) > threshold
```
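A quick worked example with concrete numbers (an illustrative sketch of the calculation above, not plugin code):

```python
import statistics

values = [10, 11, 10, 12, 30]  # the last point is an obvious outlier
k = 2.5  # MAD multiplier, as in the k value of a threshold

median = statistics.median(values)
mad = statistics.median([abs(x - median) for x in values])
threshold = k * mad

# The outlier deviates 19 from the median, far beyond the 2.5 threshold
is_anomaly = abs(values[-1] - median) > threshold
```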
#### Flip Detection
Counts transitions between normal and anomalous states within the window to prevent alert fatigue from rapidly changing values.
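The flip-counting idea can be sketched in a few lines (an illustration of the concept, not the plugin's implementation; all names are hypothetical):

```python
def count_flips(flags):
    """Count normal/anomalous state transitions in a window of
    per-point anomaly flags."""
    return sum(1 for prev, cur in zip(flags, flags[1:]) if prev != cur)

# A rapidly fluctuating window produces many transitions
flags = [False, True, False, True, False]
state_change_count = 3

# More flips than allowed, so the notification would be suppressed
suppress = count_flips(flags) > state_change_count
```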
## Troubleshooting
@ -286,29 +318,36 @@ Counts transitions between normal and anomalous states within the window to prev
### Common issues
#### Issue: No notifications being sent
**Solution**:
1. Verify the Notification Sender Plugin is installed and running
2. Check webhook URLs are correct:

   ```bash
   influxdb3 query --database YOUR_DATABASE "SELECT * FROM system.processing_engine_logs WHERE log_text LIKE '%notification%'"
   ```
3. Ensure notification channel parameters are provided for selected senders
#### Issue: "Invalid MAD thresholds format" error
**Solution**: Check threshold format is correct:
- Count-based: `field:k:window:count` (for example, `temp:2.5:20:5`)
- Duration-based: `field:k:window:duration` (for example, `temp:2.5:20:2m`)
- Multiple thresholds separated by `@`
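To clarify the format, here is a small parser sketch (a hypothetical helper, not the plugin's source):

```python
def parse_mad_thresholds(spec):
    """Parse 'field:k:window:count_or_duration' entries joined by '@'."""
    thresholds = []
    for part in spec.split("@"):
        field, k, window, trigger = part.split(":")
        thresholds.append({
            "field": field,
            "k": float(k),
            "window": int(window),
            # duration-based triggers end in a unit letter, e.g. '2m'
            "duration_based": trigger[-1].isalpha(),
            "trigger": trigger,
        })
    return thresholds

parsed = parse_mad_thresholds("temp:2.5:20:5@load:3:10:2m")
```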
#### Issue: Too many false positive alerts
**Solution**:
1. Increase the k multiplier (for example, from 2.5 to 3.0)
2. Increase the threshold count or duration
3. Enable flip suppression with `state_change_count`
4. Increase the window size for more stable statistics
#### Issue: Missing anomalies (false negatives)
**Solution**:
1. Decrease the k multiplier
2. Decrease the threshold count or duration
3. Check if data has seasonal patterns that affect the median
### Debugging tips
1. **Monitor deque sizes**:
   ```bash
   influxdb3 query --database YOUR_DATABASE "SELECT * FROM system.processing_engine_logs WHERE log_text LIKE '%Deque%'"
   ```
2. **Check MAD calculations**:
   ```bash
   influxdb3 query --database YOUR_DATABASE "SELECT * FROM system.processing_engine_logs WHERE log_text LIKE '%MAD:%'"
   ```
3. **Test with known anomalies**: Write test data with obvious outliers to verify detection
### Performance considerations

The Notifier Plugin provides multi-channel notification capabilities for {{% product-name %}}, enabling real-time alert delivery through various communication channels. Send notifications via Slack, Discord, HTTP webhooks, SMS, or WhatsApp based on incoming HTTP requests. Acts as a centralized notification dispatcher that receives data from other plugins or external systems and routes notifications to the appropriate channels.
## Configuration
This HTTP plugin receives all configuration via the request body. No trigger arguments are required.
### Plugin metadata
This plugin includes a JSON metadata schema in its docstring that defines supported trigger types and configuration parameters. This metadata enables the [InfluxDB 3 Explorer](https://docs.influxdata.com/influxdb3/explorer/) UI to display and configure the plugin.
### Request body parameters
Send these parameters as JSON in the HTTP POST request body:
| Parameter | Type | Default | Description |
|---------------------|--------|----------|---------------------------------------------|
| `notification_text` | string | required | Text content of the notification message |
| `senders_config` | object | required | Configuration for each notification channel |
### Sender-specific configuration (in request body)
The `senders_config` object accepts channel configurations where keys are sender names and values contain channel-specific settings:
#### Slack notifications
| Parameter | Type | Default | Description |
|---------------------|--------|----------|-----------------------------|
| `slack_webhook_url` | string | required | Slack webhook URL |
| `slack_headers` | string | none | Base64-encoded JSON headers |
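The `slack_headers` value (like the other `*_headers` parameters) is base64-encoded JSON. For example, encoding a custom header (the header shown is illustrative):

```python
import base64
import json

headers = {"Authorization": "Bearer my-token"}  # example header only

# Encode as base64 JSON, the format expected by the *_headers parameters
encoded = base64.b64encode(json.dumps(headers).encode()).decode()

# Decoding reverses the process
decoded = json.loads(base64.b64decode(encoded))
```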
#### Discord notifications
| Parameter | Type | Default | Description |
|-----------------------|--------|----------|-----------------------------|
| `discord_webhook_url` | string | required | Discord webhook URL |
| `discord_headers` | string | none | Base64-encoded JSON headers |
#### HTTP webhook notifications
| Parameter | Type | Default | Description |
|--------------------|--------|----------|----------------------------------|
| `http_webhook_url` | string | required | Custom webhook URL for HTTP POST |
| `http_headers` | string | none | Base64-encoded JSON headers |
#### SMS notifications (via Twilio)
| Parameter | Type | Default | Description |
|----------------------|--------|----------|---------------------------------------------------|
| `twilio_sid` | string | required | Twilio Account SID (or use `TWILIO_SID` env var) |
| `twilio_token` | string | required | Twilio Auth Token (or use `TWILIO_TOKEN` env var) |
| `twilio_from_number` | string | required | Sender phone number in E.164 format |
| `twilio_to_number` | string | required | Recipient phone number in E.164 format |
#### WhatsApp notifications (via Twilio)
| Parameter | Type | Default | Description |
|----------------------|--------|----------|---------------------------------------------------|
| `twilio_sid` | string | required | Twilio Account SID (or use `TWILIO_SID` env var) |
| `twilio_token` | string | required | Twilio Auth Token (or use `TWILIO_TOKEN` env var) |
| `twilio_from_number` | string | required | Sender WhatsApp number in E.164 format |
| `twilio_to_number` | string | required | Recipient WhatsApp number in E.164 format |
## Installation

### Software requirements
- **{{% product-name %}}**: with the Processing Engine enabled.
- **Python packages**:
- `httpx` (for HTTP requests)
- `twilio` (for SMS/WhatsApp notifications)
### Installation steps
1. Start {{% product-name %}} with the Processing Engine enabled (`--plugin-dir /path/to/plugins`):
```bash
influxdb3 serve \
--node-id node0 \
--object-store file \
--data-dir ~/.influxdb3 \
--plugin-dir ~/.plugins
```
2. Install required Python packages:
- `httpx` (for HTTP requests)
- `twilio` (for SMS and WhatsApp notifications)
```bash
influxdb3 install package httpx
influxdb3 install package twilio
```
## Create trigger
### HTTP trigger
Create an HTTP trigger to handle notification requests:
```bash
influxdb3 create trigger \
--database mydb \
--path "gh:influxdata/notifier/notifier_plugin.py" \
--trigger-spec "request:notify" \
notification_trigger
```
This registers an HTTP endpoint at `/api/v3/engine/notify`.
### Enable trigger
@ -82,95 +105,114 @@ This registers an HTTP endpoint at `/api/v3/engine/notify`.
```bash
influxdb3 enable trigger --database mydb notification_trigger
```
## Examples
### Example 1: Slack notification
Send a notification to Slack:
```bash
curl -X POST http://localhost:8181/api/v3/engine/notify \
-H "Authorization: Bearer $INFLUXDB3_AUTH_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"notification_text": "Alert: High CPU usage detected on server1",
"senders_config": {
"slack": {
"slack_webhook_url": "'"$SLACK_WEBHOOK_URL"'"
}
}
}'
```
Set `INFLUXDB3_AUTH_TOKEN` and `SLACK_WEBHOOK_URL` to your credentials.
**Expected output**
Notification sent to Slack channel with message: "Alert: High CPU usage detected on server1"
### Example 2: SMS notification
Send an SMS via Twilio:
```bash
curl -X POST http://localhost:8181/api/v3/engine/notify \
-H "Authorization: Bearer $INFLUXDB3_AUTH_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"notification_text": "Critical alert: System down",
"senders_config": {
"sms": {
"twilio_from_number": "'"$TWILIO_FROM_NUMBER"'",
"twilio_to_number": "'"$TWILIO_TO_NUMBER"'"
}
}
}'
```
Set `TWILIO_FROM_NUMBER` and `TWILIO_TO_NUMBER` to your phone numbers. Twilio credentials can be set via `TWILIO_SID` and `TWILIO_TOKEN` environment variables.
### Example 3: Multi-channel notification
Send notifications via multiple channels simultaneously:
```bash
curl -X POST http://localhost:8181/api/v3/engine/notify \
-H "Authorization: Bearer $INFLUXDB3_AUTH_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"notification_text": "Performance warning: Memory usage above threshold",
"senders_config": {
"slack": {
"slack_webhook_url": "'"$SLACK_WEBHOOK_URL"'"
},
"discord": {
"discord_webhook_url": "'"$DISCORD_WEBHOOK_URL"'"
}
}
}'
```
Set `SLACK_WEBHOOK_URL` and `DISCORD_WEBHOOK_URL` to your webhook URLs.
## Code overview
### Files
- `notifier_plugin.py`: The main plugin code containing the HTTP handler for notification dispatch
### Logging
Logs are stored in the trigger's database in the `system.processing_engine_logs` table. To view logs:
```bash
influxdb3 query --database YOUR_DATABASE "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'notification_trigger'"
```
### Main functions
#### `process_http_request(influxdb3_local, request_body, args)`
Handles incoming HTTP notification requests. Parses the request body, extracts notification text and sender configurations, and dispatches notifications to configured channels.
Key operations:
1. Validates request body for required `notification_text` and `senders_config`
2. Iterates through sender configurations (Slack, Discord, HTTP, SMS, WhatsApp)
3. Dispatches notifications with built-in retry logic and error handling
4. Returns success/failure status for each channel
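The dispatch pattern in steps 2-4 can be sketched as follows (a simplified illustration, not the plugin's actual code; names are hypothetical):

```python
def dispatch(notification_text, senders_config, send_fns):
    """Route one message to each configured channel and collect
    per-channel success/failure status."""
    results = {}
    for channel, config in senders_config.items():
        sender = send_fns.get(channel)
        if sender is None:
            results[channel] = "unknown channel"
            continue
        try:
            sender(notification_text, config)
            results[channel] = "success"
        except Exception as exc:
            results[channel] = f"failed: {exc}"
    return results

# Hypothetical sender that records what it would deliver
sent = []
fns = {"slack": lambda text, cfg: sent.append((cfg["slack_webhook_url"], text))}
status = dispatch("CPU alert", {"slack": {"slack_webhook_url": "https://example.com/hook"}}, fns)
```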
## Troubleshooting
### Common issues
#### Issue: Notification not delivered
**Solution**: Verify webhook URLs are correct and accessible. Check Twilio credentials and phone number formats. Review logs for specific error messages.
#### Issue: Authentication errors
**Solution**: Ensure Twilio credentials are set via environment variables or request parameters. Verify webhook URLs have proper authentication if required.
#### Issue: Rate limiting
**Solution**: Plugin includes built-in retry logic with exponential backoff. Consider implementing client-side rate limiting for high-frequency notifications.
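The retry-with-exponential-backoff pattern looks roughly like this (a generic sketch; the plugin's actual retry counts and delays may differ):

```python
import time

def send_with_backoff(send, max_retries=3, base_delay=0.5):
    """Retry a send callable, doubling the delay after each failure."""
    for attempt in range(max_retries):
        try:
            return send()
        except Exception:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # 0.5s, 1s, 2s, ...

attempts = []

def flaky():
    """Fails twice, then succeeds."""
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("temporary failure")
    return "delivered"

result = send_with_backoff(flaky, base_delay=0.01)
```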
### Environment variables
For security, set Twilio credentials as environment variables:

```bash
export TWILIO_SID=your_account_sid
export TWILIO_TOKEN=your_auth_token
```
### Viewing logs
Check processing logs in the InfluxDB system tables:
```bash
influxdb3 query --database YOUR_DATABASE "SELECT * FROM system.processing_engine_logs WHERE log_text LIKE '%notifier%' ORDER BY event_time DESC LIMIT 10"
```
## Report an issue
For plugin issues, see the Plugins repository [issues page](https://github.com/influxdata/influxdb3_plugins/issues).
## Find support for {{% product-name %}}
The [InfluxDB Discord server](https://discord.gg/9zaNCW2PRT) is the best place to find support for {{% product-name %}}.
For other InfluxDB versions, see the [Support and feedback](#bug-reports-and-feedback) options.

The Prophet Forecasting Plugin enables time series forecasting for data in {{% product-name %}} using Facebook's Prophet library. Generate predictions for future data points based on historical patterns, including seasonality, trends, and custom events. Supports both scheduled batch forecasting and on-demand HTTP-triggered forecasts with model persistence and validation capabilities.
- **Model persistence**: Save and reuse trained models for consistent predictions
- **Forecast validation**: Built-in accuracy assessment using Mean Squared Relative Error (MSRE)
- **Holiday support**: Built-in holiday calendars and custom holiday configuration
- **Advanced seasonality**: Configurable seasonality modes and changepoint detection
- **Flexible time intervals**: Support for seconds, minutes, hours, days, weeks, months, quarters, and years
## Configuration
Plugin parameters may be specified as key-value pairs in the `--trigger-arguments` flag (CLI) or in the `trigger_arguments` field (API) when creating a trigger. Some plugins support TOML configuration files, which can be specified using the plugin's `config_file_path` parameter.
If a plugin supports multiple trigger specifications, some parameters may depend on the trigger specification that you use.
### Plugin metadata
This plugin includes a JSON metadata schema in its docstring that defines supported trigger types and configuration parameters. This metadata enables the [InfluxDB 3 Explorer](https://docs.influxdata.com/influxdb3/explorer/) UI to display and configure the plugin.
### Scheduled trigger parameters
Set these parameters with `--trigger-arguments` when creating a scheduled trigger:
| Parameter | Type | Default | Description |
|----------------------|--------|----------|-------------------------------------------------------------------|
| `measurement` | string | required | Source measurement containing historical data |
| `field` | string | required | Field name to forecast |
| `window` | string | required | Historical data window. Format: `<number><unit>` (for example, "30d") |
| `forecast_horizont` | string | required | Forecast duration. Format: `<number><unit>` (for example, "2d") |
| `tag_values` | string | required | Dot-separated tag filters (for example, "region:us-west.device:sensor1") |
| `target_measurement` | string | required | Destination measurement for forecast results |
| `model_mode` | string | required | Operation mode: "train" or "predict" |
| `unique_suffix` | string | required | Unique model identifier for versioning |
### HTTP request parameters
Send these parameters as JSON in the HTTP POST request body:
| Parameter | Type | Default | Description |
|----------------------|--------|----------|----------------------------------------------------------|
| `measurement` | string | required | Source measurement containing historical data |
| `field` | string | required | Field name to forecast |
| `forecast_horizont` | string | required | Forecast duration. Format: `<number><unit>` (for example, "7d") |
| `tag_values` | object | required | Tag filters as JSON object (for example, {"region":"us-west"}) |
| `target_measurement` | string | required | Destination measurement for forecast results |
| `unique_suffix` | string | required | Unique model identifier for versioning |
| `start_time` | string | required | Historical window start (ISO 8601 format) |
| `end_time` | string | required | Historical window end (ISO 8601 format) |
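For illustration, a request body with the required HTTP parameters might be assembled like this (all values are hypothetical):

```python
import json

payload = {
    "measurement": "temperature",
    "field": "value",
    "forecast_horizont": "7d",
    "tag_values": {"region": "us-west"},
    "target_measurement": "temperature_forecast",
    "unique_suffix": "v1",
    "start_time": "2025-01-01T00:00:00Z",
    "end_time": "2025-01-31T00:00:00Z",
}

# Serialized JSON body for the HTTP POST request
body = json.dumps(payload)
```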
### Advanced parameters
| Parameter | Type | Default | Description |
|---------------------------|--------------|------------|----------------------------------------------------------|
| `seasonality_mode` | string | "additive" | Prophet seasonality mode: "additive" or "multiplicative" |
| `changepoint_prior_scale` | number | 0.05 | Flexibility of trend changepoints |
| `changepoints` | string/array | none | Changepoint dates (ISO format) |
| `holiday_date_list` | string/array | none | Custom holiday dates (ISO format) |
| `holiday_names` | string/array | none | Holiday names corresponding to dates |
| `holiday_country_names` | string/array | none | Country codes for built-in holidays |
| `inferred_freq` | string | auto | Manual frequency specification (for example, "1D", "1H") |
| `validation_window` | string | "0s" | Validation period duration |
| `msre_threshold` | number | infinity | Maximum acceptable Mean Squared Relative Error |
| `target_database` | string | current | Database for forecast storage |
| `save_mode` | string | "false" | Whether to save/load models (HTTP only) |
### Notification parameters
| Parameter | Type | Default | Description |
|------------------------|--------|----------|-------------------------------------|
| `is_sending_alert` | string | "false" | Enable alerts on validation failure |
| `notification_text` | string | template | Custom alert message template |
| `senders` | string | none | Dot-separated notification channels |
| `notification_path` | string | "notify" | Notification endpoint path |
| `influxdb3_auth_token` | string | env var | Authentication token |
### TOML configuration
| Parameter | Type | Default | Description |
|--------------------|--------|---------|----------------------------------------------------------------------------------|
| `config_file_path` | string | none | TOML config file path relative to `PLUGIN_DIR` (required for TOML configuration) |
*To use a TOML configuration file, set the `PLUGIN_DIR` environment variable and specify the `config_file_path` in the trigger arguments.* This is in addition to the `--plugin-dir` flag when starting {{% product-name %}}.
#### Example TOML configuration
[prophet_forecasting_scheduler.toml](https://github.com/influxdata/influxdb3_plugins/blob/master/influxdata/prophet_forecasting/prophet_forecasting_scheduler.toml)
For more information on using TOML configuration files, see the Using TOML Configuration Files section in the [influxdb3_plugins/README.md](https://github.com/influxdata/influxdb3_plugins/blob/master/README.md).
## Software Requirements
- **{{% product-name %}}**: with the Processing Engine enabled.
- **Python packages**:
- `pandas` (for data manipulation)
- `numpy` (for numerical operations)
- `requests` (for HTTP requests)
- `prophet` (for time series forecasting)
- **Notification Sender Plugin** *(optional)*: Required if using the `senders` parameter. See the [influxdata/notifier plugin](../notifier/README.md).
### Installation steps
1. Start {{% product-name %}} with the Processing Engine enabled (`--plugin-dir /path/to/plugins`):
```bash
influxdb3 serve \
--node-id node0 \
--object-store file \
--data-dir ~/.influxdb3 \
--plugin-dir ~/.plugins
```
2. Install required Python packages:
```bash
influxdb3 install package pandas
influxdb3 install package numpy
influxdb3 install package requests
influxdb3 install package prophet
```
3. *(Optional)* For notifications, install the [influxdata/notifier plugin](../notifier/README.md) and create an HTTP trigger for it.
## Trigger setup
### Scheduled trigger
Create a trigger for periodic forecasting:
```bash
influxdb3 create trigger \
--database mydb \
--path "gh:influxdata/prophet_forecasting/prophet_forecasting.py" \
--trigger-spec "every:1d" \
--trigger-arguments "measurement=temperature,field=value,window=30d,forecast_horizont=2d,tag_values=region:us-west.device:sensor1,target_measurement=temperature_forecast,model_mode=train,unique_suffix=20250619_v1" \
prophet_forecast_trigger
```
### HTTP trigger
Create a trigger for on-demand forecasting:
```bash
influxdb3 create trigger \
--database mydb \
--path "gh:influxdata/prophet_forecasting/prophet_forecasting.py" \
--trigger-spec "request:forecast" \
prophet_forecast_http_trigger
```
### Enable triggers
```bash
influxdb3 enable trigger --database mydb prophet_forecast_trigger
influxdb3 enable trigger --database mydb prophet_forecast_http_trigger
```
## Examples
### Scheduled forecasting
Write historical data and create a forecast:
```bash
# Write historical temperature data
influxdb3 write \
--database mydb \
"temperature,region=us-west,device=sensor1 value=22.5"
# Create and enable the trigger
influxdb3 create trigger \
--database mydb \
--path "gh:influxdata/prophet_forecasting/prophet_forecasting.py" \
--trigger-spec "every:1d" \
--trigger-arguments "measurement=temperature,field=value,window=30d,forecast_horizont=2d,tag_values=region:us-west.device:sensor1,target_measurement=temperature_forecast,model_mode=train,unique_suffix=v1" \
prophet_forecast
influxdb3 enable trigger --database mydb prophet_forecast
# Query forecast results (after trigger runs)
influxdb3 query \
--database mydb \
"SELECT time, forecast, yhat_lower, yhat_upper FROM temperature_forecast ORDER BY time DESC LIMIT 5"
```
**Expected output**
```
+----------------------+----------+------------+------------+
| time                 | forecast | yhat_lower | yhat_upper |
+----------------------+----------+------------+------------+
| 2025-06-21T00:00:00Z | 23.2     | 21.8       | 24.6       |
| 2025-06-20T00:00:00Z | 22.9     | 21.5       | 24.3       |
+----------------------+----------+------------+------------+
```
### On-demand HTTP forecasting
Example HTTP request for on-demand forecasting:
```bash
curl -X POST http://localhost:8181/api/v3/engine/forecast \
  --header "Authorization: Bearer $INFLUXDB3_AUTH_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
    "measurement": "temperature",
    "field": "value",
    "forecast_horizont": "2d",
    "start_time": "2025-06-01T00:00:00+00:00",
    "end_time": "2025-06-19T00:00:00+00:00",
    "target_measurement": "temperature_forecast",
    "msre_threshold": 0.05
  }'
```
### Advanced forecasting with holidays
```bash
curl -X POST http://localhost:8181/api/v3/engine/forecast \
  --header "Authorization: Bearer $INFLUXDB3_AUTH_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
    "measurement": "temperature",
    "field": "value",
    "forecast_horizont": "7d",
    "target_measurement": "temperature_forecast",
    "holiday_country_names": "US",
    "inferred_freq": "1D"
  }'
```
## Features
- **Dual trigger modes**: Support for both scheduled batch forecasting and on-demand HTTP requests
- **Model persistence**: Save and reuse trained models for consistent predictions
- **Forecast validation**: Built-in accuracy assessment using Mean Squared Relative Error (MSRE)
- **Holiday support**: Built-in holiday calendars and custom holiday configuration
- **Advanced seasonality**: Configurable seasonality modes and changepoint detection
- **Notification integration**: Alert delivery for validation failures via multiple channels
- **Flexible time intervals**: Support for seconds, minutes, hours, days, weeks, months, quarters, and years
## Output data structure
Forecast results are written to the target measurement with the following structure:
### Tags
- `model_version`: Model identifier from unique_suffix parameter
- Additional tags from original measurement query filters
### Fields
- `forecast`: Predicted value (yhat from Prophet model)
- `yhat_lower`: Lower bound of confidence interval
- `yhat_upper`: Upper bound of confidence interval
- `run_time`: Forecast execution timestamp (ISO 8601 format)
### Timestamp
- `time`: Forecast timestamp in nanoseconds
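As a sketch of this layout, the following stdlib-only Python renders one hypothetical forecast row as an InfluxDB line-protocol point (the row values and the helper name are illustrative, not plugin internals):

```python
# Hypothetical forecast row matching the tag/field/timestamp layout above.
point = {
    "measurement": "temperature_forecast",
    "tags": {"model_version": "20250619_v1", "region": "us-west"},
    "fields": {"forecast": 23.2, "yhat_lower": 21.8, "yhat_upper": 24.6,
               "run_time": "2025-06-20T00:00:00Z"},
    "time_ns": 1750464000000000000,
}

def to_line_protocol(p: dict) -> str:
    """Render one point as an InfluxDB line-protocol string."""
    tags = ",".join(f"{k}={v}" for k, v in p["tags"].items())

    def fmt(v):
        # String field values are quoted in line protocol; numbers are not.
        return f'"{v}"' if isinstance(v, str) else str(v)

    fields = ",".join(f"{k}={fmt(v)}" for k, v in p["fields"].items())
    return f'{p["measurement"]},{tags} {fields} {p["time_ns"]}'
```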
## Code overview
### Files
- `prophet_forecasting.py`: The main plugin code containing handlers for scheduled and HTTP triggers
- `prophet_forecasting_scheduler.toml`: Example TOML configuration file for scheduled triggers
### Logging
Logs are stored in the trigger's database in the `system.processing_engine_logs` table. To view logs:
```bash
influxdb3 query --database YOUR_DATABASE "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'prophet_forecast_trigger'"
```
### Main functions
#### `process_scheduled_call(influxdb3_local, call_time, args)`
Handles scheduled forecasting tasks. Queries historical data, trains or loads Prophet model, generates forecasts, and writes results.
Key operations:
1. Parses configuration from arguments or TOML file
2. Queries historical data within specified window
3. Trains Prophet model or loads existing model
4. Generates forecasts for specified horizon
5. Optionally validates against actual data and sends alerts
#### `process_http_request(influxdb3_local, request_body, args)`
Handles on-demand forecast requests via HTTP. Supports backfill operations with configurable time ranges.
## Troubleshooting
### Common issues
#### Issue: Model training failures
**Solution**: Ensure sufficient historical data points for the specified window. Verify data contains required time column and forecast field. Check for data gaps that might affect frequency inference. Set `inferred_freq` manually if automatic detection fails.
#### Issue: Validation failures
**Solution**: Review MSRE threshold settings - values too low may cause frequent failures. Ensure validation window provides sufficient data for comparison. Check that validation data aligns temporally with forecast period.
#### Issue: HTTP trigger issues
**Solution**: Verify JSON request body format matches expected schema. Check authentication tokens and database permissions. Ensure start_time and end_time are in valid ISO 8601 format with timezone.
#### Issue: Model persistence problems
**Solution**: Verify plugin directory permissions for model storage. Check disk space availability in plugin directory. Ensure unique_suffix values don't conflict between different model versions.
### Model storage
### Time format support
Supported time units for window, forecast_horizont, and validation_window:
- `s` (seconds), `min` (minutes), `h` (hours)
- `d` (days), `w` (weeks)
- `m` (months ≈30.42 days), `q` (quarters ≈91.25 days), `y` (years = 365 days)
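Under these unit conventions, a duration string can be converted to seconds with a small stdlib-only sketch (the helper name and unit table are illustrative, not part of the plugin):

```python
import re

# Approximate unit lengths in seconds, matching the conventions above
# (months ≈ 30.42 days, quarters ≈ 91.25 days, years = 365 days).
UNIT_SECONDS = {
    "s": 1,
    "min": 60,
    "h": 3600,
    "d": 86400,
    "w": 7 * 86400,
    "m": int(30.42 * 86400),
    "q": int(91.25 * 86400),
    "y": 365 * 86400,
}

def duration_to_seconds(duration: str) -> int:
    """Parse strings like "30d", "10min", or "2q" into seconds."""
    match = re.fullmatch(r"(\d+)(s|min|h|d|w|m|q|y)", duration)
    if match is None:
        raise ValueError(f"unsupported duration: {duration!r}")
    value, unit = match.groups()
    return int(value) * UNIT_SECONDS[unit]
```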
### Validation process
When validation_window is set:
1. Training data: `current_time - window` to `current_time - validation_window`
2. Validation data: `current_time - validation_window` to `current_time`
3. MSRE calculation: `mean((actual - predicted)² / actual²)`
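The MSRE acceptance check can be sketched in a few lines of stdlib Python (the function name is illustrative):

```python
from statistics import mean

def msre(actual, predicted):
    """Mean Squared Relative Error: mean((actual - predicted)^2 / actual^2)."""
    return mean((a - p) ** 2 / a ** 2 for a, p in zip(actual, predicted))

# A forecast passes validation when msre(actual, predicted) <= msre_threshold.
```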
For plugin issues, see the Plugins repository [issues page](https://github.com/influxdata/influxdb3_plugins/issues).
## Find support for {{% product-name %}}
The [InfluxDB Discord server](https://discord.gg/9zaNCW2PRT) is the best place to find support for InfluxDB 3 Core and InfluxDB 3 Enterprise.
For other InfluxDB versions, see the [Support and feedback](#bug-reports-and-feedback) options.
The State Change Plugin provides comprehensive field monitoring and threshold detection for {{% product-name %}} data streams. Detect field value changes, monitor threshold conditions, and trigger notifications when specified criteria are met. Supports both scheduled batch monitoring and real-time data write monitoring with configurable stability checks and multi-channel alerts.
## Configuration
Plugin parameters may be specified as key-value pairs in the `--trigger-arguments` flag (CLI) or in the `trigger_arguments` field (API) when creating a trigger. Some plugins support TOML configuration files, which can be specified using the plugin's `config_file_path` parameter.
If a plugin supports multiple trigger specifications, some parameters may depend on the trigger specification that you use.
### Plugin metadata
This plugin includes a JSON metadata schema in its docstring that defines supported trigger types and configuration parameters. This metadata enables the [InfluxDB 3 Explorer](https://docs.influxdata.com/influxdb3/explorer/) UI to display and configure the plugin.
### Required parameters
| Parameter | Type | Default | Description |
|----------------------|--------|----------|----------------------------------------------------------------------------------------------|
| `measurement` | string | required | Measurement to monitor for field changes |
| `field_change_count` | string | required | Dot-separated field thresholds (for example, "temp:3.load:2"). Supports count-based conditions |
| `senders` | string | required | Dot-separated notification channels with multi-channel alert support (Slack, Discord, etc.) |
| `window` | string | required | Time window for analysis. Format: `<number><unit>` (for example, "10m", "1h") |
### Data write trigger parameters
| Parameter | Type | Default | Description |
|--------------------|--------|----------|----------------------------------------------------------------------------------------------------------------|
| `measurement` | string | required | Measurement to monitor for threshold conditions |
| `field_thresholds` | string | required | Flexible threshold conditions with count-based and duration-based support (for example, "temp:30:10@status:ok:1h") |
| `senders` | string | required | Dot-separated notification channels with multi-channel alert support (Slack, Discord, HTTP, SMS, WhatsApp) |
### Notification parameters
| Parameter | Type | Default | Description |
|---------------------------|--------|----------|-----------------------------------------------------------------------------------------|
| `influxdb3_auth_token` | string | env var | {{% product-name %}} API token with environment variable support for credential management |
| `notification_text` | string | template | Customizable message template for scheduled notifications with dynamic variables |
| `notification_count_text` | string | template | Customizable message template for count-based notifications with dynamic variables |
| `notification_time_text` | string | template | Customizable message template for time-based notifications with dynamic variables |
| `notification_path` | string | "notify" | Notification endpoint path |
| `port_override` | number | 8181 | InfluxDB port override |
### Advanced parameters
| Parameter | Type | Default | Description |
|-----------------------|--------|---------|-------------------------------------------------------------------------------------------|
| `state_change_window` | number | 1 | Recent values to check for stability (configurable state change detection to reduce noise) |
| `state_change_count` | number | 1 | Max changes allowed within stability window (configurable state change detection) |
### TOML configuration
| Parameter          | Type   | Default | Description                                                                      |
|--------------------|--------|---------|----------------------------------------------------------------------------------|
| `config_file_path` | string | none | TOML config file path relative to `PLUGIN_DIR` (required for TOML configuration) |
*To use a TOML configuration file, set the `PLUGIN_DIR` environment variable and specify the `config_file_path` in the trigger arguments.* This is in addition to the `--plugin-dir` flag when starting {{% product-name %}}.
Example TOML configuration files provided:
- [state_change_config_scheduler.toml](https://github.com/influxdata/influxdb3_plugins/blob/master/influxdata/state_change/state_change_config_scheduler.toml) - for scheduled triggers
- [state_change_config_data_writes.toml](https://github.com/influxdata/influxdb3_plugins/blob/master/influxdata/state_change/state_change_config_data_writes.toml) - for data write triggers
For more information on using TOML configuration files, see the Using TOML Configuration Files section in the [influxdb3_plugins/README.md](https://github.com/influxdata/influxdb3_plugins/blob/master/README.md).
### Channel-specific configuration
Notification channels require additional parameters based on the sender type (same as the [influxdata/notifier plugin](../notifier/README.md)).
## Schema requirement
The plugin assumes that the table schema is already defined in the database, as it relies on this schema to retrieve field and tag names required for processing.
## Software Requirements
- **{{% product-name %}}**: with the Processing Engine enabled.
- **Notification Sender Plugin for {{% product-name %}}**: Required for sending notifications. See the [influxdata/notifier plugin](../notifier/README.md).
- **Python packages**:
- `requests` (for HTTP notifications)
### Installation steps
1. Start {{% product-name %}} with the Processing Engine enabled (`--plugin-dir /path/to/plugins`):
```bash
influxdb3 serve \
--node-id node0 \
--object-store file \
--data-dir ~/.influxdb3 \
--plugin-dir ~/.plugins
```
2. Install required Python packages:
```bash
influxdb3 install package requests
```
3. *Optional*: For notifications, install and configure the [influxdata/notifier plugin](../notifier/README.md)
## Trigger setup
### Scheduled trigger
Create a trigger for periodic field change monitoring:
```bash
influxdb3 create trigger \
--database mydb \
--path "gh:influxdata/state_change/state_change_check_plugin.py" \
--trigger-spec "every:10m" \
--trigger-arguments "measurement=cpu,field_change_count=temp:3.load:2,window=10m,senders=slack,slack_webhook_url=$SLACK_WEBHOOK_URL" \
state_change_scheduler
```
Set `SLACK_WEBHOOK_URL` to your Slack incoming webhook URL.
### Data write trigger
Create a trigger for real-time threshold monitoring:
```bash
influxdb3 create trigger \
--database mydb \
--path "gh:influxdata/state_change/state_change_check_plugin.py" \
--trigger-spec "all_tables" \
--trigger-arguments "measurement=cpu,field_thresholds=temp:30:10@status:ok:1h,senders=slack,slack_webhook_url=$SLACK_WEBHOOK_URL" \
state_change_datawrite
```
Set `SLACK_WEBHOOK_URL` to your Slack incoming webhook URL.
### Enable triggers
```bash
influxdb3 enable trigger --database mydb state_change_scheduler
influxdb3 enable trigger --database mydb state_change_datawrite
```
## Examples
### Scheduled field change monitoring
Monitor field changes over a time window and alert when thresholds are exceeded:
```bash
# Write test data with changing values (7 writes = 6 changes)
influxdb3 write \
--database sensors \
"temperature,location=office value=22.5"
influxdb3 write \
--database sensors \
"temperature,location=office value=25.0"
influxdb3 write \
--database sensors \
"temperature,location=office value=22.8"
influxdb3 write \
--database sensors \
"temperature,location=office value=26.5"
influxdb3 write \
--database sensors \
"temperature,location=office value=23.0"
influxdb3 write \
--database sensors \
"temperature,location=office value=27.2"
influxdb3 write \
--database sensors \
"temperature,location=office value=24.0"
# Create and enable the trigger
influxdb3 create trigger \
--database sensors \
--path "gh:influxdata/state_change/state_change_check_plugin.py" \
--trigger-spec "every:15m" \
--trigger-arguments "measurement=temperature,field_change_count=value:5,window=1h,senders=slack,slack_webhook_url=$SLACK_WEBHOOK_URL" \
temp_change_monitor
influxdb3 enable trigger --database sensors temp_change_monitor
```
Set `SLACK_WEBHOOK_URL` to your Slack incoming webhook URL.
**Expected output**
When the field changes more than 5 times within 1 hour, a notification is sent: "Temperature sensor value changed 6 times in 1h for tags location=office"
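The change count in this example can be reproduced with a short stdlib-only sketch (the function name is illustrative; the plugin evaluates roughly this per window):

```python
def count_changes(values):
    """Count consecutive value-to-value changes within the analysis window."""
    return sum(1 for prev, cur in zip(values, values[1:]) if prev != cur)

# The seven writes above produce six changes, exceeding the value:5 threshold.
readings = [22.5, 25.0, 22.8, 26.5, 23.0, 27.2, 24.0]
```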
### Advanced scheduled field change monitoring
Monitor field changes over a time window and alert when thresholds are exceeded:
```bash
influxdb3 create trigger \
--database sensors \
--path "gh:influxdata/state_change/state_change_check_plugin.py" \
--trigger-spec "every:15m" \
--trigger-arguments "measurement=temperature,field_change_count=value:5,window=1h,senders=slack,slack_webhook_url=$SLACK_WEBHOOK_URL,notification_text=Temperature sensor $field changed $changes times in $window for tags $tags" \
temp_change_monitor
```
Set `SLACK_WEBHOOK_URL` to your Slack incoming webhook URL.
### Real-time threshold detection
Monitor data writes for threshold conditions:
```bash
influxdb3 create trigger \
--database monitoring \
--path "gh:influxdata/state_change/state_change_check_plugin.py" \
--trigger-spec "all_tables" \
--trigger-arguments "measurement=system_metrics,field_thresholds=cpu_usage:80:5@memory_usage:90:10min,senders=discord,discord_webhook_url=$DISCORD_WEBHOOK_URL" \
system_threshold_monitor
```
Set `DISCORD_WEBHOOK_URL` to your Discord incoming webhook URL.
### Multi-condition monitoring
Monitor multiple fields with different threshold types:
```bash
influxdb3 create trigger \
--database application \
--path "gh:influxdata/state_change/state_change_check_plugin.py" \
--trigger-spec "all_tables" \
--trigger-arguments "measurement=app_health,field_thresholds=error_rate:0.05:3@response_time:500:30s@status:down:1,senders=slack.sms,slack_webhook_url=$SLACK_WEBHOOK_URL,twilio_from_number=+1234567890,twilio_to_number=+0987654321" \
app_health_monitor
```
Set `SLACK_WEBHOOK_URL` to your Slack incoming webhook URL.
## Code overview
### Files
- `state_change_check_plugin.py`: The main plugin code containing handlers for scheduled and data write triggers
- `state_change_config_scheduler.toml`: Example TOML configuration for scheduled triggers
- `state_change_config_data_writes.toml`: Example TOML configuration for data write triggers
### Logging
Logs are stored in the trigger's database in the `system.processing_engine_logs` table. To view logs:
```bash
influxdb3 query --database YOUR_DATABASE "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'state_change_scheduler'"
```
### Main functions
#### `process_scheduled_call(influxdb3_local, call_time, args)`
Handles scheduled field change monitoring. Queries data within the specified window and counts field value changes.
#### `process_writes(influxdb3_local, table_batches, args)`
Handles real-time threshold monitoring on data writes. Evaluates incoming data against configured thresholds.
## Troubleshooting
### Common issues
#### Issue: No notifications triggered
**Solution**: Verify notification channel configuration (webhook URLs, credentials). Check threshold values are appropriate for your data. Ensure the Notifier Plugin is installed and configured. Review plugin logs for error messages.
#### Issue: Too many notifications
**Solution**: Adjust `state_change_window` and `state_change_count` for stability filtering. Increase threshold values to reduce sensitivity. Consider longer monitoring windows for scheduled triggers.
#### Issue: Authentication errors
**Solution**: Set `INFLUXDB3_AUTH_TOKEN` environment variable. Verify token has appropriate database permissions. Check Twilio credentials for SMS/WhatsApp notifications.
### Field threshold formats
**Count-based thresholds**
- Format: `field_name:"value":count`
- Example: `temp:"30.5":10` (10 occurrences of temperature = 30.5)
**Time-based thresholds**
- Format: `field_name:"value":duration`
- Example: `status:"error":5min` (status = error for 5 minutes)
- Supported units: `s`, `min`, `h`, `d`, `w`
**Multiple conditions**
- Separate with `@`: `temp:"30":5@humidity:"high":10min`
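For illustration, the threshold grammar above can be parsed with a small stdlib sketch (the helper name and the returned dictionary shape are hypothetical, not plugin internals):

```python
import re

def parse_thresholds(spec: str):
    """Split 'field:"value":count_or_duration' conditions joined by '@'."""
    conditions = []
    for part in spec.split("@"):
        # Quotes around the value are optional, as in the examples above.
        field, value, limit = re.fullmatch(
            r'([^:]+):"?([^:"]+)"?:(\w+)', part
        ).groups()
        # A trailing unit (s, min, h, d, w) marks a duration-based condition;
        # a bare integer marks a count-based one.
        kind = "duration" if re.fullmatch(r"\d+(s|min|h|d|w)", limit) else "count"
        conditions.append({"field": field, "value": value, kind: limit})
    return conditions
```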
### Message template variables
**Scheduled notifications**
- `$table`: Measurement name
- `$field`: Field name
- `$changes`: Number of changes detected
- `$window`: Time window for analysis
- `$tags`: Tag values
**Data write notifications**
- `$table`: Measurement name
- `$field`: Field name
- `$value`: Threshold value
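Because the variables use `$name` syntax, Python's `string.Template` illustrates how a template such as `notification_text` expands (the template text below is an example, not the plugin's default):

```python
from string import Template

template = Template(
    "State change: $field on $table changed $changes times in $window for tags $tags"
)
# safe_substitute leaves any unknown $variables in place instead of raising.
message = template.safe_substitute(
    table="temperature",
    field="value",
    changes=6,
    window="1h",
    tags="location=office",
)
```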
For plugin issues, see the Plugins repository [issues page](https://github.com/influxdata/influxdb3_plugins/issues).
## Find support for {{% product-name %}}
The [InfluxDB Discord server](https://discord.gg/9zaNCW2PRT) is the best place to find support for InfluxDB 3 Core and InfluxDB 3 Enterprise.
For other InfluxDB versions, see the [Support and feedback](#bug-reports-and-feedback) options.
The ADTK Anomaly Detector Plugin provides advanced time series anomaly detection for {{% product-name %}} using the ADTK (Anomaly Detection Toolkit) library. Apply statistical and machine learning-based detection methods to identify outliers, level shifts, volatility changes, and seasonal anomalies in your data. Features consensus-based detection requiring multiple detectors to agree before triggering alerts, reducing false positives.
## Configuration
Plugin parameters may be specified as key-value pairs in the `--trigger-arguments` flag (CLI) or in the `trigger_arguments` field (API) when creating a trigger. Some plugins support TOML configuration files, which can be specified using the plugin's `config_file_path` parameter.
If a plugin supports multiple trigger specifications, some parameters may depend on the trigger specification that you use.
### Plugin metadata
This plugin includes a JSON metadata schema in its docstring that defines supported trigger types and configuration parameters. This metadata enables the [InfluxDB 3 Explorer](https://docs.influxdata.com/influxdb3/explorer/) UI to display and configure the plugin.
### Required parameters
| Parameter | Type | Default | Description |
|-------------------|--------|----------|---------------------------------------------------------------------------------------------|
| `measurement` | string | required | Measurement to analyze for anomalies |
| `field` | string | required | Numeric field to evaluate |
| `detectors` | string | required | Dot-separated list of advanced ADTK detectors for different anomaly types |
| `detector_params` | string | required | Base64-encoded JSON parameters for each detector |
| `window` | string | required | Data analysis window with flexible scheduling. Format: `<number><unit>` (for example, "1h", "30m") |
| `senders` | string | required | Dot-separated notification channels with multi-channel notification support |
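The `detector_params` value is Base64-encoded JSON. Besides the shell `base64` command, it can be produced with Python's standard library; a small sketch (the detector name and thresholds below are examples, not defaults):

```python
import base64
import json

# Example detector configuration; keys must match ADTK detector names
params = {"QuantileAD": {"low": 0.05, "high": 0.95}}

# Base64-encode the JSON for use as the detector_params trigger argument
encoded = base64.b64encode(json.dumps(params).encode("utf-8")).decode("ascii")
print(encoded)
```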
### Advanced parameters
| Parameter | Type | Default | Description |
|--------------------------|--------|---------|----------------------------------------------------------------------------------------------|
| `min_consensus` | number | 1 | Minimum detectors required to agree for consensus-based filtering to reduce false positives |
| `min_condition_duration` | string | "0s" | Minimum duration for configurable anomaly persistence before alerting |
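To illustrate how `min_consensus` and `min_condition_duration` interact, here is a hedged sketch in plain Python (the plugin's actual implementation is not shown here; the function name and data shapes are hypothetical): an anomaly is kept only when enough detectors agree at a timestamp and the agreement persists long enough.

```python
from datetime import datetime, timedelta

def consensus_anomalies(flags_by_detector, min_consensus=1,
                        min_duration=timedelta(0), step=timedelta(minutes=1)):
    """Illustrative only: keep timestamps where at least min_consensus
    detectors flag an anomaly and the run of consecutive flagged
    timestamps lasts at least min_duration."""
    # Count agreeing detectors per timestamp
    counts = {}
    for flags in flags_by_detector.values():
        for ts, is_anomaly in flags.items():
            counts[ts] = counts.get(ts, 0) + (1 if is_anomaly else 0)
    candidates = sorted(ts for ts, c in counts.items() if c >= min_consensus)

    # Keep only runs of consecutive candidates lasting min_duration
    result, run = [], []
    for ts in candidates:
        if run and ts - run[-1] > step:  # gap ends the current run
            if run[-1] - run[0] >= min_duration:
                result.extend(run)
            run = []
        run.append(ts)
    if run and run[-1] - run[0] >= min_duration:
        result.extend(run)
    return result

t0 = datetime(2024, 1, 1)
minute = timedelta(minutes=1)
flags = {
    "QuantileAD":   {t0: True, t0 + minute: True, t0 + 2 * minute: False},
    "LevelShiftAD": {t0: True, t0 + minute: True, t0 + 2 * minute: True},
}
print(consensus_anomalies(flags, min_consensus=2))
```

With `min_consensus=2`, only the first two timestamps qualify; raising `min_duration` above the run length would suppress the alert entirely.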
### Notification parameters
| Parameter | Type | Default | Description |
|------------------------|--------|----------|--------------------------------------------------------------------------|
| `influxdb3_auth_token` | string | env var | {{% product-name %}} API token |
| `notification_text` | string | template | Customizable notification template message with dynamic variables |
| `notification_path` | string | "notify" | Notification endpoint path |
| `port_override` | number | 8181 | InfluxDB port override |
### TOML configuration
| Parameter | Type | Default | Description |
|--------------------|--------|---------|----------------------------------------------------------------------------------|
| `config_file_path` | string | none | TOML config file path relative to `PLUGIN_DIR` (required for TOML configuration) |
*To use a TOML configuration file, set the `PLUGIN_DIR` environment variable and specify the `config_file_path` in the trigger arguments.* This is in addition to the `--plugin-dir` flag when starting {{% product-name %}}.
#### Example TOML configuration
[adtk_anomaly_config_scheduler.toml](https://github.com/influxdata/influxdb3_plugins/blob/master/influxdata/stateless_adtk_detector/adtk_anomaly_config_scheduler.toml)
For more information on using TOML configuration files, see the Using TOML Configuration Files section in the [influxdb3_plugins/README.md](https://github.com/influxdata/influxdb3_plugins/blob/master/README.md).
### Supported ADTK detectors
| Detector | Description | Required Parameters |
|------------------------|---------------------------------------|--------------------------|
| `GeneralizedESDTestAD` | Extreme Studentized Deviate test | `alpha` (optional) |
| `InterQuartileRangeAD` | Detects outliers using IQR method | None |
| `ThresholdAD` | Detects values above/below thresholds | `high`, `low` (optional) |
| `QuantileAD` | Detects outliers based on quantiles | `low`, `high` (optional) |
| `LevelShiftAD` | Detects sudden level changes | `window` (int) |
| `VolatilityShiftAD` | Detects volatility changes | `window` (int) |
| `PersistAD` | Detects persistent anomalous values | None |
| `SeasonalAD` | Detects seasonal pattern deviations | None |
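For intuition about what a detector such as `InterQuartileRangeAD` computes, the sketch below flags values outside `[Q1 - c*IQR, Q3 + c*IQR]` in plain Python (illustrative only; ADTK operates on pandas Series and its exact quantile method may differ, and `c=3.0` here is an assumption rather than a documented default):

```python
import statistics

def iqr_outliers(values, c=3.0):
    """Flag values outside [Q1 - c*IQR, Q3 + c*IQR] (illustrative sketch)."""
    q1, _, q3 = statistics.quantiles(values, n=4)  # quartiles
    iqr = q3 - q1
    low, high = q1 - c * iqr, q3 + c * iqr
    return [v for v in values if v < low or v > high]

data = [10, 11, 10, 12, 11, 10, 11, 95]  # 95 is an obvious outlier
print(iqr_outliers(data))  # [95]
```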
## Software Requirements
- **{{% product-name %}}**: with the Processing Engine enabled.
- **Python packages**:
- `adtk` (for anomaly detection)
- `pandas` (for data manipulation)
- `requests` (for HTTP notifications)
- **Notification Sender Plugin** *(optional)*: Required if using the `senders` parameter. See the [influxdata/notifier plugin](../notifier/README.md).
### Installation steps
1. Start {{% product-name %}} with the Processing Engine enabled (`--plugin-dir /path/to/plugins`):
```bash
influxdb3 serve \
--node-id node0 \
--object-store file \
--data-dir ~/.influxdb3 \
--plugin-dir ~/.plugins
```
2. Install required Python packages:
```bash
influxdb3 install package requests
influxdb3 install package adtk
influxdb3 install package pandas
```
3. *(Optional)* For notifications, install the [influxdata/notifier plugin](../notifier/README.md) and create an HTTP trigger for it.
## Trigger setup
### Scheduled trigger
Create a scheduled trigger for anomaly detection:
```bash
influxdb3 create trigger \
--database mydb \
--path "gh:influxdata/stateless_adtk_detector/adtk_anomaly_detection_plugin.py" \
--trigger-spec "every:10m" \
--trigger-arguments "measurement=cpu,field=usage,detectors=QuantileAD.LevelShiftAD,detector_params=eyJRdWFudGlsZUFKIjogeyJsb3ciOiAwLjA1LCAiaGlnaCI6IDAuOTV9LCAiTGV2ZWxTaGlmdEFKIjogeyJ3aW5kb3ciOiA1fX0=,window=10m,senders=slack,slack_webhook_url=$SLACK_WEBHOOK_URL" \
anomaly_detector
```
Set `SLACK_WEBHOOK_URL` to your Slack incoming webhook URL.
### Enable trigger
```bash
influxdb3 enable trigger --database mydb anomaly_detector
```
## Examples

### Example 1: Quantile-based detection

Detect outliers using quantile-based detection. This plugin analyzes existing time series data and sends notifications when anomalies are detected.
```bash
# Base64 encode detector parameters: {"QuantileAD": {"low": 0.05, "high": 0.95}}
echo '{"QuantileAD": {"low": 0.05, "high": 0.95}}' | base64
influxdb3 create trigger \
--database sensors \
--path "gh:influxdata/stateless_adtk_detector/adtk_anomaly_detection_plugin.py" \
--trigger-spec "every:5m" \
--trigger-arguments "measurement=temperature,field=value,detectors=QuantileAD,detector_params=eyJRdWFudGlsZUFKIjogeyJsb3ciOiAwLjA1LCAiaGlnaCI6IDAuOTV9fQ==,window=1h,senders=slack,slack_webhook_url=$SLACK_WEBHOOK_URL" \
temp_anomaly_detector
```
Set `SLACK_WEBHOOK_URL` to your Slack incoming webhook URL.
### Example 2: Multi-detector consensus
Use multiple detectors with consensus requirement:
```bash
# Base64 encode detector parameters: {"QuantileAD": {"low": 0.1, "high": 0.9}, "LevelShiftAD": {"window": 10}}
echo '{"QuantileAD": {"low": 0.1, "high": 0.9}, "LevelShiftAD": {"window": 10}}' | base64
influxdb3 create trigger \
--database monitoring \
--path "gh:influxdata/stateless_adtk_detector/adtk_anomaly_detection_plugin.py" \
--trigger-spec "every:15m" \
--trigger-arguments "measurement=cpu_metrics,field=utilization,detectors=QuantileAD.LevelShiftAD,detector_params=eyJRdWFudGlsZUFEIjogeyJsb3ciOiAwLjEsICJoaWdoIjogMC45fSwgIkxldmVsU2hpZnRBRCI6IHsid2luZG93IjogMTB9fQ==,min_consensus=2,window=30m,senders=discord,discord_webhook_url=$DISCORD_WEBHOOK_URL" \
cpu_consensus_detector
```
Set `DISCORD_WEBHOOK_URL` to your Discord incoming webhook URL.
### Volatility shift detection
```bash
# Base64 encode detector parameters: {"VolatilityShiftAD": {"window": 20}}
echo '{"VolatilityShiftAD": {"window": 20}}' | base64
influxdb3 create trigger \
--database trading \
--path "gh:influxdata/stateless_adtk_detector/adtk_anomaly_detection_plugin.py" \
--trigger-spec "every:1m" \
--trigger-arguments "measurement=stock_prices,field=price,detectors=VolatilityShiftAD,detector_params=eyJWb2xhdGlsaXR5U2hpZnRBRCI6IHsid2luZG93IjogMjB9fQ==,window=1h,min_condition_duration=5m,senders=sms,twilio_from_number=+1234567890,twilio_to_number=+0987654321" \
volatility_detector
```
## Code overview
### Files
- `adtk_anomaly_detection_plugin.py`: The main plugin code containing the scheduled handler for anomaly detection
- `adtk_anomaly_config_scheduler.toml`: Example TOML configuration file
### Logging
Logs are stored in the trigger's database in the `system.processing_engine_logs` table. To view logs:
```bash
influxdb3 query --database YOUR_DATABASE "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'anomaly_detector'"
```
### Main functions
#### `process_scheduled_call(influxdb3_local, call_time, args)`
Handles scheduled anomaly detection tasks. Queries data within the specified window, applies ADTK detectors, and sends notifications for detected anomalies.
Key operations:
1. Parses configuration and decodes detector parameters
2. Queries data from source measurement
3. Applies configured ADTK detectors
4. Evaluates consensus across detectors
5. Sends notifications when anomalies are confirmed
## Troubleshooting
### Common issues
#### Issue: Detector parameter encoding errors
**Solution**: Ensure detector_params is valid Base64-encoded JSON. Use command line Base64 encoding: `echo '{"QuantileAD": {"low": 0.05}}' | base64`. Verify JSON structure matches detector requirements.
#### Issue: False positive notifications
**Solution**: Increase `min_consensus` to require more detectors to agree. Add `min_condition_duration` to require anomalies to persist. Adjust detector-specific thresholds in `detector_params`.
#### Issue: Missing dependencies
**Solution**: Install required packages: `adtk`, `pandas`, `requests`. Ensure the Notifier Plugin is installed for notifications.
#### Issue: Data quality issues
**Solution**: Verify sufficient data points in the specified window. Check for null values or data gaps that affect detection. Ensure field contains numeric data suitable for analysis.
### Base64 parameter encoding
```bash
# Multi-detector parameters
echo '{"QuantileAD": {"low": 0.1, "high": 0.9}, "LevelShiftAD": {"window": 15}}' | base64 -w 0
# Threshold detector
echo '{"ThresholdAD": {"high": 100, "low": 10}}' | base64 -w 0
```
### Message template variables
Available variables for notification templates:
- `$table`: Measurement name
- `$field`: Field name with anomaly
- `$value`: Anomalous value
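The plugin's own substitution mechanism is not shown here, but the `$variable` syntax above matches Python's `string.Template`, which can be used to sketch how a custom `notification_text` might expand (the message text is hypothetical):

```python
from string import Template

# Hypothetical notification template using the documented variables
template = Template("Anomaly detected in $table.$field: value $value")
message = template.safe_substitute(table="temperature", field="value", value=42.5)
print(message)  # Anomaly detected in temperature.value: value 42.5
```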
## Report an issue

For plugin issues, see the Plugins repository [issues page](https://github.com/influxdata/influxdb3_plugins/issues).
## Find support for {{% product-name %}}
The [InfluxDB Discord server](https://discord.gg/9zaNCW2PRT) is the best place to find support for InfluxDB 3 Core and InfluxDB 3 Enterprise.
For other InfluxDB versions, see the [Support and feedback](#bug-reports-and-feedback) options.

The System Metrics Plugin provides comprehensive system monitoring capabilities for {{% product-name %}}, collecting CPU, memory, disk, and network metrics from the host system. Monitor detailed performance insights including per-core CPU statistics, memory usage breakdowns, disk I/O performance, and network interface statistics. It features configurable metric collection with robust error handling and retry logic for reliable monitoring.
## Configuration
Plugin parameters may be specified as key-value pairs in the `--trigger-arguments` flag (CLI) or in the `trigger_arguments` field (API) when creating a trigger. Some plugins support TOML configuration files, which can be specified using the plugin's `config_file_path` parameter.

If a plugin supports multiple trigger specifications, some parameters may depend on the trigger specification that you use.

### Plugin metadata

This plugin includes a JSON metadata schema in its docstring that defines supported trigger types and configuration parameters. This metadata enables the [InfluxDB 3 Explorer](https://docs.influxdata.com/influxdb3/explorer/) UI to display and configure the plugin.

### Optional parameters
| Parameter | Type | Default | Description |
|-------------------|---------|-------------|--------------------------------------------------------------------------------|
| `include_network` | boolean | `true` | Include network metrics collection (interface statistics and error counts) |
| `max_retries` | integer | `3` | Maximum retry attempts on failure with graceful error handling |
*Note: This plugin has no required parameters. All parameters have sensible defaults.*
### TOML configuration
| Parameter | Type | Default | Description |
|--------------------|--------|---------|----------------------------------------------------------------------------------|
| `config_file_path` | string | none | TOML config file path relative to `PLUGIN_DIR` (required for TOML configuration) |
*To use a TOML configuration file, set the `PLUGIN_DIR` environment variable and specify the `config_file_path` in the trigger arguments.* This is in addition to the `--plugin-dir` flag when starting {{% product-name %}}.
#### Example TOML configuration
[system_metrics_config_scheduler.toml](https://github.com/influxdata/influxdb3_plugins/blob/master/influxdata/system_metrics/system_metrics_config_scheduler.toml)
For more information on using TOML configuration files, see the Using TOML Configuration Files section in the [influxdb3_plugins/README.md](https://github.com/influxdata/influxdb3_plugins/blob/master/README.md).
## Software Requirements
- **{{% product-name %}}**: with the Processing Engine enabled.
- **Python packages**:
- `psutil` (for system metrics collection)
### Installation steps
1. Start {{% product-name %}} with the Processing Engine enabled (`--plugin-dir /path/to/plugins`):
```bash
influxdb3 serve \
--node-id node0 \
--object-store file \
--data-dir ~/.influxdb3 \
--plugin-dir ~/.plugins
```
2. Install required Python packages:
```bash
influxdb3 install package psutil
```
## Trigger setup
### Basic Scheduled Trigger
```bash
influxdb3 create trigger \
--database system_monitoring \
--plugin-filename gh:influxdata/system_metrics/system_metrics.py \
--path "gh:influxdata/system_metrics/system_metrics.py" \
--trigger-spec "every:30s" \
system_metrics_trigger
```
### Using Configuration File
```bash
influxdb3 create trigger \
--database system_monitoring \
--plugin-filename gh:influxdata/system_metrics/system_metrics.py \
--path "gh:influxdata/system_metrics/system_metrics.py" \
--trigger-spec "every:1m" \
--trigger-arguments config_file_path=system_metrics_config_scheduler.toml \
system_metrics_config_trigger
```
### Custom Configuration
```bash
influxdb3 create trigger \
--database system_monitoring \
--path "gh:influxdata/system_metrics/system_metrics.py" \
--trigger-spec "every:30s" \
--trigger-arguments hostname=web-server-01,include_disk=false,max_retries=5 \
system_metrics_custom_trigger
```
## Example usage
### Monitor Web Server Performance
```bash
# Create trigger for web server monitoring every 15 seconds
influxdb3 create trigger \
--database web_monitoring \
--path "gh:influxdata/system_metrics/system_metrics.py" \
--trigger-spec "every:15s" \
--trigger-arguments hostname=web-server-01,include_network=true \
web_server_metrics
```
### Database Server Monitoring
```bash
# Focus on CPU and disk metrics for database server
influxdb3 create trigger \
--database db_monitoring \
--path "gh:influxdata/system_metrics/system_metrics.py" \
--trigger-spec "every:30s" \
--trigger-arguments hostname=db-primary,include_disk=true,include_cpu=true,include_network=false \
database_metrics
# Query disk usage
influxdb3 query \
--database db_monitoring \
"SELECT * FROM system_disk_usage WHERE host = 'db-primary'"
```
### High-Frequency System Monitoring
```bash
# Collect all metrics every 10 seconds with higher retry tolerance
influxdb3 create trigger \
--database system_monitoring \
--path "gh:influxdata/system_metrics/system_metrics.py" \
--trigger-spec "every:10s" \
--trigger-arguments hostname=critical-server,max_retries=10 \
high_freq_metrics
```
### Query collected metrics
This plugin collects system metrics automatically. After the trigger runs, query to view the collected data:
```bash
influxdb3 query \
--database system_monitoring \
"SELECT * FROM system_cpu WHERE time >= now() - interval '5 minutes' LIMIT 5"
```
**Expected output**

```
+------+--------+-------+--------+------+--------+-------+--------+-------+-------+------------+------------------+
| host | cpu | user | system | idle | iowait | nice | irq | load1 | load5 | load15 | time |
+------+--------+-------+--------+------+--------+-------+--------+-------+-------+------------+------------------+
| srv1 | total | 12.5 | 5.3 | 81.2 | 0.8 | 0.0 | 0.2 | 0.85 | 0.92 | 0.88 | 2024-01-15 10:00 |
| srv1 | total | 13.1 | 5.5 | 80.4 | 0.7 | 0.0 | 0.3 | 0.87 | 0.93 | 0.88 | 2024-01-15 10:01 |
| srv1 | total | 11.8 | 5.1 | 82.0 | 0.9 | 0.0 | 0.2 | 0.83 | 0.91 | 0.88 | 2024-01-15 10:02 |
| srv1 | total | 14.2 | 5.8 | 79.0 | 0.8 | 0.0 | 0.2 | 0.89 | 0.92 | 0.88 | 2024-01-15 10:03 |
| srv1 | total | 12.9 | 5.4 | 80.6 | 0.9 | 0.0 | 0.2 | 0.86 | 0.92 | 0.88 | 2024-01-15 10:04 |
+------+--------+-------+--------+------+--------+-------+--------+-------+-------+------------+------------------+
```
## Code overview

### Main Functions

#### `process_scheduled_call()`

The main entry point for scheduled triggers. Collects system metrics based on configuration and writes them to InfluxDB.
```python
def process_scheduled_call(influxdb3_local, call_time, args):
# Parse configuration
config = parse_config(args)
# Collect metrics based on configuration
if config['include_cpu']:
collect_cpu_metrics(influxdb3_local, config['hostname'])
if config['include_memory']:
collect_memory_metrics(influxdb3_local, config['hostname'])
# ... additional metric collections
```
### Measurements and Fields
#### system_cpu
Overall CPU statistics and metrics:
- **Tags**: `host`, `cpu=total`
- **Fields**: `user`, `system`, `idle`, `iowait`, `nice`, `irq`, `softirq`, `steal`, `guest`, `guest_nice`, `frequency_current`, `frequency_min`, `frequency_max`, `ctx_switches`, `interrupts`, `soft_interrupts`, `syscalls`, `load1`, `load5`, `load15`
#### system_cpu_cores
Per-core CPU statistics:
- **Tags**: `host`, `core` (core number)
- **Fields**: `usage`, `user`, `system`, `idle`, `iowait`, `nice`, `irq`, `softirq`, `steal`, `guest`, `guest_nice`, `frequency_current`, `frequency_min`, `frequency_max`
#### system_memory
System memory statistics:
- **Tags**: `host`
- **Fields**: `total`, `available`, `used`, `free`, `active`, `inactive`, `buffers`, `cached`, `shared`, `slab`, `percent`
#### system_swap
Swap memory statistics:
- **Tags**: `host`
- **Fields**: `total`, `used`, `free`, `percent`, `sin`, `sout`
#### system_memory_faults
Memory page fault information (when available):
- **Tags**: `host`
- **Fields**: `page_faults`, `major_faults`, `minor_faults`, `rss`, `vms`, `dirty`, `uss`, `pss`
#### system_disk_usage
Disk partition usage:
- **Tags**: `host`, `device`, `mountpoint`, `fstype`
- **Fields**: `total`, `used`, `free`, `percent`
#### system_disk_io
Disk I/O statistics:
- **Tags**: `host`, `device`
- **Fields**: `reads`, `writes`, `read_bytes`, `write_bytes`, `read_time`, `write_time`, `busy_time`, `read_merged_count`, `write_merged_count`
#### system_disk_performance
Calculated disk performance metrics:
- **Tags**: `host`, `device`
- **Fields**: `read_bytes_per_sec`, `write_bytes_per_sec`, `read_iops`, `write_iops`, `avg_read_latency_ms`, `avg_write_latency_ms`, `util_percent`
#### system_network
Network interface statistics:
- **Tags**: `host`, `interface`
- **Fields**: `bytes_sent`, `bytes_recv`, `packets_sent`, `packets_recv`, `errin`, `errout`, `dropin`, `dropout`
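Each measurement above is written with its tags and fields. As a rough sketch of the mapping (hypothetical helper and values; real line protocol also distinguishes integer fields with an `i` suffix, which this simplified version omits):

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Build a single, simplified InfluxDB line-protocol record."""
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_part = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_part} {field_part} {timestamp_ns}"

line = to_line_protocol(
    "system_memory",
    {"host": "web-server-01"},
    {"total": 8589934592, "used": 4294967296, "percent": 50.0},
    1700000000000000000,
)
print(line)
```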
## Troubleshooting
### Common issues
#### Issue: Permission errors for disk I/O metrics
**Solution**: The plugin will continue collecting other metrics even if some require elevated permissions. Run InfluxDB with appropriate permissions if disk I/O metrics are required.
#### Issue: Missing psutil library
**Solution**: Install the psutil package:
```bash
influxdb3 install package psutil
```
#### Issue: High CPU usage from plugin
**Solution**: Increase the trigger interval (for example, from `every:10s` to `every:30s`). Disable unnecessary metric types. Reduce the number of disk partitions monitored.
### Viewing Logs
Logs are stored in the trigger's database in the `system.processing_engine_logs` table:
```bash
influxdb3 query \
--database YOUR_DATABASE \
"SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'system_metrics_trigger' ORDER BY event_time DESC LIMIT 10"
```
### Verifying Data Collection
Check that metrics are being collected:
```bash
# List all system metric measurements
influxdb3 query \
--database system_monitoring \
"SHOW MEASUREMENTS WHERE measurement =~ /^system_/"
# Check recent CPU metrics
influxdb3 query \
--database system_monitoring \
"SELECT COUNT(*) FROM system_cpu WHERE time >= now() - interval '1 hour'"
```
## Logging
Logs are stored in the `_internal` database (or the database where the trigger is created) in the `system.processing_engine_logs` table. To view logs:
```bash
influxdb3 query \
  --database _internal \
  "SELECT * FROM system.processing_engine_logs"
```

Log columns:
- **log_level**: Severity level (INFO, WARN, ERROR)
- **log_text**: Message describing the action or error
## Report an issue
For plugin issues, see the Plugins repository [issues page](https://github.com/influxdata/influxdb3_plugins/issues).

The Threshold Deadman Checks Plugin provides comprehensive monitoring capabilities for time series data in {{% product-name %}}, combining real-time threshold detection with deadman monitoring. Monitor field values against configurable thresholds, detect data absence patterns, and trigger multi-level alerts based on aggregated metrics. Features both scheduled batch monitoring and real-time data write monitoring with configurable trigger counts and severity levels.
## Configuration
### Plugin metadata
This plugin includes a JSON metadata schema in its docstring that defines supported trigger types and configuration parameters. This metadata enables the [InfluxDB 3 Explorer](https://docs.influxdata.com/influxdb3/explorer/) UI to display and configure the plugin.
### Required parameters
| Parameter | Type | Default | Description |
|---------------|--------|----------|---------------------------------------------------------------------------------------------------|
| `measurement` | string | required | Measurement to monitor for deadman alerts and aggregation-based conditions |
| `senders` | string | required | Dot-separated notification channels with multi-channel notification integration |
| `window` | string | required | Time window for periodic data presence checking |
### Data write trigger parameters
| Parameter | Type | Default | Description |
|--------------------|--------|----------|----------------------------------------------------------------------------------------------------------------------|
| `measurement` | string | required | Measurement to monitor for real-time threshold violations in dual monitoring mode |
| `field_conditions` | string | required | Real-time threshold conditions with multi-level alerting (INFO, WARN, ERROR, CRITICAL severity levels) |
| `senders` | string | required | Dot-separated notification channels with multi-channel notification integration |
### Threshold check parameters
| Parameter | Type | Default | Description |
|----------------------------|---------|---------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `field_aggregation_values` | string | none | Multi-level aggregation conditions with aggregation support for avg, min, max, count, sum, median, stddev, first_value, last_value, var, and approx_median values |
| `deadman_check` | boolean | false | Enable deadman detection to monitor for data absence and missing data streams |
| `interval` | string | "5min" | Configurable aggregation time interval for batch processing with performance optimization |
| `trigger_count` | number | 1 | Configurable triggers requiring multiple consecutive failures before alerting |
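The `trigger_count` behavior — alerting only after N consecutive failed checks, with any passing check resetting the count — can be sketched like this. This is a simplified illustration of the gating concept, not the plugin's actual state handling:

```python
class ConsecutiveFailureGate:
    """Fire only after `trigger_count` consecutive failures; a success resets."""

    def __init__(self, trigger_count=1):
        self.trigger_count = trigger_count
        self.failures = 0

    def check(self, condition_failed):
        if condition_failed:
            self.failures += 1
        else:
            self.failures = 0
        return self.failures >= self.trigger_count


gate = ConsecutiveFailureGate(trigger_count=3)
# A success in the middle resets the streak, so only the final check
# (the third consecutive failure) should alert.
results = [gate.check(failed) for failed in [True, True, False, True, True, True]]
```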
### Notification parameters
| Parameter | Type | Default | Description |
|-------------------------------|--------|----------|--------------------------------------------------------------------------------------------|
| `influxdb3_auth_token` | string | env var | {{% product-name %}} API token with environment variable support |
| `notification_deadman_text` | string | template | Customizable deadman alert template message with dynamic variables |
| `notification_threshold_text` | string | template | Customizable threshold alert template message with dynamic variables |
| `notification_text` | string | template | Customizable notification template message for data write triggers with dynamic variables |
| `notification_path` | string | "notify" | Notification endpoint path with retry logic and exponential backoff |
| `port_override` | number | 8181 | InfluxDB port override for notification delivery |
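The retry behavior mentioned for `notification_path` can be illustrated with a simple exponential backoff schedule. This is an assumption about the general pattern only; the plugin's actual retry count and delays may differ:

```python
def backoff_delays(attempts, base=1.0, factor=2.0):
    """Delay (in seconds) before each retry: base, base*factor, base*factor^2, ..."""
    return [base * factor ** i for i in range(attempts)]


# Three retries would wait 1s, 2s, then 4s between attempts.
delays = backoff_delays(3)
```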
### TOML configuration
| Parameter | Type | Default | Description |
|--------------------|--------|---------|----------------------------------------------------------------------------------|
| `config_file_path` | string | none | TOML config file path relative to `PLUGIN_DIR` (required for TOML configuration) |
*To use a TOML configuration file, set the `PLUGIN_DIR` environment variable and specify the `config_file_path` in the trigger arguments.* This is in addition to the `--plugin-dir` flag when starting {{% product-name %}}.
Example TOML configuration files provided:
- [threshold_deadman_config_scheduler.toml](https://github.com/influxdata/influxdb3_plugins/blob/master/influxdata/threshold_deadman_checks/threshold_deadman_config_scheduler.toml) - for scheduled triggers
- [threshold_deadman_config_data_writes.toml](https://github.com/influxdata/influxdb3_plugins/blob/master/influxdata/threshold_deadman_checks/threshold_deadman_config_data_writes.toml) - for data write triggers
For more information on using TOML configuration files, see the Using TOML Configuration Files section in the [influxdb3_plugins/README.md](https://github.com/influxdata/influxdb3_plugins/blob/master/README.md).
### Channel-specific configuration
Notification channels require additional parameters based on the sender type (same as the [influxdata/notifier plugin](../notifier/README.md)).
## Schema requirement
The plugin assumes that the table schema is already defined in the database, as it relies on this schema to retrieve field and tag names required for processing.
## Software requirements
- **InfluxDB v3 Core/Enterprise**: with the Processing Engine enabled.
- **Notification Sender Plugin for {{% product-name %}}**: This plugin is required for sending notifications. See the [influxdata/notifier plugin](../notifier/README.md).
## Installation steps
1. **Start {{% product-name %}} with the Processing Engine enabled** (`--plugin-dir /path/to/plugins`):
```bash
influxdb3 serve \
--node-id node0 \
--object-store file \
--data-dir ~/.influxdb3 \
--plugin-dir ~/.plugins
```
2. **Install required Python packages**:
```bash
influxdb3 install package requests
```
3. **Optional**: For notifications, install and configure the [influxdata/notifier plugin](../notifier/README.md)
## Trigger setup
### Scheduled trigger
Create a trigger for periodic threshold and deadman checks:
```bash
influxdb3 create trigger \
--database mydb \
--path "gh:influxdata/threshold_deadman_checks/threshold_deadman_checks_plugin.py" \
--trigger-spec "every:10m" \
--trigger-arguments "measurement=cpu,senders=slack,field_aggregation_values=temp:avg@>=30-ERROR,window=10m,trigger_count=3,deadman_check=true,slack_webhook_url=$SLACK_WEBHOOK_URL" \
threshold_scheduler
```
Set `SLACK_WEBHOOK_URL` to your Slack incoming webhook URL.
### Data write trigger
Create a trigger for real-time threshold monitoring:
```bash
influxdb3 create trigger \
--database mydb \
--path "gh:influxdata/threshold_deadman_checks/threshold_deadman_checks_plugin.py" \
--trigger-spec "all_tables" \
--trigger-arguments "measurement=cpu,field_conditions=temp>30-WARN:status==ok-INFO,senders=slack,trigger_count=2,slack_webhook_url=$SLACK_WEBHOOK_URL" \
threshold_datawrite
```
Set `SLACK_WEBHOOK_URL` to your Slack incoming webhook URL.
### Enable triggers
```bash
influxdb3 enable trigger --database mydb threshold_scheduler
influxdb3 enable trigger --database mydb threshold_datawrite
```
## Examples
### Deadman monitoring
Write test heartbeat data and monitor for data absence:
```bash
# Write test data
influxdb3 write \
--database sensors \
"heartbeat,host=server1 status=1"
influxdb3 write \
--database sensors \
"heartbeat,host=server1 status=0"
# Create and enable the trigger
influxdb3 create trigger \
--database sensors \
--path "gh:influxdata/threshold_deadman_checks/threshold_deadman_checks_plugin.py" \
--trigger-spec "every:5m" \
--trigger-arguments "measurement=heartbeat,senders=slack,window=5m,deadman_check=true,slack_webhook_url=$SLACK_WEBHOOK_URL" \
heartbeat_monitor
influxdb3 enable trigger --database sensors heartbeat_monitor
# Query to verify data
influxdb3 query \
--database sensors \
"SELECT * FROM heartbeat ORDER BY time DESC LIMIT 5"
```
Set `SLACK_WEBHOOK_URL` to your Slack incoming webhook URL.
**Expected output**
When no data is received within the window, a deadman alert is sent: "CRITICAL: No heartbeat data from heartbeat between 2025-06-01T10:00:00Z and 2025-06-01T10:05:00Z"
### Deadman monitoring with SMS notifications
Monitor for data absence and alert when no data is received:
```bash
influxdb3 create trigger \
--database sensors \
--path "gh:influxdata/threshold_deadman_checks/threshold_deadman_checks_plugin.py" \
--trigger-spec "every:15m" \
--trigger-arguments "measurement=heartbeat,senders=sms,window=10m,deadman_check=true,trigger_count=2,twilio_from_number=+1234567890,twilio_to_number=+0987654321,notification_deadman_text=CRITICAL: No heartbeat data from \$table between \$time_from and \$time_to" \
heartbeat_monitor
```
### Multi-level threshold monitoring
Monitor aggregated values with different severity levels:
```bash
influxdb3 create trigger \
--database monitoring \
--path "gh:influxdata/threshold_deadman_checks/threshold_deadman_checks_plugin.py" \
--trigger-spec "every:5m" \
--trigger-arguments "measurement=system_metrics,senders=slack.discord,field_aggregation_values='cpu_usage:avg@>=80-WARN cpu_usage:avg@>=95-ERROR memory_usage:max@>=90-WARN',window=5m,interval=1min,trigger_count=3,slack_webhook_url=$SLACK_WEBHOOK_URL,discord_webhook_url=$DISCORD_WEBHOOK_URL" \
system_threshold_monitor
```
Set `SLACK_WEBHOOK_URL` and `DISCORD_WEBHOOK_URL` to your webhook URLs.
### Real-time field condition monitoring
Monitor data writes for immediate threshold violations:
```bash
influxdb3 create trigger \
--database applications \
--path "gh:influxdata/threshold_deadman_checks/threshold_deadman_checks_plugin.py" \
--trigger-spec "all_tables" \
--trigger-arguments "measurement=response_times,field_conditions=latency>500-WARN:latency>1000-ERROR:error_rate>0.05-CRITICAL,senders=http,trigger_count=1,http_webhook_url=$HTTP_WEBHOOK_URL,notification_text=[\$level] Application alert: \$field \$op_sym \$compare_val (actual: \$actual)" \
app_performance_monitor
```
Set `HTTP_WEBHOOK_URL` to your HTTP webhook endpoint.
### Combined monitoring
Monitor both aggregation thresholds and deadman conditions:
```bash
influxdb3 create trigger \
--database comprehensive \
--path "gh:influxdata/threshold_deadman_checks/threshold_deadman_checks_plugin.py" \
--trigger-spec "every:10m" \
--trigger-arguments "measurement=temperature_sensors,senders=whatsapp,field_aggregation_values='temperature:avg@>=35-WARN temperature:max@>=40-ERROR',window=15m,deadman_check=true,trigger_count=2,twilio_from_number=+1234567890,twilio_to_number=+0987654321" \
comprehensive_sensor_monitor
```
## Code overview
### Files
- `threshold_deadman_checks_plugin.py`: The main plugin code containing handlers for scheduled and data write triggers
- `threshold_deadman_config_scheduler.toml`: Example TOML configuration for scheduled triggers
- `threshold_deadman_config_data_writes.toml`: Example TOML configuration for data write triggers
### Logging
Logs are stored in the trigger's database in the `system.processing_engine_logs` table. To view logs:
```bash
influxdb3 query --database YOUR_DATABASE "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'threshold_scheduler'"
```
### Main functions
#### `process_scheduled_call(influxdb3_local, call_time, args)`
Handles scheduled threshold and deadman checks. Queries data within the specified window, evaluates aggregation-based conditions, and checks for data absence.
#### `process_writes(influxdb3_local, table_batches, args)`
Handles real-time threshold monitoring on data writes. Evaluates incoming data against configured field conditions with multi-level severity.
## Troubleshooting
### Common issues
#### Issue: No alerts triggered
**Solution**: Verify threshold values are appropriate for your data ranges. Check that notification channels are properly configured. Ensure the Notifier Plugin is installed and accessible. Review plugin logs for configuration errors.
#### Issue: False positive alerts
**Solution**: Increase `trigger_count` to require more consecutive failures. Adjust threshold values to be less sensitive. Consider longer aggregation intervals for noisy data.
#### Issue: Missing deadman alerts
**Solution**: Verify `deadman_check=true` is set in configuration. Check that the measurement name matches existing data. Ensure the time window is appropriate for your data frequency.
#### Issue: Authentication issues
**Solution**: Set `INFLUXDB3_AUTH_TOKEN` environment variable. Verify API token has required database permissions. Check Twilio credentials for SMS/WhatsApp notifications.
### Configuration formats
**Aggregation conditions (scheduled)**
- Format: `field:aggregation@operator value-level`
- Example: `temp:avg@>=30-ERROR`
- Multiple conditions: `"temp:avg@>=30-WARN humidity:min@<40-INFO"`
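Assuming the format above, a small parser for a single aggregation condition might look like this. This is an illustrative sketch only, not the plugin's real parser:

```python
import re


def parse_aggregation_condition(cond):
    """Parse 'field:aggregation@operator value-level', e.g. 'temp:avg@>=30-ERROR'."""
    m = re.fullmatch(r"(\w+):(\w+)@(>=|<=|==|!=|>|<)([\d.]+)-(\w+)", cond)
    if not m:
        raise ValueError(f"unrecognized condition: {cond!r}")
    field, agg, op, value, level = m.groups()
    return {"field": field, "aggregation": agg, "operator": op,
            "value": float(value), "level": level}


parsed = parse_aggregation_condition("temp:avg@>=30-ERROR")
```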
**Field conditions (data write)**
- Format: `field operator value-level`
- Example: `temp>30-WARN:status==ok-INFO`
- Supported operators: `>`, `<`, `>=`, `<=`, `==`, `!=`
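The colon-separated data-write format can be parsed with a similar sketch (again illustrative, not the plugin's actual implementation; it assumes numeric or single-word comparison values):

```python
import re


def parse_field_conditions(spec):
    """Parse colon-separated conditions, e.g. 'temp>30-WARN:status==ok-INFO'."""
    out = []
    for cond in spec.split(":"):
        m = re.fullmatch(r"(\w+)(>=|<=|==|!=|>|<)(\w+(?:\.\w+)?)-(\w+)", cond)
        if not m:
            raise ValueError(f"unrecognized condition: {cond!r}")
        out.append(m.groups())  # (field, operator, value, level)
    return out


conditions = parse_field_conditions("temp>30-WARN:status==ok-INFO")
```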
**Supported aggregations**
- `avg`: Average value
- `min`: Minimum value
- `max`: Maximum value
- `count`: Count of records
- `sum`: Sum of values
- `median`: Median value
- `stddev`: Standard deviation
- `first_value`: First value in time interval
- `last_value`: Last value in time interval
- `var`: Variance of values
- `approx_median`: Approximate median (faster than exact median)
### Message template variables
**Deadman notifications**
- `$table`: Measurement name
- `$time_from`: Start of checked period
- `$time_to`: End of checked period
**Threshold notifications (scheduled)**
- `$level`: Alert severity level
- `$table`: Measurement name
- `$field`: Field name
- `$row`: Unique identifier
**Threshold notifications (data write)**
- `$level`: Alert severity level
- `$field`: Field name
- `$op_sym`: Operator symbol
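These `$variable` placeholders follow the style of Python's `string.Template`; a hedged sketch of how a data-write notification might be rendered (the variable names come from the tables and examples above, but the values here are invented for illustration):

```python
from string import Template

# Template mirrors the notification_text example used with the data write trigger.
template = Template("[$level] Alert: $field $op_sym $compare_val (actual: $actual)")
message = template.safe_substitute(
    level="WARN", field="temp", op_sym=">", compare_val="30", actual="31.2"
)
```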
### Row identification
The `row` variable uniquely identifies alert contexts using format: `measurement:level:tag1=value1:tag2=value2`
This ensures trigger counts are maintained independently for each unique combination of measurement, severity level, and tag values.
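Assuming the format above, the row identifier could be built like this (a sketch only; the real plugin's tag ordering may differ — sorting is assumed here so the key is deterministic):

```python
def make_row_id(measurement, level, tags):
    """Build 'measurement:level:tag1=value1:tag2=value2' with sorted tags."""
    tag_part = ":".join(f"{k}={v}" for k, v in sorted(tags.items()))
    return f"{measurement}:{level}:{tag_part}" if tag_part else f"{measurement}:{level}"


row = make_row_id("cpu", "WARN", {"host": "server1", "region": "us-east"})
```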
## Report an issue
For plugin issues, see the Plugins repository [issues page](https://github.com/influxdata/influxdb3_plugins/issues).
## Find support for {{% product-name %}}
The [InfluxDB Discord server](https://discord.gg/9zaNCW2PRT) is the best place to find support for {{% product-name %}}.
For other InfluxDB versions, see the [Support and feedback](#bug-reports-and-feedback) options.

/**
* Remove the emoji metadata lines from content.
* Handles both single-line and multi-line formats:
* - Single: scheduled 🔧 InfluxDB 3
* - Multi: scheduled\n🏷 tags 🔧 InfluxDB 3
*/
function removeEmojiMetadata(content) {
// Remove multi-line emoji metadata (⚡ on first line, 🔧 on second line)
content = content.replace(/^⚡[^\n]*\n🏷[^\n]*🔧[^\n]*\n*/gm, '');
// Remove single-line emoji metadata (⚡ and 🔧 on same line)
content = content.replace(/^⚡.*?🔧.*?$\n*/gm, '');
return content;
}
function addProductShortcodes(content) {
const replacements = [
[/InfluxDB 3 Core\/Enterprise/g, '{{% product-name %}}'],
[/InfluxDB 3 Core and InfluxDB 3 Enterprise/g, '{{% product-name %}}'],
[/InfluxDB 3 Core, InfluxDB 3 Enterprise/g, '{{% product-name %}}'],
// Be careful not to replace in URLs or code blocks
[/(?<!\/)InfluxDB 3(?![/_])/g, '{{% product-name %}}'],
// Be careful not to replace in URLs, code blocks, or product names like "InfluxDB 3 Explorer"
[/(?<!\/)InfluxDB 3(?! Explorer)(?![/_])/g, '{{% product-name %}}'],
];
for (const [pattern, replacement] of replacements) {