feat(influxdb3): add TOML configuration documentation to plugin library

- Add TOML configuration section to plugins-library index explaining usage
- Add TOML config tables to all official plugin documentation files
- Standardize TOML section format across plugins with config_file_path parameter
- Update system-metrics plugin documentation after its move from examples to official
- Remove redundant config_file_path from individual parameter tables
- Ensure consistent placement before Installation/Requirements sections
- Fix linting: replace 'e.g.' with 'for example' in system-metrics.md

This completes the TOML configuration documentation updates from PR 6244
Jason Stirnaman 2025-07-30 04:17:05 -05:00
parent 55e128cb78
commit df6c963b35
13 changed files with 453 additions and 335 deletions


@ -15,3 +15,46 @@ Plugins in this library include a JSON metadata schema in a docstring header that is used by:
- the [InfluxDB 3 Explorer UI](/influxdb3/explorer/) to display and configure the plugin
- automated testing and validation of plugins in the repository
## Using TOML Configuration Files
Many plugins in this library support using TOML configuration files to specify all plugin arguments. This is useful for complex configurations or when you want to version control your plugin settings.
### Important Requirements
**To use TOML configuration files, you must set the `PLUGIN_DIR` environment variable in the {{% product-name %}} host environment.** This is required in addition to the `--plugin-dir` flag when starting {{% product-name %}}:
- `--plugin-dir` tells {{% product-name %}} where to find plugin Python files
- `PLUGIN_DIR` environment variable tells the plugins where to find TOML configuration files
### Set up TOML Configuration
1. **Start {{% product-name %}} with the PLUGIN_DIR environment variable set**:
```bash
PLUGIN_DIR=~/.plugins influxdb3 serve --node-id node0 --object-store file --data-dir ~/.influxdb3 --plugin-dir ~/.plugins
```
2. **Copy or create a TOML configuration file in your plugin directory**:
```bash
# Example: copy a plugin's configuration template
cp plugin_config_example.toml ~/.plugins/my_config.toml
```
3. **Edit the TOML file** to match your requirements. The TOML file should contain all the arguments defined in the plugin's argument schema.
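   For example, a minimal configuration file might look like the following (the keys shown are hypothetical; use the argument names defined in your plugin's schema):
   ```toml
   # ~/.plugins/my_config.toml
   # Example only: replace these keys with the arguments defined in the
   # plugin's argument schema (see the JSON schema in its docstring header).
   measurement = "cpu"
   window = "1h"
   max_retries = 3
   ```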
4. **Create a trigger with the `config_file_path` argument**:
When creating a trigger, specify the `config_file_path` argument to point to your TOML configuration file.
- Specify only the filename (not the full path)
- The file must be located under `PLUGIN_DIR`
```bash
influxdb3 create trigger \
--database mydb \
--plugin-filename plugin_name.py \
--trigger-spec "every:1d" \
--trigger-arguments config_file_path=my_config.toml \
my_trigger_name
```
For more information on using TOML configuration files, see the project [README](https://github.com/influxdata/influxdb3_plugins/blob/master/README.md).


@ -10,15 +10,9 @@ The plugin can optionally double-count rows for a specified table to demonstrate
|-----------|------|---------|-------------|
| `double_count_table` | string | none | Table name for which to double the row count in write reports (for testing/demonstration) |
## Requirements
## Installation steps
### Software requirements
- InfluxDB 3 Core or InfluxDB 3 Enterprise with Processing Engine enabled
- No additional Python packages required (uses built-in libraries)
### Installation steps
1. Start InfluxDB 3 with plugin support:
1. Start {{% product-name %}} with the Processing Engine enabled (`--plugin-dir /path/to/plugins`)
```bash
influxdb3 serve \
--node-id node0 \


@ -31,22 +31,43 @@ The plugin supports both scheduled batch processing of historical data and real-
| `excluded_fields` | string | none | Dot-separated list of fields to exclude |
| `filters` | string | none | Query filters. Format: `'field:"operator value"'` |
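Assuming the same string formats apply when these arguments are supplied through a TOML configuration file, an excerpt might look like this (field names are hypothetical):
```toml
# Hypothetical excerpt from a basic_transformation config file.
# `filters` uses the documented 'field:"operator value"' format;
# `excluded_fields` is a dot-separated list.
excluded_fields = "debug_info.internal_id"
filters = 'status:"= active"'
```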
### Advanced parameters
### TOML configuration
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `config_file_path` | string | none | Path to TOML config file relative to PLUGIN_DIR |
| Parameter | Type | Default | Description |
|--------------------|--------|---------|----------------------------------------------------------------------------------|
| `config_file_path` | string | none | TOML config file path relative to `PLUGIN_DIR` (required for TOML configuration) |
## Requirements
*To use a TOML configuration file, set the `PLUGIN_DIR` environment variable and specify the `config_file_path` in the trigger arguments.* This is in addition to the `--plugin-dir` flag when starting InfluxDB 3.
### Software requirements
- InfluxDB 3 Core or Enterprise with Processing Engine enabled
- Python packages:
- `pint` (for unit conversions)
#### Example TOML configurations
### Installation steps
- [basic_transformation_config_scheduler.toml](https://github.com/influxdata/influxdb3_plugins/blob/master/influxdata/basic_transformation/basic_transformation_config_scheduler.toml) - for scheduled triggers
- [basic_transformation_config_data_writes.toml](https://github.com/influxdata/influxdb3_plugins/blob/master/influxdata/basic_transformation/basic_transformation_config_data_writes.toml) - for data write triggers
1. Start InfluxDB 3 with plugin support:
```bash
influxdb3 create trigger \
--database mydb \
--plugin-filename basic_transformation.py \
--trigger-spec "every:1d" \
--trigger-arguments config_file_path=basic_transformation_config_scheduler.toml \
basic_transform_trigger
```
For more information on using TOML configuration files, see the Using TOML Configuration Files section in the [influxdb3_plugins/README.md](https://github.com/influxdata/influxdb3_plugins/blob/master/README.md).
## Schema requirements
The plugin assumes that the table schema is already defined in the database, as it relies on this schema to retrieve field and tag names required for processing.
> [!WARNING]
> #### Requires existing schema
>
> By design, the plugin returns an error if the schema doesn't exist or doesn't contain the expected columns.
## Installation steps
1. Start {{% product-name %}} with the Processing Engine enabled (`--plugin-dir /path/to/plugins`)
```bash
influxdb3 serve \
--node-id node0 \
@ -56,6 +77,9 @@ The plugin supports both scheduled batch processing of historical data and real-
```
2. Install required Python packages:
- `pint` (for unit conversions)
```bash
influxdb3 install package pint
```
@ -174,52 +198,6 @@ influxdb3 create trigger \
status_updater
```
## Using TOML Configuration Files
This plugin supports using TOML configuration files to specify all plugin arguments. This is useful for complex configurations or when you want to version control your plugin settings.
### Important Requirements
**To use TOML configuration files, you must set the `PLUGIN_DIR` environment variable in the InfluxDB 3 host environment.** This is required in addition to the `--plugin-dir` flag when starting InfluxDB 3:
- `--plugin-dir` tells InfluxDB 3 where to find plugin Python files
- `PLUGIN_DIR` environment variable tells the plugins where to find TOML configuration files
### Setting Up TOML Configuration
1. **Start InfluxDB 3 with the PLUGIN_DIR environment variable set**:
```bash
PLUGIN_DIR=~/.plugins influxdb3 serve --node-id node0 --object-store file --data-dir ~/.influxdb3 --plugin-dir ~/.plugins
```
2. **Copy the example TOML configuration file to your plugin directory**:
```bash
cp basic_transformation_config_scheduler.toml ~/.plugins/
# or for data writes:
cp basic_transformation_config_data_writes.toml ~/.plugins/
```
3. **Edit the TOML file** to match your requirements. The TOML file contains all the arguments defined in the plugin's argument schema (see the JSON schema in the docstring at the top of basic_transformation.py).
4. **Create a trigger using the `config_file_path` argument**:
```bash
influxdb3 create trigger \
--database mydb \
--plugin-filename basic_transformation.py \
--trigger-spec "every:1d" \
--trigger-arguments config_file_path=basic_transformation_config_scheduler.toml \
basic_transform_trigger
```
### Important Notes
- The `PLUGIN_DIR` environment variable must be set when starting InfluxDB 3 for TOML configuration to work
- When using `config_file_path`, specify only the filename (not the full path)
- The TOML file must be located in the directory specified by `PLUGIN_DIR`
- All parameters in the TOML file will override any command-line arguments
- Example TOML configuration files are provided:
- `basic_transformation_config_scheduler.toml` - for scheduled triggers
- `basic_transformation_config_data_writes.toml` - for data write triggers
## Code overview
### Files


@ -36,24 +36,32 @@ Each downsampled record includes metadata about the original data points compres
| `target_database` | string | "default" | Database for storing downsampled data |
| `max_retries` | integer | 5 | Maximum number of retries for write operations |
| `batch_size` | string | "30d" | Time interval for batch processing (HTTP mode only) |
| `config_file_path` | string | none | Path to TOML config file relative to PLUGIN_DIR |
### Metadata columns
### TOML configuration
| Parameter | Type | Default | Description |
|--------------------|--------|---------|----------------------------------------------------------------------------------|
| `config_file_path` | string | none | TOML config file path relative to `PLUGIN_DIR` (required for TOML configuration) |
*To use a TOML configuration file, set the `PLUGIN_DIR` environment variable and specify the `config_file_path` in the trigger arguments.* This is in addition to the `--plugin-dir` flag when starting InfluxDB 3.
#### Example TOML configuration
[downsampling_config_scheduler.toml](downsampling_config_scheduler.toml)
For more information on using TOML configuration files, see the Using TOML Configuration Files section in the [influxdb3_plugins/README.md](https://github.com/influxdata/influxdb3_plugins/blob/master/README.md).
## Schema management
Each downsampled record includes three additional metadata columns:
- `record_count`—the number of original points compressed into this single downsampled row
- `time_from`—the minimum timestamp among the original points in the interval
- `time_to`—the maximum timestamp among the original points in the interval
## Requirements
## Installation steps
### Software requirements
- InfluxDB 3 Core or Enterprise with Processing Engine enabled
- Python packages: No additional packages required
### Installation steps
1. Start InfluxDB 3 with plugin support:
1. Start {{% product-name %}} with the Processing Engine enabled (`--plugin-dir /path/to/plugins`)
```bash
influxdb3 serve \
--node-id node0 \
@ -181,46 +189,6 @@ curl -X POST http://localhost:8181/api/v3/engine/downsample \
}'
```
## Using TOML Configuration Files
This plugin supports using TOML configuration files for complex configurations.
### Important Requirements
**To use TOML configuration files, you must set the `PLUGIN_DIR` environment variable in the InfluxDB 3 host environment:**
```bash
PLUGIN_DIR=~/.plugins influxdb3 serve --node-id node0 --object-store file --data-dir ~/.influxdb3 --plugin-dir ~/.plugins
```
### Example TOML Configuration
```toml
# downsampling_config_scheduler.toml
source_measurement = "cpu"
target_measurement = "cpu_hourly"
target_database = "analytics"
interval = "1h"
window = "6h"
calculations = "avg"
specific_fields = "usage_user.usage_system.usage_idle"
max_retries = 3
[tag_values]
host = ["server1", "server2", "server3"]
```
### Create trigger using TOML config
```bash
influxdb3 create trigger \
--database mydb \
--plugin-filename downsampler.py \
--trigger-spec "every:1h" \
--trigger-arguments config_file_path=downsampling_config_scheduler.toml \
downsample_trigger
```
## Code overview
### Files


@ -68,18 +68,24 @@ It includes debounce logic to suppress transient anomalies and supports multi-ch
| `twilio_from_number` | string | required | Twilio sender number (for example, `"+1234567890"`) |
| `twilio_to_number` | string | required | Recipient number (for example, `"+0987654321"`) |
## Requirements
### TOML configuration
### Software requirements
- InfluxDB 3 Core or Enterprise with Processing Engine enabled
- Notification Sender Plugin for InfluxDB 3 (required for notifications)
- Python packages:
- `pandas` (for data processing)
- `requests` (for HTTP notifications)
| Parameter | Type | Default | Description |
|--------------------|--------|---------|----------------------------------------------------------------------------------|
| `config_file_path` | string | none | TOML config file path relative to `PLUGIN_DIR` (required for TOML configuration) |
### Installation steps
*To use a TOML configuration file, set the `PLUGIN_DIR` environment variable and specify the `config_file_path` in the trigger arguments.* This is in addition to the `--plugin-dir` flag when starting InfluxDB 3.
1. Start InfluxDB 3 with plugin support:
#### Example TOML configuration
[forecast_error_config_scheduler.toml](https://github.com/influxdata/influxdb3_plugins/blob/master/influxdata/forecast_error_evaluator/forecast_error_config_scheduler.toml)
For more information on using TOML configuration files, see the Using TOML Configuration Files section in the [influxdb3_plugins/README.md](https://github.com/influxdata/influxdb3_plugins/blob/master/README.md).
## Installation steps
1. Start {{% product-name %}} with the Processing Engine enabled (`--plugin-dir /path/to/plugins`)
```bash
influxdb3 serve \
--node-id node0 \
@ -89,12 +95,17 @@ It includes debounce logic to suppress transient anomalies and supports multi-ch
```
2. Install required Python packages:
- `pandas` (for data processing)
- `requests` (for HTTP notifications)
```bash
influxdb3 install package pandas
influxdb3 install package requests
```
3. Install the Notification Sender Plugin (required):
```bash
# Ensure notifier plugin is available in ~/.plugins/
```


@ -53,27 +53,40 @@ The plugin supports both scheduled batch transfers of historical data and on-dem
- The `time` column is converted to `datetime64[us]` for Iceberg compatibility
- Tables are created in format: `<namespace>.<table_name>`
## Requirements
## Schema requirements
### Software requirements
- InfluxDB 3 Core or Enterprise with Processing Engine enabled
- Python packages:
- `pandas` (for data manipulation)
- `pyarrow` (for Parquet support)
- `pyiceberg[catalog-options]` (for Iceberg integration)
The plugin assumes that the Iceberg table schema is already defined in the database, as it relies on this schema to retrieve field and tag names required for processing.
### Installation steps
> [!WARNING]
> #### Requires existing schema
>
> By design, the plugin returns an error if the schema doesn't exist or doesn't contain the expected columns.
1. Start InfluxDB 3 with plugin support:
```bash
influxdb3 serve \
--node-id node0 \
--object-store file \
--data-dir ~/.influxdb3 \
--plugin-dir ~/.plugins
```
### TOML configuration
| Parameter | Type | Default | Description |
|--------------------|--------|---------|----------------------------------------------------------------------------------|
| `config_file_path` | string | none | TOML config file path relative to `PLUGIN_DIR` (required for TOML configuration) |
*To use a TOML configuration file, set the `PLUGIN_DIR` environment variable and specify the `config_file_path` in the trigger arguments.* This is in addition to the `--plugin-dir` flag when starting InfluxDB 3.
#### Example TOML configuration
[influxdb_to_iceberg_config_scheduler.toml](https://github.com/influxdata/influxdb3_plugins/blob/master/influxdata/influxdb_to_iceberg/influxdb_to_iceberg_config_scheduler.toml)
For more information on using TOML configuration files, see the Using TOML Configuration Files section in the [influxdb3_plugins/README.md](https://github.com/influxdata/influxdb3_plugins/blob/master/README.md).
## Installation steps
1. Start {{% product-name %}} with the Processing Engine enabled (`--plugin-dir /path/to/plugins`)
2. Install required Python packages:
- `pandas` (for data manipulation)
- `pyarrow` (for Parquet support)
- `pyiceberg[catalog-options]` (for Iceberg integration)
```bash
influxdb3 install package pandas
influxdb3 install package pyarrow


@ -67,31 +67,43 @@ Default notification templates:
| `twilio_from_number` | string | Yes | Sender phone number |
| `twilio_to_number` | string | Yes | Recipient phone number |
## Requirements
## Schema requirements
### Software requirements
- InfluxDB 3 Core or Enterprise with Processing Engine enabled
- Python packages:
- `requests` (for notification delivery)
- Notification Sender Plugin (required for sending alerts)
The plugin assumes that the table schema is already defined in the database, as it relies on this schema to retrieve field and tag names required for processing.
### Installation steps
> [!WARNING]
> #### Requires existing schema
>
> By design, the plugin returns an error if the schema doesn't exist or doesn't contain the expected columns.
1. Start InfluxDB 3 with plugin support:
```bash
influxdb3 serve \
--node-id node0 \
--object-store file \
--data-dir ~/.influxdb3 \
--plugin-dir ~/.plugins
```
### TOML configuration
| Parameter | Type | Default | Description |
|--------------------|--------|---------|----------------------------------------------------------------------------------|
| `config_file_path` | string | none | TOML config file path relative to `PLUGIN_DIR` (required for TOML configuration) |
*To use a TOML configuration file, set the `PLUGIN_DIR` environment variable and specify the `config_file_path` in the trigger arguments.* This is in addition to the `--plugin-dir` flag when starting InfluxDB 3.
#### Example TOML configuration
[mad_anomaly_config_data_writes.toml](https://github.com/influxdata/influxdb3_plugins/blob/master/influxdata/mad_check/mad_anomaly_config_data_writes.toml)
For more information on using TOML configuration files, see the Using TOML Configuration Files section in the [influxdb3_plugins/README.md](https://github.com/influxdata/influxdb3_plugins/blob/master/README.md).
## Installation
1. Start {{% product-name %}} with the Processing Engine enabled (`--plugin-dir /path/to/plugins`)
2. Install required Python packages:
- `requests` (for notification delivery)
```bash
influxdb3 install package requests
```
3. Install and configure the [Notification Sender Plugin](https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/notifier)
3. Install and configure the official [Notifier plugin](https://github.com/influxdata/influxdb3_plugins/tree/main/influxdata/notifier)
## Trigger setup


@ -51,16 +51,19 @@ The `senders_config` parameter accepts channel configurations where keys are sen
## Installation
### Install dependencies
1. Start {{% product-name %}} with the Processing Engine enabled (`--plugin-dir /path/to/plugins`)
Install required Python packages:
2. Install required Python packages:
```bash
influxdb3 install package httpx
influxdb3 install package twilio
```
- `httpx` (for HTTP requests)
- `twilio` (for SMS and WhatsApp notifications)
### Create trigger
```bash
influxdb3 install package httpx
influxdb3 install package twilio
```
## Create trigger
Create an HTTP trigger to handle notification requests:


@ -55,20 +55,34 @@ Supports both scheduled batch forecasting and on-demand HTTP-triggered forecasts
| `senders` | string | none | Dot-separated notification channels |
| `notification_path` | string | "notify" | Notification endpoint path |
| `influxdb3_auth_token` | string | env var | Authentication token |
| `config_file_path` | string | none | TOML config file path relative to PLUGIN_DIR |
### TOML configuration
| Parameter | Type | Default | Description |
|--------------------|--------|---------|----------------------------------------------------------------------------------|
| `config_file_path` | string | none | TOML config file path relative to `PLUGIN_DIR` (required for TOML configuration) |
*To use a TOML configuration file, set the `PLUGIN_DIR` environment variable and specify the `config_file_path` in the trigger arguments.* This is in addition to the `--plugin-dir` flag when starting InfluxDB 3.
#### Example TOML configuration
[prophet_forecasting_scheduler.toml](https://github.com/influxdata/influxdb3_plugins/blob/master/influxdata/prophet_forecasting/prophet_forecasting_scheduler.toml)
For more information on using TOML configuration files, see the Using TOML Configuration Files section in the [influxdb3_plugins/README.md](https://github.com/influxdata/influxdb3_plugins/blob/master/README.md).
## Installation
### Install dependencies
1. Start {{% product-name %}} with the Processing Engine enabled (`--plugin-dir /path/to/plugins`)
Install required Python packages:
2. Install required Python packages:
```bash
influxdb3 install package pandas
influxdb3 install package numpy
influxdb3 install package requests
influxdb3 install package prophet
```
```bash
influxdb3 install package pandas
influxdb3 install package numpy
influxdb3 install package requests
influxdb3 install package prophet
```
### Create scheduled trigger


@ -44,15 +44,41 @@ Supports both scheduled batch monitoring and real-time data write monitoring wit
Notification channels require additional parameters based on the sender type (same as the [Notifier Plugin](../notifier/README.md)).
## Schema requirements
The plugin assumes that the table schema is already defined in the database, as it relies on this schema to retrieve field and tag names required for processing.
> [!WARNING]
> #### Requires existing schema
>
> By design, the plugin returns an error if the schema doesn't exist or doesn't contain the expected columns.
### TOML configuration
| Parameter | Type | Default | Description |
|--------------------|--------|---------|----------------------------------------------------------------------------------|
| `config_file_path` | string | none | TOML config file path relative to `PLUGIN_DIR` (required for TOML configuration) |
*To use a TOML configuration file, set the `PLUGIN_DIR` environment variable and specify the `config_file_path` in the trigger arguments.* This is in addition to the `--plugin-dir` flag when starting InfluxDB 3.
Example TOML configuration files provided:
- [state_change_config_scheduler.toml](https://github.com/influxdata/influxdb3_plugins/blob/master/influxdata/state_change/state_change_config_scheduler.toml) - for scheduled triggers
- [state_change_config_data_writes.toml](https://github.com/influxdata/influxdb3_plugins/blob/master/influxdata/state_change/state_change_config_data_writes.toml) - for data write triggers
For more information on using TOML configuration files, see the Using TOML Configuration Files section in the [influxdb3_plugins/README.md](https://github.com/influxdata/influxdb3_plugins/blob/master/README.md).
## Installation
### Install dependencies
1. Start {{% product-name %}} with the Processing Engine enabled (`--plugin-dir /path/to/plugins`)
Install required Python packages:
2. Install required Python packages:
```bash
influxdb3 install package requests
```
- `requests` (for HTTP requests)
```bash
influxdb3 install package requests
```
### Create scheduled trigger


@ -44,17 +44,36 @@ Features consensus-based detection requiring multiple detectors to agree before
| `PersistAD` | Detects persistent anomalous values | None |
| `SeasonalAD` | Detects seasonal pattern deviations | None |
### TOML configuration
| Parameter | Type | Default | Description |
|--------------------|--------|---------|----------------------------------------------------------------------------------|
| `config_file_path` | string | none | TOML config file path relative to `PLUGIN_DIR` (required for TOML configuration) |
*To use a TOML configuration file, set the `PLUGIN_DIR` environment variable and specify the `config_file_path` in the trigger arguments.* This is in addition to the `--plugin-dir` flag when starting InfluxDB 3.
#### Example TOML configuration
[adtk_anomaly_config_scheduler.toml](https://github.com/influxdata/influxdb3_plugins/blob/master/influxdata/stateless_adtk_detector/adtk_anomaly_config_scheduler.toml)
For more information on using TOML configuration files, see the Using TOML Configuration Files section in the [influxdb3_plugins/README.md](https://github.com/influxdata/influxdb3_plugins/blob/master/README.md).
## Installation
### Install dependencies
1. Start {{% product-name %}} with the Processing Engine enabled (`--plugin-dir /path/to/plugins`)
Install required Python packages:
2. Install required Python packages:
```bash
influxdb3 install package requests
influxdb3 install package adtk
influxdb3 install package pandas
```
- `requests` (for HTTP requests)
- `adtk` (for anomaly detection)
- `pandas` (for data manipulation)
```bash
influxdb3 install package requests
influxdb3 install package adtk
influxdb3 install package pandas
```
### Create trigger


@ -1,186 +1,185 @@
A comprehensive system monitoring plugin that collects CPU, memory, disk, and network metrics from the host system.
This plugin provides detailed performance insights including per-core CPU statistics, memory usage breakdowns, disk I/O performance, and network interface statistics.
The System Metrics Plugin provides comprehensive system monitoring capabilities for InfluxDB 3, collecting CPU, memory, disk, and network metrics from the host system.
Monitor detailed performance insights including per-core CPU statistics, memory usage breakdowns, disk I/O performance, and network interface statistics.
Features configurable metric collection with robust error handling and retry logic for reliable monitoring.
## Configuration
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `hostname` | string | `localhost` | Hostname to tag metrics with |
| `include_cpu` | boolean | `true` | Include CPU metrics collection |
| `include_memory` | boolean | `true` | Include memory metrics collection |
| `include_disk` | boolean | `true` | Include disk metrics collection |
| `include_network` | boolean | `true` | Include network metrics collection |
| `max_retries` | integer | `3` | Maximum number of retry attempts on failure |
| `config_file_path` | string | None | Path to configuration file from PLUGIN_DIR env var |
### Required parameters
## Requirements
No required parameters; all system metrics are collected by default.
- InfluxDB 3 Core or InfluxDB 3 Enterprise
- Python psutil library (automatically installed)
### System monitoring parameters
### Files
| Parameter | Type | Default | Description |
|-------------------|---------|-------------|--------------------------------------------------------------------------------|
| `hostname` | string | `localhost` | Hostname to tag all metrics with for system identification |
| `include_cpu` | boolean | `true` | Include comprehensive CPU metrics collection (overall and per-core statistics) |
| `include_memory` | boolean | `true` | Include memory metrics collection (RAM usage, swap statistics, page faults) |
| `include_disk` | boolean | `true` | Include disk metrics collection (partition usage, I/O statistics, performance) |
| `include_network` | boolean | `true` | Include network metrics collection (interface statistics and error counts) |
| `max_retries` | integer | `3` | Maximum retry attempts on failure with graceful error handling |
- `system_metrics.py`: Main plugin file containing metric collection logic
- `system_metrics_config_scheduler.toml`: Configuration template for scheduled triggers
- `README.md`: This documentation file
### TOML configuration
### Features
| Parameter | Type | Default | Description |
|--------------------|--------|---------|----------------------------------------------------------------------------------|
| `config_file_path` | string | none | TOML config file path relative to `PLUGIN_DIR` (required for TOML configuration) |
- **CPU Metrics**: Overall and per-core CPU usage, frequency, load averages, context switches, and interrupts
- **Memory Metrics**: RAM usage, swap statistics, and memory page fault information
- **Disk Metrics**: Partition usage, I/O statistics, throughput, IOPS, and latency calculations
- **Network Metrics**: Interface statistics including bytes/packets sent/received and error counts
- **Configurable Collection**: Enable/disable specific metric types via configuration
- **Robust Error Handling**: Retry logic and graceful handling of permission errors
- **Task Tracking**: UUID-based task identification for debugging and log correlation
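As an illustration, the collection flags above could be set in a TOML file modeled on the provided `system_metrics_config_scheduler.toml` template (values below are illustrative, not defaults):
```toml
# Illustrative values only; parameter names match the configuration table.
hostname = "web-server-01"
include_cpu = true
include_memory = true
include_disk = false
include_network = true
max_retries = 5
```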
*To use a TOML configuration file, set the `PLUGIN_DIR` environment variable and specify the `config_file_path` in the trigger arguments.* This is in addition to the `--plugin-dir` flag when starting InfluxDB 3.
## Trigger Setup
#### Example TOML configuration
### Install Required Dependencies
[system_metrics_config_scheduler.toml](https://github.com/influxdata/influxdb3_plugins/blob/master/influxdata/system_metrics/system_metrics_config_scheduler.toml)
```bash
influxdb3 install package psutil
```
For more information on using TOML configuration files, see the Using TOML Configuration Files section in the [influxdb3_plugins/README.md](https://github.com/influxdata/influxdb3_plugins/blob/master/README.md).
### Basic Scheduled Trigger
## Installation steps
1. Start {{% product-name %}} with the Processing Engine enabled (`--plugin-dir /path/to/plugins`)
2. Install required Python packages:
- `psutil` (for system metrics collection)
```bash
influxdb3 install package psutil
```
## Trigger setup
### Basic scheduled trigger
Monitor system performance every 30 seconds:
```bash
influxdb3 create trigger \
--database system_monitoring \
--plugin-filename system_metrics.py \
--plugin-filename gh:influxdata/system_metrics/system_metrics.py \
--trigger-spec "every:30s" \
system_metrics_trigger
```
### Using Configuration File
### Custom configuration
Monitor specific metrics with custom hostname:
```bash
influxdb3 create trigger \
--database system_monitoring \
--plugin-filename system_metrics.py \
--trigger-spec "every:1m" \
--trigger-arguments config_file_path=system_metrics_config_scheduler.toml \
system_metrics_config_trigger
```
### Custom Configuration
```bash
influxdb3 create trigger \
--database system_monitoring \
--plugin-filename system_metrics.py \
--plugin-filename gh:influxdata/system_metrics/system_metrics.py \
--trigger-spec "every:30s" \
--trigger-arguments hostname=web-server-01,include_disk=false,max_retries=5 \
system_metrics_custom_trigger
```
## Example Usage
## Example usage
### Monitor Web Server Performance
### Example 1: Web server monitoring
Monitor web server performance every 15 seconds with network statistics:
```bash
# Create trigger for web server monitoring every 15 seconds
# Create trigger for web server monitoring
influxdb3 create trigger \
--database web_monitoring \
--plugin-filename system_metrics.py \
--plugin-filename gh:influxdata/system_metrics/system_metrics.py \
--trigger-spec "every:15s" \
--trigger-arguments hostname=web-server-01,include_network=true \
web_server_metrics
# Query recent CPU metrics
influxdb3 query \
--database web_monitoring \
"SELECT * FROM system_cpu WHERE time >= now() - interval '5 minutes' LIMIT 5"
```
### Database Server Monitoring
### Expected output
```
+---------------+-------+------+--------+------+--------+-------+-------+-----------+------------------+
| host | cpu | user | system | idle | iowait | nice | load1 | load5 | time |
+---------------+-------+------+--------+------+--------+-------+-------+-----------+------------------+
| web-server-01 | total | 12.5 | 5.3 | 81.2 | 0.8 | 0.0 | 0.85 | 0.92 | 2024-01-15 10:00 |
| web-server-01 | total | 13.1 | 5.5 | 80.4 | 0.7 | 0.0 | 0.87 | 0.93 | 2024-01-15 10:01 |
| web-server-01 | total | 11.8 | 5.1 | 82.0 | 0.9 | 0.0 | 0.83 | 0.91 | 2024-01-15 10:02 |
+---------------+-------+------+--------+------+--------+-------+-------+-----------+------------------+
```
### Example 2: Database server monitoring
Focus on CPU and disk metrics for database server:
```bash
# Create trigger for database server
influxdb3 create trigger \
--database db_monitoring \
--plugin-filename gh:influxdata/system_metrics/system_metrics.py \
--trigger-spec "every:30s" \
--trigger-arguments hostname=db-primary,include_disk=true,include_cpu=true,include_network=false \
database_metrics
# Query disk usage
influxdb3 query \
--database db_monitoring \
"SELECT * FROM system_disk_usage WHERE host = 'db-primary'"
```
### Example 3: High-frequency monitoring
Collect all metrics every 10 seconds with higher retry tolerance:
```bash
# Create high-frequency monitoring trigger
influxdb3 create trigger \
--database system_monitoring \
--plugin-filename gh:influxdata/system_metrics/system_metrics.py \
--trigger-spec "every:10s" \
--trigger-arguments hostname=critical-server,max_retries=10 \
high_freq_metrics
```
## Code overview
### Files
- `system_metrics.py`: The main plugin code containing system metrics collection logic
- `system_metrics_config_scheduler.toml`: Example TOML configuration file for scheduled triggers
### Logging
Logs are stored in the `_internal` database (or the database where the trigger is created) in the `system.processing_engine_logs` table. To view logs:
```bash
influxdb3 query --database _internal "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'your_trigger_name'"
```
Log columns:
- **event_time**: Timestamp of the log event
- **trigger_name**: Name of the trigger that generated the log
- **log_level**: Severity level (INFO, WARN, ERROR)
- **log_text**: Message describing the action or error
### Main functions

#### `process_scheduled_call(influxdb3_local, call_time, args)`

The main entry point for scheduled triggers. Collects system metrics based on the trigger configuration and writes them to InfluxDB.

Key operations:

1. Parses configuration from the trigger arguments
2. Collects CPU, memory, disk, and network metrics based on the configuration
3. Writes metrics to InfluxDB with error handling and retry logic
```python
def process_scheduled_call(influxdb3_local, call_time, args):
    # Parse configuration
    config = parse_config(args)

    # Collect metrics based on configuration
    if config['include_cpu']:
        collect_cpu_metrics(influxdb3_local, config['hostname'])
    if config['include_memory']:
        collect_memory_metrics(influxdb3_local, config['hostname'])
    # ... additional metric collections
```
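The `parse_config()` helper referenced in the entry point can be sketched as follows. This is a hypothetical implementation; the actual defaults (for example, the fallback hostname) and boolean parsing rules are assumptions based on the arguments documented in this README:

```python
def parse_config(args):
    """Parse trigger arguments into a config dict, applying assumed defaults."""
    args = args or {}

    def as_bool(value, default):
        # Trigger arguments arrive as strings, for example include_disk=false
        if value is None:
            return default
        return str(value).strip().lower() in ("true", "1", "yes")

    return {
        "hostname": args.get("hostname", "localhost"),
        "include_cpu": as_bool(args.get("include_cpu"), True),
        "include_memory": as_bool(args.get("include_memory"), True),
        "include_disk": as_bool(args.get("include_disk"), True),
        "include_network": as_bool(args.get("include_network"), True),
        "max_retries": int(args.get("max_retries", 3)),
    }
```

With this sketch, the trigger arguments `hostname=web-server-01,include_disk=false,max_retries=5` from the examples above yield a dict with disk collection disabled and five retries.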
#### `collect_cpu_metrics(influxdb3_local, hostname)`

Collects CPU utilization and performance metrics, including per-core statistics and system load averages:

```python
def collect_cpu_metrics(influxdb3_local, hostname):
    # Get overall CPU stats
    cpu_percent = psutil.cpu_percent(interval=1, percpu=False)
    cpu_times = psutil.cpu_times()

    # Build and write CPU metrics
    line = (
        LineBuilder("system_cpu")
        .tag("host", hostname)
        .tag("cpu", "total")
        .float64_field("user", cpu_times.user)
        .float64_field("system", cpu_times.system)
        .float64_field("idle", cpu_times.idle)
        .time_ns(time.time_ns())
    )
    influxdb3_local.write(line)
```
#### `collect_memory_metrics(influxdb3_local, hostname)`

Collects memory usage statistics, including RAM, swap, and page fault information.

#### `collect_disk_metrics(influxdb3_local, hostname)`

Collects disk usage and I/O statistics for all mounted partitions.

#### `collect_network_metrics(influxdb3_local, hostname)`

Collects network interface statistics, including bytes transferred and error counts.
### Measurements and fields
## Troubleshooting
### Common issues
#### Issue: Permission errors on disk I/O metrics
Some disk I/O metrics may require elevated permissions:
```
ERROR: [Permission denied] Unable to access disk I/O statistics
```
**Solution**: The plugin will continue collecting other metrics even if some require elevated permissions. Consider running InfluxDB 3 with appropriate permissions if disk I/O metrics are critical.
#### Issue: Missing psutil library
```
ERROR: No module named 'psutil'
```
**Solution**: Install the psutil package:
```bash
influxdb3 install package psutil
```
#### Issue: High CPU usage from plugin
If the plugin causes high CPU usage, consider:
- Increasing the trigger interval (for example, from `every:10s` to `every:30s`)
- Disabling unnecessary metric types
- Reducing the number of disk partitions monitored
#### Issue: No data being collected
**Solution**:
1. Check that the trigger is active:
```bash
influxdb3 query --database _internal "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'your_trigger_name'"
```
2. Verify system permissions allow access to system metrics
3. Check that the psutil package is properly installed
### Debugging tips
1. **Check recent metrics collection**:
```bash
# List all system metric measurements
influxdb3 query \
  --database system_monitoring \
  "SHOW MEASUREMENTS WHERE measurement =~ /^system_/"

# Check recent CPU metrics
influxdb3 query \
  --database system_monitoring \
  "SELECT COUNT(*) FROM system_cpu WHERE time >= now() - interval '1 hour'"
```
2. **Monitor plugin logs**:
```bash
influxdb3 query \
--database _internal \
"SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'system_metrics_trigger' ORDER BY time DESC LIMIT 10"
```
3. **Test metric collection manually**:
```bash
influxdb3 test schedule_plugin \
  --database system_monitoring \
  --schedule "0 0 * * * ?" \
  system_metrics.py
```
### Performance considerations
- The plugin collects comprehensive system metrics efficiently using the psutil library
- Metric collection is optimized to minimize system overhead
- Error handling and retry logic ensure reliable operation
- Configurable metric types allow focusing on relevant metrics only
## Report an issue
For plugin issues, see the Plugins repository [issues page](https://github.com/influxdata/influxdb3_plugins/issues).
Notification channels require additional parameters based on the sender type (same as the [Notifier Plugin](../notifier/README.md)).
## Schema requirements
The plugin assumes that the table schema is already defined in the database, as it relies on this schema to retrieve field and tag names required for processing.
> [!WARNING]
> #### Requires existing schema
>
> By design, the plugin returns an error if the schema doesn't exist or doesn't contain the expected columns.
### TOML configuration
| Parameter | Type | Default | Description |
|--------------------|--------|---------|----------------------------------------------------------------------------------|
| `config_file_path` | string | none | TOML config file path relative to `PLUGIN_DIR` (required for TOML configuration) |
*To use a TOML configuration file, set the `PLUGIN_DIR` environment variable and specify the `config_file_path` in the trigger arguments.* Setting the `PLUGIN_DIR` environment variable is required in addition to the `--plugin-dir` flag when starting InfluxDB 3.
Example TOML configuration files provided:
- [threshold_deadman_config_scheduler.toml](https://github.com/influxdata/influxdb3_plugins/blob/master/influxdata/threshold_deadman_checks/threshold_deadman_config_scheduler.toml) - for scheduled triggers
- [threshold_deadman_config_data_writes.toml](https://github.com/influxdata/influxdb3_plugins/blob/master/influxdata/threshold_deadman_checks/threshold_deadman_config_data_writes.toml) - for data write triggers
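Putting the pieces together, a scheduled trigger that reads its arguments from the example scheduler TOML file might be created like this. The plugin filename, database, and trigger names are illustrative assumptions, and the TOML file must live under `PLUGIN_DIR`:

```bash
influxdb3 create trigger \
  --database monitoring \
  --plugin-filename threshold_deadman_checks_plugin.py \
  --trigger-spec "every:10m" \
  --trigger-arguments config_file_path=threshold_deadman_config_scheduler.toml \
  threshold_deadman_scheduler_trigger
```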
For more information on using TOML configuration files, see the Using TOML Configuration Files section in the [influxdb3_plugins/README.md](https://github.com/influxdata/influxdb3_plugins/blob/master/README.md).
## Installation
1. Start {{% product-name %}} with the Processing Engine enabled (`--plugin-dir /path/to/plugins`)

2. Install required Python packages:
- `requests` (for HTTP requests)
```bash
influxdb3 install package requests
```
### Create scheduled trigger