Merge remote-tracking branch 'origin/jts-6161-3-2-api-ref' into core-ent-3.2
commit a40220091c
@@ -922,9 +922,25 @@ paths:
      summary: Delete a database
      description: |
        Soft deletes a database.
        The database is scheduled for deletion and unavailable for querying.
        Use the `hard_delete_at` parameter to schedule a hard deletion.
      parameters:
        - $ref: '#/components/parameters/db'
        - name: hard_delete_at
          in: query
          required: false
          schema:
            type: string
            format: date-time
          description: |
            Schedule the database for hard deletion at the specified time.
            If not provided, the database will be soft deleted.
            Use ISO 8601 date-time format (for example, "2025-12-31T23:59:59Z").

            #### Deleting a database cannot be undone

            Deleting a database is a destructive action.
            Once a database is deleted, data stored in that database cannot be recovered.
      responses:
        '200':
          description: Success. Database deleted.
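The `hard_delete_at` parameter described in this hunk takes an ISO 8601 date-time. A minimal sketch of building such a request URL; the endpoint path and the default port `8181` are assumptions here, not taken from this diff:

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlencode

# Hard-delete timestamp 7 days out, in the ISO 8601 format the parameter expects.
hard_delete_at = (datetime.now(timezone.utc) + timedelta(days=7)).strftime("%Y-%m-%dT%H:%M:%SZ")

# Endpoint path and port are assumptions for illustration.
query = urlencode({"db": "mydb", "hard_delete_at": hard_delete_at})
url = "http://localhost:8181/api/v3/configure/database?" + query

print(url)
```

Sending this as a `DELETE` request (with authentication) would schedule the soft-deleted database for hard deletion at the given time.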
@@ -961,7 +977,13 @@ paths:
      summary: Delete a table
      description: |
        Soft deletes a table.
        The table is scheduled for deletion and unavailable for querying.
        Use the `hard_delete_at` parameter to schedule a hard deletion.

        #### Deleting a table cannot be undone

        Deleting a table is a destructive action.
        Once a table is deleted, data stored in that table cannot be recovered.
      parameters:
        - $ref: '#/components/parameters/db'
        - name: table
@@ -969,6 +991,16 @@ paths:
          required: true
          schema:
            type: string
        - name: hard_delete_at
          in: query
          required: false
          schema:
            type: string
            format: date-time
          description: |
            Schedule the table for hard deletion at the specified time.
            If not provided, the table will be soft deleted.
            Use ISO 8601 format (for example, "2025-12-31T23:59:59Z").
      responses:
        '200':
          description: Success (no content). The table has been deleted.
@@ -1078,7 +1110,7 @@ paths:
              In `"cron:CRON_EXPRESSION"`, `CRON_EXPRESSION` uses extended 6-field cron format.
              The cron expression `0 0 6 * * 1-5` means the trigger will run at 6:00 AM every weekday (Monday to Friday).
            value:
-             db: DATABASE_NAME
+             db: mydb
              plugin_filename: schedule.py
              trigger_name: schedule_cron_trigger
              trigger_specification: cron:0 0 6 * * 1-5
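The extended 6-field cron format referenced in this hunk prepends a seconds field to the standard five fields. A quick sanity check of the example expression:

```python
# Extended 6-field cron: second minute hour day-of-month month day-of-week.
expr = "0 0 6 * * 1-5"
second, minute, hour, day_of_month, month, day_of_week = expr.split()

assert (second, minute, hour) == ("0", "0", "6")  # fires at 06:00:00
assert day_of_week == "1-5"                       # Monday through Friday
```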
@@ -1136,7 +1168,7 @@ paths:
              db: mydb
              plugin_filename: request.py
              trigger_name: hello_world_trigger
-             trigger_specification: path:hello-world
+             trigger_specification: request:hello-world
          cron_friday_afternoon:
            summary: Cron trigger for Friday afternoons
            description: |
@@ -1365,16 +1397,16 @@ paths:
          description: Plugin not enabled.
      tags:
        - Processing engine
- /api/v3/engine/{plugin_path}:
+ /api/v3/engine/{request_path}:
    parameters:
-     - name: plugin_path
+     - name: request_path
        description: |
-         The path configured in the request trigger specification `"path:<plugin_path>"` for the plugin.
+         The path configured in the request trigger specification for the plugin.

          For example, if you define a trigger with the following:

          ```json
-         trigger-spec: "path:hello-world"
+         trigger_specification: "request:hello-world"
          ```

          then, the HTTP API exposes the following plugin endpoint:
@@ -1390,7 +1422,7 @@ paths:
      operationId: GetProcessingEnginePluginRequest
      summary: On Request processing engine plugin request
      description: |
-       Executes the On Request processing engine plugin specified in `<plugin_path>`.
+       Executes the On Request processing engine plugin specified in the trigger's `plugin_filename`.
        The request can include request headers, query string parameters, and a request body, which InfluxDB passes to the plugin.

        An On Request plugin implements the following signature:
@@ -1417,7 +1449,7 @@ paths:
      operationId: PostProcessingEnginePluginRequest
      summary: On Request processing engine plugin request
      description: |
-       Executes the On Request processing engine plugin specified in `<plugin_path>`.
+       Executes the On Request processing engine plugin specified in the trigger's `plugin_filename`.
        The request can include request headers, query string parameters, and a request body, which InfluxDB passes to the plugin.

        An On Request plugin implements the following signature:
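The plugin signature mentioned above is cut off by the hunk boundary. As a hypothetical sketch, the On Request interface generally looks like the following; the exact parameter names are an assumption from the processing engine plugin docs, not from this diff:

```python
# Hypothetical On Request plugin sketch; parameter names are assumed,
# not taken from this diff.
def process_request(influxdb3_local, query_parameters, request_headers, request_body, args=None):
    # Log the incoming request and return a JSON-serializable response.
    influxdb3_local.info("request body: " + str(request_body))
    return {"status": "ok"}


# Minimal stand-in for the API object InfluxDB passes to the plugin,
# useful for exercising the function outside the server.
class FakeLocal:
    def info(self, msg):
        print(msg)


result = process_request(FakeLocal(), {}, {}, '{"hello": "world"}')
```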
@@ -1868,7 +1900,7 @@ components:
            `schedule.py` or `endpoints/report.py`.
            The path can be absolute or relative to the `--plugins-dir` directory configured when starting InfluxDB 3.

-           The plugin file must implement the trigger interface associated with the trigger's specification (`trigger_spec`).
+           The plugin file must implement the trigger interface associated with the trigger's specification.
        trigger_name:
          type: string
        trigger_specification:
@@ -1911,12 +1943,12 @@ components:
            - `table:TABLE_NAME` - Triggers on write events to a specific table

            ### On-demand triggers
-           Format: `path:ENDPOINT_NAME`
+           Format: `request:REQUEST_PATH`

-           Creates an HTTP endpoint `/api/v3/engine/ENDPOINT_NAME` for manual invocation:
-           - `path:hello-world` - Creates endpoint `/api/v3/engine/hello-world`
-           - `path:data-export` - Creates endpoint `/api/v3/engine/data-export`
-         pattern: ^(cron:[0-9 *,/-]+|every:[0-9]+[smhd]|all_tables|table:[a-zA-Z_][a-zA-Z0-9_]*|path:[a-zA-Z0-9_-]+)$
+           Creates an HTTP endpoint `/api/v3/engine/REQUEST_PATH` for manual invocation:
+           - `request:hello-world` - Creates endpoint `/api/v3/engine/hello-world`
+           - `request:data-export` - Creates endpoint `/api/v3/engine/data-export`
+         pattern: ^(cron:[0-9 *,/-]+|every:[0-9]+[smhd]|all_tables|table:[a-zA-Z_][a-zA-Z0-9_]*|request:[a-zA-Z0-9_-]+)$
          example: cron:0 0 6 * * 1-5
        trigger_arguments:
          type: object
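The updated `pattern` in this hunk replaces the `path:` alternative with `request:`. Checking a few specs against the new regex shows the effect:

```python
import re

# trigger_specification pattern from this hunk (request: replaces path:).
pattern = re.compile(
    r"^(cron:[0-9 *,/-]+|every:[0-9]+[smhd]|all_tables"
    r"|table:[a-zA-Z_][a-zA-Z0-9_]*|request:[a-zA-Z0-9_-]+)$"
)

assert pattern.match("request:hello-world")
assert pattern.match("cron:0 0 6 * * 1-5")
assert pattern.match("every:5m")
assert not pattern.match("path:hello-world")  # old form no longer matches
```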
@@ -2013,6 +2045,65 @@ components:
            - m
            - h
          type: string
+     UpdateDatabaseRequest:
+       type: object
+       properties:
+         retention_period:
+           type: string
+           description: |
+             The retention period for the database. Specifies how long data should be retained.
+             Use duration format (for example, "1d", "1h", "30m", "7d").
+           example: "7d"
+       description: Request schema for updating database configuration.
+     UpdateTableRequest:
+       type: object
+       properties:
+         db:
+           type: string
+           description: The name of the database containing the table.
+         table:
+           type: string
+           description: The name of the table to update.
+         retention_period:
+           type: string
+           description: |
+             The retention period for the table. Specifies how long data in this table should be retained.
+             Use duration format (for example, "1d", "1h", "30m", "7d").
+           example: "30d"
+       required:
+         - db
+         - table
+       description: Request schema for updating table configuration.
+     LicenseResponse:
+       type: object
+       properties:
+         license_type:
+           type: string
+           description: The type of license (for example, "enterprise", "trial").
+           example: "enterprise"
+         expires_at:
+           type: string
+           format: date-time
+           description: The expiration date of the license in ISO 8601 format.
+           example: "2025-12-31T23:59:59Z"
+         features:
+           type: array
+           items:
+             type: string
+           description: List of features enabled by the license.
+           example:
+             - "clustering"
+             - "processing_engine"
+             - "advanced_auth"
+         status:
+           type: string
+           enum:
+             - "active"
+             - "expired"
+             - "invalid"
+           description: The current status of the license.
+           example: "active"
+       description: Response schema for license information.
    responses:
      Unauthorized:
        description: Unauthorized access.
@@ -922,9 +922,25 @@ paths:
      summary: Delete a database
      description: |
        Soft deletes a database.
        The database is scheduled for deletion and unavailable for querying.
        Use the `hard_delete_at` parameter to schedule a hard deletion.
      parameters:
        - $ref: '#/components/parameters/db'
        - name: hard_delete_at
          in: query
          required: false
          schema:
            type: string
            format: date-time
          description: |
            Schedule the database for hard deletion at the specified time.
            If not provided, the database will be soft deleted.
            Use ISO 8601 date-time format (for example, "2025-12-31T23:59:59Z").

            #### Deleting a database cannot be undone

            Deleting a database is a destructive action.
            Once a database is deleted, data stored in that database cannot be recovered.
      responses:
        '200':
          description: Success. Database deleted.
@@ -961,7 +977,13 @@ paths:
      summary: Delete a table
      description: |
        Soft deletes a table.
        The table is scheduled for deletion and unavailable for querying.
        Use the `hard_delete_at` parameter to schedule a hard deletion.

        #### Deleting a table cannot be undone

        Deleting a table is a destructive action.
        Once a table is deleted, data stored in that table cannot be recovered.
      parameters:
        - $ref: '#/components/parameters/db'
        - name: table
@@ -969,6 +991,16 @@ paths:
          required: true
          schema:
            type: string
        - name: hard_delete_at
          in: query
          required: false
          schema:
            type: string
            format: date-time
          description: |
            Schedule the table for hard deletion at the specified time.
            If not provided, the table will be soft deleted.
            Use ISO 8601 format (for example, "2025-12-31T23:59:59Z").
      responses:
        '200':
          description: Success (no content). The table has been deleted.
@@ -978,6 +1010,77 @@ paths:
          description: Table not found.
      tags:
        - Table
+   patch:
+     operationId: PatchConfigureTable
+     summary: Update a table
+     description: |
+       Updates table configuration, such as retention period.
+     requestBody:
+       required: true
+       content:
+         application/json:
+           schema:
+             $ref: '#/components/schemas/UpdateTableRequest'
+     responses:
+       '200':
+         description: Success. The table has been updated.
+       '400':
+         description: Bad request.
+       '401':
+         $ref: '#/components/responses/Unauthorized'
+       '404':
+         description: Table not found.
+     tags:
+       - Table
+ /api/v3/configure/database/{db}:
+   patch:
+     operationId: PatchConfigureDatabase
+     summary: Update a database
+     description: |
+       Updates database configuration, such as retention period.
+     parameters:
+       - name: db
+         in: path
+         required: true
+         schema:
+           type: string
+         description: The name of the database to update.
+     requestBody:
+       required: true
+       content:
+         application/json:
+           schema:
+             $ref: '#/components/schemas/UpdateDatabaseRequest'
+     responses:
+       '200':
+         description: Success. The database has been updated.
+       '400':
+         description: Bad request.
+       '401':
+         $ref: '#/components/responses/Unauthorized'
+       '404':
+         description: Database not found.
+     tags:
+       - Database
+ /api/v3/show/license:
+   get:
+     operationId: GetShowLicense
+     summary: Show license information
+     description: |
+       Retrieves information about the current InfluxDB 3 Enterprise license.
+     responses:
+       '200':
+         description: Success. The response body contains license information.
+         content:
+           application/json:
+             schema:
+               $ref: '#/components/schemas/LicenseResponse'
+       '401':
+         $ref: '#/components/responses/Unauthorized'
+       '403':
+         description: Access denied.
+     tags:
+       - Server information
  /api/v3/configure/distinct_cache:
    post:
      operationId: PostConfigureDistinctCache
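The PATCH endpoints added in this hunk take JSON bodies. A sketch of the payloads, with field names taken from the `UpdateDatabaseRequest` and `UpdateTableRequest` schemas defined later in this diff (the `mydb`/`cpu` names are illustrative):

```python
import json

# Body for PATCH /api/v3/configure/database/{db}; retention_period uses
# duration strings per the UpdateDatabaseRequest schema in this diff.
update_database = {"retention_period": "7d"}

# Body for PATCH on the table endpoint, per UpdateTableRequest (db and
# table are required fields in that schema).
update_table = {"db": "mydb", "table": "cpu", "retention_period": "30d"}

db_body = json.dumps(update_database)
table_body = json.dumps(update_table)
```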
@@ -1136,7 +1239,8 @@ paths:
              db: mydb
              plugin_filename: request.py
              trigger_name: hello_world_trigger
-             trigger_specification: path:hello-world
+             # trigger_specification: "request:hello-world" - For 3.2.1 (issue#6171)
+             trigger_specification: {"request_path": {"path": "hello-world"}}
          cron_friday_afternoon:
            summary: Cron trigger for Friday afternoons
            description: |
@@ -1365,23 +1469,26 @@ paths:
          description: Plugin not enabled.
      tags:
        - Processing engine
- /api/v3/engine/{plugin_path}:
+ /api/v3/engine/{request_path}:
    parameters:
-     - name: plugin_path
+     - name: request_path
        description: |
-         The path configured in the request trigger specification `"path:<plugin_path>"` for the plugin.
+         The path configured in the request trigger specification for the plugin.

          For example, if you define a trigger with the following:

          ```json
-         trigger-spec: "path:hello-world"
+         trigger_specification: {"request_path": {"path": "hello-world"}}
          ```

          then, the HTTP API exposes the following plugin endpoint:

          ```
          <INFLUXDB3_HOST>/api/v3/engine/hello-world
          ```

+         ***Note:*** Currently, due to a bug in InfluxDB 3 Enterprise, the request trigger specification is different from Core.

        in: path
        required: true
        schema:
@@ -1390,7 +1497,7 @@ paths:
      operationId: GetProcessingEnginePluginRequest
      summary: On Request processing engine plugin request
      description: |
-       Executes the On Request processing engine plugin specified in `<plugin_path>`.
+       Executes the On Request processing engine plugin specified in the trigger's `plugin_filename`.
        The request can include request headers, query string parameters, and a request body, which InfluxDB passes to the plugin.

        An On Request plugin implements the following signature:
@@ -1417,7 +1524,7 @@ paths:
      operationId: PostProcessingEnginePluginRequest
      summary: On Request processing engine plugin request
      description: |
-       Executes the On Request processing engine plugin specified in `<plugin_path>`.
+       Executes the On Request processing engine plugin specified in the trigger's `plugin_filename`.
        The request can include request headers, query string parameters, and a request body, which InfluxDB passes to the plugin.

        An On Request plugin implements the following signature:
@@ -1812,6 +1919,16 @@ components:
      properties:
        db:
          type: string
          pattern: '^[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9]$|^[a-zA-Z0-9]$'
          description: |-
            The database name. Database names cannot contain underscores (_).
            Names must start and end with alphanumeric characters and can contain hyphens (-) in the middle.
        retention_period:
          type: string
          description: |-
            The retention period for the database. Specifies how long data should be retained.
            Use duration format (for example, "1d", "1h", "30m", "7d").
          example: "7d"
      required:
        - db
    CreateTableRequest:
@@ -1843,6 +1960,12 @@ components:
            required:
              - name
              - type
        retention_period:
          type: string
          description: |-
            The retention period for the table. Specifies how long data in this table should be retained.
            Use duration format (for example, "1d", "1h", "30m", "7d").
          example: "30d"
      required:
        - db
        - table
@@ -1929,11 +2052,10 @@ components:
            `schedule.py` or `endpoints/report.py`.
            The path can be absolute or relative to the `--plugins-dir` directory configured when starting InfluxDB 3.

-           The plugin file must implement the trigger interface associated with the trigger's specification (`trigger_spec`).
+           The plugin file must implement the trigger interface associated with the trigger's specification.
        trigger_name:
          type: string
        trigger_specification:
-         type: string
          description: |
            Specifies when and how the processing engine trigger should be invoked.
@@ -1972,12 +2094,25 @@ components:
            - `table:TABLE_NAME` - Triggers on write events to a specific table

            ### On-demand triggers
-           Format: `path:ENDPOINT_NAME`
+           Format: `{"request_path": {"path": "REQUEST_PATH"}}`

-           Creates an HTTP endpoint `/api/v3/engine/ENDPOINT_NAME` for manual invocation:
-           - `path:hello-world` - Creates endpoint `/api/v3/engine/hello-world`
-           - `path:data-export` - Creates endpoint `/api/v3/engine/data-export`
-         pattern: ^(cron:[0-9 *,/-]+|every:[0-9]+[smhd]|all_tables|table:[a-zA-Z_][a-zA-Z0-9_]*|path:[a-zA-Z0-9_-]+)$
+           Creates an HTTP endpoint `/api/v3/engine/REQUEST_PATH` for manual invocation:
+           - `{"request_path": {"path": "hello-world"}}` - Creates endpoint `/api/v3/engine/hello-world`
+           - `{"request_path": {"path": "data-export"}}` - Creates endpoint `/api/v3/engine/data-export`
+
+           ***Note:*** Currently, due to a bug in InfluxDB 3 Enterprise, the request trigger specification is different from Core. Use the JSON object format shown above.
+
+       oneOf:
+         - type: string
+           pattern: ^(cron:[0-9 *,/-]+|every:[0-9]+[smhd]|all_tables|table:[a-zA-Z_][a-zA-Z0-9_]*)$
+         - type: object
+           properties:
+             request_path:
+               type: object
+               properties:
+                 path:
+                   type: string
+                   pattern: ^[a-zA-Z0-9_-]+$
          example: cron:0 0 6 * * 1-5
        trigger_arguments:
          type: object
@@ -2074,6 +2209,65 @@ components:
            - m
            - h
          type: string
+     UpdateDatabaseRequest:
+       type: object
+       properties:
+         retention_period:
+           type: string
+           description: |
+             The retention period for the database. Specifies how long data should be retained.
+             Use duration format (for example, "1d", "1h", "30m", "7d").
+           example: "7d"
+       description: Request schema for updating database configuration.
+     UpdateTableRequest:
+       type: object
+       properties:
+         db:
+           type: string
+           description: The name of the database containing the table.
+         table:
+           type: string
+           description: The name of the table to update.
+         retention_period:
+           type: string
+           description: |
+             The retention period for the table. Specifies how long data in this table should be retained.
+             Use duration format (for example, "1d", "1h", "30m", "7d").
+           example: "30d"
+       required:
+         - db
+         - table
+       description: Request schema for updating table configuration.
+     LicenseResponse:
+       type: object
+       properties:
+         license_type:
+           type: string
+           description: The type of license (for example, "enterprise", "trial").
+           example: "enterprise"
+         expires_at:
+           type: string
+           format: date-time
+           description: The expiration date of the license in ISO 8601 format.
+           example: "2025-12-31T23:59:59Z"
+         features:
+           type: array
+           items:
+             type: string
+           description: List of features enabled by the license.
+           example:
+             - "clustering"
+             - "processing_engine"
+             - "advanced_auth"
+         status:
+           type: string
+           enum:
+             - "active"
+             - "expired"
+             - "invalid"
+           description: The current status of the license.
+           example: "active"
+       description: Response schema for license information.
    responses:
      Unauthorized:
        description: Unauthorized access.
@@ -27,7 +27,7 @@ influxdb3 create trigger [OPTIONS] \
| `-d` | `--database` | _({{< req >}})_ Name of the database to operate on |
| | `--token` | _({{< req >}})_ Authentication token |
| | `--plugin-filename` | _({{< req >}})_ Name of the file, stored in the server's `plugin-dir`, that contains the Python plugin code to run |
- | | `--trigger-spec` | Trigger specification--for example `table:<TABLE_NAME>` or `all_tables` |
+ | | `--trigger-spec` | Trigger specification: `table:<TABLE_NAME>`, `all_tables`, `every:<DURATION>`, `cron:<EXPRESSION>`, or `request:<REQUEST_PATH>` |
| | `--disabled` | Create the trigger in disabled state |
| | `--tls-ca` | Path to a custom TLS certificate authority (for testing or self-signed certificates) |
| `-h` | `--help` | Print help information |
@@ -113,3 +113,15 @@ influxdb3 create trigger \
Creating a trigger in a disabled state prevents it from running immediately. You can enable it later when you're ready to activate it.

{{% /code-placeholders %}}

+ {{% show-in "enterprise" %}}
+ > [!Warning]
+ > #### Request trigger specification format differs between CLI and API
+ >
+ > Due to a bug in InfluxDB 3 Enterprise, the request trigger specification format differs:
+ >
+ > - **CLI**: `request:<REQUEST_PATH>` (same as Core CLI and API)
+ > - **Enterprise API**: `{"request_path": {"path": "<REQUEST_PATH>"}}`
+ >
+ > See the [API reference](/influxdb3/enterprise/api/#operation/PostConfigureProcessingEngineTrigger) for examples. Use `influxdb3 show summary` to verify the actual trigger specification.
+ {{% /show-in %}}
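The two formats described in the warning above can be compared directly; both forms carry the same path and address the same engine endpoint once the trigger is created:

```python
# CLI form (Core and Enterprise CLI, and the Core HTTP API):
cli_spec = "request:hello-world"

# Enterprise HTTP API form, per the warning above:
api_spec = {"request_path": {"path": "hello-world"}}

# Both resolve to the same plugin endpoint once the trigger exists.
endpoint = "/api/v3/engine/" + api_spec["request_path"]["path"]
assert cli_spec.split(":", 1)[1] == api_spec["request_path"]["path"]
```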
@@ -23,24 +23,36 @@ engine [trigger](#trigger).
### Trigger

When you create a trigger, you specify a [plugin](#plugin), a database, optional
- arguments, and a _trigger-spec_, which defines when the plugin is executed and
+ arguments, and a trigger specification, which defines when the plugin is executed and
what data it receives.

#### Trigger types

InfluxDB 3 provides the following types of triggers, each with specific
- trigger-specs:
+ specifications:

- - **On WAL flush**: Sends a batch of written data (for a specific table or all
-   tables) to a plugin (by default, every second).
- - **On Schedule**: Executes a plugin on a user-configured schedule (using a
+ - **Data write** (`table:` or `all_tables`): Sends a batch of written data (for a specific table or all
+   tables) to a plugin when the database flushes data to the Write-Ahead Log (by default, every second).
+ - **Scheduled** (`every:` or `cron:`): Executes a plugin on a user-configured schedule (using a
    crontab or a duration). This trigger type is useful for data collection and
    deadman monitoring.
- - **On Request**: Binds a plugin to a custom HTTP API endpoint at
-   `/api/v3/engine/<ENDPOINT_PATH>`.
+ - **HTTP request** (`request:`): Binds a plugin to a custom HTTP API endpoint at
+   `/api/v3/engine/<REQUEST_PATH>`.
    The plugin receives the HTTP request headers and content, and can parse,
    process, and send the data into the database or to third-party services.

+ {{% show-in "enterprise" %}}
+ > [!Warning]
+ > #### Request trigger specification format differs between CLI and API
+ >
+ > Due to a bug in InfluxDB 3 Enterprise, the request trigger specification format differs:
+ >
+ > - **CLI**: `request:<REQUEST_PATH>` (same as Core CLI and API)
+ > - **Enterprise API**: `{"request_path": {"path": "<REQUEST_PATH>"}}`
+ >
+ > See the [API reference](/influxdb3/enterprise/api/#operation/PostConfigureProcessingEngineTrigger) for examples. Use `influxdb3 show summary` to verify the actual trigger specification.
+ {{% /show-in %}}

## Activate the processing engine

To activate the processing engine, include the `--plugin-dir <PLUGIN_DIR>` option
@@ -64,10 +76,10 @@ to the current working directory of the `influxdb3` server.
## Create a plugin

To create a plugin, write and store a Python file in your configured `PLUGIN_DIR`.
- The following example is a WAL flush plugin that processes data before it gets
+ The following example is a data write plugin that processes data before it gets
persisted to the object store.

- ##### Example Python plugin for WAL rows
+ ##### Example Python plugin for data writes

```python
# This is the basic structure for Python plugin code that runs in the
@@ -77,9 +89,9 @@ persisted to the object store.
# allowing you to write generic code that uses variables such as monitoring
# thresholds, environment variables, and host names.
#
- # Use the following exact signature to define a function for the WAL flush
+ # Use the following exact signature to define a function for the data write
# trigger.
- # When you create a trigger for a WAL flush plugin, you specify the database
+ # When you create a trigger for a data write plugin, you specify the database
# and tables that the plugin receives written data from on every WAL flush
# (default is once per second).
def process_writes(influxdb3_local, table_batches, args=None):
@@ -98,9 +110,8 @@ def process_writes(influxdb3_local, table_batches, args=None):
# value.
influxdb3_local.info("query result: " + str(query_result))

- # this is the data that is sent when the WAL is flushed of writes the server
- # received for the DB or table of interest. One batch for each table (will
- # only be one if triggered on a single table)
+ # this is the data that is sent when data is written to the database and flushed to the WAL.
+ # One batch for each table (will only be one if triggered on a single table)
for table_batch in table_batches:
    # here you can see that the table_name is available.
    influxdb3_local.info("table: " + table_batch["table_name"])
@@ -151,7 +162,7 @@ affecting actual data. During a plugin test:

To test a plugin:

- 1. Save the [example plugin code](#example-python-plugin-for-wal-rows) to a
+ 1. Save the [example plugin code](#example-python-plugin-for-data-writes) to a
   plugin file inside of the plugin directory. If you haven't yet written data
   to the table in the example, comment out the lines where it queries.
2. To run the test, enter the following command with the following options:
@@ -22,20 +22,11 @@ Ensure you have:

Once you have all the prerequisites in place, follow these steps to implement the Processing Engine for your data automation needs.

- 1. [Set up the Processing Engine](#set-up-the-processing-engine)
- 2. [Add a Processing Engine plugin](#add-a-processing-engine-plugin)
-    - [Use example plugins](#use-example-plugins)
-    - [Create a custom plugin](#create-a-custom-plugin)
- 3. [Set up a trigger](#set-up-a-trigger)
-    - [Understand trigger types](#understand-trigger-types)
-    - [Use the create trigger command](#use-the-create-trigger-command)
-    - [Trigger specification examples](#trigger-specification-examples)
- 4. [Advanced trigger configuration](#advanced-trigger-configuration)
-    - [Access community plugins from GitHub](#access-community-plugins-from-github)
-    - [Pass arguments to plugins](#pass-arguments-to-plugins)
-    - [Control trigger execution](#control-trigger-execution)
-    - [Configure error handling for a trigger](#configure-error-handling-for-a-trigger)
-    - [Install Python dependencies](#install-python-dependencies)
+ - [Set up the Processing Engine](#set-up-the-processing-engine)
+ - [Add a Processing Engine plugin](#add-a-processing-engine-plugin)
+ - [Set up a trigger](#set-up-a-trigger)
+ - [Advanced trigger configuration](#advanced-trigger-configuration)
+ - [Distributed cluster considerations](#distributed-cluster-considerations)

## Set up the Processing Engine
@@ -75,6 +66,8 @@ When running {{% product-name %}} in a distributed setup, follow these steps to
>
> Configure your plugin directory on the same system as the nodes that run the triggers and plugins.

+ For more information about configuring distributed environments, see the [Distributed cluster considerations](#distributed-cluster-considerations) section.
+
## Add a Processing Engine plugin

A plugin is a Python script that defines a specific function signature for a trigger (_trigger spec_). When the specified event occurs, InfluxDB runs the plugin.
@@ -168,11 +161,11 @@ Before you begin, make sure:

Choose a plugin type based on your automation goals:

- | Plugin Type | Best For | Trigger Type |
- | ---------------- | ------------------------------------------- | ------------------------ |
- | **Data write** | Processing data as it arrives | `table:` or `all_tables` |
- | **Scheduled** | Running code at specific intervals or times | `every:` or `cron:` |
- | **HTTP request** | Running code on demand via API endpoints | `path:` |
+ | Plugin Type | Best For |
+ | ---------------- | ------------------------------------------- |
+ | **Data write** | Processing data as it arrives |
+ | **Scheduled** | Running code at specific intervals or times |
+ | **HTTP request** | Running code on demand via API endpoints |

#### Create your plugin file
@@ -184,7 +177,7 @@ After writing your plugin, [create a trigger](#use-the-create-trigger-command) t

#### Create a data write plugin

- Use a data write plugin to process data as it's written to the database. Ideal use cases include:
+ Use a data write plugin to process data as it's written to the database. These plugins use [`table:` or `all_tables`](#trigger-on-data-writes) trigger specifications. Ideal use cases include:

- Data transformation and enrichment
- Alerting on incoming values
@@ -209,7 +202,7 @@ def process_writes(influxdb3_local, table_batches, args=None):

#### Create a scheduled plugin

- Scheduled plugins run at defined intervals. Use them for:
+ Scheduled plugins run at defined intervals using [`every:` or `cron:`](#trigger-on-a-schedule) trigger specifications. Use them for:

- Periodic data aggregation
- Report generation
@ -231,7 +224,7 @@ def process_scheduled_call(influxdb3_local, call_time, args=None):
|
|||
|
||||
#### Create an HTTP request plugin

HTTP request plugins respond to API calls using [`request:`](#trigger-on-http-requests) trigger specifications{{% show-in "enterprise" %}} (CLI) or `{"request_path": {"path": "..."}}` (HTTP API){{% /show-in %}}. Use them for:

- Creating custom API endpoints
- Webhooks for external integrations
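
The diff elides the HTTP example. The sketch below assumes a `process_request` entry point receiving query parameters, headers, and a body; both the name and the signature are assumptions, not confirmed by this page.

```python
import json

# Illustrative HTTP request plugin. The entry-point name and its
# parameters are assumptions; the original example is elided from the diff.

def process_request(influxdb3_local, query_parameters, request_headers,
                    request_body, args=None):
    payload = json.loads(request_body) if request_body else {}
    influxdb3_local.info(f"webhook received {len(payload)} fields")
    # Returned value becomes the HTTP response body
    return {"status": "ok", "fields": len(payload)}
```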
| Plugin type | Trigger specification | When it runs |
|------------|----------------------|-----------------|
| Data write | `table:<TABLE_NAME>` or `all_tables` | When data is written to tables |
| Scheduled | `every:<DURATION>` or `cron:<EXPRESSION>` | At specified time intervals |
| HTTP request | `request:<REQUEST_PATH>`{{% show-in "enterprise" %}} (CLI) or `{"request_path": {"path": "<REQUEST_PATH>"}}` (HTTP API){{% /show-in %}} | When HTTP requests are received |

### Use the create trigger command
### Trigger specification examples

#### Trigger on data writes

```bash
# Trigger on writes to a specific table
...
```

The trigger runs when the database flushes ingested data for the specified tables.

The plugin receives the written data and table information.

#### Trigger on a schedule

```bash
# Run every 5 minutes
influxdb3 create trigger \
  --trigger-spec "every:5m" \
  --plugin-filename "periodic_check.py" \
  --database my_database \
  regular_check
```

The plugin receives the scheduled call time.

#### Trigger on HTTP requests

```bash
# Create an endpoint at /api/v3/engine/webhook
influxdb3 create trigger \
  ...
  webhook_processor
```

Access your endpoint at `/api/v3/engine/{REQUEST_PATH}` (in this example, `/api/v3/engine/webhook`).
The trigger is enabled by default and runs when an HTTP request is received at the specified path.

To run the plugin, send a `GET` or `POST` request to the endpoint. For example:

```bash
curl http://{{% influxdb/host %}}/api/v3/engine/webhook
```

The plugin receives the HTTP request object with methods, headers, and body.
To view triggers associated with a database, use the `influxdb3 show summary` command:
```bash
influxdb3 show summary --database my_database --token AUTH_TOKEN
```

{{% show-in "enterprise" %}}
> [!Warning]
> #### Request trigger specification format differs between CLI and API
>
> Due to a bug in InfluxDB 3 Enterprise, the request trigger specification format differs:
>
> - **CLI**: `request:<REQUEST_PATH>` (same as Core CLI and API)
> - **Enterprise API**: `{"request_path": {"path": "<REQUEST_PATH>"}}`
>
> See the [API reference](/influxdb3/enterprise/api/#operation/PostConfigureProcessingEngineTrigger) for examples. Use `influxdb3 show summary` to verify the actual trigger specification.
{{% /show-in %}}
### Pass arguments to plugins
Use trigger arguments to pass configuration from a trigger to the plugin it runs. You can use this for:
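
For example, a plugin can read its configuration out of `args` (a sketch; the argument names and defaults are illustrative, and the `info()` logger is an assumption):

```python
# Sketch of reading trigger arguments inside a scheduled plugin.
# The keys "threshold" and "target_table" are illustrative only.

def process_scheduled_call(influxdb3_local, call_time, args=None):
    args = args or {}
    threshold = float(args.get("threshold", "10"))
    target_table = args.get("target_table", "alerts")
    influxdb3_local.info(f"using threshold={threshold}, writing to {target_table}")
```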

Each plugin must run on a node that supports its trigger type:

| Plugin type | Trigger specification | Runs on |
|--------------------|--------------------------|-----------------------------|
| Data write | `table:` or `all_tables` | Ingester nodes |
| Scheduled | `every:` or `cron:` | Any node with scheduler |
| HTTP request | `request:`{{% show-in "enterprise" %}} (CLI) or `{"request_path": {"path": "..."}}` (HTTP API){{% /show-in %}} | Nodes that serve API traffic|

{{% show-in "enterprise" %}}
> [!Note]
> #### Request trigger specification format differs between CLI and API
>
> Due to a bug in InfluxDB 3 Enterprise, the request trigger specification format differs:
>
> - **CLI**: `request:<REQUEST_PATH>` (same as Core CLI and API)
> - **Enterprise API**: `{"request_path": {"path": "<REQUEST_PATH>"}}`
>
> See the [API reference](/influxdb3/enterprise/api/#operation/PostConfigureProcessingEngineTrigger) for examples.
{{% /show-in %}}
For example:
- Run write-ahead log (WAL) plugins on ingester nodes.
#!/bin/bash

# Script to generate release notes for InfluxDB v3.x releases
# Usage: ./generate-release-notes.sh [--no-fetch] [--pull] <from_version> <to_version> <primary_repo_path> [additional_repo_paths...]
#
# Options:
#   --no-fetch  Skip fetching latest commits from remote
#   --pull      Pull latest changes (implies fetch) - use with caution as it may change your working directory
#
# Example: ./generate-release-notes.sh v3.1.0 v3.2.0 /path/to/influxdb /path/to/influxdb_pro /path/to/influxdb_iox
# Example: ./generate-release-notes.sh --no-fetch v3.1.0 v3.2.0 /path/to/influxdb
# Example: ./generate-release-notes.sh --pull v3.1.0 v3.2.0 /path/to/influxdb /path/to/influxdb_pro

set -e

# Parse command line options
FETCH_COMMITS=true
PULL_COMMITS=false

while [[ $# -gt 0 ]]; do
  case $1 in
    --no-fetch)
      FETCH_COMMITS=false
      shift
      ;;
    --pull)
      PULL_COMMITS=true
      FETCH_COMMITS=true
      shift
      ;;
    -*)
      echo "Unknown option $1"
      exit 1
      ;;
    *)
      break
      ;;
  esac
done

# Parse remaining arguments
FROM_VERSION="${1:-v3.1.0}"
TO_VERSION="${2:-v3.2.0}"
PRIMARY_REPO="${3:-/Users/ja/Documents/github/influxdb}"

# Collect additional repositories (all arguments after the third)
ADDITIONAL_REPOS=()
shift 3 2>/dev/null || true
while [ $# -gt 0 ]; do
  ADDITIONAL_REPOS+=("$1")
  shift
done

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'  # restored; elided in the diff excerpt
YELLOW='\033[1;33m' # restored; elided in the diff excerpt
BLUE='\033[0;34m'
NC='\033[0m' # No Color

echo -e "${BLUE}Generating release notes for ${TO_VERSION}${NC}"
echo -e "Primary Repository: ${PRIMARY_REPO}"
if [ ${#ADDITIONAL_REPOS[@]} -gt 0 ]; then
  echo -e "Additional Repositories:"
  for repo in "${ADDITIONAL_REPOS[@]}"; do
    echo -e "  - ${repo}"
  done
fi
echo -e "From: ${FROM_VERSION} To: ${TO_VERSION}\n"

# Function to extract PR number from commit message
extract_pr_number() {
  echo "$1" | grep -oE '#[0-9]+' | head -1 | sed 's/#//'
}

# Function to get commits from a repository
get_commits_from_repo() {
  local repo_path="$1"
  local pattern="$2"
  local format="${3:-%h %s}"

  if [ -d "$repo_path" ]; then
    git -C "$repo_path" log --format="$format" "${FROM_VERSION}..${TO_VERSION}" 2>/dev/null | grep -E "$pattern" || true
  fi
}

# Function to analyze API-related commits
analyze_api_changes() {
  local repo_path="$1"
  local repo_name="$2"

  if [ ! -d "$repo_path" ]; then
    return
  fi

  # Look for API-related file changes
  local api_files=$(git -C "$repo_path" diff --name-only "${FROM_VERSION}..${TO_VERSION}" 2>/dev/null | grep -E "(api|handler|endpoint|route)" | head -10 || true)

  # Look for specific API endpoint patterns in commit messages and diffs
  local api_commits=$(git -C "$repo_path" log --format="%h %s" "${FROM_VERSION}..${TO_VERSION}" 2>/dev/null | \
    grep -iE "(api|endpoint|/write|/query|/ping|/health|/metrics|v1|v2|v3)" || true)

  if [ -n "$api_files" ] || [ -n "$api_commits" ]; then
    echo "  Repository: $repo_name"
    if [ -n "$api_files" ]; then
      echo "  Modified API files:"
      echo "$api_files" | while read -r file; do
        echo "    - $file"
      done
    fi
    if [ -n "$api_commits" ]; then
      echo "  API-related commits:"
      echo "$api_commits" | while read -r commit; do
        echo "    - $commit"
      done
    fi
    echo
  fi
}

# Get the release date
RELEASE_DATE=$(git -C "$PRIMARY_REPO" log -1 --format=%ai "$TO_VERSION" | cut -d' ' -f1)
echo -e "${GREEN}Release Date: ${RELEASE_DATE}${NC}\n"

# Create array of all repositories
ALL_REPOS=("$PRIMARY_REPO")
for repo in "${ADDITIONAL_REPOS[@]}"; do
  ALL_REPOS+=("$repo")
done

# Fetch latest commits from all repositories (if enabled)
if [ "$FETCH_COMMITS" = true ]; then
  if [ "$PULL_COMMITS" = true ]; then
    echo -e "${YELLOW}Pulling latest changes from all repositories...${NC}"
    echo -e "${RED}Warning: This will modify your working directories!${NC}"
  else
    echo -e "${YELLOW}Fetching latest commits from all repositories...${NC}"
  fi

  for repo in "${ALL_REPOS[@]}"; do
    if [ -d "$repo" ]; then
      repo_name=$(basename "$repo")

      if [ "$PULL_COMMITS" = true ]; then
        echo -e "  Pulling changes in $repo_name..."
        if git -C "$repo" pull origin 2>/dev/null; then
          echo -e "  ${GREEN}✓${NC} Successfully pulled changes in $repo_name"
        else
          echo -e "  ${RED}✗${NC} Failed to pull changes in $repo_name (trying fetch only)"
          if git -C "$repo" fetch origin 2>/dev/null; then
            echo -e "  ${GREEN}✓${NC} Successfully fetched from $repo_name"
          else
            echo -e "  ${RED}✗${NC} Failed to fetch from $repo_name (continuing with local commits)"
          fi
        fi
      else
        echo -e "  Fetching from $repo_name..."
        if git -C "$repo" fetch origin 2>/dev/null; then
          echo -e "  ${GREEN}✓${NC} Successfully fetched from $repo_name"
        else
          echo -e "  ${RED}✗${NC} Failed to fetch from $repo_name (continuing with local commits)"
        fi
      fi
    else
      echo -e "  ${RED}✗${NC} Repository not found: $repo"
    fi
  done
else
  echo -e "${YELLOW}Skipping fetch (using local commits only)${NC}"
fi

# Collect commits by category from all repositories
echo -e "\n${YELLOW}Analyzing commits across all repositories...${NC}"

# Initialize variables
FEATURES=""
FIXES=""
BREAKING=""
PERF=""
API_CHANGES=""

# Collect commits from all repositories
for repo in "${ALL_REPOS[@]}"; do
  if [ -d "$repo" ]; then
    repo_name=$(basename "$repo")
    echo -e "  Analyzing $repo_name..."

    # Features
    repo_features=$(get_commits_from_repo "$repo" "^[a-f0-9]+ feat:" | sed "s/^[a-f0-9]* feat: /- [$repo_name] /")
    if [ -n "$repo_features" ]; then
      FEATURES="$FEATURES$repo_features"$'\n'
    fi

    # Fixes
    repo_fixes=$(get_commits_from_repo "$repo" "^[a-f0-9]+ fix:" | sed "s/^[a-f0-9]* fix: /- [$repo_name] /")
    if [ -n "$repo_fixes" ]; then
      FIXES="$FIXES$repo_fixes"$'\n'
    fi

    # Breaking changes
    repo_breaking=$(get_commits_from_repo "$repo" "^[a-f0-9]+ .*(BREAKING|breaking change)" | sed "s/^[a-f0-9]* /- [$repo_name] /")
    if [ -n "$repo_breaking" ]; then
      BREAKING="$BREAKING$repo_breaking"$'\n'
    fi

    # Performance improvements
    repo_perf=$(get_commits_from_repo "$repo" "^[a-f0-9]+ perf:" | sed "s/^[a-f0-9]* perf: /- [$repo_name] /")
    if [ -n "$repo_perf" ]; then
      PERF="$PERF$repo_perf"$'\n'
    fi

    # API changes
    repo_api=$(get_commits_from_repo "$repo" "(api|endpoint|/write|/query|/ping|/health|/metrics|v1|v2|v3)" | sed "s/^[a-f0-9]* /- [$repo_name] /")
    if [ -n "$repo_api" ]; then
      API_CHANGES="$API_CHANGES$repo_api"$'\n'
    fi
  fi
done

# Analyze API changes in detail
echo -e "\n${YELLOW}Analyzing HTTP API changes...${NC}"
for repo in "${ALL_REPOS[@]}"; do
  repo_name=$(basename "$repo")
  analyze_api_changes "$repo" "$repo_name"
done

# Generate markdown output
OUTPUT_FILE="release-notes-${TO_VERSION}.md"

# Add features
if [ -n "$FEATURES" ]; then
  echo "$FEATURES" | while IFS= read -r line; do
    if [ -n "$line" ]; then
      PR=$(extract_pr_number "$line")
      # Clean up the commit message
      CLEAN_LINE=$(echo "$line" | sed -E 's/ \(#[0-9]+\)$//')
      if [ -n "$PR" ]; then
        echo "$CLEAN_LINE ([#$PR](https://github.com/influxdata/influxdb/pull/$PR))" >> "$OUTPUT_FILE"
      else
        echo "$CLEAN_LINE" >> "$OUTPUT_FILE"
      fi
    fi
  done
else
  echo "- No new features in this release" >> "$OUTPUT_FILE"
fi

cat >> "$OUTPUT_FILE" << EOF

### Bug Fixes

EOF

if [ -n "$FIXES" ]; then
  echo "$FIXES" | while IFS= read -r line; do
    if [ -n "$line" ]; then
      PR=$(extract_pr_number "$line")
      CLEAN_LINE=$(echo "$line" | sed -E 's/ \(#[0-9]+\)$//')
      if [ -n "$PR" ]; then
        echo "$CLEAN_LINE ([#$PR](https://github.com/influxdata/influxdb/pull/$PR))" >> "$OUTPUT_FILE"
      else
        echo "$CLEAN_LINE" >> "$OUTPUT_FILE"
      fi
    fi
  done
else
  echo "- No bug fixes in this release" >> "$OUTPUT_FILE"
fi

if [ -n "$BREAKING" ]; then
  cat >> "$OUTPUT_FILE" << EOF

### Breaking Changes

EOF
  echo "$BREAKING" | while IFS= read -r line; do
    if [ -n "$line" ]; then
      PR=$(extract_pr_number "$line")
      CLEAN_LINE=$(echo "$line" | sed -E 's/ \(#[0-9]+\)$//')
      if [ -n "$PR" ]; then
        echo "$CLEAN_LINE ([#$PR](https://github.com/influxdata/influxdb/pull/$PR))" >> "$OUTPUT_FILE"
      else
        echo "$CLEAN_LINE" >> "$OUTPUT_FILE"
      fi
    fi
  done
fi

# Add performance improvements if any
if [ -n "$PERF" ]; then
  cat >> "$OUTPUT_FILE" << EOF

### Performance Improvements

EOF
  echo "$PERF" | while IFS= read -r line; do
    if [ -n "$line" ]; then
      PR=$(extract_pr_number "$line")
      CLEAN_LINE=$(echo "$line" | sed -E 's/ \(#[0-9]+\)$//')
      if [ -n "$PR" ]; then
        echo "$CLEAN_LINE ([#$PR](https://github.com/influxdata/influxdb/pull/$PR))" >> "$OUTPUT_FILE"
      else
        echo "$CLEAN_LINE" >> "$OUTPUT_FILE"
      fi
    fi
  done
fi

# Add HTTP API changes if any
if [ -n "$API_CHANGES" ]; then
  cat >> "$OUTPUT_FILE" << EOF

### HTTP API Changes

EOF
  echo "$API_CHANGES" | while IFS= read -r line; do
    if [ -n "$line" ]; then
      PR=$(extract_pr_number "$line")
      CLEAN_LINE=$(echo "$line" | sed -E 's/ \(#[0-9]+\)$//')
      if [ -n "$PR" ]; then
        echo "$CLEAN_LINE ([#$PR](https://github.com/influxdata/influxdb/pull/$PR))" >> "$OUTPUT_FILE"
      else
        echo "$CLEAN_LINE" >> "$OUTPUT_FILE"
      fi
    fi
  done
fi

# Add API analysis summary
cat >> "$OUTPUT_FILE" << EOF

### API Analysis Summary

The following endpoints may have been affected in this release:
- v1 API endpoints: \`/write\`, \`/query\`, \`/ping\`
- v2 API endpoints: \`/api/v2/write\`, \`/api/v2/query\`
- v3 API endpoints: \`/api/v3/*\`
- System endpoints: \`/health\`, \`/metrics\`

Please review the commit details above and consult the API documentation for specific changes.

EOF

echo -e "\n${GREEN}Release notes generated in: ${OUTPUT_FILE}${NC}"
echo -e "${YELLOW}Please review and edit the generated notes before adding to documentation.${NC}"
echo -e "${BLUE}API changes have been automatically detected and included.${NC}"


# Create a processing engine request trigger

# // SECTION - influxdb3-core
curl -v -X POST "http://localhost:8181/api/v3/configure/processingengine/trigger" \
  --header "Authorization: Bearer ${INFLUXDB3_ENTERPRISE_ADMIN_TOKEN}" \
  --json '{
    "db": "sensors",
    "plugin_filename": "request.py",
    "trigger_name": "Process request trigger",
    "trigger_specification": "request:process-request"
}'

# // SECTION - influxdb3-enterprise
curl -v -X POST "http://localhost:8181/api/v3/configure/processingengine/trigger" \
  --header "Authorization: Bearer ${INFLUXDB3_ENTERPRISE_ADMIN_TOKEN}" \
  --json '{
    "db": "sensors",
    "plugin_filename": "request.py",
    "trigger_name": "Process request trigger",
    "trigger_specification": {"request_path": {"path": "process-request"}}
}'
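
The two payloads above differ only in `trigger_specification`: Core (and the Enterprise CLI) take a string, while the Enterprise HTTP API currently takes an object. A small helper makes the difference explicit (a sketch; the helper name is mine, not part of the API):

```python
# Build the JSON body for POST /api/v3/configure/processingengine/trigger.
# Core expects a string spec; the Enterprise HTTP API expects an object
# (see the format warning earlier on this page). Helper name is illustrative.

def trigger_request_body(db, plugin_filename, trigger_name, path,
                         enterprise_api=False):
    spec = {"request_path": {"path": path}} if enterprise_api else f"request:{path}"
    return {
        "db": db,
        "plugin_filename": plugin_filename,
        "trigger_name": trigger_name,
        "trigger_specification": spec,
    }
```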