Merge pull request #5895 from influxdata/pbarnett/update-examples-and-pe-cache

Updates for new cluster configurations in Enterprise and new in-memory cache

commit 081a5ed02e
@@ -118,7 +118,7 @@ tags:
     InfluxDB 3 Core provides the InfluxDB 3 Processing engine, an embedded Python VM that can dynamically load and trigger Python plugins in response to events in your database.
     Use Processing engine plugins and triggers to run code and perform tasks for different database events.

-    To get started with the Processing engine, see the [Processing engine and Python plugins](/influxdb3/core/processing-engine/) guide.
+    To get started with the Processing Engine, see the [Processing Engine and Python plugins](/influxdb3/core/processing-engine/) guide.
 - name: Quick start
   description: |
     1. [Check the status](#section/Server-information) of the InfluxDB server.
@@ -118,7 +118,7 @@ tags:
     InfluxDB 3 Enterprise provides the InfluxDB 3 Processing engine, an embedded Python VM that can dynamically load and trigger Python plugins in response to events in your database.
     Use Processing engine plugins and triggers to run code and perform tasks for different database events.

-    To get started with the Processing engine, see the [Processing engine and Python plugins](/influxdb3/enterprise/processing-engine/) guide.
+    To get started with the Processing Engine, see the [Processing Engine and Python plugins](/influxdb3/enterprise/processing-engine/) guide.
 - name: Quick start
   description: |
     1. [Check the status](#section/Server-information) of the InfluxDB server.
@@ -1,9 +1,9 @@
 ---
-title: Processing engine and Python plugins
+title: Processing Engine and Python plugins
 description: Use the Python processing engine to trigger and execute custom code on different events in an {{< product-name >}} instance.
 menu:
   influxdb3_core:
-    name: Processing engine and Python plugins
+    name: Processing Engine and Python plugins
 weight: 4
 influxdb3/core/tags: []
 related:
@@ -1,9 +1,9 @@
 ---
-title: Processing engine and Python plugins
+title: Processing Engine and Python plugins
 description: Use the Python processing engine to trigger and execute custom code on different events in an {{< product-name >}} instance.
 menu:
   influxdb3_enterprise:
-    name: Processing engine and Python plugins
+    name: Processing Engine and Python plugins
 weight: 4
 influxdb3/core/tags: []
 related:
@@ -156,14 +156,14 @@ The following examples show how to start InfluxDB 3 with different object store
 ```bash
 # Memory object store
 # Stores data in RAM; doesn't persist data
-influxdb3 serve --node-id=local01 --object-store=memory
+influxdb3 serve --node-id=host01 --object-store=memory
 ```

 ```bash
 # Filesystem object store
 # Provide the filesystem directory
 influxdb3 serve \
-  --node-id=local01 \
+  --node-id=host01 \
   --object-store=file \
   --data-dir ~/.influxdb3
 ```
@@ -198,7 +198,7 @@ docker run -it \

 ```bash
 influxdb3 serve \
-  --node-id=local01 \
+  --node-id=host01 \
   --object-store=s3 \
   --bucket=BUCKET \
   --aws-access-key=AWS_ACCESS_KEY \
@@ -211,7 +211,7 @@ influxdb3 serve \
 # Specify the object store type and associated options

 ```bash
-influxdb3 serve --node-id=local01 --object-store=s3 --bucket=BUCKET \
+influxdb3 serve --node-id=host01 --object-store=s3 --bucket=BUCKET \
   --aws-access-key=AWS_ACCESS_KEY \
   --aws-secret-access-key=AWS_SECRET_ACCESS_KEY \
   --aws-endpoint=ENDPOINT \
@@ -1,15 +1,15 @@
-Use the {{% product-name %}} Processing engine to run code and perform tasks
+Use the {{% product-name %}} Processing Engine to run code and perform tasks
 for different database events.

-{{% product-name %}} provides the InfluxDB 3 Processing engine, an embedded Python VM that can dynamically load and trigger Python plugins
+{{% product-name %}} provides the InfluxDB 3 Processing Engine, an embedded Python VM that can dynamically load and trigger Python plugins
 in response to events in your database.

 ## Key Concepts

 ### Plugins

-A Processing engine _plugin_ is Python code you provide to run tasks, such as
+A Processing Engine _plugin_ is Python code you provide to run tasks, such as
 downsampling data, monitoring, creating alerts, or calling external services.

 > [!Note]
@@ -25,7 +25,7 @@ A _trigger_ is an InfluxDB 3 resource you create to associate a database
 event (for example, a WAL flush) with the plugin that should run.
 When an event occurs, the trigger passes configuration details, optional arguments, and event data to the plugin.

-The Processing engine provides four types of triggers--each type corresponds to
+The Processing Engine provides four types of triggers--each type corresponds to
 an event type with event-specific configuration to let you handle events with targeted logic.

 - **WAL Flush**: Triggered when the write-ahead log (WAL) is flushed to the object store (default is every second).
@@ -35,9 +35,9 @@ an event type with event-specific configuration to let you handle events with targeted logic.
 - **Parquet Persistence (coming soon)**: Triggered when InfluxDB 3 persists data to object storage Parquet files.
 -->

-### Activate the Processing engine
+### Activate the Processing Engine

-To enable the Processing engine, start the {{% product-name %}} server with the
+To enable the Processing Engine, start the {{% product-name %}} server with the
 `--plugin-dir` option and a path to your plugins directory.
 If the directory doesn’t exist, the server creates it.
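+
+For example, a start command with the Processing Engine enabled might look like this (a minimal sketch; the plugin directory path is a placeholder, and the other options follow the serve examples elsewhere in these docs):
+
+```bash
+influxdb3 serve --node-id=host01 --object-store=file --data-dir ~/.influxdb3 --plugin-dir ~/.influxdb3/plugins
+```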
@@ -234,7 +234,7 @@ influx create trigger --run-asynchronously
-#### Configure error handling
+#### Configure error behavior for plugins

-The Processing engine logs all plugin errors to stdout and the `system.processing_engine_logs` system table.
+The Processing Engine logs all plugin errors to stdout and the `system.processing_engine_logs` system table.

 To configure additional error handling for a trigger, use the `--error-behavior` flag:
@@ -466,3 +466,153 @@ To run the plugin, you send an HTTP request to `<HOST>/api/v3/engine/my-plugin`.
 Because all On Request plugins for a server share the same `<host>/api/v3/engine/` base URL,
 the trigger-spec you define must be unique across all plugins configured for a server,
 regardless of which database they are associated with.
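+
+For example, you could invoke such a plugin with a plain HTTP request (a sketch; replace `<HOST>` with your server's base URL and `my-plugin` with your trigger-spec):
+
+```bash
+curl "<HOST>/api/v3/engine/my-plugin"
+```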
+
+## In-memory cache
+
+The Processing Engine provides a powerful in-memory cache system that enables plugins to persist and retrieve data between executions. This cache system is essential for maintaining state, tracking metrics over time, and optimizing performance when working with external data sources.
+
+### Key Benefits
+
+- **State persistence**: Maintain counters, timestamps, and other state variables across plugin executions.
+- **Performance and cost optimization**: Store frequently used data to avoid expensive recalculations, and cache responses from external APIs to reduce calls and stay under rate limits.
+- **Data enrichment**: Cache lookup tables, API responses, or reference data to enrich data efficiently.
+
+### Cache API
+
+The cache API is accessible via the `cache` property on the `influxdb3_local` object provided to all plugin types:
+
+```python
+# Basic usage pattern
+influxdb3_local.cache.METHOD(PARAMETERS)
+```
+
+| Method | Parameters | Returns | Description |
+|--------|------------|---------|-------------|
+| `put` | `key` (str): The key to store the value under<br>`value` (Any): Any Python object to cache<br>`ttl` (Optional[float], default=None): Time in seconds before expiration<br>`use_global` (bool, default=False): If True, uses global namespace | None | Stores a value in the cache with an optional time-to-live |
+| `get` | `key` (str): The key to retrieve<br>`default` (Any, default=None): Value to return if key not found<br>`use_global` (bool, default=False): If True, uses global namespace | Any | Retrieves a value from the cache or returns default if not found |
+| `delete` | `key` (str): The key to delete<br>`use_global` (bool, default=False): If True, uses global namespace | bool | Deletes a value from the cache. Returns True if deleted, False if not found |
+
+### Cache Namespaces
+
+The cache system offers two distinct namespaces, providing flexibility for different use cases:
+
+| Namespace | Scope | Best For |
+| --- | --- | --- |
+| **Trigger-specific** (default) | Isolated to a single trigger | Plugin state, counters, timestamps specific to one plugin |
+| **Global** | Shared across all triggers | Configuration, lookup tables, service states that should be available to all plugins |
+
+### Using the In-Memory Cache
+
+The following examples show how to use the cache API in plugins:
+
+```python
+# Store values in the trigger-specific namespace
+influxdb3_local.cache.put("last_processed_time", time.time())
+influxdb3_local.cache.put("error_count", 0)
+influxdb3_local.cache.put("processed_records", {"total": 0, "errors": 0})
+
+# Store values with expiration
+influxdb3_local.cache.put("temp_data", {"value": 42}, ttl=300)  # Expires in 5 minutes
+influxdb3_local.cache.put("auth_token", "t0k3n", ttl=3600)  # Expires in 1 hour
+
+# Store values in the global namespace
+influxdb3_local.cache.put("app_config", {"version": "1.0.2"}, use_global=True)
+influxdb3_local.cache.put("global_counter", 0, use_global=True)
+
+# Retrieve values
+last_time = influxdb3_local.cache.get("last_processed_time")
+auth = influxdb3_local.cache.get("auth_token")
+config = influxdb3_local.cache.get("app_config", use_global=True)
+
+# Provide defaults for missing keys
+missing = influxdb3_local.cache.get("missing_key", default="Not found")
+count = influxdb3_local.cache.get("visit_count", default=0)
+
+# Delete cached values
+influxdb3_local.cache.delete("temp_data")
+influxdb3_local.cache.delete("app_config", use_global=True)
+```
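+
+Keys are scoped to their namespace, so the same key name can hold independent values in the trigger-specific and global namespaces (a minimal sketch based on the methods above; `counter` is a hypothetical key):
+
+```python
+influxdb3_local.cache.put("counter", 1)                     # trigger-specific namespace
+influxdb3_local.cache.put("counter", 100, use_global=True)  # global namespace
+
+influxdb3_local.cache.get("counter")                   # returns 1
+influxdb3_local.cache.get("counter", use_global=True)  # returns 100
+```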
+
+#### Example: Maintaining State Between Executions
+
+This example shows a WAL plugin that uses the cache to maintain a counter across executions:
+
+```python
+def process_writes(influxdb3_local, table_batches, args=None):
+    # Get the current counter value or default to 0
+    counter = influxdb3_local.cache.get("execution_counter", default=0)
+
+    # Increment the counter
+    counter += 1
+
+    # Store the updated counter back in the cache
+    influxdb3_local.cache.put("execution_counter", counter)
+
+    influxdb3_local.info(f"This plugin has been executed {counter} times")
+
+    # Process writes normally...
+```
+
+#### Example: Sharing Configuration Across Triggers
+
+One benefit of the global namespace is being more responsive to changing conditions. This example uses the global namespace to share configuration, so a scheduled call can check thresholds cached by prior trigger calls without querying the database itself:
+
+```python
+def process_scheduled_call(influxdb3_local, time, args=None):
+    # Check if we have cached configuration
+    config = influxdb3_local.cache.get("alert_config", use_global=True)
+
+    if not config:
+        # Load configuration from the database
+        results = influxdb3_local.query("SELECT * FROM system.alert_config")
+
+        # Transform query results into a config object
+        config = {row["name"]: row["value"] for row in results}
+
+        # Cache the configuration with a 5-minute TTL
+        influxdb3_local.cache.put("alert_config", config, ttl=300, use_global=True)
+        influxdb3_local.info("Loaded fresh configuration from database")
+    else:
+        influxdb3_local.info("Using cached configuration")
+
+    # Use the configuration
+    threshold = float(config.get("cpu_threshold", "90.0"))
+    # ...
+```
+
+The cache is designed to support stateful operations while maintaining isolation between different triggers. Use the trigger-specific namespace for most operations, and use the global namespace only when data must be shared across triggers.
+
+### Best Practices
+
+#### Use TTL Appropriately
+
+Set realistic expiration times based on how frequently data changes.
+
+```python
+# Cache external API responses for 5 minutes
+influxdb3_local.cache.put("weather_data", api_response, ttl=300)
+```
+
+#### Cache Computation Results
+
+Store the results of expensive calculations that you use frequently.
+
+```python
+# Cache aggregated statistics
+influxdb3_local.cache.put("daily_stats", calculate_statistics(data), ttl=3600)
+```
+
+#### Implement Cache Warm-Up
+
+Prime the cache at startup for critical data. This is especially useful for global-namespace data that multiple triggers need.
+
+```python
+# Check if the cache needs to be initialized
+if not influxdb3_local.cache.get("lookup_table"):
+    influxdb3_local.cache.put("lookup_table", load_lookup_data())
+```
+
+#### Cache Limitations
+
+- **Memory usage**: Cache contents are stored in memory, so monitor your memory usage when caching large datasets.
+- **Server restarts**: The cache is cleared when the server restarts, so design your plugins to handle cache initialization (as in the warm-up example above).
+- **Concurrency**: Be cautious when multiple trigger instances might update the same cache key simultaneously; concurrent read-modify-write sequences can produce stale or inaccurate values.
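+
+For example, incrementing a shared counter is a read-modify-write sequence, and the API above provides no atomic update, so concurrent triggers can lose updates (a minimal sketch of the hazard; `visits` is a hypothetical key):
+
+```python
+# Two triggers running this at the same time can read the same value
+# and write the same incremented result, losing one increment.
+count = influxdb3_local.cache.get("visits", default=0, use_global=True)
+influxdb3_local.cache.put("visits", count + 1, use_global=True)
+```
+
+If exact counts matter, prefer trigger-specific keys and combine the per-trigger values when you read them.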
@@ -147,14 +147,15 @@ The following examples show how to start InfluxDB 3 with different object store
 ```bash
 # Memory object store
 # Stores data in RAM; doesn't persist data
-influxdb3 serve --node-id=local01 --object-store=memory
+influxdb3 serve --node-id=host01 --cluster-id=cluster01 --object-store=memory
 ```

 ```bash
 # Filesystem object store
 # Provide the filesystem directory
 influxdb3 serve \
-  --node-id=local01 \
+  --node-id=host01 \
+  --cluster-id=cluster01 \
   --object-store=file \
   --data-dir ~/.influxdb3
 ```
@@ -178,6 +179,7 @@ docker run -it \
   -v /path/on/host:/path/in/container \
   quay.io/influxdb/influxdb3-enterprise:latest serve \
   --node-id my_host \
+  --cluster-id my_cluster \
   --object-store file \
   --data-dir /path/in/container
 ```
@@ -188,7 +190,8 @@ docker run -it \

 ```bash
 influxdb3 serve \
-  --node-id=local01 \
+  --node-id=host01 \
+  --cluster-id=cluster01 \
   --object-store=s3 \
   --bucket=BUCKET \
   --aws-access-key=AWS_ACCESS_KEY \
@@ -201,7 +204,11 @@ influxdb3 serve \
 # Specify the object store type and associated options

 ```bash
-influxdb3 serve --node-id=local01 --object-store=s3 --bucket=BUCKET \
+influxdb3 serve \
+  --node-id=host01 \
+  --cluster-id=cluster01 \
+  --object-store=s3 \
+  --bucket=BUCKET \
   --aws-access-key=AWS_ACCESS_KEY \
   --aws-secret-access-key=AWS_SECRET_ACCESS_KEY \
   --aws-endpoint=ENDPOINT \
@@ -844,23 +851,18 @@ In a basic HA setup:
 > Compacted data is meant for a single writer, and many readers.

 The following examples show how to configure and start two nodes
-for a basic HA setup.
-The example commands pass the following options:
-
-- `--read-from-node-ids`: makes the node a _read replica_, which checks the Object store for data arriving from other nodes
-- `--compactor-id`: activates the Compactor for a node. Only one node can run compaction
-- `--run-compactions`: ensures the Compactor runs the compaction process
+for a basic HA setup. _Node 1_ is configured as the compactor (`--mode` includes `compact`).

 ```bash
 ## NODE 1

 # Example variables
 # node-id: 'host01'
+# cluster-id: 'cluster01'
 # bucket: 'influxdb-3-enterprise-storage'
-# compactor-id: 'c01'

-influxdb3 serve --node-id=host01 --read-from-node-ids=host02 --compactor-id=c01 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://{{< influxdb/host >}} --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+influxdb3 serve --node-id=host01 --cluster-id=cluster01 --mode=ingest,query,compact --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://{{< influxdb/host >}} --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
 ```

 ```
@@ -868,51 +870,52 @@ influxdb3 serve --node-id=host01 --read-from-node-ids=host02 --compactor-id=c01

 # Example variables
 # node-id: 'host02'
+# cluster-id: 'cluster01'
 # bucket: 'influxdb-3-enterprise-storage'

-influxdb3 serve --node-id=host02 --read-from-node-ids=host01 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8282
+influxdb3 serve --node-id=host02 --cluster-id=cluster01 --mode=ingest,query --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8282
   --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
 ```

-After the nodes have started, querying either node returns data for both nodes, and `NODE 1` runs compaction.
-To add nodes to this setup, start more read replicas:
-
-```bash
-influxdb3 serve --read-from-node-ids=host01,host02 [...OPTIONS]
-```
+After the nodes have started, querying either node returns data for both nodes, and _NODE 1_ runs compaction.
+To add nodes to this setup, start more read replicas with the same cluster ID:
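+
+For example, a third node might join the cluster like this (a sketch; `host03` and the `--mode` value are placeholders to adapt for your workload):
+
+```bash
+influxdb3 serve --node-id=host03 --cluster-id=cluster01 --mode=ingest,query [...OPTIONS]
+```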

 > [!Note]
 > To run this setup for testing, you can start nodes in separate terminals and pass a different `--http-bind` value for each--for example:
 >
 > ```bash
 > # In terminal 1
-> influxdb3 serve --node-id=host01 --http-bind=http://{{< influxdb/host >}} [...OPTIONS]
+> influxdb3 serve --node-id=host01 \
+>   --cluster-id=cluster01 \
+>   --http-bind=http://{{< influxdb/host >}} [...OPTIONS]
 > ```
 >
 > ```bash
 > # In terminal 2
-> influxdb3 serve --node-id=host01 --http-bind=http://{{< influxdb/host >}} [...OPTIONS]
+> influxdb3 serve --node-id=host02 \
+>   --cluster-id=cluster01 \
+>   --http-bind=http://{{< influxdb/host >}} [...OPTIONS]
 > ```

 ### High availability with a dedicated Compactor

 Data compaction in InfluxDB 3 is one of the more computationally expensive operations.
-To ensure that your read-write node doesn’t slow down due to compaction work, set up a compactor-only node for consistent and high performance across all nodes.
+To ensure that your read-write nodes don't slow down due to compaction work, set up a compactor-only node for consistent and high performance across all nodes.

 {{< img-hd src="/img/influxdb/influxdb-3-enterprise-dedicated-compactor.png" alt="Dedicated Compactor setup" />}}

 The following examples show how to set up HA with a dedicated Compactor node:

-1. Start two read-write nodes as read replicas, similar to the previous example,
-and pass the `--compactor-id` option with a dedicated compactor ID (which you'll configure in the next step).
+1. Start two read-write nodes as read replicas, similar to the previous example.

 ```
 ## NODE 1 — Writer/Reader Node #1

 # Example variables
 # node-id: 'host01'
+# cluster-id: 'cluster01'
 # bucket: 'influxdb-3-enterprise-storage'

-influxdb3 serve --node-id=host01 --compactor-id=c01 --read-from-node-ids=host02 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://{{< influxdb/host >}} --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+influxdb3 serve --node-id=host01 --cluster-id=cluster01 --mode=ingest,query --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://{{< influxdb/host >}} --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
 ```

 ```bash
@@ -920,17 +923,13 @@ The following examples show how to set up HA with a dedicated Compactor node:

 # Example variables
 # node-id: 'host02'
+# cluster-id: 'cluster01'
 # bucket: 'influxdb-3-enterprise-storage'

-influxdb3 serve --node-id=host02 --compactor-id=c01 --read-from-node-ids=host01 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+influxdb3 serve --node-id=host02 --cluster-id=cluster01 --mode=ingest,query --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
 ```

-2. Start the dedicated compactor node, which uses the following options:
-
-   - `--mode=compactor`: Ensures the node **only** runs compaction.
-   - `--compaction-hosts`: Specifies a comma-delimited list of hosts to run compaction for.
-
-   _**Don't include the replicas (`--read-from-node-ids`) parameter because this node doesn't replicate data._
+2. Start the dedicated compactor node with the `--mode=compact` option. This ensures the node **only** runs compaction.

 ```bash

@@ -938,10 +937,10 @@ The following examples show how to set up HA with a dedicated Compactor node:

 # Example variables
 # node-id: 'host03'
+# cluster-id: 'cluster01'
 # bucket: 'influxdb-3-enterprise-storage'
-# compactor-id: 'c01'

-influxdb3 serve --node-id=host03 --mode=compactor --compactor-id=c01 --compaction-hosts=host01,host02 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+influxdb3 serve --node-id=host03 --cluster-id=cluster01 --mode=compact --object-store=s3 --bucket=influxdb-3-enterprise-storage --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
 ```

 ### High availability with read replicas and a dedicated Compactor
@@ -950,18 +949,18 @@ For a very robust and effective setup for managing time-series data, you can run

 {{< img-hd src="/img/influxdb/influxdb-3-enterprise-workload-isolation.png" alt="Workload Isolation Setup" />}}

-1. Start writer nodes for ingest. Enterprise doesn’t designate a write-only mode, so assign them **`read_write`** mode.
-To achieve the benefits of workload isolation, you'll send _only write requests_ to these read-write nodes. Later, you'll configure the _read-only_ nodes.
+1. Start ingest nodes by assigning them the **`ingest`** mode.
+To achieve the benefits of workload isolation, you'll send _only write requests_ to these ingest nodes. Later, you'll configure the _read-only_ nodes.

-```
+```bash
 ## NODE 1 — Writer Node #1

 # Example variables
 # node-id: 'host01'
+# cluster-id: 'cluster01'
 # bucket: 'influxdb-3-enterprise-storage'

-influxdb3 serve --node-id=host01 --mode=read_write --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://{{< influxdb/host >}} --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
-
+influxdb3 serve --node-id=host01 --cluster-id=cluster01 --mode=ingest --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://{{< influxdb/host >}} --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
 ```

 <!-- The following examples use different ports for different nodes. Don't use the influxdb/host shortcode below. -->
@@ -971,47 +970,45 @@ For a very robust and effective setup for managing time-series data, you can run

 # Example variables
 # node-id: 'host02'
+# cluster-id: 'cluster01'
 # bucket: 'influxdb-3-enterprise-storage'

-Usage: $ influxdb3 serve --node-id=host02 --mode=read_write --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+Usage: $ influxdb3 serve --node-id=host02 --cluster-id=cluster01 --mode=ingest --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
 ```

-2. Start the dedicated Compactor node (`--mode=compactor`) and ensure it runs compactions on the specified `compaction-hosts`.
+2. Start the dedicated Compactor node with `--mode=compact`.

-```
+```bash
 ## NODE 3 — Compactor Node

 # Example variables
 # node-id: 'host03'
+# cluster-id: 'cluster01'
 # bucket: 'influxdb-3-enterprise-storage'

-influxdb3 serve --node-id=host03 --mode=compactor --compaction-hosts=host01,host02 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+influxdb3 serve --node-id=host03 --cluster-id=cluster01 --mode=compact --object-store=s3 --bucket=influxdb-3-enterprise-storage --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
 ```

-3. Finally, start the query nodes as _read-only_.
-   Include the following options:
-
-   - `--mode=read`: Sets the node to _read-only_
-   - `--read-from-node-ids=host01,host02`: A comma-demlimited list of host IDs to read data from
+3. Finally, start the query nodes as _read-only_ with `--mode=query`.

 ```bash
 ## NODE 4 — Read Node #1

 # Example variables
 # node-id: 'host04'
+# cluster-id: 'cluster01'
 # bucket: 'influxdb-3-enterprise-storage'

-influxdb3 serve --node-id=host04 --mode=read --object-store=s3 --read-from-node-ids=host01,host02 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8383 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+influxdb3 serve --node-id=host04 --cluster-id=cluster01 --mode=query --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8383 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
 ```

-```
+```bash
 ## NODE 5 — Read Node #2

 # Example variables
 # node-id: 'host05'
 # bucket: 'influxdb-3-enterprise-storage'

-influxdb3 serve --node-id=host05 --mode=read --object-store=s3 --read-from-node-ids=host01,host02 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8484 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
+influxdb3 serve --node-id=host05 --cluster-id=cluster01 --mode=query --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=http://localhost:8484 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
 ```

 Congratulations, you have a robust setup for workload isolation using {{% product-name %}}.