Core and Ent3 performance tuning guide and configuration options (#6421)

* feat(influxdb3): Core and Ent performance tuning guide: Add an admin/performance-tuning/ page with specific workload and capacity configurations. Part of #6403.
* fix(influxdb3): product-specific link fragments for flags
* fix(influxdb3): enterprise-specific link fragments
* Apply suggestion from @jstirnaman
* fix(influxdb3): duplicate licensing and resource limits sections (#6470)
  - Remove duplicate licensing section
  - Resolve resource limits duplicates, merging details into the Resource limits section
* fix(influxdb3): fix broken links and enterprise-only flags in config options
  - Comment out TOC links to undocumented datafusion-runtime-* dev flags
  - Wrap enterprise-only section references (#licensing, #resource-limits) in conditionals
  - Fix num-datafusion-threads incorrectly marked as enterprise-only
  - Move Resource limits section heading outside the enterprise wrapper
  Resolves broken fragment links for both Core and Enterprise builds.
* feat(enterprise): add cluster management documentation (#6431)
  Add comprehensive guide for managing InfluxDB 3 Enterprise clusters, including:
  - Node configuration and deployment
  - Cluster initialization and scaling
  - Node removal and replacement procedures
  - Best practices for production deployments
* Fixes multiple influxdb3 config option issues:
  - Fixed option placement (global vs serve options) in performance-tuning.md
  - Fixed --datafusion-num-threads option name (was --num-datafusion-threads)
  - Fixed --parquet-mem-cache-size option name and defaults for Core
  - Commented out unreleased --compaction-row-limit option
  - Added v3.0.0 breaking changes to release notes
  - Updated config-options.md with correct defaults and value formats
  All changes verified against InfluxDB v3.5.0 release binaries and git history.
* fix(influxdb3): config options in clustering.md
  - Correctly place server options
  - Comment out unreleased options

pull/6476/head
parent 9606e1bd3e
commit a30345170c

@@ -0,0 +1,22 @@

---
title: Performance tuning
seotitle: InfluxDB 3 Core performance tuning and optimization
description: >
  Optimize {{% product-name %}} performance by tuning thread allocation,
  memory settings, and other configuration options for your specific workload.
weight: 205
menu:
  influxdb3_core:
    parent: Administer InfluxDB
    name: Performance tuning
related:
  - /influxdb3/core/reference/internals/runtime-architecture/
  - /influxdb3/core/reference/config-options/
  - /influxdb3/core/admin/query-system-data/
source: /shared/influxdb3-admin/performance-tuning.md
---

<!--
The content of this file is located at
//SOURCE - content/shared/influxdb3-admin/performance-tuning.md
-->

@@ -0,0 +1,642 @@

---
title: Configure specialized cluster nodes
seotitle: Configure InfluxDB 3 Enterprise cluster nodes for optimal performance
description: >
  Learn how to configure specialized nodes in your InfluxDB 3 Enterprise cluster
  for ingest, query, compaction, and processing workloads with optimal thread allocation.
menu:
  influxdb3_enterprise:
    parent: Administer InfluxDB
    name: Configure specialized cluster nodes
weight: 100
related:
  - /influxdb3/enterprise/admin/performance-tuning/
  - /influxdb3/enterprise/reference/internals/runtime-architecture/
  - /influxdb3/enterprise/reference/config-options/
  - /influxdb3/enterprise/admin/query-system-data/
influxdb3/enterprise/tags: [clustering, performance, tuning, ingest, threads]
---

Optimize performance for specific workloads in your {{% product-name %}} cluster
by configuring specialized nodes in distributed deployments.
Assign specific modes and thread allocations to nodes to maximize
cluster efficiency.

- [Specialize nodes for specific workloads](#specialize-nodes-for-specific-workloads)
- [Configure node modes](#configure-node-modes)
- [Allocate threads by node type](#allocate-threads-by-node-type)
- [Configure ingest nodes](#configure-ingest-nodes)
- [Configure query nodes](#configure-query-nodes)
- [Configure compactor nodes](#configure-compactor-nodes)
- [Configure process nodes](#configure-process-nodes)
- [Multi-mode configurations](#multi-mode-configurations)
- [Cluster architecture examples](#cluster-architecture-examples)
- [Scale your cluster](#scale-your-cluster)
- [Monitor performance](#monitor-performance)
- [Troubleshoot node configurations](#troubleshoot-node-configurations)
- [Best practices](#best-practices)
- [Migrate to specialized nodes](#migrate-to-specialized-nodes)
- [Manage configurations](#manage-configurations)

## Specialize nodes for specific workloads

In an {{% product-name %}} cluster, you can dedicate nodes to specific tasks:

- **Ingest nodes**: Optimized for high-throughput data ingestion
- **Query nodes**: Maximized for complex analytical queries
- **Compactor nodes**: Dedicated to data compaction and optimization
- **Process nodes**: Focused on data processing and transformations
- **All-in-one nodes**: Balanced for mixed workloads

## Configure node modes

Pass the `--mode` option when starting the node to specify its capabilities:

```bash
# Single mode
influxdb3 serve --mode=ingest

# Multiple modes
influxdb3 serve --mode=ingest,query

# All modes (default)
influxdb3 serve --mode=all
```

Available modes:

- `all`: All capabilities enabled (default)
- `ingest`: Data ingestion and line protocol parsing
- `query`: Query execution and data retrieval
- `compact`: Background compaction and optimization
- `process`: Data processing and transformations

## Allocate threads by node type

### Critical concept: Thread pools

Every node has two thread pools that must be properly configured:

1. **IO threads**: Parse line protocol and handle HTTP requests
2. **DataFusion threads**: Execute queries, create data snapshots (convert [WAL data](/influxdb3/enterprise/reference/internals/durability/#write-ahead-log-wal) to Parquet files), and perform compaction

> [!Note]
> Even specialized nodes need both thread types. Ingest nodes use DataFusion threads
> for creating data snapshots that convert [WAL data](/influxdb3/enterprise/reference/internals/durability/#write-ahead-log-wal) to Parquet files, and query nodes use IO threads for handling requests.

## Configure ingest nodes

Ingest nodes handle high-volume data writes and require significant IO thread allocation
for line protocol parsing.

### Example medium ingester (32 cores)

```bash
influxdb3 \
  --num-io-threads=12 \
  serve \
  --num-cores=32 \
  --datafusion-num-threads=20 \
  --exec-mem-pool-bytes=60% \
  --mode=ingest \
  --node-id=ingester-01
```

**Configuration rationale:**

- **12 IO threads**: Handle multiple concurrent writers (Telegraf agents, applications)
- **20 DataFusion threads**: Required for data snapshot operations that convert buffered writes to Parquet files
- **60% memory pool**: Balance between write buffers and data snapshot operations

### Monitor ingest performance

Key metrics for ingest nodes:

```bash
# Monitor IO thread utilization
top -H -p $(pgrep influxdb3) | grep io_worker

# Check write request counts by endpoint
curl -s http://localhost:8181/metrics | grep 'http_requests_total.*write'

# Check overall HTTP request metrics
curl -s http://localhost:8181/metrics | grep 'http_requests_total'

# Monitor WAL size
du -sh /path/to/data/wal/
```

> [!Important]
> #### Scale IO threads with concurrent writers
>
> If you see only 2 CPU cores at 100% on a large ingester, increase
> `--num-io-threads`.
> Each concurrent writer can utilize approximately one IO thread.

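The guidance in the Important note above can be turned into a quick sizing check. The following is a rough sketch, not an official formula: it assumes approximately one IO thread per expected concurrent writer (capped at half the cores), with the remaining cores given to DataFusion threads. For 32 cores and 12 writers it reproduces the medium ingester example above.

```shell
#!/usr/bin/env bash
# Rough sizing heuristic (an assumption, not an official formula):
# one IO thread per expected concurrent writer, capped at half the cores;
# remaining cores go to DataFusion threads.
cores=32
writers=12

io_threads=$(( writers < cores / 2 ? writers : cores / 2 ))
datafusion_threads=$(( cores - io_threads ))

echo "--num-io-threads=${io_threads} --datafusion-num-threads=${datafusion_threads}"
# → --num-io-threads=12 --datafusion-num-threads=20
```

Adjust `cores` and `writers` to your hardware and client fleet, then validate the result against the monitoring commands above before adopting it.
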
## Configure query nodes

Query nodes execute complex analytical queries and need maximum DataFusion threads.

### Analytical query node (64 cores)

<!-- DEV-ONLY FLAGS: DO NOT DOCUMENT --datafusion-runtime-type IN PRODUCTION DOCS
This flag will be removed in future versions.
Only multi-thread mode should be used (which is the default).
The current-thread option is deprecated and will be removed.
Future editors: Keep this commented out or remove the flag entirely. -->

```bash
influxdb3 \
  --num-io-threads=4 \
  serve \
  --num-cores=64 \
  --datafusion-num-threads=60 \
  --exec-mem-pool-bytes=90% \
  --parquet-mem-cache-size=8GB \
  --mode=query \
  --node-id=query-01 \
  --cluster-id=prod-cluster
```

**Configuration rationale:**

- **4 IO threads**: Minimal allocation, just for HTTP request handling
- **60 DataFusion threads**: Maximum parallelism for query execution
- **90% memory pool**: Maximize memory for complex aggregations
- **8 GB Parquet cache**: Keep frequently accessed data in memory

### Real-time query node (32 cores)

```bash
influxdb3 \
  --num-io-threads=6 \
  serve \
  --num-cores=32 \
  --datafusion-num-threads=26 \
  --exec-mem-pool-bytes=80% \
  --parquet-mem-cache-size=4GB \
  --mode=query \
  --node-id=query-02
```

### Optimize query settings

You can configure `datafusion` properties for additional tuning of query nodes:

```bash
influxdb3 serve \
  --datafusion-config "datafusion.execution.batch_size:16384,datafusion.execution.target_partitions:60" \
  --mode=query
```

## Configure compactor nodes

Compactor nodes optimize stored data through background compaction processes.

### Dedicated compactor (32 cores)

```bash
influxdb3 \
  --num-io-threads=2 \
  serve \
  --num-cores=32 \
  --datafusion-num-threads=30 \
  --compaction-gen2-duration=24h \
  --compaction-check-interval=5m \
  --mode=compact \
  --node-id=compactor-01 \
  --cluster-id=prod-cluster

# Note: the --compaction-row-limit option is not yet released in v3.5.0.
# Uncomment when available in a future release:
# --compaction-row-limit=2000000 \
```

**Configuration rationale:**

- **2 IO threads**: Minimal allocation; compaction is DataFusion-intensive
- **30 DataFusion threads**: Maximum threads for sort/merge operations
- **24h gen2 duration**: Time-based compaction strategy

### Tune compaction parameters

You can adjust compaction strategies to balance performance and resource usage:

```bash
# Configure compaction strategy
--compaction-multipliers=4,8,16 \
--compaction-max-num-files-per-plan=100 \
--compaction-cleanup-wait=10m
```

## Configure process nodes

Process nodes handle data transformations and processing plugins.

### Processing node (16 cores)

```bash
influxdb3 \
  --num-io-threads=4 \
  serve \
  --num-cores=16 \
  --datafusion-num-threads=12 \
  --plugin-dir=/path/to/plugins \
  --mode=process \
  --node-id=processor-01 \
  --cluster-id=prod-cluster
```

## Multi-mode configurations

Some deployments benefit from nodes handling multiple responsibilities.

### Ingest + Query node (48 cores)

```bash
influxdb3 \
  --num-io-threads=12 \
  serve \
  --num-cores=48 \
  --datafusion-num-threads=36 \
  --exec-mem-pool-bytes=75% \
  --mode=ingest,query \
  --node-id=hybrid-01
```

### Query + Compact node (32 cores)

```bash
influxdb3 \
  --num-io-threads=4 \
  serve \
  --num-cores=32 \
  --datafusion-num-threads=28 \
  --mode=query,compact \
  --node-id=qc-01
```

## Cluster architecture examples

### Small cluster (3 nodes)

```yaml
# Node 1: All-in-one primary
mode: all
cores: 32
io_threads: 8
datafusion_threads: 24

# Node 2: All-in-one secondary
mode: all
cores: 32
io_threads: 8
datafusion_threads: 24

# Node 3: All-in-one tertiary
mode: all
cores: 32
io_threads: 8
datafusion_threads: 24
```

### Medium cluster (6 nodes)

```yaml
# Nodes 1-2: Ingesters
mode: ingest
cores: 48
io_threads: 16
datafusion_threads: 32

# Nodes 3-4: Query nodes
mode: query
cores: 48
io_threads: 4
datafusion_threads: 44

# Nodes 5-6: Compactor + Process
mode: compact,process
cores: 32
io_threads: 4
datafusion_threads: 28
```

### Large cluster (12+ nodes)

```yaml
# Nodes 1-4: High-throughput ingesters
mode: ingest
cores: 96
io_threads: 20
datafusion_threads: 76

# Nodes 5-8: Query nodes
mode: query
cores: 64
io_threads: 4
datafusion_threads: 60

# Nodes 9-10: Dedicated compactors
mode: compact
cores: 32
io_threads: 2
datafusion_threads: 30

# Nodes 11-12: Process nodes
mode: process
cores: 32
io_threads: 6
datafusion_threads: 26
```

## Scale your cluster

### Vertical scaling limitations

In a {{% product-name %}} cluster, ingest nodes handle all write parsing and buffering, so single-node ingest capacity is bounded by CPU. To maximize ingest performance:

- **Scale IO threads with concurrent writers**: Each concurrent writer can utilize approximately one IO thread for line protocol parsing
- **Use high-core machines**: Line protocol parsing is CPU-intensive and benefits from more cores
- **Deploy multiple ingest nodes**: Run several ingest nodes behind a load balancer to distribute write load
- **Optimize batch sizes**: Configure clients to send larger batches to reduce per-request overhead

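To make the batch-size point above concrete, the following sketch builds one batched line protocol payload and sends it in a single request instead of one request per point. The endpoint path, port, database name, and token are illustrative placeholders taken from other examples in this guide; adjust them for your deployment.

```shell
#!/usr/bin/env bash
# Build one batched line-protocol payload instead of one request per point.
batch_file=$(mktemp)
for i in $(seq 1 5000); do
  echo "cpu,host=server-$(( i % 100 )) usage=0.5 $(( 1700000000000000000 + i ))"
done > "$batch_file"

echo "$(wc -l < "$batch_file") points in one request body"

# Send the whole batch in a single write request (placeholder URL and token):
# curl -s "http://localhost:8181/api/v3/write_lp?db=sensors" \
#   -H "Authorization: Bearer YOUR_TOKEN" \
#   --data-binary @"$batch_file"
```

Fewer, larger requests reduce per-request HTTP overhead on the IO thread pool, which is usually the first bottleneck on write-heavy nodes.
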
### Scale queries horizontally

Query nodes can scale horizontally since they all access the same object store:

```bash
# Add query nodes as needed
for i in {1..10}; do
  influxdb3 \
    --num-io-threads=4 \
    serve \
    --num-cores=32 \
    --datafusion-num-threads=28 \
    --mode=query \
    --node-id=query-$i &
done
```

## Monitor performance

### Node-specific metrics

Monitor specialized nodes differently based on their role:

#### Ingest nodes

```sql
-- Monitor write activity through Parquet file creation
SELECT
  table_name,
  count(*) as files_created,
  sum(row_count) as total_rows,
  sum(size_bytes) as total_bytes
FROM system.parquet_files
WHERE max_time > extract(epoch from now() - INTERVAL '5 minutes') * 1000000000
GROUP BY table_name;
```

#### Query nodes

```sql
-- Monitor query performance
SELECT
  count(*) as query_count,
  avg(execute_duration) as avg_execute_time,
  max(max_memory) as max_memory_bytes
FROM system.queries
WHERE issue_time > now() - INTERVAL '5 minutes'
  AND success = true;
```

#### Compactor nodes

```sql
-- Monitor compaction progress
SELECT
  event_type,
  event_status,
  count(*) as event_count,
  avg(event_duration) as avg_duration
FROM system.compaction_events
WHERE event_time > now() - INTERVAL '1 hour'
GROUP BY event_type, event_status
ORDER BY event_count DESC;
```

### Monitor cluster-wide metrics

```bash
# Check node health via HTTP endpoints
for node in ingester-01:8181 query-01:8181 compactor-01:8181; do
  echo "Node: $node"
  curl -s "http://$node/health"
done

# Monitor metrics from each node
for node in ingester-01:8181 query-01:8181 compactor-01:8181; do
  echo "=== Metrics from $node ==="
  curl -s "http://$node/metrics" | grep -E "(cpu_usage|memory_usage|http_requests_total)"
done

# Query system tables for cluster-wide monitoring
curl -X POST "http://query-01:8181/api/v3/query_sql" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{
    "q": "SELECT * FROM system.queries WHERE issue_time > now() - INTERVAL '\''5 minutes'\'' ORDER BY issue_time DESC LIMIT 10",
    "db": "sensors"
  }'
```

> [!Tip]
> ### Extend monitoring with plugins
>
> Enhance your cluster monitoring capabilities using the InfluxDB 3 processing engine. The [InfluxDB 3 plugins library](https://github.com/influxdata/influxdb3_plugins) includes several monitoring and alerting plugins:
>
> - **System metrics collection**: Collect CPU, memory, disk, and network statistics
> - **Threshold monitoring**: Monitor metrics with configurable thresholds and alerting
> - **Multi-channel notifications**: Send alerts via Slack, Discord, SMS, WhatsApp, and webhooks
> - **Anomaly detection**: Identify unusual patterns in your data
> - **Deadman checks**: Detect missing data streams
>
> For complete plugin documentation and setup instructions, see [Process data in InfluxDB 3 Enterprise](/influxdb3/enterprise/get-started/process/).

### Monitor and respond to performance issues

Use the [monitoring queries](#monitor-cluster-wide-metrics) to identify the following patterns and their solutions:

#### High CPU with low throughput (Ingest nodes)

**Detection query:**

```sql
-- Check for a high failed query rate indicating parsing issues
SELECT
  count(*) as total_queries,
  sum(CASE WHEN success = true THEN 1 ELSE 0 END) as successful_queries,
  sum(CASE WHEN success = false THEN 1 ELSE 0 END) as failed_queries
FROM system.queries
WHERE issue_time > now() - INTERVAL '5 minutes';
```

**Symptoms:**

- Only 2 CPU cores at 100% on large machines
- High write latency despite available resources
- Failed queries due to parsing timeouts

**Solution:** Increase IO threads (see [Ingest node issues](#ingest-node-issues))

#### Memory pressure alerts (Query nodes)

**Detection query:**

```sql
-- Monitor queries with high memory usage or failures
SELECT
  avg(max_memory) as avg_memory_bytes,
  max(max_memory) as peak_memory_bytes,
  sum(CASE WHEN success = false THEN 1 ELSE 0 END) as failed_queries
FROM system.queries
WHERE issue_time > now() - INTERVAL '5 minutes'
  AND query_type = 'sql';
```

**Symptoms:**

- Queries failing with out-of-memory errors
- High memory usage approaching pool limits
- Slow query execution times

**Solution:** Increase the memory pool or optimize queries (see [Query node issues](#query-node-issues))

#### Compaction falling behind (Compactor nodes)

**Detection query:**

```sql
-- Check compaction event frequency and success rate
SELECT
  event_type,
  count(*) as event_count,
  sum(CASE WHEN event_status = 'success' THEN 1 ELSE 0 END) as successful_events
FROM system.compaction_events
WHERE event_time > now() - INTERVAL '1 hour'
GROUP BY event_type;
```

**Symptoms:**

- Decreasing compaction event frequency
- Growing number of small Parquet files
- Increasing query times due to file fragmentation

**Solution:** Add compactor nodes or increase DataFusion threads (see [Compactor node issues](#compactor-node-issues))

## Troubleshoot node configurations

### Ingest node issues

**Problem**: Low throughput despite available CPU

```bash
# Check: Are only 2 cores busy?
top -H -p $(pgrep influxdb3)

# Solution: Increase IO threads
--num-io-threads=16
```

**Problem**: Data snapshot creation affecting ingest

```bash
# Check: DataFusion threads at 100% during data snapshots to Parquet
# Solution: Reserve more DataFusion threads for snapshot operations
--datafusion-num-threads=40
```

### Query node issues

**Problem**: Slow queries despite available resources

```bash
# Check: Memory pressure
free -h

# Solution: Increase the memory pool
--exec-mem-pool-bytes=90%
```

**Problem**: Poor cache hit rates

```bash
# Solution: Increase the Parquet cache
--parquet-mem-cache-size=10GB
```

### Compactor node issues

**Problem**: Compaction falling behind

```bash
# Check: Compaction queue length
# Solution: Add more compactor nodes or increase threads
--datafusion-num-threads=30
```

## Best practices

1. **Start with monitoring**: Understand bottlenecks before specializing nodes
2. **Test mode combinations**: Some workloads benefit from multi-mode nodes
3. **Plan for failure**: Ensure redundancy in critical node types
4. **Document your topology**: Keep clear records of node configurations
5. **Rebalance regularly**: Adjust thread allocation as workloads evolve
6. **Plan capacity**: Monitor trends and scale proactively

## Migrate to specialized nodes

### From all-in-one to specialized

```bash
# Phase 1: Baseline (all nodes identical)
# all nodes: --mode=all --num-io-threads=8

# Phase 2: Identify workload patterns
# Monitor which nodes handle most writes vs queries

# Phase 3: Gradual specialization
# node1: --mode=ingest,query --num-io-threads=12
# node2: --mode=query,compact --num-io-threads=4

# Phase 4: Full specialization
# node1: --mode=ingest --num-io-threads=16
# node2: --mode=query --num-io-threads=4
# node3: --mode=compact --num-io-threads=2
```

## Manage configurations

### Use configuration files

Create node-specific configuration files:

```toml
# ingester.toml
node-id = "ingester-01"
cluster-id = "prod"
mode = "ingest"
num-cores = 96
num-io-threads = 20
datafusion-num-threads = 76
```

```toml
# query.toml
node-id = "query-01"
cluster-id = "prod"
mode = "query"
num-cores = 64
num-io-threads = 4
datafusion-num-threads = 60
```

Launch with a configuration file:

```bash
influxdb3 serve --config ingester.toml
```

### Configure using environment variables

```bash
# Set environment variables for the node type
export INFLUXDB3_ENTERPRISE_MODE=ingest
export INFLUXDB3_NUM_IO_THREADS=20
export INFLUXDB3_DATAFUSION_NUM_THREADS=76

influxdb3 serve --node-id=$HOSTNAME --cluster-id=prod
```

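One way to manage the per-role variables shown above is to keep them in small env files and source the appropriate one before starting the node. The file location and naming here are an illustrative convention, not an official mechanism:

```shell
#!/usr/bin/env bash
# Write a per-role env file (illustrative convention, not an official mechanism).
cat > /tmp/ingester.env <<'EOF'
export INFLUXDB3_ENTERPRISE_MODE=ingest
export INFLUXDB3_NUM_IO_THREADS=20
export INFLUXDB3_DATAFUSION_NUM_THREADS=76
EOF

# Source the role's settings, then start the node:
. /tmp/ingester.env
echo "mode=$INFLUXDB3_ENTERPRISE_MODE io=$INFLUXDB3_NUM_IO_THREADS"
# influxdb3 serve --node-id=$HOSTNAME --cluster-id=prod
```

This keeps role definitions versionable alongside your deployment scripts while the launch command stays identical on every node.
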
@@ -0,0 +1,23 @@

---
title: Performance tuning
seotitle: InfluxDB 3 Enterprise performance tuning and optimization
description: >
  Optimize {{% product-name %}} performance by tuning thread allocation,
  memory settings, and other configuration options for your specific workload.
weight: 205
menu:
  influxdb3_enterprise:
    parent: Administer InfluxDB
    name: Performance tuning
related:
  - /influxdb3/enterprise/reference/internals/runtime-architecture/
  - /influxdb3/enterprise/reference/config-options/
  - /influxdb3/enterprise/admin/clustering/
  - /influxdb3/enterprise/admin/query-system-data/
source: /shared/influxdb3-admin/performance-tuning.md
---

<!--
The content of this file is located at
//SOURCE - content/shared/influxdb3-admin/performance-tuning.md
-->

@@ -0,0 +1,681 @@

Configure thread allocation, memory settings, and other parameters to optimize {{% product-name %}} performance
based on your workload characteristics.

- [Best practices](#best-practices)
- [General monitoring principles](#general-monitoring-principles)
- [Essential settings for performance](#essential-settings-for-performance)
- [Common performance issues](#common-performance-issues)
- [Configuration examples by workload](#configuration-examples-by-workload)
- [Thread allocation details](#thread-allocation-details)
{{% show-in "enterprise" %}}
- [Enterprise mode-specific tuning](#enterprise-mode-specific-tuning)
{{% /show-in %}}
- [Memory tuning](#memory-tuning)
- [Advanced tuning options](#advanced-tuning-options)
- [Monitoring and validation](#monitoring-and-validation)
- [Common performance issues](#common-performance-issues-1)

## Best practices

1. **Start with monitoring**: Understand your current bottlenecks before tuning
2. **Change one parameter at a time**: Isolate the impact of each change
3. **Test with production-like workloads**: Use realistic data and query patterns
4. **Document your configuration**: Keep track of what works for your workload
5. **Plan for growth**: Leave headroom for traffic increases
6. **Review regularly**: Periodically reassess as workloads evolve

## General monitoring principles

Before tuning performance, establish baseline metrics to identify bottlenecks.

### Key metrics to monitor

1. **CPU usage per core**
   - Monitor individual core utilization to identify thread pool imbalances
   - Watch for cores at 100% while others are idle (indicates thread allocation issues)
   - Use `top -H` or `htop` to view per-thread CPU usage

2. **Memory consumption**
   - Track heap usage versus available RAM
   - Monitor query execution memory pool utilization
   - Watch for OOM errors or excessive swapping

3. **IO and network**
   - Measure write throughput (points per second)
   - Track query response times
   - Monitor object store latency for cloud deployments
   - Check disk IO wait times with `iostat`

### Establish baselines

```bash
# Monitor CPU per thread
top -H -p $(pgrep influxdb3)

# Track memory usage
free -h
watch -n 1 "free -h"

# Check IO wait
iostat -x 1
```

> [!Tip]
> For comprehensive metrics monitoring, see [Monitor metrics](/influxdb3/version/admin/monitor-metrics/).

## Essential settings for performance

{{% show-in "enterprise" %}}
Use the following settings to tune performance in _all-in-one_ deployments:

> [!Note]
> For specialized cluster nodes (ingest-only, query-only, and so on), see [Configure specialized cluster nodes](/influxdb3/version/admin/clustering/) for mode-specific optimizations.
{{% /show-in %}}

### Thread allocation (--num-io-threads{{% show-in "enterprise" %}}, --datafusion-num-threads{{% /show-in %}})

**IO threads** handle HTTP requests and line protocol parsing. **Default: 2** (often insufficient).
{{% show-in "enterprise" %}}**DataFusion threads** process queries and snapshots.{{% /show-in %}}

> [!Note]
> {{% product-name %}} automatically allocates remaining cores to DataFusion after reserving IO threads. You can configure both thread pools explicitly by setting the `--num-io-threads` and `--datafusion-num-threads` options.

{{% show-in "core" %}}
```bash
# Write-heavy: More IO threads
influxdb3 --num-io-threads=12 serve \
  --node-id=node0 \
  --object-store=file --data-dir=~/.influxdb3

# Query-heavy: Fewer IO threads
influxdb3 --num-io-threads=4 serve \
  --node-id=node0 \
  --object-store=file --data-dir=~/.influxdb3
```
{{% /show-in %}}

{{% show-in "enterprise" %}}
```bash
# Write-heavy: More IO threads, adequate DataFusion threads
influxdb3 --num-io-threads=12 serve \
  --datafusion-num-threads=20 \
  --node-id=node0 --cluster-id=cluster0 \
  --object-store=file --data-dir=~/.influxdb3

# Query-heavy: Fewer IO threads, more DataFusion threads
influxdb3 --num-io-threads=4 serve \
  --datafusion-num-threads=28 \
  --node-id=node0 --cluster-id=cluster0 \
  --object-store=file --data-dir=~/.influxdb3
```
{{% /show-in %}}

> [!Warning]
> #### Increase IO threads for concurrent writers
>
> If you have multiple concurrent writers (for example, Telegraf agents), the default of 2 IO threads can bottleneck write performance.

### Memory pool (--exec-mem-pool-bytes)

Controls memory available for query execution.
Default: {{% show-in "core" %}}70%{{% /show-in %}}{{% show-in "enterprise" %}}20%{{% /show-in %}} of RAM.

{{% show-in "core" %}}
```bash
# Increase for query-heavy workloads
influxdb3 --exec-mem-pool-bytes=90% serve \
  --node-id=node0 \
  --object-store=file --data-dir=~/.influxdb3

# Decrease if experiencing memory pressure
influxdb3 --exec-mem-pool-bytes=60% serve \
  --node-id=node0 \
  --object-store=file --data-dir=~/.influxdb3
```
{{% /show-in %}}

{{% show-in "enterprise" %}}
```bash
# Increase for query-heavy workloads
influxdb3 --exec-mem-pool-bytes=90% serve \
  --node-id=node0 --cluster-id=cluster0 \
  --object-store=file --data-dir=~/.influxdb3

# Decrease if experiencing memory pressure
influxdb3 --exec-mem-pool-bytes=60% serve \
  --node-id=node0 --cluster-id=cluster0 \
  --object-store=file --data-dir=~/.influxdb3
```
{{% /show-in %}}

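If you prefer passing an absolute byte value instead of a percentage, you can derive one from total RAM. The following is a minimal, Linux-only sketch (it reads `/proc/meminfo`, which is an assumption about your platform); the 90% figure matches the query-heavy example above:

```shell
#!/usr/bin/env bash
# Compute 90% of total RAM in bytes for --exec-mem-pool-bytes (Linux only).
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
pool_bytes=$(( total_kb * 1024 * 90 / 100 ))
echo "--exec-mem-pool-bytes=${pool_bytes}"
```

An absolute value can be useful when several processes share the host and you want the pool sized against a fixed budget rather than a fraction of whatever RAM the machine happens to have.
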
### Parquet cache (--parquet-mem-cache-size)
|
||||
|
||||
Caches frequently accessed data files in memory.
|
||||
|
||||
{{% show-in "core" %}}
|
||||
```bash
|
||||
# Enable caching for better query performance
|
||||
influxdb3 serve \
|
||||
--parquet-mem-cache-size=4096 \
|
||||
--node-id=node0 \
|
||||
--object-store=file --data-dir=~/.influxdb3
|
||||
```
|
||||
{{% /show-in %}}

{{% show-in "enterprise" %}}
```bash
# Enable caching for better query performance
influxdb3 serve \
  --parquet-mem-cache-size=4GB \
  --node-id=node0 --cluster-id=cluster0 \
  --object-store=file --data-dir=~/.influxdb3
```
{{% /show-in %}}

### WAL flush interval (--wal-flush-interval)

Controls the trade-off between write latency and throughput. Default: 1s.

{{% show-in "core" %}}
```bash
# Reduce latency for real-time data
influxdb3 serve \
  --wal-flush-interval=100ms \
  --node-id=node0 \
  --object-store=file --data-dir=~/.influxdb3
```
{{% /show-in %}}

{{% show-in "enterprise" %}}
```bash
# Reduce latency for real-time data
influxdb3 serve \
  --wal-flush-interval=100ms \
  --node-id=node0 --cluster-id=cluster0 \
  --object-store=file --data-dir=~/.influxdb3
```
{{% /show-in %}}

## Common performance issues

### High write latency

**Symptoms:** Increasing write response times, timeouts, points dropped

**Solutions:**
1. Increase [IO threads](#thread-allocation-num-io-threads{{% show-in "enterprise" %}}-datafusion-num-threads{{% /show-in %}}) (default is only 2)
2. Reduce [WAL flush interval](#wal-flush-interval-wal-flush-interval) (from 1s to 100ms)
3. Check disk IO performance

### Slow query performance

**Symptoms:** Long execution times, high memory usage, query timeouts

**Solutions:**
1. {{% show-in "enterprise" %}}Increase [DataFusion threads](#thread-allocation-num-io-threads-datafusion-num-threads)
2. {{% /show-in %}}Increase [execution memory pool](#memory-pool-exec-mem-pool-bytes) (to 90%)
3. Enable [Parquet caching](#parquet-cache-parquet-mem-cache-size)

### Memory pressure

**Symptoms:** OOM errors, swapping, high memory usage

**Solutions:**
1. Reduce [execution memory pool](#memory-pool-exec-mem-pool-bytes) (to 60%)
2. Lower snapshot threshold (`--force-snapshot-mem-threshold=70%`)

### CPU bottlenecks

**Symptoms:** 100% CPU utilization, uneven thread usage (only 2 cores for writes)

**Solutions:**
1. Rebalance [thread allocation](#thread-allocation-num-io-threads{{% show-in "enterprise" %}}-datafusion-num-threads{{% /show-in %}})
2. Check if only 2 cores are used for write parsing (increase IO threads)

> [!Important]
> #### "My ingesters are only using 2 cores"
>
> Increase `--num-io-threads` to 8-16+ for ingest nodes.{{% show-in "enterprise" %}} For dedicated ingest nodes with `--mode=ingest`, see [Configure ingest nodes](/influxdb3/version/admin/clustering/#configure-ingest-nodes).{{% /show-in %}}

## Configuration examples by workload

### Write-heavy workloads (>100k points/second)

{{% show-in "core" %}}
```bash
# 32-core system, high ingest rate
influxdb3 --num-io-threads=12 serve \
  --exec-mem-pool-bytes=80% \
  --wal-flush-interval=100ms \
  --node-id=node0 \
  --object-store=file \
  --data-dir=~/.influxdb3
```
{{% /show-in %}}

{{% show-in "enterprise" %}}
```bash
# 32-core system, high ingest rate
influxdb3 --num-io-threads=12 serve \
  --datafusion-num-threads=20 \
  --exec-mem-pool-bytes=80% \
  --wal-flush-interval=100ms \
  --node-id=node0 \
  --cluster-id=cluster0 \
  --object-store=file \
  --data-dir=~/.influxdb3
```
{{% /show-in %}}

### Query-heavy workloads (complex analytics)

{{% show-in "core" %}}
```bash
# 32-core system, analytical queries
influxdb3 --num-io-threads=4 serve \
  --exec-mem-pool-bytes=90% \
  --parquet-mem-cache-size=2048 \
  --node-id=node0 \
  --object-store=file \
  --data-dir=~/.influxdb3
```
{{% /show-in %}}

{{% show-in "enterprise" %}}
```bash
# 32-core system, analytical queries
influxdb3 --num-io-threads=4 serve \
  --datafusion-num-threads=28 \
  --exec-mem-pool-bytes=90% \
  --parquet-mem-cache-size=2GB \
  --node-id=node0 \
  --cluster-id=cluster0 \
  --object-store=file \
  --data-dir=~/.influxdb3
```
{{% /show-in %}}

### Mixed workloads (real-time dashboards)

{{% show-in "core" %}}
```bash
# 32-core system, balanced operations
influxdb3 --num-io-threads=8 serve \
  --exec-mem-pool-bytes=70% \
  --parquet-mem-cache-size=1024 \
  --node-id=node0 \
  --object-store=file \
  --data-dir=~/.influxdb3
```
{{% /show-in %}}

{{% show-in "enterprise" %}}
```bash
# 32-core system, balanced operations
influxdb3 --num-io-threads=8 serve \
  --datafusion-num-threads=24 \
  --exec-mem-pool-bytes=70% \
  --parquet-mem-cache-size=1GB \
  --node-id=node0 \
  --cluster-id=cluster0 \
  --object-store=file \
  --data-dir=~/.influxdb3
```
{{% /show-in %}}

## Thread allocation details

### Calculate optimal thread counts

Use this formula as a starting point:

```
Total cores = N
Concurrent writers = W
Query complexity factor = Q (1-10, where 10 is most complex)

IO threads = min(W + 2, N * 0.4)
DataFusion threads = N - IO threads
```
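
The formula can be sanity-checked with plain shell arithmetic. The inputs below (a 32-core system with 10 concurrent writers) are illustrative values for the example, not a recommendation:

```shell
# Illustrative inputs for the sizing formula (not a recommendation)
N=32   # total cores
W=10   # concurrent writers

# IO threads = min(W + 2, N * 0.4); shell arithmetic is integer-only,
# so N * 0.4 is computed as N * 2 / 5
cap=$(( N * 2 / 5 ))
io=$(( W + 2 ))
if [ "$io" -gt "$cap" ]; then io=$cap; fi

# Remaining cores go to DataFusion
df=$(( N - io ))
echo "IO threads: $io"
echo "DataFusion threads: $df"
```

With these inputs the result is 12 IO threads and 20 DataFusion threads, which matches the write-heavy 32-core example earlier on this page.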

### Example configurations by system size

#### Small system (4 cores, 16 GB RAM)

{{% show-in "core" %}}
```bash
# Balanced configuration
influxdb3 --num-io-threads=2 serve \
  --exec-mem-pool-bytes=10GB \
  --parquet-mem-cache-size=500 \
  --node-id=node0 \
  --object-store=file \
  --data-dir=~/.influxdb3
```
{{% /show-in %}}

{{% show-in "enterprise" %}}
```bash
# Balanced configuration
influxdb3 --num-io-threads=2 serve \
  --exec-mem-pool-bytes=10GB \
  --parquet-mem-cache-size=500MB \
  --node-id=node0 \
  --cluster-id=cluster0 \
  --object-store=file \
  --data-dir=~/.influxdb3
```
{{% /show-in %}}

#### Medium system (16 cores, 64 GB RAM)

{{% show-in "core" %}}
```bash
# Write-optimized configuration
influxdb3 --num-io-threads=6 serve \
  --exec-mem-pool-bytes=45GB \
  --parquet-mem-cache-size=2048 \
  --node-id=node0 \
  --object-store=file \
  --data-dir=~/.influxdb3
```
{{% /show-in %}}

{{% show-in "enterprise" %}}
```bash
# Write-optimized configuration
influxdb3 --num-io-threads=6 serve \
  --datafusion-num-threads=10 \
  --exec-mem-pool-bytes=45GB \
  --parquet-mem-cache-size=2GB \
  --node-id=node0 \
  --cluster-id=cluster0 \
  --object-store=file \
  --data-dir=~/.influxdb3
```
{{% /show-in %}}

#### Large system (64 cores, 256 GB RAM)

{{% show-in "core" %}}
```bash
# Query-optimized configuration
influxdb3 --num-io-threads=8 serve \
  --exec-mem-pool-bytes=200GB \
  --parquet-mem-cache-size=10240 \
  --object-store-connection-limit=200 \
  --node-id=node0 \
  --object-store=file \
  --data-dir=~/.influxdb3
```
{{% /show-in %}}

{{% show-in "enterprise" %}}
```bash
# Query-optimized configuration
influxdb3 --num-io-threads=8 serve \
  --datafusion-num-threads=56 \
  --exec-mem-pool-bytes=200GB \
  --parquet-mem-cache-size=10GB \
  --object-store-connection-limit=200 \
  --node-id=node0 \
  --cluster-id=cluster0 \
  --object-store=file \
  --data-dir=~/.influxdb3
```
{{% /show-in %}}

{{% show-in "enterprise" %}}
## Enterprise mode-specific tuning

### Ingest mode optimization

Dedicated ingest nodes require significant IO threads:

```bash
# High-throughput ingester (96 cores)
influxdb3 --num-io-threads=24 serve \
  --mode=ingest \
  --num-cores=96 \
  --datafusion-num-threads=72 \
  --force-snapshot-mem-threshold=90% \
  --node-id=ingester0 \
  --cluster-id=cluster0 \
  --object-store=file \
  --data-dir=~/.influxdb3
```

> [!Warning]
> Without explicitly setting `--num-io-threads`, a 96-core ingester uses only 2 cores
> for parsing line protocol, leaving the other 94 cores unused for ingest operations.

### Query mode optimization

Query nodes should maximize DataFusion threads:

```bash
# Query-optimized node (64 cores)
influxdb3 --num-io-threads=4 serve \
  --mode=query \
  --num-cores=64 \
  --datafusion-num-threads=60 \
  --exec-mem-pool-bytes=90% \
  --parquet-mem-cache-size=4GB \
  --node-id=query0 \
  --cluster-id=cluster0 \
  --object-store=file \
  --data-dir=~/.influxdb3
```

### Compactor mode optimization

Compaction is DataFusion-intensive:

```bash
# Dedicated compactor (32 cores)
influxdb3 --num-io-threads=2 serve \
  --mode=compact \
  --num-cores=32 \
  --datafusion-num-threads=30 \
  --node-id=compactor0 \
  --cluster-id=cluster0 \
  --object-store=file \
  --data-dir=~/.influxdb3

# Note: --compaction-row-limit option is not yet released in v3.5.0
# Uncomment when available in a future release:
# --compaction-row-limit=1000000 \
```
{{% /show-in %}}

## Memory tuning

### Execution memory pool

Configure the query execution memory pool:

```bash
# Absolute value in bytes
--exec-mem-pool-bytes=8589934592 # 8GB

# Percentage of available RAM
--exec-mem-pool-bytes=80% # 80% of system RAM
```

**Guidelines:**
- **Write-heavy**: 60-70% (leave room for OS cache)
- **Query-heavy**: 80-90% (maximize query memory)
- **Mixed**: 70% (balanced approach)
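
If you prefer the absolute-byte form of the flag (for example, in a unit file or template), a percentage guideline converts with simple arithmetic. The 64 GiB RAM figure below is an assumed example:

```shell
# Convert a percentage guideline into an absolute --exec-mem-pool-bytes
# value (assumed example: 64 GiB of RAM, mixed-workload 70%)
total_ram_gb=64
pool_pct=70

# bytes = GiB * 1024^3, scaled by the chosen percentage
pool_bytes=$(( total_ram_gb * 1024 * 1024 * 1024 * pool_pct / 100 ))
echo "--exec-mem-pool-bytes=$pool_bytes"
```

The flag also accepts the percentage form directly, so this conversion is only needed when an explicit byte value is preferred.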

### Parquet cache configuration

Cache frequently accessed Parquet files:

```bash
# Set cache size
--parquet-mem-cache-size=2147483648 # 2GB

# Configure cache behavior
--parquet-mem-cache-prune-interval=1m \
--parquet-mem-cache-prune-percentage=20
```

### WAL and snapshot tuning

Control memory pressure from write buffers:

```bash
# Force snapshot when memory usage exceeds threshold
--force-snapshot-mem-threshold=80%

# Configure WAL rotation
--wal-flush-interval=10s \
--wal-snapshot-size=100MB
```

## Advanced tuning options

{{% show-in "enterprise" %}}
### Specialized cluster nodes

For performance optimizations using dedicated ingest, query, compaction, or processing nodes, see [Configure specialized cluster nodes](/influxdb3/version/admin/clustering/).
{{% /show-in %}}

For less common performance optimizations and detailed configuration options, see the following sections.

### DataFusion engine tuning

<!-- DEV-ONLY FLAGS: DO NOT DOCUMENT --datafusion-runtime-type IN PRODUCTION DOCS
This flag will be removed in InfluxDB 3.5 Enterprise.
Only multi-thread mode should be used (which is the default).
The current-thread option is deprecated and will be removed.
Future editors: Keep this commented out. -->

<!-- DEV-ONLY FLAGS: DO NOT DOCUMENT TOKIO RUNTIME FLAGS IN PRODUCTION DOCS
--datafusion-runtime-max-blocking-threads and --datafusion-runtime-thread-priority
are advanced tokio runtime configurations that should not be exposed to end users.
Future editors: Remove these tokio runtime flags from production documentation. -->

Advanced DataFusion runtime parameters:
- [`--datafusion-config`](/influxdb3/version/reference/cli/influxdb3/serve/#datafusion-config)

### HTTP and network tuning

Request size and network optimization:
- [`--max-http-request-size`](/influxdb3/version/reference/cli/influxdb3/serve/#max-http-request-size) - For large batches (default: 10 MB)
- [`--http-bind`](/influxdb3/version/reference/cli/influxdb3/serve/#http-bind) - Bind address

### Object store optimization

Performance tuning for cloud object stores:
- [`--object-store-connection-limit`](/influxdb3/version/reference/cli/influxdb3/serve/#object-store-connection-limit) - Connection pool size
- [`--object-store-max-retries`](/influxdb3/version/reference/cli/influxdb3/serve/#object-store-max-retries) - Retry configuration
- [`--object-store-http2-only`](/influxdb3/version/reference/cli/influxdb3/serve/#object-store-http2-only) - Force HTTP/2

### Complete configuration reference

For all available configuration options, see:
- [CLI serve command reference](/influxdb3/version/reference/cli/influxdb3/serve/)
- [Configuration options](/influxdb3/version/reference/config-options/)

## Monitoring and validation

### Monitor thread utilization

```bash
# Linux: View per-thread CPU usage
top -H -p $(pgrep influxdb3)

# Monitor specific threads
watch -n 1 "ps -eLf | grep influxdb3 | head -20"
```

### Check performance metrics

Monitor key indicators:

```sql
-- Query system.threads table (Enterprise)
SELECT * FROM system.threads
WHERE cpu_usage > 90
ORDER BY cpu_usage DESC;

-- Check write throughput
SELECT
  count(*) as points_written,
  max(timestamp) - min(timestamp) as time_range
FROM your_measurement
WHERE timestamp > now() - INTERVAL '1 minute';
```

### Validate configuration

Verify your tuning changes:

```bash
# Check effective configuration
influxdb3 serve --help-all | grep -E "num-io-threads|datafusion-num-threads"

# Monitor memory usage
free -h
watch -n 1 "free -h"

# Check IO wait
iostat -x 1
```

## Common performance issues

### High write latency

**Symptoms:**
- Increasing write response times
- Timeouts from write clients
- Points dropped or rejected

**Solutions:**
1. Increase IO threads: `--num-io-threads=16`
2. Reduce batch sizes in writers
3. Increase WAL flush frequency
4. Check disk IO performance

### Slow query performance

**Symptoms:**
- Long query execution times
- High memory usage during queries
- Query timeouts

**Solutions:**
{{% show-in "core" %}}1. Increase execution memory pool: `--exec-mem-pool-bytes=90%`
2. Enable Parquet caching: `--parquet-mem-cache-size=4GB`
3. Optimize query patterns (smaller time ranges, fewer fields){{% /show-in %}}
{{% show-in "enterprise" %}}1. Increase DataFusion threads: `--datafusion-num-threads=30`
2. Increase execution memory pool: `--exec-mem-pool-bytes=90%`
3. Enable Parquet caching: `--parquet-mem-cache-size=4GB`
4. Optimize query patterns (smaller time ranges, fewer fields){{% /show-in %}}

### Memory pressure

**Symptoms:**
- Out of memory errors
- Frequent garbage collection
- System swapping

**Solutions:**
1. Reduce execution memory pool: `--exec-mem-pool-bytes=60%`
2. Lower snapshot threshold: `--force-snapshot-mem-threshold=70%`
3. Decrease cache sizes
4. Add more RAM or reduce workload

### CPU bottlenecks

**Symptoms:**
- 100% CPU utilization
- Uneven thread pool usage
- Performance plateaus

**Solutions:**
1. Rebalance thread allocation based on workload
2. Add more CPU cores
3. Optimize client batching
{{% show-in "enterprise" %}}4. Distribute workload across specialized nodes{{% /show-in %}}

@@ -68,14 +68,20 @@ The following options apply to the `influxdb3` CLI globally and must be specifie
Sets the number of threads allocated to the IO runtime thread pool. IO threads handle HTTP request serving, line protocol parsing, and file operations.

> [!Important]
> `--num-io-threads` is a **global option** that must be specified before the `serve` command.

{{% show-in "enterprise" %}}
**Default:** `2`
{{% /show-in %}}

```bash
# Set IO threads (global option before serve)
influxdb3 --num-io-threads=8 serve --node-id=node0 --object-store=file
```

{{% show-in "enterprise" %}}
For detailed information about thread allocation, see the [Resource Limits](#resource-limits) section.
{{% /show-in %}}

| influxdb3 option | Environment variable |
| :--------------- | :------------------------- |

@@ -86,145 +92,28 @@ For detailed information about thread allocation, see the [Resource Limits](#res
## Server configuration options

- [General](#general)
{{% show-in "enterprise" %}}  - [cluster-id](#cluster-id){{% /show-in %}}
  - [data-dir](#data-dir)
{{% show-in "enterprise" %}}  - [mode](#mode){{% /show-in %}}
  - [node-id](#node-id)
{{% show-in "enterprise" %}}  - [node-id-from-env](#node-id-from-env){{% /show-in %}}
  - [object-store](#object-store)
{{% show-in "enterprise" %}}- [Licensing](#licensing){{% /show-in %}}
- [Security](#security)
  - [tls-key](#tls-key)
  - [tls-cert](#tls-cert)
  - [tls-minimum-versions](#tls-minimum-version)
  - [without-auth](#without-auth)
  - [disable-authz](#disable-authz)
  - [admin-token-recovery-http-bind](#admin-token-recovery-http-bind)
  - [admin-token-file](#admin-token-file)
{{% show-in "enterprise" %}}  - [permission-tokens-file](#permission-tokens-file){{% /show-in %}}
- [AWS](#aws)
  - [aws-access-key-id](#aws-access-key-id)
  - [aws-secret-access-key](#aws-secret-access-key)
  - [aws-default-region](#aws-default-region)
  - [aws-endpoint](#aws-endpoint)
  - [aws-session-token](#aws-session-token)
  - [aws-allow-http](#aws-allow-http)
  - [aws-skip-signature](#aws-skip-signature)
  - [aws-credentials-file](#aws-credentials-file)
- [Google Cloud Service](#google-cloud-service)
  - [google-service-account](#google-service-account)
- [Microsoft Azure](#microsoft-azure)
  - [azure-storage-account](#azure-storage-account)
  - [azure-storage-access-key](#azure-storage-access-key)
  - [azure-endpoint](#azure-endpoint)
  - [azure-allow-http](#azure-allow-http)
- [Object Storage](#object-storage)
  - [bucket](#bucket)
  - [object-store-connection-limit](#object-store-connection-limit)
  - [object-store-http2-only](#object-store-http2-only)
  - [object-store-http2-max-frame-size](#object-store-http2-max-frame-size)
  - [object-store-max-retries](#object-store-max-retries)
  - [object-store-retry-timeout](#object-store-retry-timeout)
  - [object-store-cache-endpoint](#object-store-cache-endpoint)
- [Logs](#logs)
  - [log-filter](#log-filter)
  - [log-destination](#log-destination)
  - [log-format](#log-format)
  - [query-log-size](#query-log-size)
- [Traces](#traces)
  - [traces-exporter](#traces-exporter)
  - [traces-exporter-jaeger-agent-host](#traces-exporter-jaeger-agent-host)
  - [traces-exporter-jaeger-agent-port](#traces-exporter-jaeger-agent-port)
  - [traces-exporter-jaeger-service-name](#traces-exporter-jaeger-service-name)
  - [traces-exporter-jaeger-trace-context-header-name](#traces-exporter-jaeger-trace-context-header-name)
  - [traces-jaeger-debug-name](#traces-jaeger-debug-name)
  - [traces-jaeger-tags](#traces-jaeger-tags)
  - [traces-jaeger-max-msgs-per-second](#traces-jaeger-max-msgs-per-second)
- [DataFusion](#datafusion)
  - [datafusion-num-threads](#datafusion-num-threads)
<!-- DEV-ONLY FLAGS: DO NOT DOCUMENT IN PRODUCTION - TOKIO RUNTIME FLAGS
  - datafusion-runtime-type
  - datafusion-runtime-disable-lifo-slot
  - datafusion-runtime-event-interval
  - datafusion-runtime-global-queue-interval
  - datafusion-runtime-max-blocking-threads
  - datafusion-runtime-max-io-events-per-tick
  - datafusion-runtime-thread-keep-alive
  - datafusion-runtime-thread-priority
END DEV-ONLY FLAGS -->
  - [datafusion-max-parquet-fanout](#datafusion-max-parquet-fanout)
  - [datafusion-use-cached-parquet-loader](#datafusion-use-cached-parquet-loader)
  - [datafusion-config](#datafusion-config)
- [HTTP](#http)
  - [max-http-request-size](#max-http-request-size)
  - [http-bind](#http-bind)
- [Memory](#memory)
  - [exec-mem-pool-bytes](#exec-mem-pool-bytes)
  - [force-snapshot-mem-threshold](#force-snapshot-mem-threshold)
- [Write-Ahead Log (WAL)](#write-ahead-log-wal)
  - [wal-flush-interval](#wal-flush-interval)
  - [wal-snapshot-size](#wal-snapshot-size)
  - [wal-max-write-buffer-size](#wal-max-write-buffer-size)
  - [snapshotted-wal-files-to-keep](#snapshotted-wal-files-to-keep)
  - [wal-replay-fail-on-error](#wal-replay-fail-on-error)
  - [wal-replay-concurrency-limit](#wal-replay-concurrency-limit)
- [Compaction](#compaction)
{{% show-in "enterprise" %}}  - [compaction-row-limit](#compaction-row-limit)
  - [compaction-max-num-files-per-plan](#compaction-max-num-files-per-plan)
  - [compaction-gen2-duration](#compaction-gen2-duration)
  - [compaction-multipliers](#compaction-multipliers)
  - [compaction-cleanup-wait](#compaction-cleanup-wait)
  - [compaction-check-interval](#compaction-check-interval){{% /show-in %}}
  - [gen1-duration](#gen1-duration)
- [Caching](#caching)
  - [preemptive-cache-age](#preemptive-cache-age)
  - [parquet-mem-cache-size](#parquet-mem-cache-size)
  - [parquet-mem-cache-prune-percentage](#parquet-mem-cache-prune-percentage)
  - [parquet-mem-cache-prune-interval](#parquet-mem-cache-prune-interval)
  - [parquet-mem-cache-query-path-duration](#parquet-mem-cache-query-path-duration)
  - [disable-parquet-mem-cache](#disable-parquet-mem-cache)
  - [table-index-cache-max-entries](#table-index-cache-max-entries)
  - [table-index-cache-concurrency-limit](#table-index-cache-concurrency-limit)
{{% show-in "enterprise" %}}  - [last-value-cache-disable-from-history](#last-value-cache-disable-from-history){{% /show-in %}}
  - [last-cache-eviction-interval](#last-cache-eviction-interval)
{{% show-in "enterprise" %}}  - [distinct-value-cache-disable-from-history](#distinct-value-cache-disable-from-history){{% /show-in %}}
  - [distinct-cache-eviction-interval](#distinct-cache-eviction-interval)
  - [query-file-limit](#query-file-limit)
- [Processing Engine](#processing-engine)
  - [plugin-dir](#plugin-dir)
  - [plugin-repo](#plugin-repo)
  - [virtual-env-location](#virtual-env-location)
  - [package-manager](#package-manager)
{{% show-in "enterprise" %}}
- [Cluster Management](#cluster-management)
  - [replication-interval](#replication-interval)
  - [catalog-sync-interval](#catalog-sync-interval)
  - [wait-for-running-ingestor](#wait-for-running-ingestor)
{{% /show-in %}}
- [Resource Limits](#resource-limits)
{{% show-in "enterprise" %}}
  - [num-cores](#num-cores)
  - [num-database-limit](#num-database-limit)
  - [num-table-limit](#num-table-limit)
  - [num-total-columns-per-table-limit](#num-total-columns-per-table-limit)
{{% /show-in %}}
- [Data Lifecycle Management](#data-lifecycle-management)
  - [gen1-lookback-duration](#gen1-lookback-duration)
  - [retention-check-interval](#retention-check-interval)
  - [delete-grace-period](#delete-grace-period)
  - [hard-delete-default-duration](#hard-delete-default-duration)
- [Telemetry](#telemetry)
  - [telemetry-disable-upload](#telemetry-disable-upload)
  - [telemetry-endpoint](#telemetry-endpoint)
- [TCP Listeners](#tcp-listeners)
  - [tcp-listener-file-path](#tcp-listener-file-path)
  - [admin-token-recovery-tcp-listener-file-path](#admin-token-recovery-tcp-listener-file-path)

---

@@ -302,13 +191,13 @@ You can specify multiple modes using a comma-delimited list (for example, `inges
**Example configurations:**
```bash
# High-throughput ingest node (32 cores)
influxdb3 --num-io-threads=12 serve --mode=ingest --datafusion-num-threads=20

# Query-optimized node (32 cores)
influxdb3 --num-io-threads=4 serve --mode=query --datafusion-num-threads=28

# Balanced all-in-one (32 cores)
influxdb3 --num-io-threads=6 serve --mode=all --datafusion-num-threads=26
```

| influxdb3 serve option | Environment variable |

@@ -370,58 +259,6 @@ This option supports the following values:
| :--------------------- | :----------------------- |
| `--object-store` | `INFLUXDB3_OBJECT_STORE` |

{{% show-in "enterprise" %}}
---

#### num-cores

Limits the total number of CPU cores that can be used by the server.
Default is determined by your {{% product-name %}} license:

- **Trial**: up to 256 cores
- **At-Home**: 2 cores
- **Commercial**: per contract

| influxdb3 serve option | Environment variable |
| :--------------------- | :--------------------------------- |
| `--num-cores` | `INFLUXDB3_ENTERPRISE_NUM_CORES` |

For more information about licensing, see [Manage license](/influxdb3/enterprise/admin/license).

---

#### num-database-limit

Limits the total number of active databases.
Default is {{% influxdb3/limit "database" %}}.

| influxdb3 serve option | Environment variable |
| :---------------------- | :---------------------------------------- |
| `--num-database-limit` | `INFLUXDB3_ENTERPRISE_NUM_DATABASE_LIMIT` |

---

#### num-table-limit

Limits the total number of active tables across all databases.
Default is {{% influxdb3/limit "table" %}}.

| influxdb3 serve option | Environment variable |
| :--------------------- | :------------------------------------- |
| `--num-table-limit` | `INFLUXDB3_ENTERPRISE_NUM_TABLE_LIMIT` |

---

#### num-total-columns-per-table-limit

Limits the total number of columns per table.
Default is {{% influxdb3/limit "column" %}}.

| influxdb3 serve option | Environment variable |
| :------------------------------------ | :------------------------------------------------------- |
| `--num-total-columns-per-table-limit` | `INFLUXDB3_ENTERPRISE_NUM_TOTAL_COLUMNS_PER_TABLE_LIMIT` |
{{% /show-in %}}

---

{{% show-in "enterprise" %}}

@@ -600,7 +437,7 @@ influxdb3 create token --admin \
<!-- pytest.mark.skip -->

```bash { placeholders="./path/to/admin-token.json" }
# Generate an admin token offline
influxdb3 create token \
  --admin \
  --name "example-admin-token" \

@@ -676,7 +513,7 @@ influxdb3 create token \
<!-- pytest.mark.skip -->

```bash { placeholders="./path/to/tokens.json" }
# Generate a token offline
influxdb3 create token \
  --name "example-token" \
  --permission "db:db1,db2:read,write" \

@@ -693,49 +530,6 @@ influxdb3 serve --permission-tokens-file ./path/to/tokens.json
---
{{% /show-in %}}

{{% show-in "enterprise" %}}
### Licensing

#### license-email

Specifies the email address to associate with your {{< product-name >}} license
and automatically responds to the interactive email prompt when the server starts.
This option is mutually exclusive with [license-file](#license-file).

| influxdb3 serve option | Environment variable |
| :--------------------- | :----------------------------------- |
| `--license-email` | `INFLUXDB3_ENTERPRISE_LICENSE_EMAIL` |

---

#### license-file

Specifies the path to a license file for {{< product-name >}}. When provided, the license
file's contents are used instead of requesting a new license.
This option is mutually exclusive with [license-email](#license-email).

| influxdb3 serve option | Environment variable |
| :--------------------- | :----------------------------------- |
| `--license-file` | `INFLUXDB3_ENTERPRISE_LICENSE_FILE` |

---

#### license-type

Specifies the type of {{% product-name %}} license to use and bypasses the
interactive license prompt. Provide one of the following license types:

- `home`
- `trial`
- `commercial`

| influxdb3 serve option | Environment variable |
| :--------------------- | :----------------------------------- |
| `--license-type` | `INFLUXDB3_ENTERPRISE_LICENSE_TYPE` |

---
{{% /show-in %}}

### AWS

- [aws-access-key-id](#aws-access-key-id)


@@ -1178,17 +972,19 @@ Specifies the maximum number of messages sent to a Jaeger service per second.
### DataFusion

- [datafusion-num-threads](#datafusion-num-threads)
- [datafusion-max-parquet-fanout](#datafusion-max-parquet-fanout)
- [datafusion-use-cached-parquet-loader](#datafusion-use-cached-parquet-loader)
- [datafusion-config](#datafusion-config)
<!-- DEV-ONLY FLAGS: DO NOT DOCUMENT IN PRODUCTION - TOKIO RUNTIME FLAGS
- datafusion-runtime-type
- datafusion-runtime-disable-lifo-slot
- datafusion-runtime-event-interval
- datafusion-runtime-global-queue-interval
- datafusion-runtime-max-blocking-threads
- datafusion-runtime-max-io-events-per-tick
- datafusion-runtime-thread-keep-alive
- datafusion-runtime-thread-priority
END DEV-ONLY FLAGS -->

#### datafusion-num-threads

@@ -1200,28 +996,6 @@ Sets the maximum number of DataFusion runtime threads to use.

---

<!-- DEV-ONLY FLAGS: DO NOT DOCUMENT TOKIO RUNTIME FLAGS - THEY ARE INTERNAL TUNING PARAMETERS AND MAY BE REMOVED OR CHANGED AT ANY TIME
--datafusion-runtime-type, INFLUXDB3_DATAFUSION_RUNTIME_TYPE
This flag will be removed in InfluxDB 3.5 Enterprise.
Only multi-thread mode should be used (which is the default).
The current-thread option is deprecated and will be removed.
Future editors: Keep this commented out.

--datafusion-runtime-event-interval, INFLUXDB3_DATAFUSION_RUNTIME_EVENT_INTERVAL
--datafusion-runtime-global-queue-interval, INFLUXDB3_DATAFUSION_RUNTIME_GLOBAL_QUEUE_INTERVAL
--datafusion-runtime-max-blocking-threads, INFLUXDB3_DATAFUSION_RUNTIME_MAX_BLOCKING_THREADS
--datafusion-runtime-max-io-events-per-tick, INFLUXDB3_DATAFUSION_RUNTIME_MAX_IO_EVENTS_PER_TICK
--datafusion-runtime-thread-keep-alive, INFLUXDB3_DATAFUSION_RUNTIME_THREAD_KEEP_ALIVE
--datafusion-runtime-thread-priority, INFLUXDB3_DATAFUSION_RUNTIME_THREAD_PRIORITY
END DEV-ONLY TOKIO RUNTIME FLAGS -->

---

#### datafusion-max-parquet-fanout

When multiple parquet files are required in a sorted way
@@ -1411,7 +1185,7 @@ The default is dynamically determined.

### Compaction

{{% show-in "enterprise" %}}
<!--- [compaction-row-limit](#compaction-row-limit) - NOT YET RELEASED in v3.5.0 -->
- [compaction-max-num-files-per-plan](#compaction-max-num-files-per-plan)
- [compaction-gen2-duration](#compaction-gen2-duration)
- [compaction-multipliers](#compaction-multipliers)
@@ -1421,8 +1195,11 @@ The default is dynamically determined.

- [gen1-duration](#gen1-duration)

{{% show-in "enterprise" %}}
<!---
#### compaction-row-limit

NOTE: This option is not yet released in v3.5.0. Uncomment when available in a future release.

Specifies the soft limit for the number of rows per file that the compactor
writes. The compactor may write more rows than this limit.

@@ -1433,6 +1210,7 @@ writes. The compactor may write more rows than this limit.

| `--compaction-row-limit` | `INFLUXDB3_ENTERPRISE_COMPACTION_ROW_LIMIT` |

---
-->

#### compaction-max-num-files-per-plan

@@ -1550,15 +1328,20 @@ Specifies the interval to prefetch into the Parquet cache during compaction.

#### parquet-mem-cache-size

Specifies the size of the in-memory Parquet cache. Accepts values in megabytes (as an integer) or as a percentage of total available memory (for example, `20%`, `4096`).

**Default:** `20%`

> [!Note]
> #### Breaking change in v3.0.0
>
> In v3.0.0, `--parquet-mem-cache-size-mb` was replaced with `--parquet-mem-cache-size`.
> The new option accepts both megabytes (integer) and percentage values.
> The default changed from `1000` MB to `20%` of total available memory.

| influxdb3 serve option     | Environment variable               |
| :------------------------- | :--------------------------------- |
| `--parquet-mem-cache-size` | `INFLUXDB3_PARQUET_MEM_CACHE_SIZE` |

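For example, either value form can be passed on the command line. The node ID, object store, and data directory below are placeholders:

```shell
# Cache sized as a percentage of available memory
influxdb3 serve --node-id node0 --object-store file --data-dir ~/.influxdb3/data \
  --parquet-mem-cache-size 20%

# Cache sized as an absolute number of megabytes
influxdb3 serve --node-id node0 --object-store file --data-dir ~/.influxdb3/data \
  --parquet-mem-cache-size 4096
```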
#### parquet-mem-cache-prune-percentage

@@ -1862,22 +1645,33 @@ Specifies how long to wait for a running ingestor during startup.
| :------------------------------- | :------------------------------------------------ |
| `--wait-for-running-ingestor`    | `INFLUXDB3_ENTERPRISE_WAIT_FOR_RUNNING_INGESTOR`  |

{{% /show-in %}}

---

### Resource Limits

{{% show-in "enterprise" %}}
- [num-cores](#num-cores)
{{% /show-in %}}
- [datafusion-num-threads](#datafusion-num-threads)
- _[num-io-threads](#num-io-threads) - See [Global configuration options](#global-configuration-options)_
{{% show-in "enterprise" %}}
- [num-database-limit](#num-database-limit)
- [num-table-limit](#num-table-limit)
- [num-total-columns-per-table-limit](#num-total-columns-per-table-limit)

#### num-cores

Limits the number of CPU cores that the InfluxDB 3 Enterprise process can use when running on systems where resources are shared.

**Default:** All available cores on the system

Maximum cores allowed is determined by your {{% product-name %}} license:

- **Trial**: up to 256 cores
- **At-Home**: 2 cores
- **Commercial**: per contract

When specified, InfluxDB automatically assigns the number of DataFusion threads and IO threads based on the core count.

**Default thread assignment logic when `num-cores` is set:**

@@ -1885,37 +1679,42 @@ When specified, InfluxDB automatically assigns the number of DataFusion threads

- **3 cores**: 1 IO thread, 2 DataFusion threads
- **4+ cores**: 2 IO threads, (n-2) DataFusion threads

This automatic allocation applies when you don't explicitly set [`--num-io-threads`](#num-io-threads) and [`--datafusion-num-threads`](#datafusion-num-threads).
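The allocation rules above can be sketched as a small shell function. This is illustrative only: `assign_threads` is not part of influxdb3, and the 2-core split is an assumption based on the documented minimum of 2 cores:

```shell
# Mirror the documented default split of num-cores into IO and DataFusion threads.
# assign_threads is a hypothetical helper, not an influxdb3 command.
assign_threads() {
    cores="$1"
    if [ "$cores" -ge 4 ]; then
        io=2
        datafusion=$((cores - 2))
    elif [ "$cores" -eq 3 ]; then
        io=1
        datafusion=2
    else
        io=1          # assumption: the 2-core minimum splits 1 IO / 1 DataFusion
        datafusion=1
    fi
    echo "$io $datafusion"
}

assign_threads 8   # prints "2 6": 2 IO threads, 6 DataFusion threads
```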

> [!Note]
> You can override the automatic thread assignment by explicitly setting [`--num-io-threads`](#num-io-threads)
> and [`--datafusion-num-threads`](#datafusion-num-threads).
> This is particularly important for specialized
> workloads like [ingest mode](#mode) where you may need more IO threads than the default allocation.

**Constraints:**

- Must be at least 2
- Cannot exceed the number of cores available on the system
- Total thread count from `--num-io-threads` (global option) and `--datafusion-num-threads` cannot exceed the `num-cores` value

| influxdb3 serve option | Environment variable             |
| :--------------------- | :------------------------------- |
| `--num-cores`          | `INFLUXDB3_ENTERPRISE_NUM_CORES` |
{{% /show-in %}}

---

#### datafusion-num-threads

Sets the number of threads allocated to the DataFusion runtime thread pool.
DataFusion threads handle:

- Query execution and processing
- Data aggregation and transformation
- Snapshot creation (sort/dedupe operations)
- Parquet file generation

{{% show-in "core" %}}
**Default:** All available cores minus IO threads

> [!Note]
> DataFusion threads are used for both query processing and snapshot operations.
{{% /show-in %}}

{{% show-in "enterprise" %}}
**Default:**

- If not specified and `--num-cores` is not set: All available cores minus IO threads
- If not specified and `--num-cores` is set: Automatically determined based on core count (see [`--num-cores`](#num-cores))

@@ -1923,19 +1722,23 @@ Sets the number of threads allocated to the DataFusion runtime thread pool.

> DataFusion threads are used for both query processing and snapshot operations.
> Even ingest-only nodes use DataFusion threads during WAL snapshot creation.

**Constraints:** When used with `--num-cores`, the sum of `--num-io-threads` and `--datafusion-num-threads` cannot exceed the `num-cores` value
{{% /show-in %}}

| influxdb3 serve option     | Environment variable               |
| :------------------------- | :--------------------------------- |
| `--datafusion-num-threads` | `INFLUXDB3_DATAFUSION_NUM_THREADS` |

> [!Note]
> [`--num-io-threads`](#num-io-threads) is a [global configuration option](#global-configuration-options).
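Putting the thread options together, an explicit layout under a core cap might look like the following sketch. `--num-cores` is Enterprise-only, and the specific values are only an example that satisfies the constraint that IO plus DataFusion threads fit within the cap:

```shell
# 8-core cap: 2 IO threads + 6 DataFusion threads = 8 total
influxdb3 serve --node-id node0 --object-store file --data-dir ~/.influxdb3/data \
  --num-cores 8 \
  --num-io-threads 2 \
  --datafusion-num-threads 6
```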

{{% show-in "enterprise" %}}

---

#### num-database-limit

Limits the total number of active databases.
Default is {{% influxdb3/limit "database" %}}.

| influxdb3 serve option | Environment variable |
| :------------------------ | :---------------------------------------- |
@@ -1945,7 +1748,8 @@ Sets the maximum number of databases that can be created.

#### num-table-limit

Limits the total number of active tables across all databases.
Default is {{% influxdb3/limit "table" %}}.

| influxdb3 serve option | Environment variable |
| :---------------------- | :------------------------------------- |
@@ -1955,7 +1759,8 @@ Defines the maximum number of tables that can be created across all databases.

#### num-total-columns-per-table-limit

Limits the total number of columns per table.
Default is {{% influxdb3/limit "column" %}}.

| influxdb3 serve option | Environment variable |
| :--------------------------------------- | :---------------------------------------------------------- |
@@ -417,6 +417,13 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat

### Core

#### Breaking Changes

- **Parquet cache configuration**: Replaced `--parquet-mem-cache-size-mb` option with `--parquet-mem-cache-size`. The new option accepts values in megabytes (as an integer) or as a percentage of total available memory (for example, `20%`). The default value changed from `1000` MB to `20%` of total available memory. The environment variable `INFLUXDB3_PARQUET_MEM_CACHE_SIZE_MB` was replaced with `INFLUXDB3_PARQUET_MEM_CACHE_SIZE`. ([#26023](https://github.com/influxdata/influxdb/pull/26023))
- **Memory settings updates**:
  - Force snapshot memory threshold now defaults to `50%` of available memory
  - DataFusion execution memory pool now defaults to `20%` of available memory

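For deployments configured through environment variables, the rename amounts to swapping the variable and choosing a value form, for example:

```shell
# Before v3.0.0: fixed cache size in MB
# export INFLUXDB3_PARQUET_MEM_CACHE_SIZE_MB=1000

# v3.0.0 and later: percentage of available memory (or an integer MB value)
export INFLUXDB3_PARQUET_MEM_CACHE_SIZE=20%
echo "$INFLUXDB3_PARQUET_MEM_CACHE_SIZE"   # prints "20%"
```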
#### General Updates

- Performance and reliability improvements.

@@ -0,0 +1,845 @@

#!/bin/sh -e

# ==========================Script Config==========================

readonly GREEN='\033[0;32m'
readonly BLUE='\033[0;34m'
readonly BOLD='\033[1m'
readonly BOLDGREEN='\033[1;32m'
readonly DIM='\033[2m'
readonly NC='\033[0m' # No Color

# No diagnostics for: 'printf "...${FOO}"'
# shellcheck disable=SC2059

ARCHITECTURE=$(uname -m)
ARTIFACT=""
OS=""
INSTALL_LOC=~/.influxdb
BINARY_NAME="influxdb3"
PORT=8181

# Set the default (latest) version here. Users may specify a version using the
# --version arg (handled below)
INFLUXDB_VERSION="3.5.0"
EDITION="Core"
EDITION_TAG="core"


# Parse command line arguments
while [ $# -gt 0 ]; do
    case "$1" in
        --version)
            INFLUXDB_VERSION="$2"
            shift 2
            ;;
        enterprise)
            EDITION="Enterprise"
            EDITION_TAG="enterprise"
            shift 1
            ;;
        *)
            echo "Usage: $0 [enterprise] [--version VERSION]"
            echo "  enterprise: Install the Enterprise edition (optional)"
            echo "  --version VERSION: Specify InfluxDB version (default: $INFLUXDB_VERSION)"
            exit 1
            ;;
    esac
done


# ==========================Detect OS/Architecture==========================

case "$(uname -s)" in
    Linux*)  OS="Linux";;
    Darwin*) OS="Darwin";;
    *)       OS="UNKNOWN";;
esac

if [ "${OS}" = "Linux" ]; then
    # ldd is a shell script but on some systems (eg Ubuntu) security hardening
    # prevents it from running when invoked directly. Since we only want to
    # use '--verbose', find the path to ldd, then invoke under sh to bypass ldd
    # hardening.
    if [ "${ARCHITECTURE}" = "x86_64" ] || [ "${ARCHITECTURE}" = "amd64" ]; then
        ARTIFACT="linux_amd64"
    elif [ "${ARCHITECTURE}" = "aarch64" ] || [ "${ARCHITECTURE}" = "arm64" ]; then
        ARTIFACT="linux_arm64"
    fi
elif [ "${OS}" = "Darwin" ]; then
    if [ "${ARCHITECTURE}" = "x86_64" ]; then
        printf "Intel Mac support is coming soon!\n"
        printf "Visit our public Discord at \033[4;94mhttps://discord.gg/az4jPm8x${NC} for additional guidance.\n"
        printf "View alternative binaries on our Getting Started guide at \033[4;94mhttps://docs.influxdata.com/influxdb3/${EDITION_TAG}/${NC}.\n"
        exit 1
    else
        ARTIFACT="darwin_arm64"
    fi
fi

# Exit if unsupported system
[ -n "${ARTIFACT}" ] || {
    printf "Unfortunately this script doesn't support your '${OS}' | '${ARCHITECTURE}' setup, or was unable to identify it correctly.\n"
    printf "Visit our public Discord at \033[4;94mhttps://discord.gg/az4jPm8x${NC} for additional guidance.\n"
    printf "View alternative binaries on our Getting Started guide at \033[4;94mhttps://docs.influxdata.com/influxdb3/${EDITION_TAG}/${NC}.\n"
    exit 1
}

URL="https://dl.influxdata.com/influxdb/releases/influxdb3-${EDITION_TAG}-${INFLUXDB_VERSION}_${ARTIFACT}.tar.gz"



# ==========================Reusable Script Functions ==========================

# Function to find available port
find_available_port() {
    show_progress="${1:-true}"
    lsof_exec=$(command -v lsof) && {
        while [ -n "$lsof_exec" ] && lsof -i:"$PORT" -t >/dev/null 2>&1; do
            if [ "$show_progress" = "true" ]; then
                printf "├─${DIM} Port %s is in use. Finding new port.${NC}\n" "$PORT"
            fi
            PORT=$((PORT + 1))
            if [ "$PORT" -gt 32767 ]; then
                printf "└─${DIM} Could not find an available port. Aborting.${NC}\n"
                exit 1
            fi
            if ! "$lsof_exec" -i:"$PORT" -t >/dev/null 2>&1; then
                if [ "$show_progress" = "true" ]; then
                    printf "└─${DIM} Found an available port: %s${NC}\n" "$PORT"
                fi
                break
            fi
        done
    }
}

# Function to set up Quick Start defaults for both Core and Enterprise
setup_quick_start_defaults() {
    edition="${1:-core}"

    NODE_ID="node0"
    STORAGE_TYPE="File Storage"
    STORAGE_PATH="$HOME/.influxdb/data"
    PLUGIN_PATH="$HOME/.influxdb/plugins"
    STORAGE_FLAGS="--object-store=file --data-dir ${STORAGE_PATH} --plugin-dir ${PLUGIN_PATH}"
    STORAGE_FLAGS_ECHO="--object-store=file --data-dir ${STORAGE_PATH} --plugin-dir ${PLUGIN_PATH}"
    START_SERVICE="y" # Always set for Quick Start

    # Enterprise-specific settings
    if [ "$edition" = "enterprise" ]; then
        CLUSTER_ID="cluster0"
        LICENSE_FILE_PATH="${STORAGE_PATH}/${CLUSTER_ID}/trial_or_home_license"
    fi

    # Create directories
    mkdir -p "${STORAGE_PATH}"
    mkdir -p "${PLUGIN_PATH}"
}

# Function to configure AWS S3 storage
configure_aws_s3_storage() {
    echo
    printf "${BOLD}AWS S3 Configuration${NC}\n"
    printf "├─ Enter AWS Access Key ID: "
    read -r AWS_KEY

    printf "├─ Enter AWS Secret Access Key: "
    stty -echo
    read -r AWS_SECRET
    stty echo

    echo
    printf "├─ Enter S3 Bucket: "
    read -r AWS_BUCKET

    printf "└─ Enter AWS Region (default: us-east-1): "
    read -r AWS_REGION
    AWS_REGION=${AWS_REGION:-"us-east-1"}

    STORAGE_FLAGS="--object-store=s3 --bucket=${AWS_BUCKET}"
    if [ -n "$AWS_REGION" ]; then
        STORAGE_FLAGS="$STORAGE_FLAGS --aws-default-region=${AWS_REGION}"
    fi
    STORAGE_FLAGS="$STORAGE_FLAGS --aws-access-key-id=${AWS_KEY}"
    STORAGE_FLAGS_ECHO="$STORAGE_FLAGS --aws-secret-access-key=..."
    STORAGE_FLAGS="$STORAGE_FLAGS --aws-secret-access-key=${AWS_SECRET}"
}

# Function to configure Azure storage
configure_azure_storage() {
    echo
    printf "${BOLD}Azure Storage Configuration${NC}\n"
    printf "├─ Enter Storage Account Name: "
    read -r AZURE_ACCOUNT

    printf "└─ Enter Storage Access Key: "
    stty -echo
    read -r AZURE_KEY
    stty echo

    echo
    STORAGE_FLAGS="--object-store=azure --azure-storage-account=${AZURE_ACCOUNT}"
    STORAGE_FLAGS_ECHO="$STORAGE_FLAGS --azure-storage-access-key=..."
    STORAGE_FLAGS="$STORAGE_FLAGS --azure-storage-access-key=${AZURE_KEY}"
}

# Function to configure Google Cloud storage
configure_google_cloud_storage() {
    echo
    printf "${BOLD}Google Cloud Storage Configuration${NC}\n"
    printf "└─ Enter path to service account JSON file: "
    read -r GOOGLE_SA
    STORAGE_FLAGS="--object-store=google --google-service-account=${GOOGLE_SA}"
    STORAGE_FLAGS_ECHO="$STORAGE_FLAGS"
}

# Function to set up license for Enterprise Quick Start
setup_license_for_quick_start() {
    # Check if license file exists
    if [ -f "$LICENSE_FILE_PATH" ]; then
        printf "${DIM}Found existing license file, using it for quick start.${NC}\n"
        LICENSE_TYPE=""
        LICENSE_EMAIL=""
        LICENSE_DESC="Existing"
    else
        # Prompt for license type and email only
        echo
        printf "${BOLD}License Setup Required${NC}\n"
        printf "1) ${GREEN}Trial${NC} ${DIM}- Full features for 30 days (up to 256 cores)${NC}\n"
        printf "2) ${GREEN}Home${NC} ${DIM}- Free for non-commercial use (max 2 cores, single node)${NC}\n"
        echo
        printf "Enter choice (1-2): "
        read -r LICENSE_CHOICE

        case "${LICENSE_CHOICE:-1}" in
            1)
                LICENSE_TYPE="trial"
                LICENSE_DESC="Trial"
                ;;
            2)
                LICENSE_TYPE="home"
                LICENSE_DESC="Home"
                ;;
            *)
                LICENSE_TYPE="trial"
                LICENSE_DESC="Trial"
                ;;
        esac

        printf "Enter your email: "
        read -r LICENSE_EMAIL
        while [ -z "$LICENSE_EMAIL" ]; do
            printf "Email is required. Enter your email: "
            read -r LICENSE_EMAIL
        done
    fi
}

# Function to prompt for storage configuration
prompt_storage_configuration() {
    # Prompt for storage solution
    echo
    printf "${BOLD}Select Your Storage Solution${NC}\n"
    printf "├─ 1) File storage (Persistent)\n"
    printf "├─ 2) Object storage (Persistent)\n"
    printf "├─ 3) In-memory storage (Non-persistent)\n"
    printf "└─ Enter your choice (1-3): "
    read -r STORAGE_CHOICE

    case "$STORAGE_CHOICE" in
        1)
            STORAGE_TYPE="File Storage"
            echo
            printf "Enter storage path (default: %s/data): " "${INSTALL_LOC}"
            read -r STORAGE_PATH
            STORAGE_PATH=${STORAGE_PATH:-"$INSTALL_LOC/data"}
            STORAGE_FLAGS="--object-store=file --data-dir ${STORAGE_PATH}"
            STORAGE_FLAGS_ECHO="$STORAGE_FLAGS"
            ;;
        2)
            STORAGE_TYPE="Object Storage"
            echo
            printf "${BOLD}Select Cloud Provider${NC}\n"
            printf "├─ 1) Amazon S3\n"
            printf "├─ 2) Azure Storage\n"
            printf "├─ 3) Google Cloud Storage\n"
            printf "└─ Enter your choice (1-3): "
            read -r CLOUD_CHOICE

            case "$CLOUD_CHOICE" in
                1) # AWS S3
                    configure_aws_s3_storage
                    ;;
                2) # Azure Storage
                    configure_azure_storage
                    ;;
                3) # Google Cloud Storage
                    configure_google_cloud_storage
                    ;;
                *)
                    printf "Invalid cloud provider choice. Defaulting to file storage.\n"
                    STORAGE_TYPE="File Storage"
                    STORAGE_FLAGS="--object-store=file --data-dir $INSTALL_LOC/data"
                    STORAGE_FLAGS_ECHO="$STORAGE_FLAGS"
                    ;;
            esac
            ;;
        3)
            STORAGE_TYPE="memory"
            STORAGE_FLAGS="--object-store=memory"
            STORAGE_FLAGS_ECHO="$STORAGE_FLAGS"
            ;;
        *)
            printf "Invalid choice. Defaulting to file storage.\n"
            STORAGE_TYPE="File Storage"
            STORAGE_FLAGS="--object-store=file --data-dir $INSTALL_LOC/data"
            STORAGE_FLAGS_ECHO="$STORAGE_FLAGS"
            ;;
    esac
}

# Function to perform health check on server
perform_server_health_check() {
    timeout_seconds="${1:-30}"
    is_enterprise="${2:-false}"

    SUCCESS=0
    EMAIL_MESSAGE_SHOWN=false

    for i in $(seq 1 "$timeout_seconds"); do
        # on systems without a usable lsof, sleep a second to see if the pid is
        # still there to give influxdb a chance to error out in case an already
        # running influxdb is running on this port
        if [ -z "$lsof_exec" ]; then
            sleep 1
        fi

        if ! kill -0 "$PID" 2>/dev/null ; then
            if [ "$is_enterprise" = "true" ]; then
                printf "└─${DIM} Server process stopped unexpectedly${NC}\n"
            fi
            break
        fi

        if curl --max-time 1 -s "http://localhost:$PORT/health" >/dev/null 2>&1; then
            printf "\n${BOLDGREEN}✓ InfluxDB 3 ${EDITION} is now installed and running on port %s. Nice!${NC}\n" "$PORT"
            SUCCESS=1
            break
        fi

        # Show email verification message after 10 seconds for Enterprise
        if [ "$is_enterprise" = "true" ] && [ "$i" -eq 10 ] && [ "$EMAIL_MESSAGE_SHOWN" = "false" ]; then
            printf "├─${DIM} Checking license activation - please verify your email${NC}\n"
            EMAIL_MESSAGE_SHOWN=true
        fi

        # Show progress updates every 15 seconds after initial grace period
        if [ "$is_enterprise" = "true" ] && [ "$i" -gt 5 ] && [ $((i % 15)) -eq 0 ]; then
            printf "├─${DIM} Waiting for license verification (%s/%ss)${NC}\n" "$i" "$timeout_seconds"
        fi

        sleep 1
    done

    if [ $SUCCESS -eq 0 ]; then
        if [ "$is_enterprise" = "true" ]; then
            printf "└─${BOLD} ERROR: InfluxDB Enterprise failed to start within %s seconds${NC}\n" "$timeout_seconds"
            if [ "$show_progress" = "true" ]; then
                printf "   This may be due to:\n"
                printf "   ├─ Email verification required (check your email)\n"
                printf "   ├─ Network connectivity issues during license retrieval\n"
                printf "   ├─ Invalid license type or email format\n"
                printf "   ├─ Port %s already in use\n" "$PORT"
                printf "   └─ Server startup issues\n"
            else
                if [ -n "$LICENSE_TYPE" ]; then
                    printf "   ├─ Check your email for license verification if required\n"
                fi
                printf "   ├─ Network connectivity issues\n"
                printf "   └─ Port %s conflicts\n" "$PORT"
            fi

            # Kill the background process if it's still running
            if kill -0 "$PID" 2>/dev/null; then
                printf "   Stopping background server process...\n"
                kill "$PID" 2>/dev/null
            fi
        else
            printf "└─${BOLD} ERROR: InfluxDB failed to start; check permissions or other potential issues.${NC}\n"
            exit 1
        fi
    fi
}

# Function to display Enterprise server command
display_enterprise_server_command() {
    is_quick_start="${1:-false}"

    if [ "$is_quick_start" = "true" ]; then
        # Quick Start format
        printf "└─${DIM} Command: ${NC}\n"
        printf "${DIM} influxdb3 serve \\\\${NC}\n"
        printf "${DIM} --cluster-id=%s \\\\${NC}\n" "$CLUSTER_ID"
        printf "${DIM} --node-id=%s \\\\${NC}\n" "$NODE_ID"
        if [ -n "$LICENSE_TYPE" ] && [ -n "$LICENSE_EMAIL" ]; then
            printf "${DIM} --license-type=%s \\\\${NC}\n" "$LICENSE_TYPE"
            printf "${DIM} --license-email=%s \\\\${NC}\n" "$LICENSE_EMAIL"
        fi
        printf "${DIM} --http-bind=0.0.0.0:%s \\\\${NC}\n" "$PORT"
        printf "${DIM} %s${NC}\n" "$STORAGE_FLAGS_ECHO"
        echo
    else
        # Custom configuration format
        printf "│\n"
        printf "├─ Running serve command:\n"
        printf "├─${DIM} influxdb3 serve \\\\${NC}\n"
        printf "├─${DIM} --cluster-id='%s' \\\\${NC}\n" "$CLUSTER_ID"
        printf "├─${DIM} --node-id='%s' \\\\${NC}\n" "$NODE_ID"
        printf "├─${DIM} --license-type='%s' \\\\${NC}\n" "$LICENSE_TYPE"
        printf "├─${DIM} --license-email='%s' \\\\${NC}\n" "$LICENSE_EMAIL"
        printf "├─${DIM} --http-bind='0.0.0.0:%s' \\\\${NC}\n" "$PORT"
        printf "├─${DIM} %s${NC}\n" "$STORAGE_FLAGS_ECHO"
        printf "│\n"
    fi
}



# =========================Installation==========================

# Attempt to clear screen and show welcome message
clear 2>/dev/null || true # clear isn't available everywhere
printf "┌───────────────────────────────────────────────────┐\n"
printf "│ ${BOLD}Welcome to InfluxDB!${NC} We'll make this quick.      │\n"
printf "└───────────────────────────────────────────────────┘\n"

echo
printf "${BOLD}Select Installation Type${NC}\n"
echo
printf "1) ${GREEN}Docker Image${NC} ${DIM}(The official Docker image)${NC}\n"
printf "2) ${GREEN}Simple Download${NC} ${DIM}(No dependencies required)${NC}\n"
echo
printf "Enter your choice (1-2): "
read -r INSTALL_TYPE

case "$INSTALL_TYPE" in
    1)
        printf "\n\n${BOLD}Download and Tag Docker Image${NC}\n"
        printf "├─ ${DIM}docker pull influxdb:3-${EDITION_TAG}${NC}\n"
        printf "└─ ${DIM}docker tag influxdb:3-${EDITION_TAG} influxdb3-${EDITION_TAG}${NC}\n\n"
        if ! docker pull "influxdb:3-${EDITION_TAG}"; then
            printf "└─ Error: Failed to download Docker image.\n"
            exit 1
        fi
        docker tag "influxdb:3-${EDITION_TAG}" "influxdb3-${EDITION_TAG}"
        # Exit script after Docker installation
        echo
        printf "${BOLDGREEN}✓ InfluxDB 3 ${EDITION} successfully pulled. Nice!${NC}\n\n"
        printf "${BOLD}NEXT STEPS${NC}\n"
        printf "1) Run the Docker image:\n"
        printf "   └─ ${BOLD}docker run -it -p 8181:8181 --name influxdb3-container \\"
        printf "\n      --volume ~/.influxdb3_data:/.data --volume ~/.influxdb3_plugins:/plugins influxdb:3-${EDITION_TAG} \\"
        printf "\n      influxdb3 serve"
        if [ "${EDITION}" = "Enterprise" ]; then
            printf " --cluster-id c0"
        fi
        printf " --node-id node0 --object-store file --data-dir /.data --plugin-dir /plugins${NC}\n\n"
        printf "2) ${NC}Create a token: ${BOLD}docker exec -it influxdb3-container influxdb3 create token --admin${NC} \n\n"
        printf "3) Begin writing data! Learn more at https://docs.influxdata.com/influxdb3/${EDITION_TAG}/get-started/write/\n\n"
        printf "┌────────────────────────────────────────────────────────────────────────────────────────┐\n"
        printf "│ Looking to use a UI for querying, plugins, management, and more?                       │\n"
        printf "│ Get InfluxDB 3 Explorer at ${BLUE}https://docs.influxdata.com/influxdb3/explorer/#quick-start${NC} │\n"
        printf "└────────────────────────────────────────────────────────────────────────────────────────┘\n\n"
        exit 0
        ;;
    2)
        printf "\n\n"
        ;;
    *)
        printf "Invalid choice. Defaulting to binary installation.\n\n"
        ;;
esac

# attempt to find the user's shell config
|
||||
shellrc=
|
||||
if [ -n "$SHELL" ]; then
|
||||
tmp=~/.$(basename "$SHELL")rc
|
||||
if [ -e "$tmp" ]; then
|
||||
shellrc="$tmp"
|
||||
fi
|
||||
fi
|
||||
|
||||
printf "${BOLD}Downloading InfluxDB 3 %s to %s${NC}\n" "$EDITION" "$INSTALL_LOC"
|
||||
printf "├─${DIM} mkdir -p '%s'${NC}\n" "$INSTALL_LOC"
|
||||
mkdir -p "$INSTALL_LOC"
|
||||
printf "└─${DIM} curl -sSL '%s' -o '%s/influxdb3-${EDITION_TAG}.tar.gz'${NC}\n" "${URL}" "$INSTALL_LOC"
|
||||
curl -sSL "${URL}" -o "$INSTALL_LOC/influxdb3-${EDITION_TAG}.tar.gz"
|
||||
|
||||
echo
|
||||
printf "${BOLD}Verifying '%s/influxdb3-${EDITION_TAG}.tar.gz'${NC}\n" "$INSTALL_LOC"
|
||||
printf "└─${DIM} curl -sSL '%s.sha256' -o '%s/influxdb3-${EDITION_TAG}.tar.gz.sha256'${NC}\n" "${URL}" "$INSTALL_LOC"
|
||||
curl -sSL "${URL}.sha256" -o "$INSTALL_LOC/influxdb3-${EDITION_TAG}.tar.gz.sha256"
|
||||
dl_sha=$(cut -d ' ' -f 1 "$INSTALL_LOC/influxdb3-${EDITION_TAG}.tar.gz.sha256" | grep -E '^[0-9a-f]{64}$')
|
||||
if [ -z "$dl_sha" ]; then
|
||||
printf "Could not find properly formatted SHA256 in '%s/influxdb3-${EDITION_TAG}.tar.gz.sha256'. Aborting.\n" "$INSTALL_LOC"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
ch_sha=
if [ "${OS}" = "Darwin" ]; then
    printf "└─${DIM} shasum -a 256 '%s/influxdb3-${EDITION_TAG}.tar.gz'" "$INSTALL_LOC"
    ch_sha=$(shasum -a 256 "$INSTALL_LOC/influxdb3-${EDITION_TAG}.tar.gz" | cut -d ' ' -f 1)
else
    printf "└─${DIM} sha256sum '%s/influxdb3-${EDITION_TAG}.tar.gz'" "$INSTALL_LOC"
    ch_sha=$(sha256sum "$INSTALL_LOC/influxdb3-${EDITION_TAG}.tar.gz" | cut -d ' ' -f 1)
fi
if [ "$ch_sha" = "$dl_sha" ]; then
    printf " (OK: %s = %s)${NC}\n" "$ch_sha" "$dl_sha"
else
    printf " (ERROR: %s != %s). Aborting.${NC}\n" "$ch_sha" "$dl_sha"
    exit 1
fi
printf "└─${DIM} rm '%s/influxdb3-${EDITION_TAG}.tar.gz.sha256'${NC}\n" "$INSTALL_LOC"
rm "$INSTALL_LOC/influxdb3-${EDITION_TAG}.tar.gz.sha256"

echo
printf "${BOLD}Extracting and Processing${NC}\n"

# some tarballs have a leading component, check for that
TAR_LEVEL=0
if tar -tf "$INSTALL_LOC/influxdb3-${EDITION_TAG}.tar.gz" | grep -q '[a-zA-Z0-9]/influxdb3$' ; then
    TAR_LEVEL=1
fi
printf "├─${DIM} tar -xf '%s/influxdb3-${EDITION_TAG}.tar.gz' --strip-components=${TAR_LEVEL} -C '%s'${NC}\n" "$INSTALL_LOC" "$INSTALL_LOC"
tar -xf "$INSTALL_LOC/influxdb3-${EDITION_TAG}.tar.gz" --strip-components="${TAR_LEVEL}" -C "$INSTALL_LOC"

printf "└─${DIM} rm '%s/influxdb3-${EDITION_TAG}.tar.gz'${NC}\n" "$INSTALL_LOC"
rm "$INSTALL_LOC/influxdb3-${EDITION_TAG}.tar.gz"

if [ -n "$shellrc" ] && ! grep -q "export PATH=.*$INSTALL_LOC" "$shellrc"; then
    echo
    printf "${BOLD}Adding InfluxDB to '%s'${NC}\n" "$shellrc"
    # Echo the command exactly as it is run below
    printf "└─${DIM} echo 'export PATH=\"\$PATH:%s/\"' >> '%s'${NC}\n" "$INSTALL_LOC" "$shellrc"
    echo "export PATH=\"\$PATH:$INSTALL_LOC/\"" >> "$shellrc"
fi

if [ "${EDITION}" = "Core" ]; then
    # Prompt user for startup options
    echo
    printf "${BOLD}What would you like to do next?${NC}\n"
    printf "1) ${GREEN}Quick Start${NC} ${DIM}(recommended; data stored at %s/data)${NC}\n" "${INSTALL_LOC}"
    printf "2) ${GREEN}Custom Configuration${NC} ${DIM}(configure all options manually)${NC}\n"
    printf "3) ${GREEN}Skip startup${NC} ${DIM}(install only)${NC}\n"
    echo
    printf "Enter your choice (1-3): "
    read -r STARTUP_CHOICE
    STARTUP_CHOICE=${STARTUP_CHOICE:-1}

    case "$STARTUP_CHOICE" in
        1)
            # Quick Start - use defaults
            setup_quick_start_defaults core
            ;;
        2)
            # Custom Configuration - existing detailed flow
            START_SERVICE="y"
            ;;
        3)
            # Skip startup
            START_SERVICE="n"
            ;;
        *)
            printf "Invalid choice. Using Quick Start (option 1).\n"
            setup_quick_start_defaults core
            ;;
    esac

    if [ "$START_SERVICE" = "y" ] && [ "$STARTUP_CHOICE" = "2" ]; then
        # Prompt for Node ID
        echo
        printf "${BOLD}Enter Your Node ID${NC}\n"
        printf "├─ A Node ID is a unique, uneditable identifier for a service.\n"
        printf "└─ Enter a Node ID (default: node0): "
        read -r NODE_ID
        NODE_ID=${NODE_ID:-node0}

        # Prompt for storage solution
        prompt_storage_configuration

        # Ensure port is available; if not, find a new one.
        find_available_port

        # Start and give up to 30 seconds to respond
        echo

        # Create logs directory and generate timestamped log filename
        mkdir -p "$INSTALL_LOC/logs"
        LOG_FILE="$INSTALL_LOC/logs/$(date +%Y%m%d_%H%M%S).log"

        printf "${BOLD}Starting InfluxDB${NC}\n"
        printf "├─${DIM} Node ID: %s${NC}\n" "$NODE_ID"
        printf "├─${DIM} Storage: %s${NC}\n" "$STORAGE_TYPE"
        printf "├─${DIM} Logs: %s${NC}\n" "$LOG_FILE"
        printf "├─${DIM} influxdb3 serve \\\\${NC}\n"
        printf "├─${DIM} --node-id='%s' \\\\${NC}\n" "$NODE_ID"
        printf "├─${DIM} --http-bind='0.0.0.0:%s' \\\\${NC}\n" "$PORT"
        printf "└─${DIM} %s${NC}\n" "$STORAGE_FLAGS_ECHO"

        "$INSTALL_LOC/$BINARY_NAME" serve --node-id="$NODE_ID" --http-bind="0.0.0.0:$PORT" $STORAGE_FLAGS >> "$LOG_FILE" 2>&1 &
        PID="$!"

        perform_server_health_check 30

    elif [ "$START_SERVICE" = "y" ] && [ "$STARTUP_CHOICE" = "1" ]; then
        # Quick Start flow - minimal output, just start the server
        echo
        printf "${BOLD}Starting InfluxDB (Quick Start)${NC}\n"
        printf "├─${DIM} Node ID: %s${NC}\n" "$NODE_ID"
        printf "├─${DIM} Storage: %s/data${NC}\n" "${INSTALL_LOC}"
        printf "├─${DIM} Plugins: %s/plugins${NC}\n" "${INSTALL_LOC}"
        printf "├─${DIM} Logs: %s/logs/$(date +%Y%m%d_%H%M%S).log${NC}\n" "${INSTALL_LOC}"

        # Ensure port is available; if not, find a new one.
        ORIGINAL_PORT="$PORT"
        find_available_port false

        # Show port result
        if [ "$PORT" != "$ORIGINAL_PORT" ]; then
            printf "├─${DIM} Found available port: %s (%s-%s in use)${NC}\n" "$PORT" "$ORIGINAL_PORT" "$((PORT - 1))"
        fi

        # Show the command being executed
        printf "└─${DIM} Command:${NC}\n"
        printf "${DIM} influxdb3 serve \\\\${NC}\n"
        printf "${DIM} --node-id=%s \\\\${NC}\n" "$NODE_ID"
        printf "${DIM} --http-bind=0.0.0.0:%s \\\\${NC}\n" "$PORT"
        printf "${DIM} %s${NC}\n\n" "$STORAGE_FLAGS_ECHO"

        # Create logs directory and generate timestamped log filename
        mkdir -p "$INSTALL_LOC/logs"
        LOG_FILE="$INSTALL_LOC/logs/$(date +%Y%m%d_%H%M%S).log"

        # Start server in background
        "$INSTALL_LOC/$BINARY_NAME" serve --node-id="$NODE_ID" --http-bind="0.0.0.0:$PORT" $STORAGE_FLAGS >> "$LOG_FILE" 2>&1 &
        PID="$!"

        perform_server_health_check 30

    else
        echo
        printf "${BOLDGREEN}✓ InfluxDB 3 ${EDITION} is now installed. Nice!${NC}\n"
    fi
else
    # Enterprise startup options
    echo
    printf "${BOLD}What would you like to do next?${NC}\n"
    printf "1) ${GREEN}Quick Start${NC} ${DIM}(recommended; data stored at %s/data)${NC}\n" "${INSTALL_LOC}"
    printf "2) ${GREEN}Custom Configuration${NC} ${DIM}(configure all options manually)${NC}\n"
    printf "3) ${GREEN}Skip startup${NC} ${DIM}(install only)${NC}\n"
    echo
    printf "Enter your choice (1-3): "
    read -r STARTUP_CHOICE
    STARTUP_CHOICE=${STARTUP_CHOICE:-1}

    case "$STARTUP_CHOICE" in
        1)
            # Quick Start - use defaults and check for existing license
            setup_quick_start_defaults enterprise
            setup_license_for_quick_start

            STORAGE_FLAGS="--object-store=file --data-dir ${STORAGE_PATH} --plugin-dir ${PLUGIN_PATH}"
            STORAGE_FLAGS_ECHO="--object-store=file --data-dir ${STORAGE_PATH} --plugin-dir ${PLUGIN_PATH}"
            START_SERVICE="y"
            ;;
        2)
            # Custom Configuration - existing detailed flow
            START_SERVICE="y"
            ;;
        3)
            # Skip startup
            START_SERVICE="n"
            ;;
        *)
            printf "Invalid choice. Using Quick Start (option 1).\n"
            # Same as option 1
            setup_quick_start_defaults enterprise
            setup_license_for_quick_start

            STORAGE_FLAGS="--object-store=file --data-dir ${STORAGE_PATH} --plugin-dir ${PLUGIN_PATH}"
            STORAGE_FLAGS_ECHO="--object-store=file --data-dir ${STORAGE_PATH} --plugin-dir ${PLUGIN_PATH}"
            START_SERVICE="y"
            ;;
    esac

    if [ "$START_SERVICE" = "y" ] && [ "$STARTUP_CHOICE" = "1" ]; then
        # Enterprise Quick Start flow
        echo
        printf "${BOLD}Starting InfluxDB Enterprise (Quick Start)${NC}\n"
        printf "├─${DIM} Cluster ID: %s${NC}\n" "$CLUSTER_ID"
        printf "├─${DIM} Node ID: %s${NC}\n" "$NODE_ID"
        if [ -n "$LICENSE_TYPE" ]; then
            printf "├─${DIM} License Type: %s${NC}\n" "$LICENSE_DESC"
        fi
        if [ -n "$LICENSE_EMAIL" ]; then
            printf "├─${DIM} Email: %s${NC}\n" "$LICENSE_EMAIL"
        fi
        printf "├─${DIM} Storage: %s/data${NC}\n" "${INSTALL_LOC}"
        printf "├─${DIM} Plugins: %s/plugins${NC}\n" "${INSTALL_LOC}"

        # Create logs directory and generate timestamped log filename
        mkdir -p "$INSTALL_LOC/logs"
        LOG_FILE="$INSTALL_LOC/logs/$(date +%Y%m%d_%H%M%S).log"
        printf "├─${DIM} Logs: %s${NC}\n" "$LOG_FILE"

        # Ensure port is available; if not, find a new one.
        ORIGINAL_PORT="$PORT"
        find_available_port false

        # Show port result
        if [ "$PORT" != "$ORIGINAL_PORT" ]; then
            printf "├─${DIM} Found available port: %s (%s-%s in use)${NC}\n" "$PORT" "$ORIGINAL_PORT" "$((PORT - 1))"
        fi

        # Show the command being executed
        display_enterprise_server_command true

        # Start server in background with or without license flags
        if [ -n "$LICENSE_TYPE" ] && [ -n "$LICENSE_EMAIL" ]; then
            # New license needed
            "$INSTALL_LOC/$BINARY_NAME" serve --cluster-id="$CLUSTER_ID" --node-id="$NODE_ID" --license-type="$LICENSE_TYPE" --license-email="$LICENSE_EMAIL" --http-bind="0.0.0.0:$PORT" $STORAGE_FLAGS >> "$LOG_FILE" 2>&1 &
        else
            # Existing license file
            "$INSTALL_LOC/$BINARY_NAME" serve --cluster-id="$CLUSTER_ID" --node-id="$NODE_ID" --http-bind="0.0.0.0:$PORT" $STORAGE_FLAGS >> "$LOG_FILE" 2>&1 &
        fi
        PID="$!"

        printf "├─${DIM} Server started in background (PID: %s)${NC}\n" "$PID"

        perform_server_health_check 90 true

    elif [ "$START_SERVICE" = "y" ] && [ "$STARTUP_CHOICE" = "2" ]; then
        # Enterprise Custom Start flow
        echo
        # Prompt for Cluster ID
        printf "${BOLD}Enter Your Cluster ID${NC}\n"
        printf "├─ A Cluster ID determines part of the storage path hierarchy.\n"
        printf "├─ All nodes within the same cluster share this identifier.\n"
        printf "└─ Enter a Cluster ID (default: cluster0): "
        read -r CLUSTER_ID
        CLUSTER_ID=${CLUSTER_ID:-cluster0}

        # Prompt for Node ID
        echo
        printf "${BOLD}Enter Your Node ID${NC}\n"
        printf "├─ A Node ID distinguishes individual server instances within the cluster.\n"
        printf "└─ Enter a Node ID (default: node0): "
        read -r NODE_ID
        NODE_ID=${NODE_ID:-node0}

        # Prompt for license type
        echo
        printf "${BOLD}Select Your License Type${NC}\n"
        printf "├─ 1) Trial - Full features for 30 days (up to 256 cores)\n"
        printf "├─ 2) Home - Free for non-commercial use (max 2 cores, single node)\n"
        printf "└─ Enter your choice (1-2): "
        read -r LICENSE_CHOICE

        case "$LICENSE_CHOICE" in
            1)
                LICENSE_TYPE="trial"
                LICENSE_DESC="Trial"
                ;;
            2)
                LICENSE_TYPE="home"
                LICENSE_DESC="Home"
                ;;
            *)
                printf "Invalid choice. Defaulting to trial.\n"
                LICENSE_TYPE="trial"
                LICENSE_DESC="Trial"
                ;;
        esac

        # Prompt for email
        echo
        printf "${BOLD}Enter Your Email Address${NC}\n"
        printf "├─ Required for license verification and activation\n"
        printf "├─ You may need to check your email for verification\n"
        printf "└─ Email: "
        read -r LICENSE_EMAIL

        while [ -z "$LICENSE_EMAIL" ]; do
            printf "├─ Email address is required. Please enter your email: "
            read -r LICENSE_EMAIL
        done

        # Prompt for storage solution
        prompt_storage_configuration

        # Ensure port is available; if not, find a new one.
        find_available_port

        # Start Enterprise in background with licensing and give up to 90 seconds to respond (licensing takes longer)
        echo
        printf "${BOLD}Starting InfluxDB Enterprise${NC}\n"
        printf "├─${DIM} Cluster ID: %s${NC}\n" "$CLUSTER_ID"
        printf "├─${DIM} Node ID: %s${NC}\n" "$NODE_ID"
        printf "├─${DIM} License Type: %s${NC}\n" "$LICENSE_DESC"
        printf "├─${DIM} Email: %s${NC}\n" "$LICENSE_EMAIL"
        printf "├─${DIM} Storage: %s${NC}\n" "$STORAGE_TYPE"

        # Create logs directory and generate timestamped log filename
        mkdir -p "$INSTALL_LOC/logs"
        LOG_FILE="$INSTALL_LOC/logs/$(date +%Y%m%d_%H%M%S).log"
        printf "├─${DIM} Logs: %s${NC}\n" "$LOG_FILE"

        display_enterprise_server_command false

        # Start server in background
        "$INSTALL_LOC/$BINARY_NAME" serve --cluster-id="$CLUSTER_ID" --node-id="$NODE_ID" --license-type="$LICENSE_TYPE" --license-email="$LICENSE_EMAIL" --http-bind="0.0.0.0:$PORT" $STORAGE_FLAGS >> "$LOG_FILE" 2>&1 &
        PID="$!"

        printf "├─${DIM} Server started in background (PID: %s)${NC}\n" "$PID"

        perform_server_health_check 90 true

    else
        echo
        printf "${BOLDGREEN}✓ InfluxDB 3 ${EDITION} is now installed. Nice!${NC}\n"
    fi
fi

### SUCCESS INFORMATION ###
echo
if [ "${EDITION}" = "Enterprise" ] && [ "$SUCCESS" -eq 0 ] 2>/dev/null; then
    printf "${BOLD}Server startup failed${NC} - troubleshooting options:\n"
    printf "├─ ${BOLD}Check email verification:${NC} Look for verification email and click the link\n"
    printf "├─ ${BOLD}Manual startup:${NC} Try running the server manually to see detailed logs:\n"
    printf " influxdb3 serve \\\\\n"
    printf " --cluster-id=%s \\\\\n" "${CLUSTER_ID:-cluster0}"
    printf " --node-id=%s \\\\\n" "${NODE_ID:-node0}"
    printf " --license-type=%s \\\\\n" "${LICENSE_TYPE:-trial}"
    printf " --license-email=%s \\\\\n" "${LICENSE_EMAIL:-your@email.com}"
    printf " %s\n" "${STORAGE_FLAGS_ECHO:-"--object-store=file --data-dir $INSTALL_LOC/data --plugin-dir $INSTALL_LOC/plugins"}"
    printf "└─ ${BOLD}Common issues:${NC} Network connectivity, invalid email format, port conflicts\n"
else
    printf "${BOLD}Next Steps${NC}\n"
    if [ -n "$shellrc" ]; then
        printf "├─ Run ${BOLD}source '%s'${NC}, then access InfluxDB with the ${BOLD}influxdb3${NC} command.\n" "$shellrc"
    else
        printf "├─ Access InfluxDB with the ${BOLD}influxdb3${NC} command.\n"
    fi
    printf "├─ Create admin token: ${BOLD}influxdb3 create token --admin${NC}\n"
    printf "└─ Begin writing data! Learn more at https://docs.influxdata.com/influxdb3/${EDITION_TAG}/get-started/write/\n\n"
fi

printf "┌────────────────────────────────────────────────────────────────────────────────────────┐\n"
printf "│ Looking to use a UI for querying, plugins, management, and more? │\n"
printf "│ Get InfluxDB 3 Explorer at ${BLUE}https://docs.influxdata.com/influxdb3/explorer/#quick-start${NC} │\n"
printf "└────────────────────────────────────────────────────────────────────────────────────────┘\n\n"