fix: updating writer-id to node-id

pull/5803/head
Peter Barnett 2025-01-24 12:40:08 -05:00
parent 92ebd7f238
commit d3625a52fb
2 changed files with 38 additions and 38 deletions


@@ -139,21 +139,21 @@ To start your InfluxDB instance, use the `influxdb3 serve` command
and provide the following:
- `--object-store`: Specifies the type of Object store to use. InfluxDB supports the following: local file system (`file`), `memory`, S3 (and compatible services like Ceph or Minio) (`s3`), Google Cloud Storage (`google`), and Azure Blob Storage (`azure`).
- `--writer-id`: A string identifier that determines the server's storage path within the configured storage location
- `--node-id`: A string identifier that determines the server's storage path within the configured storage location
The following examples show how to start InfluxDB 3 with different object store configurations:
```bash
# MEMORY
# Stores data in RAM; doesn't persist data
influxdb3 serve --writer-id=local01 --object-store=memory
influxdb3 serve --node-id=local01 --object-store=memory
```
```bash
# FILESYSTEM
# Provide the filesystem directory
influxdb3 serve \
--writer-id=local01 \
--node-id=local01 \
--object-store=file \
--data-dir ~/.influxdb3
```
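Once the server starts, you can confirm that it is listening before writing any data. This is a hedged sketch, assuming the default HTTP bind address of `0.0.0.0:8181` and that the server exposes a `/health` endpoint; adjust the host and port if you pass `--http-bind`.
```bash
# Check that the server responds on its HTTP API.
# Assumes the default port 8181 and a /health endpoint; adjust if you changed --http-bind.
curl --silent --fail http://localhost:8181/health && echo "InfluxDB 3 is up"
```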
@@ -170,7 +170,7 @@ To run the [Docker image](/influxdb3/core/install/#docker-image) and persist dat
docker run -it \
-v /path/on/host:/path/in/container \
quay.io/influxdb/influxdb3-core:latest serve \
--writer-id my_host \
--node-id my_host \
--object-store file \
--data-dir /path/in/container
```
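The command above persists data to the mounted path but doesn't expose the HTTP API outside the container. A hedged variant, assuming the default HTTP port `8181` (publishing the port is not part of the original example):
```bash
# Same as above, but also publishes the default HTTP port (8181) to the host
# so you can write and query from outside the container.
docker run -it \
  -p 8181:8181 \
  -v /path/on/host:/path/in/container \
  quay.io/influxdb/influxdb3-core:latest serve \
  --node-id my_host \
  --object-store file \
  --data-dir /path/in/container
```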
@@ -178,13 +178,13 @@ docker run -it \
```bash
# S3 (defaults to us-east-1 for region)
# Specify the Object store type and associated options
influxdb3 serve --writer-id=local01 --object-store=s3 --bucket=[BUCKET] --aws-access-key=[AWS ACCESS KEY] --aws-secret-access-key=[AWS SECRET ACCESS KEY]
influxdb3 serve --node-id=local01 --object-store=s3 --bucket=[BUCKET] --aws-access-key=[AWS ACCESS KEY] --aws-secret-access-key=[AWS SECRET ACCESS KEY]
```
```bash
# Minio/Open Source Object Store (Uses the AWS S3 API, with additional parameters)
# Specify the Object store type and associated options
influxdb3 serve --writer-id=local01 --object-store=s3 --bucket=[BUCKET] --aws-access-key=[AWS ACCESS KEY] --aws-secret-access-key=[AWS SECRET ACCESS KEY] --aws-endpoint=[ENDPOINT] --aws-allow-http
influxdb3 serve --node-id=local01 --object-store=s3 --bucket=[BUCKET] --aws-access-key=[AWS ACCESS KEY] --aws-secret-access-key=[AWS SECRET ACCESS KEY] --aws-endpoint=[ENDPOINT] --aws-allow-http
```
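If you want to try the S3-compatible path locally, a MinIO container works as a stand-in object store. This is a hedged sketch for testing only; the MinIO image, port, bucket name, and credentials below are illustrative and not part of the original instructions:
```bash
# Start a throwaway local MinIO server (illustrative credentials).
docker run -d --name minio-test \
  -p 9000:9000 \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD=minioadmin \
  quay.io/minio/minio server /data

# Create the bucket first (for example, through the MinIO console or the mc CLI),
# then point InfluxDB 3 at the endpoint. --aws-allow-http is required because the
# local endpoint is plain HTTP.
influxdb3 serve --node-id=local01 --object-store=s3 \
  --bucket=test-bucket \
  --aws-access-key=minioadmin --aws-secret-access-key=minioadmin \
  --aws-endpoint=http://localhost:9000 --aws-allow-http
```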
_For more information about server options, run `influxdb3 serve --help`._


@@ -130,21 +130,21 @@ To start your InfluxDB instance, use the `influxdb3 serve` command
and provide the following:
- `--object-store`: Specifies the type of Object store to use. InfluxDB supports the following: local file system (`file`), `memory`, S3 (and compatible services like Ceph or Minio) (`s3`), Google Cloud Storage (`google`), and Azure Blob Storage (`azure`).
- `--writer-id`: A string identifier that determines the server's storage path within the configured storage location, and, in a multi-node setup, is used to reference the node
- `--node-id`: A string identifier that determines the server's storage path within the configured storage location, and, in a multi-node setup, is used to reference the node
The following examples show how to start InfluxDB 3 with different object store configurations:
```bash
# MEMORY
# Stores data in RAM; doesn't persist data
influxdb3 serve --writer-id=local01 --object-store=memory
influxdb3 serve --node-id=local01 --object-store=memory
```
```bash
# FILESYSTEM
# Provide the filesystem directory
influxdb3 serve \
--writer-id=local01 \
--node-id=local01 \
--object-store=file \
--data-dir ~/.influxdb3
```
@@ -161,7 +161,7 @@ To run the [Docker image](/influxdb3/enterprise/install/#docker-image) and persi
docker run -it \
-v /path/on/host:/path/in/container \
quay.io/influxdb/influxdb3-enterprise:latest serve \
--writer-id my_host \
--node-id my_host \
--object-store file \
--data-dir /path/in/container
```
@@ -169,13 +169,13 @@ docker run -it \
```bash
# S3 (defaults to us-east-1 for region)
# Specify the Object store type and associated options
influxdb3 serve --writer-id=local01 --object-store=s3 --bucket=[BUCKET] --aws-access-key=[AWS ACCESS KEY] --aws-secret-access-key=[AWS SECRET ACCESS KEY]
influxdb3 serve --node-id=local01 --object-store=s3 --bucket=[BUCKET] --aws-access-key=[AWS ACCESS KEY] --aws-secret-access-key=[AWS SECRET ACCESS KEY]
```
```bash
# Minio/Open Source Object Store (Uses the AWS S3 API, with additional parameters)
# Specify the Object store type and associated options
influxdb3 serve --writer-id=local01 --object-store=s3 --bucket=[BUCKET] --aws-access-key=[AWS ACCESS KEY] --aws-secret-access-key=[AWS SECRET ACCESS KEY] --aws-endpoint=[ENDPOINT] --aws-allow-http
influxdb3 serve --node-id=local01 --object-store=s3 --bucket=[BUCKET] --aws-access-key=[AWS ACCESS KEY] --aws-secret-access-key=[AWS SECRET ACCESS KEY] --aws-endpoint=[ENDPOINT] --aws-allow-http
```
_For more information about server options, run `influxdb3 serve --help`._
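If you only need the storage-related flags from that help output, a small hedged convenience (plain shell filtering, nothing InfluxDB-specific):
```bash
# Filter the help text for object-store and credential options (simple text match).
influxdb3 serve --help | grep -iE 'object-store|bucket|aws|node-id'
```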
@@ -783,7 +783,7 @@ The following examples show how to configure and start two nodes
for a basic HA setup.
The example commands pass the following options:
- `--read-from-writer-ids`: makes the node a _read replica_, which checks the Object store for data arriving from other nodes
- `--read-from-node-ids`: makes the node a _read replica_, which checks the Object store for data arriving from other nodes
- `--compactor-id`: activates the Compactor for a node. Only one node can run compaction
- `--run-compactions`: ensures the Compactor runs the compaction process
@@ -791,22 +791,22 @@ The example commands pass the following options:
## NODE 1
# Example variables
# writer-id: 'host01'
# node-id: 'host01'
# bucket: 'influxdb-3-enterprise-storage'
# compactor-id: 'c01'
influxdb3 serve --writer-id=host01 --read-from-writer-ids=host02 --compactor-id=c01 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8181 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
influxdb3 serve --node-id=host01 --read-from-node-ids=host02 --compactor-id=c01 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8181 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
```bash
## NODE 2
# Example variables
# writer-id: 'host02'
# node-id: 'host02'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve --writer-id=host02 --read-from-writer-ids=host01 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8282
influxdb3 serve --node-id=host02 --read-from-node-ids=host01 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8282
--aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
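After both nodes are running, one way to sanity-check replication is to write through one node and read through the other. This is a hedged sketch, assuming the `/api/v3/write_lp` and `/api/v3/query_sql` HTTP endpoints, an example database named `sensors`, and no authentication on this test setup:
```bash
# Write a point through NODE 1 (port 8181); 'sensors' is an example database name.
curl "http://localhost:8181/api/v3/write_lp?db=sensors" \
  --data-raw "home,room=kitchen temp=22.5"

# Query through NODE 2 (port 8282), which reads host01's data from the Object store.
curl "http://localhost:8282/api/v3/query_sql?db=sensors&q=SELECT+*+FROM+home"
```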
@@ -814,7 +814,7 @@ After the nodes have started, querying either node returns data for both nodes,
To add nodes to this setup, start more read replicas:
```bash
influxdb3 serve --read-from-writer-ids=host01,host02 [...OPTIONS]
influxdb3 serve --read-from-node-ids=host01,host02 [...OPTIONS]
```
> [!Note]
@@ -822,12 +822,12 @@ influxdb3 serve --read-from-writer-ids=host01,host02 [...OPTIONS]
>
> ```bash
> # In terminal 1
> influxdb3 serve --writer-id=host01 --http-bind=http://127.0.0.1:8181 [...OPTIONS]
> influxdb3 serve --node-id=host01 --http-bind=http://127.0.0.1:8181 [...OPTIONS]
> ```
>
> ```bash
> # In terminal 2
> influxdb3 serve --writer-id=host01 --http-bind=http://127.0.0.1:8181 [...OPTIONS]
> influxdb3 serve --node-id=host01 --http-bind=http://127.0.0.1:8181 [...OPTIONS]
> ```
### High availability with a dedicated Compactor
@@ -845,20 +845,20 @@ The following examples show how to set up HA with a dedicated Compactor node:
## NODE 1 — Writer/Reader Node #1
# Example variables
# writer-id: 'host01'
# node-id: 'host01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve --writer-id=host01 --compactor-id=c01 --read-from-writer-ids=host02 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8181 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
influxdb3 serve --node-id=host01 --compactor-id=c01 --read-from-node-ids=host02 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8181 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
```bash
## NODE 2 — Writer/Reader Node #2
# Example variables
# writer-id: 'host02'
# node-id: 'host02'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve --writer-id=host02 --compactor-id=c01 --read-from-writer-ids=host01 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
influxdb3 serve --node-id=host02 --compactor-id=c01 --read-from-node-ids=host01 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
2. Start the dedicated compactor node, which uses the following options:
@@ -866,18 +866,18 @@ The following examples show how to set up HA with a dedicated Compactor node:
- `--mode=compactor`: Ensures the node **only** runs compaction.
- `--compaction-hosts`: Specifies a comma-delimited list of hosts to run compaction for.
_**Don't** include the replicas (`--read-from-writer-ids`) parameter because this node doesn't replicate data._
_**Don't** include the replicas (`--read-from-node-ids`) parameter because this node doesn't replicate data._
```bash
## NODE 3 — Compactor Node
# Example variables
# writer-id: 'host03'
# node-id: 'host03'
# bucket: 'influxdb-3-enterprise-storage'
# compactor-id: 'c01'
influxdb3 serve --writer-id=host03 --mode=compactor --compactor-id=c01 --compaction-hosts=host01,host02 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
influxdb3 serve --node-id=host03 --mode=compactor --compactor-id=c01 --compaction-hosts=host01,host02 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
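Because all three nodes share the same bucket and credentials, keeping those values in shell variables can shorten the commands. A hedged convenience sketch that only reuses flags already shown above (the placeholder values are yours to fill in):
```bash
# Shared values used by every node in this setup (placeholders).
BUCKET='influxdb-3-enterprise-storage'
AWS_KEY='<AWS_ACCESS_KEY_ID>'
AWS_SECRET='<AWS_SECRET_ACCESS_KEY>'

# Example: the Compactor node command from above, rewritten with the variables.
influxdb3 serve --node-id=host03 --mode=compactor --compactor-id=c01 \
  --compaction-hosts=host01,host02 --run-compactions \
  --object-store=s3 --bucket="$BUCKET" \
  --aws-access-key-id="$AWS_KEY" --aws-secret-access-key="$AWS_SECRET"
```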
### High availability with read replicas and a dedicated Compactor
@@ -893,10 +893,10 @@ For a very robust and effective setup for managing time-series data, you can run
## NODE 1 — Writer Node #1
# Example variables
# writer-id: 'host01'
# node-id: 'host01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve --writer-id=host01 --mode=read_write --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8181 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
influxdb3 serve --node-id=host01 --mode=read_write --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8181 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
@@ -904,10 +904,10 @@ For a very robust and effective setup for managing time-series data, you can run
## NODE 2 — Writer Node #2
# Example variables
# writer-id: 'host02'
# node-id: 'host02'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve --writer-id=host02 --mode=read_write --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
influxdb3 serve --node-id=host02 --mode=read_write --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
2. Start the dedicated Compactor node (`--mode=compactor`) and ensure it runs compactions on the specified `compaction-hosts`.
@@ -916,36 +916,36 @@ For a very robust and effective setup for managing time-series data, you can run
## NODE 3 — Compactor Node
# Example variables
# writer-id: 'host03'
# node-id: 'host03'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve --writer-id=host03 --mode=compactor --compaction-hosts=host01,host02 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
influxdb3 serve --node-id=host03 --mode=compactor --compaction-hosts=host01,host02 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
3. Finally, start the query nodes as _read-only_.
Include the following options:
- `--mode=read`: Sets the node to _read-only_
- `--read-from-writer-ids=host01,host02`: A comma-delimited list of host IDs to read data from
- `--read-from-node-ids=host01,host02`: A comma-delimited list of host IDs to read data from
```bash
## NODE 4 — Read Node #1
# Example variables
# writer-id: 'host04'
# node-id: 'host04'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve --writer-id=host04 --mode=read --object-store=s3 --read-from-writer-ids=host01,host02 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8383 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
influxdb3 serve --node-id=host04 --mode=read --object-store=s3 --read-from-node-ids=host01,host02 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8383 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
```bash
## NODE 5 — Read Node #2
# Example variables
# writer-id: 'host05'
# node-id: 'host05'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve --writer-id=host05 --mode=read --object-store=s3 --read-from-writer-ids=host01,host02 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8484 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
influxdb3 serve --node-id=host05 --mode=read --object-store=s3 --read-from-node-ids=host01,host02 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8484 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
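To exercise the whole topology end to end, write through a writer node and read back through one of the read nodes. As in the earlier sketch, this assumes the `/api/v3/write_lp` and `/api/v3/query_sql` endpoints and an example database named `sensors`:
```bash
# Write through Writer Node #1 (port 8181).
curl "http://localhost:8181/api/v3/write_lp?db=sensors" \
  --data-raw "home,room=office temp=21.0"

# Read back through Read Node #1 (port 8383), which replicates from host01 and host02.
curl "http://localhost:8383/api/v3/query_sql?db=sensors&q=SELECT+*+FROM+home"
```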
Congratulations, you have a robust setup for workload isolation using {{% product-name %}}.