Merge pull request #5770 from influxdata/js/multi-node

fix(monolith): Multi-node cleanup, misc fixes
Jason Stirnaman 2025-01-13 11:47:51 -06:00 committed by GitHub
commit 17beb58ea8
3 changed files with 169 additions and 116 deletions


@ -12,7 +12,7 @@ alt_links:
- [System Requirements](#system-requirements)
- [Quick install](#quick-install)
- [Download InfluxDB 3 Core binaries](#download-influxdb-3-enterprise-binaries)
- [Download InfluxDB 3 Core binaries](#download-influxdb-3-core-binaries)
- [Docker image](#docker-image)
## System Requirements


@ -47,7 +47,7 @@ This guide covers InfluxDB 3 Core (the open source release), including the follo
* [Data Model](#data-model)
* [Write data to the database](#write-data)
* [Query the database](#query-the-database)
* [Last Values Cache](#last-values-cache)
* [Last values cache](#last-values-cache)
* [Distinct Values Cache](#distinct-values-cache)
* [Python plugins and the processing engine](#python-plugins-and-the-processing-engine)
* [Diskless architecture](#diskless-architecture)
@ -373,7 +373,7 @@ Options:
```
You can create a last value cache per time series, but be mindful of high cardinality tables that could take excessive memory.
You can create a last values cache per time series, but be mindful of high cardinality tables that could take excessive memory.
An example of creating this cache:
@ -389,7 +389,7 @@ An example of creating this cache in use:
influxdb3 create last-cache --database=servers --table=cpu --cache-name=cpuCache --key-columns=host,application --value-columns=usage_percent,status --count=5
```
### Querying a Last Values Cache
### Querying a last values cache
To leverage the LVC, you need to call on it specifically using the `last_cache()` function. An example of this type of query:
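Using the names from the creation example above (the `servers` database, `cpu` table, and `cpuCache` cache), the full command looks like this:

```bash
influxdb3 query --database=servers "SELECT * FROM last_cache('cpu', 'cpuCache')"
```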
@ -402,9 +402,9 @@ Usage: $ influxdb3 query --database=servers "SELECT * FROM last_cache('cpu', 'cp
The last values cache only works with SQL, not InfluxQL; SQL is the default language.
{{% /note %}}
### Deleting a Last Values Cache
### Deleting a last values cache
Removing a Last Values Cache is also easy and straightforward, with the instructions below.
Removing a last values cache is straightforward, as shown below.
```
@ -421,7 +421,7 @@ Options:
## Distinct Values Cache
Similar to the Last Values Cache, the database can cache in RAM the distinct values for a single column in a table or a hierarchy of columns. This is useful for fast metadata lookups, which can return in under 30 milliseconds. Many of the options are similar to the last values cache. See the CLI output for more information:
Similar to the last values cache, the database can cache in RAM the distinct values for a single column in a table or a hierarchy of columns. This is useful for fast metadata lookups, which can return in under 30 milliseconds. Many of the options are similar to the last values cache. See the CLI output for more information:
```bash
influxdb3 create distinct_cache -h
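# A hypothetical creation example, mirroring the last values cache command above;
# these flag names are assumptions, so verify them against the help output:
influxdb3 create distinct_cache --database=servers --table=cpu --columns=host,application --cache-name=cpuDistinctCache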
@ -564,7 +564,7 @@ influxdb3 create trigger -d mydb --plugin=test_plugin --trigger-spec="table:foo"
After you've tested it, you can create the plugin on the server (the file needs to be present in the plugin-dir) and then create a trigger that runs it on WAL flushes.
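A minimal sketch of that workflow, assuming the plugin file already sits in the server's plugin-dir; the `create plugin` command shape here is an assumption (verify with `influxdb3 create -h`):

```bash
# Register the plugin from a file in the plugin-dir (flags are assumptions)
influxdb3 create plugin -d mydb --code-filename=/path/to/plugin-dir/test_plugin.py test_plugin

# Create a trigger that runs it on WAL flushes for table foo
# (this command appears earlier in this guide)
influxdb3 create trigger -d mydb --plugin=test_plugin --trigger-spec="table:foo"
```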
### Diskless Architecture
### Diskless architecture
InfluxDB 3 can operate using only object storage, with no locally attached disk. While it can also run on just a disk with no dependencies, the ability to operate without one is a new capability with this release. The figure below illustrates the write path for data landing in the database.


@ -39,7 +39,7 @@ This guide covers Enterprise as well as InfluxDB 3 Core, including the following
* [Data Model](#data-model)
* [Write data to the database](#write-data)
* [Query the database](#query-the-database)
* [Last Values Cache](#last-values-cache)
* [Last values cache](#last-values-cache)
* [Distinct Values Cache](#distinct-values-cache)
* [Python plugins and the processing engine](#python-plugins-and-the-processing-engine)
* [Diskless architecture](#diskless-architecture)
@ -371,7 +371,7 @@ Options:
```
You can create a last value cache per time series, but be mindful of high cardinality tables that could take excessive memory.
You can create a last values cache per time series, but be mindful of high cardinality tables that could take excessive memory.
An example of creating this cache:
@ -387,7 +387,7 @@ An example of creating this cache in use:
influxdb3 create last-cache --database=servers --table=cpu --cache-name=cpuCache --key-columns=host,application --value-columns=usage_percent,status --count=5
```
### Querying a Last Values Cache
### Querying a last values cache
To leverage the LVC, you need to call on it specifically using the `last_cache()` function. An example of this type of query:
@ -400,9 +400,9 @@ Usage: $ influxdb3 query --database=servers "SELECT * FROM last_cache('cpu', 'cp
The last values cache only works with SQL, not InfluxQL; SQL is the default language.
{{% /note %}}
### Deleting a Last Values Cache
### Deleting a last values cache
Removing a Last Values Cache is also easy and straightforward, with the instructions below.
Removing a last values cache is straightforward, as shown below.
```
@ -419,7 +419,7 @@ Options:
## Distinct Values Cache
Similar to the Last Values Cache, the database can cache in RAM the distinct values for a single column in a table or a hierarchy of columns. This is useful for fast metadata lookups, which can return in under 30 milliseconds. Many of the options are similar to the last values cache. See the CLI output for more information:
Similar to the last values cache, the database can cache in RAM the distinct values for a single column in a table or a hierarchy of columns. This is useful for fast metadata lookups, which can return in under 30 milliseconds. Many of the options are similar to the last values cache. See the CLI output for more information:
```bash
influxdb3 create distinct_cache -h
@ -562,7 +562,7 @@ influxdb3 create trigger -d mydb --plugin=test_plugin --trigger-spec="table:foo"
After you've tested it, you can create the plugin on the server (the file needs to be present in the plugin-dir) and then create a trigger that runs it on WAL flushes.
### Diskless Architecture
### Diskless architecture
InfluxDB 3 can operate using only object storage, with no locally attached disk. While it can also run on just a disk with no dependencies, the ability to operate without one is a new capability with this release. The figure below illustrates the write path for data landing in the database.
@ -574,24 +574,38 @@ InfluxDB periodically snapshots the WAL to persist the oldest data in the querya
When the data is persisted out of the queryable buffer, it is put into the configured object store as Parquet files. Those files are also added to an in-memory cache so that queries against the most recently persisted data do not have to go to object storage.
### Multi-Server Setup
### Multi-server setup
{{% product-name %}} is built to support multi-node setups for high availability, read replicas, and flexible implementations depending on use case.
### High Availability
### High availability
This functionality is built on top of the diskless engine, leveraging the object store as the solution for ensuring that if a node fails, you can still continue reading from and writing to a secondary node. Enterprise is designed to be architecturally flexible, giving operators options on how to configure multiple servers together. At a minimum, a two-node setup, both with read/write permissions, will enable high availability with excellent performance.
Enterprise is architecturally flexible, giving you options on how to configure multiple servers that work together for high availability (HA) and high performance.
Built on top of the diskless engine and leveraging the object store, an HA setup ensures that if a node fails, you can continue reading from, and writing to, a secondary node.
{{< img-hd src="/img/influxdb/influxdb-3-enterprise-high-availability.png" alt="Basic High Availability Setup" />}}
A two-node setup is the minimum for basic high availability, with both nodes having read-write permissions.
In this setup, we have two nodes both writing data to the same object store and servicing queries as well. On instantiation, you can enable Node 1 and Node 2 to read from each other's object store directories. Importantly, you'll also notice that one of the nodes is designated as the compactor in this instance as well to ensure long-range queries are high performance.
{{< img-hd src="/img/influxdb/influxdb-3-enterprise-high-availability.png" alt="Basic high availability setup" />}}
| IMPORTANT Only one node can be designated as the compactor. The area of compacted data is meant to be single writer, many readers. |
| :---- |
In a basic HA setup:
Using the `--read-from-writer-ids` option, we instruct the server to check the object store for data landing from the other servers. We additionally will set the compactor to be active for Node 1 using the `--compactor-id` option. We *do not* set a compactor ID for Node 2. We additionally pass a `--run-compactions` option to ensure Node 1 runs the compaction process.
- Two nodes both write data to the same object store and both handle queries
- Node 1 and Node 2 are _read replicas_ that read from each other's object store directories
- One of the nodes is designated as the Compactor node
```
> [!Note]
> Only one node can be designated as the Compactor.
> Compacted data is meant for a single writer and many readers.
The following examples show how to configure and start two nodes
for a basic HA setup.
The example commands pass the following options:
- `--read-from-writer-ids`: makes the node a _read replica_, which checks the object store for data arriving from other nodes
- `--compactor-id`: activates the Compactor for a node. Only one node can run compaction
- `--run-compactions`: ensures the Compactor runs the compaction process
```bash
## NODE 1
# Example variables
@ -600,7 +614,7 @@ Using the `--read-from-writer-ids` option, we instruct the server to check the o
# compactor-id: 'c01'
Usage: $ influxdb3 serve --writer-id=host01 --read-from-writer-ids=host02 --compactor-id=c01 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8181 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
influxdb3 serve --writer-id=host01 --read-from-writer-ids=host02 --compactor-id=c01 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8181 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
```
@ -610,133 +624,165 @@ Usage: $ influxdb3 serve --writer-id=host01 --read-from-writer-ids=host02 --comp
# writer-id: 'host02'
# bucket: 'influxdb-3-enterprise-storage'
Usage: $ influxdb3 serve --writer-id=host02 --read-from-writer-ids=host01 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8282
influxdb3 serve --writer-id=host02 --read-from-writer-ids=host01 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8282 \
  --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
That's it! Querying either node will return data for both nodes. Additionally, compaction will be running on Node 1. To add additional nodes to this setup, simply add to the replicas list.
| NOTE | If you want to run this setup on the same node for testing, you can run both commands in separate terminals and pass a different `--http-bind` parameter. E.g., pass `--http-bind=http://127.0.0.1:8181` for terminal 1's `serve` command and `--http-bind=http://127.0.0.1:8282` for terminal 2's. |
| :---- | :---- |
### High Availability with Dedicated Compactor
One of the more computationally expensive operations is compaction. To ensure that your node servicing writes and reads doesn't slow down due to compaction work, we suggest setting up a compactor-only node for high and level performance across all nodes.
{{< img-hd src="/img/influxdb/influxdb-3-enterprise-dedicated-compactor.png" alt="Dedicated Compactor Setup" />}}
For our first two nodes, we are going to keep them similar except for the host id and replicas list (which are flipped). We also need to specify where the compacted data is going to land with the `compactor-id` setting.
```
## NODE 1 - Writer/Reader Node #1
# Example variables
# writer-id: 'host01'
# bucket: 'influxdb-3-enterprise-storage'
Usage: $ influxdb3 serve --writer-id=host01 --compactor-id=c01 --read-from-writer-ids=host02 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8181 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
After the nodes have started, querying either node returns data for both nodes, and `NODE 1` runs compaction.
To add nodes to this setup, start more read replicas:
```bash
influxdb3 serve --read-from-writer-ids=host01,host02 [...OPTIONS]
```
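For example, a third node could join as another read replica (hypothetical `host03` values, reusing the option shapes from the commands above):

```bash
influxdb3 serve --writer-id=host03 --read-from-writer-ids=host01,host02 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8383 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```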
```
## NODE 2 - Writer/Reader Node #2
> [!Note]
> To run this setup for testing, you can start nodes in separate terminals and pass a different `--http-bind` value for each. For example:
>
> ```bash
> # In terminal 1
> influxdb3 serve --writer-id=host01 --http-bind=http://127.0.0.1:8181 [...OPTIONS]
> ```
>
> ```bash
> # In terminal 2
> influxdb3 serve --writer-id=host02 --http-bind=http://127.0.0.1:8282 [...OPTIONS]
> ```
# Example variables
# writer-id: 'host02'
# bucket: 'influxdb-3-enterprise-storage'
### High availability with a dedicated Compactor
Usage: $ influxdb3 serve --writer-id=host02 --compactor-id=c01 --read-from-writer-ids=host01 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
Data compaction in InfluxDB 3 is one of the more computationally expensive operations.
To ensure that your read-write node doesn't slow down due to compaction work, set up a compactor-only node for consistent and high performance across all nodes.
For the compactor node, we need to set a few more options. First, we need to specify the mode, which needs to be `--mode=compactor`; this ensures not only that it runs compaction, but that it *only* runs compaction. Since this node isn't replicating data, we don't pass it the replicas parameter, which means we need another way to tell it the hosts to run compaction. To do this, we set the `--compaction-hosts` option with a comma-delimited list, similar to the replicas option.
{{< img-hd src="/img/influxdb/influxdb-3-enterprise-dedicated-compactor.png" alt="Dedicated Compactor setup" />}}
```
## NODE 3 - Compactor Node
The following examples show how to set up HA with a dedicated Compactor node:
# Example variables
# writer-id: 'host03'
# bucket: 'influxdb-3-enterprise-storage'
# compactor-id: 'c01'
1. Start two read-write nodes as read replicas, similar to the previous example,
and pass the `--compactor-id` option with a dedicated compactor ID (which you'll configure in the next step).
Usage: $ influxdb3 serve --writer-id=host03 --mode=compactor --compactor-id=c01 --compaction-hosts=host01,host02 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
```
## NODE 1 - Writer/Reader Node #1
# Example variables
# writer-id: 'host01'
# bucket: 'influxdb-3-enterprise-storage'
### High Availability with Read Replicas and a Dedicated Compactor
influxdb3 serve --writer-id=host01 --compactor-id=c01 --read-from-writer-ids=host02 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8181 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
To create a very robust and effective setup for managing time-series data, we recommend running ingest nodes alongside read-only nodes, and leveraging a compactor-node for excellent performance.
```bash
## NODE 2 - Writer/Reader Node #2
# Example variables
# writer-id: 'host02'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve --writer-id=host02 --compactor-id=c01 --read-from-writer-ids=host01 --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
2. Start the dedicated compactor node, which uses the following options:
- `--mode=compactor`: Ensures the node **only** runs compaction.
- `--compaction-hosts`: Specifies a comma-delimited list of hosts to run compaction for.
_Don't include the replicas (`--read-from-writer-ids`) parameter because this node doesn't replicate data._
```bash
## NODE 3 - Compactor Node
# Example variables
# writer-id: 'host03'
# bucket: 'influxdb-3-enterprise-storage'
# compactor-id: 'c01'
influxdb3 serve --writer-id=host03 --mode=compactor --compactor-id=c01 --compaction-hosts=host01,host02 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
### High availability with read replicas and a dedicated Compactor
For a very robust and effective setup for managing time-series data, you can run ingest nodes alongside read-only nodes and a dedicated Compactor node.
{{< img-hd src="/img/influxdb/influxdb-3-enterprise-workload-isolation.png" alt="Workload Isolation Setup" />}}
First, we want to set up our writer nodes for ingest. Enterprise doesn't designate a write-only mode, so writers set their mode to **`read_write`**. To properly leverage this architecture though, you should only send requests to reader nodes that have their mode set for reading only; more on that in a moment.
1. Start writer nodes for ingest. Enterprise doesn't designate a write-only mode, so assign them **`read_write`** mode.
To achieve the benefits of workload isolation, you'll send _only write requests_ to these read-write nodes. Later, you'll configure the _read-only_ nodes.
```
## NODE 1 - Writer Node #1
```
## NODE 1 - Writer Node #1
# Example variables
# writer-id: 'host01'
# bucket: 'influxdb-3-enterprise-storage'
# Example variables
# writer-id: 'host01'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve --writer-id=host01 --mode=read_write --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8181 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
Usage: $ influxdb3 serve --writer-id=host01 --mode=read_write --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8181 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
```
```
## NODE 2 - Writer Node #2
```
## NODE 2 - Writer Node #2
# Example variables
# writer-id: 'host02'
# bucket: 'influxdb-3-enterprise-storage'
# Example variables
# writer-id: 'host02'
# bucket: 'influxdb-3-enterprise-storage'
Usage: $ influxdb3 serve --writer-id=host02 --mode=read_write --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
Usage: $ influxdb3 serve --writer-id=host02 --mode=read_write --object-store=s3 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8282 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
2. Start the dedicated Compactor node (`--mode=compactor`) and ensure it runs compactions on the specified `compaction-hosts`.
For the compactor node, we want to follow the same principles we used earlier, by setting its mode to compaction only, and ensuring it's running compactions on the proper set of replicas.
```
## NODE 3 - Compactor Node
```
## NODE 3 - Compactor Node
# Example variables
# writer-id: 'host03'
# bucket: 'influxdb-3-enterprise-storage'
# Example variables
# writer-id: 'host03'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve --writer-id=host03 --mode=compactor --compaction-hosts=host01,host02 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
Usage: $ influxdb3 serve --writer-id=host03 --mode=compactor --compaction-hosts=host01,host02 --run-compactions --object-store=s3 --bucket=influxdb-3-enterprise-storage --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
3. Finally, start the query nodes as _read-only_.
Include the following options:
- `--mode=read`: Sets the node to _read-only_
- `--read-from-writer-ids=host01,host02`: A comma-delimited list of host IDs to read data from
Finally, we have the query nodes, which need their mode set to read-only. We use `--mode=read` as our option parameter, along with unique host IDs.
```bash
## NODE 4 - Read Node #1
```
## NODE 4 - Read Node #1
# Example variables
# writer-id: 'host04'
# bucket: 'influxdb-3-enterprise-storage'
# Example variables
# writer-id: 'host04'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve --writer-id=host04 --mode=read --object-store=s3 --read-from-writer-ids=host01,host02 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8383 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
Usage: $ influxdb3 serve --writer-id=host04 --mode=read --object-store=s3 --read-from-writer-ids=host01,host02 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8383 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
```
## NODE 5 - Read Node #2
```
## NODE 5 - Read Node #2
# Example variables
# writer-id: 'host05'
# bucket: 'influxdb-3-enterprise-storage'
# Example variables
# writer-id: 'host05'
# bucket: 'influxdb-3-enterprise-storage'
influxdb3 serve --writer-id=host05 --mode=read --object-store=s3 --read-from-writer-ids=host01,host02 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8484 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
Usage: $ influxdb3 serve --writer-id=host05 --mode=read --object-store=s3 --read-from-writer-ids=host01,host02 --bucket=influxdb-3-enterprise-storage --http-bind=0.0.0.0:8484 --aws-access-key-id=<AWS_ACCESS_KEY_ID> --aws-secret-access-key=<AWS_SECRET_ACCESS_KEY>
```
That's it! A full-fledged setup of a robust implementation for {{% product-name %}} is now complete with
### Writing/Querying on {{% product-name %}}
Congratulations, you now have a robust setup for workload isolation using {{% product-name %}}.
### Writing and Querying for Multi-Node Setups
If you're running {{% product-name %}} in a single-instance setup, writing and querying is the same as for {{% product-name %}}. Additionally, if you want to leverage the default port of 8181 for any write or query, then no changes need to be made to your commands.
If you're running {{% product-name %}} in a single-instance setup, writing and querying is the same as for {{% product-name %}}.
You can use the default port `8181` for any write or query, without changing any of the commands.
The key change in leveraging read/writes on this wider architecture is in ensuring that you're specifying the correct host. If you run locally and serve an instance on 8181 (the default port), you don't need to specify which host. However, when running multiple local instances for testing, or separate nodes in production, specifying the host ensures writes and queries are routed to the correct instance.
> [!Note]
> #### Specify hosts for writes and queries
>
> To benefit from this multi-node, isolated architecture, specify hosts:
>
> - In write requests, specify a host designated for _write-only_
> - In query requests, specify a host designated for _read-only_
>
> When running multiple local instances for testing, or separate nodes in production, specifying the host ensures writes and queries are routed to the correct instance.
> If you run locally and serve an instance on 8181 (the default port), then you don't need to specify the host.
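For example, based on the command shapes shown in this guide (the `influxdb3 write` arguments here are an assumption; check `influxdb3 write -h`):

```bash
# Write to a node that accepts writes (host01 in the examples above)
influxdb3 write --host=http://127.0.0.1:8181 -d <DATABASE> "<LINE PROTOCOL>"

# Query a node designated read-only (host04 in the examples above)
influxdb3 query --host=http://127.0.0.1:8383 -d <DATABASE> "<QUERY>"
```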
```
# Example variables on a query
@ -746,13 +792,20 @@ Usage: $ influxdb3 query --host=http://127.0.0.1:8585 -d <DATABASE> "<QUERY>"
### File index settings
To accelerate performance on specific queries, you can define non-primary keys to index on, which will especially help improve performance on single-series queries. This functionality is reserved for Enterprise and is not available on Enterprise.
To accelerate performance on specific queries, you can define non-primary keys to index on, which helps improve performance for single-series queries.
This feature is only available in Enterprise and is not available in Core.
```
#### Create a file index
```bash
# Example variables on a query
# HTTP-bound Port: 8585
Create Usage: $ influxdb3 file-index create --host=http://127.0.0.1:8585 -d <DATABASE> -t <TABLE> <COLUMNS>
Delete Usage: $ influxdb3 file-index delete --host=http://127.0.0.1:8585 -d <DATABASE> -t <TABLE>
influxdb3 file-index create --host=http://127.0.0.1:8585 -d <DATABASE> -t <TABLE> <COLUMNS>
```
#### Delete a file index
```bash
influxdb3 file-index delete --host=http://127.0.0.1:8585 -d <DATABASE> -t <TABLE>
```