Merge origin/master into jts-create-docs

Resolved conflicts by keeping enhancements:
- stdin support for draft content
- link extraction and following (local files + external URLs)
- alphabetical product sorting with detected products first
- --from-draft and --follow-external flags
jts-multifile-plugins-guide
Jason Stirnaman 2025-10-31 15:08:11 -05:00
commit 61ae161501
26 changed files with 2002 additions and 889 deletions


@ -4,22 +4,31 @@ import remarkPresetLintMarkdownStyleGuide from 'remark-preset-lint-markdown-styl
import remarkFrontmatter from 'remark-frontmatter';
import remarkFrontmatterSchema from 'remark-lint-frontmatter-schema';
import remarkNoShellDollars from 'remark-lint-no-shell-dollars';
import remarkLintNoUndefinedReferences from 'remark-lint-no-undefined-references';
import remarkToc from 'remark-toc';
const remarkConfig = {
settings: {
bullet: '-',
plugins: [
remarkPresetLintConsistent,
remarkPresetLintRecommended,
remarkPresetLintMarkdownStyleGuide,
remarkFrontmatter,
remarkFrontmatterSchema,
remarkNoShellDollars,
// Generate a table of contents in `## Contents`
[remarkToc, { heading: '' }],
],
},
plugins: [
remarkPresetLintConsistent,
remarkPresetLintRecommended,
remarkPresetLintMarkdownStyleGuide,
remarkFrontmatter,
remarkFrontmatterSchema,
remarkNoShellDollars,
// Override no-undefined-references to allow GitHub Alerts syntax
// This prevents lint warnings for [!Note], [!Tip], etc. in blockquotes
[
remarkLintNoUndefinedReferences,
{
allow: ['!Note', '!Tip', '!Important', '!Warning', '!Caution'],
},
],
// Generate a table of contents in `## Contents`
[remarkToc, { heading: '' }],
],
};
export default remarkConfig;
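For context, the `allow` list above covers GitHub-style alert markers, which `remark-lint-no-undefined-references` would otherwise treat as shortcut reference links with no matching definition. A minimal illustration (not part of this diff) of the syntax that now passes the linter:

```md
> [!Note]
> Without the override above, the linter reports `[!Note]` as an undefined reference.
```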


@ -9,6 +9,7 @@
"remark-preset-lint-recommended": "7.0.0",
"remark-frontmatter": "5.0.0",
"remark-lint-frontmatter-schema": "3.15.4",
"remark-lint-no-shell-dollars": "4.0.0"
"remark-lint-no-shell-dollars": "4.0.0",
"remark-lint-no-undefined-references": "5.0.2"
}
}


@ -22,14 +22,14 @@ provides an alternative method for deploying your InfluxDB cluster using
resource. When using Helm, apply configuration options in a
`values.yaml` on your local machine.
InfluxData provides the following items:
InfluxData provides the following items:
- **`influxdb-docker-config.json`**: an authenticated Docker configuration file.
The InfluxDB Clustered software is in a secure container registry.
This file grants access to the collection of container images required to
install InfluxDB Clustered.
---
***
## Configuration data
@ -40,23 +40,24 @@ available:
API endpoints
- **PostgreSQL-style data source name (DSN)**: used to access your
PostgreSQL-compatible database that stores the InfluxDB Catalog.
- **Object store credentials** _(AWS S3 or S3-compatible)_
- **Object store credentials** *(AWS S3 or S3-compatible)*
- Endpoint URL
- Access key
- Bucket name
- Region (required for S3, may not be required for other object stores)
- **Local storage information** _(for ingester pods)_
- **Local storage information** *(for ingester pods)*
- Storage class
- Storage size
InfluxDB is deployed to a Kubernetes namespace which, throughout the following
installation procedure, is referred to as the _target_ namespace.
installation procedure, is referred to as the *target* namespace.
For simplicity, we assume this namespace is `influxdb`; however,
you may use any name you like.
> [!Note]
> \[!Note]
>
> #### Set namespaceOverride if using a namespace other than influxdb
>
>
> If you use a namespace name other than `influxdb`, update the `namespaceOverride`
> field in your `values.yaml` to use your custom namespace name.
@ -85,21 +86,21 @@ which simplifies the installation and management of the InfluxDB Clustered packa
It manages the application of the jsonnet templates used to install, manage, and
update an InfluxDB cluster.
> [!Note]
> \[!Note]
> If you already installed the `kubecfg kubit` operator separately when
> [setting up prerequisites](/influxdb3/clustered/install/set-up-cluster/prerequisites/#install-the-kubecfg-kubit-operator)
> for your cluster, in your `values.yaml`, set `skipOperator` to `true`.
>
>
> ```yaml
> skipOperator: true
> ```
## Configure your cluster
1. [Install Helm](#install-helm)
2. [Create a values.yaml file](#create-a-valuesyaml-file)
3. [Configure access to the InfluxDB container registry](#configure-access-to-the-influxdb-container-registry)
4. [Modify the configuration file to point to prerequisites](#modify-the-configuration-file-to-point-to-prerequisites)
1. [Install Helm](#install-helm)
2. [Create a values.yaml file](#create-a-valuesyaml-file)
3. [Configure access to the InfluxDB container registry](#configure-access-to-the-influxdb-container-registry)
4. [Modify the configuration file to point to prerequisites](#modify-the-configuration-file-to-point-to-prerequisites)
### Install Helm
@ -136,11 +137,11 @@ In both scenarios, you need a valid container registry secret file.
Use [crane](https://github.com/google/go-containerregistry/tree/main/cmd/crane)
to create a container registry secret file.
1. [Install crane](https://github.com/google/go-containerregistry/tree/main/cmd/crane#installation)
2. Use the following command to create a container registry secret file and
retrieve the necessary secrets:
1. [Install crane](https://github.com/google/go-containerregistry/tree/main/cmd/crane#installation)
2. Use the following command to create a container registry secret file and
retrieve the necessary secrets:
{{% code-placeholders "PACKAGE_VERSION" %}}
{{% code-placeholders "PACKAGE\_VERSION" %}}
```sh
mkdir /tmp/influxdbsecret
@ -152,12 +153,12 @@ DOCKER_CONFIG=/tmp/influxdbsecret \
{{% /code-placeholders %}}
---
***
Replace {{% code-placeholder-key %}}`PACKAGE_VERSION`{{% /code-placeholder-key %}}
with your InfluxDB Clustered package version.
---
***
If your Docker configuration is valid and you're able to connect to the container
registry, the command succeeds and the output is the JSON manifest for the Docker
@ -206,6 +207,7 @@ Error: fetching manifest us-docker.pkg.dev/influxdb2-artifacts/clustered/influxd
{{% /tabs %}}
{{% tab-content %}}
<!--------------------------- BEGIN Public Registry --------------------------->
#### Public registry
@ -229,8 +231,10 @@ If you change the name of this secret, you must also change the value of the
`imagePullSecrets.name` field in your `values.yaml`.
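For example, a minimal `values.yaml` fragment (the secret name here is a hypothetical custom name, not a value from this diff) might look like:

```yaml
# Hypothetical example: point imagePullSecrets at your renamed registry secret
imagePullSecrets:
  - name: my-registry-secret
```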
<!---------------------------- END Public Registry ---------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!--------------------------- BEGIN Private Registry -------------------------->
#### Private registry (air-gapped)
@ -284,9 +288,8 @@ In addition to the InfluxDB images, copy the kubit operator images:
```bash
# Create a list of kubit-related images
cat > /tmp/kubit-images.txt << EOF
ghcr.io/kubecfg/kubit:v0.0.20
ghcr.io/kubecfg/kubit:v0.0.22
ghcr.io/kubecfg/kubecfg/kubecfg:latest
bitnami/kubectl:1.27.5
registry.k8s.io/kubectl:v1.28.0
EOF
@ -298,7 +301,8 @@ cat /tmp/kubit-images.txt | xargs -I% crane cp % YOUR_PRIVATE_REGISTRY/%
Configure your `values.yaml` to use your private registry:
{{% code-placeholders "REGISTRY_HOSTNAME" %}}
{{% code-placeholders "REGISTRY\_HOSTNAME" %}}
```yaml
# Configure registry override for all images
images:
@ -307,8 +311,8 @@ images:
# Configure kubit operator images
kubit:
controller:
image: REGISTRY_HOSTNAME/ghcr.io/kubecfg/kubit:v0.0.20
apply_step_image: REGISTRY_HOSTNAME/bitnami/kubectl:1.27.5
image: REGISTRY_HOSTNAME/ghcr.io/kubecfg/kubit:v0.0.22
apply_step_image: REGISTRY_HOSTNAME/registry.k8s.io/kubectl:v1.28.0
render_step_image: REGISTRY_HOSTNAME/registry.k8s.io/kubectl:v1.28.0
kubecfg_image: REGISTRY_HOSTNAME/ghcr.io/kubecfg/kubecfg/kubecfg:latest
@ -316,6 +320,7 @@ kubit:
imagePullSecrets:
- name: your-registry-pull-secret
```
{{% /code-placeholders %}}
Replace {{% code-placeholder-key %}}`REGISTRY_HOSTNAME`{{% /code-placeholder-key %}} with your private registry hostname.
@ -345,13 +350,13 @@ To configure ingress, provide values for the following fields in your
Provide the hostnames that Kubernetes should use to expose the InfluxDB API
endpoints--for example: `{{< influxdb/host >}}`.
_You can provide multiple hostnames. The ingress layer accepts incoming
*You can provide multiple hostnames. The ingress layer accepts incoming
requests for all listed hostnames. This can be useful if you want to have
distinct paths for your internal and external traffic._
distinct paths for your internal and external traffic.*
> [!Note]
> \[!Note]
> You are responsible for configuring and managing DNS. Options include:
>
>
> - Manually managing DNS records
> - Using [external-dns](https://github.com/kubernetes-sigs/external-dns) to
> synchronize exposed Kubernetes services and ingresses with DNS providers.
@ -361,16 +366,16 @@ To configure ingress, provide values for the following fields in your
(Optional): Provide the name of the secret that contains your TLS certificate
and key. The examples in this guide use the name `ingress-tls`.
_The `tlsSecretName` field is optional. You may want to use it if you already
have a TLS certificate for your DNS name._
*The `tlsSecretName` field is optional. You may want to use it if you already
have a TLS certificate for your DNS name.*
> [!Note]
> \[!Note]
> Writing to and querying data from InfluxDB does not require TLS.
> For simplicity, you can wait to enable TLS before moving into production.
> For more information, see Phase 4 of the InfluxDB Clustered installation
> process, [Secure your cluster](/influxdb3/clustered/install/secure-cluster/).
{{% code-callout "ingress-tls|cluster-host\.com" "green" %}}
{{% code-callout "ingress-tls|cluster-host.com" "green" %}}
```yaml
ingress:
@ -405,14 +410,14 @@ following fields in your `values.yaml`:
- `bucket`: Object storage bucket name
- `s3`:
- `endpoint`: Object storage endpoint URL
- `allowHttp`: _Set to `true` to allow unencrypted HTTP connections_
- `allowHttp`: *Set to `true` to allow unencrypted HTTP connections*
- `accessKey.value`: Object storage access key
_(can use a `value` literal or `valueFrom` to retrieve the value from a secret)_
*(can use a `value` literal or `valueFrom` to retrieve the value from a secret)*
- `secretKey.value`: Object storage secret key
_(can use a `value` literal or `valueFrom` to retrieve the value from a secret)_
*(can use a `value` literal or `valueFrom` to retrieve the value from a secret)*
- `region`: Object storage region
{{% code-placeholders "S3_(URL|ACCESS_KEY|SECRET_KEY|BUCKET_NAME|REGION)" %}}
{{% code-placeholders "S3\_(URL|ACCESS\_KEY|SECRET\_KEY|BUCKET\_NAME|REGION)" %}}
```yml
objectStore:
@ -442,7 +447,7 @@ objectStore:
{{% /code-placeholders %}}
---
***
Replace the following:
@ -452,7 +457,7 @@ Replace the following:
- {{% code-placeholder-key %}}`S3_SECRET_KEY`{{% /code-placeholder-key %}}: Object storage secret key
- {{% code-placeholder-key %}}`S3_REGION`{{% /code-placeholder-key %}}: Object storage region
---
***
<!----------------------------------- END S3 ---------------------------------->
@ -468,11 +473,11 @@ following fields in your `values.yaml`:
- `bucket`: Azure Blob Storage bucket name
- `azure`:
- `accessKey.value`: Azure Blob Storage access key
_(can use a `value` literal or `valueFrom` to retrieve the value from a secret)_
*(can use a `value` literal or `valueFrom` to retrieve the value from a secret)*
- `account.value`: Azure Blob Storage account ID
_(can use a `value` literal or `valueFrom` to retrieve the value from a secret)_
*(can use a `value` literal or `valueFrom` to retrieve the value from a secret)*
{{% code-placeholders "AZURE_(BUCKET_NAME|ACCESS_KEY|STORAGE_ACCOUNT)" %}}
{{% code-placeholders "AZURE\_(BUCKET\_NAME|ACCESS\_KEY|STORAGE\_ACCOUNT)" %}}
```yml
objectStore:
@ -493,7 +498,7 @@ objectStore:
{{% /code-placeholders %}}
---
***
Replace the following:
@ -501,7 +506,7 @@ Replace the following:
- {{% code-placeholder-key %}}`AZURE_ACCESS_KEY`{{% /code-placeholder-key %}}: Azure Blob Storage access key
- {{% code-placeholder-key %}}`AZURE_STORAGE_ACCOUNT`{{% /code-placeholder-key %}}: Azure Blob Storage account ID
---
***
<!--------------------------------- END AZURE --------------------------------->
@ -521,7 +526,7 @@ following fields in your `values.yaml`:
- `serviceAccountSecret.key`: the key inside of your Google IAM secret that
contains your Google IAM account credentials
{{% code-placeholders "GOOGLE_(BUCKET_NAME|IAM_SECRET|CREDENTIALS_KEY)" %}}
{{% code-placeholders "GOOGLE\_(BUCKET\_NAME|IAM\_SECRET|CREDENTIALS\_KEY)" %}}
```yml
objectStore:
@ -541,7 +546,7 @@ objectStore:
{{% /code-placeholders %}}
---
***
Replace the following:
@ -550,11 +555,11 @@ Replace the following:
- {{% code-placeholder-key %}}`GOOGLE_IAM_SECRET`{{% /code-placeholder-key %}}:
the Kubernetes Secret name that contains your Google IAM service account
credentials
- {{% code-placeholder-key %}}`GOOGLE_CREDENTIALS_KEY`{{% /code-placeholder-key %}}:
- {{% code-placeholder-key %}}`GOOGLE_CREDENTIALS_KEY`{{% /code-placeholder-key %}}:
the key inside of your Google IAM secret that contains your Google IAM account
credentials
---
***
<!--------------------------------- END AZURE --------------------------------->
@ -568,7 +573,7 @@ metadata about your time series data.
To connect your InfluxDB cluster to your PostgreSQL-compatible database,
provide values for the following fields in your `values.yaml`:
> [!Note]
> \[!Note]
> We recommend storing sensitive credentials, such as your PostgreSQL-compatible DSN,
> as secrets in your Kubernetes cluster.
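As a sketch (the secret and key names are placeholders, not values from this diff), you could create such a secret with `kubectl`:

```bash
# Hypothetical example: store the PostgreSQL-compatible DSN in a Kubernetes Secret
kubectl create secret generic catalog-dsn \
  --namespace influxdb \
  --from-literal=dsn='postgresql://username:password@host:5432/influxdb'
```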
@ -576,7 +581,7 @@ provide values for the following fields in your `values.yaml`:
- `SecretName`: Secret name
- `SecretKey`: Key in the secret that contains the DSN
{{% code-placeholders "SECRET_(NAME|KEY)" %}}
{{% code-placeholders "SECRET\_(NAME|KEY)" %}}
```yml
catalog:
@ -591,7 +596,7 @@ catalog:
{{% /code-placeholders %}}
---
***
Replace the following:
@ -600,58 +605,64 @@ Replace the following:
- {{% code-placeholder-key %}}`SECRET_KEY`{{% /code-placeholder-key %}}:
Key in the secret that references your PostgreSQL-compatible DSN
---
***
> [!Warning]
> \[!Warning]
>
> ##### Percent-encode special symbols in PostgreSQL DSNs
>
>
> Special symbols in PostgreSQL DSNs should be percent-encoded to ensure they
> are parsed correctly by InfluxDB Clustered. This is important to consider when
> using DSNs containing auto-generated passwords which may include special
> symbols to make passwords more secure.
>
>
> A DSN with special characters that are not percent-encoded results in an error
> similar to:
>
>
> ```txt
> Catalog DSN error: A catalog error occurred: unhandled external error: error with configuration: invalid port number
> ```
>
>
> For more information, see the [PostgreSQL Connection URI docs](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING-URIS).
>
> {{< expand-wrapper >}}
{{% expand "View percent-encoded DSN example" %}}
To use the following DSN containing special characters:
> {{% expand "View percent-encoded DSN example" %}}
> To use the following DSN containing special characters:
{{% code-callout "#" %}}
```txt
postgresql://postgres:meow#meow@my-fancy.cloud-database.party:5432/postgres
```
{{% /code-callout %}}
You must percent-encode the special characters in the connection string:
{{% code-callout "%23" %}}
```txt
postgresql://postgres:meow%23meow@my-fancy.cloud-database.party:5432/postgres
```
{{% /code-callout %}}
{{% /expand %}}
{{< /expand-wrapper >}}
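As a convenience (this helper is not part of the documented procedure), you can generate the percent-encoded form of a password with Python's standard library:

```bash
# Percent-encode a password for use in a DSN
python3 -c "import urllib.parse; print(urllib.parse.quote('meow#meow', safe=''))"
# Output: meow%23meow
```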
> [!Note]
>
> \[!Note]
>
> ##### PostgreSQL instances without TLS or SSL
>
>
> If your PostgreSQL-compatible instance runs without TLS or SSL, you must include
> the `sslmode=disable` parameter in the DSN. For example:
>
>
> {{% code-callout "sslmode=disable" %}}
```
postgres://username:passw0rd@mydomain:5432/influxdb?sslmode=disable
```
{{% /code-callout %}}
#### Configure local storage for ingesters
@ -666,7 +677,7 @@ following fields in your `values.yaml`:
This differs based on the Kubernetes environment and desired storage characteristics.
- `storage`: Storage size. We recommend a minimum of 2 gibibytes (`2Gi`).
{{% code-placeholders "STORAGE_(CLASS|SIZE)" %}}
{{% code-placeholders "STORAGE\_(CLASS|SIZE)" %}}
```yaml
ingesterStorage:
@ -680,7 +691,7 @@ ingesterStorage:
{{% /code-placeholders %}}
---
***
Replace the following:
@ -689,7 +700,7 @@ Replace the following:
- {{% code-placeholder-key %}}`STORAGE_SIZE`{{% /code-placeholder-key %}}:
Storage size (example: `2Gi`)
---
***
### Deploy your cluster
@ -775,6 +786,7 @@ helm upgrade influxdb ./influxdb3-clustered-X.Y.Z.tgz \
```
{{% note %}}
#### Understanding kubit's role in air-gapped environments
When deploying with Helm in an air-gapped environment:
@ -796,9 +808,10 @@ This is why mirroring both the InfluxDB images and the kubit operator images is
### Common issues
1. **Image pull errors**
```
Error: failed to create labeled resources: failed to create resources: failed to create resources:
Internal error occurred: failed to create pod sandbox: rpc error: code = Unknown
desc = failed to pull image "us-docker.pkg.dev/...": failed to pull and unpack image "...":
failed to resolve reference "...": failed to do request: ... i/o timeout
```


@ -29,9 +29,10 @@ To compare these tools and deployment methods, see [Choose the right deployment
## Prerequisites
If you haven't already set up and configured your cluster, see how to
[install InfluxDB Clustered](/influxdb3/clustered/install/).
[install InfluxDB Clustered](/influxdb3/clustered/install/).
<!-- Hidden anchor for links to the kubectl/kubit/helm tabs -->
<span id="kubectl-kubit-helm"></span>
{{< tabs-wrapper >}}
@ -41,7 +42,9 @@ If you haven't already set up and configured your cluster, see how to
[helm](#)
{{% /tabs %}}
{{% tab-content %}}
<!------------------------------- BEGIN kubectl ------------------------------->
- [`kubectl` standard deployment (with internet access)](#kubectl-standard-deployment-with-internet-access)
- [`kubectl` air-gapped deployment](#kubectl-air-gapped-deployment)
@ -56,34 +59,37 @@ kubectl apply \
--namespace influxdb
```
> [!Note]
> \[!Note]
> Due to the additional complexity and maintenance requirements, using `kubectl apply` isn't
> recommended for air-gapped environments.
> Instead, consider using the [`kubit` CLI approach](#kubit-cli), which is specifically designed for air-gapped deployments.
<!-------------------------------- END kubectl -------------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!-------------------------------- BEGIN kubit CLI -------------------------------->
## Standard and air-gapped deployments
_This approach avoids the need for installing the kubit operator in the cluster,
making it ideal for air-gapped clusters._
*This approach avoids the need for installing the kubit operator in the cluster,
making it ideal for air-gapped clusters.*
> [!Important]
> \[!Important]
> For air-gapped deployment, ensure you have [configured access to a private registry for InfluxDB images](/influxdb3/clustered/install/set-up-cluster/configure-cluster/directly/#configure-access-to-the-influxDB-container-registry).
1. On a machine with internet access, download the [`kubit` CLI](https://github.com/kubecfg/kubit#cli-tool)--for example:
```bash
curl -L -o kubit https://github.com/kubecfg/kubit/archive/refs/tags/v0.0.20.tar.gz
curl -L -o kubit https://github.com/kubecfg/kubit/archive/refs/tags/v0.0.22.tar.gz
chmod +x kubit
```
Replace {{% code-placeholder-key %}}`v0.0.20`{{% /code-placeholder-key%}} with the [latest release version](https://github.com/kubecfg/kubit/releases/latest).
Replace {{% code-placeholder-key %}}`v0.0.22`{{% /code-placeholder-key%}} with the [latest release version](https://github.com/kubecfg/kubit/releases/latest).
2. If deploying InfluxDB in an air-gapped environment (without internet access),
transfer the binary to your air-gapped environment.
transfer the binary to your air-gapped environment.
3. Use the `kubit local apply` command to process your [custom-configured `myinfluxdb.yml`](/influxdb3/clustered/install/set-up-cluster/configure-cluster/directly/) locally
and apply the resulting resources to your cluster:
@ -108,7 +114,9 @@ applies the resulting Kubernetes resources directly to your cluster.
{{% /tab-content %}}
{{% tab-content %}}
<!-------------------------------- BEGIN Helm --------------------------------->
- [Helm standard deployment (with internet access)](#helm-standard-deployment-with-internet-access)
- [Helm air-gapped deployment](#helm-air-gapped-deployment)
@ -145,7 +153,7 @@ helm upgrade influxdb influxdata/influxdb3-clustered \
## Helm air-gapped deployment
> [!Important]
> \[!Important]
> For air-gapped deployment, ensure you have [configured access to a private registry for InfluxDB images and the kubit operator](/influxdb3/clustered/install/set-up-cluster/configure-cluster/use-helm/#configure-access-to-the-influxDB-container-registry).
1. On a machine with internet access, download the Helm chart:
@ -153,14 +161,14 @@ helm upgrade influxdb influxdata/influxdb3-clustered \
```bash
# Add the InfluxData repository
helm repo add influxdata https://helm.influxdata.com/
# Update the repositories
helm repo update
# Download the chart as a tarball
helm pull influxdata/influxdb3-clustered --version X.Y.Z
```
Replace `X.Y.Z` with the specific chart version you want to use.
2. Transfer the chart tarball to your air-gapped environment using your secure file transfer method.
@ -188,7 +196,8 @@ helm upgrade influxdb ./influxdb3-clustered-X.Y.Z.tgz \
--namespace influxdb
```
> [!Note]
> \[!Note]
>
> #### kubit's role in air-gapped environments
>
> When deploying with Helm in an air-gapped environment:
@ -200,6 +209,7 @@ helm upgrade influxdb ./influxdb3-clustered-X.Y.Z.tgz \
> This is why you need to [mirror InfluxDB images and kubit operator images](/influxdb3/clustered/install/set-up-cluster/configure-cluster/use-helm/#mirror-influxdb-images) for air-gapped deployments.
<!--------------------------------- END Helm ---------------------------------->
{{% /tab-content %}}
{{< /tabs-wrapper >}}
@ -208,7 +218,7 @@ helm upgrade influxdb ./influxdb3-clustered-X.Y.Z.tgz \
Kubernetes deployments take some time to complete. To check on the status of a
deployment, use the `kubectl get` command:
> [!Note]
> \[!Note]
> The following example uses the [`yq` command-line YAML parser](https://github.com/mikefarah/yq)
> to parse and format the YAML output.
> You can also specify the output as `json` and use the


@ -13,17 +13,17 @@ aliases:
- /influxdb3/clustered/install/prerequisites/
---
InfluxDB Clustered requires the following prerequisite external dependencies:
InfluxDB Clustered requires the following prerequisite external dependencies:
- **kubectl command line tool**
- **Kubernetes cluster**
- **kubecfg kubit operator**
- **Kubernetes ingress controller**
- **Object storage**: AWS S3 or S3-compatible storage (including Google Cloud Storage
or Azure Blob Storage) to store the InfluxDB Parquet files.
- **PostgreSQL-compatible database** _(AWS Aurora, hosted PostgreSQL, etc.)_:
or Azure Blob Storage) to store the InfluxDB Parquet files.
- **PostgreSQL-compatible database** *(AWS Aurora, hosted PostgreSQL, etc.)*:
Stores the [InfluxDB Catalog](/influxdb3/clustered/reference/internals/storage-engine/#catalog).
- **Local or attached storage**:
- **Local or attached storage**:
Stores the Write-Ahead Log (WAL) for
[InfluxDB Ingesters](/influxdb3/clustered/reference/internals/storage-engine/#ingester).
@ -45,7 +45,7 @@ cluster.
Follow instructions to install `kubectl` on your local machine:
> [!Note]
> \[!Note]
> InfluxDB Clustered Kubernetes deployments require `kubectl` 1.27 or higher.
- [Install kubectl on Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/)
@ -54,35 +54,35 @@ Follow instructions to install `kubectl` on your local machine:
#### Set up your Kubernetes cluster
1. Deploy a Kubernetes cluster. The deployment process depends on your
Kubernetes environment or Kubernetes cloud provider. Refer to the
[Kubernetes documentation](https://kubernetes.io/docs/home/) or your cloud
provider's documentation for information about deploying a Kubernetes cluster.
1. Deploy a Kubernetes cluster. The deployment process depends on your
Kubernetes environment or Kubernetes cloud provider. Refer to the
[Kubernetes documentation](https://kubernetes.io/docs/home/) or your cloud
provider's documentation for information about deploying a Kubernetes cluster.
2. Ensure `kubectl` can connect to your Kubernetes cluster.
Your [kubeconfig file](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/)
defines cluster connection credentials.
2. Ensure `kubectl` can connect to your Kubernetes cluster.
Your [kubeconfig file](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/)
defines cluster connection credentials.
3. Create two namespaces--`influxdb` and `kubit`. Use
[`kubectl create namespace`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_namespace/) to create the
namespaces:
3. Create two namespaces--`influxdb` and `kubit`. Use
[`kubectl create namespace`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_namespace/) to create the
namespaces:
<!-- pytest.mark.skip -->
<!-- pytest.mark.skip -->
```bash
kubectl create namespace influxdb && \
kubectl create namespace kubit
```
```bash
kubectl create namespace influxdb && \
kubectl create namespace kubit
```
4. Install an [ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/)
in the cluster and a mechanism to obtain a valid TLS certificate
(for example: [cert-manager](https://cert-manager.io/) or provide the
certificate PEM manually out of band).
To use the InfluxDB-specific ingress controller, install [Ingress NGINX](https://github.com/kubernetes/ingress-nginx).
4. Install an [ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/)
in the cluster and a mechanism to obtain a valid TLS certificate
(for example: [cert-manager](https://cert-manager.io/) or provide the
certificate PEM manually out of band).
To use the InfluxDB-specific ingress controller, install [Ingress NGINX](https://github.com/kubernetes/ingress-nginx).
5. Ensure your Kubernetes cluster can access the InfluxDB container registry,
or, if running in an air-gapped environment, a local container registry to
which you can copy the InfluxDB images.
5. Ensure your Kubernetes cluster can access the InfluxDB container registry,
or, if running in an air-gapped environment, a local container registry to
which you can copy the InfluxDB images.
### Cluster sizing recommendation
@ -97,10 +97,11 @@ following sizing for {{% product-name %}} components:
[On-Prem](#)
{{% /tabs %}}
{{% tab-content %}}
<!--------------------------------- BEGIN AWS --------------------------------->
- **Catalog store (PostgreSQL-compatible database) (x1):**
- _[See below](#postgresql-compatible-database-requirements)_
- *[See below](#postgresql-compatible-database-requirements)*
- **Ingesters and Routers (x3):**
- EC2 m6i.2xlarge (8 CPU, 32 GB RAM)
- Local storage: minimum of 2 GB (high-speed SSD)
@ -112,12 +113,14 @@ following sizing for {{% product-name %}} components:
- EC2 t3.large (2 CPU, 8 GB RAM)
<!---------------------------------- END AWS ---------------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!--------------------------------- BEGIN GCP --------------------------------->
- **Catalog store (PostgreSQL-compatible database) (x1):**
- _[See below](#postgresql-compatible-database-requirements)_
- *[See below](#postgresql-compatible-database-requirements)*
- **Ingesters and Routers (x3):**
- GCE c2-standard-8 (8 CPU, 32 GB RAM)
- Local storage: minimum of 2 GB (high-speed SSD)
@ -129,25 +132,29 @@ following sizing for {{% product-name %}} components:
- GCE c2d-standard-2 (2 CPU, 8 GB RAM)
<!---------------------------------- END GCP ---------------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!-------------------------------- BEGIN Azure -------------------------------->
- **Catalog store (PostgreSQL-compatible database) (x1):**
- _[See below](#postgresql-compatible-database-requirements)_
- *[See below](#postgresql-compatible-database-requirements)*
- **Ingesters and Routers (x3):**
- Standard_D8s_v3 (8 CPU, 32 GB RAM)
- Standard\_D8s\_v3 (8 CPU, 32 GB RAM)
- Local storage: minimum of 2 GB (high-speed SSD)
- **Queriers (x3):**
- Standard_D8s_v3 (8 CPU, 32 GB RAM)
- Standard\_D8s\_v3 (8 CPU, 32 GB RAM)
- **Compactors (x1):**
- Standard_D8s_v3 (8 CPU, 32 GB RAM)
- Standard\_D8s\_v3 (8 CPU, 32 GB RAM)
- **Kubernetes Control Plane (x1):**
- Standard_B2ms (2 CPU, 8 GB RAM)
- Standard\_B2ms (2 CPU, 8 GB RAM)
<!--------------------------------- END Azure --------------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!------------------------------- BEGIN ON-PREM ------------------------------->
- **Catalog store (PostgreSQL-compatible database) (x1):**
@ -168,6 +175,7 @@ following sizing for {{% product-name %}} components:
- RAM: 8 GB
<!-------------------------------- END ON-PREM -------------------------------->
{{% /tab-content %}}
{{< /tabs-wrapper >}}
@ -181,20 +189,21 @@ simplifies the installation and management of the InfluxDB Clustered package.
It manages the application of the jsonnet templates used to install, manage, and
update an InfluxDB cluster.
> [!Note]
> \[!Note]
>
> #### The InfluxDB Clustered Helm chart includes the kubit operator
>
>
> If using the [InfluxDB Clustered Helm chart](https://github.com/influxdata/helm-charts/tree/master/charts/influxdb3-clustered)
> to deploy your InfluxDB cluster, you do not need to install the kubit operator
> separately. The Helm chart installs the kubit operator.
Use `kubectl` to install the [kubecfg kubit](https://github.com/kubecfg/kubit)
operator **v0.0.18 or later**.
operator **v0.0.22 or later**.
<!-- pytest.mark.skip -->
```bash
kubectl apply -k 'https://github.com/kubecfg/kubit//kustomize/global?ref=v0.0.19'
kubectl apply -k 'https://github.com/kubecfg/kubit//kustomize/global?ref=v0.0.22'
```
### Set up a Kubernetes ingress controller
@ -206,7 +215,8 @@ You can provide your own ingress or you can install
[Nginx Ingress Controller](https://github.com/kubernetes/ingress-nginx) to use
the InfluxDB-defined ingress.
> [!Important]
> \[!Important]
>
> #### Allow gRPC/HTTP2
>
> InfluxDB Clustered components use gRPC/HTTP2 protocols.
@ -232,19 +242,20 @@ that work with InfluxDB Clustered. Other S3-compatible object stores should work
as well.
{{% /caption %}}
> [!Important]
> \[!Important]
>
> #### Object storage recommendations
>
>
> We **strongly** recommend the following:
>
>
> - ##### Enable object versioning
>
>
> Enable object versioning in your object store.
> Refer to your object storage provider's documentation for information about
> enabling object versioning.
>
>
> - ##### Run the object store in a separate namespace or outside of Kubernetes
>
>
> Run the Object store in a separate namespace from InfluxDB or external to
> Kubernetes entirely. Doing so makes management of the InfluxDB cluster easier
> and helps to prevent accidental data loss. While deploying everything in the
@ -260,7 +271,8 @@ the correct permissions to allow InfluxDB to perform all the actions it needs to
The IAM role that you use to access AWS S3 should have the following policy:
{{% code-placeholders "S3_BUCKET_NAME" %}}
{{% code-placeholders "S3\_BUCKET\_NAME" %}}
```json
{
"Version": "2012-10-17",
@ -297,6 +309,7 @@ The IAM role that you use to access AWS S3 should have the following policy:
]
}
```
{{% /code-placeholders %}}
Replace the following:
@ -310,13 +323,15 @@ Replace the following:
To use Google Cloud Storage (GCS) as your object store, your [IAM principal](https://cloud.google.com/iam/docs/overview) should be granted the `roles/storage.objectUser` role.
For example, if using [Google Service Accounts](https://cloud.google.com/iam/docs/service-account-overview):
{{% code-placeholders "GCP_SERVICE_ACCOUNT|GCP_BUCKET" %}}
{{% code-placeholders "GCP\_SERVICE\_ACCOUNT|GCP\_BUCKET" %}}
```bash
gcloud storage buckets add-iam-policy-binding \
gs://GCP_BUCKET \
--member="serviceAccount:GCP_SERVICE_ACCOUNT" \
--role="roles/storage.objectUser"
```
{{% /code-placeholders %}}
Replace the following:
@ -333,13 +348,15 @@ should be granted the `Storage Blob Data Contributor` role.
This is a built-in role for Azure which encompasses common permissions.
You can assign it using the following command:
{{% code-placeholders "PRINCIPAL|AZURE_SUBSCRIPTION|AZURE_RESOURCE_GROUP|AZURE_STORAGE_ACCOUNT|AZURE_STORAGE_CONTAINER" %}}
{{% code-placeholders "PRINCIPAL|AZURE\_SUBSCRIPTION|AZURE\_RESOURCE\_GROUP|AZURE\_STORAGE\_ACCOUNT|AZURE\_STORAGE\_CONTAINER" %}}
```bash
az role assignment create \
--role "Storage Blob Data Contributor" \
--assignee PRINCIPAL \
--scope "/subscriptions/AZURE_SUBSCRIPTION/resourceGroups/AZURE_RESOURCE_GROUP/providers/Microsoft.Storage/storageAccounts/AZURE_STORAGE_ACCOUNT/blobServices/default/containers/AZURE_STORAGE_CONTAINER"
```
{{% /code-placeholders %}}
Replace the following:
@ -354,7 +371,7 @@ Replace the following:
{{< /expand-wrapper >}}
> [!Note]
> \[!Note]
> To configure permissions with MinIO, use the
> [example AWS access policy](#view-example-aws-s3-access-policy).
@ -362,7 +379,7 @@ Replace the following:
The [InfluxDB Catalog](/influxdb3/clustered/reference/internals/storage-engine/#catalog)
that stores metadata related to your time series data requires a PostgreSQL or
PostgreSQL-compatible database _(AWS Aurora, hosted PostgreSQL, etc.)_.
PostgreSQL-compatible database *(AWS Aurora, hosted PostgreSQL, etc.)*.
The process for installing and setting up your PostgreSQL-compatible database
depends on the database and database provider you use.
Refer to your database's or provider's documentation for setting up your
@ -376,12 +393,12 @@ PostgreSQL-compatible database.
applications, ensure that your PostgreSQL-compatible instance is dedicated
exclusively to InfluxDB.
> [!Note]
> \[!Note]
> We **strongly** recommend running the PostgreSQL-compatible database
> in a separate namespace from InfluxDB or external to Kubernetes entirely.
> Doing so makes management of the InfluxDB cluster easier and helps to prevent
> accidental data loss.
>
>
> While deploying everything in the same namespace is possible, we do not
> recommend it for production environments.


@ -22,8 +22,8 @@ influxdb3 [GLOBAL-OPTIONS] [COMMAND]
## Commands
| Command | Description |
| :--------------------------------------------------------------| :---------------------------------- |
| Command | Description |
| :---------------------------------------------------------- | :---------------------------------- |
| [create](/influxdb3/core/reference/cli/influxdb3/create/) | Create resources |
| [delete](/influxdb3/core/reference/cli/influxdb3/delete/) | Delete resources |
| [disable](/influxdb3/core/reference/cli/influxdb3/disable/) | Disable resources |
@ -37,14 +37,39 @@ influxdb3 [GLOBAL-OPTIONS] [COMMAND]
## Global options
| Option | | Description |
| :----- | :---------------- | :-------------------------------------------------------------------- |
| `-h` | `--help` | Print help information |
| | `--help-all` | Print detailed help information including runtime configuration options |
| `-V` | `--version` | Print version |
| Option | | Description |
| :----- | :----------- | :---------------------------------------------------------------------- |
| `-h` | `--help` | Print help information |
| | `--help-all` | Print detailed help information including runtime configuration options |
| `-V` | `--version` | Print version |
For advanced global configuration options (including `--num-io-threads` and other runtime settings), see [Configuration options](/influxdb3/core/reference/config-options/#global-configuration-options).
## Quick-Start Mode
For development, testing, and home use, you can start {{< product-name >}} by running `influxdb3` without the `serve` subcommand or any configuration parameters. The system automatically generates required values:
- **`node-id`**: `{hostname}-node` (fallback: `primary-node`)
- **`object-store`**: `file`
- **`data-dir`**: `~/.influxdb`
The system displays warning messages showing the auto-generated identifiers:
```
Using auto-generated node id: mylaptop-node. For production deployments, explicitly set --node-id
```
> \[!Important]
>
> #### Production deployments
>
> Quick-start mode is designed for development and testing environments.
> For production deployments, use explicit configuration with the `serve` subcommand
> and specify all required parameters as shown in the [Examples](#examples) below.
**Configuration precedence**: CLI flags > environment variables > auto-generated defaults
For more information about quick-start mode, see [Get started](/influxdb3/core/get-started/setup/#quick-start-mode-development).
## Examples
@ -54,6 +79,21 @@ with a unique identifier for your {{< product-name >}} server.
{{% code-placeholders "my-host-01" %}}
<!--pytest.mark.skip-->
### Quick-start InfluxDB 3 server
```bash
# Zero-config startup
influxdb3
# Override specific defaults
influxdb3 --object-store memory
# Use environment variables to override defaults
INFLUXDB3_NODE_IDENTIFIER_PREFIX=my-node influxdb3
```
### Run the InfluxDB 3 server
<!--pytest.mark.skip-->
@ -104,7 +144,7 @@ influxdb3 serve \
--verbose
```
### Run {{% product-name %}} with debug logging using LOG_FILTER
### Run {{% product-name %}} with debug logging using LOG\_FILTER
<!--pytest.mark.skip-->
@ -115,4 +155,4 @@ LOG_FILTER=debug influxdb3 serve \
--node-id my-host-01
```
{{% /code-placeholders %}}
{{% /code-placeholders %}}

View File

@ -18,20 +18,22 @@ The `influxdb3 serve` command starts the {{< product-name >}} server.
<!--pytest.mark.skip-->
```bash
influxdb3 serve [OPTIONS] --node-id <HOST_IDENTIFIER_PREFIX>
influxdb3 serve [OPTIONS]
```
## Required parameters
## Required Parameters
- **node-id**: A unique identifier for your server instance. Must be unique for any hosts sharing the same object store.
- **object-store**: Determines where time series data is stored.
- Other object store parameters depending on the selected `object-store` type.
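Putting the required parameters together, a minimal invocation might look like the following sketch (the node ID and data directory are example values):

```bash
# Minimal example: explicit node ID with file-based object storage
influxdb3 serve \
  --node-id my-host-01 \
  --object-store file \
  --data-dir ~/.influxdb3
```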
> [!NOTE]
> \[!NOTE]
> `--node-id` supports alphanumeric strings with optional hyphens.
> [!Important]
> \[!Important]
>
> #### Global configuration options
>
> Some configuration options (like [`--num-io-threads`](/influxdb3/core/reference/config-options/#num-io-threads)) are **global** and must be specified **before** the `serve` command:
>
> ```bash
@ -44,99 +46,95 @@ influxdb3 serve [OPTIONS] --node-id <HOST_IDENTIFIER_PREFIX>
| Option | | Description |
| :--------------- | :--------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------ |
| {{< req "\*" >}} | `--node-id` | _See [configuration options](/influxdb3/core/reference/config-options/#node-id)_ |
| | `--object-store` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store)_ |
| | `--admin-token-recovery-http-bind` | _See [configuration options](/influxdb3/core/reference/config-options/#admin-token-recovery-http-bind)_ |
| | `--admin-token-recovery-tcp-listener-file-path` | _See [configuration options](/influxdb3/core/reference/config-options/#admin-token-recovery-tcp-listener-file-path)_ |
| | `--admin-token-file` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#admin-token-file)_ |
| | `--aws-access-key-id` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-access-key-id)_ |
| | `--aws-allow-http` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-allow-http)_ |
| | `--aws-credentials-file` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-credentials-file)_ |
| | `--aws-default-region` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-default-region)_ |
| | `--aws-endpoint` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-endpoint)_ |
| | `--aws-secret-access-key` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-secret-access-key)_ |
| | `--aws-session-token` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-session-token)_ |
| | `--aws-skip-signature` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-skip-signature)_ |
| | `--azure-allow-http` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-allow-http)_ |
| | `--azure-endpoint` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-endpoint)_ |
| | `--azure-storage-access-key` | _See [configuration options](/influxdb3/core/reference/config-options/#azure-storage-access-key)_ |
| | `--azure-storage-account` | _See [configuration options](/influxdb3/core/reference/config-options/#azure-storage-account)_ |
| | `--bucket` | _See [configuration options](/influxdb3/core/reference/config-options/#bucket)_ |
| | `--data-dir` | _See [configuration options](/influxdb3/core/reference/config-options/#data-dir)_ |
| | `--datafusion-config` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-config)_ |
| | `--datafusion-max-parquet-fanout` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-max-parquet-fanout)_ |
| | `--datafusion-num-threads` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-num-threads)_ |
| | `--datafusion-runtime-disable-lifo-slot` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-disable-lifo-slot)_ |
| | `--datafusion-runtime-event-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-event-interval)_ |
| | `--datafusion-runtime-global-queue-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-global-queue-interval)_ |
| | `--datafusion-runtime-max-blocking-threads` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-max-blocking-threads)_ |
| | `--datafusion-runtime-max-io-events-per-tick` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-max-io-events-per-tick)_ |
| | `--datafusion-runtime-thread-keep-alive` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-thread-keep-alive)_ |
| | `--datafusion-runtime-thread-priority` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-thread-priority)_ |
| | `--datafusion-runtime-type` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-type)_ |
| | `--datafusion-use-cached-parquet-loader` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-use-cached-parquet-loader)_ |
| | `--delete-grace-period` | _See [configuration options](/influxdb3/core/reference/config-options/#delete-grace-period)_ |
| | `--disable-authz` | _See [configuration options](/influxdb3/core/reference/config-options/#disable-authz)_ |
| | `--disable-parquet-mem-cache` | _See [configuration options](/influxdb3/core/reference/config-options/#disable-parquet-mem-cache)_ |
| | `--distinct-cache-eviction-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#distinct-cache-eviction-interval)_ |
| | `--exec-mem-pool-bytes` | _See [configuration options](/influxdb3/core/reference/config-options/#exec-mem-pool-bytes)_ |
| | `--force-snapshot-mem-threshold` | _See [configuration options](/influxdb3/core/reference/config-options/#force-snapshot-mem-threshold)_ |
| | `--gen1-duration` | _See [configuration options](/influxdb3/core/reference/config-options/#gen1-duration)_ |
| | `--gen1-lookback-duration` | _See [configuration options](/influxdb3/core/reference/config-options/#gen1-lookback-duration)_ |
| | `--google-service-account` | _See [configuration options](/influxdb3/core/reference/config-options/#google-service-account)_ |
| | `--hard-delete-default-duration` | _See [configuration options](/influxdb3/core/reference/config-options/#hard-delete-default-duration)_ |
| {{< req "\*" >}} | `--node-id` | *See [configuration options](/influxdb3/core/reference/config-options/#node-id)* |
| {{< req "\*" >}} | `--object-store` | *See [configuration options](/influxdb3/core/reference/config-options/#object-store)* |
| | `--admin-token-recovery-http-bind` | *See [configuration options](/influxdb3/core/reference/config-options/#admin-token-recovery-http-bind)* |
| | `--admin-token-recovery-tcp-listener-file-path` | *See [configuration options](/influxdb3/core/reference/config-options/#admin-token-recovery-tcp-listener-file-path)* |
| | `--admin-token-file` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#admin-token-file)* |
| | `--aws-access-key-id` | *See [configuration options](/influxdb3/core/reference/config-options/#aws-access-key-id)* |
| | `--aws-allow-http` | *See [configuration options](/influxdb3/core/reference/config-options/#aws-allow-http)* |
| | `--aws-credentials-file` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-credentials-file)* |
| | `--aws-default-region` | *See [configuration options](/influxdb3/core/reference/config-options/#aws-default-region)* |
| | `--aws-endpoint` | *See [configuration options](/influxdb3/core/reference/config-options/#aws-endpoint)* |
| | `--aws-secret-access-key` | *See [configuration options](/influxdb3/core/reference/config-options/#aws-secret-access-key)* |
| | `--aws-session-token` | *See [configuration options](/influxdb3/core/reference/config-options/#aws-session-token)* |
| | `--aws-skip-signature` | *See [configuration options](/influxdb3/core/reference/config-options/#aws-skip-signature)* |
| | `--azure-allow-http` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-allow-http)* |
| | `--azure-endpoint` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-endpoint)* |
| | `--azure-storage-access-key` | *See [configuration options](/influxdb3/core/reference/config-options/#azure-storage-access-key)* |
| | `--azure-storage-account` | *See [configuration options](/influxdb3/core/reference/config-options/#azure-storage-account)* |
| | `--bucket` | *See [configuration options](/influxdb3/core/reference/config-options/#bucket)* |
| | `--data-dir` | *See [configuration options](/influxdb3/core/reference/config-options/#data-dir)* |
| | `--datafusion-config` | *See [configuration options](/influxdb3/core/reference/config-options/#datafusion-config)* |
| | `--datafusion-max-parquet-fanout` | *See [configuration options](/influxdb3/core/reference/config-options/#datafusion-max-parquet-fanout)* |
| | `--datafusion-num-threads` | *See [configuration options](/influxdb3/core/reference/config-options/#datafusion-num-threads)* |
| | `--datafusion-runtime-disable-lifo-slot` | *See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-disable-lifo-slot)* |
| | `--datafusion-runtime-event-interval` | *See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-event-interval)* |
| | `--datafusion-runtime-global-queue-interval` | *See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-global-queue-interval)* |
| | `--datafusion-runtime-max-blocking-threads` | *See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-max-blocking-threads)* |
| | `--datafusion-runtime-max-io-events-per-tick` | *See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-max-io-events-per-tick)* |
| | `--datafusion-runtime-thread-keep-alive` | *See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-thread-keep-alive)* |
| | `--datafusion-runtime-thread-priority` | *See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-thread-priority)* |
| | `--datafusion-runtime-type` | *See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-type)* |
| | `--datafusion-use-cached-parquet-loader` | *See [configuration options](/influxdb3/core/reference/config-options/#datafusion-use-cached-parquet-loader)* |
| | `--delete-grace-period` | *See [configuration options](/influxdb3/core/reference/config-options/#delete-grace-period)* |
| | `--disable-authz` | *See [configuration options](/influxdb3/core/reference/config-options/#disable-authz)* |
| | `--disable-parquet-mem-cache` | *See [configuration options](/influxdb3/core/reference/config-options/#disable-parquet-mem-cache)* |
| | `--distinct-cache-eviction-interval` | *See [configuration options](/influxdb3/core/reference/config-options/#distinct-cache-eviction-interval)* |
| | `--exec-mem-pool-bytes` | *See [configuration options](/influxdb3/core/reference/config-options/#exec-mem-pool-bytes)* |
| | `--force-snapshot-mem-threshold` | *See [configuration options](/influxdb3/core/reference/config-options/#force-snapshot-mem-threshold)* |
| | `--gen1-duration` | *See [configuration options](/influxdb3/core/reference/config-options/#gen1-duration)* |
| | `--gen1-lookback-duration` | *See [configuration options](/influxdb3/core/reference/config-options/#gen1-lookback-duration)* |
| | `--google-service-account` | *See [configuration options](/influxdb3/core/reference/config-options/#google-service-account)* |
| | `--hard-delete-default-duration` | *See [configuration options](/influxdb3/core/reference/config-options/#hard-delete-default-duration)* |
| `-h` | `--help` | Print help information |
| | `--help-all` | Print detailed help information |
| | `--http-bind` | _See [configuration options](/influxdb3/core/reference/config-options/#http-bind)_ |
| | `--last-cache-eviction-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#last-cache-eviction-interval)_ |
| | `--log-destination` | _See [configuration options](/influxdb3/core/reference/config-options/#log-destination)_ |
| | `--log-filter` | _See [configuration options](/influxdb3/core/reference/config-options/#log-filter)_ |
| | `--log-format` | _See [configuration options](/influxdb3/core/reference/config-options/#log-format)_ |
| | `--max-http-request-size` | _See [configuration options](/influxdb3/core/reference/config-options/#max-http-request-size)_ |
| | `--object-store-cache-endpoint` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store-cache-endpoint)_ |
| | `--object-store-connection-limit` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store-connection-limit)_ |
| | `--object-store-http2-max-frame-size` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store-http2-max-frame-size)_ |
| | `--object-store-http2-only` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store-http2-only)_ |
| | `--object-store-max-retries` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store-max-retries)_ |
| | `--object-store-retry-timeout` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store-retry-timeout)_ |
| | `--package-manager` | _See [configuration options](/influxdb3/core/reference/config-options/#package-manager)_ |
| | `--parquet-mem-cache-prune-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#parquet-mem-cache-prune-interval)_ |
| | `--parquet-mem-cache-prune-percentage` | _See [configuration options](/influxdb3/core/reference/config-options/#parquet-mem-cache-prune-percentage)_ |
| | `--parquet-mem-cache-query-path-duration` | _See [configuration options](/influxdb3/core/reference/config-options/#parquet-mem-cache-query-path-duration)_ |
| | `--parquet-mem-cache-size` | _See [configuration options](/influxdb3/core/reference/config-options/#parquet-mem-cache-size)_ |
| | `--plugin-dir` | _See [configuration options](/influxdb3/core/reference/config-options/#plugin-dir)_ |
| | `--preemptive-cache-age` | _See [configuration options](/influxdb3/core/reference/config-options/#preemptive-cache-age)_ |
| | `--query-file-limit` | _See [configuration options](/influxdb3/core/reference/config-options/#query-file-limit)_ |
| | `--query-log-size` | _See [configuration options](/influxdb3/core/reference/config-options/#query-log-size)_ |
| | `--retention-check-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#retention-check-interval)_ |
| | `--snapshotted-wal-files-to-keep` | _See [configuration options](/influxdb3/core/reference/config-options/#snapshotted-wal-files-to-keep)_ |
| | `--table-index-cache-concurrency-limit` | _See [configuration options](/influxdb3/core/reference/config-options/#table-index-cache-concurrency-limit)_ |
| | `--table-index-cache-max-entries` | _See [configuration options](/influxdb3/core/reference/config-options/#table-index-cache-max-entries)_ |
| | `--tcp-listener-file-path` | _See [configuration options](/influxdb3/core/reference/config-options/#tcp-listener-file-path)_ |
| | `--telemetry-disable-upload` | _See [configuration options](/influxdb3/core/reference/config-options/#telemetry-disable-upload)_ |
| | `--telemetry-endpoint` | _See [configuration options](/influxdb3/core/reference/config-options/#telemetry-endpoint)_ |
| | `--tls-cert` | _See [configuration options](/influxdb3/core/reference/config-options/#tls-cert)_ |
| | `--tls-key` | _See [configuration options](/influxdb3/core/reference/config-options/#tls-key)_ |
| | `--tls-minimum-version` | _See [configuration options](/influxdb3/core/reference/config-options/#tls-minimum-version)_ |
| | `--traces-exporter` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter)_ |
| | `--traces-exporter-jaeger-agent-host` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-agent-host)_ |
| | `--traces-exporter-jaeger-agent-port` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-agent-port)_ |
| | `--traces-exporter-jaeger-service-name` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-service-name)_ |
| | `--traces-exporter-jaeger-trace-context-header-name` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-trace-context-header-name)_ |
| | `--traces-jaeger-debug-name` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-jaeger-debug-name)_ |
| | `--traces-jaeger-max-msgs-per-second` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-jaeger-max-msgs-per-second)_ |
| | `--traces-jaeger-tags` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-jaeger-tags)_ |
| | `--virtual-env-location` | _See [configuration options](/influxdb3/core/reference/config-options/#virtual-env-location)_ |
| | `--wal-flush-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#wal-flush-interval)_ |
| | `--wal-max-write-buffer-size` | _See [configuration options](/influxdb3/core/reference/config-options/#wal-max-write-buffer-size)_ |
| | `--wal-replay-concurrency-limit` | _See [configuration options](/influxdb3/core/reference/config-options/#wal-replay-concurrency-limit)_ |
| | `--wal-replay-fail-on-error` | _See [configuration options](/influxdb3/core/reference/config-options/#wal-replay-fail-on-error)_ |
| | `--wal-snapshot-size` | _See [configuration options](/influxdb3/core/reference/config-options/#wal-snapshot-size)_ |
| | `--without-auth` | _See [configuration options](/influxdb3/core/reference/config-options/#without-auth)_ |
{{< caption >}}
{{< req text="\* Required options" >}}
{{< /caption >}}
| | `--http-bind` | *See [configuration options](/influxdb3/core/reference/config-options/#http-bind)* |
| | `--last-cache-eviction-interval` | *See [configuration options](/influxdb3/core/reference/config-options/#last-cache-eviction-interval)* |
| | `--log-destination` | *See [configuration options](/influxdb3/core/reference/config-options/#log-destination)* |
| | `--log-filter` | *See [configuration options](/influxdb3/core/reference/config-options/#log-filter)* |
| | `--log-format` | *See [configuration options](/influxdb3/core/reference/config-options/#log-format)* |
| | `--max-http-request-size` | *See [configuration options](/influxdb3/core/reference/config-options/#max-http-request-size)* |
| | `--object-store-cache-endpoint` | *See [configuration options](/influxdb3/core/reference/config-options/#object-store-cache-endpoint)* |
| | `--object-store-connection-limit` | *See [configuration options](/influxdb3/core/reference/config-options/#object-store-connection-limit)* |
| | `--object-store-http2-max-frame-size` | *See [configuration options](/influxdb3/core/reference/config-options/#object-store-http2-max-frame-size)* |
| | `--object-store-http2-only` | *See [configuration options](/influxdb3/core/reference/config-options/#object-store-http2-only)* |
| | `--object-store-max-retries` | *See [configuration options](/influxdb3/core/reference/config-options/#object-store-max-retries)* |
| | `--object-store-retry-timeout` | *See [configuration options](/influxdb3/core/reference/config-options/#object-store-retry-timeout)* |
| | `--package-manager` | *See [configuration options](/influxdb3/core/reference/config-options/#package-manager)* |
| | `--parquet-mem-cache-prune-interval` | *See [configuration options](/influxdb3/core/reference/config-options/#parquet-mem-cache-prune-interval)* |
| | `--parquet-mem-cache-prune-percentage` | *See [configuration options](/influxdb3/core/reference/config-options/#parquet-mem-cache-prune-percentage)* |
| | `--parquet-mem-cache-query-path-duration` | *See [configuration options](/influxdb3/core/reference/config-options/#parquet-mem-cache-query-path-duration)* |
| | `--parquet-mem-cache-size` | *See [configuration options](/influxdb3/core/reference/config-options/#parquet-mem-cache-size)* |
| | `--plugin-dir` | *See [configuration options](/influxdb3/core/reference/config-options/#plugin-dir)* |
| | `--preemptive-cache-age` | *See [configuration options](/influxdb3/core/reference/config-options/#preemptive-cache-age)* |
| | `--query-file-limit` | *See [configuration options](/influxdb3/core/reference/config-options/#query-file-limit)* |
| | `--query-log-size` | *See [configuration options](/influxdb3/core/reference/config-options/#query-log-size)* |
| | `--retention-check-interval` | *See [configuration options](/influxdb3/core/reference/config-options/#retention-check-interval)* |
| | `--snapshotted-wal-files-to-keep` | *See [configuration options](/influxdb3/core/reference/config-options/#snapshotted-wal-files-to-keep)* |
| | `--table-index-cache-concurrency-limit` | *See [configuration options](/influxdb3/core/reference/config-options/#table-index-cache-concurrency-limit)* |
| | `--table-index-cache-max-entries` | *See [configuration options](/influxdb3/core/reference/config-options/#table-index-cache-max-entries)* |
| | `--tcp-listener-file-path` | *See [configuration options](/influxdb3/core/reference/config-options/#tcp-listener-file-path)* |
| | `--telemetry-disable-upload` | *See [configuration options](/influxdb3/core/reference/config-options/#telemetry-disable-upload)* |
| | `--telemetry-endpoint` | *See [configuration options](/influxdb3/core/reference/config-options/#telemetry-endpoint)* |
| | `--tls-cert` | *See [configuration options](/influxdb3/core/reference/config-options/#tls-cert)* |
| | `--tls-key` | *See [configuration options](/influxdb3/core/reference/config-options/#tls-key)* |
| | `--tls-minimum-version` | *See [configuration options](/influxdb3/core/reference/config-options/#tls-minimum-version)* |
| | `--traces-exporter` | *See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter)* |
| | `--traces-exporter-jaeger-agent-host` | *See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-agent-host)* |
| | `--traces-exporter-jaeger-agent-port` | *See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-agent-port)* |
| | `--traces-exporter-jaeger-service-name` | *See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-service-name)* |
| | `--traces-exporter-jaeger-trace-context-header-name` | *See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-trace-context-header-name)* |
| | `--traces-jaeger-debug-name` | *See [configuration options](/influxdb3/core/reference/config-options/#traces-jaeger-debug-name)* |
| | `--traces-jaeger-max-msgs-per-second` | *See [configuration options](/influxdb3/core/reference/config-options/#traces-jaeger-max-msgs-per-second)* |
| | `--traces-jaeger-tags` | *See [configuration options](/influxdb3/core/reference/config-options/#traces-jaeger-tags)* |
| | `--virtual-env-location` | *See [configuration options](/influxdb3/core/reference/config-options/#virtual-env-location)* |
| | `--wal-flush-interval` | *See [configuration options](/influxdb3/core/reference/config-options/#wal-flush-interval)* |
| | `--wal-max-write-buffer-size` | *See [configuration options](/influxdb3/core/reference/config-options/#wal-max-write-buffer-size)* |
| | `--wal-replay-concurrency-limit` | *See [configuration options](/influxdb3/core/reference/config-options/#wal-replay-concurrency-limit)* |
| | `--wal-replay-fail-on-error` | *See [configuration options](/influxdb3/core/reference/config-options/#wal-replay-fail-on-error)* |
| | `--wal-snapshot-size` | *See [configuration options](/influxdb3/core/reference/config-options/#wal-snapshot-size)* |
| | `--without-auth` | *See [configuration options](/influxdb3/core/reference/config-options/#without-auth)* |
### Option environment variables
@ -144,11 +142,52 @@ You can use environment variables to define most `influxdb3 serve` options.
For more information, see
[Configuration options](/influxdb3/core/reference/config-options/).
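For example, a minimal sketch that sets `serve` options through environment variables instead of flags. `INFLUXDB3_NODE_IDENTIFIER_PREFIX` appears elsewhere on this page; treat `INFLUXDB3_OBJECT_STORE` as an assumption and confirm exact variable names in the configuration options reference.
<!--pytest.mark.skip-->
```bash
# Hypothetical sketch: configure options through environment variables,
# then pass any remaining options as flags.
export INFLUXDB3_NODE_IDENTIFIER_PREFIX=my-host-01
export INFLUXDB3_OBJECT_STORE=file

influxdb3 serve --data-dir ~/.influxdb
```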
## Quick-start mode
For development, testing, and home use, you can start {{< product-name >}} by running `influxdb3` without the `serve` subcommand or any configuration parameters. The system automatically generates required values:
- **`node-id`**: `{hostname}-node` (fallback: `primary-node`)
- **`object-store`**: `file`
- **`data-dir`**: `~/.influxdb`
The system displays warning messages showing the auto-generated identifiers:
```
Using auto-generated node id: mylaptop-node. For production deployments, explicitly set --node-id
```
### Quick-start examples
<!--pytest.mark.skip-->
```bash
# Zero-config startup
influxdb3
# Override specific defaults
influxdb3 --object-store memory
# Use environment variables to override defaults
INFLUXDB3_NODE_IDENTIFIER_PREFIX=my-node influxdb3
```
> \[!Important]
>
> #### Production deployments
>
> Quick-start mode is designed for development and testing environments.
> For production deployments, use explicit configuration with the `serve` subcommand
> and specify all required parameters as shown in the [Examples](#examples) below.
**Configuration precedence**: CLI flags > environment variables > auto-generated defaults
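For example, in the following hypothetical invocation the `--object-store` flag wins over the environment variable, and both win over the auto-generated defaults:
<!--pytest.mark.skip-->
```bash
# The flag value (memory) takes precedence over the environment variable (file).
INFLUXDB3_OBJECT_STORE=file influxdb3 --object-store memory
```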
For more information about quick-start mode, see [Get started](/influxdb3/core/get-started/setup/#quick-start-mode-development).
## Examples
- [Run the InfluxDB 3 server](#run-the-influxdb-3-server)
- [Run the InfluxDB 3 server with extra verbose logging](#run-the-influxdb-3-server-with-extra-verbose-logging)
- [Run InfluxDB 3 with debug logging using LOG_FILTER](#run-influxdb-3-with-debug-logging-using-log_filter)
- [Run InfluxDB 3 with debug logging using LOG\_FILTER](#run-influxdb-3-with-debug-logging-using-log_filter)
In the examples below, replace
{{% code-placeholder-key %}}`my-host-01`{{% /code-placeholder-key %}}:
@ -179,7 +218,7 @@ influxdb3 serve \
--verbose
```
### Run InfluxDB 3 with debug logging using LOG_FILTER
### Run InfluxDB 3 with debug logging using LOG\_FILTER
<!--pytest.mark.skip-->
@ -192,13 +231,12 @@ LOG_FILTER=debug influxdb3 serve \
{{% /code-placeholders %}}
## Troubleshooting
### Common issues
- **Error: "Failed to connect to object store"**
- **Error: "Failed to connect to object store"**\
Verify your `--object-store` setting and ensure all required parameters for that storage type are provided.
- **Permission errors when using S3, Google Cloud, or Azure storage**
- **Permission errors when using S3, Google Cloud, or Azure storage**\
Check that your authentication credentials are correct and have sufficient permissions.
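For the S3-related items above, a sketch (with placeholder bucket, region, and credential values) that supplies the parameters these errors usually point to:
<!--pytest.mark.skip-->
```bash
influxdb3 serve \
  --node-id my-host-01 \
  --object-store s3 \
  --bucket MY_BUCKET \
  --aws-default-region us-east-1 \
  --aws-access-key-id AWS_ACCESS_KEY_ID \
  --aws-secret-access-key AWS_SECRET_ACCESS_KEY
```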
View File
@ -0,0 +1,16 @@
---
title: influxdb3 show plugins
description: >
The `influxdb3 show plugins` command lists loaded Processing Engine plugins in your
InfluxDB 3 Core server.
menu:
influxdb3_core:
parent: influxdb3 show
name: influxdb3 show plugins
weight: 350
source: /shared/influxdb3-cli/show/plugins.md
---
<!--
// SOURCE content/shared/influxdb3-cli/show/plugins.md
-->
View File
@ -0,0 +1,15 @@
---
title: influxdb3 update trigger
description: >
The `influxdb3 update trigger` command updates an existing trigger.
menu:
influxdb3_core:
parent: influxdb3 update
name: influxdb3 update trigger
weight: 401
source: /shared/influxdb3-cli/update/trigger.md
---
<!--
// SOURCE content/shared/influxdb3-cli/update/trigger.md
-->
View File
@ -22,8 +22,8 @@ influxdb3 [GLOBAL-OPTIONS] [COMMAND]
## Commands
| Command | Description |
| :--------------------------------------------------------------| :---------------------------------- |
| Command | Description |
| :---------------------------------------------------------------- | :---------------------------------- |
| [create](/influxdb3/enterprise/reference/cli/influxdb3/create/) | Create resources |
| [delete](/influxdb3/enterprise/reference/cli/influxdb3/delete/) | Delete resources |
| [disable](/influxdb3/enterprise/reference/cli/influxdb3/disable/) | Disable resources |
@ -37,27 +37,69 @@ influxdb3 [GLOBAL-OPTIONS] [COMMAND]
## Global options
| Option | | Description |
| :----- | :---------------- | :-------------------------------------------------------------------- |
| `-h` | `--help` | Print help information |
| | `--help-all` | Print detailed help information including runtime configuration options |
| `-V` | `--version` | Print version |
| Option | | Description |
| :----- | :----------- | :---------------------------------------------------------------------- |
| `-h` | `--help` | Print help information |
| | `--help-all` | Print detailed help information including runtime configuration options |
| `-V` | `--version` | Print version |
For advanced global configuration options (including `--num-io-threads` and other runtime settings), see [Configuration options](/influxdb3/enterprise/reference/config-options/#global-configuration-options).
## Quick-start mode
For development, testing, and home use, you can start {{< product-name >}} by running `influxdb3` without the `serve` subcommand or any configuration parameters. The system automatically generates required values:
- **`node-id`**: `{hostname}-node` (fallback: `primary-node`)
- **`cluster-id`**: `{hostname}-cluster` (fallback: `primary-cluster`)
- **`object-store`**: `file`
- **`data-dir`**: `~/.influxdb`
The system displays warning messages showing the auto-generated identifiers:
```
Using auto-generated node id: mylaptop-node. For production deployments, explicitly set --node-id
Using auto-generated cluster id: mylaptop-cluster. For production deployments, explicitly set --cluster-id
```
> \[!Important]
>
> #### Production deployments
>
> Quick-start mode is designed for development and testing environments.
> For production deployments, use explicit configuration with the `serve` subcommand
> and specify all required parameters as shown in the [Examples](#examples) below.
**Configuration precedence**: CLI flags > environment variables > auto-generated defaults
For more information about quick-start mode, see [Get started](/influxdb3/enterprise/get-started/setup/#quick-start-mode-development).
## Examples
In the examples below, replace the following:
- {{% code-placeholder-key %}}`my-host-01`{{% /code-placeholder-key %}}:
a unique identifier for your {{< product-name >}} server.
a unique identifier for your {{< product-name >}} server.
- {{% code-placeholder-key %}}`my-cluster-01`{{% /code-placeholder-key %}}:
a unique identifier for your {{< product-name >}} cluster.
The value you use must be different from `--node-id` values in the cluster.
a unique identifier for your {{< product-name >}} cluster.
The value you use must be different from `--node-id` values in the cluster.
{{% code-placeholders "my-host-01|my-cluster-01" %}}
### Quick-start influxdb3 server
<!--pytest.mark.skip-->
```bash
# Zero-config startup
influxdb3
# Override specific defaults
influxdb3 --object-store memory
# Use environment variables to override defaults
INFLUXDB3_NODE_IDENTIFIER_PREFIX=my-node influxdb3
```
### Run the InfluxDB 3 server
<!--pytest.mark.skip-->
@ -111,7 +153,7 @@ influxdb3 serve \
--verbose
```
### Run {{% product-name %}} with debug logging using LOG_FILTER
### Run {{% product-name %}} with debug logging using LOG\_FILTER
<!--pytest.mark.skip-->
@ -123,4 +165,4 @@ LOG_FILTER=debug influxdb3 serve \
--cluster-id my-cluster-01
```
{{% /code-placeholders %}}
{{% /code-placeholders %}}
View File
@ -18,23 +18,23 @@ The `influxdb3 serve` command starts the {{< product-name >}} server.
<!--pytest.mark.skip-->
```bash
influxdb3 serve [OPTIONS] \
--node-id <NODE_IDENTIFIER_PREFIX> \
--cluster-id <CLUSTER_IDENTIFIER_PREFIX>
influxdb3 serve [OPTIONS]
```
## Required parameters
## Required Parameters
- **node-id**: A unique identifier for your server instance. Must be unique for any hosts sharing the same object store.
- **cluster-id**: A unique identifier for your cluster. Must be different from any node-id in your cluster.
- **object-store**: Determines where time series data is stored.
- Other object store parameters depending on the selected `object-store` type.
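For example, a minimal sketch of a `serve` command that sets these required parameters for a single node with file-based object storage (identifier and path values are placeholders):
<!--pytest.mark.skip-->
```bash
influxdb3 serve \
  --node-id my-host-01 \
  --cluster-id my-cluster-01 \
  --object-store file \
  --data-dir ~/.influxdb
```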
> [!NOTE]
> \[!NOTE]
> `--node-id` and `--cluster-id` support alphanumeric strings with optional hyphens.
> [!Important]
> \[!Important]
>
> #### Global configuration options
>
> Some configuration options (like [`--num-io-threads`](/influxdb3/enterprise/reference/config-options/#num-io-threads)) are **global** and must be specified **before** the `serve` command:
>
> ```bash
@ -45,124 +45,120 @@ influxdb3 serve [OPTIONS] \
## Options
| Option | | Description |
| :--------------- | :--------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------ |
| | `--admin-token-recovery-http-bind` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#admin-token-recovery-http-bind)_ |
| | `--admin-token-recovery-tcp-listener-file-path` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#admin-token-recovery-tcp-listener-file-path)_ |
| | `--admin-token-file` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#admin-token-file)_ |
| | `--aws-access-key-id` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-access-key-id)_ |
| | `--aws-allow-http` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-allow-http)_ |
| | `--aws-credentials-file` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-credentials-file)_ |
| | `--aws-default-region` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-default-region)_ |
| | `--aws-endpoint` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-endpoint)_ |
| | `--aws-secret-access-key` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-secret-access-key)_ |
| | `--aws-session-token` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-session-token)_ |
| | `--aws-skip-signature` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-skip-signature)_ |
| | `--azure-allow-http` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-allow-http)_ |
| | `--azure-endpoint` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-endpoint)_ |
| | `--azure-storage-access-key` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-storage-access-key)_ |
| | `--azure-storage-account` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-storage-account)_ |
| | `--bucket` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#bucket)_ |
| | `--catalog-sync-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#catalog-sync-interval)_ |
| {{< req "\*" >}} | `--cluster-id` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#cluster-id)_ |
| | `--compaction-check-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-check-interval)_ |
| | `--compaction-cleanup-wait` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-cleanup-wait)_ |
| | `--compaction-gen2-duration` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-gen2-duration)_ |
| | `--compaction-max-num-files-per-plan` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-max-num-files-per-plan)_ |
| | `--compaction-multipliers` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-multipliers)_ |
| | `--compaction-row-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-row-limit)_ |
| | `--data-dir` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#data-dir)_ |
| | `--datafusion-config` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-config)_ |
| | `--datafusion-max-parquet-fanout` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-max-parquet-fanout)_ |
| | `--datafusion-num-threads` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-num-threads)_ |
| | `--datafusion-runtime-disable-lifo-slot` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-disable-lifo-slot)_ |
| | `--datafusion-runtime-event-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-event-interval)_ |
| | `--datafusion-runtime-global-queue-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-global-queue-interval)_ |
| | `--datafusion-runtime-max-blocking-threads` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-max-blocking-threads)_ |
| | `--datafusion-runtime-max-io-events-per-tick` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-max-io-events-per-tick)_ |
| | `--datafusion-runtime-thread-keep-alive` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-thread-keep-alive)_ |
| | `--datafusion-runtime-thread-priority` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-thread-priority)_ |
| | `--datafusion-runtime-type` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-type)_ |
| | `--datafusion-use-cached-parquet-loader` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-use-cached-parquet-loader)_ |
| | `--delete-grace-period` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#delete-grace-period)_ |
| | `--disable-authz` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#disable-authz)_ |
| | `--disable-parquet-mem-cache` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#disable-parquet-mem-cache)_ |
| | `--distinct-cache-eviction-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#distinct-cache-eviction-interval)_ |
| | `--distinct-value-cache-disable-from-history` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#distinct-value-cache-disable-from-history)_ |
| | `--exec-mem-pool-bytes` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#exec-mem-pool-bytes)_ |
| | `--force-snapshot-mem-threshold` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#force-snapshot-mem-threshold)_ |
| | `--gen1-duration` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#gen1-duration)_ |
| | `--gen1-lookback-duration` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#gen1-lookback-duration)_ |
| | `--google-service-account` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#google-service-account)_ |
| | `--hard-delete-default-duration` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#hard-delete-default-duration)_ |
| `-h` | `--help` | Print help information |
| | `--help-all` | Print detailed help information |
| | `--http-bind` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#http-bind)_ |
| | `--last-cache-eviction-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#last-cache-eviction-interval)_ |
| | `--last-value-cache-disable-from-history` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#last-value-cache-disable-from-history)_ |
| | `--license-email` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#license-email)_ |
| | `--license-file` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#license-file)_ |
| | `--log-destination` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#log-destination)_ |
| | `--log-filter` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#log-filter)_ |
| | `--log-format` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#log-format)_ |
| | `--max-http-request-size` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#max-http-request-size)_ |
| | `--mode` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#mode)_ |
| {{< req "\*" >}} | `--node-id` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#node-id)_ |
| | `--node-id-from-env` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#node-id-from-env)_ |
| | `--num-cores` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#num-cores)_ |
| | `--num-datafusion-threads` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#num-datafusion-threads)_ |
| | `--num-database-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#num-database-limit)_ |
| | `--num-table-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#num-table-limit)_ |
| | `--num-total-columns-per-table-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#num-total-columns-per-table-limit)_ |
| | `--object-store` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store)_ |
| | `--object-store-cache-endpoint` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-cache-endpoint)_ |
| | `--object-store-connection-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-connection-limit)_ |
| | `--object-store-http2-max-frame-size` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-http2-max-frame-size)_ |
| | `--object-store-http2-only` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-http2-only)_ |
| | `--object-store-max-retries` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-max-retries)_ |
| | `--object-store-retry-timeout` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-retry-timeout)_ |
| | `--package-manager` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#package-manager)_ |
| | `--parquet-mem-cache-prune-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#parquet-mem-cache-prune-interval)_ |
| | `--parquet-mem-cache-prune-percentage` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#parquet-mem-cache-prune-percentage)_ |
| | `--parquet-mem-cache-query-path-duration` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#parquet-mem-cache-query-path-duration)_ |
| | `--parquet-mem-cache-size` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#parquet-mem-cache-size)_ |
| | `--permission-tokens-file` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#permission-tokens-file)_ |
| | `--plugin-dir` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#plugin-dir)_ |
| | `--preemptive-cache-age` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#preemptive-cache-age)_ |
| | `--query-file-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#query-file-limit)_ |
| | `--query-log-size` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#query-log-size)_ |
| | `--replication-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#replication-interval)_ |
| | `--retention-check-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#retention-check-interval)_ |
| | `--snapshotted-wal-files-to-keep` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#snapshotted-wal-files-to-keep)_ |
| | `--table-index-cache-concurrency-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#table-index-cache-concurrency-limit)_ |
| | `--table-index-cache-max-entries` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#table-index-cache-max-entries)_ |
| | `--tcp-listener-file-path` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#tcp-listener-file-path)_ |
| | `--telemetry-disable-upload` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#telemetry-disable-upload)_ |
| | `--telemetry-endpoint` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#telemetry-endpoint)_ |
| | `--tls-cert` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#tls-cert)_ |
| | `--tls-key` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#tls-key)_ |
| | `--tls-minimum-version` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#tls-minimum-version)_ |
| | `--traces-exporter` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter)_ |
| | `--traces-exporter-jaeger-agent-host` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter-jaeger-agent-host)_ |
| | `--traces-exporter-jaeger-agent-port` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter-jaeger-agent-port)_ |
| | `--traces-exporter-jaeger-service-name` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter-jaeger-service-name)_ |
| | `--traces-exporter-jaeger-trace-context-header-name` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter-jaeger-trace-context-header-name)_ |
| | `--traces-jaeger-debug-name` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-jaeger-debug-name)_ |
| | `--traces-jaeger-max-msgs-per-second` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-jaeger-max-msgs-per-second)_ |
| | `--traces-jaeger-tags` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-jaeger-tags)_ |
| | `--use-pacha-tree` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#use-pacha-tree)_ |
| | `--virtual-env-location` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#virtual-env-location)_ |
| | `--wait-for-running-ingestor` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#wait-for-running-ingestor)_ |
| | `--wal-flush-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-flush-interval)_ |
| | `--wal-max-write-buffer-size` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-max-write-buffer-size)_ |
| | `--wal-replay-concurrency-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-replay-concurrency-limit)_ |
| | `--wal-replay-fail-on-error` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-replay-fail-on-error)_ |
| | `--wal-snapshot-size` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-snapshot-size)_ |
| | `--without-auth` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#without-auth)_ |
{{< caption >}}
{{< req text="\* Required options" >}}
{{< /caption >}}
| Option | | Description |
| :----- | :--------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------ |
| | `--admin-token-recovery-http-bind` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#admin-token-recovery-http-bind)* |
| | `--admin-token-recovery-tcp-listener-file-path` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#admin-token-recovery-tcp-listener-file-path)* |
| | `--admin-token-file` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#admin-token-file)* |
| | `--aws-access-key-id` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-access-key-id)* |
| | `--aws-allow-http` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-allow-http)* |
| | `--aws-credentials-file` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-credentials-file)* |
| | `--aws-default-region` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-default-region)* |
| | `--aws-endpoint` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-endpoint)* |
| | `--aws-secret-access-key` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-secret-access-key)* |
| | `--aws-session-token` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-session-token)* |
| | `--aws-skip-signature` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-skip-signature)* |
| | `--azure-allow-http` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-allow-http)* |
| | `--azure-endpoint` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-endpoint)* |
| | `--azure-storage-access-key` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-storage-access-key)* |
| | `--azure-storage-account` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-storage-account)* |
| | `--bucket` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#bucket)* |
| | `--catalog-sync-interval` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#catalog-sync-interval)* |
| | `--cluster-id` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#cluster-id)* |
| | `--compaction-check-interval` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-check-interval)* |
| | `--compaction-cleanup-wait` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-cleanup-wait)* |
| | `--compaction-gen2-duration` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-gen2-duration)* |
| | `--compaction-max-num-files-per-plan` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-max-num-files-per-plan)* |
| | `--compaction-multipliers` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-multipliers)* |
| | `--compaction-row-limit` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-row-limit)* |
| | `--data-dir` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#data-dir)* |
| | `--datafusion-config` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-config)* |
| | `--datafusion-max-parquet-fanout` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-max-parquet-fanout)* |
| | `--datafusion-num-threads` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-num-threads)* |
| | `--datafusion-runtime-disable-lifo-slot` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-disable-lifo-slot)* |
| | `--datafusion-runtime-event-interval` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-event-interval)* |
| | `--datafusion-runtime-global-queue-interval` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-global-queue-interval)* |
| | `--datafusion-runtime-max-blocking-threads` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-max-blocking-threads)* |
| | `--datafusion-runtime-max-io-events-per-tick` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-max-io-events-per-tick)* |
| | `--datafusion-runtime-thread-keep-alive` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-thread-keep-alive)* |
| | `--datafusion-runtime-thread-priority` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-thread-priority)* |
| | `--datafusion-runtime-type` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-type)* |
| | `--datafusion-use-cached-parquet-loader` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-use-cached-parquet-loader)* |
| | `--delete-grace-period` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#delete-grace-period)* |
| | `--disable-authz` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#disable-authz)* |
| | `--disable-parquet-mem-cache` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#disable-parquet-mem-cache)* |
| | `--distinct-cache-eviction-interval` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#distinct-cache-eviction-interval)* |
| | `--distinct-value-cache-disable-from-history` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#distinct-value-cache-disable-from-history)* |
| | `--exec-mem-pool-bytes` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#exec-mem-pool-bytes)* |
| | `--force-snapshot-mem-threshold` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#force-snapshot-mem-threshold)* |
| | `--gen1-duration` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#gen1-duration)* |
| | `--gen1-lookback-duration` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#gen1-lookback-duration)* |
| | `--google-service-account` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#google-service-account)* |
| | `--hard-delete-default-duration` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#hard-delete-default-duration)* |
| `-h` | `--help` | Print help information |
| | `--help-all` | Print detailed help information |
| | `--http-bind` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#http-bind)* |
| | `--last-cache-eviction-interval` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#last-cache-eviction-interval)* |
| | `--last-value-cache-disable-from-history` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#last-value-cache-disable-from-history)* |
| | `--license-email` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#license-email)* |
| | `--license-file` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#license-file)* |
| | `--log-destination` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#log-destination)* |
| | `--log-filter` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#log-filter)* |
| | `--log-format` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#log-format)* |
| | `--max-http-request-size` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#max-http-request-size)* |
| | `--mode` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#mode)* |
| | `--node-id` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#node-id)* |
| | `--node-id-from-env` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#node-id-from-env)* |
| | `--num-cores` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#num-cores)* |
| | `--num-datafusion-threads` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#num-datafusion-threads)* |
| | `--num-database-limit` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#num-database-limit)* |
| | `--num-table-limit` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#num-table-limit)* |
| | `--num-total-columns-per-table-limit` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#num-total-columns-per-table-limit)* |
| | `--object-store` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store)* |
| | `--object-store-cache-endpoint` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-cache-endpoint)* |
| | `--object-store-connection-limit` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-connection-limit)* |
| | `--object-store-http2-max-frame-size` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-http2-max-frame-size)* |
| | `--object-store-http2-only` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-http2-only)* |
| | `--object-store-max-retries` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-max-retries)* |
| | `--object-store-retry-timeout` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-retry-timeout)* |
| | `--package-manager` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#package-manager)* |
| | `--parquet-mem-cache-prune-interval` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#parquet-mem-cache-prune-interval)* |
| | `--parquet-mem-cache-prune-percentage` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#parquet-mem-cache-prune-percentage)* |
| | `--parquet-mem-cache-query-path-duration` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#parquet-mem-cache-query-path-duration)* |
| | `--parquet-mem-cache-size` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#parquet-mem-cache-size)* |
| | `--permission-tokens-file` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#permission-tokens-file)* |
| | `--plugin-dir` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#plugin-dir)* |
| | `--preemptive-cache-age` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#preemptive-cache-age)* |
| | `--query-file-limit` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#query-file-limit)* |
| | `--query-log-size` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#query-log-size)* |
| | `--replication-interval` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#replication-interval)* |
| | `--retention-check-interval` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#retention-check-interval)* |
| | `--snapshotted-wal-files-to-keep` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#snapshotted-wal-files-to-keep)* |
| | `--table-index-cache-concurrency-limit` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#table-index-cache-concurrency-limit)* |
| | `--table-index-cache-max-entries` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#table-index-cache-max-entries)* |
| | `--tcp-listener-file-path` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#tcp-listener-file-path)* |
| | `--telemetry-disable-upload` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#telemetry-disable-upload)* |
| | `--telemetry-endpoint` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#telemetry-endpoint)* |
| | `--tls-cert` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#tls-cert)* |
| | `--tls-key` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#tls-key)* |
| | `--tls-minimum-version` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#tls-minimum-version)* |
| | `--traces-exporter` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter)* |
| | `--traces-exporter-jaeger-agent-host` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter-jaeger-agent-host)* |
| | `--traces-exporter-jaeger-agent-port` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter-jaeger-agent-port)* |
| | `--traces-exporter-jaeger-service-name` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter-jaeger-service-name)* |
| | `--traces-exporter-jaeger-trace-context-header-name` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter-jaeger-trace-context-header-name)* |
| | `--traces-jaeger-debug-name` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-jaeger-debug-name)* |
| | `--traces-jaeger-max-msgs-per-second` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-jaeger-max-msgs-per-second)* |
| | `--traces-jaeger-tags` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-jaeger-tags)* |
| | `--use-pacha-tree` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#use-pacha-tree)* |
| | `--virtual-env-location` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#virtual-env-location)* |
| | `--wait-for-running-ingestor` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#wait-for-running-ingestor)* |
| | `--wal-flush-interval` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-flush-interval)* |
| | `--wal-max-write-buffer-size` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-max-write-buffer-size)* |
| | `--wal-replay-concurrency-limit` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-replay-concurrency-limit)* |
| | `--wal-replay-fail-on-error` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-replay-fail-on-error)* |
| | `--wal-snapshot-size` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-snapshot-size)* |
| | `--without-auth` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#without-auth)* |
### Option environment variables
@ -170,19 +166,62 @@ You can use environment variables to define most `influxdb3 serve` options.
For more information, see
[Configuration options](/influxdb3/enterprise/reference/config-options/).
## Quick-start mode
For development, testing, and home use, you can start {{< product-name >}} by running `influxdb3` without the `serve` subcommand or any configuration parameters. The system automatically generates required values:
- **`node-id`**: `{hostname}-node` (fallback: `primary-node`)
- **`cluster-id`**: `{hostname}-cluster` (fallback: `primary-cluster`)
- **`object-store`**: `file`
- **`data-dir`**: `~/.influxdb`
The system displays warning messages showing the auto-generated identifiers:
```
Using auto-generated node id: mylaptop-node. For production deployments, explicitly set --node-id
Using auto-generated cluster id: mylaptop-cluster. For production deployments, explicitly set --cluster-id
```
### Quick-start examples
<!--pytest.mark.skip-->
```bash
# Zero-config startup
influxdb3
# Override specific defaults
influxdb3 --object-store memory
# Use environment variables to override defaults
INFLUXDB3_NODE_IDENTIFIER_PREFIX=my-node influxdb3
```
> \[!Important]
>
> #### Production deployments
>
> Quick-start mode is designed for development and testing environments.
> For production deployments, use explicit configuration with the `serve` subcommand
> and specify all required parameters as shown in the [Examples](#examples) below.
**Configuration precedence**: CLI flags > environment variables > auto-generated defaults
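For example, in the following hypothetical invocation the `--node-id` flag wins over `INFLUXDB3_NODE_IDENTIFIER_PREFIX`, and both win over the auto-generated `{hostname}-node` default (this assumes `--node-id` is accepted in quick-start mode, as the startup warning suggests):
<!--pytest.mark.skip-->
```bash
# The flag value (cli-node) takes precedence over the environment variable (env-node).
INFLUXDB3_NODE_IDENTIFIER_PREFIX=env-node influxdb3 --node-id cli-node
```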
For more information about quick-start mode, see [Get started](/influxdb3/enterprise/get-started/setup/#quick-start-mode-development).
## Examples
- [Run the InfluxDB 3 server](#run-the-influxdb-3-server)
- [Run the InfluxDB 3 server with extra verbose logging](#run-the-influxdb-3-server-with-extra-verbose-logging)
- [Run InfluxDB 3 with debug logging using LOG_FILTER](#run-influxdb-3-with-debug-logging-using-log_filter)
- [Run InfluxDB 3 with debug logging using LOG\_FILTER](#run-influxdb-3-with-debug-logging-using-log_filter)
In the examples below, replace the following:
- {{% code-placeholder-key %}}`my-host-01`{{% /code-placeholder-key %}}:
a unique string that identifies your {{< product-name >}} server.
a unique string that identifies your {{< product-name >}} server.
- {{% code-placeholder-key %}}`my-cluster-01`{{% /code-placeholder-key %}}:
a unique string that identifies your {{< product-name >}} cluster.
The value you use must be different from `--node-id` values in the cluster.
a unique string that identifies your {{< product-name >}} cluster.
The value you use must be different from `--node-id` values in the cluster.
{{% code-placeholders "my-host-01|my-cluster-01" %}}
@ -237,7 +276,7 @@ influxdb3 serve \
--verbose
```
### Run InfluxDB 3 with debug logging using LOG_FILTER
### Run InfluxDB 3 with debug logging using LOG\_FILTER
<!--pytest.mark.skip-->
@ -251,16 +290,15 @@ LOG_FILTER=debug influxdb3 serve \
{{% /code-placeholders %}}
## Troubleshooting
### Common issues
- **Error: "cluster-id cannot match any node-id in the cluster"**
- **Error: "cluster-id cannot match any node-id in the cluster"**\
Ensure your `--cluster-id` value is different from all `--node-id` values in your cluster.
- **Error: "Failed to connect to object store"**
- **Error: "Failed to connect to object store"**\
Verify your `--object-store` setting and ensure all required parameters for that storage type are provided.
- **Permission errors when using S3, Google Cloud, or Azure storage**
- **Permission errors when using S3, Google Cloud, or Azure storage**\
Check that your authentication credentials are correct and have sufficient permissions.
View File
@ -0,0 +1,16 @@
---
title: influxdb3 show plugins
description: >
The `influxdb3 show plugins` command lists loaded Processing Engine plugins in your
InfluxDB 3 Enterprise server.
menu:
influxdb3_enterprise:
parent: influxdb3 show
name: influxdb3 show plugins
weight: 350
source: /shared/influxdb3-cli/show/plugins.md
---
<!--
// SOURCE content/shared/influxdb3-cli/show/plugins.md
-->
View File
@ -0,0 +1,15 @@
---
title: influxdb3 update trigger
description: >
The `influxdb3 update trigger` command updates an existing trigger.
menu:
influxdb3_enterprise:
parent: influxdb3 update
name: influxdb3 update trigger
weight: 401
source: /shared/influxdb3-cli/update/trigger.md
---
<!--
// SOURCE content/shared/influxdb3-cli/update/trigger.md
-->
View File
@ -1,5 +1,6 @@
<!--Shortcode-->
{{% product-name %}} stores data related to the database server, queries, and tables in _system tables_.
{{% product-name %}} stores data related to the database server, queries, and tables in *system tables*.
You can query the system tables for information about your running server, databases, and table schemas.
## Query system tables
@ -8,12 +9,13 @@ You can query the system tables for information about your running server, datab
- [Examples](#examples)
- [Show tables](#show-tables)
- [View column information for a table](#view-column-information-for-a-table)
- [Recently executed queries](#recently-executed-queries)
- [Query plugin files](#query-plugin-files)
### Use the HTTP query API
### Use the HTTP query API
Use the HTTP API `/api/v3/query_sql` endpoint to retrieve system information about your database server and table schemas in {{% product-name %}}.
To execute a query, send a `GET` or `POST` request to the endpoint:
- `GET`: Pass parameters in the URL query string (for simple queries)
@ -21,16 +23,17 @@ To execute a query, send a `GET` or `POST` request to the endpoint:
Include the following parameters:
- `q`: _({{< req >}})_ The SQL query to execute.
- `db`: _({{< req >}})_ The database to execute the query against.
- `params`: A JSON object containing parameters to be used in a _parameterized query_.
- `q`: *({{< req >}})* The SQL query to execute.
- `db`: *({{< req >}})* The database to execute the query against.
- `params`: A JSON object containing parameters to be used in a *parameterized query*.
- `format`: The format of the response (`json`, `jsonl`, `csv`, `pretty`, or `parquet`).
JSONL (`jsonl`) is preferred because it streams results back to the client.
`pretty` is for human-readable output. Default is `json`.
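For instance, a minimal sketch of a `GET` request that passes the parameters in the URL query string (`DATABASE_NAME` and `AUTH_TOKEN` are placeholders, and `--data-urlencode` handles URL encoding of the query):

```bash
curl --get "http://localhost:8181/api/v3/query_sql" \
  --header "Authorization: Bearer AUTH_TOKEN" \
  --data-urlencode "db=DATABASE_NAME" \
  --data-urlencode "q=SHOW TABLES" \
  --data-urlencode "format=jsonl"
```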
#### Examples
> [!Note]
> \[!Note]
>
> #### system\_ sample data
>
> In examples, tables with `"table_name":"system_` are user-created tables for CPU, memory, disk,
@ -88,8 +91,8 @@ A table has one of the following `table_schema` values:
The following query sends a `POST` request that executes an SQL query to
retrieve information about columns in the sample `system_swap` table schema:
_Note: when you send a query in JSON, you must escape single quotes
that surround field names._
*Note: when you send a query in JSON, you must escape single quotes
that surround field names.*
```bash
curl "http://localhost:8181/api/v3/query_sql" \
@ -134,3 +137,58 @@ The output is similar to the following:
{"id":"cdd63409-1822-4e65-8e3a-d274d553dbb3","phase":"success","issue_time":"2025-01-20T17:01:40.690067","query_type":"sql","query_text":"show tables","partitions":0,"parquet_files":0,"plan_duration":"PT0.032689S","permit_duration":"PT0.000202S","execute_duration":"PT0.000223S","end2end_duration":"PT0.033115S","compute_duration":"P0D","max_memory":0,"success":true,"running":false,"cancelled":false}
{"id":"47f8d312-5e75-4db2-837a-6fcf94c09927","phase":"success","issue_time":"2025-01-20T17:02:32.627782","query_type":"sql","query_text":"show tables","partitions":0,"parquet_files":0,"plan_duration":"PT0.000583S","permit_duration":"PT0.000015S","execute_duration":"PT0.000063S","end2end_duration":"PT0.000662S","compute_duration":"P0D","max_memory":0,"success":true,"running":false,"cancelled":false}
```
#### Query plugin files
To view loaded Processing Engine plugins, query the `plugin_files` system table in the `_internal` database.
The `system.plugin_files` table provides information about plugin files loaded by the Processing Engine:
**Columns:**
- `plugin_name` (String): Name of a trigger using this plugin
- `file_name` (String): Plugin filename
- `file_path` (String): Full server path to the plugin file
- `size_bytes` (Int64): File size in bytes
- `last_modified` (Int64): Last modification timestamp (milliseconds since epoch)
```bash
curl "http://localhost:8181/api/v3/query_sql" \
--header "Authorization: Bearer AUTH_TOKEN" \
--json '{
"db": "_internal",
"q": "SELECT * FROM system.plugin_files",
"format": "jsonl"
}'
```
The output is similar to the following:
```jsonl
{"plugin_name":"my_trigger","file_name":"my_plugin.py","file_path":"/path/to/plugins/my_plugin.py","size_bytes":2048,"last_modified":1704067200000}
{"plugin_name":"scheduled_trigger","file_name":"scheduler.py","file_path":"/path/to/plugins/scheduler.py","size_bytes":4096,"last_modified":1704153600000}
```
**Filter plugins by trigger name:**
```bash
curl "http://localhost:8181/api/v3/query_sql" \
--header "Authorization: Bearer AUTH_TOKEN" \
--json '{
"db": "_internal",
"q": "SELECT * FROM system.plugin_files WHERE plugin_name = '"'my_trigger'"'",
"format": "jsonl"
}'
```
**Find plugins by file pattern:**
```bash
curl "http://localhost:8181/api/v3/query_sql" \
--header "Authorization: Bearer AUTH_TOKEN" \
--json '{
"db": "_internal",
"q": "SELECT * FROM system.plugin_files WHERE file_name LIKE '"'%scheduler%'"'",
"format": "jsonl"
}'
```
File diff suppressed because it is too large
View File
@ -1,4 +1,3 @@
The `influxdb3 create trigger` command creates a new trigger for the
processing engine.
@ -17,30 +16,31 @@ influxdb3 create trigger [OPTIONS] \
## Arguments
- **TRIGGER_NAME**: A name for the new trigger.
- **TRIGGER\_NAME**: A name for the new trigger.
## Options
| Option | | Description |
| :----- | :------------------ | :------------------------------------------------------------------------------------------------------- |
| `-H` | `--host` | Host URL of the running {{< product-name >}} server (default is `http://127.0.0.1:8181`) |
| `-d` | `--database` | _({{< req >}})_ Name of the database to operate on |
| | `--token` | _({{< req >}})_ Authentication token |
| | `--plugin-filename` | _({{< req >}})_ Name of the file, stored in the server's `plugin-dir`, that contains the Python plugin code to run |
| | `--trigger-spec` | Trigger specification: `table:<TABLE_NAME>`, `all_tables`, `every:<DURATION>`, `cron:<EXPRESSION>`, or `request:<REQUEST_PATH>` |
| | `--trigger-arguments` | Additional arguments for the trigger, in the format `key=value`, separated by commas (for example, `arg1=val1,arg2=val2`) |
| | `--disabled` | Create the trigger in disabled state |
| | `--error-behavior` | Error handling behavior: `log`, `retry`, or `disable` |
| | `--run-asynchronous` | Run the trigger asynchronously, allowing multiple triggers to run simultaneously (default is synchronous) |
{{% show-in "enterprise" %}}| | `--node-spec` | Which node(s) the trigger should be configured on. Two value formats are supported: `all` (default) - applies to all nodes, or `nodes:<node-id>[,<node-id>..]` - applies only to specified comma-separated list of nodes |{{% /show-in %}}
| | `--tls-ca` | Path to a custom TLS certificate authority (for testing or self-signed certificates) |
| `-h` | `--help` | Print help information |
| | `--help-all` | Print detailed help information |
| Option | | Description |
| :----- | :-------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-H`   | `--host`              | Host URL of the running {{< product-name >}} server (default is `http://127.0.0.1:8181`) |
| `-d`   | `--database`          | *({{< req >}})* Name of the database to operate on |
|        | `--token`             | *({{< req >}})* Authentication token |
| `-p`   | `--path`              | Path to plugin file or directory (single `.py` file or directory containing `__init__.py` for multifile plugins). Can be local path (with `--upload`) or server path. Replaces `--plugin-filename`. |
|        | `--upload`            | Upload local plugin files to the server. Requires admin token. Use with `--path` to specify local files. |
|        | `--plugin-filename`   | *(Deprecated: use `--path` instead)* Name of the file, stored in the server's `plugin-dir`, that contains the Python plugin code to run |
|        | `--trigger-spec`      | Trigger specification: `table:<TABLE_NAME>`, `all_tables`, `every:<DURATION>`, `cron:<EXPRESSION>`, or `request:<REQUEST_PATH>` |
|        | `--trigger-arguments` | Additional arguments for the trigger, in the format `key=value`, separated by commas (for example, `arg1=val1,arg2=val2`) |
|        | `--disabled`          | Create the trigger in disabled state |
|        | `--error-behavior`    | Error handling behavior: `log`, `retry`, or `disable` |
|        | `--run-asynchronous`  | Run the trigger asynchronously, allowing multiple triggers to run simultaneously (default is synchronous) |
{{% show-in "enterprise" %}}|        | `--node-spec`         | Which node(s) the trigger should be configured on. Two value formats are supported: `all` (default) - applies to all nodes, or `nodes:<node-id>[,<node-id>..]` - applies only to specified comma-separated list of nodes |{{% /show-in %}}
|        | `--tls-ca`            | Path to a custom TLS certificate authority (for testing or self-signed certificates) |
| `-h`   | `--help`              | Print help information |
|        | `--help-all`          | Print detailed help information |
If you want to use a plugin from the [Plugin Library](https://github.com/influxdata/influxdb3_plugins) repo, use the URL path with `gh:` specified as the prefix.
For example, to use the [System Metrics](https://github.com/influxdata/influxdb3_plugins/blob/main/influxdata/system_metrics/system_metrics.py) plugin, the plugin filename is `gh:influxdata/system_metrics/system_metrics.py`.
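For example, a minimal sketch (with placeholder database, token, and trigger names) that creates a scheduled trigger from that library plugin:

```bash
influxdb3 create trigger \
  --database DATABASE_NAME \
  --token AUTH_TOKEN \
  --plugin-filename "gh:influxdata/system_metrics/system_metrics.py" \
  --trigger-spec "every:10s" \
  system_metrics_trigger
```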
### Option environment variables
You can use the following environment variables to set command options:
@ -59,11 +59,13 @@ The following examples show how to use the `influxdb3 create trigger` command to
- [Create a trigger for all tables](#create-a-trigger-for-all-tables)
- [Create a trigger with a schedule](#create-a-trigger-with-a-schedule)
- [Create a trigger for HTTP requests](#create-a-trigger-for-http-requests)
- [Create a trigger with a multifile plugin](#create-a-trigger-with-a-multifile-plugin)
- [Upload and create a trigger with a local plugin](#upload-and-create-a-trigger-with-a-local-plugin)
- [Create a trigger with additional arguments](#create-a-trigger-with-additional-arguments)
- [Create a disabled trigger](#create-a-disabled-trigger)
- [Create a trigger with error handling](#create-a-trigger-with-error-handling)
---
***
Replace the following placeholders with your values:
@ -71,11 +73,11 @@ Replace the following placeholders with your values:
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: Authentication token
- {{% code-placeholder-key %}}`PLUGIN_FILENAME`{{% /code-placeholder-key %}}: Python plugin filename
- {{% code-placeholder-key %}}`TRIGGER_NAME`{{% /code-placeholder-key %}}:
Name of the trigger to create
Name of the trigger to create
- {{% code-placeholder-key %}}`TABLE_NAME`{{% /code-placeholder-key %}}:
Name of the table to trigger on
Name of the table to trigger on
{{% code-placeholders "(DATABASE|TRIGGER)_NAME|AUTH_TOKEN|TABLE_NAME" %}}
{{% code-placeholders "(DATABASE|TRIGGER)\_NAME|AUTH\_TOKEN|TABLE\_NAME" %}}
### Create a trigger for a specific table
@ -133,12 +135,13 @@ second minute hour day_of_month month day_of_week
```
Fields:
- **second**: 0-59
- **minute**: 0-59
- **hour**: 0-23
- **day_of_month**: 1-31
- **day\_of\_month**: 1-31
- **month**: 1-12 or JAN-DEC
- **day_of_week**: 0-7 (0 or 7 is Sunday) or SUN-SAT
- **day\_of\_week**: 0-7 (0 or 7 is Sunday) or SUN-SAT
Example: Run at 6:00 AM every weekday (Monday-Friday):
@ -168,6 +171,66 @@ influxdb3 create trigger \
`PLUGIN_FILENAME` must implement the [HTTP request plugin](/influxdb3/version/plugins/#create-an-http-request-plugin) interface.
### Create a trigger with a multifile plugin
Create a trigger using a plugin organized in multiple files. The plugin directory must contain an `__init__.py` file.
<!--pytest.mark.skip-->
```bash
influxdb3 create trigger \
--database DATABASE_NAME \
--token AUTH_TOKEN \
--path "my_complex_plugin" \
--trigger-spec "every:5m" \
TRIGGER_NAME
```
The `--path` points to a directory in the server's `plugin-dir` with the following structure:
```
my_complex_plugin/
├── __init__.py # Required entry point
├── processors.py # Supporting modules
└── utils.py
```
For more information about multifile plugins, see [Create your plugin file](/influxdb3/version/plugins/#create-your-plugin-file).
### Upload and create a trigger with a local plugin
Upload plugin files from your local machine and create a trigger in a single command. Requires admin token.
<!--pytest.mark.skip-->
```bash
# Upload single-file plugin
influxdb3 create trigger \
--database DATABASE_NAME \
--token AUTH_TOKEN \
--path "/local/path/to/plugin.py" \
--upload \
--trigger-spec "every:1m" \
TRIGGER_NAME
# Upload multifile plugin directory
influxdb3 create trigger \
--database DATABASE_NAME \
--token AUTH_TOKEN \
--path "/local/path/to/plugin-dir" \
--upload \
--trigger-spec "table:TABLE_NAME" \
TRIGGER_NAME
```
The `--upload` flag transfers local files to the server's plugin directory. This is useful for:
- Local plugin development and testing
- Deploying plugins without SSH access
- Automating plugin deployment
For more information, see [Upload plugins from local machine](/influxdb3/version/plugins/#upload-plugins-from-local-machine).
### Create a trigger with additional arguments
```bash
@ -182,7 +245,7 @@ influxdb3 create trigger \
### Create a disabled trigger
Create a trigger in a disabled state.
Create a trigger in a disabled state.
<!--pytest.mark.skip-->

View File

@ -1,4 +1,3 @@
The `influxdb3 show` command lists resources in your {{< product-name >}} server.
## Usage
@ -11,13 +10,14 @@ influxdb3 show <SUBCOMMAND>
## Subcommands
| Subcommand | Description |
| :---------------------------------------------------------------------- | :--------------------------------------------- |
| [databases](/influxdb3/version/reference/cli/influxdb3/show/databases/) | List database |
{{% show-in "enterprise" %}}| [license](/influxdb3/version/reference/cli/influxdb3/show/license/) | Display license information |{{% /show-in %}}
| [system](/influxdb3/version/reference/cli/influxdb3/show/system/) | Display system table data |
| [tokens](/influxdb3/version/reference/cli/influxdb3/show/tokens/) | List authentication tokens |
| help | Print command help or the help of a subcommand |
| Subcommand                                                               | Description                                    |
| :----------------------------------------------------------------------- | :--------------------------------------------- |
| [databases](/influxdb3/version/reference/cli/influxdb3/show/databases/)  | List databases                                 |
{{% show-in "enterprise" %}}| [license](/influxdb3/version/reference/cli/influxdb3/show/license/) | Display license information |{{% /show-in %}}
| [plugins](/influxdb3/version/reference/cli/influxdb3/show/plugins/)      | List loaded plugins                            |
| [system](/influxdb3/version/reference/cli/influxdb3/show/system/)        | Display system table data                      |
| [tokens](/influxdb3/version/reference/cli/influxdb3/show/tokens/)        | List authentication tokens                     |
| help                                                                     | Print command help or the help of a subcommand |
## Options

View File

@ -0,0 +1,89 @@
The `influxdb3 show plugins` command lists loaded Processing Engine plugins in your
{{< product-name >}} server.
## Usage
<!--pytest.mark.skip-->
```bash
influxdb3 show plugins [OPTIONS]
```
## Options
| Option | | Description |
| :----- | :----------- | :--------------------------------------------------------------------------------------- |
| `-H` | `--host` | Host URL of the running {{< product-name >}} server (default is `http://127.0.0.1:8181`) |
| | `--token` | *({{< req >}})* Authentication token |
| | `--format` | Output format (`pretty` *(default)*, `json`, `jsonl`, `csv`, or `parquet`) |
| | `--output` | Path where to save output when using the `parquet` format |
| | `--tls-ca` | Path to a custom TLS certificate authority (for testing or self-signed certificates) |
| `-h` | `--help` | Print help information |
| | `--help-all` | Print detailed help information |
### Option environment variables
You can use the following environment variables to set command options:
| Environment Variable | Option |
| :--------------------- | :-------- |
| `INFLUXDB3_HOST_URL` | `--host` |
| `INFLUXDB3_AUTH_TOKEN` | `--token` |
## Output
The command returns information about loaded plugin files:
- **plugin\_name**: Name of a trigger using this plugin
- **file\_name**: Plugin filename
- **file\_path**: Full server path to the plugin file
- **size\_bytes**: File size in bytes
- **last\_modified**: Last modification timestamp (milliseconds since epoch)
> \[!Note]
> This command queries the `system.plugin_files` table in the `_internal` database.
> For more advanced queries and filtering, see [Query system data](/influxdb3/version/admin/query-system-data/).
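For example, a quick sketch (placeholder token) that uses the `influxdb3 query` command to find unusually large plugin files:

```bash
influxdb3 query \
  -d _internal \
  --token AUTH_TOKEN \
  "SELECT plugin_name, file_name, size_bytes FROM system.plugin_files WHERE size_bytes > 10000"
```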
## Examples
- [List all plugins](#list-all-plugins)
- [List plugins in different output formats](#list-plugins-in-different-output-formats)
- [Output plugins to a Parquet file](#output-plugins-to-a-parquet-file)
### List all plugins
<!--pytest.mark.skip-->
```bash
influxdb3 show plugins
```
### List plugins in different output formats
You can specify the output format using the `--format` option:
<!--pytest.mark.skip-->
```bash
# JSON format
influxdb3 show plugins --format json
# JSON Lines format
influxdb3 show plugins --format jsonl
# CSV format
influxdb3 show plugins --format csv
```
### Output plugins to a Parquet file
[Parquet](https://parquet.apache.org/) is a binary format.
Use the `--output` option to specify the file where you want to save the Parquet data.
<!--pytest.mark.skip-->
```bash
influxdb3 show plugins \
--format parquet \
--output /Users/me/plugins.parquet
```

View File

@ -1,4 +1,4 @@
The `influxdb3 update` command updates resources such as databases and tables.
The `influxdb3 update` command updates resources in your {{< product-name >}} instance.
## Usage
@ -11,23 +11,27 @@ influxdb3 update <SUBCOMMAND>
## Subcommands
{{% show-in "enterprise" %}}
| Subcommand | Description |
| :----------------------------------------------------------------- | :--------------------- |
| [database](/influxdb3/version/reference/cli/influxdb3/update/database/) | Update a database |
| [table](/influxdb3/version/reference/cli/influxdb3/update/table/) | Update a table |
| help | Print command help or the help of a subcommand |
{{% /show-in %}}
| Subcommand                                                               | Description                                    |
| :----------------------------------------------------------------------- | :--------------------------------------------- |
| [database](/influxdb3/version/reference/cli/influxdb3/update/database/)  | Update a database                              |
| [table](/influxdb3/version/reference/cli/influxdb3/update/table/)        | Update a table                                 |
| [trigger](/influxdb3/version/reference/cli/influxdb3/update/trigger/)    | Update a trigger                               |
| help                                                                     | Print command help or the help of a subcommand |
{{% /show-in %}}
{{% show-in "core" %}}
| Subcommand | Description |
| :----------------------------------------------------------------- | :--------------------- |
| [database](/influxdb3/version/reference/cli/influxdb3/update/database/) | Update a database |
| help | Print command help or the help of a subcommand |
{{% /show-in %}}
| Subcommand                                                               | Description                                    |
| :----------------------------------------------------------------------- | :--------------------------------------------- |
| [database](/influxdb3/version/reference/cli/influxdb3/update/database/)  | Update a database                              |
| [trigger](/influxdb3/version/reference/cli/influxdb3/update/trigger/)    | Update a trigger                               |
| help                                                                     | Print command help or the help of a subcommand |
{{% /show-in %}}
## Options
| Option | | Description |
| :----- | :----------- | :------------------------------ |
| `-h` | `--help` | Print help information |
| | `--help-all` | Print detailed help information |
| | `--help-all` | Print detailed help information |

View File

@ -0,0 +1,174 @@
The `influxdb3 update trigger` command updates an existing trigger in your {{< product-name >}} instance.
Use this command to update trigger plugin code, configuration, or behavior without recreating the trigger. This preserves trigger history and configuration while allowing you to iterate on plugin development.
## Usage
<!--pytest.mark.skip-->
```bash
influxdb3 update trigger [OPTIONS] \
--database <DATABASE_NAME> \
--trigger-name <TRIGGER_NAME>
```
## Arguments
- **`DATABASE_NAME`**: (Required) The name of the database containing the trigger.
- **`TRIGGER_NAME`**: (Required) The name of the trigger to update.
## Options
| Option | | Description |
| :----- | :-------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-H` | `--host` | Host URL of the running {{< product-name >}} server (default is `http://127.0.0.1:8181`) |
| `-d` | `--database` | *({{< req >}})* Name of the database containing the trigger |
| | `--trigger-name` | *({{< req >}})* Name of the trigger to update |
| `-p` | `--path` | Path to plugin file or directory (single `.py` file or directory containing `__init__.py` for multifile plugins). Can be local path (with `--upload`) or server path. |
| | `--upload` | Upload local plugin files to the server. Requires admin token. Use with `--path` to specify local files. |
| | `--trigger-arguments` | Additional arguments for the trigger, in the format `key=value`, separated by commas (for example, `arg1=val1,arg2=val2`) |
| | `--disabled` | Set the trigger state to disabled |
| | `--enabled` | Set the trigger state to enabled |
| | `--error-behavior` | Error handling behavior: `log`, `retry`, or `disable` |
| | `--token` | Authentication token |
| | `--tls-ca` | Path to a custom TLS certificate authority (for testing or self-signed certificates) |
| `-h` | `--help` | Print help information |
| | `--help-all` | Print detailed help information |
### Option environment variables
You can use the following environment variables instead of providing CLI options directly:
| Environment Variable | Option |
| :------------------------ | :----------- |
| `INFLUXDB3_HOST_URL` | `--host` |
| `INFLUXDB3_DATABASE_NAME` | `--database` |
| `INFLUXDB3_AUTH_TOKEN` | `--token` |
| `INFLUXDB3_TLS_CA` | `--tls-ca` |
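For example, a minimal sketch (placeholder values) that exports the host, database, and token so each `influxdb3 update trigger` invocation only needs the trigger-specific options:

```bash
export INFLUXDB3_HOST_URL="http://localhost:8181"
export INFLUXDB3_DATABASE_NAME="DATABASE_NAME"
export INFLUXDB3_AUTH_TOKEN="AUTH_TOKEN"

influxdb3 update trigger \
  --trigger-name TRIGGER_NAME \
  --path "my_plugin.py"
```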
## Examples
The following examples show how to update triggers in different scenarios.
- [Update trigger plugin code](#update-trigger-plugin-code)
- [Upload and update with a local plugin](#upload-and-update-with-a-local-plugin)
- [Update trigger arguments](#update-trigger-arguments)
- [Enable or disable a trigger](#enable-or-disable-a-trigger)
- [Update error handling behavior](#update-error-handling-behavior)
***
Replace the following placeholders with your values:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: Database name
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: Authentication token
- {{% code-placeholder-key %}}`TRIGGER_NAME`{{% /code-placeholder-key %}}: Name of the trigger to update
{{% code-placeholders "(DATABASE|TRIGGER)\_NAME|AUTH\_TOKEN" %}}
### Update trigger plugin code
Update a trigger to use modified plugin code from the server's plugin directory.
<!--pytest.mark.skip-->
```bash
influxdb3 update trigger \
--database DATABASE_NAME \
--trigger-name TRIGGER_NAME \
--path "my_plugin.py" \
--token AUTH_TOKEN
```
### Upload and update with a local plugin
Upload new plugin code from your local machine and update the trigger in a single operation. Requires admin token.
<!--pytest.mark.skip-->
```bash
# Upload single-file plugin
influxdb3 update trigger \
--database DATABASE_NAME \
--trigger-name TRIGGER_NAME \
--path "/local/path/to/updated_plugin.py" \
--upload \
--token AUTH_TOKEN
# Upload multifile plugin directory
influxdb3 update trigger \
--database DATABASE_NAME \
--trigger-name TRIGGER_NAME \
--path "/local/path/to/plugin_directory" \
--upload \
--token AUTH_TOKEN
```
The `--upload` flag transfers local files to the server's plugin directory, making it easy to iterate on plugin development without manual file copying.
### Update trigger arguments
Modify the arguments passed to a trigger's plugin code.
<!--pytest.mark.skip-->
```bash
influxdb3 update trigger \
--database DATABASE_NAME \
--trigger-name TRIGGER_NAME \
--trigger-arguments threshold=100,window=5m \
--token AUTH_TOKEN
```
### Enable or disable a trigger
Change the trigger's enabled state without modifying other configuration.
<!--pytest.mark.skip-->
```bash
# Disable a trigger
influxdb3 update trigger \
--database DATABASE_NAME \
--trigger-name TRIGGER_NAME \
--disabled \
--token AUTH_TOKEN
# Enable a trigger
influxdb3 update trigger \
--database DATABASE_NAME \
--trigger-name TRIGGER_NAME \
--enabled \
--token AUTH_TOKEN
```
### Update error handling behavior
Change how the trigger responds to errors.
<!--pytest.mark.skip-->
```bash
# Log errors without retrying
influxdb3 update trigger \
--database DATABASE_NAME \
--trigger-name TRIGGER_NAME \
--error-behavior log \
--token AUTH_TOKEN
# Retry on errors
influxdb3 update trigger \
--database DATABASE_NAME \
--trigger-name TRIGGER_NAME \
--error-behavior retry \
--token AUTH_TOKEN
# Disable trigger on error
influxdb3 update trigger \
--database DATABASE_NAME \
--trigger-name TRIGGER_NAME \
--error-behavior disable \
--token AUTH_TOKEN
```
{{% /code-placeholders %}}

View File

@ -1,11 +1,13 @@
<!-- TOC -->
- [Prerequisites](#prerequisites)
- [Quick-Start Mode (Development)](#quick-start-mode-development)
- [Start InfluxDB](#start-influxdb)
- [Object store examples](#object-store-examples)
{{% show-in "enterprise" %}}
{{% show-in "enterprise" %}}
- [Set up licensing](#set-up-licensing)
- [Available license types](#available-license-types)
{{% /show-in %}}
{{% /show-in %}}
- [Set up authorization](#set-up-authorization)
- [Create an operator token](#create-an-operator-token)
- [Set your token for authorization](#set-your-token-for-authorization)
@ -21,6 +23,62 @@ To get started, you'll need:
- A directory on your local disk where you can persist data (used by examples in this guide)
- S3-compatible object store and credentials
## Quick-Start Mode (Development)
For development, testing, and home use, you can start {{% product-name %}} without
any arguments. The system automatically generates required configuration values
based on your system's hostname:
```bash
influxdb3
```
When you run `influxdb3` without arguments, the following values are auto-generated:
{{% show-in "enterprise" %}}
- **`node-id`**: `{hostname}-node` (or `primary-node` if hostname is unavailable)
- **`cluster-id`**: `{hostname}-cluster` (or `primary-cluster` if hostname is unavailable)
{{% /show-in %}}
{{% show-in "core" %}}
- **`node-id`**: `{hostname}-node` (or `primary-node` if hostname is unavailable)
{{% /show-in %}}
- **`object-store`**: `file`
- **`data-dir`**: `~/.influxdb`
The system displays warning messages showing the auto-generated identifiers:
{{% show-in "enterprise" %}}
```
Using auto-generated node id: mylaptop-node. For production deployments, explicitly set --node-id
Using auto-generated cluster id: mylaptop-cluster. For production deployments, explicitly set --cluster-id
```
{{% /show-in %}}
{{% show-in "core" %}}
```
Using auto-generated node id: mylaptop-node. For production deployments, explicitly set --node-id
```
{{% /show-in %}}
> \[!Important]
>
> #### When to use quick-start mode
>
> Quick-start mode is designed for development, testing, and home lab environments
> where simplicity is prioritized over explicit configuration.
>
> **For production deployments**, use explicit configuration values with the
> [`influxdb3 serve` command](/influxdb3/version/reference/cli/influxdb3/serve/)
> as shown in the [Start InfluxDB](#start-influxdb) section below.
**Configuration precedence**: Environment variables override auto-generated defaults.
For example, if you set `INFLUXDB3_NODE_IDENTIFIER_PREFIX=my-node`, the system
uses `my-node` instead of generating `{hostname}-node`.
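For example, a quick sketch of overriding only the node identifier while keeping the other auto-generated defaults:

```bash
export INFLUXDB3_NODE_IDENTIFIER_PREFIX=my-node

# Object store and data directory still use the auto-generated defaults
influxdb3
```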
## Start InfluxDB
Use the [`influxdb3 serve` command](/influxdb3/version/reference/cli/influxdb3/serve/)
@ -28,24 +86,28 @@ to start {{% product-name %}}.
Provide the following:
{{% show-in "enterprise" %}}
- `--node-id`: A string identifier that distinguishes individual server
instances within the cluster. This forms the final part of the storage path:
`<CONFIGURED_PATH>/<CLUSTER_ID>/<NODE_ID>`.
In a multi-node setup, this ID is used to reference specific nodes.
- `--cluster-id`: A string identifier that determines part of the storage path
hierarchy. All nodes within the same cluster share this identifier.
The storage path follows the pattern `<CONFIGURED_PATH>/<CLUSTER_ID>/<NODE_ID>`.
In a multi-node setup, this ID is used to reference the entire cluster.
{{% /show-in %}}
{{% show-in "core" %}}
{{% /show-in %}}
{{% show-in "core" %}}
- `--node-id`: A string identifier that distinguishes individual server instances.
This forms the final part of the storage path: `<CONFIGURED_PATH>/<NODE_ID>`.
{{% /show-in %}}
{{% /show-in %}}
- `--object-store`: Specifies the type of object store to use.
InfluxDB supports the following:
- `file`: local file system
- `memory`: in memory _(no object persistence)_
- `file`: local file system
- `memory`: in memory *(no object persistence)*
- `memory-throttled`: like `memory` but with latency and throughput that
somewhat resembles a cloud-based object store
- `s3`: AWS S3 and S3-compatible services like Ceph or Minio
@ -55,14 +117,15 @@ Provide the following:
- Other object store parameters depending on the selected `object-store` type.
For example, if you use `s3`, you must provide the bucket name and credentials.
> [!Note]
> \[!Note]
>
> #### Diskless architecture
>
> InfluxDB 3 supports a diskless architecture that can operate with object
> storage alone, eliminating the need for locally attached disks.
> {{% product-name %}} can also work with only local disk storage when needed.
> {{% product-name %}} can also work with only local disk storage when needed.
>
> {{% show-in "enterprise" %}}
> {{% show-in "enterprise" %}}
> The combined path structure `<CONFIGURED_PATH>/<CLUSTER_ID>/<NODE_ID>` ensures
> proper organization of data in your object store, allowing for clean
> separation between clusters and individual nodes.
@ -72,6 +135,7 @@ For this getting started guide, use the `file` object store to persist data to
your local disk.
{{% show-in "enterprise" %}}
```bash
# File system object store
# Provide the filesystem directory
@ -81,8 +145,10 @@ influxdb3 serve \
--object-store file \
--data-dir ~/.influxdb3
```
{{% /show-in %}}
{{% show-in "core" %}}
```bash
# File system object store
# Provide the file system directory
@ -91,6 +157,7 @@ influxdb3 serve \
--object-store file \
--data-dir ~/.influxdb3
```
{{% /show-in %}}
### Object store examples
@ -104,6 +171,7 @@ This is the default object store type.
Replace the following with your values:
{{% show-in "enterprise" %}}
```bash
# Filesystem object store
# Provide the filesystem directory
@ -113,8 +181,10 @@ influxdb3 serve \
--object-store file \
--data-dir ~/.influxdb3
```
{{% /show-in %}}
{{% show-in "core" %}}
```bash
# File system object store
# Provide the file system directory
@ -123,6 +193,7 @@ influxdb3 serve \
--object-store file \
--data-dir ~/.influxdb3
```
{{% /show-in %}}
{{% /expand %}}
@ -136,7 +207,9 @@ provide the following options with your `docker run` command:
- `--object-store file --data-dir /path/in/container`: Uses the volume for object storage
{{% show-in "enterprise" %}}
<!--pytest.mark.skip-->
```bash
# File system object store with Docker
# Create a mount
@ -149,9 +222,12 @@ docker run -it \
--object-store file \
--data-dir /path/in/container
```
{{% /show-in %}}
{{% show-in "core" %}}
<!--pytest.mark.skip-->
```bash
# File system object store with Docker
# Create a mount
@ -163,10 +239,11 @@ docker run -it \
--object-store file \
--data-dir /path/in/container
```
{{% /show-in %}}
> [!Note]
>
> \[!Note]
>
> The {{% product-name %}} Docker image exposes port `8181`, the `influxdb3`
> server default for HTTP connections.
> To map the exposed port to a different port when running a container, see the
@ -175,8 +252,9 @@ docker run -it \
{{% /expand %}}
{{% expand "Docker compose with a mounted file system object store" %}}
Open `compose.yaml` for editing and add a `services` entry for
{{% product-name %}}--for example:
{{% product-name %}}--for example:
{{% show-in "enterprise" %}}
```yaml
# compose.yaml
services:
@ -206,11 +284,13 @@ services:
# Path to store plugins in the container
target: /var/lib/influxdb3/plugins
```
Replace `EMAIL_ADDRESS` with your email address to bypass the email prompt
when generating a trial or at-home license. For more information, see [Manage your
{{% product-name %}} license](/influxdb3/version/admin/license/).
Replace `EMAIL_ADDRESS` with your email address to bypass the email prompt
when generating a trial or at-home license. For more information, see [Manage your
{{% product-name %}} license](/influxdb3/version/admin/license/).
{{% /show-in %}}
{{% show-in "core" %}}
```yaml
# compose.yaml
services:
@ -237,11 +317,13 @@ services:
# Path to store plugins in the container
target: /var/lib/influxdb3/plugins
```
{{% /show-in %}}
Use the Docker Compose CLI to start the server--for example:
<!--pytest.mark.skip-->
```bash
docker compose pull && docker compose up influxdb3-{{< product-key >}}
```
@ -250,7 +332,8 @@ The command pulls the latest {{% product-name %}} Docker image and starts
`influxdb3` in a container with host port `8181` mapped to container port
`8181`, the server default for HTTP connections.
> [!Tip]
> \[!Tip]
>
> #### Custom port mapping
>
> To customize your `influxdb3` server hostname and port, specify the
@ -267,6 +350,7 @@ This is useful for production deployments that require high availability and dur
Provide your bucket name and credentials to access the S3 object store.
{{% show-in "enterprise" %}}
```bash
# S3 object store (default is the us-east-1 region)
# Specify the object store type and associated options
@ -293,8 +377,10 @@ influxdb3 serve \
--aws-endpoint ENDPOINT \
--aws-allow-http
```
{{% /show-in %}}
{{% show-in "core" %}}
```bash
# S3 object store (default is the us-east-1 region)
# Specify the object store type and associated options
@ -319,6 +405,7 @@ influxdb3 serve \
--aws-endpoint ENDPOINT \
--aws-allow-http
```
{{% /show-in %}}
{{% /expand %}}
@ -328,6 +415,7 @@ Store data in RAM without persisting it on shutdown.
It's useful for rapid testing and development.
{{% show-in "enterprise" %}}
```bash
# Memory object store
# Stores data in RAM; doesn't persist data
@ -336,8 +424,10 @@ influxdb3 serve \
--cluster-id cluster01 \
--object-store memory
```
{{% /show-in %}}
{{% show-in "core" %}}
```bash
# Memory object store
# Stores data in RAM; doesn't persist data
@ -345,6 +435,7 @@ influxdb3 serve \
--node-id host01 \
--object-store memory
```
{{% /show-in %}}
{{% /expand %}}
@ -358,6 +449,7 @@ influxdb3 serve --help
```
{{% show-in "enterprise" %}}
## Set up licensing
When you first start a new instance, {{% product-name %}} prompts you to select a
@ -375,27 +467,29 @@ InfluxDB 3 Enterprise licenses:
- **At-Home**: For at-home hobbyist use with limited access to InfluxDB 3 Enterprise capabilities.
- **Commercial**: Commercial license with full access to InfluxDB 3 Enterprise capabilities.
> [!Important]
> \[!Important]
>
> #### Trial and at-home licenses with Docker
>
> To generate the trial or home license in Docker, bypass the email prompt.
> The first time you start a new instance, provide your email address with the
> `--license-email` option or the `INFLUXDB3_ENTERPRISE_LICENSE_EMAIL` environment variable.
>
> _Currently, if you use Docker and enter your email address in the prompt, a bug may
> prevent the container from generating the license ._
> *Currently, if you use Docker and enter your email address in the prompt, a bug may
> prevent the container from generating the license.*
>
> For more information, see [the Docker Compose example](/influxdb3/enterprise/admin/license/?t=Docker+compose#start-the-server-with-your-license-email).
{{% /show-in %}}
> {{% /show-in %}}
> [!Tip]
> \[!Tip]
>
> #### Use the InfluxDB 3 Explorer query interface
>
> You can complete the remaining steps in this guide using InfluxDB 3 Explorer,
> the web-based query and administrative interface for InfluxDB 3.
> Explorer provides visual management of databases and tokens and an
> easy way to write and query your time series data.
>
>
> For more information, see the [InfluxDB 3 Explorer documentation](/influxdb3/explorer/).
## Set up authorization
@ -416,17 +510,17 @@ commands and HTTP API requests.
database
- A system token grants read access to system information endpoints and
metrics for the server
{{% /show-in %}}
{{% show-in "core" %}}
{{% product-name %}} supports _admin_ tokens, which grant access to all CLI actions and API endpoints.
{{% /show-in %}}
{{% /show-in %}}
{{% show-in "core" %}}
{{% product-name %}} supports *admin* tokens, which grant access to all CLI actions and API endpoints.
{{% /show-in %}}
For more information about tokens and authorization, see [Manage tokens](/influxdb3/version/admin/tokens/).
### Create an operator token
After you start the server, create your first admin token.
The first admin token you create is the _operator_ token for the server.
The first admin token you create is the *operator* token for the server.
Use the [`influxdb3 create token` command](/influxdb3/version/reference/cli/influxdb3/create/token/)
with the `--admin` option to create your operator token:
@ -445,11 +539,13 @@ influxdb3 create token --admin
{{% /code-tab-content %}}
{{% code-tab-content %}}
{{% code-placeholders "CONTAINER_NAME" %}}
{{% code-placeholders "CONTAINER\_NAME" %}}
```bash
# With Docker — in a new terminal:
docker exec -it CONTAINER_NAME influxdb3 create token --admin
```
{{% /code-placeholders %}}
Replace {{% code-placeholder-key %}}`CONTAINER_NAME`{{% /code-placeholder-key %}} with the name of your running Docker container.
@ -459,9 +555,10 @@ Replace {{% code-placeholder-key %}}`CONTAINER_NAME`{{% /code-placeholder-key %}
The command returns a token string for authenticating CLI commands and API requests.
> [!Important]
> \[!Important]
>
> #### Store your token securely
>
>
> InfluxDB displays the token string only when you create it.
> Store your token securely—you cannot retrieve it from the database later.
@ -486,10 +583,12 @@ In your command, replace {{% code-placeholder-key %}}`YOUR_AUTH_TOKEN`{{% /code-
Set the `INFLUXDB3_AUTH_TOKEN` environment variable to have the CLI use your
token automatically:
{{% code-placeholders "YOUR_AUTH_TOKEN" %}}
{{% code-placeholders "YOUR\_AUTH\_TOKEN" %}}
```bash
export INFLUXDB3_AUTH_TOKEN=YOUR_AUTH_TOKEN
```
{{% /code-placeholders %}}
{{% /tab-content %}}
@ -497,10 +596,12 @@ export INFLUXDB3_AUTH_TOKEN=YOUR_AUTH_TOKEN
Include the `--token` option with CLI commands:
{{% code-placeholders "YOUR_AUTH_TOKEN" %}}
{{% code-placeholders "YOUR\_AUTH\_TOKEN" %}}
```bash
influxdb3 show databases --token YOUR_AUTH_TOKEN
```
{{% /code-placeholders %}}
{{% /tab-content %}}
@ -508,37 +609,41 @@ influxdb3 show databases --token YOUR_AUTH_TOKEN
For HTTP API requests, include your token in the `Authorization` header--for example:
{{% code-placeholders "YOUR_AUTH_TOKEN" %}}
{{% code-placeholders "YOUR\_AUTH\_TOKEN" %}}
```bash
curl "http://{{< influxdb/host >}}/api/v3/configure/database" \
--header "Authorization: Bearer YOUR_AUTH_TOKEN"
```
{{% /code-placeholders %}}
#### Learn more about tokens and permissions
- [Manage admin tokens](/influxdb3/version/admin/tokens/admin/) - Understand and
manage operator and named admin tokens
{{% show-in "enterprise" %}}
{{% show-in "enterprise" %}}
- [Manage resource tokens](/influxdb3/version/admin/tokens/resource/) - Create,
list, and delete resource tokens
{{% /show-in %}}
{{% /show-in %}}
- [Authentication](/influxdb3/version/reference/internals/authentication/) -
Understand authentication, authorizations, and permissions in {{% product-name %}}
<!-- //TODO - Authenticate with compatibility APIs -->
{{% show-in "core" %}}
{{% page-nav
prev="/influxdb3/version/get-started/"
prevText="Get started"
next="/influxdb3/version/get-started/write/"
nextText="Write data"
prev="/influxdb3/version/get-started/"
prevText="Get started"
next="/influxdb3/version/get-started/write/"
nextText="Write data"
%}}
{{% /show-in %}}
{{% show-in "enterprise" %}}
{{% page-nav
prev="/influxdb3/version/get-started/"
prevText="Get started"
next="/influxdb3/version/get-started/multi-server/"
nextText="Create a multi-node cluster"
prev="/influxdb3/version/get-started/"
prevText="Get started"
next="/influxdb3/version/get-started/multi-server/"
nextText="Create a multi-node cluster"
%}}
{{% /show-in %}}

View File

@ -1,8 +1,8 @@
Use the Processing Engine in {{% product-name %}} to extend your database with custom Python code. Trigger your code on write, on a schedule, or on demand to automate workflows, transform data, and create API endpoints.
Use the Processing Engine in {{% product-name %}} to extend your database with custom Python code. Trigger your code on write, on a schedule, or on demand to automate workflows, transform data, and create API endpoints.
## What is the Processing Engine?
The Processing Engine is an embedded Python virtual machine that runs inside your {{% product-name %}} database. You configure _triggers_ to run your Python _plugin_ code in response to:
The Processing Engine is an embedded Python virtual machine that runs inside your {{% product-name %}} database. You configure *triggers* to run your Python *plugin* code in response to:
- **Data writes** - Process and transform data as it enters the database
- **Scheduled events** - Run code at defined intervals or specific times
@ -14,7 +14,8 @@ This guide walks you through setting up the Processing Engine, creating your fir
## Before you begin
Ensure you have:
Ensure you have:
- A working {{% product-name %}} instance
- Access to command line
- Python installed if you're writing your own plugin
@ -24,21 +25,27 @@ Once you have all the prerequisites in place, follow these steps to implement th
- [Set up the Processing Engine](#set-up-the-processing-engine)
- [Add a Processing Engine plugin](#add-a-processing-engine-plugin)
- [Upload plugins from local machine](#upload-plugins-from-local-machine)
- [Update existing plugins](#update-existing-plugins)
- [View loaded plugins](#view-loaded-plugins)
- [Set up a trigger](#set-up-a-trigger)
- [Manage plugin dependencies](#manage-plugin-dependencies)
{{% show-in "enterprise" %}}
- [Plugin security](#plugin-security)
{{% show-in "enterprise" %}}
- [Distributed cluster considerations](#distributed-cluster-considerations)
{{% /show-in %}}
{{% /show-in %}}
## Set up the Processing Engine
To activate the Processing Engine, start your {{% product-name %}} server with the `--plugin-dir` flag. This flag tells InfluxDB where to load your plugin files.
> [!Important]
> \[!Important]
>
> #### Keep the influxdb3 binary with its python directory
>
> The influxdb3 binary requires the adjacent `python/` directory to function.
> The influxdb3 binary requires the adjacent `python/` directory to function.
> If you manually extract from tar.gz, keep them in the same parent directory:
>
> ```
> your-install-location/
> ├── influxdb3
@ -47,7 +54,7 @@ To activate the Processing Engine, start your {{% product-name %}} server with t
>
> Add the parent directory to your PATH; do not move the binary out of this directory.
{{% code-placeholders "NODE_ID|OBJECT_STORE_TYPE|PLUGIN_DIR" %}}
{{% code-placeholders "NODE\_ID|OBJECT\_STORE\_TYPE|PLUGIN\_DIR" %}}
```bash
influxdb3 serve \
@ -64,11 +71,12 @@ In the example above, replace the following:
- {{% code-placeholder-key %}}`OBJECT_STORE_TYPE`{{% /code-placeholder-key %}}: Type of object store (for example, file or s3)
- {{% code-placeholder-key %}}`PLUGIN_DIR`{{% /code-placeholder-key %}}: Absolute path to the directory where plugin files are stored. Store all plugin files in this directory or its subdirectories.
> [!Note]
> \[!Note]
>
> #### Use custom plugin repositories
>
> By default, plugins referenced with the `gh:` prefix are fetched from the official
> [influxdata/influxdb3_plugins](https://github.com/influxdata/influxdb3_plugins) repository.
> [influxdata/influxdb3\_plugins](https://github.com/influxdata/influxdb3_plugins) repository.
> To use a custom repository, add the `--plugin-repo` flag when starting the server.
> See [Use a custom plugin repository](#option-3-use-a-custom-plugin-repository) for details.
@ -84,7 +92,8 @@ When running {{% product-name %}} in a distributed setup, follow these steps to
3. Maintain identical plugin files across all instances where plugins run
- Use shared storage or file synchronization tools to keep plugins consistent
> [!Note]
> \[!Note]
>
> #### Provide plugins to nodes that run them
>
> Configure your plugin directory on the same system as the nodes that run the triggers and plugins.
@ -95,7 +104,7 @@ For more information about configuring distributed environments, see the [Distri
## Add a Processing Engine plugin
A plugin is a Python script that defines a specific function signature for a trigger (_trigger spec_). When the specified event occurs, InfluxDB runs the plugin.
A plugin is a Python script that defines a specific function signature for a trigger (*trigger spec*). When the specified event occurs, InfluxDB runs the plugin.
### Choose a plugin strategy
@ -110,13 +119,13 @@ InfluxData maintains a repository of official and community plugins that you can
Browse the [plugin library](/influxdb3/version/plugins/library/) to find examples and InfluxData official plugins for:
- **Data transformation**: Process and transform incoming data
- **Alerting**: Send notifications based on data thresholds
- **Aggregation**: Calculate statistics on time series data
- **Integration**: Connect to external services and APIs
- **System monitoring**: Track resource usage and health metrics
- **Data transformation**: Process and transform incoming data
- **Alerting**: Send notifications based on data thresholds
- **Aggregation**: Calculate statistics on time series data
- **Integration**: Connect to external services and APIs
- **System monitoring**: Track resource usage and health metrics
For community contributions, see the [influxdb3_plugins repository](https://github.com/influxdata/influxdb3_plugins) on GitHub.
For community contributions, see the [influxdb3\_plugins repository](https://github.com/influxdata/influxdb3_plugins) on GitHub.
#### Add example plugins
@ -189,17 +198,17 @@ influxdb3 create trigger \
The `--plugin-repo` option accepts any HTTP/HTTPS URL that serves raw plugin files.
See the [plugin-repo configuration option](/influxdb3/version/reference/config-options/#plugin-repo) for more details.
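For example, a sketch (the repository URL is hypothetical) of starting the server with a custom plugin repository:

```bash
influxdb3 serve \
  --node-id node0 \
  --object-store file \
  --data-dir ~/.influxdb3 \
  --plugin-dir ~/.plugins \
  --plugin-repo "https://plugins.example.com/influxdb3/"
```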
Plugins have various functions such as:
Plugins have various functions such as:
- Receive plugin-specific arguments (such as written data, call time, or an HTTP request)
- Access keyword arguments (as `args`) passed from _trigger arguments_ configurations
- Access keyword arguments (as `args`) passed from *trigger arguments* configurations
- Access the `influxdb3_local` shared API to write data, query data, and manage state between executions
For more information about available functions, arguments, and how plugins interact with InfluxDB, see how to [Extend plugins](/influxdb3/version/extend-plugin/).
For more information about available functions, arguments, and how plugins interact with InfluxDB, see how to [Extend plugins](/influxdb3/version/extend-plugin/).
### Create a custom plugin
To build custom functionality, you can create your own Processing Engine plugin.
To build custom functionality, you can create your own Processing Engine plugin.
#### Prerequisites
@ -227,10 +236,53 @@ Choose a plugin type based on your automation goals:
#### Create your plugin file
Plugins now support both single-file and multifile architectures:
**Single-file plugins:**
- Create a `.py` file in your plugins directory
- Add the appropriate function signature based on your chosen plugin type
- Write your processing logic inside the function
**Multifile plugins:**
- Create a directory in your plugins directory
- Add an `__init__.py` file as the entry point (required)
- Organize supporting modules in additional `.py` files
- Import and use modules within your plugin code
##### Example multifile plugin structure
```
my_plugin/
├── __init__.py # Required - entry point with trigger function
├── utils.py # Supporting module
├── processors.py # Data processing functions
└── config.py # Configuration helpers
```
The `__init__.py` file must contain your trigger function:
```python
# my_plugin/__init__.py
from .processors import process_data
from .config import get_settings
def process_writes(influxdb3_local, table_batches, args=None):
settings = get_settings()
for table_batch in table_batches:
process_data(influxdb3_local, table_batch, settings)
```
Supporting modules can contain helper functions:
```python
# my_plugin/processors.py
def process_data(influxdb3_local, table_batch, settings):
# Processing logic here
pass
```
After writing your plugin, [create a trigger](#use-the-create-trigger-command) to connect it to a database event and define when it runs.
#### Create a data write plugin
@ -313,21 +365,141 @@ After writing your plugin:
- [Install any Python dependencies](#manage-plugin-dependencies) your plugin requires
- Learn how to [extend plugins with the API](/influxdb3/version/extend-plugin/)
### Upload plugins from local machine
For local development and testing, you can upload plugin files directly from your machine when creating triggers. This eliminates the need to manually copy files to the server's plugin directory.
Use the `--upload` flag with `--path` to transfer local files or directories:
```bash
# Upload single-file plugin
influxdb3 create trigger \
--trigger-spec "every:10s" \
--path "/local/path/to/plugin.py" \
--upload \
--database metrics \
my_trigger
# Upload multifile plugin directory
influxdb3 create trigger \
--trigger-spec "every:30s" \
--path "/local/path/to/plugin-dir" \
--upload \
--database metrics \
complex_trigger
```
> \[!Important]
>
> #### Admin privileges required
>
> Plugin uploads require an admin token. This security measure prevents unauthorized code execution on the server.
**When to use plugin upload:**
- Local plugin development and testing
- Deploying plugins without SSH access to the server
- Rapid iteration on plugin code
- Automating plugin deployment in CI/CD pipelines
For more information, see the [`influxdb3 create trigger` CLI reference](/influxdb3/version/reference/cli/influxdb3/create/trigger/).
### Update existing plugins
Modify plugin code for running triggers without recreating them. This allows you to iterate on plugin development while preserving trigger configuration and history.
Use the `influxdb3 update trigger` command:
```bash
# Update single-file plugin
influxdb3 update trigger \
--database metrics \
--trigger-name my_trigger \
--path "/path/to/updated/plugin.py"
# Update multifile plugin
influxdb3 update trigger \
--database metrics \
--trigger-name complex_trigger \
--path "/path/to/updated/plugin-dir"
```
The update operation:
- Replaces plugin files immediately
- Preserves trigger configuration (spec, schedule, arguments)
- Requires admin token for security
- Works with both local paths and uploaded files
For complete reference, see [`influxdb3 update trigger`](/influxdb3/version/reference/cli/influxdb3/update/trigger/).
### View loaded plugins
Monitor which plugins are loaded in your system for operational visibility and troubleshooting.
**Option 1: Use the CLI command**
```bash
# List all plugins
influxdb3 show plugins --token $ADMIN_TOKEN
# JSON format for programmatic access
influxdb3 show plugins --format json --token $ADMIN_TOKEN
```
**Option 2: Query the system table**
The `system.plugin_files` table in the `_internal` database provides detailed plugin file information:
```bash
influxdb3 query \
-d _internal \
"SELECT * FROM system.plugin_files ORDER BY plugin_name" \
--token $ADMIN_TOKEN
```
**Available columns:**
- `plugin_name` (String): Trigger name
- `file_name` (String): Plugin file name
- `file_path` (String): Full server path
- `size_bytes` (Int64): File size
- `last_modified` (Int64): Modification timestamp (milliseconds)
**Example queries:**
```sql
-- Find plugins by name
SELECT * FROM system.plugin_files WHERE plugin_name = 'my_trigger';
-- Find large plugins
SELECT plugin_name, size_bytes
FROM system.plugin_files
WHERE size_bytes > 10000;
-- Check modification times
SELECT plugin_name, file_name, last_modified
FROM system.plugin_files
ORDER BY last_modified DESC;
```
For more information, see the [`influxdb3 show plugins` reference](/influxdb3/version/reference/cli/influxdb3/show/plugins/) and [Query system data](/influxdb3/version/admin/query-system-data/#query-plugin-files).
## Set up a trigger
### Understand trigger types
| Plugin Type | Trigger Specification | When Plugin Runs |
|------------|----------------------|-----------------|
| Data write | `table:<TABLE_NAME>` or `all_tables` | When data is written to tables |
| Scheduled | `every:<DURATION>` or `cron:<EXPRESSION>` | At specified time intervals |
| HTTP request | `request:<REQUEST_PATH>` | When HTTP requests are received |
| Plugin Type | Trigger Specification | When Plugin Runs |
| ------------ | ----------------------------------------- | ------------------------------- |
| Data write | `table:<TABLE_NAME>` or `all_tables` | When data is written to tables |
| Scheduled | `every:<DURATION>` or `cron:<EXPRESSION>` | At specified time intervals |
| HTTP request | `request:<REQUEST_PATH>` | When HTTP requests are received |
### Use the create trigger command
Use the `influxdb3 create trigger` command with the appropriate trigger specification:
{{% code-placeholders "SPECIFICATION|PLUGIN_FILE|DATABASE_NAME|TRIGGER_NAME" %}}
{{% code-placeholders "SPECIFICATION|PLUGIN\_FILE|DATABASE\_NAME|TRIGGER\_NAME" %}}
```bash
influxdb3 create trigger \
@ -335,7 +507,7 @@ influxdb3 create trigger \
--plugin-filename PLUGIN_FILE \
--database DATABASE_NAME \
TRIGGER_NAME
```
```
{{% /code-placeholders %}}
@ -346,14 +518,14 @@ In the example above, replace the following:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: Name of the database
- {{% code-placeholder-key %}}`TRIGGER_NAME`{{% /code-placeholder-key %}}: Name of the new trigger
> [!Note]
> \[!Note]
> When specifying a local plugin file, the `--plugin-filename` parameter
> _is relative to_ the `--plugin-dir` configured for the server.
> *is relative to* the `--plugin-dir` configured for the server.
> You don't need to provide an absolute path.
### Trigger specification examples
#### Trigger on data writes
#### Trigger on data writes
```bash
# Trigger on writes to a specific table
@ -381,7 +553,8 @@ The plugin receives the written data and table information.
If you want to use a single trigger for all tables but exclude specific tables,
you can use trigger arguments and your plugin code to filter out unwanted tables--for example:
{{% code-placeholders "DATABASE_NAME|AUTH_TOKEN" %}}
{{% code-placeholders "DATABASE\_NAME|AUTH\_TOKEN" %}}
```bash
influxdb3 create trigger \
--database DATABASE_NAME \
@ -391,13 +564,14 @@ influxdb3 create trigger \
--trigger-arguments "exclude_tables=temp_data,debug_info,system_logs" \
data_processor
```
{{% /code-placeholders %}}
Replace the following:
- {{% code-placeholder-key %}}DATABASE_NAME{{% /code-placeholder-key %}}: the name of the database
- {{% code-placeholder-key %}}AUTH_TOKEN{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in
"enterprise" %}} with write permissions on the specified database{{% /show-in %}}
- {{% code-placeholder-key %}}DATABASE\_NAME{{% /code-placeholder-key %}}: the name of the database
- {{% code-placeholder-key %}}AUTH\_TOKEN{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in
"enterprise" %}} with write permissions on the specified database{{% /show-in %}}
Then, in your plugin:
@ -423,7 +597,7 @@ def on_write(self, database, table_name, batch):
triggers instead of filtering within plugin code.
See HTTP API [Processing engine endpoints](/influxdb3/version/api/v3/#tag/Processing-engine) for managing triggers.
#### Trigger on a schedule
#### Trigger on a schedule
```bash
# Run every 5 minutes
@ -545,12 +719,10 @@ influxdb3 create trigger \
## Manage plugin dependencies
Use the `influxdb3 install package` command to add third-party libraries (like `pandas`, `requests`, or `influxdb3-python`) to your plugin environment.
Use the `influxdb3 install package` command to add third-party libraries (like `pandas`, `requests`, or `influxdb3-python`) to your plugin environment.\
This installs packages into the Processing Engine's embedded Python environment to ensure compatibility with your InfluxDB instance.
{{% code-placeholders "CONTAINER_NAME|PACKAGE_NAME" %}}
{{% code-placeholders "CONTAINER\_NAME|PACKAGE\_NAME" %}}
{{< code-tabs-wrapper >}}
@ -585,18 +757,133 @@ These examples install the specified Python package (for example, pandas) into t
- Use the CLI command when running InfluxDB directly on your system.
- Use the Docker variant if you're running InfluxDB in a containerized environment.
> [!Important]
> \[!Important]
>
> #### Use bundled Python for plugins
>
> When you start the server with the `--plugin-dir` option, InfluxDB 3 creates a Python virtual environment (`<PLUGIN_DIR>/venv`) for your plugins.
> If you need to create a custom virtual environment, use the Python interpreter bundled with InfluxDB 3. Don't use the system Python.
> Creating a virtual environment with the system Python (for example, using `python -m venv`) can lead to runtime errors and plugin failures.
>
>For more information, see the [processing engine README](https://github.com/influxdata/influxdb/blob/main/README_processing_engine.md).
>
> For more information, see the [processing engine README](https://github.com/influxdata/influxdb/blob/main/README_processing_engine.md).
{{% /code-placeholders %}}
InfluxDB creates a Python virtual environment in your plugins directory with the specified packages installed.
### Disable package installation for secure environments
For air-gapped deployments or environments with strict security requirements, you can disable Python package installation while maintaining Processing Engine functionality.
Start the server with `--package-manager disabled`:
```bash
influxdb3 serve \
--node-id node0 \
--object-store file \
--data-dir ~/.influxdb3 \
--plugin-dir ~/.plugins \
--package-manager disabled
```
When package installation is disabled:
- The Processing Engine continues to function normally for triggers
- Plugin code executes without restrictions
- Package installation commands are blocked
- Pre-installed dependencies in the virtual environment remain available
**Pre-install required dependencies:**
Before disabling the package manager, install all required Python packages:
```bash
# Install packages first
influxdb3 install package pandas requests numpy
# Then start with disabled package manager
influxdb3 serve \
--plugin-dir ~/.plugins \
--package-manager disabled
```
**Use cases for disabled package management:**
- Air-gapped environments without internet access
- Compliance requirements prohibiting runtime package installation
- Centrally managed dependency environments
- Security policies requiring pre-approved packages only
For more configuration options, see [--package-manager](/influxdb3/version/reference/config-options/#package-manager).
## Plugin security
The Processing Engine includes security features to protect your {{% product-name %}} instance from unauthorized code execution and file system attacks.
### Plugin path validation
All plugin file paths are validated to prevent directory traversal attacks. The system blocks:
- **Relative paths with parent directory references** (`../`, `../../`)
- **Absolute paths** (`/etc/passwd`, `/usr/bin/script.py`)
- **Symlinks that escape the plugin directory**
When creating or updating triggers, plugin paths must resolve within the configured `--plugin-dir`.
**Example of blocked paths:**
```bash
# These will be rejected
influxdb3 create trigger \
--path "../../../etc/passwd" \ # Blocked: parent directory traversal
...
influxdb3 create trigger \
--path "/tmp/malicious.py" \ # Blocked: absolute path
...
```
**Valid plugin paths:**
```bash
# These are allowed
influxdb3 create trigger \
--path "myapp/plugin.py" \ # Relative to plugin-dir
...
influxdb3 create trigger \
--path "transforms/data.py" \ # Subdirectory in plugin-dir
...
```
### Upload and update permissions
Plugin upload and update operations require admin tokens to prevent unauthorized code deployment:
- `--upload` flag requires admin privileges
- `update trigger` command requires admin token
- Standard resource tokens cannot upload or modify plugin code
This security model ensures only administrators can introduce or modify executable code in your database.
### Best practices
**For development:**
- Use the `--upload` flag to deploy plugins during development
- Test plugins in non-production environments first
- Review plugin code before deployment
**For production:**
- Pre-deploy plugins to the server's plugin directory via secure file transfer
- Use custom plugin repositories for vetted, approved plugins
- Disable package installation (`--package-manager disabled`) in locked-down environments
- Audit plugin files using the [`system.plugin_files` table](#view-loaded-plugins)
- Implement change control processes for plugin updates
For more security configuration options, see [Configuration options](/influxdb3/version/reference/config-options/).
{{% show-in "enterprise" %}}
## Distributed cluster considerations
@ -607,20 +894,21 @@ When you deploy {{% product-name %}} in a multi-node environment, configure each
Each plugin must run on a node that supports its trigger type:
| Plugin type | Trigger spec | Runs on |
|--------------------|--------------------------|-----------------------------|
| Data write | `table:` or `all_tables` | Ingester nodes |
| Scheduled | `every:` or `cron:` | Any node with scheduler |
| HTTP request | `request:` | Nodes that serve API traffic|
| Plugin type | Trigger spec | Runs on |
| ------------ | ------------------------ | ---------------------------- |
| Data write | `table:` or `all_tables` | Ingester nodes |
| Scheduled | `every:` or `cron:` | Any node with scheduler |
| HTTP request | `request:` | Nodes that serve API traffic |
For example:
- Run write-ahead log (WAL) plugins on ingester nodes.
- Run scheduled plugins on any node configured to execute them.
- Run HTTP-triggered plugins on querier nodes or any node that handles HTTP endpoints.
Place all plugin files in the `--plugin-dir` directory configured for each node.
> [!Note]
> \[!Note]
> Triggers fail if the plugin file isn't available on the node where it runs.
### Route third-party clients to querier nodes
@ -630,7 +918,7 @@ External tools—such as Grafana, custom dashboards, or REST clients—must conn
#### Examples
- **Grafana**: When adding InfluxDB 3 as a Grafana data source, use a querier node URL, such as:
`https://querier.example.com:8086`
`https://querier.example.com:8086`
- **REST clients**: Applications using `POST /api/v3/query/sql` or similar endpoints must target a querier node.
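For example, a quick sketch (placeholder database, table, and token, reusing the querier hostname from the Grafana example above) of a REST client sending a SQL query to a querier node:

```bash
curl "https://querier.example.com:8086/api/v3/query_sql" \
  --header "Authorization: Bearer AUTH_TOKEN" \
  --json '{
    "db": "DATABASE_NAME",
    "q": "SELECT * FROM TABLE_NAME LIMIT 10",
    "format": "jsonl"
  }'
```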
{{% /show-in %}}

View File

@ -1,17 +1,49 @@
> [!Note]
> \[!Note]
>
> #### InfluxDB 3 Core and Enterprise relationship
>
> InfluxDB 3 Enterprise is a superset of InfluxDB 3 Core.
> All updates to Core are automatically included in Enterprise.
> The Enterprise sections below only list updates exclusive to Enterprise.
## v3.6.0 {date="2025-10-30"}
### Core
#### Features
- **Quick-Start Developer Experience**:
- `influxdb3` now supports running without arguments for instant database startup, automatically generating IDs and storage flag values based on your system's setup.
- **Processing Engine**:
- Plugins now support multiple files instead of being limited to a single file.
- When creating a trigger, you can upload a plugin directly from your local machine using the `--upload` flag.
- Existing plugin files can now be updated at runtime without recreating triggers.
- New `system.plugin_files` table and `show plugins` CLI command now provide visibility into all loaded plugin files.
- Custom plugin repositories are now supported via the `--plugin-repo` CLI flag.
- Python package installation can now be disabled with `--package-manager disabled` for locked-down environments.
- Plugin file path validation now prevents directory traversal attacks by blocking relative and absolute path patterns.
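A minimal sketch of two of these additions (the repository URL is a placeholder; other `serve` options are omitted):

```bash
# Quick start: run without arguments; IDs and storage flag values are generated
influxdb3

# Locked-down sketch: disable package installation and pin a custom plugin repository
influxdb3 serve \
  --package-manager disabled \
  --plugin-repo "https://plugins.example.com/" \
  ...
```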
#### Bug fixes
- **Token management**: Token display now works correctly for hard-deleted databases
### Enterprise
All Core updates are included in Enterprise. Additional Enterprise-specific features and fixes:
#### Operational improvements
- **Storage engine**: Improvements to the Docker-based license service development environment
- **Catalog consistency**: Node management fixes for catalog edge cases
- Other enhancements and performance improvements
## v3.5.0 {date="2025-09-30"}
### Core
#### Features
- **Custom Plugin Repository**:
- Use the `--plugin-repo` option with `influxdb3 serve` to specify custom plugin repositories. This enables loading plugins from personal repos or disabling remote repo access.
#### Bug fixes
@ -19,9 +51,9 @@
- **Database reliability**:
- Table index updates now complete atomically before creating new indices, preventing race conditions that could corrupt database state ([#26838](https://github.com/influxdata/influxdb/pull/26838))
- Delete operations are now idempotent, preventing errors during object store cleanup ([#26839](https://github.com/influxdata/influxdb/pull/26839))
- **Write path**:
- Write operations to soft-deleted databases are now rejected, preventing data loss ([#26722](https://github.com/influxdata/influxdb/pull/26722))
- **Runtime stability**:
- Fixed a compatibility issue that could cause deadlocks for concurrent operations ([#26804](https://github.com/influxdata/influxdb/pull/26804))
- Other bug fixes and performance improvements
@ -35,12 +67,12 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat
#### Features
- **Cache optimization**:
- Last Value Cache (LVC) and Distinct Value Cache (DVC) now populate on creation and only on query nodes, reducing resource usage on ingest nodes.
#### Bug fixes
- **Object store reliability**:
- Object store operations now use retryable mechanisms with better error handling
#### Operational improvements
@ -48,7 +80,7 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat
- **Compaction optimizations**:
- Compaction producer now waits 10 seconds before starting cycles, reducing resource contention during startup
- Enhanced scheduling algorithms distribute compaction work more efficiently across available resources
- **System tables**:
- System tables now provide consistent data across different node modes (ingest, query, compact), enabling better monitoring in multi-node deployments
## v3.4.2 {date="2025-09-11"}
@ -92,8 +124,8 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat
### Core
#### Bug Fixes
- Upgrading from 3.3.0 to 3.4.x no longer causes possible catalog migration issues ([#26756](https://github.com/influxdata/influxdb/pull/26756))
## v3.4.0 {date="2025-08-27"}
@ -107,21 +139,22 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat
([#26734](https://github.com/influxdata/influxdb/pull/26734))
- **Azure Endpoint**:
- Use the `--azure-endpoint` option with `influxdb3 serve` to specify the Azure Blob Storage endpoint for object store connections. ([#26687](https://github.com/influxdata/influxdb/pull/26687))
- **No_Sync via CLI**:
- Use the `--no-sync` option with `influxdb3 write` to skip waiting for WAL persistence on write and immediately return a response to the write request. ([#26703](https://github.com/influxdata/influxdb/pull/26703))
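For example, a hedged sketch of both options (values are placeholders, line protocol is assumed to be accepted on stdin, and other `serve` options are omitted):

```bash
# Return immediately without waiting for WAL persistence
echo "home,room=Kitchen temp=21.5" | influxdb3 write \
  --database example_db \
  --no-sync

# Point the object store at an Azure Blob Storage endpoint
influxdb3 serve \
  --azure-endpoint "https://exampleaccount.blob.core.windows.net" \
  ...
```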
#### Bug Fixes
- Validate tag and field names when creating tables ([#26641](https://github.com/influxdata/influxdb/pull/26641))
- Using GROUP BY twice on the same column no longer causes incorrect data ([#26732](https://github.com/influxdata/influxdb/pull/26732))
#### Security & Misc
- Reduce verbosity of the TableIndexCache log. ([#26709](https://github.com/influxdata/influxdb/pull/26709))
- WAL replay concurrency limit defaults to number of CPU cores, preventing possible OOMs. ([#26715](https://github.com/influxdata/influxdb/pull/26715))
- Remove unsafe `signal_handler` code. ([#26685](https://github.com/influxdata/influxdb/pull/26685))
- Upgrade Python version to 3.13.7-20250818. ([#26686](https://github.com/influxdata/influxdb/pull/26686), [#26700](https://github.com/influxdata/influxdb/pull/26700))
- Tags with `/` in the name no longer break the primary key.
### Enterprise
All Core updates are included in Enterprise. Additional Enterprise-specific features and fixes:
@ -129,18 +162,16 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat
#### Features
- **Token Provisioning**:
- Generate *resource* and *admin* tokens offline and use them when starting the database.
- Select a home or trial license without using an interactive terminal.
Use the `--license-type [home | trial | commercial]` option with the `influxdb3 serve` command to automate the selection of the license type.
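A brief sketch of non-interactive license selection (other `serve` options are omitted):

```bash
# Select a trial license without an interactive terminal
influxdb3 serve \
  --license-type trial \
  ...
```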
#### Bug Fixes
- Don't initialize the Processing Engine when the specified `--mode` does not require it.
- Don't panic when `INFLUXDB3_PLUGIN_DIR` is set in containers without the Processing Engine enabled.
## v3.3.0 {date="2025-07-29"}
### Core
@ -226,7 +257,7 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat
## v3.2.0 {date="2025-06-25"}
**Core**: revision 1ca3168bee
**Enterprise**: revision 1ca3168bee
### Core
@ -259,7 +290,7 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat
#### Features
- **License management improvements**:
- New `influxdb3 show license` command to display current license information
- **Table-level retention period support**: Add retention period support for individual tables in addition to database-level retention, providing granular data lifecycle management
- New CLI commands: `create table --retention-period` and `update table --retention-period`
@ -276,6 +307,7 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat
- **License handling**: Trim whitespace from license file contents after reading to prevent validation issues
## v3.1.0 {date="2025-05-29"}
**Core**: revision 482dd8aac580c04f37e8713a8fffae89ae8bc264
**Enterprise**: revision 2cb23cf32b67f9f0d0803e31b356813a1a151b00
@ -283,6 +315,7 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat
### Core
#### Token and Security Updates
- Named admin tokens can now be created, with configurable expirations
- `health`, `ping`, and `metrics` endpoints can now be opted out of authorization
- `Basic $TOKEN` is now supported for all APIs
@ -290,6 +323,7 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat
- Additional info available when starting InfluxDB using `--without-auth`
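For example, a hedged sketch of the two accepted header schemes (host, database, and token are placeholders):

```bash
# Bearer scheme
curl "http://localhost:8181/api/v3/query/sql?db=example_db&q=SELECT%201" \
  --header "Authorization: Bearer $TOKEN"

# Basic scheme, now accepted by all APIs
curl "http://localhost:8181/api/v3/query/sql?db=example_db&q=SELECT%201" \
  --header "Authorization: Basic $TOKEN"
```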
#### Additional Updates
- New catalog metrics available for count operations
- New object store metrics available for transfer latencies and transfer sizes
- New query duration metrics available for Last Value caches
@ -297,6 +331,7 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat
- Other performance improvements
#### Fixes
- New tags are now backfilled with NULL instead of empty strings
- Bitcode deserialization error fixed
- Series key metadata not persisting to Parquet is now fixed
@ -305,24 +340,28 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat
### Enterprise
#### Token and Security Updates
- Resource tokens now use resource names in `show tokens`
- Tokens can now be granted `CREATE` permission for creating databases
#### Additional Updates
- Last value caches reload on restart
- Distinct value caches reload on restart
- Other performance improvements
- Replaces remaining "INFLUXDB_IOX" Dockerfile environment variables with the following:
- Replaces remaining "INFLUXDB\_IOX" Dockerfile environment variables with the following:
- `ENV INFLUXDB3_OBJECT_STORE=file`
- `ENV INFLUXDB3_DB_DIR=/var/lib/influxdb3`
#### Fixes
- Improvements and fixes for license validations
- False positive fixed for catalog error on shutdown
- UX improvements for error and onboarding messages
- Other general fixes and corrections
## v3.0.3 {date="2025-05-16"}
**Core**: revision 384c457ef5f0d5ca4981b22855e411d8cac2688e
**Enterprise**: revision 34f4d28295132b9efafebf654e9f6decd1a13caf
@ -331,20 +370,19 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat
#### Fixes
- Prevent operator token, `_admin`, from being deleted.
### Enterprise
#### Fixes
- Fix object store info digest that is output during onboarding.
- Fix issues with false positive catalog error on shutdown.
- Fix licensing validation issues.
- Other fixes and performance improvements.
## v3.0.2 {date="2025-05-01"}
**Core**: revision d80d6cd60049c7b266794a48c97b1b6438ac5da9
**Enterprise**: revision e9d7e03c2290d0c3e44d26e3eeb60aaf12099f29
@ -353,39 +391,40 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat
#### Security updates
- Generate testing TLS certificates on the fly.
- Set the TLS CA via the `INFLUXDB3_TLS_CA` environment variable.
- Enforce a minimum TLS version for enhanced security.
- Allow CORS requests from browsers.
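For example, a minimal sketch using the new environment variable (the certificate path is a placeholder and other `serve` options are omitted):

```bash
# Point the server at a custom CA bundle before starting it
export INFLUXDB3_TLS_CA=/etc/ssl/certs/example-ca.pem
influxdb3 serve ...
```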
#### General updates
- Support the `--format json` option in the token creation output.
- Remove the Last Values Cache size limitation to improve performance and flexibility.
- Incorporate additional performance improvements.
#### Fixes
- Fix a counting bug in the distinct cache.
- Fix how the distinct cache handles rows with null values.
- Fix handling of `group by` tag columns that use escape quotes.
- Sort the IOx table schema consistently in the `SHOW TABLES` command.
### Enterprise
#### Updates
- Introduce a command and system table to list cluster nodes.
- Support multiple custom permission argument matches.
- Improve overall performance.
#### Fixes
- Initialize the object store only once.
- Prevent the Home license server from crashing on restart.
- Enforce the `--num-cores` thread allocation limit.
## v3.0.1 {date="2025-04-16"}
**Core**: revision d7c071e0c4959beebc7a1a433daf8916abd51214
**Enterprise**: revision 96e4aad870b44709e149160d523b4319ea91b54c
@ -393,15 +432,18 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat
### Core
#### Updates
- TLS CA can now be set with an environment variable: `INFLUXDB3_TLS_CA`
- Other general performance improvements
#### Fixes
- The `--tags` argument is now optional when creating a table; if specified, it must include at least one tag
### Enterprise
#### Updates
- Catalog limits for databases, tables, and columns are now configurable using `influxdb3 serve` options:
- `--num-database-limit`
- `--num-table-limit`
@ -410,8 +452,8 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat
- Other general performance improvements
#### Fixes
- Fixed **Home** license thread count log errors
## v3.0.0 {date="2025-04-14"}
@ -440,50 +482,59 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat
- You can now use Commercial, Trial, and At-Home licenses.
## v3.0.0-0.beta.3 {date="2025-04-01"}
**Core**: revision f881c5844bec93a85242f26357a1ef3ebf419dd3
**Enterprise**: revision 6bef9e700a59c0973b0cefdc6baf11583933e262
### Core
#### General Improvements
- InfluxDB 3 now supports graceful shutdowns when sending the interrupt signal to the service.
#### Bug fixes
- Empty batches in JSON format results are now handled properly
- The Processing Engine now properly extracts data from DictionaryArrays
### Enterprise
#### Multi-node improvements
- Query nodes now automatically detect new ingest nodes
#### Bug fixes
- Several fixes for compaction planning and processing
- The Processing Engine now properly extracts data from DictionaryArrays
## v3.0.0-0.beta.2 {date="2025-03-24"}
**Core**: revision 033e1176d8c322b763b4aefb24686121b1b24f7c
**Enterprise**: revision e530fcd498c593cffec2b56d4f5194afc717d898
This update brings several backend performance improvements to both Core and Enterprise in preparation for additional new features over the next several weeks.
## v3.0.0-0.beta.1 {date="2025-03-17"}
### Core
#### Features
##### Query and storage enhancements
- New ability to stream response data for CSV and JSON queries, similar to how JSONL streaming works
- Parquet files are now cached on the query path, improving performance
- Query buffer is incrementally cleared when snapshotting, lowering memory spikes
##### Processing engine improvements
- New Trigger Types:
- *Scheduled*: Run Python plugins on a custom, time-defined basis
- *Request*: Call Python plugins via HTTP requests
- New in-memory cache for storing data temporarily; cached data can be stored for a single trigger or across all triggers
- Integration with virtual environments and package installation:
- Specify Python virtual environment via CLI or `VIRTUAL_ENV` variable
@ -493,11 +544,13 @@ This update brings several backend performance improvements to both Core and Ent
- Write to logs from within the Processing Engine
##### Database and CLI improvements
- You can now specify the precision on your timestamps for writes using the `--precision` flag. Includes nano/micro/milli/seconds (ns/us/ms/s)
- Added a new `show` system subcommand to display system tables with different options via SQL (default limit: 100)
- Clearer table creation error messages
##### Bug fixes
- If a database was created and the service was killed before any data was written, the database would not be retained
- A last cache with specific "value" columns could not be queried
- Pressing CTRL-C failed to stop an InfluxDB process due to a Python trigger
@ -508,14 +561,15 @@ This update brings several backend performance improvements to both Core and Ent
For Core and Enterprise, there are parameter changes for simplicity:
| Old Parameter | New Parameter |
| ---------------------------- | ------------- |
| `--writer-id`<br>`--host-id` | `--node-id` |
### Enterprise features
#### Cluster management
- Nodes are now associated with *clusters*, simplifying compaction, read replication, and processing
- Node specs are now available for simpler management of cache creations
#### Mode types
@ -526,9 +580,9 @@ For Core and Enterprise, there are parameter changes for simplicity:
For Enterprise, additional parameters for the `serve` command have been consolidated for simplicity:
| Old Parameter | New Parameter |
| --------------------------------------------------- | ------------------------------------ |
| `--read-from-node-ids`<br>`--compact-from-node-ids` | `--cluster-id` |
| `--run-compactions`<br>`--mode=compactor` | `--mode=compact`<br>`--mode=compact` |
In addition to the above changes, `--cluster-id` is now a required parameter for all new instances.

View File

@ -40,29 +40,23 @@
# - [The plan for InfluxDB 3.0 Open Source](https://influxdata.com/blog/the-plan-for-influxdb-3-0-open-source)
# - [InfluxDB 3.0 benchmarks](https://influxdata.com/blog/influxdb-3-0-is-2.5x-45x-faster-compared-to-influxdb-open-source/)
- id: influxdb3.6-explorer-1.4
level: note
scope:
- /
title: New in InfluxDB 3.6
slug: |
Key enhancements in InfluxDB 3.6 and the InfluxDB 3 Explorer 1.4.
<a class="btn" href="https://www.influxdata.com/blog/influxdb-3-5/">See the Blog Post</a>
<a class="btn" href="https://www.influxdata.com/blog/influxdb-3-6/">See the Blog Post</a>
message: |
InfluxDB 3.6 is now available for both Core and Enterprise. This release introduces
the 1.4 update to InfluxDB 3 Explorer, featuring the beta launch of Ask AI, along
with new capabilities for simple startup and expanded functionality in the Processing Engine.
For more information, check out:
- [See the announcement blog post](https://www.influxdata.com/blog/influxdb-3-6/)
- [InfluxDB 3 Core release notes](/influxdb3/core/release-notes/)
- [InfluxDB 3 Enterprise release notes](/influxdb3/enterprise/release-notes/)
- [Get Started with InfluxDB 3 Explorer](/influxdb3/explorer/get-started/)

View File

@ -6,7 +6,7 @@ influxdb3_core:
versions: [core]
list_order: 2
latest: core
latest_patch: 3.6.0
placeholder_host: localhost:8181
detector_config:
query_languages:
@ -35,7 +35,7 @@ influxdb3_enterprise:
versions: [enterprise]
list_order: 2
latest: enterprise
latest_patch: 3.6.0
placeholder_host: localhost:8181
detector_config:
query_languages:
@ -63,7 +63,7 @@ influxdb3_explorer:
menu_category: tools
list_order: 1
latest: explorer
latest_patch: 1.4.0
placeholder_host: localhost:8888
ai_sample_questions:
- How do I query data using InfluxDB 3 Explorer?

View File

@ -22,26 +22,6 @@ pre-commit:
docker compose run --rm --name remark-lint remark-lint $files --output --quiet || \
{ echo "⚠️ Remark found formatting issues in instruction files. Automatic formatting applied."; }
stage_fixed: true
# Report markdown formatting issues in content/api-docs without auto-fixing
lint-markdown-content:
tags: lint
glob: "{api-docs/**/*.md,content/**/*.md}"
run: |
# Prepend /workdir/ to staged files since repository is mounted at /workdir in container
files=$(echo '{staged_files}' | sed 's|^|/workdir/|g; s| | /workdir/|g')
# Run remark to check for formatting differences (without --output, shows diff in stdout)
# If output differs from input, fail the commit
for file in $files; do
original=$(cat "${file#/workdir/}")
formatted=$(docker compose run --rm --name remark-lint-content remark-lint "$file" 2>/dev/null | tail -n +2)
if [ "$original" != "$formatted" ]; then
echo "❌ Markdown formatting issues in ${file#/workdir/}"
echo " Run: docker compose run --rm remark-lint $file --output"
echo " Or manually fix the formatting to match remark style"
exit 1
fi
done
echo "✅ All content files are properly formatted"
# Lint instruction and repository documentation files with generic Vale config
lint-instructions:
tags: lint