pull/6505/merge
Jason Stirnaman 2025-11-01 17:28:34 -05:00 committed by GitHub
commit cd894bfa82
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
31 changed files with 2860 additions and 1111 deletions


@@ -19,7 +19,7 @@ Complete reference for custom Hugo shortcodes used in InfluxData documentation.
- [Content Management](#content-management)
- [Special Purpose](#special-purpose)
---
## Notes and Warnings
@@ -146,7 +146,7 @@ Use the `{{< api-endpoint >}}` shortcode to generate a code block that contains
- **method**: HTTP request method (get, post, patch, put, or delete)
- **endpoint**: API endpoint
- **api-ref**: Link the endpoint to a specific place in the API documentation
- **influxdb_host**: Specify which InfluxDB product host to use _if the `endpoint` contains the `influxdb/host` shortcode_. Uses the current InfluxDB product as default. Supports the following product values:
  - oss
  - cloud
  - serverless
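For example, a sketch of typical usage (the `api-ref` path is illustrative; substitute the actual API reference anchor for your product):

```md
{{< api-endpoint method="post" endpoint="https://{{< influxdb/host >}}/api/v2/write" api-ref="/influxdb/v2/api/#operation/PostWrite" >}}
```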
@@ -268,11 +268,11 @@ To link to tabbed content, click on the tab and use the URL parameter shown. It
Use the `{{< page-nav >}}` shortcode to add page navigation buttons to a page. These are useful for guiding users through a set of docs that should be read in sequential order. The shortcode has the following parameters:
- **prev:** path of the previous document _(optional)_
- **next:** path of the next document _(optional)_
- **prevText:** override the button text linking to the previous document _(optional)_
- **nextText:** override the button text linking to the next document _(optional)_
- **keepTab:** include the currently selected tab in the button link _(optional)_
The shortcode generates buttons that link to both the previous and next documents. By default, the shortcode uses either the `list_title` or the `title` of the linked document, but you can use `prevText` and `nextText` to override button text.
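For example (the paths and button text here are illustrative):

```md
{{< page-nav
  prev="/influxdb3/core/get-started/setup/"
  next="/influxdb3/core/get-started/write/"
  nextText="Write data"
>}}
```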
@@ -308,7 +308,7 @@ The children shortcode can also be used to list only "section" articles (those w
{{< children show="pages" >}}
```
_By default, it displays both sections and pages._
Use the `type` argument to specify the format of the children list.
@@ -325,7 +325,7 @@ The following list types are available:
#### Include a "Read more" link
To include a "Read more" link with each child summary, set `readmore=true`. _Only the `articles` list type supports "Read more" links._
```md
{{< children readmore=true >}}
@@ -333,7 +333,7 @@ To include a "Read more" link with each child summary, set `readmore=true`. _Onl
#### Include a horizontal rule
To include a horizontal rule after each child summary, set `hr=true`. _Only the `articles` list type supports horizontal rules._
```md
{{< children hr=true >}}
@@ -390,11 +390,11 @@ This is useful for maintaining and referencing sample code variants in their nat
#### Include specific files from the same directory
> [!Caution]
> **Don't use for code examples**
> Using this and `get-shared-text` shortcodes to include code examples prevents the code from being tested.
To include the text from one file in another file in the same directory, use the `{{< get-leaf-text >}}` shortcode. The directory that contains both files must be a Hugo [_Leaf Bundle_](https://gohugo.io/content-management/page-bundles/#leaf-bundles), a directory that doesn't have any child directories.
In the following example, `api` is a leaf bundle. `content` isn't.
@@ -695,7 +695,7 @@ Column 2
The following options are available:
- half _(Default)_
- third
- quarter
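For example, a sketch of two one-third-width columns (assuming the `flex` and `flex-content` shortcodes used in this section's earlier example):

```md
{{< flex >}}
{{% flex-content "third" %}}
Column 1
{{% /flex-content %}}
{{% flex-content "third" %}}
Column 2
{{% /flex-content %}}
{{< /flex >}}
```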
@@ -721,10 +721,10 @@ Click {{< caps >}}Add Data{{< /caps >}}
### Authentication token link
Use the `{{% token-link "<descriptor>" "<link_append>" %}}` shortcode to automatically generate links to token management documentation. The shortcode accepts two _optional_ arguments:
- **descriptor**: An optional token descriptor
- **link_append**: An optional path to append to the token management link path, `/<product>/<version>/admin/tokens/`.
```md
{{% token-link "database" "resource/" %}}
@@ -775,7 +775,7 @@ Descriptions should follow consistent patterns:
   - Recommended: "your {{% token-link "database" %}}"{{% show-in "enterprise" %}} with permissions on the specified database{{% /show-in %}}
   - Avoid: "your token", "the token", "an authorization token"
3. **Database names**:
   - Recommended: "the name of the database to [action]"
   - Avoid: "your database", "the database name"
4. **Conditional content**:
   - Use `{{% show-in "enterprise" %}}` for content specific to enterprise versions
@@ -801,9 +801,71 @@ Descriptions should follow consistent patterns:
- `{{% code-placeholder-key %}}`: Use this shortcode to define a placeholder key
- `{{% /code-placeholder-key %}}`: Use this shortcode to close the key name
_The `placeholders` attribute supersedes the deprecated `code-placeholders` shortcode._
#### Automated placeholder syntax
Use the `docs placeholders` command to automatically add placeholder syntax to code blocks and descriptions:
```bash
# Process a file
npx docs placeholders content/influxdb3/core/admin/upgrade.md
# Preview changes without modifying the file
npx docs placeholders content/influxdb3/core/admin/upgrade.md --dry
# Get help
npx docs placeholders --help
```
**What it does:**
1. Detects UPPERCASE placeholders in code blocks
2. Adds `{ placeholders="..." }` attribute to code fences
3. Wraps placeholder descriptions with `{{% code-placeholder-key %}}` shortcodes
**Example transformation:**
Before:
````markdown
```bash
influxdb3 query \
--database SYSTEM_DATABASE \
--token ADMIN_TOKEN \
"SELECT * FROM system.version"
```
Replace the following:
- **`SYSTEM_DATABASE`**: The name of your system database
- **`ADMIN_TOKEN`**: An admin token with read permissions
````
After:
````markdown
```bash { placeholders="ADMIN_TOKEN|SYSTEM_DATABASE" }
influxdb3 query \
--database SYSTEM_DATABASE \
--token ADMIN_TOKEN \
"SELECT * FROM system.version"
```
Replace the following:
- {{% code-placeholder-key %}}`SYSTEM_DATABASE`{{% /code-placeholder-key %}}: The name of your system database
- {{% code-placeholder-key %}}`ADMIN_TOKEN`{{% /code-placeholder-key %}}: An admin token with read permissions
````
**How it works:**
- Pattern: Matches words with 2+ characters, all uppercase, can include underscores
- Excludes common words: HTTP verbs (GET, POST), protocols (HTTP, HTTPS), SQL keywords (SELECT, FROM), etc.
- Idempotent: Running multiple times won't duplicate syntax
- Preserves existing `placeholders` attributes and already-wrapped descriptions
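The matching rules above can be approximated with a short Python sketch (a simplified illustration, not the actual `docs placeholders` implementation; the exclusion list here is only a subset):

```python
import re

# Words of 2+ characters: an uppercase letter followed by
# uppercase letters, digits, or underscores (simplified sketch).
PLACEHOLDER_PATTERN = re.compile(r"\b[A-Z][A-Z0-9_]+\b")

# Common words to ignore -- illustrative subset of the real exclusion list.
EXCLUDED = {"GET", "POST", "PATCH", "PUT", "DELETE",
            "HTTP", "HTTPS", "SELECT", "FROM", "WHERE"}

def find_placeholders(code: str) -> list[str]:
    """Return sorted, de-duplicated placeholder candidates in a code block."""
    candidates = set(PLACEHOLDER_PATTERN.findall(code))
    return sorted(candidates - EXCLUDED)

example = 'influxdb3 query --database SYSTEM_DATABASE --token ADMIN_TOKEN "SELECT * FROM system.version"'
print("|".join(find_placeholders(example)))  # ADMIN_TOKEN|SYSTEM_DATABASE
```

Joining the candidates with `|` yields the value for the `{ placeholders="..." }` code-fence attribute.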
#### Manual placeholder usage
```sh { placeholders="DATABASE_NAME|USERNAME|PASSWORD_OR_TOKEN|API_TOKEN|exampleuser@influxdata.com" }
curl --request POST http://localhost:8086/write?db=DATABASE_NAME \
@@ -839,7 +901,7 @@ Sample dataset to output. Use either `set` argument name or provide the set as t
#### includeNull
Specify whether or not to include _null_ values in the dataset. Use either `includeNull` argument name or provide the boolean value as the second argument.
#### includeRange
@@ -1115,6 +1177,6 @@ The InfluxDB host placeholder that gets replaced by custom domains differs betwe
{{< influxdb/host "serverless" >}}
```
---
**For working examples**: Test all shortcodes in [content/example.md](content/example.md)


@@ -2,9 +2,9 @@
<img src="/static/img/influx-logo-cubo-dark.png" width="200">
</p>
# InfluxData Product Documentation
This repository contains the InfluxData product documentation for InfluxDB and related tooling published at [docs.influxdata.com](https://docs.influxdata.com).
## Contributing
@@ -15,6 +15,26 @@ For information about contributing to the InfluxData documentation, see [Contrib
For information about testing the documentation, including code block testing, link validation, and style linting, see [Testing guide](DOCS-TESTING.md).
## Documentation Tools
This repository includes a `docs` CLI tool for common documentation workflows:
```sh
# Create new documentation from a draft
npx docs create drafts/new-feature.md --products influxdb3_core
# Edit existing documentation from a URL
npx docs edit https://docs.influxdata.com/influxdb3/core/admin/
# Add placeholder syntax to code blocks
npx docs placeholders content/influxdb3/core/admin/upgrade.md
# Get help
npx docs --help
```
The `docs` command is automatically configured when you run `yarn install`.
## Documentation
Comprehensive reference documentation for contributors:
@@ -27,6 +47,7 @@ Comprehensive reference documentation for contributors:
- **[API Documentation](api-docs/README.md)** - API reference generation
### Quick Links
- [Style guidelines](DOCS-CONTRIBUTING.md#style-guidelines)
- [Commit guidelines](DOCS-CONTRIBUTING.md#commit-guidelines)
- [Code block testing](DOCS-TESTING.md#code-block-testing)
@@ -35,9 +56,9 @@ Comprehensive reference documentation for contributors:
InfluxData takes security and our users' trust very seriously.
If you believe you have found a security issue in any of our open source projects,
please responsibly disclose it by contacting <security@influxdata.com>.
More details about security vulnerability reporting,
including our GPG key, can be found at <https://www.influxdata.com/how-to-report-security-vulnerabilities/>.
## Running the docs locally
@@ -58,7 +79,13 @@ including our GPG key, can be found at https://www.influxdata.com/how-to-report-
yarn install
```
_**Note:** The most recent version of Hugo tested with this documentation is **0.149.0**._
After installation, the `docs` command will be available via `npx`:
```sh
npx docs --help
```
3. To generate the API docs, see [api-docs/README.md](api-docs/README.md).
@@ -71,6 +98,7 @@ including our GPG key, can be found at https://www.influxdata.com/how-to-report-
```sh
npx hugo server
```
5. View the docs at [localhost:1313](http://localhost:1313).
### Alternative: Use docker compose
@@ -84,4 +112,5 @@ including our GPG key, can be found at https://www.influxdata.com/how-to-report-
```sh
docker compose up local-dev
```
4. View the docs at [localhost:1313](http://localhost:1313).

content/create.md Normal file

@@ -0,0 +1,210 @@
---
title: Create and edit InfluxData docs
description: Learn how to create and edit InfluxData documentation.
tags: [documentation, guide, influxdata]
test_only: true
---
Learn how to create and edit InfluxData documentation.
- [Submit an issue to request new or updated documentation](#submit-an-issue-to-request-new-or-updated-documentation)
- [Edit an existing page in your browser](#edit-an-existing-page-in-your-browser)
- [Create and edit locally with the docs-v2 repository](#create-and-edit-locally-with-the-docs-v2-repository)
- [Other resources](#other-resources)
## Submit an issue to request new or updated documentation
- **Public**: <https://github.com/influxdata/docs-v2/issues/>
- **Private**: <https://github.com/influxdata/DAR/issues/>
## Edit an existing page in your browser
**Example**: Editing a product-specific page
1. Visit <https://docs.influxdata.com> public docs
2. Search, Ask AI, or navigate to find the page to edit--for example, <https://docs.influxdata.com/influxdb3/cloud-serverless/get-started/>
3. Click the "Edit this page" link at the bottom of the page.
This opens the GitHub repository to the file that generates the page.
4. Click the pencil icon to edit the file in your browser
5. [Commit and create a pull request](#commit-and-create-a-pull-request)
## Create and edit locally with the docs-v2 repository
Use `docs` scripts with AI agents to help you create and edit documentation locally, especially when working with shared content for multiple products.
**Prerequisites**:
1. [Clone or fork the docs-v2 repository](https://github.com/influxdata/docs-v2/):
```bash
git clone https://github.com/influxdata/docs-v2.git
cd docs-v2
```
2. [Install Yarn](https://yarnpkg.com/getting-started/install)
3. Run `yarn` in the repository root to install dependencies
4. Optional: [Set up GitHub CLI](https://cli.github.com/manual/)
> [!Tip]
> To run and test your changes locally, enter the following command in your terminal:
>
> ```bash
> yarn hugo server
> ```
>
> *To refresh shared content after making changes, `touch` or edit the frontmatter file, or stop the server (Ctrl+C) and restart it.*
>
> To list all available scripts, run:
>
> ```bash
> yarn run
> ```
### Edit an existing page locally
Use the `npx docs edit` command to open an existing page in your editor.
```bash
npx docs edit https://docs.influxdata.com/influxdb3/enterprise/get-started/
```
### Create content locally
Use the `npx docs create` command with your AI agent tool to scaffold frontmatter and generate new content.
- The `npx docs create` command accepts draft input from stdin or from a file path and generates a prompt file from the draft and your product selections
- The prompt file makes AI agents aware of InfluxData docs guidelines, shared content, and product-specific requirements
- `npx docs create` is designed to work automatically with `claude`, but you can
use the generated prompt file with any AI agent (for example, `copilot` or `codex`)
> [!Tip]
>
> `docs-v2` contains custom configuration for agents like Claude and Copilot Agent mode.
<!-- Coming soon: generate content from an issue with labels -->
#### Generate content and frontmatter from a draft
{{% tabs-wrapper %}}
{{% tabs %}}
[Interactive (Claude Code)](#)
[Non-interactive (any agent)](#)
{{% /tabs %}}
{{% tab-content %}}
1. Open a Claude Code prompt:
```bash
claude code
```
2. In the prompt, run the `docs create` command with the path to your draft file.
Optionally, include the `--products` flag and product namespaces to preselect products--for example:
```bash
npx docs create .context/drafts/"Upgrading Enterprise 3 (draft).md" \
--products influxdb3_enterprise,influxdb3_core
```
If you don't include the `--products` flag, you'll be prompted to select products after running the command.
The script first generates a prompt file, then the agent automatically uses it to generate content and frontmatter based on the draft and the products you select.
{{% /tab-content %}}
{{% tab-content %}}
Use `npx docs create` to generate a prompt file and then pipe it to your preferred AI agent.
Include the `--products` flag and product namespaces to preselect products.
The following example uses Copilot to process a draft file:
```bash
npx docs create .context/drafts/"Upgrading Enterprise 3 (draft).md" \
--products "influxdb3_enterprise,influxdb3_core" | \
copilot --prompt --allow-all-tools
```
{{% /tab-content %}}
{{< /tabs-wrapper >}}
## Review, commit, and create a pull request
After you create or edit content, test and review your changes, and then create a pull request.
> [!Important]
>
> #### Check AI-generated content
>
> Always review and validate AI-generated content for accuracy.
> Make sure example commands are correct for the version you're documenting.
### Test and review your changes
Run a local Hugo server to preview your changes:
```bash
yarn hugo server
```
Visit <http://localhost:1313> to review your changes in the browser.
> [!Note]
> If you need to preview changes in a live production-like environment
> that you can also share with others,
> the Docs team can deploy your branch to the staging site.
### Commit and create a pull request
1. Commit your changes to a new branch
2. Fix any issues found by automated checks
3. Push the branch to your fork or to the docs-v2 repository
```bash
git add content
git commit -m "feat(product): Your commit message"
git push origin your-branch-name
```
### Create a pull request
1. Create a pull request against the `master` branch of the docs-v2 repository
2. Add reviewers:
- `@influxdata/docs-team`
- team members familiar with the product area
- Optionally, assign Copilot to review
3. After approval and automated checks are successful, merge the pull request (if you have permissions) or wait for the docs team to merge it.
{{< tabs-wrapper >}}
{{% tabs %}}
[GitHub](#)
[gh CLI](#)
{{% /tabs %}}
{{% tab-content %}}
1. Visit [influxdata/docs-v2 pull requests on GitHub](https://github.com/influxdata/docs-v2/pulls)
2. Optional: edit PR title and description
3. Optional: set to draft if it needs more work
4. When ready for review, assign `@influxdata/docs-team` and other reviewers
{{% /tab-content %}}
{{% tab-content %}}
```bash
gh pr create \
--base master \
--head your-branch-name \
--title "Your PR title" \
--body "Your PR description" \
--reviewer influxdata/docs-team,<other-reviewers>
```
{{% /tab-content %}}
{{< /tabs-wrapper >}}
## Other resources
- `DOCS-*.md`: Documentation standards and guidelines
- <http://localhost:1313/example/>: View shortcode examples
- <https://app.kapa.ai>: Review content gaps identified from Ask AI answers


@@ -29,7 +29,7 @@ a `values.yaml` on your local machine.
This file grants access to the collection of container images required to
install InfluxDB Clustered.
---
## Configuration data
@@ -40,21 +40,22 @@ available:
API endpoints
- **PostgreSQL-style data source name (DSN)**: used to access your
  PostgreSQL-compatible database that stores the InfluxDB Catalog.
- **Object store credentials** _(AWS S3 or S3-compatible)_
  - Endpoint URL
  - Access key
  - Bucket name
  - Region (required for S3, may not be required for other object stores)
- **Local storage information** _(for ingester pods)_
  - Storage class
  - Storage size
InfluxDB is deployed to a Kubernetes namespace which, throughout the following
installation procedure, is referred to as the _target_ namespace.
For simplicity, we assume this namespace is `influxdb`; however,
you may use any name you like.
> [!Note]
>
> #### Set namespaceOverride if using a namespace other than influxdb
>
> If you use a namespace name other than `influxdb`, update the `namespaceOverride`
@@ -85,7 +86,7 @@ which simplifies the installation and management of the InfluxDB Clustered packa
It manages the application of the jsonnet templates used to install, manage, and
update an InfluxDB cluster.
> [!Note]
> If you already installed the `kubecfg kubit` operator separately when
> [setting up prerequisites](/influxdb3/clustered/install/set-up-cluster/prerequisites/#install-the-kubecfg-kubit-operator)
> for your cluster, in your `values.yaml`, set `skipOperator` to `true`.
@@ -140,7 +141,7 @@ to create a container registry secret file.
2. Use the following command to create a container registry secret file and
   retrieve the necessary secrets:
{{% code-placeholders "PACKAGE_VERSION" %}}
```sh
mkdir /tmp/influxdbsecret
@@ -152,12 +153,12 @@ DOCKER_CONFIG=/tmp/influxdbsecret \
{{% /code-placeholders %}}
---
Replace {{% code-placeholder-key %}}`PACKAGE_VERSION`{{% /code-placeholder-key %}}
with your InfluxDB Clustered package version.
---
If your Docker configuration is valid and you're able to connect to the container
registry, the command succeeds and the output is the JSON manifest for the Docker
@@ -206,6 +207,7 @@ Error: fetching manifest us-docker.pkg.dev/influxdb2-artifacts/clustered/influxd
{{% /tabs %}}
{{% tab-content %}}
<!--------------------------- BEGIN Public Registry --------------------------->
#### Public registry
@@ -229,8 +231,10 @@ If you change the name of this secret, you must also change the value of the
`imagePullSecrets.name` field in your `values.yaml`.
<!---------------------------- END Public Registry ---------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!--------------------------- BEGIN Private Registry -------------------------->
#### Private registry (air-gapped)
@@ -297,7 +301,8 @@ cat /tmp/kubit-images.txt | xargs -I% crane cp % YOUR_PRIVATE_REGISTRY/%
Configure your `values.yaml` to use your private registry:
{{% code-placeholders "REGISTRY_HOSTNAME" %}}
```yaml
# Configure registry override for all images
images:
@@ -315,6 +320,7 @@ kubit:
imagePullSecrets:
  - name: your-registry-pull-secret
```
{{% /code-placeholders %}}
Replace {{% code-placeholder-key %}}`REGISTRY_HOSTNAME`{{% /code-placeholder-key %}} with your private registry hostname.
@@ -344,11 +350,11 @@ To configure ingress, provide values for the following fields in your
Provide the hostnames that Kubernetes should use to expose the InfluxDB API
endpoints--for example: `{{< influxdb/host >}}`.
_You can provide multiple hostnames. The ingress layer accepts incoming
requests for all listed hostnames. This can be useful if you want to have
distinct paths for your internal and external traffic._
> [!Note]
> You are responsible for configuring and managing DNS. Options include:
>
> - Manually managing DNS records
@@ -360,16 +366,16 @@ To configure ingress, provide values for the following fields in your
(Optional): Provide the name of the secret that contains your TLS certificate
and key. The examples in this guide use the name `ingress-tls`.
_The `tlsSecretName` field is optional. You may want to use it if you already
have a TLS certificate for your DNS name._
> [!Note]
> Writing to and querying data from InfluxDB does not require TLS.
> For simplicity, you can wait to enable TLS before moving into production.
> For more information, see Phase 4 of the InfluxDB Clustered installation
> process, [Secure your cluster](/influxdb3/clustered/install/secure-cluster/).
{{% code-callout "ingress-tls|cluster-host\.com" "green" %}}
```yaml
ingress:
@@ -404,14 +410,14 @@ following fields in your `values.yaml`:
- `bucket`: Object storage bucket name
- `s3`:
  - `endpoint`: Object storage endpoint URL
  - `allowHttp`: _Set to `true` to allow unencrypted HTTP connections_
  - `accessKey.value`: Object storage access key
    _(can use a `value` literal or `valueFrom` to retrieve the value from a secret)_
  - `secretKey.value`: Object storage secret key
    _(can use a `value` literal or `valueFrom` to retrieve the value from a secret)_
  - `region`: Object storage region
{{% code-placeholders "S3_(URL|ACCESS_KEY|SECRET_KEY|BUCKET_NAME|REGION)" %}}
```yml
objectStore:
@@ -441,7 +447,7 @@ objectStore:
{{% /code-placeholders %}}
---
Replace the following:
@@ -451,7 +457,7 @@ Replace the following:
- {{% code-placeholder-key %}}`S3_SECRET_KEY`{{% /code-placeholder-key %}}: Object storage secret key
- {{% code-placeholder-key %}}`S3_REGION`{{% /code-placeholder-key %}}: Object storage region
---
<!----------------------------------- END S3 ---------------------------------->
@@ -467,11 +473,11 @@ following fields in your `values.yaml`:

- `bucket`: Azure Blob Storage bucket name
- `azure`:
  - `accessKey.value`: Azure Blob Storage access key
    _(can use a `value` literal or `valueFrom` to retrieve the value from a secret)_
  - `account.value`: Azure Blob Storage account ID
    _(can use a `value` literal or `valueFrom` to retrieve the value from a secret)_

{{% code-placeholders "AZURE_(BUCKET_NAME|ACCESS_KEY|STORAGE_ACCOUNT)" %}}

```yml
objectStore:
@@ -492,7 +498,7 @@ objectStore:
{{% /code-placeholders %}}

---

Replace the following:

@@ -500,7 +506,7 @@ Replace the following:

- {{% code-placeholder-key %}}`AZURE_ACCESS_KEY`{{% /code-placeholder-key %}}: Azure Blob Storage access key
- {{% code-placeholder-key %}}`AZURE_STORAGE_ACCOUNT`{{% /code-placeholder-key %}}: Azure Blob Storage account ID

---

<!--------------------------------- END AZURE --------------------------------->
@@ -520,7 +526,7 @@ following fields in your `values.yaml`:

- `serviceAccountSecret.key`: the key inside of your Google IAM secret that
  contains your Google IAM account credentials

{{% code-placeholders "GOOGLE_(BUCKET_NAME|IAM_SECRET|CREDENTIALS_KEY)" %}}

```yml
objectStore:
@@ -540,7 +546,7 @@ objectStore:
{{% /code-placeholders %}}

---

Replace the following:

@@ -553,7 +559,7 @@ Replace the following:

  the key inside of your Google IAM secret that contains your Google IAM account
  credentials

---
<!--------------------------------- END GOOGLE -------------------------------->
@@ -567,7 +573,7 @@ metadata about your time series data.

To connect your InfluxDB cluster to your PostgreSQL-compatible database,
provide values for the following fields in your `values.yaml`:

> [!Note]
> We recommend storing sensitive credentials, such as your PostgreSQL-compatible DSN,
> as secrets in your Kubernetes cluster.

@@ -575,7 +581,7 @@ provide values for the following fields in your `values.yaml`:

  - `SecretName`: Secret name
  - `SecretKey`: Key in the secret that contains the DSN

{{% code-placeholders "SECRET_(NAME|KEY)" %}}

```yml
catalog:
@@ -590,7 +596,7 @@ catalog:
{{% /code-placeholders %}}

---

Replace the following:

@@ -599,9 +605,9 @@ Replace the following:

- {{% code-placeholder-key %}}`SECRET_KEY`{{% /code-placeholder-key %}}:
  Key in the secret that references your PostgreSQL-compatible DSN

---
> [!Warning]
>
> ##### Percent-encode special symbols in PostgreSQL DSNs
>
@@ -620,27 +626,31 @@ Replace the following:
> For more information, see the [PostgreSQL Connection URI docs](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING-URIS).
>
> {{< expand-wrapper >}}
> {{% expand "View percent-encoded DSN example" %}}
>
> To use the following DSN containing special characters:
>
> {{% code-callout "#" %}}
>
> ```txt
> postgresql://postgres:meow#meow@my-fancy.cloud-database.party:5432/postgres
> ```
>
> {{% /code-callout %}}
>
> You must percent-encode the special characters in the connection string:
>
> {{% code-callout "%23" %}}
>
> ```txt
> postgresql://postgres:meow%23meow@my-fancy.cloud-database.party:5432/postgres
> ```
>
> {{% /code-callout %}}
>
> {{% /expand %}}
> {{< /expand-wrapper >}}
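If you assemble DSNs programmatically, the percent-encoding shown above can be produced with the Python standard library. This is an illustrative sketch, not part of the installation steps; it reuses the example credentials from the DSN above.

```python
from urllib.parse import quote

# Percent-encode only the password component; '#' becomes '%23'.
password = quote("meow#meow", safe="")
dsn = f"postgresql://postgres:{password}@my-fancy.cloud-database.party:5432/postgres"
print(dsn)
# postgresql://postgres:meow%23meow@my-fancy.cloud-database.party:5432/postgres
```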
> [!Note]
>
> ##### PostgreSQL instances without TLS or SSL
>
@@ -648,9 +658,11 @@ postgresql://postgres:meow%23meow@my-fancy.cloud-database.party:5432/postgres
> the `sslmode=disable` parameter in the DSN. For example:
>
> {{% code-callout "sslmode=disable" %}}
>
> ```
> postgres://username:passw0rd@mydomain:5432/influxdb?sslmode=disable
> ```
>
> {{% /code-callout %}}
#### Configure local storage for ingesters

@@ -665,7 +677,7 @@ following fields in your `values.yaml`:

  This differs based on the Kubernetes environment and desired storage characteristics.
- `storage`: Storage size. We recommend a minimum of 2 gibibytes (`2Gi`).

{{% code-placeholders "STORAGE_(CLASS|SIZE)" %}}

```yaml
ingesterStorage:
@@ -679,7 +691,7 @@ ingesterStorage:
{{% /code-placeholders %}}

---

Replace the following:

@@ -688,7 +700,7 @@ Replace the following:

- {{% code-placeholder-key %}}`STORAGE_SIZE`{{% /code-placeholder-key %}}:
  Storage size (example: `2Gi`)

---
### Deploy your cluster

@@ -774,6 +786,7 @@ helm upgrade influxdb ./influxdb3-clustered-X.Y.Z.tgz \
```

{{% note %}}

#### Understanding kubit's role in air-gapped environments

When deploying with Helm in an air-gapped environment:

@@ -801,3 +814,4 @@ This is why mirroring both the InfluxDB images and the kubit operator images is

Internal error occurred: failed to create pod sandbox: rpc error: code = Unknown
desc = failed to pull image "us-docker.pkg.dev/...": failed to pull and unpack image "...":
failed to resolve reference "...": failed to do request: ... i/o timeout
```
@@ -32,6 +32,7 @@ If you haven't already set up and configured your cluster, see how to

[install InfluxDB Clustered](/influxdb3/clustered/install/).

<!-- Hidden anchor for links to the kubectl/kubit/helm tabs -->
<span id="kubectl-kubit-helm"></span>

{{< tabs-wrapper >}}
@@ -41,7 +42,9 @@ If you haven't already
[helm](#)
{{% /tabs %}}

{{% tab-content %}}

<!------------------------------- BEGIN kubectl ------------------------------->

- [`kubectl` standard deployment (with internet access)](#kubectl-standard-deployment-with-internet-access)
- [`kubectl` air-gapped deployment](#kubectl-air-gapped-deployment)
@@ -56,21 +59,24 @@ kubectl apply \
  --namespace influxdb
```

> [!Note]
> Due to the additional complexity and maintenance requirements, using `kubectl apply` isn't
> recommended for air-gapped environments.
> Instead, consider using the [`kubit` CLI approach](#kubit-cli), which is specifically designed for air-gapped deployments.

<!-------------------------------- END kubectl -------------------------------->

{{% /tab-content %}}
{{% tab-content %}}

<!-------------------------------- BEGIN kubit CLI -------------------------------->

## Standard and air-gapped deployments

_This approach avoids the need for installing the kubit operator in the cluster,
making it ideal for air-gapped clusters._

> [!Important]
> For air-gapped deployment, ensure you have [configured access to a private registry for InfluxDB images](/influxdb3/clustered/install/set-up-cluster/configure-cluster/directly/#configure-access-to-the-influxDB-container-registry).

1. On a machine with internet access, download the [`kubit` CLI](https://github.com/kubecfg/kubit#cli-tool)--for example:
@@ -108,7 +114,9 @@ applies the resulting Kubernetes resources directly to your cluster.

{{% /tab-content %}}
{{% tab-content %}}

<!-------------------------------- BEGIN Helm --------------------------------->

- [Helm standard deployment (with internet access)](#helm-standard-deployment-with-internet-access)
- [Helm air-gapped deployment](#helm-air-gapped-deployment)

@@ -145,7 +153,7 @@ helm upgrade influxdb influxdata/influxdb3-clustered \

## Helm air-gapped deployment

> [!Important]
> For air-gapped deployment, ensure you have [configured access to a private registry for InfluxDB images and the kubit operator](/influxdb3/clustered/install/set-up-cluster/configure-cluster/use-helm/#configure-access-to-the-influxDB-container-registry).

1. On a machine with internet access, download the Helm chart:

@@ -188,7 +196,8 @@ helm upgrade influxdb ./influxdb3-clustered-X.Y.Z.tgz \
  --namespace influxdb
```
> [!Note]
>
> #### kubit's role in air-gapped environments
>
> When deploying with Helm in an air-gapped environment:

@@ -200,6 +209,7 @@ helm upgrade influxdb ./influxdb3-clustered-X.Y.Z.tgz \

> This is why you need to [mirror InfluxDB images and kubit operator images](/influxdb3/clustered/install/set-up-cluster/configure-cluster/use-helm/#mirror-influxdb-images) for air-gapped deployments.

<!--------------------------------- END Helm ---------------------------------->

{{% /tab-content %}}
{{< /tabs-wrapper >}}

@@ -208,7 +218,7 @@ helm upgrade influxdb ./influxdb3-clustered-X.Y.Z.tgz \

Kubernetes deployments take some time to complete. To check on the status of a
deployment, use the `kubectl get` command:

> [!Note]
> The following example uses the [`yq` command-line YAML parser](https://github.com/mikefarah/yq)
> to parse and format the YAML output.
> You can also specify the output as `json` and use the
@@ -21,7 +21,7 @@ InfluxDB Clustered requires the following prerequisite external dependencies:

- **Kubernetes ingress controller**
- **Object storage**: AWS S3 or S3-compatible storage (including Google Cloud Storage
  or Azure Blob Storage) to store the InfluxDB Parquet files.
- **PostgreSQL-compatible database** _(AWS Aurora, hosted PostgreSQL, etc.)_:
  Stores the [InfluxDB Catalog](/influxdb3/clustered/reference/internals/storage-engine/#catalog).
- **Local or attached storage**:
  Stores the Write-Ahead Log (WAL) for

@@ -45,7 +45,7 @@ cluster.

Follow instructions to install `kubectl` on your local machine:

> [!Note]
> InfluxDB Clustered Kubernetes deployments require `kubectl` 1.27 or higher.

- [Install kubectl on Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/)
@@ -97,10 +97,11 @@ following sizing for {{% product-name %}} components:
[On-Prem](#)
{{% /tabs %}}

{{% tab-content %}}

<!--------------------------------- BEGIN AWS --------------------------------->

- **Catalog store (PostgreSQL-compatible database) (x1):**
  - _[See below](#postgresql-compatible-database-requirements)_
- **Ingesters and Routers (x3):**
  - EC2 m6i.2xlarge (8 CPU, 32 GB RAM)
  - Local storage: minimum of 2 GB (high-speed SSD)

@@ -112,12 +113,14 @@ following sizing for {{% product-name %}} components:
  - EC2 t3.large (2 CPU, 8 GB RAM)

<!---------------------------------- END AWS ---------------------------------->

{{% /tab-content %}}
{{% tab-content %}}

<!--------------------------------- BEGIN GCP --------------------------------->

- **Catalog store (PostgreSQL-compatible database) (x1):**
  - _[See below](#postgresql-compatible-database-requirements)_
- **Ingesters and Routers (x3):**
  - GCE c2-standard-8 (8 CPU, 32 GB RAM)
  - Local storage: minimum of 2 GB (high-speed SSD)

@@ -129,25 +132,29 @@ following sizing for {{% product-name %}} components:
  - GCE c2d-standard-2 (2 CPU, 8 GB RAM)

<!---------------------------------- END GCP ---------------------------------->

{{% /tab-content %}}
{{% tab-content %}}

<!-------------------------------- BEGIN Azure -------------------------------->

- **Catalog store (PostgreSQL-compatible database) (x1):**
  - _[See below](#postgresql-compatible-database-requirements)_
- **Ingesters and Routers (x3):**
  - Standard_D8s_v3 (8 CPU, 32 GB RAM)
  - Local storage: minimum of 2 GB (high-speed SSD)
- **Queriers (x3):**
  - Standard_D8s_v3 (8 CPU, 32 GB RAM)
- **Compactors (x1):**
  - Standard_D8s_v3 (8 CPU, 32 GB RAM)
- **Kubernetes Control Plane (x1):**
  - Standard_B2ms (2 CPU, 8 GB RAM)

<!--------------------------------- END Azure --------------------------------->

{{% /tab-content %}}
{{% tab-content %}}

<!------------------------------- BEGIN ON-PREM ------------------------------->

- **Catalog store (PostgreSQL-compatible database) (x1):**

@@ -168,6 +175,7 @@ following sizing for {{% product-name %}} components:
  - RAM: 8 GB

<!-------------------------------- END ON-PREM -------------------------------->

{{% /tab-content %}}
{{< /tabs-wrapper >}}
@@ -181,7 +189,8 @@ simplifies the installation and management of the InfluxDB Clustered package.

It manages the application of the jsonnet templates used to install, manage, and
update an InfluxDB cluster.

> [!Note]
>
> #### The InfluxDB Clustered Helm chart includes the kubit operator
>
> If using the [InfluxDB Clustered Helm chart](https://github.com/influxdata/helm-charts/tree/master/charts/influxdb3-clustered)

@@ -206,7 +215,8 @@ You can provide your own ingress or you can install

[Nginx Ingress Controller](https://github.com/kubernetes/ingress-nginx) to use
the InfluxDB-defined ingress.

> [!Important]
>
> #### Allow gRPC/HTTP2
>
> InfluxDB Clustered components use gRPC/HTTP2 protocols.

@@ -232,7 +242,8 @@ that work with InfluxDB Clustered. Other S3-compatible object stores should work
as well.
{{% /caption %}}

> [!Important]
>
> #### Object storage recommendations
>
> We **strongly** recommend the following:
@@ -260,7 +271,8 @@ the correct permissions to allow InfluxDB to perform all the actions it needs to

The IAM role that you use to access AWS S3 should have the following policy:

{{% code-placeholders "S3_BUCKET_NAME" %}}

```json
{
  "Version": "2012-10-17",
@@ -297,6 +309,7 @@ The IAM role that you use to access AWS S3 should have the following policy:
  ]
}
```

{{% /code-placeholders %}}
Replace the following:

@@ -310,13 +323,15 @@ Replace the following:

To use Google Cloud Storage (GCS) as your object store, your [IAM principal](https://cloud.google.com/iam/docs/overview) should be granted the `roles/storage.objectUser` role.
For example, if using [Google Service Accounts](https://cloud.google.com/iam/docs/service-account-overview):

{{% code-placeholders "GCP_SERVICE_ACCOUNT|GCP_BUCKET" %}}

```bash
gcloud storage buckets add-iam-policy-binding \
  gs://GCP_BUCKET \
  --member="serviceAccount:GCP_SERVICE_ACCOUNT" \
  --role="roles/storage.objectUser"
```

{{% /code-placeholders %}}
@@ -333,13 +348,15 @@ should be granted the `Storage Blob Data Contributor` role.

This is a built-in role for Azure which encompasses common permissions.
You can assign it using the following command:

{{% code-placeholders "PRINCIPAL|AZURE_SUBSCRIPTION|AZURE_RESOURCE_GROUP|AZURE_STORAGE_ACCOUNT|AZURE_STORAGE_CONTAINER" %}}

```bash
az role assignment create \
  --role "Storage Blob Data Contributor" \
  --assignee PRINCIPAL \
  --scope "/subscriptions/AZURE_SUBSCRIPTION/resourceGroups/AZURE_RESOURCE_GROUP/providers/Microsoft.Storage/storageAccounts/AZURE_STORAGE_ACCOUNT/blobServices/default/containers/AZURE_STORAGE_CONTAINER"
```

{{% /code-placeholders %}}

Replace the following:

@@ -354,7 +371,7 @@ Replace the following:

{{< /expand-wrapper >}}

> [!Note]
> To configure permissions with MinIO, use the
> [example AWS access policy](#view-example-aws-s3-access-policy).
@@ -362,7 +379,7 @@ Replace the following:

The [InfluxDB Catalog](/influxdb3/clustered/reference/internals/storage-engine/#catalog)
that stores metadata related to your time series data requires a PostgreSQL or
PostgreSQL-compatible database _(AWS Aurora, hosted PostgreSQL, etc.)_.
The process for installing and setting up your PostgreSQL-compatible database
depends on the database and database provider you use.
Refer to your database's or provider's documentation for setting up your

@@ -376,7 +393,7 @@ PostgreSQL-compatible database.

  applications, ensure that your PostgreSQL-compatible instance is dedicated
  exclusively to InfluxDB.

> [!Note]
> We **strongly** recommend running the PostgreSQL-compatible database
> in a separate namespace from InfluxDB or external to Kubernetes entirely.
> Doing so makes management of the InfluxDB cluster easier and helps to prevent
@@ -0,0 +1,696 @@

---
title: Use multi-file Python code and modules in plugins
description: |
  Organize complex plugin logic across multiple Python files and modules for better code reuse, testing, and maintainability in InfluxDB 3 Processing Engine plugins.
menu:
  influxdb3_core:
    name: Use multi-file plugins
    parent: Processing engine and Python plugins
weight: 101
influxdb3/core/tags: [processing engine, plugins, python, modules]
related:
  - /influxdb3/core/plugins/
  - /influxdb3/core/plugins/extend-plugin/
  - /influxdb3/core/reference/cli/influxdb3/create/trigger/
---
As your plugin logic grows in complexity, organizing code across multiple Python files improves maintainability, enables code reuse, and makes testing easier.
The InfluxDB 3 Processing Engine supports multi-file plugin architectures using standard Python module patterns.
## Before you begin
Ensure you have:
- A working InfluxDB 3 Core instance with the Processing Engine enabled
- Basic understanding of [Python modules and packages](https://docs.python.org/3/tutorial/modules.html)
- Familiarity with [creating InfluxDB 3 plugins](/influxdb3/core/plugins/)
## Multi-file plugin structure
A multi-file plugin is a directory containing Python files organized as a package.
The directory must include an `__init__.py` file that serves as the entry point and contains your trigger function.
### Basic structure
```
my_plugin/
├── __init__.py # Required - entry point with trigger function
├── processors.py # Data processing functions
├── utils.py # Helper utilities
└── config.py # Configuration management
```
### Required: `__init__.py` entry point
The `__init__.py` file must contain the trigger function that InfluxDB calls when the trigger fires.
This file imports and orchestrates code from other modules in your plugin.
```python
# my_plugin/__init__.py
from .processors import process_data
from .config import load_settings
from .utils import format_output

def process_writes(influxdb3_local, table_batches, args=None):
    """Entry point for WAL trigger."""
    settings = load_settings(args)
    for table_batch in table_batches:
        processed_data = process_data(table_batch, settings)
        output = format_output(processed_data)
        # format_output returns a list of lines; write each one
        for line in output:
            influxdb3_local.write(line)
```
## Organizing plugin code
### Separate concerns into modules
Organize your plugin code by functional responsibility to improve maintainability and testing.
#### processors.py - Data transformation logic
```python
# my_plugin/processors.py
"""Data processing and transformation functions."""

def process_data(table_batch, settings):
    """Transform data according to configuration settings."""
    table_name = table_batch["table_name"]
    rows = table_batch["rows"]

    transformed_rows = []
    for row in rows:
        transformed = transform_row(row, settings)
        if transformed:
            transformed_rows.append(transformed)

    return {
        "table": table_name,
        "rows": transformed_rows,
        "count": len(transformed_rows)
    }

def transform_row(row, settings):
    """Apply transformations to a single row."""
    # Apply threshold filtering
    if "value" in row and row["value"] < settings.get("min_value", 0):
        return None

    # Apply unit conversion if configured
    if settings.get("convert_units"):
        row["value"] = row["value"] * settings.get("conversion_factor", 1.0)

    return row
```
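Because `transform_row` is a plain function with no InfluxDB dependencies, you can exercise it directly. A minimal sketch of its filtering and conversion behavior (the function is inlined as a copy so the snippet is self-contained):

```python
def transform_row(row, settings):
    """Standalone copy of my_plugin.processors.transform_row for illustration."""
    if "value" in row and row["value"] < settings.get("min_value", 0):
        return None
    if settings.get("convert_units"):
        row["value"] = row["value"] * settings.get("conversion_factor", 1.0)
    return row

# Rows below the threshold are dropped; surviving rows are converted.
settings = {"min_value": 5.0, "convert_units": True, "conversion_factor": 0.5}
rows = [{"value": 2.0}, {"value": 10.0}]
results = [transform_row(row, settings) for row in rows]
print(results)  # [None, {'value': 5.0}]
```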
#### config.py - Configuration management
```python
# my_plugin/config.py
"""Plugin configuration parsing and validation."""
DEFAULT_SETTINGS = {
"min_value": 0.0,
"convert_units": False,
"conversion_factor": 1.0,
"output_measurement": "processed_data",
}
def load_settings(args):
"""Load and validate plugin settings from trigger arguments."""
settings = DEFAULT_SETTINGS.copy()
if not args:
return settings
# Parse numeric arguments
if "min_value" in args:
settings["min_value"] = float(args["min_value"])
if "conversion_factor" in args:
settings["conversion_factor"] = float(args["conversion_factor"])
# Parse boolean arguments
if "convert_units" in args:
settings["convert_units"] = args["convert_units"].lower() in ("true", "1", "yes")
# Parse string arguments
if "output_measurement" in args:
settings["output_measurement"] = args["output_measurement"]
return settings
def validate_settings(settings):
"""Validate settings and raise exceptions for invalid configurations."""
if settings["min_value"] < 0:
raise ValueError("min_value must be non-negative")
if settings["conversion_factor"] <= 0:
raise ValueError("conversion_factor must be positive")
return True
```
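Trigger arguments reach the plugin as strings (which is why `load_settings` coerces them with `float()` and `.lower()`). A standalone sketch of that parsing behavior, with the relevant logic reproduced inline:

```python
# Standalone sketch of the argument parsing in config.py, reproduced inline.
# Trigger arguments arrive as strings, so numeric and boolean values are coerced.
DEFAULT_SETTINGS = {
    "min_value": 0.0,
    "convert_units": False,
    "conversion_factor": 1.0,
    "output_measurement": "processed_data",
}

def load_settings(args):
    settings = DEFAULT_SETTINGS.copy()
    if not args:
        return settings
    if "min_value" in args:
        settings["min_value"] = float(args["min_value"])
    if "convert_units" in args:
        settings["convert_units"] = args["convert_units"].lower() in ("true", "1", "yes")
    return settings

# String inputs in, typed values out
print(load_settings({"min_value": "5", "convert_units": "TRUE"}))
```

Copying `DEFAULT_SETTINGS` before mutating it keeps the module-level defaults intact across trigger invocations.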
#### utils.py - Helper functions
```python
# my_plugin/utils.py
"""Utility functions for data formatting and logging."""
from datetime import datetime
def format_output(processed_data):
"""Format processed data for writing to InfluxDB."""
from influxdb3_local import LineBuilder
lines = []
measurement = processed_data.get("measurement", "processed_data")
for row in processed_data["rows"]:
line = LineBuilder(measurement)
# Add tags from row
for key, value in row.items():
if key.startswith("tag_"):
line.tag(key.replace("tag_", ""), str(value))
# Add fields from row
for key, value in row.items():
if key.startswith("field_"):
field_name = key.replace("field_", "")
if isinstance(value, float):
line.float64_field(field_name, value)
elif isinstance(value, int):
line.int64_field(field_name, value)
elif isinstance(value, str):
line.string_field(field_name, value)
lines.append(line)
return lines
def log_metrics(influxdb3_local, operation, duration_ms, record_count):
"""Log plugin performance metrics."""
influxdb3_local.info(
f"Operation: {operation}, "
f"Duration: {duration_ms}ms, "
f"Records: {record_count}"
)
```
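`format_output` depends on a `tag_`/`field_` key-naming convention in each row. The splitting logic can be sketched on its own, without the `LineBuilder` dependency:

```python
# Standalone sketch of the tag_/field_ naming convention used by format_output,
# without the LineBuilder dependency.
row = {"tag_location": "lab1", "field_temp": 21.5, "field_status": "ok"}

# Slice off the prefix rather than str.replace, so a later occurrence of
# "tag_" or "field_" inside a name is left untouched.
tags = {k[len("tag_"):]: str(v) for k, v in row.items() if k.startswith("tag_")}
fields = {k[len("field_"):]: v for k, v in row.items() if k.startswith("field_")}

print(tags)    # {'location': 'lab1'}
print(fields)  # {'temp': 21.5, 'status': 'ok'}
```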
## Importing external libraries
Multi-file plugins can use both relative imports (for your own modules) and absolute imports (for external libraries).
### Relative imports for plugin modules
Use relative imports to reference other modules within your plugin:
```python
# my_plugin/__init__.py
from .processors import process_data # Same package
from .config import load_settings # Same package
from .utils import format_output # Same package
# Relative imports from subdirectories
from .transforms.aggregators import calculate_mean
from .integrations.webhook import send_notification
```
### Absolute imports for external libraries
Use absolute imports for standard library and third-party packages:
```python
# my_plugin/processors.py
import json
import time
from datetime import datetime, timedelta
from collections import defaultdict
# Third-party libraries (must be installed with influxdb3 install package)
import pandas as pd
import numpy as np
```
### Installing third-party dependencies
Before using external libraries, install them into the Processing Engine's Python environment:
```bash
# Install packages for your plugin
influxdb3 install package pandas numpy requests
```
For Docker deployments:
```bash
docker exec -it CONTAINER_NAME influxdb3 install package pandas numpy requests
```
## Advanced plugin patterns
### Nested module structure
For complex plugins, organize code into subdirectories:
```
my_advanced_plugin/
├── __init__.py
├── config.py
├── transforms/
│ ├── __init__.py
│ ├── aggregators.py
│ └── filters.py
├── integrations/
│ ├── __init__.py
│ ├── webhook.py
│ └── email.py
└── utils/
├── __init__.py
├── logging.py
└── validators.py
```
Import from nested modules:
```python
# my_advanced_plugin/__init__.py
from .transforms.aggregators import calculate_statistics
from .transforms.filters import apply_threshold_filter
from .integrations.webhook import send_alert
from .utils.logging import setup_logger
def process_writes(influxdb3_local, table_batches, args=None):
logger = setup_logger(influxdb3_local)
for table_batch in table_batches:
# Filter data
filtered = apply_threshold_filter(table_batch, threshold=100)
# Calculate statistics
stats = calculate_statistics(filtered)
# Send alerts if needed
if stats["max"] > 1000:
send_alert(stats, logger)
```
### Shared code across plugins
Share common code across multiple plugins using a shared module directory:
```
plugins/
├── shared/
│ ├── __init__.py
│ ├── formatters.py
│ └── validators.py
├── plugin_a/
│ └── __init__.py
└── plugin_b/
└── __init__.py
```
Add the shared directory to Python's module search path in your plugin:
```python
# plugin_a/__init__.py
import sys
from pathlib import Path
# Add shared directory to path
plugin_dir = Path(__file__).parent.parent
sys.path.insert(0, str(plugin_dir))
# Now import from shared
from shared.formatters import format_line_protocol
from shared.validators import validate_data
def process_writes(influxdb3_local, table_batches, args=None):
for table_batch in table_batches:
if validate_data(table_batch):
formatted = format_line_protocol(table_batch)
influxdb3_local.write(formatted)
```
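Because the path setup runs every time the plugin module is imported, several plugins sharing one directory can insert the same entry repeatedly. A small guard (the `ensure_on_path` helper here is hypothetical, not part of the plugin API) keeps `sys.path` clean:

```python
# Hypothetical helper: add a directory to sys.path only once, even when
# several plugins run the same setup code.
import sys
from pathlib import Path

def ensure_on_path(directory):
    """Prepend directory to sys.path unless it is already present."""
    path_str = str(Path(directory).resolve())
    if path_str not in sys.path:
        sys.path.insert(0, path_str)

ensure_on_path("/tmp/plugins")
ensure_on_path("/tmp/plugins")  # second call is a no-op
print(sys.path.count(str(Path("/tmp/plugins").resolve())))  # 1
```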
## Testing multi-file plugins
### Unit testing individual modules
Test modules independently before integration:
```python
# tests/test_processors.py
import unittest
from my_plugin.processors import transform_row
from my_plugin.config import load_settings
class TestProcessors(unittest.TestCase):
def test_transform_row_filtering(self):
"""Test that rows below threshold are filtered."""
settings = {"min_value": 10.0}
row = {"value": 5.0}
result = transform_row(row, settings)
self.assertIsNone(result)
def test_transform_row_conversion(self):
"""Test unit conversion."""
settings = {
"convert_units": True,
"conversion_factor": 2.0,
"min_value": 0.0
}
row = {"value": 10.0}
result = transform_row(row, settings)
self.assertEqual(result["value"], 20.0)
if __name__ == "__main__":
unittest.main()
```

### Testing with the influxdb3 CLI
Test your complete multi-file plugin before deployment:
```bash
# Test scheduled plugin
influxdb3 test schedule_plugin \
--database testdb \
--schedule "0 0 * * * *" \
--plugin-dir /path/to/plugins \
my_plugin
# Test WAL plugin with sample data
influxdb3 test wal_plugin \
--database testdb \
--plugin-dir /path/to/plugins \
my_plugin
```
For more testing options, see the [influxdb3 test reference](/influxdb3/core/reference/cli/influxdb3/test/).
## Deploying multi-file plugins
### Upload plugin directory
Upload your complete plugin directory when creating a trigger:
```bash
# Upload the entire plugin directory
influxdb3 create trigger \
--trigger-spec "table:sensor_data" \
--path "/local/path/to/my_plugin" \
--upload \
--database mydb \
sensor_processor
```
The `--upload` flag transfers all files in the directory to the server's plugin directory.
### Update plugin code
Update all files in a running plugin:
```bash
# Update the plugin with new code
influxdb3 update trigger \
--database mydb \
--trigger-name sensor_processor \
--path "/local/path/to/my_plugin"
```
The update replaces all plugin files while preserving trigger configuration.
## Best practices
### Code organization
- **Single responsibility**: Each module should have one clear purpose
- **Shallow hierarchies**: Avoid deeply nested directory structures (2-3 levels maximum)
- **Descriptive names**: Use clear, descriptive module and function names
- **Module size**: Keep modules under 300-400 lines for maintainability
### Import management
- **Explicit imports**: Use explicit imports rather than `from module import *`
- **Standard library first**: Import standard library, then third-party, then local modules
- **Avoid circular imports**: Design modules to prevent circular dependencies
Example import organization:
```python
# Standard library
import json
import time
from datetime import datetime
# Third-party packages
import pandas as pd
import numpy as np
# Local modules
from .config import load_settings
from .processors import process_data
from .utils import format_output
```
### Error handling
Centralize error handling in your entry point:
```python
# my_plugin/__init__.py
from .processors import process_data
from .config import load_settings, validate_settings
def process_writes(influxdb3_local, table_batches, args=None):
try:
# Load and validate configuration
settings = load_settings(args)
validate_settings(settings)
        # Process data (process_data takes the batch and settings; see processors.py)
        for table_batch in table_batches:
            process_data(table_batch, settings)
except ValueError as e:
influxdb3_local.error(f"Configuration error: {e}")
except Exception as e:
influxdb3_local.error(f"Unexpected error: {e}")
```
### Documentation
Document your modules with docstrings:
```python
"""
my_plugin - Data processing plugin for sensor data.
This plugin processes incoming sensor data by:
1. Filtering values below configured threshold
2. Converting units if requested
3. Writing processed data to output measurement
Modules:
- processors: Core data transformation logic
- config: Configuration parsing and validation
- utils: Helper functions for formatting and logging
"""
def process_writes(influxdb3_local, table_batches, args=None):
"""Process incoming sensor data writes.
Args:
influxdb3_local: InfluxDB API interface
table_batches: List of table batches with written data
args: Optional trigger arguments for configuration
Trigger arguments:
min_value (float): Minimum value threshold
convert_units (bool): Enable unit conversion
conversion_factor (float): Conversion multiplier
output_measurement (str): Target measurement name
"""
pass
```
## Example: Complete multi-file plugin
Here's a complete example of a temperature monitoring plugin with multi-file organization:
### Plugin structure
```
temperature_monitor/
├── __init__.py
├── config.py
├── processors.py
└── alerts.py
```
### \_\_init\_\_.py
```python
# temperature_monitor/__init__.py
"""Temperature monitoring plugin with alerting."""
from .config import load_config
from .processors import calculate_statistics
from .alerts import check_thresholds
def process_scheduled_call(influxdb3_local, call_time, args=None):
"""Monitor temperature data and send alerts."""
try:
config = load_config(args)
# Query recent temperature data
query = f"""
SELECT temp_value, location
FROM {config['measurement']}
WHERE time > now() - INTERVAL '{config['window']}'
"""
results = influxdb3_local.query(query)
# Calculate statistics
stats = calculate_statistics(results)
# Check thresholds and alert
check_thresholds(influxdb3_local, stats, config)
influxdb3_local.info(
f"Processed {len(results)} readings "
f"from {len(stats)} locations"
)
except Exception as e:
influxdb3_local.error(f"Plugin error: {e}")
```
### config.py
```python
# temperature_monitor/config.py
"""Configuration management for temperature monitor."""
DEFAULTS = {
"measurement": "temperature",
"window": "1 hour",
"high_threshold": 30.0,
"low_threshold": 10.0,
"alert_measurement": "temperature_alerts"
}
def load_config(args):
"""Load configuration from trigger arguments."""
config = DEFAULTS.copy()
if args:
for key in DEFAULTS:
if key in args:
if key.endswith("_threshold"):
config[key] = float(args[key])
else:
config[key] = args[key]
return config
```
### processors.py
```python
# temperature_monitor/processors.py
"""Data processing functions."""
from collections import defaultdict
def calculate_statistics(data):
"""Calculate statistics by location."""
stats = defaultdict(lambda: {
"count": 0,
"sum": 0.0,
"min": float('inf'),
"max": float('-inf')
})
for row in data:
location = row.get("location", "unknown")
value = float(row.get("temp_value", 0))
s = stats[location]
s["count"] += 1
s["sum"] += value
s["min"] = min(s["min"], value)
s["max"] = max(s["max"], value)
# Calculate averages
for location, s in stats.items():
if s["count"] > 0:
s["avg"] = s["sum"] / s["count"]
return dict(stats)
```
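The statistics module is plain Python, so you can run it outside the server before wiring it to a trigger. A standalone run of the same logic, reproduced inline with hard-coded sample rows in place of query results:

```python
# Standalone run of the calculate_statistics logic from processors.py,
# reproduced inline with sample rows instead of query results.
from collections import defaultdict

def calculate_statistics(data):
    """Calculate count/sum/min/max/avg per location."""
    stats = defaultdict(lambda: {"count": 0, "sum": 0.0,
                                 "min": float("inf"), "max": float("-inf")})
    for row in data:
        s = stats[row.get("location", "unknown")]
        value = float(row.get("temp_value", 0))
        s["count"] += 1
        s["sum"] += value
        s["min"] = min(s["min"], value)
        s["max"] = max(s["max"], value)
    for s in stats.values():
        s["avg"] = s["sum"] / s["count"]
    return dict(stats)

sample = [
    {"location": "lab", "temp_value": 20.0},
    {"location": "lab", "temp_value": 30.0},
    {"location": "roof", "temp_value": 12.5},
]
print(calculate_statistics(sample)["lab"])
# {'count': 2, 'sum': 50.0, 'min': 20.0, 'max': 30.0, 'avg': 25.0}
```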
### alerts.py
```python
# temperature_monitor/alerts.py
"""Alert checking and notification."""
def check_thresholds(influxdb3_local, stats, config):
"""Check temperature thresholds and write alerts."""
from influxdb3_local import LineBuilder
high_threshold = config["high_threshold"]
low_threshold = config["low_threshold"]
alert_measurement = config["alert_measurement"]
for location, s in stats.items():
if s["max"] > high_threshold:
line = LineBuilder(alert_measurement)
line.tag("location", location)
line.tag("severity", "high")
line.float64_field("temperature", s["max"])
line.string_field("message",
f"High temperature: {s['max']}°C exceeds {high_threshold}°C")
influxdb3_local.write(line)
influxdb3_local.warn(f"High temperature alert for {location}")
elif s["min"] < low_threshold:
line = LineBuilder(alert_measurement)
line.tag("location", location)
line.tag("severity", "low")
line.float64_field("temperature", s["min"])
line.string_field("message",
f"Low temperature: {s['min']}°C below {low_threshold}°C")
influxdb3_local.write(line)
influxdb3_local.warn(f"Low temperature alert for {location}")
```
### Deploy the plugin
```bash
# Create trigger with configuration
influxdb3 create trigger \
--trigger-spec "every:5m" \
--path "/local/path/to/temperature_monitor" \
--upload \
--trigger-arguments high_threshold=35,low_threshold=5,window="15 minutes" \
--database sensors \
temp_monitor
```
## Related resources
- [Processing engine and Python plugins](/influxdb3/core/plugins/)
- [Extend plugins with API features](/influxdb3/core/plugins/extend-plugin/)
- [Plugin library](/influxdb3/core/plugins/library/)
- [influxdb3 create trigger](/influxdb3/core/reference/cli/influxdb3/create/trigger/)
- [influxdb3 test](/influxdb3/core/reference/cli/influxdb3/test/)


@@ -27,11 +27,13 @@ influxdb3 serve [OPTIONS]
- **object-store**: Determines where time series data is stored.
- Other object store parameters depending on the selected `object-store` type.
> \[!NOTE]
> `--node-id` supports alphanumeric strings with optional hyphens.
> \[!Important]
>
> #### Global configuration options
>
> Some configuration options (like [`--num-io-threads`](/influxdb3/core/reference/config-options/#num-io-threads)) are **global** and must be specified **before** the `serve` command:
>
> ```bash
@@ -44,95 +46,95 @@ influxdb3 serve [OPTIONS]
| Option | | Description |
| :--- | :--- | :--- |
| {{< req "\*" >}} | `--node-id` | *See [configuration options](/influxdb3/core/reference/config-options/#node-id)* |
| {{< req "\*" >}} | `--object-store` | *See [configuration options](/influxdb3/core/reference/config-options/#object-store)* |
| | `--admin-token-recovery-http-bind` | *See [configuration options](/influxdb3/core/reference/config-options/#admin-token-recovery-http-bind)* |
| | `--admin-token-recovery-tcp-listener-file-path` | *See [configuration options](/influxdb3/core/reference/config-options/#admin-token-recovery-tcp-listener-file-path)* |
| | `--admin-token-file` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#admin-token-file)* |
| | `--aws-access-key-id` | *See [configuration options](/influxdb3/core/reference/config-options/#aws-access-key-id)* |
| | `--aws-allow-http` | *See [configuration options](/influxdb3/core/reference/config-options/#aws-allow-http)* |
| | `--aws-credentials-file` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-credentials-file)* |
| | `--aws-default-region` | *See [configuration options](/influxdb3/core/reference/config-options/#aws-default-region)* |
| | `--aws-endpoint` | *See [configuration options](/influxdb3/core/reference/config-options/#aws-endpoint)* |
| | `--aws-secret-access-key` | *See [configuration options](/influxdb3/core/reference/config-options/#aws-secret-access-key)* |
| | `--aws-session-token` | *See [configuration options](/influxdb3/core/reference/config-options/#aws-session-token)* |
| | `--aws-skip-signature` | *See [configuration options](/influxdb3/core/reference/config-options/#aws-skip-signature)* |
| | `--azure-allow-http` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-allow-http)* |
| | `--azure-endpoint` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-endpoint)* |
| | `--azure-storage-access-key` | *See [configuration options](/influxdb3/core/reference/config-options/#azure-storage-access-key)* |
| | `--azure-storage-account` | *See [configuration options](/influxdb3/core/reference/config-options/#azure-storage-account)* |
| | `--bucket` | *See [configuration options](/influxdb3/core/reference/config-options/#bucket)* |
| | `--data-dir` | *See [configuration options](/influxdb3/core/reference/config-options/#data-dir)* |
| | `--datafusion-config` | *See [configuration options](/influxdb3/core/reference/config-options/#datafusion-config)* |
| | `--datafusion-max-parquet-fanout` | *See [configuration options](/influxdb3/core/reference/config-options/#datafusion-max-parquet-fanout)* |
| | `--datafusion-num-threads` | *See [configuration options](/influxdb3/core/reference/config-options/#datafusion-num-threads)* |
| | `--datafusion-runtime-disable-lifo-slot` | *See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-disable-lifo-slot)* |
| | `--datafusion-runtime-event-interval` | *See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-event-interval)* |
| | `--datafusion-runtime-global-queue-interval` | *See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-global-queue-interval)* |
| | `--datafusion-runtime-max-blocking-threads` | *See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-max-blocking-threads)* |
| | `--datafusion-runtime-max-io-events-per-tick` | *See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-max-io-events-per-tick)* |
| | `--datafusion-runtime-thread-keep-alive` | *See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-thread-keep-alive)* |
| | `--datafusion-runtime-thread-priority` | *See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-thread-priority)* |
| | `--datafusion-runtime-type` | *See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-type)* |
| | `--datafusion-use-cached-parquet-loader` | *See [configuration options](/influxdb3/core/reference/config-options/#datafusion-use-cached-parquet-loader)* |
| | `--delete-grace-period` | *See [configuration options](/influxdb3/core/reference/config-options/#delete-grace-period)* |
| | `--disable-authz` | *See [configuration options](/influxdb3/core/reference/config-options/#disable-authz)* |
| | `--disable-parquet-mem-cache` | *See [configuration options](/influxdb3/core/reference/config-options/#disable-parquet-mem-cache)* |
| | `--distinct-cache-eviction-interval` | *See [configuration options](/influxdb3/core/reference/config-options/#distinct-cache-eviction-interval)* |
| | `--exec-mem-pool-bytes` | *See [configuration options](/influxdb3/core/reference/config-options/#exec-mem-pool-bytes)* |
| | `--force-snapshot-mem-threshold` | *See [configuration options](/influxdb3/core/reference/config-options/#force-snapshot-mem-threshold)* |
| | `--gen1-duration` | *See [configuration options](/influxdb3/core/reference/config-options/#gen1-duration)* |
| | `--gen1-lookback-duration` | *See [configuration options](/influxdb3/core/reference/config-options/#gen1-lookback-duration)* |
| | `--google-service-account` | *See [configuration options](/influxdb3/core/reference/config-options/#google-service-account)* |
| | `--hard-delete-default-duration` | *See [configuration options](/influxdb3/core/reference/config-options/#hard-delete-default-duration)* |
| `-h` | `--help` | Print help information |
| | `--help-all` | Print detailed help information |
| | `--http-bind` | *See [configuration options](/influxdb3/core/reference/config-options/#http-bind)* |
| | `--last-cache-eviction-interval` | *See [configuration options](/influxdb3/core/reference/config-options/#last-cache-eviction-interval)* |
| | `--log-destination` | *See [configuration options](/influxdb3/core/reference/config-options/#log-destination)* |
| | `--log-filter` | *See [configuration options](/influxdb3/core/reference/config-options/#log-filter)* |
| | `--log-format` | *See [configuration options](/influxdb3/core/reference/config-options/#log-format)* |
| | `--max-http-request-size` | *See [configuration options](/influxdb3/core/reference/config-options/#max-http-request-size)* |
| | `--object-store-cache-endpoint` | *See [configuration options](/influxdb3/core/reference/config-options/#object-store-cache-endpoint)* |
| | `--object-store-connection-limit` | *See [configuration options](/influxdb3/core/reference/config-options/#object-store-connection-limit)* |
| | `--object-store-http2-max-frame-size` | *See [configuration options](/influxdb3/core/reference/config-options/#object-store-http2-max-frame-size)* |
| | `--object-store-http2-only` | *See [configuration options](/influxdb3/core/reference/config-options/#object-store-http2-only)* |
| | `--object-store-max-retries` | *See [configuration options](/influxdb3/core/reference/config-options/#object-store-max-retries)* |
| | `--object-store-retry-timeout` | *See [configuration options](/influxdb3/core/reference/config-options/#object-store-retry-timeout)* |
| | `--package-manager` | *See [configuration options](/influxdb3/core/reference/config-options/#package-manager)* |
| | `--parquet-mem-cache-prune-interval` | *See [configuration options](/influxdb3/core/reference/config-options/#parquet-mem-cache-prune-interval)* |
| | `--parquet-mem-cache-prune-percentage` | *See [configuration options](/influxdb3/core/reference/config-options/#parquet-mem-cache-prune-percentage)* |
| | `--parquet-mem-cache-query-path-duration` | *See [configuration options](/influxdb3/core/reference/config-options/#parquet-mem-cache-query-path-duration)* |
| | `--parquet-mem-cache-size` | *See [configuration options](/influxdb3/core/reference/config-options/#parquet-mem-cache-size)* |
| | `--plugin-dir` | *See [configuration options](/influxdb3/core/reference/config-options/#plugin-dir)* |
| | `--preemptive-cache-age` | *See [configuration options](/influxdb3/core/reference/config-options/#preemptive-cache-age)* |
| | `--query-file-limit` | *See [configuration options](/influxdb3/core/reference/config-options/#query-file-limit)* |
| | `--query-log-size` | *See [configuration options](/influxdb3/core/reference/config-options/#query-log-size)* |
| | `--retention-check-interval` | *See [configuration options](/influxdb3/core/reference/config-options/#retention-check-interval)* |
| | `--snapshotted-wal-files-to-keep` | *See [configuration options](/influxdb3/core/reference/config-options/#snapshotted-wal-files-to-keep)* |
| | `--table-index-cache-concurrency-limit` | *See [configuration options](/influxdb3/core/reference/config-options/#table-index-cache-concurrency-limit)* |
| | `--table-index-cache-max-entries` | *See [configuration options](/influxdb3/core/reference/config-options/#table-index-cache-max-entries)* |
| | `--tcp-listener-file-path` | *See [configuration options](/influxdb3/core/reference/config-options/#tcp-listener-file-path)* |
| | `--telemetry-disable-upload` | *See [configuration options](/influxdb3/core/reference/config-options/#telemetry-disable-upload)* |
| | `--telemetry-endpoint` | *See [configuration options](/influxdb3/core/reference/config-options/#telemetry-endpoint)* |
| | `--tls-cert` | *See [configuration options](/influxdb3/core/reference/config-options/#tls-cert)* |
| | `--tls-key` | *See [configuration options](/influxdb3/core/reference/config-options/#tls-key)* |
| | `--tls-minimum-version` | *See [configuration options](/influxdb3/core/reference/config-options/#tls-minimum-version)* |
| | `--traces-exporter` | *See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter)* |
| | `--traces-exporter-jaeger-agent-host` | *See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-agent-host)* |
| | `--traces-exporter-jaeger-agent-port` | *See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-agent-port)* |
| | `--traces-exporter-jaeger-service-name` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-service-name)_ | | | `--traces-exporter-jaeger-service-name` | *See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-service-name)* |
| | `--traces-exporter-jaeger-trace-context-header-name` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-trace-context-header-name)_ | | | `--traces-exporter-jaeger-trace-context-header-name` | *See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-trace-context-header-name)* |
| | `--traces-jaeger-debug-name` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-jaeger-debug-name)_ | | | `--traces-jaeger-debug-name` | *See [configuration options](/influxdb3/core/reference/config-options/#traces-jaeger-debug-name)* |
| | `--traces-jaeger-max-msgs-per-second` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-jaeger-max-msgs-per-second)_ | | | `--traces-jaeger-max-msgs-per-second` | *See [configuration options](/influxdb3/core/reference/config-options/#traces-jaeger-max-msgs-per-second)* |
| | `--traces-jaeger-tags` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-jaeger-tags)_ | | | `--traces-jaeger-tags` | *See [configuration options](/influxdb3/core/reference/config-options/#traces-jaeger-tags)* |
| | `--virtual-env-location` | _See [configuration options](/influxdb3/core/reference/config-options/#virtual-env-location)_ | | | `--virtual-env-location` | *See [configuration options](/influxdb3/core/reference/config-options/#virtual-env-location)* |
| | `--wal-flush-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#wal-flush-interval)_ | | | `--wal-flush-interval` | *See [configuration options](/influxdb3/core/reference/config-options/#wal-flush-interval)* |
| | `--wal-max-write-buffer-size` | _See [configuration options](/influxdb3/core/reference/config-options/#wal-max-write-buffer-size)_ | | | `--wal-max-write-buffer-size` | *See [configuration options](/influxdb3/core/reference/config-options/#wal-max-write-buffer-size)* |
| | `--wal-replay-concurrency-limit` | _See [configuration options](/influxdb3/core/reference/config-options/#wal-replay-concurrency-limit)_ | | | `--wal-replay-concurrency-limit` | *See [configuration options](/influxdb3/core/reference/config-options/#wal-replay-concurrency-limit)* |
| | `--wal-replay-fail-on-error` | _See [configuration options](/influxdb3/core/reference/config-options/#wal-replay-fail-on-error)_ | | | `--wal-replay-fail-on-error` | *See [configuration options](/influxdb3/core/reference/config-options/#wal-replay-fail-on-error)* |
| | `--wal-snapshot-size` | _See [configuration options](/influxdb3/core/reference/config-options/#wal-snapshot-size)_ | | | `--wal-snapshot-size` | *See [configuration options](/influxdb3/core/reference/config-options/#wal-snapshot-size)* |
| | `--without-auth` | _See [configuration options](/influxdb3/core/reference/config-options/#without-auth)_ | | | `--without-auth` | *See [configuration options](/influxdb3/core/reference/config-options/#without-auth)* |
### Option environment variables
INFLUXDB3_NODE_IDENTIFIER_PREFIX=my-node influxdb3
```
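Most `serve` options can also be supplied as environment variables. As a sketch (any variable name other than `INFLUXDB3_NODE_IDENTIFIER_PREFIX` below is an assumption; check the configuration options reference for the exact variable that maps to each option):

```shell
# Start the server with options supplied as environment variables.
# INFLUXDB3_OBJECT_STORE is assumed here to mirror --object-store;
# INFLUXDB3_NODE_IDENTIFIER_PREFIX mirrors --node-id.
INFLUXDB3_NODE_IDENTIFIER_PREFIX=my-node \
INFLUXDB3_OBJECT_STORE=memory \
influxdb3 serve
```

Command-line flags take precedence over environment variables when both are set.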
> [!Important]
>
> #### Production deployments
>
> Quick-start mode is designed for development and testing environments.
- [Run the InfluxDB 3 server](#run-the-influxdb-3-server)
- [Run the InfluxDB 3 server with extra verbose logging](#run-the-influxdb-3-server-with-extra-verbose-logging)
- [Run InfluxDB 3 with debug logging using LOG_FILTER](#run-influxdb-3-with-debug-logging-using-log_filter)
In the examples below, replace
{{% code-placeholder-key %}}`my-host-01`{{% /code-placeholder-key %}}:
  --verbose
```
### Run InfluxDB 3 with debug logging using LOG_FILTER
<!--pytest.mark.skip-->
{{% /code-placeholders %}}
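Beyond a blanket `LOG_FILTER=debug`, the filter accepts comma-separated `target=level` directives (Rust `tracing` filter syntax), so you can raise verbosity for a single subsystem while keeping the rest quieter. A sketch (the target name `influxdb3_server` is illustrative):

```shell
# Keep the default level at info, but emit debug logs from one target.
LOG_FILTER=info,influxdb3_server=debug influxdb3 serve \
  --node-id my-host-01 \
  --object-store memory
```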
## Troubleshooting

### Common Issues
- **Error: "Failed to connect to object store"**\
  Verify your `--object-store` setting and ensure all required parameters for that storage type are provided.
- **Permission errors when using S3, Google Cloud, or Azure storage**\
  Check that your authentication credentials are correct and have sufficient permissions.
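For example, when pointing at S3-compatible storage, the AWS-related options typically need to be supplied together. A sketch with placeholder values (bucket name and region are assumptions for illustration):

```shell
# Placeholder values; credentials are read from the shell environment.
influxdb3 serve \
  --node-id my-host-01 \
  --object-store s3 \
  --bucket my-bucket \
  --aws-default-region us-east-1 \
  --aws-access-key-id "$AWS_ACCESS_KEY_ID" \
  --aws-secret-access-key "$AWS_SECRET_ACCESS_KEY"
```

If any of these are missing or the credentials lack bucket access, the server fails at startup with an object store connection error.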
---
- **object-store**: Determines where time series data is stored.
- Other object store parameters depending on the selected `object-store` type.
> [!NOTE]
> `--node-id` and `--cluster-id` support alphanumeric strings with optional hyphens.
> [!Important]
>
> #### Global configuration options
>
> Some configuration options (like [`--num-io-threads`](/influxdb3/enterprise/reference/config-options/#num-io-threads)) are **global** and must be specified **before** the `serve` command:
>
> ```bash
> influxdb3 --num-io-threads 4 serve [OPTIONS]
> ```
## Options
| Option | | Description |
| :----- | :------------------------------------------------ | :------------------------------------------------------------------------------------------------------------------ |
| | `--admin-token-recovery-http-bind` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#admin-token-recovery-http-bind)_ |
| | `--admin-token-recovery-tcp-listener-file-path` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#admin-token-recovery-tcp-listener-file-path)_ |
| | `--admin-token-file` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#admin-token-file)_ |
| | `--aws-access-key-id` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-access-key-id)_ |
| | `--aws-allow-http` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-allow-http)_ |
| | `--aws-credentials-file` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-credentials-file)_ |
| | `--aws-default-region` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-default-region)_ |
| | `--aws-endpoint` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-endpoint)_ |
| | `--aws-secret-access-key` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-secret-access-key)_ |
| | `--aws-session-token` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-session-token)_ |
| | `--aws-skip-signature` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-skip-signature)_ |
| | `--azure-allow-http` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-allow-http)_ |
| | `--azure-endpoint` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-endpoint)_ |
| | `--azure-storage-access-key` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-storage-access-key)_ |
| | `--azure-storage-account` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-storage-account)_ |
| | `--bucket` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#bucket)_ |
| | `--catalog-sync-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#catalog-sync-interval)_ |
| | `--cluster-id` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#cluster-id)_ |
| | `--compaction-check-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-check-interval)_ |
| | `--compaction-cleanup-wait` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-cleanup-wait)_ |
| | `--compaction-gen2-duration` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-gen2-duration)_ |
| | `--compaction-max-num-files-per-plan` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-max-num-files-per-plan)_ |
| | `--compaction-multipliers` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-multipliers)_ |
| | `--compaction-row-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-row-limit)_ |
| | `--data-dir` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#data-dir)_ |
| | `--datafusion-config` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-config)_ |
| | `--datafusion-max-parquet-fanout` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-max-parquet-fanout)_ |
| | `--datafusion-num-threads` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-num-threads)_ |
| | `--datafusion-runtime-disable-lifo-slot` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-disable-lifo-slot)_ |
| | `--datafusion-runtime-event-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-event-interval)_ |
| | `--datafusion-runtime-global-queue-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-global-queue-interval)_ |
| | `--datafusion-runtime-max-blocking-threads` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-max-blocking-threads)_ |
| | `--datafusion-runtime-max-io-events-per-tick` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-max-io-events-per-tick)_ |
| | `--datafusion-runtime-thread-keep-alive` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-thread-keep-alive)_ |
| | `--datafusion-runtime-thread-priority` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-thread-priority)_ |
| | `--datafusion-runtime-type` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-type)_ |
| | `--datafusion-use-cached-parquet-loader` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-use-cached-parquet-loader)_ |
| | `--delete-grace-period` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#delete-grace-period)_ |
| | `--disable-authz` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#disable-authz)_ |
| | `--disable-parquet-mem-cache` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#disable-parquet-mem-cache)_ |
| | `--distinct-cache-eviction-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#distinct-cache-eviction-interval)_ |
| | `--distinct-value-cache-disable-from-history` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#distinct-value-cache-disable-from-history)_ |
| | `--exec-mem-pool-bytes` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#exec-mem-pool-bytes)_ |
| | `--force-snapshot-mem-threshold` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#force-snapshot-mem-threshold)_ |
| | `--gen1-duration` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#gen1-duration)_ |
| | `--gen1-lookback-duration` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#gen1-lookback-duration)_ |
| | `--google-service-account` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#google-service-account)_ |
| | `--hard-delete-default-duration` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#hard-delete-default-duration)_ |
| `-h` | `--help` | Print help information |
| | `--help-all` | Print detailed help information |
| | `--http-bind` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#http-bind)_ |
| | `--last-cache-eviction-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#last-cache-eviction-interval)_ |
| | `--last-value-cache-disable-from-history` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#last-value-cache-disable-from-history)_ |
| | `--license-email` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#license-email)_ |
| | `--license-file` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#license-file)_ |
| | `--log-destination` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#log-destination)_ |
| | `--log-filter` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#log-filter)_ |
| | `--log-format` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#log-format)_ |
| | `--max-http-request-size` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#max-http-request-size)_ |
| | `--mode` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#mode)_ |
| | `--node-id` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#node-id)_ |
| | `--node-id-from-env` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#node-id-from-env)_ |
| | `--num-cores` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#num-cores)_ |
| | `--num-datafusion-threads` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#num-datafusion-threads)_ |
| | `--num-database-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#num-database-limit)_ |
| | `--num-table-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#num-table-limit)_ |
| | `--num-total-columns-per-table-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#num-total-columns-per-table-limit)_ |
| | `--object-store` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store)_ |
| | `--object-store-cache-endpoint` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-cache-endpoint)_ |
| | `--object-store-connection-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-connection-limit)_ |
| | `--object-store-http2-max-frame-size` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-http2-max-frame-size)_ |
| | `--object-store-http2-only` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-http2-only)_ |
| | `--object-store-max-retries` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-max-retries)_ |
| | `--object-store-retry-timeout` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-retry-timeout)_ |
| | `--package-manager` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#package-manager)_ |
| | `--parquet-mem-cache-prune-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#parquet-mem-cache-prune-interval)_ |
| | `--parquet-mem-cache-prune-percentage` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#parquet-mem-cache-prune-percentage)_ |
| | `--parquet-mem-cache-query-path-duration` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#parquet-mem-cache-query-path-duration)_ |
| | `--parquet-mem-cache-size` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#parquet-mem-cache-size)_ |
| | `--permission-tokens-file` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#permission-tokens-file)_ |
| | `--plugin-dir` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#plugin-dir)_ |
| | `--preemptive-cache-age` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#preemptive-cache-age)_ |
| | `--query-file-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#query-file-limit)_ |
| | `--query-log-size` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#query-log-size)_ |
| | `--replication-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#replication-interval)_ |
| | `--retention-check-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#retention-check-interval)_ |
| | `--snapshotted-wal-files-to-keep` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#snapshotted-wal-files-to-keep)_ |
| | `--table-index-cache-concurrency-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#table-index-cache-concurrency-limit)_ |
| | `--table-index-cache-max-entries` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#table-index-cache-max-entries)_ |
| | `--tcp-listener-file-path` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#tcp-listener-file-path)_ |
| | `--telemetry-disable-upload` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#telemetry-disable-upload)_ |
| | `--telemetry-endpoint` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#telemetry-endpoint)_ | | | `--telemetry-endpoint` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#telemetry-endpoint)* |
| | `--tls-cert` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#tls-cert)_ | | | `--tls-cert` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#tls-cert)* |
| | `--tls-key` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#tls-key)_ | | | `--tls-key` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#tls-key)* |
| | `--tls-minimum-version` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#tls-minimum-version)_ | | | `--tls-minimum-version` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#tls-minimum-version)* |
| | `--traces-exporter` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter)_ | | | `--traces-exporter` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter)* |
| | `--traces-exporter-jaeger-agent-host` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter-jaeger-agent-host)_ | | | `--traces-exporter-jaeger-agent-host` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter-jaeger-agent-host)* |
| | `--traces-exporter-jaeger-agent-port` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter-jaeger-agent-port)_ | | | `--traces-exporter-jaeger-agent-port` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter-jaeger-agent-port)* |
| | `--traces-exporter-jaeger-service-name` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter-jaeger-service-name)_ | | | `--traces-exporter-jaeger-service-name` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter-jaeger-service-name)* |
| | `--traces-exporter-jaeger-trace-context-header-name` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter-jaeger-trace-context-header-name)_ | | | `--traces-exporter-jaeger-trace-context-header-name` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter-jaeger-trace-context-header-name)* |
| | `--traces-jaeger-debug-name` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-jaeger-debug-name)_ | | | `--traces-jaeger-debug-name` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-jaeger-debug-name)* |
| | `--traces-jaeger-max-msgs-per-second` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-jaeger-max-msgs-per-second)_ | | | `--traces-jaeger-max-msgs-per-second` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-jaeger-max-msgs-per-second)* |
| | `--traces-jaeger-tags` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-jaeger-tags)_ | | | `--traces-jaeger-tags` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-jaeger-tags)* |
| | `--use-pacha-tree` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#use-pacha-tree)_ | | | `--use-pacha-tree` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#use-pacha-tree)* |
| | `--virtual-env-location` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#virtual-env-location)_ | | | `--virtual-env-location` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#virtual-env-location)* |
| | `--wait-for-running-ingestor` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#wait-for-running-ingestor)_ | | | `--wait-for-running-ingestor` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#wait-for-running-ingestor)* |
| | `--wal-flush-interval` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-flush-interval)_ | | | `--wal-flush-interval` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-flush-interval)* |
| | `--wal-max-write-buffer-size` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-max-write-buffer-size)_ | | | `--wal-max-write-buffer-size` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-max-write-buffer-size)* |
| | `--wal-replay-concurrency-limit` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-replay-concurrency-limit)_ | | | `--wal-replay-concurrency-limit` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-replay-concurrency-limit)* |
| | `--wal-replay-fail-on-error` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-replay-fail-on-error)_ | | | `--wal-replay-fail-on-error` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-replay-fail-on-error)* |
| | `--wal-snapshot-size` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-snapshot-size)_ | | | `--wal-snapshot-size` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-snapshot-size)* |
| | `--without-auth` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#without-auth)_ | | | `--without-auth` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#without-auth)* |
### Option environment variables ### Option environment variables
@ -195,7 +197,8 @@ influxdb3 --object-store memory
INFLUXDB3_NODE_IDENTIFIER_PREFIX=my-node influxdb3
```

> [!Important]
>
> #### Production deployments
>
> Quick-start mode is designed for development and testing environments.
@ -210,7 +213,7 @@ For more information about quick-start mode, see [Get started](/influxdb3/enterp
- [Run the InfluxDB 3 server](#run-the-influxdb-3-server)
- [Run the InfluxDB 3 server with extra verbose logging](#run-the-influxdb-3-server-with-extra-verbose-logging)
- [Run InfluxDB 3 with debug logging using LOG_FILTER](#run-influxdb-3-with-debug-logging-using-log_filter)

In the examples below, replace the following:
@ -273,7 +276,7 @@ influxdb3 serve \
  --verbose
```

### Run InfluxDB 3 with debug logging using LOG_FILTER

<!--pytest.mark.skip-->
@ -287,16 +290,15 @@ LOG_FILTER=debug influxdb3 serve \
{{% /code-placeholders %}}

## Troubleshooting

### Common Issues

- **Error: "cluster-id cannot match any node-id in the cluster"**\
  Ensure your `--cluster-id` value is different from all `--node-id` values in your cluster.
- **Error: "Failed to connect to object store"**\
  Verify your `--object-store` setting and ensure all required parameters for that storage type are provided.
- **Permission errors when using S3, Google Cloud, or Azure storage**\
  Check that your authentication credentials are correct and have sufficient permissions.
@ -1,5 +1,6 @@
<!--Shortcode-->

{{% product-name %}} stores data related to the database server, queries, and tables in *system tables*.
You can query the system tables for information about your running server, databases, and table schemas.

## Query system tables
@ -15,7 +16,6 @@ You can query the system tables for information about your running server, datab
Use the HTTP API `/api/v3/query_sql` endpoint to retrieve system information about your database server and table schemas in {{% product-name %}}.

To execute a query, send a `GET` or `POST` request to the endpoint:

- `GET`: Pass parameters in the URL query string (for simple queries)
@ -23,16 +23,17 @@ To execute a query, send a `GET` or `POST` request to the endpoint:
Include the following parameters:

- `q`: *({{< req >}})* The SQL query to execute.
- `db`: *({{< req >}})* The database to execute the query against.
- `params`: A JSON object containing parameters to be used in a *parameterized query*.
- `format`: The format of the response (`json`, `jsonl`, `csv`, `pretty`, or `parquet`).
  JSONL (`jsonl`) is preferred because it streams results back to the client.
  `pretty` is for human-readable output. Default is `json`.
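Putting the parameters together, a minimal `GET` request might look like the following sketch, which assumes a server at `localhost:8181` and uses placeholder `AUTH_TOKEN` and `DATABASE_NAME` values:

```bash
# Sketch: pass the SQL query in the URL query string and stream JSONL results.
# With --get, --data-urlencode safely URL-encodes each parameter.
curl --get "http://localhost:8181/api/v3/query_sql" \
  --header "Authorization: Bearer AUTH_TOKEN" \
  --data-urlencode "db=DATABASE_NAME" \
  --data-urlencode "q=SHOW TABLES" \
  --data-urlencode "format=jsonl"
```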
#### Examples

> [!Note]
>
> #### system_ sample data
>
> In examples, tables with `"table_name":"system_` are user-created tables for CPU, memory, disk,
@ -90,8 +91,8 @@ A table has one of the following `table_schema` values:
The following query sends a `POST` request that executes an SQL query to
retrieve information about columns in the sample `system_swap` table schema:

*Note: when you send a query in JSON, you must escape single quotes
that surround field names.*

```bash
curl "http://localhost:8181/api/v3/query_sql" \
@ -144,6 +145,7 @@ To view loaded Processing Engine plugins, query the `plugin_files` system table
The `system.plugin_files` table provides information about plugin files loaded by the Processing Engine:

**Columns:**

- `plugin_name` (String): Name of a trigger using this plugin
- `file_name` (String): Plugin filename
- `file_path` (String): Full server path to the plugin file
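As a sketch, you could inspect these columns yourself by querying the table through the SQL endpoint (assumes a local server; the `system.plugin_files` table lives in the `_internal` database):

```bash
# Sketch: list plugin files loaded by the Processing Engine.
curl --get "http://localhost:8181/api/v3/query_sql" \
  --header "Authorization: Bearer AUTH_TOKEN" \
  --data-urlencode "db=_internal" \
  --data-urlencode "q=SELECT plugin_name, file_name, file_path FROM system.plugin_files" \
  --data-urlencode "format=jsonl"
```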
@ -1,4 +1,3 @@
The `influxdb3 create trigger` command creates a new trigger for the
processing engine.
@ -17,32 +16,31 @@ influxdb3 create trigger [OPTIONS] \
## Arguments

- **TRIGGER_NAME**: A name for the new trigger.

## Options

| Option | | Description |
| :----- | :-------------------- | :---------- |
| `-H` | `--host` | Host URL of the running {{< product-name >}} server (default is `http://127.0.0.1:8181`) |
| `-d` | `--database` | *({{< req >}})* Name of the database to operate on |
| | `--token` | *({{< req >}})* Authentication token |
| `-p` | `--path` | Path to plugin file or directory (single `.py` file or directory containing `__init__.py` for multifile plugins). Can be a local path (with `--upload`) or a server path. Replaces `--plugin-filename`. |
| | `--upload` | Upload local plugin files to the server. Requires admin token. Use with `--path` to specify local files. |
| | `--plugin-filename` | *(Deprecated: use `--path` instead)* Name of the file, stored in the server's `plugin-dir`, that contains the Python plugin code to run |
| | `--trigger-spec` | Trigger specification: `table:<TABLE_NAME>`, `all_tables`, `every:<DURATION>`, `cron:<EXPRESSION>`, or `request:<REQUEST_PATH>` |
| | `--trigger-arguments` | Additional arguments for the trigger, in the format `key=value`, separated by commas (for example, `arg1=val1,arg2=val2`) |
| | `--disabled` | Create the trigger in disabled state |
| | `--error-behavior` | Error handling behavior: `log`, `retry`, or `disable` |
| | `--run-asynchronous` | Run the trigger asynchronously, allowing multiple triggers to run simultaneously (default is synchronous) |
{{% show-in "enterprise" %}}| | `--node-spec` | Which node(s) the trigger should be configured on. Two value formats are supported: `all` (default) - applies to all nodes, or `nodes:<node-id>[,<node-id>..]` - applies only to the specified comma-separated list of nodes |{{% /show-in %}}
| | `--tls-ca` | Path to a custom TLS certificate authority (for testing or self-signed certificates) |
| `-h` | `--help` | Print help information |
| | `--help-all` | Print detailed help information |
If you want to use a plugin from the [Plugin Library](https://github.com/influxdata/influxdb3_plugins) repo, use the URL path with `gh:` specified as the prefix.
For example, to use the [System Metrics](https://github.com/influxdata/influxdb3_plugins/blob/main/influxdata/system_metrics/system_metrics.py) plugin, the plugin filename is `gh:influxdata/system_metrics/system_metrics.py`.
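A trigger that runs the System Metrics plugin on a schedule might then be created along these lines (a sketch; `DATABASE_NAME`, `AUTH_TOKEN`, and the trigger name are placeholders):

```bash
# Sketch: reference a Plugin Library plugin with the gh: prefix.
influxdb3 create trigger \
  --database DATABASE_NAME \
  --token AUTH_TOKEN \
  --path "gh:influxdata/system_metrics/system_metrics.py" \
  --trigger-spec "every:10s" \
  system_metrics_trigger
```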
### Option environment variables

You can use the following environment variables to set command options:
@ -67,7 +65,7 @@ The following examples show how to use the `influxdb3 create trigger` command to
- [Create a disabled trigger](#create-a-disabled-trigger)
- [Create a trigger with error handling](#create-a-trigger-with-error-handling)

***

Replace the following placeholders with your values:
@ -79,7 +77,7 @@ Name of the trigger to create
- {{% code-placeholder-key %}}`TABLE_NAME`{{% /code-placeholder-key %}}:
  Name of the table to trigger on

{{% code-placeholders "(DATABASE|TRIGGER)_NAME|AUTH_TOKEN|TABLE_NAME" %}}

### Create a trigger for a specific table
@ -137,12 +135,13 @@ second minute hour day_of_month month day_of_week
```

Fields:

- **second**: 0-59
- **minute**: 0-59
- **hour**: 0-23
- **day_of_month**: 1-31
- **month**: 1-12 or JAN-DEC
- **day_of_week**: 0-7 (0 or 7 is Sunday) or SUN-SAT

Example: Run at 6:00 AM every weekday (Monday-Friday):
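Using the six fields above, that schedule could be written as a `cron:` trigger spec along these lines (a sketch with placeholder database, token, plugin, and trigger names):

```bash
# Sketch: cron:0 0 6 * * 1-5 = second 0, minute 0, hour 6,
# any day of month, any month, Monday-Friday.
influxdb3 create trigger \
  --database DATABASE_NAME \
  --token AUTH_TOKEN \
  --path "morning_report.py" \
  --trigger-spec "cron:0 0 6 * * 1-5" \
  morning_report_trigger
```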
@ -225,6 +224,7 @@ influxdb3 create trigger \
```

The `--upload` flag transfers local files to the server's plugin directory. This is useful for:

- Local plugin development and testing
- Deploying plugins without SSH access
- Automating plugin deployment
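A local development workflow might therefore look like the following sketch (placeholder file, database, table, and token values; `--upload` requires an admin token):

```bash
# Sketch: upload a local plugin file and create a trigger for it in one step.
influxdb3 create trigger \
  --database DATABASE_NAME \
  --token ADMIN_TOKEN \
  --path "./plugins/my_plugin.py" \
  --upload \
  --trigger-spec "table:TABLE_NAME" \
  my_plugin_trigger
```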
@ -1,4 +1,3 @@
The `influxdb3 show` command lists resources in your {{< product-name >}} server.

## Usage
@ -11,14 +10,14 @@ influxdb3 show <SUBCOMMAND>
## Subcommands

| Subcommand | Description |
| :---------------------------------------------------------------------- | :--------------------------------------------- |
| [databases](/influxdb3/version/reference/cli/influxdb3/show/databases/) | List databases |
{{% show-in "enterprise" %}}| [license](/influxdb3/version/reference/cli/influxdb3/show/license/) | Display license information |{{% /show-in %}}
| [plugins](/influxdb3/version/reference/cli/influxdb3/show/plugins/) | List loaded plugins |
| [system](/influxdb3/version/reference/cli/influxdb3/show/system/) | Display system table data |
| [tokens](/influxdb3/version/reference/cli/influxdb3/show/tokens/) | List authentication tokens |
| help | Print command help or the help of a subcommand |

## Options
@ -12,10 +12,10 @@ influxdb3 show plugins [OPTIONS]
## Options

| Option | | Description |
| :----- | :--------- | :--------------------------------------------------------------------------------------- |
| `-H` | `--host` | Host URL of the running {{< product-name >}} server (default is `http://127.0.0.1:8181`) |
| | `--token` | *({{< req >}})* Authentication token |
| | `--format` | Output format (`pretty` *(default)*, `json`, `jsonl`, `csv`, or `parquet`) |
| | `--output` | Path where to save output when using the `parquet` format |
| | `--tls-ca` | Path to a custom TLS certificate authority (for testing or self-signed certificates) |
| `-h` | `--help` | Print help information |
@ -26,7 +26,7 @@ influxdb3 show plugins [OPTIONS]
You can use the following environment variables to set command options:

| Environment Variable | Option |
| :--------------------- | :-------- |
| `INFLUXDB3_HOST_URL` | `--host` |
| `INFLUXDB3_AUTH_TOKEN` | `--token` |
@ -34,13 +34,13 @@ You can use the following environment variables to set command options:
The command returns information about loaded plugin files:

- **plugin_name**: Name of a trigger using this plugin
- **file_name**: Plugin filename
- **file_path**: Full server path to the plugin file
- **size_bytes**: File size in bytes
- **last_modified**: Last modification timestamp (milliseconds since epoch)

> [!Note]
> This command queries the `system.plugin_files` table in the `_internal` database.
> For more advanced queries and filtering, see [Query system data](/influxdb3/version/admin/query-system-data/).
@ -81,6 +81,7 @@ influxdb3 show plugins --format csv
Use the `--output` option to specify the file where you want to save the Parquet data.

<!--pytest.mark.skip-->

```bash
influxdb3 show plugins \
  --format parquet \
@ -11,21 +11,23 @@ influxdb3 update <SUBCOMMAND>
## Subcommands

{{% show-in "enterprise" %}}

| Subcommand | Description |
| :---------------------------------------------------------------------- | :--------------------------------------------- |
| [database](/influxdb3/version/reference/cli/influxdb3/update/database/) | Update a database |
| [table](/influxdb3/version/reference/cli/influxdb3/update/table/) | Update a table |
| [trigger](/influxdb3/version/reference/cli/influxdb3/update/trigger/) | Update a trigger |
| help | Print command help or the help of a subcommand |

{{% /show-in %}}

{{% show-in "core" %}}

| Subcommand | Description |
| :---------------------------------------------------------------------- | :--------------------------------------------- |
| [database](/influxdb3/version/reference/cli/influxdb3/update/database/) | Update a database |
| [trigger](/influxdb3/version/reference/cli/influxdb3/update/trigger/) | Update a trigger |
| help | Print command help or the help of a subcommand |

{{% /show-in %}}

## Options
@ -20,10 +20,10 @@ influxdb3 update trigger [OPTIONS] \
## Options

| Option | | Description |
| :----- | :-------------------- | :---------- |
| `-H` | `--host` | Host URL of the running {{< product-name >}} server (default is `http://127.0.0.1:8181`) |
| `-d` | `--database` | *({{< req >}})* Name of the database containing the trigger |
| | `--trigger-name` | *({{< req >}})* Name of the trigger to update |
| `-p` | `--path` | Path to plugin file or directory (single `.py` file or directory containing `__init__.py` for multifile plugins). Can be a local path (with `--upload`) or a server path. |
| | `--upload` | Upload local plugin files to the server. Requires admin token. Use with `--path` to specify local files. |
| | `--trigger-arguments` | Additional arguments for the trigger, in the format `key=value`, separated by commas (for example, `arg1=val1,arg2=val2`) |
@ -56,7 +56,7 @@ The following examples show how to update triggers in different scenarios.
- [Enable or disable a trigger](#enable-or-disable-a-trigger)
- [Update error handling behavior](#update-error-handling-behavior)

***

Replace the following placeholders with your values:
@ -64,7 +64,7 @@ Replace the following placeholders with your values:
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: Authentication token
- {{% code-placeholder-key %}}`TRIGGER_NAME`{{% /code-placeholder-key %}}: Name of the trigger to update

{{% code-placeholders "(DATABASE|TRIGGER)_NAME|AUTH_TOKEN" %}}

### Update trigger plugin code

View File

@@ -1,4 +1,5 @@
<!-- TOC -->
- [Prerequisites](#prerequisites)
- [Quick-Start Mode (Development)](#quick-start-mode-development)
- [Start InfluxDB](#start-influxdb)
@@ -35,6 +36,7 @@ influxdb3
When you run `influxdb3` without arguments, the following values are auto-generated:
{{% show-in "enterprise" %}}
- **`node-id`**: `{hostname}-node` (or `primary-node` if hostname is unavailable)
- **`cluster-id`**: `{hostname}-cluster` (or `primary-cluster` if hostname is unavailable)
{{% /show-in %}}
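The fallback naming rule above can be sketched in a few lines of Python. This is only an illustration of the documented behavior, not the server's actual implementation:

```python
import socket

def default_node_id() -> str:
    """Mirror the documented fallback: "{hostname}-node",
    or "primary-node" when the hostname is unavailable."""
    try:
        hostname = socket.gethostname()
    except OSError:
        hostname = ""
    return f"{hostname}-node" if hostname else "primary-node"
```

The cluster ID follows the same pattern with a `-cluster` suffix.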
@@ -47,18 +49,23 @@ When you run `influxdb3` without arguments, the following values are auto-genera
The system displays warning messages showing the auto-generated identifiers:
{{% show-in "enterprise" %}}
```
Using auto-generated node id: mylaptop-node. For production deployments, explicitly set --node-id
Using auto-generated cluster id: mylaptop-cluster. For production deployments, explicitly set --cluster-id
```
{{% /show-in %}}
{{% show-in "core" %}}
```
Using auto-generated node id: mylaptop-node. For production deployments, explicitly set --node-id
```
{{% /show-in %}}
> [!Important]
> #### When to use quick-start mode
>
> Quick-start mode is designed for development, testing, and home lab environments
@@ -79,24 +86,28 @@ to start {{% product-name %}}.
Provide the following:
{{% show-in "enterprise" %}}
- `--node-id`: A string identifier that distinguishes individual server
  instances within the cluster. This forms the final part of the storage path:
  `<CONFIGURED_PATH>/<CLUSTER_ID>/<NODE_ID>`.
  In a multi-node setup, this ID is used to reference specific nodes.
- `--cluster-id`: A string identifier that determines part of the storage path
  hierarchy. All nodes within the same cluster share this identifier.
  The storage path follows the pattern `<CONFIGURED_PATH>/<CLUSTER_ID>/<NODE_ID>`.
  In a multi-node setup, this ID is used to reference the entire cluster.
{{% /show-in %}}
{{% show-in "core" %}}
- `--node-id`: A string identifier that distinguishes individual server instances.
  This forms the final part of the storage path: `<CONFIGURED_PATH>/<NODE_ID>`.
{{% /show-in %}}
- `--object-store`: Specifies the type of object store to use.
  InfluxDB supports the following:
  - `file`: local file system
  - `memory`: in memory _(no object persistence)_
  - `memory-throttled`: like `memory` but with latency and throughput that
    somewhat resembles a cloud-based object store
  - `s3`: AWS S3 and S3-compatible services like Ceph or Minio
@@ -106,7 +117,8 @@ Provide the following:
- Other object store parameters depending on the selected `object-store` type.
  For example, if you use `s3`, you must provide the bucket name and credentials.
> [!Note]
> #### Diskless architecture
>
> InfluxDB 3 supports a diskless architecture that can operate with object
@@ -123,6 +135,7 @@ For this getting started guide, use the `file` object store to persist data to
your local disk.
{{% show-in "enterprise" %}}
```bash
# File system object store
# Provide the filesystem directory
@@ -132,8 +145,10 @@ influxdb3 serve \
  --object-store file \
  --data-dir ~/.influxdb3
```
{{% /show-in %}}
{{% show-in "core" %}}
```bash
# File system object store
# Provide the file system directory
@@ -142,6 +157,7 @@ influxdb3 serve \
  --object-store file \
  --data-dir ~/.influxdb3
```
{{% /show-in %}}
### Object store examples
@@ -155,6 +171,7 @@ This is the default object store type.
Replace the following with your values:
{{% show-in "enterprise" %}}
```bash
# Filesystem object store
# Provide the filesystem directory
@@ -164,8 +181,10 @@ influxdb3 serve \
  --object-store file \
  --data-dir ~/.influxdb3
```
{{% /show-in %}}
{{% show-in "core" %}}
```bash
# File system object store
# Provide the file system directory
@@ -174,6 +193,7 @@ influxdb3 serve \
  --object-store file \
  --data-dir ~/.influxdb3
```
{{% /show-in %}}
{{% /expand %}}
@@ -187,7 +207,9 @@ provide the following options with your `docker run` command:
- `--object-store file --data-dir /path/in/container`: Uses the volume for object storage
{{% show-in "enterprise" %}}
<!--pytest.mark.skip-->
```bash
# File system object store with Docker
# Create a mount
@@ -200,9 +222,12 @@ docker run -it \
  --object-store file \
  --data-dir /path/in/container
```
{{% /show-in %}}
{{% show-in "core" %}}
<!--pytest.mark.skip-->
```bash
# File system object store with Docker
# Create a mount
@@ -214,9 +239,10 @@ docker run -it \
  --object-store file \
  --data-dir /path/in/container
```
{{% /show-in %}}
> [!Note]
>
> The {{% product-name %}} Docker image exposes port `8181`, the `influxdb3`
> server default for HTTP connections.
@@ -228,6 +254,7 @@ docker run -it \
Open `compose.yaml` for editing and add a `services` entry for
{{% product-name %}}--for example:
{{% show-in "enterprise" %}}
```yaml
# compose.yaml
services:
@@ -257,11 +284,13 @@ services:
        # Path to store plugins in the container
        target: /var/lib/influxdb3/plugins
```
Replace `EMAIL_ADDRESS` with your email address to bypass the email prompt
when generating a trial or at-home license. For more information, see [Manage your
{{% product-name %}} license](/influxdb3/version/admin/license/).
{{% /show-in %}}
{{% show-in "core" %}}
```yaml
# compose.yaml
services:
@@ -288,11 +317,13 @@ services:
        # Path to store plugins in the container
        target: /var/lib/influxdb3/plugins
```
{{% /show-in %}}
Use the Docker Compose CLI to start the server--for example:
<!--pytest.mark.skip-->
```bash
docker compose pull && docker compose up influxdb3-{{< product-key >}}
```
@@ -301,7 +332,8 @@ The command pulls the latest {{% product-name %}} Docker image and starts
`influxdb3` in a container with host port `8181` mapped to container port
`8181`, the server default for HTTP connections.
> [!Tip]
> #### Custom port mapping
>
> To customize your `influxdb3` server hostname and port, specify the
@@ -318,6 +350,7 @@ This is useful for production deployments that require high availability and dur
Provide your bucket name and credentials to access the S3 object store.
{{% show-in "enterprise" %}}
```bash
# S3 object store (default is the us-east-1 region)
# Specify the object store type and associated options
@@ -344,8 +377,10 @@ influxdb3 serve \
  --aws-endpoint ENDPOINT \
  --aws-allow-http
```
{{% /show-in %}}
{{% show-in "core" %}}
```bash
# S3 object store (default is the us-east-1 region)
# Specify the object store type and associated options
@@ -370,6 +405,7 @@ influxdb3 serve \
  --aws-endpoint ENDPOINT \
  --aws-allow-http
```
{{% /show-in %}}
{{% /expand %}}
@@ -379,6 +415,7 @@ Store data in RAM without persisting it on shutdown.
It's useful for rapid testing and development.
{{% show-in "enterprise" %}}
```bash
# Memory object store
# Stores data in RAM; doesn't persist data
@@ -387,8 +424,10 @@ influxdb3 serve \
  --cluster-id cluster01 \
  --object-store memory
```
{{% /show-in %}}
{{% show-in "core" %}}
```bash
# Memory object store
# Stores data in RAM; doesn't persist data
@@ -396,6 +435,7 @@ influxdb3 serve \
  --node-id host01 \
  --object-store memory
```
{{% /show-in %}}
{{% /expand %}}
@@ -409,6 +449,7 @@ influxdb3 serve --help
```
{{% show-in "enterprise" %}}
## Set up licensing
When you first start a new instance, {{% product-name %}} prompts you to select a
@@ -426,20 +467,22 @@ InfluxDB 3 Enterprise licenses:
- **At-Home**: For at-home hobbyist use with limited access to InfluxDB 3 Enterprise capabilities.
- **Commercial**: Commercial license with full access to InfluxDB 3 Enterprise capabilities.
> [!Important]
> #### Trial and at-home licenses with Docker
>
> To generate the trial or home license in Docker, bypass the email prompt.
> The first time you start a new instance, provide your email address with the
> `--license-email` option or the `INFLUXDB3_ENTERPRISE_LICENSE_EMAIL` environment variable.
>
> _Currently, if you use Docker and enter your email address in the prompt, a bug may
> prevent the container from generating the license._
>
> For more information, see [the Docker Compose example](/influxdb3/enterprise/admin/license/?t=Docker+compose#start-the-server-with-your-license-email).
{{% /show-in %}}
> [!Tip]
> #### Use the InfluxDB 3 Explorer query interface
>
> You can complete the remaining steps in this guide using InfluxDB 3 Explorer,
@@ -469,7 +512,7 @@ commands and HTTP API requests.
metrics for the server
{{% /show-in %}}
{{% show-in "core" %}}
{{% product-name %}} supports _admin_ tokens, which grant access to all CLI actions and API endpoints.
{{% /show-in %}}
For more information about tokens and authorization, see [Manage tokens](/influxdb3/version/admin/tokens/).
@@ -477,7 +520,7 @@ For more information about tokens and authorization, see [Manage tokens](/influx
### Create an operator token
After you start the server, create your first admin token.
The first admin token you create is the _operator_ token for the server.
Use the [`influxdb3 create token` command](/influxdb3/version/reference/cli/influxdb3/create/token/)
with the `--admin` option to create your operator token:
@@ -496,11 +539,13 @@ influxdb3 create token --admin
{{% /code-tab-content %}}
{{% code-tab-content %}}
{{% code-placeholders "CONTAINER_NAME" %}}
```bash
# With Docker — in a new terminal:
docker exec -it CONTAINER_NAME influxdb3 create token --admin
```
{{% /code-placeholders %}}
Replace {{% code-placeholder-key %}}`CONTAINER_NAME`{{% /code-placeholder-key %}} with the name of your running Docker container.
@@ -510,7 +555,8 @@ Replace {{% code-placeholder-key %}}`CONTAINER_NAME`{{% /code-placeholder-key %}
The command returns a token string for authenticating CLI commands and API requests.
> [!Important]
> #### Store your token securely
>
> InfluxDB displays the token string only when you create it.
@@ -537,10 +583,12 @@ In your command, replace {{% code-placeholder-key %}}`YOUR_AUTH_TOKEN`{{% /code-
Set the `INFLUXDB3_AUTH_TOKEN` environment variable to have the CLI use your
token automatically:
{{% code-placeholders "YOUR_AUTH_TOKEN" %}}
```bash
export INFLUXDB3_AUTH_TOKEN=YOUR_AUTH_TOKEN
```
{{% /code-placeholders %}}
{{% /tab-content %}}
@@ -548,10 +596,12 @@ export INFLUXDB3_AUTH_TOKEN=YOUR_AUTH_TOKEN
Include the `--token` option with CLI commands:
{{% code-placeholders "YOUR_AUTH_TOKEN" %}}
```bash
influxdb3 show databases --token YOUR_AUTH_TOKEN
```
{{% /code-placeholders %}}
{{% /tab-content %}}
@@ -559,11 +609,13 @@ influxdb3 show databases --token YOUR_AUTH_TOKEN
For HTTP API requests, include your token in the `Authorization` header--for example:
{{% code-placeholders "YOUR_AUTH_TOKEN" %}}
```bash
curl "http://{{< influxdb/host >}}/api/v3/configure/database" \
  --header "Authorization: Bearer YOUR_AUTH_TOKEN"
```
{{% /code-placeholders %}}
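Any HTTP client can send the same `Authorization` header. As a minimal sketch using only the Python standard library (the host, path, and token are placeholders matching the curl example):

```python
import urllib.request

def influxdb3_request(host: str, path: str, token: str) -> urllib.request.Request:
    """Build an InfluxDB 3 HTTP API request with Bearer-token auth."""
    return urllib.request.Request(
        url=f"http://{host}{path}",
        headers={"Authorization": f"Bearer {token}"},
    )

# Build (but don't send) a request equivalent to the curl example above
req = influxdb3_request("localhost:8181", "/api/v3/configure/database", "YOUR_AUTH_TOKEN")
```

Pass the result to `urllib.request.urlopen(req)` to actually send it.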
#### Learn more about tokens and permissions
@@ -576,7 +628,9 @@ curl "http://{{< influxdb/host >}}/api/v3/configure/database" \
{{% /show-in %}}
- [Authentication](/influxdb3/version/reference/internals/authentication/) -
  Understand authentication, authorizations, and permissions in {{% product-name %}}
<!-- //TODO - Authenticate with compatibility APIs -->
{{% show-in "core" %}}
{{% page-nav
  prev="/influxdb3/version/get-started/"

View File

@@ -2,7 +2,7 @@ Use the Processing Engine in {{% product-name %}} to extend your database with c
## What is the Processing Engine?
The Processing Engine is an embedded Python virtual machine that runs inside your {{% product-name %}} database. You configure _triggers_ to run your Python _plugin_ code in response to:
- **Data writes** - Process and transform data as it enters the database
- **Scheduled events** - Run code at defined intervals or specific times
@@ -15,6 +15,7 @@ This guide walks you through setting up the Processing Engine, creating your fir
## Before you begin
Ensure you have:
- A working {{% product-name %}} instance
- Access to the command line
- Python installed if you're writing your own plugin
@@ -38,11 +39,13 @@ Once you have all the prerequisites in place, follow these steps to implement th
To activate the Processing Engine, start your {{% product-name %}} server with the `--plugin-dir` flag. This flag tells InfluxDB where to load your plugin files.
> [!Important]
> #### Keep the influxdb3 binary with its python directory
>
> The influxdb3 binary requires the adjacent `python/` directory to function.
> If you manually extract from tar.gz, keep them in the same parent directory:
> ```
> your-install-location/
> ├── influxdb3
@@ -51,7 +54,7 @@ To activate the Processing Engine, start your {{% product-name %}} server with t
>
> Add the parent directory to your PATH; do not move the binary out of this directory.
{{% code-placeholders "NODE_ID|OBJECT_STORE_TYPE|PLUGIN_DIR" %}}
```bash
influxdb3 serve \
@@ -68,11 +71,12 @@ In the example above, replace the following:
- {{% code-placeholder-key %}}`OBJECT_STORE_TYPE`{{% /code-placeholder-key %}}: Type of object store (for example, file or s3)
- {{% code-placeholder-key %}}`PLUGIN_DIR`{{% /code-placeholder-key %}}: Absolute path to the directory where plugin files are stored. Store all plugin files in this directory or its subdirectories.
> [!Note]
> #### Use custom plugin repositories
>
> By default, plugins referenced with the `gh:` prefix are fetched from the official
> [influxdata/influxdb3_plugins](https://github.com/influxdata/influxdb3_plugins) repository.
> To use a custom repository, add the `--plugin-repo` flag when starting the server.
> See [Use a custom plugin repository](#option-3-use-a-custom-plugin-repository) for details.
@@ -88,7 +92,8 @@ When running {{% product-name %}} in a distributed setup, follow these steps to
3. Maintain identical plugin files across all instances where plugins run
   - Use shared storage or file synchronization tools to keep plugins consistent
> [!Note]
> #### Provide plugins to nodes that run them
>
> Configure your plugin directory on the same system as the nodes that run the triggers and plugins.
@@ -99,7 +104,7 @@ For more information about configuring distributed environments, see the [Distri
## Add a Processing Engine plugin
A plugin is a Python script that defines a specific function signature for a trigger (_trigger spec_). When the specified event occurs, InfluxDB runs the plugin.
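For example, a data-write plugin defines a `process_writes` function. The sketch below is illustrative only — the exact table-batch structure and the `influxdb3_local` API surface are defined in the plugin reference:

```python
# Minimal data-write plugin sketch.
# InfluxDB calls process_writes() with each batch of written data;
# `influxdb3_local` is the server-provided shared API object and
# `args` holds any configured trigger arguments.

def process_writes(influxdb3_local, table_batches, args=None):
    for table_batch in table_batches:
        table_name = table_batch["table_name"]
        row_count = len(table_batch["rows"])
        # Write a line to the server logs via the shared API
        influxdb3_local.info(f"Received {row_count} rows for table {table_name}")
```

Save a file like this in your plugin directory, then reference it when creating a trigger.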
### Choose a plugin strategy
@@ -120,7 +125,7 @@ Browse the [plugin library](/influxdb3/version/plugins/library/) to find example
- **Integration**: Connect to external services and APIs
- **System monitoring**: Track resource usage and health metrics
For community contributions, see the [influxdb3_plugins repository](https://github.com/influxdata/influxdb3_plugins) on GitHub.
#### Add example plugins
@@ -196,7 +201,7 @@ See the [plugin-repo configuration option](/influxdb3/version/reference/config-o
Plugins can do the following:
- Receive plugin-specific arguments (such as written data, call time, or an HTTP request)
- Access keyword arguments (as `args`) passed from _trigger arguments_ configurations
- Access the `influxdb3_local` shared API to write data, query data, and manage state between executions
For more information about available functions, arguments, and how plugins interact with InfluxDB, see how to [Extend plugins](/influxdb3/version/extend-plugin/).
@@ -234,11 +239,13 @@ Choose a plugin type based on your automation goals:
Plugins now support both single-file and multifile architectures:
**Single-file plugins:**
- Create a `.py` file in your plugins directory
- Add the appropriate function signature based on your chosen plugin type
- Write your processing logic inside the function
**Multifile plugins:**
- Create a directory in your plugins directory
- Add an `__init__.py` file as the entry point (required)
- Organize supporting modules in additional `.py` files
@@ -382,12 +389,14 @@ influxdb3 create trigger \
  complex_trigger
```
> [!Important]
> #### Admin privileges required
>
> Plugin uploads require an admin token. This security measure prevents unauthorized code execution on the server.
**When to use plugin upload:**
- Local plugin development and testing
- Deploying plugins without SSH access to the server
- Rapid iteration on plugin code
@@ -416,6 +425,7 @@ influxdb3 update trigger \
```
The update operation:
- Replaces plugin files immediately
- Preserves trigger configuration (spec, schedule, arguments)
- Requires an admin token for security
@@ -449,6 +459,7 @@ influxdb3 query \
```
**Available columns:**
- `plugin_name` (String): Trigger name
- `file_name` (String): Plugin file name
- `file_path` (String): Full server path
@@ -479,7 +490,7 @@ For more information, see the [`influxdb3 show plugins` reference](/influxdb3/ve
### Understand trigger types
| Plugin Type | Trigger Specification | When Plugin Runs |
|------------|----------------------|-----------------|
| Data write | `table:<TABLE_NAME>` or `all_tables` | When data is written to tables |
| Scheduled | `every:<DURATION>` or `cron:<EXPRESSION>` | At specified time intervals |
| HTTP request | `request:<REQUEST_PATH>` | When HTTP requests are received |
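The spec formats in the table above can be illustrated with a small, hypothetical classifier. The server does its own parsing; this sketch only mirrors the documented prefixes:

```python
# Hypothetical helper that classifies a trigger specification string
# according to the documented spec formats (illustrative only).

def classify_trigger_spec(spec: str) -> str:
    if spec == "all_tables" or spec.startswith("table:"):
        return "data write"
    if spec.startswith("every:") or spec.startswith("cron:"):
        return "scheduled"
    if spec.startswith("request:"):
        return "http request"
    raise ValueError(f"Unrecognized trigger spec: {spec}")
```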
@ -488,7 +499,7 @@ For more information, see the [`influxdb3 show plugins` reference](/influxdb3/ve
Use the `influxdb3 create trigger` command with the appropriate trigger specification: Use the `influxdb3 create trigger` command with the appropriate trigger specification:
{{% code-placeholders "SPECIFICATION|PLUGIN_FILE|DATABASE_NAME|TRIGGER_NAME" %}} {{% code-placeholders "SPECIFICATION|PLUGIN\_FILE|DATABASE\_NAME|TRIGGER\_NAME" %}}
```bash ```bash
influxdb3 create trigger \ influxdb3 create trigger \
@ -507,9 +518,9 @@ In the example above, replace the following:
- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: Name of the database
- {{% code-placeholder-key %}}`TRIGGER_NAME`{{% /code-placeholder-key %}}: Name of the new trigger
> [!Note]
> When specifying a local plugin file, the `--plugin-filename` parameter
> _is relative to_ the `--plugin-dir` configured for the server.
> You don't need to provide an absolute path.
### Trigger specification examples
The plugin receives the written data and table information.
If you want to use a single trigger for all tables but exclude specific tables,
you can use trigger arguments and your plugin code to filter out unwanted tables--for example:
{{% code-placeholders "DATABASE_NAME|AUTH_TOKEN" %}}
```bash
influxdb3 create trigger \
--database DATABASE_NAME \
--token AUTH_TOKEN \
--trigger-spec "all_tables" \
--trigger-arguments "exclude_tables=temp_data,debug_info,system_logs" \
data_processor
```
{{% /code-placeholders %}}
Replace the following:
- {{% code-placeholder-key %}}DATABASE_NAME{{% /code-placeholder-key %}}: the name of the database
- {{% code-placeholder-key %}}AUTH_TOKEN{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in "enterprise" %}} with write permissions on the specified database{{% /show-in %}}
Then, in your plugin:
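A minimal sketch of that filtering logic, assuming the standard `process_writes` WAL-plugin signature and that the `--trigger-arguments` values arrive in `args` as strings:

```python
def process_writes(influxdb3_local, table_batches, args=None):
    # Parse the comma-separated exclude list passed via --trigger-arguments.
    excluded = set()
    if args and "exclude_tables" in args:
        excluded = {name.strip() for name in args["exclude_tables"].split(",")}

    processed = []
    for table_batch in table_batches:
        if table_batch["table_name"] in excluded:
            continue  # skip tables named in exclude_tables
        # ...your per-table processing here...
        processed.append(table_batch["table_name"])
    return processed
```

The helper returns the processed table names only to make the sketch easy to verify; a real plugin would act on each batch instead.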
## Manage plugin dependencies
Use the `influxdb3 install package` command to add third-party libraries (like `pandas`, `requests`, or `influxdb3-python`) to your plugin environment.
This installs packages into the Processing Engine's embedded Python environment to ensure compatibility with your InfluxDB instance.
{{% code-placeholders "CONTAINER_NAME|PACKAGE_NAME" %}}
{{< code-tabs-wrapper >}}
- Use the CLI command when running InfluxDB directly on your system.
- Use the Docker variant if you're running InfluxDB in a containerized environment.
> [!Important]
>
> #### Use bundled Python for plugins
>
> When you start the server with the `--plugin-dir` option, InfluxDB 3 creates a Python virtual environment (`<PLUGIN_DIR>/venv`) for your plugins.
> If you need to create a custom virtual environment, use the Python interpreter bundled with InfluxDB 3. Don't use the system Python.
> Creating a virtual environment with the system Python (for example, using `python -m venv`) can lead to runtime errors and plugin failures.
When package installation is disabled:
- The Processing Engine continues to function normally for triggers
- Plugin code executes without restrictions
- Package installation commands are blocked
**Use cases for disabled package management:**
- Air-gapped environments without internet access
- Compliance requirements prohibiting runtime package installation
- Centrally managed dependency environments
### Best practices
**For development:**
- Use the `--upload` flag to deploy plugins during development
- Test plugins in non-production environments first
- Review plugin code before deployment
**For production:**
- Pre-deploy plugins to the server's plugin directory via secure file transfer
- Use custom plugin repositories for vetted, approved plugins
- Disable package installation (`--package-manager disabled`) in locked-down environments
Each plugin must run on a node that supports its trigger type:
| Plugin type  | Trigger spec             | Runs on                      |
| ------------ | ------------------------ | ---------------------------- |
| Data write   | `table:` or `all_tables` | Ingester nodes               |
| Scheduled    | `every:` or `cron:`      | Any node with scheduler      |
| HTTP request | `request:`               | Nodes that serve API traffic |
For example:
- Run write-ahead log (WAL) plugins on ingester nodes.
- Run scheduled plugins on any node configured to execute them.
- Run HTTP-triggered plugins on querier nodes or any node that handles HTTP endpoints.
Place all plugin files in the `--plugin-dir` directory configured for each node.
> [!Note]
> Triggers fail if the plugin file isn't available on the node where it runs.
### Route third-party clients to querier nodes
> [!Note]
>
> #### InfluxDB 3 Core and Enterprise relationship
>
> InfluxDB 3 Enterprise is a superset of InfluxDB 3 Core.
### Core
#### Bug Fixes
- Upgrading from 3.3.0 to 3.4.x no longer causes possible catalog migration issues ([#26756](https://github.com/influxdata/influxdb/pull/26756))
## v3.4.0 {date="2025-08-27"}
  ([#26734](https://github.com/influxdata/influxdb/pull/26734))
- **Azure Endpoint**:
  - Use the `--azure-endpoint` option with `influxdb3 serve` to specify the Azure Blob Storage endpoint for object store connections. ([#26687](https://github.com/influxdata/influxdb/pull/26687))
- **No_Sync via CLI**:
  - Use the `--no-sync` option with `influxdb3 write` to skip waiting for WAL persistence on write and immediately return a response to the write request. ([#26703](https://github.com/influxdata/influxdb/pull/26703))
#### Bug Fixes
- Validate tag and field names when creating tables ([#26641](https://github.com/influxdata/influxdb/pull/26641))
- Using GROUP BY twice on the same column no longer causes incorrect data ([#26732](https://github.com/influxdata/influxdb/pull/26732))
#### Security & Misc
- Reduce verbosity of the TableIndexCache log. ([#26709](https://github.com/influxdata/influxdb/pull/26709))
- WAL replay concurrency limit defaults to number of CPU cores, preventing possible OOMs. ([#26715](https://github.com/influxdata/influxdb/pull/26715))
- Remove unsafe signal_handler code. ([#26685](https://github.com/influxdata/influxdb/pull/26685))
- Upgrade Python version to 3.13.7-20250818. ([#26686](https://github.com/influxdata/influxdb/pull/26686), [#26700](https://github.com/influxdata/influxdb/pull/26700))
- Tags with `/` in the name no longer break the primary key.
### Enterprise
All Core updates are included in Enterprise. Additional Enterprise-specific features and fixes:
#### Features
- **Token Provisioning**:
  - Generate _resource_ and _admin_ tokens offline and use them when starting the database.
  - Select a home or trial license without using an interactive terminal.
    Use the `--license-type` option (home, trial, or commercial) with the `influxdb3 serve` command to automate the selection of the license type.
#### Bug Fixes
- Don't initialize the Processing Engine when the specified `--mode` does not require it.
- Don't panic when `INFLUXDB3_PLUGIN_DIR` is set in containers without the Processing Engine enabled.
## v3.3.0 {date="2025-07-29"}
### Core
## v3.2.0 {date="2025-06-25"}
**Core**: revision 1ca3168bee\
**Enterprise**: revision 1ca3168bee
### Core
- **License handling**: Trim whitespace from license file contents after reading to prevent validation issues
## v3.1.0 {date="2025-05-29"}
**Core**: revision 482dd8aac580c04f37e8713a8fffae89ae8bc264\
**Enterprise**: revision 2cb23cf32b67f9f0d0803e31b356813a1a151b00
### Core
#### Token and Security Updates
- Named admin tokens can now be created, with configurable expirations
- `health`, `ping`, and `metrics` endpoints can now be opted out of authorization
- `Basic $TOKEN` is now supported for all APIs
- Additional info available when starting InfluxDB using `--without-auth`
#### Additional Updates
- New catalog metrics available for count operations
- New object store metrics available for transfer latencies and transfer sizes
- New query duration metrics available for Last Value caches
- Other performance improvements
#### Fixes
- New tags are now backfilled with NULL instead of empty strings
- Bitcode deserialization error fixed
- Series key metadata not persisting to Parquet is now fixed
### Enterprise
#### Token and Security Updates
- Resource tokens now use resource names in `show tokens`
- Tokens can now be granted `CREATE` permission for creating databases
#### Additional Updates
- Last value caches reload on restart
- Distinct value caches reload on restart
- Other performance improvements
- Replaces remaining "INFLUXDB_IOX" Dockerfile environment variables with the following:
  - `ENV INFLUXDB3_OBJECT_STORE=file`
  - `ENV INFLUXDB3_DB_DIR=/var/lib/influxdb3`
#### Fixes
- Improvements and fixes for license validations
- False positive fixed for catalog error on shutdown
- UX improvements for error and onboarding messages
- Other general fixes and corrections
## v3.0.3 {date="2025-05-16"}
**Core**: revision 384c457ef5f0d5ca4981b22855e411d8cac2688e\
**Enterprise**: revision 34f4d28295132b9efafebf654e9f6decd1a13caf
- Fix licensing validation issues.
- Other fixes and performance improvements.
## v3.0.2 {date="2025-05-01"}
**Core**: revision d80d6cd60049c7b266794a48c97b1b6438ac5da9\
**Enterprise**: revision e9d7e03c2290d0c3e44d26e3eeb60aaf12099f29
#### Security updates
- Generate testing TLS certificates on the fly.
- Set the TLS CA via the `INFLUXDB3_TLS_CA` environment variable.
- Enforce a minimum TLS version for enhanced security.
- Allow CORS requests from browsers.
- Enforce the `--num-cores` thread allocation limit.
## v3.0.1 {date="2025-04-16"}
**Core**: revision d7c071e0c4959beebc7a1a433daf8916abd51214\
**Enterprise**: revision 96e4aad870b44709e149160d523b4319ea91b54c
### Core
#### Updates
- TLS CA can now be set with an environment variable: `INFLUXDB3_TLS_CA`
- Other general performance improvements
#### Fixes
- The `--tags` argument is now optional for creating a table and, _if_ specified, requires at least one tag
### Enterprise
#### Updates
- Catalog limits for databases, tables, and columns are now configurable using `influxdb3 serve` options:
  - `--num-database-limit`
  - `--num-table-limit`
- Other general performance improvements
#### Fixes
- **Home** license thread count log errors
## v3.0.0 {date="2025-04-14"}
- You can now use Commercial, Trial, and At-Home licenses.
## v3.0.0-0.beta.3 {date="2025-04-01"}
**Core**: revision f881c5844bec93a85242f26357a1ef3ebf419dd3\
**Enterprise**: revision 6bef9e700a59c0973b0cefdc6baf11583933e262
### Core
#### General Improvements
- InfluxDB 3 now supports graceful shutdowns when sending the interrupt signal to the service.
#### Bug fixes
- Empty batches in JSON format results are now handled properly
- The Processing Engine now properly extracts data from DictionaryArrays
### Enterprise
#### Multi-node improvements
- Query nodes now automatically detect new ingest nodes
#### Bug fixes
- Several fixes for compaction planning and processing
- The Processing Engine now properly extracts data from DictionaryArrays
## v3.0.0-0.beta.2 {date="2025-03-24"}
**Core**: revision 033e1176d8c322b763b4aefb24686121b1b24f7c\
**Enterprise**: revision e530fcd498c593cffec2b56d4f5194afc717d898
This update brings several backend performance improvements to both Core and Enterprise in preparation for additional new features over the next several weeks.
## v3.0.0-0.beta.1 {date="2025-03-17"}
### Core
#### Features
##### Query and storage enhancements
- New ability to stream response data for CSV and JSON queries, similar to how JSONL streaming works
- Parquet files are now cached on the query path, improving performance
- Query buffer is incrementally cleared when snapshotting, lowering memory spikes
##### Processing engine improvements
- New Trigger Types:
  - _Scheduled_: Run Python plugins on a custom, time-defined basis
  - _Request_: Call Python plugins via HTTP requests
- New in-memory cache for storing data temporarily; cached data can be stored for a single trigger or across all triggers
- Integration with virtual environments and install packages:
  - Specify Python virtual environment via CLI or `VIRTUAL_ENV` variable
- Write to logs from within the Processing Engine
##### Database and CLI improvements
- You can now specify the precision on your timestamps for writes using the `--precision` flag. Includes nano/micro/milli/seconds (ns/us/ms/s)
- Added a new `show` system subcommand to display system tables with different options via SQL (default limit: 100)
- Clearer table creation error messages
##### Bug fixes
- If a database was created and the service was killed before any data was written, the database would not be retained
- A last cache with specific "value" columns could not be queried
- Running CTRL-C sometimes failed to stop an InfluxDB process due to a Python trigger
For Core and Enterprise, there are parameter changes for simplicity:
| Old Parameter                | New Parameter |
| ---------------------------- | ------------- |
| `--writer-id`<br>`--host-id` | `--node-id`   |
### Enterprise features
#### Cluster management
- Nodes are now associated with _clusters_, simplifying compaction, read replication, and processing
- Node specs are now available for simpler management of cache creations
#### Mode types
For Enterprise, additional parameters for the `serve` command have been consolidated for simplicity:
| Old Parameter                                       | New Parameter                        |
| --------------------------------------------------- | ------------------------------------ |
| `--read-from-node-ids`<br>`--compact-from-node-ids` | `--cluster-id`                       |
| `--run-compactions`<br>`--mode=compactor`           | `--mode=compact`<br>`--mode=compact` |
influxdb_cloud:
  ai_sample_questions:
    - How is Cloud 2 different from Cloud Serverless?
    - How do I manage auth tokens in InfluxDB Cloud 2?
explorer:
  name: InfluxDB 3 Explorer
  namespace: explorer
  menu_category: other
  list_order: 4
  versions: [v1]
  latest: explorer
  latest_patch: 1.1.0
  ai_sample_questions:
    - How do I use InfluxDB 3 Explorer to visualize data?
    - How do I create a dashboard in InfluxDB 3 Explorer?
    - How do I query data using InfluxDB 3 Explorer?
telegraf:
  name: Telegraf
  namespace: telegraf
"version": "1.0.0",
"description": "InfluxDB documentation",
"license": "MIT",
"bin": {
"docs": "scripts/docs-cli.js"
},
"resolutions": {
"serialize-javascript": "^6.0.2"
},
"vanillajs-datepicker": "^1.3.4"
},
"scripts": {
"postinstall": "node scripts/setup-local-bin.js",
"docs:create": "node scripts/docs-create.js",
"docs:edit": "node scripts/docs-edit.js",
"docs:add-placeholders": "node scripts/add-placeholders.js",
"test": "test"
},
"keywords": [],
"author": "",
"optionalDependencies": {
"copilot": "^0.0.2"
}
}
# Add Placeholders Script
Automatically adds placeholder syntax to code blocks and placeholder descriptions in markdown files.
## What it does
This script finds UPPERCASE placeholders in code blocks and:
1. **Adds `{ placeholders="PATTERN1|PATTERN2" }` attribute** to code block fences
2. **Wraps placeholder descriptions** with `{{% code-placeholder-key %}}` shortcodes
## Usage
### Direct usage
```bash
# Process a single file
node scripts/add-placeholders.js <file.md>
# Dry run to preview changes
node scripts/add-placeholders.js <file.md> --dry
# Example
node scripts/add-placeholders.js content/influxdb3/enterprise/admin/upgrade.md
```
### Using npm script
```bash
# Process a file
yarn docs:add-placeholders <file.md>
# Dry run
yarn docs:add-placeholders <file.md> --dry
```
## Example transformations
### Before
````markdown
```bash
influxdb3 query \
--database SYSTEM_DATABASE \
--token ADMIN_TOKEN \
"SELECT * FROM system.version"
```
Replace the following:
- **`SYSTEM_DATABASE`**: The name of your system database
- **`ADMIN_TOKEN`**: An admin token with read permissions
````
### After
````markdown
```bash { placeholders="ADMIN_TOKEN|SYSTEM_DATABASE" }
influxdb3 query \
--database SYSTEM_DATABASE \
--token ADMIN_TOKEN \
"SELECT * FROM system.version"
```
Replace the following:
- {{% code-placeholder-key %}}`SYSTEM_DATABASE`{{% /code-placeholder-key %}}: The name of your system database
- {{% code-placeholder-key %}}`ADMIN_TOKEN`{{% /code-placeholder-key %}}: An admin token with read permissions
````
## How it works
### Placeholder detection
The script automatically detects UPPERCASE placeholders in code blocks using these rules:
- **Pattern**: Matches words with 2+ characters, all uppercase, can include underscores
- **Excludes common words**: HTTP verbs (GET, POST), protocols (HTTP, HTTPS), SQL keywords (SELECT, FROM), etc.
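Those detection rules can be sketched as follows (Python for illustration; the script itself is Node.js, and the exact regex and stop-word list here are assumptions):

```python
import re

# Words that match the uppercase pattern but shouldn't be treated as placeholders.
COMMON_WORDS = {"GET", "POST", "PUT", "PATCH", "DELETE", "HTTP", "HTTPS",
                "SELECT", "FROM", "WHERE", "GROUP", "ORDER", "LIMIT"}

# 2+ characters, all uppercase, underscores and digits allowed.
PLACEHOLDER_RE = re.compile(r"\b[A-Z][A-Z0-9_]+\b")

def extract_placeholders(code: str) -> list[str]:
    """Return the sorted set of placeholder-like tokens in a code block."""
    found = {m.group(0) for m in PLACEHOLDER_RE.finditer(code)}
    return sorted(found - COMMON_WORDS)
```

The sorted result maps directly onto the `placeholders="PATTERN1|PATTERN2"` attribute the script writes to the fence line.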
### Code block processing
1. Finds all code blocks (including indented ones)
2. Extracts UPPERCASE placeholders
3. Adds `{ placeholders="..." }` attribute to the fence line
4. Preserves indentation and language identifiers
### Description wrapping
1. Detects "Replace the following:" sections
2. Wraps placeholder descriptions matching the pattern `- **PLACEHOLDER**: description`
3. Preserves indentation and formatting
4. Skips already-wrapped descriptions
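The wrapping transform can be sketched like this (again Python for illustration rather than the script's actual JavaScript; the regex is an assumption based on the before/after examples above):

```python
import re

# Matches a description line such as: - **`SYSTEM_DATABASE`**: The name of ...
DESC_RE = re.compile(r"^(\s*)- \*\*`(?P<name>[A-Z0-9_]+)`\*\*: (?P<desc>.*)$")

def wrap_description(line: str) -> str:
    """Wrap a `- **PLACEHOLDER**: ...` line in code-placeholder-key shortcodes."""
    m = DESC_RE.match(line)
    if not m:
        return line  # not a placeholder description; leave unchanged
    indent = m.group(1)
    return (f"{indent}- {{{{% code-placeholder-key %}}}}`{m.group('name')}`"
            f"{{{{% /code-placeholder-key %}}}}: {m.group('desc')}")
```

Because non-matching lines (including already-wrapped ones) pass through untouched, repeated runs are idempotent.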
## Options
- `--dry` or `-d`: Preview changes without modifying files
## Notes
- The script is idempotent - running it multiple times on the same file won't duplicate syntax
- Preserves existing `placeholders` attributes in code blocks
- Works with both indented and non-indented code blocks
- Handles multiple "Replace the following:" sections in a single file
## Related documentation
- [DOCS-SHORTCODES.md](../DOCS-SHORTCODES.md) - Complete shortcode reference
- [DOCS-CONTRIBUTING.md](../DOCS-CONTRIBUTING.md) - Placeholder conventions and style guidelines


@@ -16,7 +16,7 @@ import { readFileSync, writeFileSync } from 'fs';
 import { parseArgs } from 'node:util';

 // Parse command-line arguments
-const { positionals } = parseArgs({
+const { positionals, values } = parseArgs({
   allowPositionals: true,
   options: {
     dry: {
@@ -24,19 +24,47 @@ const { positionals } = parseArgs({
       short: 'd',
       default: false,
     },
+    help: {
+      type: 'boolean',
+      short: 'h',
+      default: false,
+    },
   },
 });

+// Show help if requested
+if (values.help) {
+  console.log(`
+Add placeholder syntax to code blocks
+
+Usage:
+  docs placeholders <file.md> [options]
+
+Options:
+  --dry, -d   Preview changes without modifying files
+  --help, -h  Show this help message
+
+Examples:
+  docs placeholders content/influxdb3/enterprise/admin/upgrade.md
+  docs placeholders content/influxdb3/core/admin/databases/create.md --dry
+
+What it does:
+  1. Finds UPPERCASE placeholders in code blocks
+  2. Adds { placeholders="PATTERN1|PATTERN2" } attribute to code fences
+  3. Wraps placeholder descriptions with {{% code-placeholder-key %}} shortcodes
+`);
+  process.exit(0);
+}
+
 if (positionals.length === 0) {
-  console.error('Usage: node scripts/add-placeholders.js <file.md> [--dry]');
-  console.error(
-    'Example: node scripts/add-placeholders.js content/influxdb3/enterprise/admin/upgrade.md'
-  );
+  console.error('Error: Missing file path argument');
+  console.error('Usage: docs placeholders <file.md> [--dry]');
+  console.error('Run "docs placeholders --help" for more information');
   process.exit(1);
 }

 const filePath = positionals[0];
-const isDryRun = process.argv.includes('--dry') || process.argv.includes('-d');
+const isDryRun = values.dry;

 /**
  * Extract UPPERCASE placeholders from a code block

scripts/docs-cli.js (new executable file, 82 lines)

@@ -0,0 +1,82 @@
#!/usr/bin/env node
/**
* Main CLI entry point for docs tools
* Supports subcommands: create, edit, placeholders
*
* Usage:
* docs create <draft-path> [options]
* docs edit <url> [options]
* docs placeholders <file.md> [options]
*/
import { fileURLToPath } from 'url';
import { dirname, join } from 'path';
import { spawn } from 'child_process';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
// Get subcommand and remaining arguments
const subcommand = process.argv[2];
const args = process.argv.slice(3);
// Map subcommands to script files
const subcommands = {
create: 'docs-create.js',
edit: 'docs-edit.js',
placeholders: 'add-placeholders.js',
};
/**
* Print usage information
*/
function printUsage() {
console.log(`
Usage: docs <command> [options]
Commands:
create <draft-path> Create new documentation from draft
edit <url> Edit existing documentation
placeholders <file.md> Add placeholder syntax to code blocks
Examples:
docs create drafts/new-feature.md --products influxdb3_core
docs edit https://docs.influxdata.com/influxdb3/core/admin/
docs placeholders content/influxdb3/core/admin/upgrade.md
For command-specific help:
docs create --help
docs edit --help
docs placeholders --help
`);
}
// Handle no subcommand or help
if (!subcommand || subcommand === '--help' || subcommand === '-h') {
printUsage();
process.exit(subcommand ? 0 : 1);
}
// Validate subcommand
if (!subcommands[subcommand]) {
console.error(`Error: Unknown command '${subcommand}'`);
console.error(`Run 'docs --help' for usage information`);
process.exit(1);
}
// Execute the appropriate script
const scriptPath = join(__dirname, subcommands[subcommand]);
const child = spawn('node', [scriptPath, ...args], {
stdio: 'inherit',
env: process.env,
});
child.on('exit', (code) => {
process.exit(code || 0);
});
child.on('error', (err) => {
console.error(`Failed to execute ${subcommand}:`, err.message);
process.exit(1);
});


@@ -23,7 +23,12 @@ import {
   loadProducts,
   analyzeStructure,
 } from './lib/content-scaffolding.js';
-import { writeJson, readJson, fileExists } from './lib/file-operations.js';
+import {
+  writeJson,
+  readJson,
+  fileExists,
+  readDraft,
+} from './lib/file-operations.js';
 import { parseMultipleURLs } from './lib/url-parser.js';

 const __filename = fileURLToPath(import.meta.url);
@@ -36,6 +41,7 @@ const REPO_ROOT = join(__dirname, '..');
 const TMP_DIR = join(REPO_ROOT, '.tmp');
 const CONTEXT_FILE = join(TMP_DIR, 'scaffold-context.json');
 const PROPOSAL_FILE = join(TMP_DIR, 'scaffold-proposal.yml');
+const PROMPT_FILE = join(TMP_DIR, 'scaffold-prompt.txt');

 // Colors for console output
 const colors = {
@@ -49,25 +55,53 @@
 };

 /**
- * Print colored output
+ * Print colored output to stderr (so it doesn't interfere with piped output)
  */
 function log(message, color = 'reset') {
-  console.log(`${colors[color]}${message}${colors.reset}`);
+  // Write to stderr so logs don't interfere with stdout (prompt path/text)
+  console.error(`${colors[color]}${message}${colors.reset}`);
+}
+
+/**
+ * Check if running in Claude Code environment
+ * @returns {boolean} True if Task function is available (Claude Code)
+ */
+function isClaudeCode() {
+  return typeof Task !== 'undefined';
+}
+
+/**
+ * Output prompt for use with external tools
+ * @param {string} prompt - The generated prompt text
+ * @param {boolean} printPrompt - If true, force print to stdout
+ */
+function outputPromptForExternalUse(prompt, printPrompt = false) {
+  // Auto-detect if stdout is being piped
+  const isBeingPiped = !process.stdout.isTTY;
+
+  // Print prompt text if explicitly requested OR if being piped
+  const shouldPrintText = printPrompt || isBeingPiped;
+
+  if (shouldPrintText) {
+    // Output prompt text to stdout
+    console.log(prompt);
+  } else {
+    // Write prompt to file and output file path
+    writeFileSync(PROMPT_FILE, prompt, 'utf8');
+    console.log(PROMPT_FILE);
+  }
+
+  process.exit(0);
 }

 /**
  * Prompt user for input (works in TTY and non-TTY environments)
  */
 async function promptUser(question) {
+  // For non-TTY environments, return empty string
+  if (!process.stdin.isTTY) {
+    return '';
+  }
+
   const readline = await import('readline');
   const rl = readline.createInterface({
     input: process.stdin,
     output: process.stdout,
+    terminal: process.stdin.isTTY !== undefined ? process.stdin.isTTY : true,
   });

   return new Promise((resolve) => {
@@ -91,30 +125,28 @@ function divider() {
 function parseArguments() {
   const { values, positionals } = parseArgs({
     options: {
-      draft: { type: 'string' },
-      from: { type: 'string' },
+      'from-draft': { type: 'string' },
       url: { type: 'string', multiple: true },
       urls: { type: 'string' },
       products: { type: 'string' },
       ai: { type: 'string', default: 'claude' },
       execute: { type: 'boolean', default: false },
       'context-only': { type: 'boolean', default: false },
+      'print-prompt': { type: 'boolean', default: false },
       proposal: { type: 'string' },
       'dry-run': { type: 'boolean', default: false },
       yes: { type: 'boolean', default: false },
       help: { type: 'boolean', default: false },
+      'follow-external': { type: 'boolean', default: false },
     },
     allowPositionals: true,
   });

   // First positional argument is treated as draft path
-  if (positionals.length > 0 && !values.draft && !values.from) {
+  if (positionals.length > 0 && !values['from-draft']) {
     values.draft = positionals[0];
-  }
-
-  // --from is an alias for --draft
-  if (values.from && !values.draft) {
-    values.draft = values.from;
+  } else if (values['from-draft']) {
+    values.draft = values['from-draft'];
   }

   // Normalize URLs into array
@@ -141,63 +173,101 @@
 ${colors.bright}Documentation Content Scaffolding${colors.reset}

 ${colors.bright}Usage:${colors.reset}
-  yarn docs:create <draft-path>                Create from draft
-  yarn docs:create --url <url> --draft <path>  Create at URL with draft content
+  docs create <draft-path>                     Create from draft
+  docs create --url <url> --from-draft <path>  Create at URL with draft
+
+  # Or use with yarn:
+  yarn docs:create <draft-path>
+  yarn docs:create --url <url> --from-draft <path>

 ${colors.bright}Options:${colors.reset}
   <draft-path>         Path to draft markdown file (positional argument)
-  --draft <path>       Path to draft markdown file
-  --from <path>        Alias for --draft
+  --from-draft <path>  Path to draft markdown file
   --url <url>          Documentation URL for new content location
+  --products <list>    Comma-separated product keys (required for stdin)
+                       Examples: influxdb3_core, influxdb3_enterprise
+  --follow-external    Include external (non-docs.influxdata.com) URLs
+                       when extracting links from draft. Without this flag,
+                       only local documentation links are followed.
   --context-only       Stop after context preparation
                        (for non-Claude tools)
+  --print-prompt       Force prompt text output (auto-enabled when piping)
   --proposal <path>    Import and execute proposal from JSON file
   --dry-run            Show what would be created without creating
   --yes                Skip confirmation prompt
   --help               Show this help message

-${colors.bright}Workflow (Create from draft):${colors.reset}
+${colors.bright}Stdin Support:${colors.reset}
+  When piping content from stdin, you must specify target products:
+    cat draft.md | docs create --products influxdb3_core
+    echo "# Content" | docs create --products influxdb3_core,influxdb3_enterprise
+
+${colors.bright}Link Following:${colors.reset}
+  By default, the script extracts links from your draft and prompts you
+  to select which ones to include as context. This helps the AI:
+  - Maintain consistent terminology
+  - Avoid duplicating content
+  - Add appropriate \`related\` frontmatter links
+
+  Local documentation links are always available for selection.
+  Use --follow-external to also include external URLs (GitHub, etc.)
+
+${colors.bright}Workflow (Inside Claude Code):${colors.reset}
   1. Create a draft markdown file with your content
-  2. Run: yarn docs:create drafts/new-feature.md
+  2. Run: docs create drafts/new-feature.md
   3. Script runs all agents automatically
   4. Review and confirm to create files

-${colors.bright}Workflow (Create at specific URL):${colors.reset}
+${colors.bright}Workflow (Pipe to external agent):${colors.reset}
   1. Create draft: vim drafts/new-feature.md
-  2. Run: yarn docs:create \\
-     --url https://docs.influxdata.com/influxdb3/core/admin/new-feature/ \\
-     --draft drafts/new-feature.md
-  3. Script determines structure from URL and uses draft content
-  4. Review and confirm to create files
-
-${colors.bright}Workflow (Manual - for non-Claude tools):${colors.reset}
-  1. Prepare context:
-     yarn docs:create --context-only drafts/new-feature.md
-  2. Run your AI tool with templates from scripts/templates/
-  3. Save proposal to .tmp/scaffold-proposal.json
-  4. Execute:
-     yarn docs:create --proposal .tmp/scaffold-proposal.json
+  2. Pipe to your AI tool (prompt auto-detected):
+     docs create drafts/new-feature.md --products X | claude -p
+     docs create drafts/new-feature.md --products X | copilot -p
+  3. AI generates files based on prompt

 ${colors.bright}Examples:${colors.reset}
-  # Create from draft (AI determines location)
+  # Inside Claude Code - automatic execution
+  docs create drafts/new-feature.md
+
+  # Pipe to external AI tools - prompt auto-detected
+  docs create drafts/new-feature.md --products influxdb3_core | claude -p
+  docs create drafts/new-feature.md --products influxdb3_core | copilot -p
+
+  # Pipe from stdin
+  cat drafts/quick-note.md | docs create --products influxdb3_core | claude -p
+  echo "# Quick note" | docs create --products influxdb3_core | copilot -p
+
+  # Get prompt file path (when not piping)
+  docs create drafts/new-feature.md  # Outputs: .tmp/scaffold-prompt.txt
+
+  # Still works with yarn
   yarn docs:create drafts/new-feature.md

-  # Create at specific URL with draft content
-  yarn docs:create --url /influxdb3/core/admin/new-feature/ \\
-      --draft drafts/new-feature.md
+  # Include external links for context selection
+  docs create --follow-external drafts/api-guide.md

-  # Preview changes
-  yarn docs:create --draft drafts/new-feature.md --dry-run
+${colors.bright}Smart Behavior:${colors.reset}
+  INSIDE Claude Code:
+    Automatically runs Task() agent to generate files
+
+  PIPING to another tool:
+    Auto-detects piping and outputs prompt text
+    No --print-prompt flag needed
+
+  INTERACTIVE (not piping):
+    Outputs prompt file path: .tmp/scaffold-prompt.txt
+    Use with: code .tmp/scaffold-prompt.txt

 ${colors.bright}Note:${colors.reset}
-  To edit existing pages, use: yarn docs:edit <url>
+  To edit existing pages, use: docs edit <url>
 `);
 }
 /**
  * Phase 1a: Prepare context from URLs
  */
-async function prepareURLPhase(urls, draftPath, options) {
+async function prepareURLPhase(urls, draftPath, options, stdinContent = null) {
   log('\n🔍 Analyzing URLs and finding files...', 'bright');

   try {
@@ -258,9 +328,18 @@
     // Build context (include URL analysis)
     let context = null;
-    if (draftPath) {
+    let draft;
+    if (stdinContent) {
+      // Use stdin content
+      draft = stdinContent;
+      log('✓ Using draft from stdin', 'green');
+      context = prepareContext(draft);
+    } else if (draftPath) {
       // Use draft content if provided
-      context = prepareContext(draftPath);
+      draft = readDraft(draftPath);
+      draft.path = draftPath;
+      context = prepareContext(draft);
     } else {
       // Minimal context for editing existing pages
       const products = loadProducts();
@@ -351,18 +430,83 @@
 /**
  * Phase 1b: Prepare context from draft
  */
-async function preparePhase(draftPath, options) {
+async function preparePhase(draftPath, options, stdinContent = null) {
   log('\n🔍 Analyzing draft and repository structure...', 'bright');

+  let draft;
+
+  // Handle stdin vs file
+  if (stdinContent) {
+    draft = stdinContent;
+    log('✓ Using draft from stdin', 'green');
+  } else {
     // Validate draft exists
     if (!fileExists(draftPath)) {
       log(`✗ Draft file not found: ${draftPath}`, 'red');
       process.exit(1);
     }
+    draft = readDraft(draftPath);
+    draft.path = draftPath;
+  }

   try {
     // Prepare context
-    const context = prepareContext(draftPath);
+    const context = prepareContext(draft);
+
+    // Extract links from draft
+    const { extractLinks, followLocalLinks, fetchExternalLinks } = await import(
+      './lib/content-scaffolding.js'
+    );
+    const links = extractLinks(draft.content);
+
+    if (links.localFiles.length > 0 || links.external.length > 0) {
+      // Filter external links if flag not set
+      if (!options['follow-external']) {
+        links.external = [];
+      }
+
+      // Let user select which external links to follow
+      // (local files are automatically included)
+      const selected = await selectLinksToFollow(links);
+
+      // Follow selected links
+      const linkedContent = [];
+
+      if (selected.selectedLocal.length > 0) {
+        log('\n📄 Loading local files...', 'cyan');
+        // Determine base path for resolving relative links
+        const basePath = draft.path
+          ? dirname(join(REPO_ROOT, draft.path))
+          : REPO_ROOT;
+        const localResults = followLocalLinks(selected.selectedLocal, basePath);
+        linkedContent.push(...localResults);
+        const successCount = localResults.filter((r) => !r.error).length;
+        log(`✓ Loaded ${successCount} local file(s)`, 'green');
+      }
+
+      if (selected.selectedExternal.length > 0) {
+        log('\n🌐 Fetching external URLs...', 'cyan');
+        const externalResults = await fetchExternalLinks(
+          selected.selectedExternal
+        );
+        linkedContent.push(...externalResults);
+        const successCount = externalResults.filter((r) => !r.error).length;
+        log(`✓ Fetched ${successCount} external page(s)`, 'green');
+      }
+
+      // Add to context
+      if (linkedContent.length > 0) {
+        context.linkedContent = linkedContent;
+
+        // Show any errors
+        const errors = linkedContent.filter((lc) => lc.error);
+        if (errors.length > 0) {
+          log('\n⚠ Some links could not be loaded:', 'yellow');
+          errors.forEach((e) => log(`${e.url}: ${e.error}`, 'yellow'));
+        }
+      }
+    }

     // Write context to temp file
     writeJson(CONTEXT_FILE, context);
@@ -382,6 +526,12 @@ async function preparePhase(draftPath, options) {
       `✓ Found ${context.structure.existingPaths.length} existing pages`,
       'green'
     );
+    if (context.linkedContent) {
+      log(
+        `✓ Included ${context.linkedContent.length} linked page(s) as context`,
+        'green'
+      );
+    }
     log(
       `✓ Prepared context → ${CONTEXT_FILE.replace(REPO_ROOT, '.')}`,
       'green'
@@ -441,25 +591,69 @@
     }
   }

+  // Sort products: detected first, then alphabetically within each group
+  allProducts.sort((a, b) => {
+    const aDetected = detected.includes(a);
+    const bDetected = detected.includes(b);
+
+    // Detected products first
+    if (aDetected && !bDetected) return -1;
+    if (!aDetected && bDetected) return 1;
+
+    // Then alphabetically
+    return a.localeCompare(b);
+  });
+
   // Case 1: Explicit flag provided
   if (options.products) {
-    const requested = options.products.split(',').map((p) => p.trim());
-    const invalid = requested.filter((p) => !allProducts.includes(p));
-    if (invalid.length > 0) {
+    const requestedKeys = options.products.split(',').map((p) => p.trim());
+
+    // Map product keys to display names
+    const requestedNames = [];
+    const invalidKeys = [];
+
+    for (const key of requestedKeys) {
+      const product = context.products[key];
+      if (product) {
+        // Valid product key found
+        if (product.versions && product.versions.length > 1) {
+          // Multi-version product: add all versions
+          product.versions.forEach((version) => {
+            const displayName = `${product.name} ${version}`;
+            if (allProducts.includes(displayName)) {
+              requestedNames.push(displayName);
+            }
+          });
+        } else {
+          // Single version product
+          if (allProducts.includes(product.name)) {
+            requestedNames.push(product.name);
+          }
+        }
+      } else if (allProducts.includes(key)) {
+        // It's already a display name (backwards compatibility)
+        requestedNames.push(key);
+      } else {
+        invalidKeys.push(key);
+      }
+    }
+
+    if (invalidKeys.length > 0) {
+      const validKeys = Object.keys(context.products).join(', ');
       log(
-        `\n✗ Invalid products: ${invalid.join(', ')}\n` +
-          `Valid products: ${allProducts.join(', ')}`,
+        `\n✗ Invalid product keys: ${invalidKeys.join(', ')}\n` +
+          `Valid keys: ${validKeys}`,
         'red'
       );
       process.exit(1);
     }
+
     log(
-      `✓ Using products from --products flag: ${requested.join(', ')}`,
+      `✓ Using products from --products flag: ${requestedNames.join(', ')}`,
       'green'
     );
-    return requested;
+    return requestedNames;
   }

   // Case 2: Unambiguous (single product detected)
@@ -514,6 +708,74 @@
   return selected;
 }
/**
* Prompt user to select which external links to include
* Local file paths are automatically followed
* @param {object} links - {localFiles, external} from extractLinks
* @returns {Promise<object>} {selectedLocal, selectedExternal}
*/
async function selectLinksToFollow(links) {
// Local files are followed automatically (no user prompt)
// External links require user selection
if (links.external.length === 0) {
return {
selectedLocal: links.localFiles || [],
selectedExternal: [],
};
}
log('\n🔗 Found external links in draft:\n', 'bright');
const allLinks = [];
let index = 1;
// Show external links for selection
links.external.forEach((link) => {
log(` ${index}. ${link}`, 'yellow');
allLinks.push({ type: 'external', url: link });
index++;
});
const answer = await promptUser(
'\nSelect external links to include as context ' +
'(comma-separated numbers, or "all"): '
);
if (!answer || answer.toLowerCase() === 'none') {
return {
selectedLocal: links.localFiles || [],
selectedExternal: [],
};
}
let selectedIndices;
if (answer.toLowerCase() === 'all') {
selectedIndices = Array.from({ length: allLinks.length }, (_, i) => i);
} else {
selectedIndices = answer
.split(',')
.map((s) => parseInt(s.trim()) - 1)
.filter((i) => i >= 0 && i < allLinks.length);
}
const selectedExternal = [];
selectedIndices.forEach((i) => {
const link = allLinks[i];
selectedExternal.push(link.url);
});
log(
`\n✓ Following ${links.localFiles?.length || 0} local file(s) ` +
`and ${selectedExternal.length} external link(s)`,
'green'
);
return {
selectedLocal: links.localFiles || [],
selectedExternal,
};
}
/**
 * Run single content generator agent with direct file generation (Claude Code)
 */
@@ -577,6 +839,30 @@ function generateClaudePrompt(
 **Target Products**: Use \`context.selectedProducts\` field (${selectedProducts.join(', ')})
 **Mode**: ${mode === 'edit' ? 'Edit existing content' : 'Create new documentation'}
 ${isURLBased ? `**URLs**: ${context.urls.map((u) => u.url).join(', ')}` : ''}
+${
+  context.linkedContent?.length > 0
+    ? `
+**Linked References**: The draft references ${context.linkedContent.length} page(s) from existing documentation.
+These are provided for context to help you:
+- Maintain consistent terminology and style
+- Avoid duplicating existing content
+- Understand related concepts and their structure
+- Add appropriate links to the \`related\` frontmatter field
+
+Linked content details available in \`context.linkedContent\`:
+${context.linkedContent
+  .map((lc) =>
+    lc.error
+      ? `- ❌ ${lc.url} (${lc.error})`
+      : `- ✓ [${lc.type}] ${lc.title} (${lc.path || lc.url})`
+  )
+  .join('\n')}
+
+**Important**: Use this content for context and reference, but do not copy it verbatim. Consider adding relevant pages to the \`related\` field in frontmatter.
+`
+    : ''
+}

 **Your Task**: Generate complete documentation files directly (no proposal step).
@@ -908,16 +1194,40 @@ async function executePhase(options) {
 async function main() {
   const options = parseArguments();

-  // Show help
+  // Show help first (don't wait for stdin)
   if (options.help) {
     printUsage();
     process.exit(0);
   }

+  // Check for stdin only if no draft file was provided
+  const hasStdin = !process.stdin.isTTY;
+  let stdinContent = null;
+
+  if (hasStdin && !options.draft) {
+    // Stdin requires --products option
+    if (!options.products) {
+      log(
+        '\n✗ Error: --products is required when piping content from stdin',
+        'red'
+      );
+      log(
+        'Example: echo "# Content" | yarn docs:create --products influxdb3_core',
+        'yellow'
+      );
+      process.exit(1);
+    }
+
+    // Import readDraftFromStdin
+    const { readDraftFromStdin } = await import('./lib/file-operations.js');
+    log('📥 Reading draft from stdin...', 'cyan');
+    stdinContent = await readDraftFromStdin();
+  }
+
   // Determine workflow
   if (options.url && options.url.length > 0) {
     // URL-based workflow requires draft content
-    if (!options.draft) {
+    if (!options.draft && !stdinContent) {
       log('\n✗ Error: --url requires --draft <path>', 'red');
       log('The --url option specifies WHERE to create content.', 'yellow');
       log(
@@ -934,29 +1244,75 @@
       process.exit(1);
     }

-    const context = await prepareURLPhase(options.url, options.draft, options);
+    const context = await prepareURLPhase(
+      options.url,
+      options.draft,
+      options,
+      stdinContent
+    );

     if (options['context-only']) {
       // Stop after context preparation
       process.exit(0);
     }

-    // Continue with AI analysis (Phase 2)
+    // Generate prompt for product selection
+    const selectedProducts = await selectProducts(context, options);
+    const mode = context.urls?.length > 0 ? 'create' : 'create';
+    const isURLBased = true;
+    const hasExistingContent =
+      context.existingContent &&
+      Object.keys(context.existingContent).length > 0;
+
+    const prompt = generateClaudePrompt(
+      context,
+      selectedProducts,
+      mode,
+      isURLBased,
+      hasExistingContent
+    );
+
+    // Check environment and handle prompt accordingly
+    if (!isClaudeCode()) {
+      // Not in Claude Code: output prompt for external use
+      outputPromptForExternalUse(prompt, options['print-prompt']);
+    }
+
+    // In Claude Code: continue with AI analysis (Phase 2)
     log('\n🤖 Running AI analysis with specialized agents...\n', 'bright');
     await runAgentAnalysis(context, options);

     // Execute proposal (Phase 3)
     await executePhase(options);
-  } else if (options.draft) {
-    // Draft-based workflow
-    const context = await preparePhase(options.draft, options);
+  } else if (options.draft || stdinContent) {
+    // Draft-based workflow (from file or stdin)
+    const context = await preparePhase(options.draft, options, stdinContent);

     if (options['context-only']) {
       // Stop after context preparation
       process.exit(0);
     }

-    // Continue with AI analysis (Phase 2)
+    // Generate prompt for product selection
+    const selectedProducts = await selectProducts(context, options);
+    const mode = 'create';
+    const isURLBased = false;
+
+    const prompt = generateClaudePrompt(
+      context,
+      selectedProducts,
+      mode,
+      isURLBased,
+      false
+    );
+
+    // Check environment and handle prompt accordingly
+    if (!isClaudeCode()) {
+      // Not in Claude Code: output prompt for external use
+      outputPromptForExternalUse(prompt, options['print-prompt']);
+    }
+
+    // In Claude Code: continue with AI analysis (Phase 2)
    log('\n🤖 Running AI analysis with specialized agents...\n', 'bright');
    await runAgentAnalysis(context, options);


@@ -4,7 +4,7 @@
  */
 import { readdirSync, readFileSync, existsSync, statSync } from 'fs';
-import { join, dirname } from 'path';
+import { join, dirname, resolve } from 'path';
 import { fileURLToPath } from 'url';
 import yaml from 'js-yaml';
 import matter from 'gray-matter';
@@ -314,12 +314,19 @@ export function findSiblingWeights(dirPath) {
 /**
  * Prepare complete context for AI analysis
- * @param {string} draftPath - Path to draft file
+ * @param {string|object} draftPathOrObject - Path to draft file or draft object
  * @returns {object} Context object
  */
-export function prepareContext(draftPath) {
-  // Read draft
-  const draft = readDraft(draftPath);
+export function prepareContext(draftPathOrObject) {
+  // Read draft - handle both file path and draft object
+  let draft;
+  if (typeof draftPathOrObject === 'string') {
+    draft = readDraft(draftPathOrObject);
+    draft.path = draftPathOrObject;
+  } else {
+    // Already a draft object from stdin
+    draft = draftPathOrObject;
+  }

   // Load products
   const products = loadProducts();
@@ -349,7 +356,7 @@ export function prepareContext(draftPath) {
   // Build context
   const context = {
     draft: {
-      path: draftPath,
+      path: draft.path || draftPathOrObject,
       content: draft.content,
       existingFrontmatter: draft.frontmatter,
     },
@@ -616,7 +623,7 @@ export function detectSharedContent(filePath) {
     if (parsed.data && parsed.data.source) {
       return parsed.data.source;
     }
-  } catch (error) {
+  } catch (_error) {
     // Can't parse, assume not shared
     return null;
   }
@@ -663,13 +670,13 @@ export function findSharedContentVariants(sourcePath) {
           const relativePath = fullPath.replace(REPO_ROOT + '/', '');
           variants.push(relativePath);
         }
-      } catch (error) {
+      } catch (_error) {
         // Skip files that can't be parsed
         continue;
       }
     }
   }
-  } catch (error) {
+  } catch (_error) {
     // Skip directories we can't read
   }
 }
@@ -758,3 +765,127 @@ export function analyzeURLs(parsedURLs) {
   return results;
 }
/**
 * Extract and categorize links from markdown content
 * @param {string} content - Markdown content
 * @returns {object} {localFiles: string[], external: string[]}
 */
export function extractLinks(content) {
  const localFiles = [];
  const external = [];

  // Match markdown links: [text](url)
  const linkRegex = /\[([^\]]+)\]\(([^)]+)\)/g;
  let match;

  while ((match = linkRegex.exec(content)) !== null) {
    const url = match[2];

    // Skip anchor links and mailto
    if (url.startsWith('#') || url.startsWith('mailto:')) {
      continue;
    }

    // Local file paths (relative paths) - automatically followed
    if (url.startsWith('../') || url.startsWith('./')) {
      localFiles.push(url);
    }
    // All HTTP/HTTPS URLs (including docs.influxdata.com) - user selects
    else if (url.startsWith('http://') || url.startsWith('https://')) {
      external.push(url);
    }
    // Absolute paths starting with / are ignored (no base context to resolve)
  }

  return {
    localFiles: [...new Set(localFiles)],
    external: [...new Set(external)],
  };
}

/**
 * Follow local file links (relative paths)
 * @param {string[]} links - Array of relative file paths
 * @param {string} basePath - Base path to resolve relative links from
 * @returns {object[]} Array of {url, title, content, path, frontmatter}
 */
export function followLocalLinks(links, basePath = REPO_ROOT) {
  const results = [];

  for (const link of links) {
    try {
      // Resolve relative path from base path
      const filePath = resolve(basePath, link);

      // Check if file exists
      if (existsSync(filePath)) {
        const fileContent = readFileSync(filePath, 'utf8');
        const parsed = matter(fileContent);

        results.push({
          url: link,
          title: parsed.data?.title || 'Untitled',
          content: parsed.content,
          path: filePath.replace(REPO_ROOT + '/', ''),
          frontmatter: parsed.data,
          type: 'local',
        });
      } else {
        results.push({
          url: link,
          error: 'File not found',
          type: 'local',
        });
      }
    } catch (error) {
      results.push({
        url: link,
        error: error.message,
        type: 'local',
      });
    }
  }

  return results;
}

/**
 * Fetch external URLs
 * @param {string[]} urls - Array of external URLs
 * @returns {Promise<object[]>} Array of {url, title, content, type}
 */
export async function fetchExternalLinks(urls) {
  // Dynamic import axios
  const axios = (await import('axios')).default;
  const results = [];

  for (const url of urls) {
    try {
      const response = await axios.get(url, {
        timeout: 10000,
        headers: { 'User-Agent': 'InfluxData-Docs-Bot/1.0' },
      });

      // Extract title from HTML or use URL
      const titleMatch = response.data.match(/<title>([^<]+)<\/title>/i);
      const title = titleMatch ? titleMatch[1] : url;

      results.push({
        url,
        title,
        content: response.data,
        type: 'external',
        contentType: response.headers['content-type'],
      });
    } catch (error) {
      results.push({
        url,
        error: error.message,
        type: 'external',
      });
    }
  }

  return results;
}

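The categorization rules in `extractLinks` above (skip anchors and `mailto:` links, collect relative paths, collect HTTP/HTTPS URLs, dedupe each list via `Set`) can be exercised in isolation. This sketch reproduces the function body as self-contained code with no gray-matter or axios dependency; the sample markdown is illustrative:

```javascript
// Minimal standalone sketch of the extractLinks logic above (no external deps).
function extractLinks(content) {
  const localFiles = [];
  const external = [];
  const linkRegex = /\[([^\]]+)\]\(([^)]+)\)/g;
  let match;
  while ((match = linkRegex.exec(content)) !== null) {
    const url = match[2];
    // Anchors and mailto links are skipped entirely
    if (url.startsWith('#') || url.startsWith('mailto:')) continue;
    if (url.startsWith('../') || url.startsWith('./')) localFiles.push(url);
    else if (url.startsWith('http://') || url.startsWith('https://')) external.push(url);
  }
  // Set removes duplicate URLs before returning
  return { localFiles: [...new Set(localFiles)], external: [...new Set(external)] };
}

const md = '[Guide](./setup.md) [API](https://docs.influxdata.com/) [Top](#top) [Guide](./setup.md)';
console.log(extractLinks(md));
// localFiles: ['./setup.md'], external: ['https://docs.influxdata.com/']
```

Note that the duplicate `[Guide](./setup.md)` link appears once in the result, and the `#top` anchor is dropped.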
View File

@@ -28,6 +28,38 @@ export function readDraft(filePath) {
  };
}

/**
 * Read draft content from stdin
 * @returns {Promise<{content: string, frontmatter: object, raw: string, path: string}>}
 */
export async function readDraftFromStdin() {
  return new Promise((resolve, reject) => {
    let data = '';

    process.stdin.setEncoding('utf8');

    process.stdin.on('data', (chunk) => {
      data += chunk;
    });

    process.stdin.on('end', () => {
      try {
        // Parse with gray-matter to extract frontmatter if present
        const parsed = matter(data);
        resolve({
          content: parsed.content,
          frontmatter: parsed.data || {},
          raw: data,
          path: '<stdin>',
        });
      } catch (error) {
        reject(error);
      }
    });

    process.stdin.on('error', reject);
  });
}

/**
 * Write a markdown file with frontmatter
 * @param {string} filePath - Path to write to

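`readDraftFromStdin` above follows the standard accumulate-then-resolve pattern: buffer every `data` chunk, resolve the joined text on `end`, reject on `error`. A generic sketch of that pattern (the helper name `readStream` is illustrative, not part of this module) works against any readable stream, which also makes it easy to test without piping real stdin:

```javascript
// Accumulate a readable stream's chunks and resolve the full text on 'end' —
// the same pattern readDraftFromStdin applies to process.stdin.
function readStream(stream) {
  return new Promise((resolve, reject) => {
    let data = '';
    stream.setEncoding('utf8');
    stream.on('data', (chunk) => {
      data += chunk;
    });
    stream.on('end', () => resolve(data));
    stream.on('error', reject);
  });
}

// Usage with stdin: const raw = await readStream(process.stdin);
```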
43
scripts/setup-local-bin.js Executable file
View File

@@ -0,0 +1,43 @@
#!/usr/bin/env node

/**
 * Setup script to make the `docs` command available locally after yarn install.
 * Creates a symlink in node_modules/.bin/docs pointing to scripts/docs-cli.js
 */

import { fileURLToPath } from 'url';
import { dirname, join } from 'path';
import { existsSync, mkdirSync, symlinkSync, unlinkSync, chmodSync } from 'fs';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const rootDir = join(__dirname, '..');
const binDir = join(rootDir, 'node_modules', '.bin');
const binLink = join(binDir, 'docs');
const targetScript = join(rootDir, 'scripts', 'docs-cli.js');

try {
  // Ensure node_modules/.bin directory exists
  if (!existsSync(binDir)) {
    mkdirSync(binDir, { recursive: true });
  }

  // Remove existing symlink if it exists
  if (existsSync(binLink)) {
    unlinkSync(binLink);
  }

  // Create symlink
  symlinkSync(targetScript, binLink, 'file');

  // Ensure the target script is executable
  chmodSync(targetScript, 0o755);

  console.log('✓ Created local `docs` command in node_modules/.bin/');
  console.log('  You can now use: npx docs <command>');
  console.log('  Or add node_modules/.bin to your PATH for direct access');
} catch (error) {
  console.error('Failed to setup local docs command:', error.message);
  process.exit(1);
}

View File

@@ -7,6 +7,7 @@ You are analyzing a documentation draft to generate an intelligent file structur
**Context file**: `.tmp/scaffold-context.json`

Read and analyze the context file, which contains:
- **draft**: The markdown content and any existing frontmatter
- **products**: Available InfluxDB products (Core, Enterprise, Cloud, etc.)
- **productHints**: Products mentioned or suggested based on content analysis
@@ -49,11 +50,12 @@ For each file, create complete frontmatter with:
- **weight**: Sequential weight based on siblings
- **source**: (for frontmatter-only files) Path to shared content
- **related**: 3-5 relevant related articles from `structure.existingPaths`
- **alt_links**: Map equivalent pages across products for cross-product navigation

### 4. Code Sample Considerations

Based on `versionInfo`:
- Use version-specific CLI commands (influxdb3, influx, influxctl)
- Reference appropriate API endpoints (/api/v3, /api/v2)
- Note testing requirements from `conventions.testing`
@@ -61,6 +63,7 @@ Based on `versionInfo`:

### 5. Style Compliance

Follow conventions from `conventions.namingRules`:
- Files: Use lowercase with hyphens (e.g., `manage-databases.md`)
- Directories: Use lowercase with hyphens
- Shared content: Place in appropriate `/content/shared/` subdirectory
@@ -133,4 +136,8 @@ Generate a JSON proposal matching the schema in `scripts/schemas/scaffold-propos
4. Generate complete frontmatter for all files
5. Save the proposal to `.tmp/scaffold-proposal.json`

The following command validates and creates files from the proposal:
```bash
npx docs create --proposal .tmp/scaffold-proposal.json
```

View File

@@ -194,6 +194,11 @@
  resolved "https://registry.yarnpkg.com/@evilmartians/lefthook/-/lefthook-1.12.3.tgz#081eca59a6d33646616af844244ce6842cd6b5a5"
  integrity sha512-MtXIt8h+EVTv5tCGLzh9UwbA/LRv6esdPJOHlxr8NDKHbFnbo8PvU5uVQcm3PAQTd4DZN3HoyokqrwGwntoq6w==

"@github/copilot@latest":
  version "0.0.353"
  resolved "https://registry.yarnpkg.com/@github/copilot/-/copilot-0.0.353.tgz#3c8d8a072b3defbd2200c9fe4fb636d633ac7f1e"
  integrity sha512-OYgCB4Jf7Y/Wor8mNNQcXEt1m1koYm/WwjGsr5mwABSVYXArWUeEfXqVbx+7O87ld5b+aWy2Zaa2bzKV8dmqaw==

"@humanfs/core@^0.19.1":
  version "0.19.1"
  resolved "https://registry.yarnpkg.com/@humanfs/core/-/core-0.19.1.tgz#17c55ca7d426733fe3c561906b8173c336b40a77"
@@ -1364,6 +1369,13 @@ confbox@^0.2.2:
  resolved "https://registry.yarnpkg.com/confbox/-/confbox-0.2.2.tgz#8652f53961c74d9e081784beed78555974a9c110"
  integrity sha512-1NB+BKqhtNipMsov4xI/NnhCKp9XG9NamYp5PVm9klAT0fsrNPjaFICsCFhNhwZJKNh7zB/3q8qXz0E9oaMNtQ==

copilot@^0.0.2:
  version "0.0.2"
  resolved "https://registry.yarnpkg.com/copilot/-/copilot-0.0.2.tgz#4712810c9182cd784820ed44627bedd32dd377f9"
  integrity sha512-nedf34AaYj9JnFhRmiJEZemAno2WDXMypq6FW5aCVR0N+QdpQ6viukP1JpvJDChpaMEVvbUkMjmjMifJbO/AgQ==
  dependencies:
    "@github/copilot" latest

core-util-is@1.0.2:
  version "1.0.2"
  resolved "https://registry.yarnpkg.com/core-util-is/-/core-util-is-1.0.2.tgz#b5fd54220aa2bc5ab57aab7140c940754503c1a7"