@@ -19,7 +19,7 @@ Complete reference for custom Hugo shortcodes used in InfluxData documentation.

- [Content Management](#content-management)
- [Special Purpose](#special-purpose)

---

## Notes and Warnings
@@ -146,7 +146,7 @@ Use the `{{< api-endpoint >}}` shortcode to generate a code block that contains

- **method**: HTTP request method (get, post, patch, put, or delete)
- **endpoint**: API endpoint
- **api-ref**: Link the endpoint to a specific place in the API documentation
- **influxdb_host**: Specify which InfluxDB product host to use _if the `endpoint` contains the `influxdb/host` shortcode_. Uses the current InfluxDB product as default. Supports the following product values:
  - oss
  - cloud
  - serverless
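
For example, the following invocation (the endpoint path is illustrative) renders a GET request code block:

```md
{{< api-endpoint method="get" endpoint="/api/v2/buckets" >}}
```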
@@ -268,11 +268,11 @@ To link to tabbed content, click on the tab and use the URL parameter shown. It

Use the `{{< page-nav >}}` shortcode to add page navigation buttons to a page. These are useful for guiding users through a set of docs that should be read in sequential order. The shortcode has the following parameters:

- **prev:** path of the previous document _(optional)_
- **next:** path of the next document _(optional)_
- **prevText:** override the button text linking to the previous document _(optional)_
- **nextText:** override the button text linking to the next document _(optional)_
- **keepTab:** include the currently selected tab in the button link _(optional)_

The shortcode generates buttons that link to both the previous and next documents. By default, the shortcode uses either the `list_title` or the `title` of the linked document, but you can use `prevText` and `nextText` to override button text.
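
For example, a minimal sketch using hypothetical document paths:

```md
{{< page-nav prev="/influxdb/v2/get-started/" next="/influxdb/v2/write-data/" nextText="Write data" >}}
```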
@@ -308,7 +308,7 @@ The children shortcode can also be used to list only "section" articles (those w

```md
{{< children show="pages" >}}
```

_By default, it displays both sections and pages._

Use the `type` argument to specify the format of the children list.
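
For example, to render the children list in the `articles` format:

```md
{{< children type="articles" >}}
```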
@@ -325,7 +325,7 @@ The following list types are available:

#### Include a "Read more" link

To include a "Read more" link with each child summary, set `readmore=true`. _Only the `articles` list type supports "Read more" links._

```md
{{< children readmore=true >}}
```
@@ -333,7 +333,7 @@ To include a "Read more" link with each child summary, set `readmore=true`. _Onl

#### Include a horizontal rule

To include a horizontal rule after each child summary, set `hr=true`. _Only the `articles` list type supports horizontal rules._

```md
{{< children hr=true >}}
```
@@ -390,11 +390,11 @@ This is useful for maintaining and referencing sample code variants in their nat

#### Include specific files from the same directory

> [!Caution]
> **Don't use for code examples**
> Using this and `get-shared-text` shortcodes to include code examples prevents the code from being tested.

To include the text from one file in another file in the same directory, use the `{{< get-leaf-text >}}` shortcode. The directory that contains both files must be a Hugo [_Leaf Bundle_](https://gohugo.io/content-management/page-bundles/#leaf-bundles), a directory that doesn't have any child directories.

In the following example, `api` is a leaf bundle. `content` isn't.
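
A directory tree along these lines (file names other than `api` and `content` are hypothetical) illustrates the difference:

```
content/                  <- not a leaf bundle (has a child directory)
└── api/                  <- leaf bundle (no child directories)
    ├── index.md
    └── curl-example.md
```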
@@ -447,13 +447,13 @@ Each children list `type` uses frontmatter properties when generating the list o

| Frontmatter          | articles | list | functions |
| :------------------- | :------: | :--: | :-------: |
| `list_title`         |    ✓     |  ✓   |     ✓     |
| `description`        |    ✓     |      |           |
| `external_url`       |    ✓     |  ✓   |           |
| `list_image`         |    ✓     |      |           |
| `list_note`          |          |  ✓   |           |
| `list_code_example`  |    ✓     |      |           |
| `list_query_example` |    ✓     |      |           |

## Visual Elements
@@ -695,7 +695,7 @@ Column 2

The following options are available:

- half _(Default)_
- third
- quarter
@@ -721,10 +721,10 @@ Click {{< caps >}}Add Data{{< /caps >}}

### Authentication token link

Use the `{{% token-link "<descriptor>" "<link_append>" %}}` shortcode to automatically generate links to token management documentation. The shortcode accepts two _optional_ arguments:

- **descriptor**: An optional token descriptor
- **link_append**: An optional path to append to the token management link path, `/<product>/<version>/admin/tokens/`.

```md
{{% token-link "database" "resource/" %}}
```
@@ -775,7 +775,7 @@ Descriptions should follow consistent patterns:

   - Recommended: "your {{% token-link "database" %}}"{{% show-in "enterprise" %}} with permissions on the specified database{{% /show-in %}}
   - Avoid: "your token", "the token", "an authorization token"
3. **Database names**:
   - Recommended: "the name of the database to [action]"
   - Avoid: "your database", "the database name"
4. **Conditional content**:
   - Use `{{% show-in "enterprise" %}}` for content specific to enterprise versions
@@ -797,13 +797,75 @@ Descriptions should follow consistent patterns:

#### Syntax

- `{ placeholders="PATTERN1|PATTERN2" }`: Use this code block attribute to define placeholder patterns
- `{{% code-placeholder-key %}}`: Use this shortcode to define a placeholder key
- `{{% /code-placeholder-key %}}`: Use this shortcode to close the key name

_The `placeholders` attribute supersedes the deprecated `code-placeholders` shortcode._

#### Automated placeholder syntax

Use the `docs placeholders` command to automatically add placeholder syntax to code blocks and descriptions:

```bash
# Process a file
npx docs placeholders content/influxdb3/core/admin/upgrade.md

# Preview changes without modifying the file
npx docs placeholders content/influxdb3/core/admin/upgrade.md --dry

# Get help
npx docs placeholders --help
```
**What it does:**

1. Detects UPPERCASE placeholders in code blocks
2. Adds the `{ placeholders="..." }` attribute to code fences
3. Wraps placeholder descriptions with `{{% code-placeholder-key %}}` shortcodes

**Example transformation:**

Before:

````markdown
```bash
influxdb3 query \
  --database SYSTEM_DATABASE \
  --token ADMIN_TOKEN \
  "SELECT * FROM system.version"
```

Replace the following:

- **`SYSTEM_DATABASE`**: The name of your system database
- **`ADMIN_TOKEN`**: An admin token with read permissions
````

After:

````markdown
```bash { placeholders="ADMIN_TOKEN|SYSTEM_DATABASE" }
influxdb3 query \
  --database SYSTEM_DATABASE \
  --token ADMIN_TOKEN \
  "SELECT * FROM system.version"
```

Replace the following:

- {{% code-placeholder-key %}}`SYSTEM_DATABASE`{{% /code-placeholder-key %}}: The name of your system database
- {{% code-placeholder-key %}}`ADMIN_TOKEN`{{% /code-placeholder-key %}}: An admin token with read permissions
````

**How it works:**

- Pattern: Matches words with 2+ characters, all uppercase, can include underscores
- Excludes common words: HTTP verbs (GET, POST), protocols (HTTP, HTTPS), SQL keywords (SELECT, FROM), and so on
- Idempotent: Running multiple times won't duplicate syntax
- Preserves existing `placeholders` attributes and already-wrapped descriptions

#### Manual placeholder usage

```sh { placeholders="DATABASE_NAME|USERNAME|PASSWORD_OR_TOKEN|API_TOKEN|exampleuser@influxdata.com" }
curl --request POST http://localhost:8086/write?db=DATABASE_NAME \
@@ -839,7 +901,7 @@ Sample dataset to output. Use either `set` argument name or provide the set as t

#### includeNull

Specify whether or not to include _null_ values in the dataset. Use either the `includeNull` argument name or provide the boolean value as the second argument.

#### includeRange
@ -1115,6 +1177,6 @@ The InfluxDB host placeholder that gets replaced by custom domains differs betwe
|
|||
{{< influxdb/host "serverless" >}}
|
||||
```
|
||||
|
||||
---
|
||||
***
|
||||
|
||||
**For working examples**: Test all shortcodes in [content/example.md](content/example.md)
|
||||
|
|
README.md

@@ -2,9 +2,9 @@

<img src="/static/img/influx-logo-cubo-dark.png" width="200">
</p>

# InfluxData Product Documentation

This repository contains the InfluxData product documentation for InfluxDB and related tooling published at [docs.influxdata.com](https://docs.influxdata.com).

## Contributing
@@ -15,6 +15,26 @@ For information about contributing to the InfluxData documentation, see [Contrib

For information about testing the documentation, including code block testing, link validation, and style linting, see [Testing guide](DOCS-TESTING.md).

## Documentation Tools

This repository includes a `docs` CLI tool for common documentation workflows:

```sh
# Create new documentation from a draft
npx docs create drafts/new-feature.md --products influxdb3_core

# Edit existing documentation from a URL
npx docs edit https://docs.influxdata.com/influxdb3/core/admin/

# Add placeholder syntax to code blocks
npx docs placeholders content/influxdb3/core/admin/upgrade.md

# Get help
npx docs --help
```

The `docs` command is automatically configured when you run `yarn install`.

## Documentation

Comprehensive reference documentation for contributors:
@@ -27,6 +47,7 @@ Comprehensive reference documentation for contributors:

- **[API Documentation](api-docs/README.md)** - API reference generation

### Quick Links

- [Style guidelines](DOCS-CONTRIBUTING.md#style-guidelines)
- [Commit guidelines](DOCS-CONTRIBUTING.md#commit-guidelines)
- [Code block testing](DOCS-TESTING.md#code-block-testing)
@ -35,42 +56,49 @@ Comprehensive reference documentation for contributors:
|
|||
|
||||
InfluxData takes security and our users' trust very seriously.
|
||||
If you believe you have found a security issue in any of our open source projects,
|
||||
please responsibly disclose it by contacting security@influxdata.com.
|
||||
please responsibly disclose it by contacting <security@influxdata.com>.
|
||||
More details about security vulnerability reporting,
|
||||
including our GPG key, can be found at https://www.influxdata.com/how-to-report-security-vulnerabilities/.
|
||||
including our GPG key, can be found at <https://www.influxdata.com/how-to-report-security-vulnerabilities/>.
|
||||
|
||||
## Running the docs locally
|
||||
|
||||
1. [**Clone this repository**](https://help.github.com/articles/cloning-a-repository/) to your local machine.
|
||||
|
||||
2. **Install NodeJS, Yarn, Hugo, & Asset Pipeline Tools**
|
||||
2. **Install NodeJS, Yarn, Hugo, & Asset Pipeline Tools**
|
||||
|
||||
The InfluxData documentation uses [Hugo](https://gohugo.io/), a static site generator built in Go. The site uses Hugo's asset pipeline, which requires the extended version of Hugo along with NodeJS tools like PostCSS, to build and process stylesheets and JavaScript.
|
||||
The InfluxData documentation uses [Hugo](https://gohugo.io/), a static site generator built in Go. The site uses Hugo's asset pipeline, which requires the extended version of Hugo along with NodeJS tools like PostCSS, to build and process stylesheets and JavaScript.
|
||||
|
||||
To install the required dependencies and build the assets, do the following:
|
||||
To install the required dependencies and build the assets, do the following:
|
||||
|
||||
1. [Install NodeJS](https://nodejs.org/en/download/)
|
||||
2. [Install Yarn](https://classic.yarnpkg.com/en/docs/install/)
|
||||
3. In your terminal, from the `docs-v2` directory, install the dependencies:
|
||||
1. [Install NodeJS](https://nodejs.org/en/download/)
|
||||
2. [Install Yarn](https://classic.yarnpkg.com/en/docs/install/)
|
||||
3. In your terminal, from the `docs-v2` directory, install the dependencies:
|
||||
|
||||
```sh
|
||||
cd docs-v2
|
||||
yarn install
|
||||
```
|
||||
```sh
|
||||
cd docs-v2
|
||||
yarn install
|
||||
```
|
||||
|
||||
_**Note:** The most recent version of Hugo tested with this documentation is **0.149.0**._
|
||||
***Note:** The most recent version of Hugo tested with this documentation is **0.149.0**.*
|
||||
|
||||
After installation, the `docs` command will be available via `npx`:
|
||||
|
||||
```sh
|
||||
npx docs --help
|
||||
```
|
||||
|
||||
3. To generate the API docs, see [api-docs/README.md](api-docs/README.md).
|
||||
|
||||
4. **Start the Hugo server**
|
||||
4. **Start the Hugo server**
|
||||
|
||||
Hugo provides a local development server that generates the HTML pages, builds the static assets, and serves them at `localhost:1313`.
|
||||
Hugo provides a local development server that generates the HTML pages, builds the static assets, and serves them at `localhost:1313`.
|
||||
|
||||
In your terminal, start the Hugo server:
|
||||
In your terminal, start the Hugo server:
|
||||
|
||||
```sh
|
||||
npx hugo server
|
||||
```
|
||||
|
||||
```sh
|
||||
npx hugo server
|
||||
```
|
||||
5. View the docs at [localhost:1313](http://localhost:1313).
|
||||
|
||||
### Alternative: Use docker compose
|
||||
|
@@ -81,7 +109,8 @@ including our GPG key, can be found at https://www.influxdata.com/how-to-report-

3. Use Docker Compose to start the Hugo server in development mode--for example, enter the following command in your terminal:

   ```sh
   docker compose up local-dev
   ```

4. View the docs at [localhost:1313](http://localhost:1313).
@@ -0,0 +1,210 @@
---
title: Create and edit InfluxData docs
description: Learn how to create and edit InfluxData documentation.
tags: [documentation, guide, influxdata]
test_only: true
---

Learn how to create and edit InfluxData documentation.

- [Submit an issue to request new or updated documentation](#submit-an-issue-to-request-new-or-updated-documentation)
- [Edit an existing page in your browser](#edit-an-existing-page-in-your-browser)
- [Create and edit locally with the docs-v2 repository](#create-and-edit-locally-with-the-docs-v2-repository)
- [Helpful resources](#other-resources)

## Submit an issue to request new or updated documentation

- **Public**: <https://github.com/influxdata/docs-v2/issues/>
- **Private**: <https://github.com/influxdata/DAR/issues/>

## Edit an existing page in your browser

**Example**: Editing a product-specific page

1. Visit the public docs at <https://docs.influxdata.com>
2. Search, Ask AI, or navigate to find the page to edit--for example, <https://docs.influxdata.com/influxdb3/cloud-serverless/get-started/>
3. Click the "Edit this page" link at the bottom of the page.
   This opens the GitHub repository to the file that generates the page.
4. Click the pencil icon to edit the file in your browser
5. [Commit and create a pull request](#commit-and-create-a-pull-request)
## Create and edit locally with the docs-v2 repository

Use `docs` scripts with AI agents to help you create and edit documentation locally, especially when working with shared content for multiple products.

**Prerequisites**:

1. [Clone or fork the docs-v2 repository](https://github.com/influxdata/docs-v2/):

   ```bash
   git clone https://github.com/influxdata/docs-v2.git
   cd docs-v2
   ```

2. [Install Yarn](https://yarnpkg.com/getting-started/install)
3. Run `yarn` in the repository root to install dependencies
4. Optional: [Set up GitHub CLI](https://cli.github.com/manual/)

> [!Tip]
> To run and test your changes locally, enter the following command in your terminal:
>
> ```bash
> yarn hugo server
> ```
>
> _To refresh shared content after making changes, `touch` or edit the frontmatter file, or stop the server (Ctrl+C) and restart it._
>
> To list all available scripts, run:
>
> ```bash
> yarn run
> ```

### Edit an existing page locally
Use the `npx docs edit` command to open an existing page in your editor.

```bash
npx docs edit https://docs.influxdata.com/influxdb3/enterprise/get-started/
```

### Create content locally

Use the `npx docs create` command with your AI agent tool to scaffold frontmatter and generate new content.

- The `npx docs create` command accepts draft input from stdin or from a file path and generates a prompt file from the draft and your product selections
- The prompt file makes AI agents aware of InfluxData docs guidelines, shared content, and product-specific requirements
- `npx docs create` is designed to work automatically with `claude`, but you can use the generated prompt file with any AI agent (for example, `copilot` or `codex`)

> [!Tip]
>
> `docs-v2` contains custom configuration for agents like Claude and Copilot Agent mode.

<!-- Coming soon: generate content from an issue with labels -->

#### Generate content and frontmatter from a draft
{{< tabs-wrapper >}}
{{% tabs %}}
[Interactive (Claude Code)](#)
[Non-interactive (any agent)](#)
{{% /tabs %}}
{{% tab-content %}}

1. Open a Claude Code prompt:

   ```bash
   claude code
   ```

2. In the prompt, run the `docs create` command with the path to your draft file.
   Optionally, include the `--products` flag and product namespaces to preselect products--for example:

   ```bash
   npx docs create .context/drafts/"Upgrading Enterprise 3 (draft).md" \
     --products influxdb3_enterprise,influxdb3_core
   ```

   If you don't include the `--products` flag, you'll be prompted to select products after running the command.

   The script first generates a prompt file, then the agent automatically uses it to generate content and frontmatter based on the draft and the products you select.

{{% /tab-content %}}
{{% tab-content %}}

Use `npx docs create` to generate a prompt file and then pipe it to your preferred AI agent.
Include the `--products` flag and product namespaces to preselect products.

The following example uses Copilot to process a draft file:

```bash
npx docs create .context/drafts/"Upgrading Enterprise 3 (draft).md" \
  --products "influxdb3_enterprise,influxdb3_core" | \
  copilot --prompt --allow-all-tools
```

{{% /tab-content %}}
{{< /tabs-wrapper >}}
## Review, commit, and create a pull request

After you create or edit content, test and review your changes, and then create a pull request.

> [!Important]
>
> #### Check AI-generated content
>
> Always review and validate AI-generated content for accuracy.
> Make sure example commands are correct for the version you're documenting.

### Test and review your changes

Run a local Hugo server to preview your changes:

```bash
yarn hugo server
```

Visit <http://localhost:1313> to review your changes in the browser.

> [!Note]
> If you need to preview changes in a live production-like environment
> that you can also share with others,
> the Docs team can deploy your branch to the staging site.

### Commit and create a pull request

1. Commit your changes to a new branch
2. Fix any issues found by automated checks
3. Push the branch to your fork or to the docs-v2 repository

```bash
git add content
git commit -m "feat(product): Your commit message"
git push origin your-branch-name
```

### Create a pull request

1. Create a pull request against the `master` branch of the docs-v2 repository
2. Add reviewers:
   - `@influxdata/docs-team`
   - team members familiar with the product area
   - Optionally, assign Copilot to review
3. After approval and automated checks are successful, merge the pull request (if you have permissions) or wait for the docs team to merge it.
{{< tabs-wrapper >}}
{{% tabs %}}
[GitHub](#)
[gh CLI](#)
{{% /tabs %}}
{{% tab-content %}}

1. Visit [influxdata/docs-v2 pull requests on GitHub](https://github.com/influxdata/docs-v2/pulls)
2. Optional: edit PR title and description
3. Optional: set to draft if it needs more work
4. When ready for review, assign `@influxdata/docs-team` and other reviewers

{{% /tab-content %}}
{{% tab-content %}}

```bash
gh pr create \
  --base master \
  --head your-branch-name \
  --title "Your PR title" \
  --body "Your PR description" \
  --reviewer influxdata/docs-team,<other-reviewers>
```

{{% /tab-content %}}
{{< /tabs-wrapper >}}

## Other resources

- `DOCS-*.md`: Documentation standards and guidelines
- <http://localhost:1313/example/>: View shortcode examples
- <https://app.kapa.ai>: Review content gaps identified from Ask AI answers
@@ -22,14 +22,14 @@ provides an alternative method for deploying your InfluxDB cluster using

resource. When using Helm, apply configuration options in a
`values.yaml` file on your local machine.

InfluxData provides the following items:

- **`influxdb-docker-config.json`**: an authenticated Docker configuration file.
  The InfluxDB Clustered software is in a secure container registry.
  This file grants access to the collection of container images required to
  install InfluxDB Clustered.

---

## Configuration data
@@ -40,23 +40,24 @@ available:

  API endpoints
- **PostgreSQL-style data source name (DSN)**: used to access your
  PostgreSQL-compatible database that stores the InfluxDB Catalog.
- **Object store credentials** _(AWS S3 or S3-compatible)_
  - Endpoint URL
  - Access key
  - Bucket name
  - Region (required for S3, may not be required for other object stores)
- **Local storage information** _(for ingester pods)_
  - Storage class
  - Storage size

InfluxDB is deployed to a Kubernetes namespace which, throughout the following
installation procedure, is referred to as the _target_ namespace.
For simplicity, we assume this namespace is `influxdb`, however
you may use any name you like.

> [!Note]
>
> #### Set namespaceOverride if using a namespace other than influxdb
>
> If you use a namespace name other than `influxdb`, update the `namespaceOverride`
> field in your `values.yaml` to use your custom namespace name.
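
A minimal sketch, assuming a hypothetical custom namespace named `my-influxdb`:

```yaml
# values.yaml
namespaceOverride: my-influxdb
```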
@ -85,21 +86,21 @@ which simplifies the installation and management of the InfluxDB Clustered packa
|
|||
It manages the application of the jsonnet templates used to install, manage, and
|
||||
update an InfluxDB cluster.
|
||||
|
||||
> [!Note]
|
||||
> \[!Note]
|
||||
> If you already installed the `kubecfg kubit` operator separately when
|
||||
> [setting up prerequisites](/influxdb3/clustered/install/set-up-cluster/prerequisites/#install-the-kubecfg-kubit-operator)
|
||||
> for your cluster, in your `values.yaml`, set `skipOperator` to `true`.
|
||||
>
|
||||
>
|
||||
> ```yaml
|
||||
> skipOperator: true
|
||||
> ```
|
||||
|
## Configure your cluster

1. [Install Helm](#install-helm)
2. [Create a values.yaml file](#create-a-valuesyaml-file)
3. [Configure access to the InfluxDB container registry](#configure-access-to-the-influxdb-container-registry)
4. [Modify the configuration file to point to prerequisites](#modify-the-configuration-file-to-point-to-prerequisites)

### Install Helm
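
If you don't already have Helm, one common way to install it (per the Helm project's documentation) is the official install script:

```sh
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```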
@ -136,11 +137,11 @@ In both scenarios, you need a valid container registry secret file.
|
|||
Use [crane](https://github.com/google/go-containerregistry/tree/main/cmd/crane)
|
||||
to create a container registry secret file.
|
||||
|
||||
1. [Install crane](https://github.com/google/go-containerregistry/tree/main/cmd/crane#installation)
|
||||
2. Use the following command to create a container registry secret file and
|
||||
retrieve the necessary secrets:
|
||||
1. [Install crane](https://github.com/google/go-containerregistry/tree/main/cmd/crane#installation)
|
||||
2. Use the following command to create a container registry secret file and
|
||||
retrieve the necessary secrets:
|
||||
|
||||
{{% code-placeholders "PACKAGE_VERSION" %}}
|
||||
{{% code-placeholders "PACKAGE\_VERSION" %}}
|
||||
|
||||
```sh
|
||||
mkdir /tmp/influxdbsecret
|
||||
|
@@ -152,12 +153,12 @@ DOCKER_CONFIG=/tmp/influxdbsecret \

{{% /code-placeholders %}}

---

Replace {{% code-placeholder-key %}}`PACKAGE_VERSION`{{% /code-placeholder-key %}}
with your InfluxDB Clustered package version.

---

If your Docker configuration is valid and you’re able to connect to the container
registry, the command succeeds and the output is the JSON manifest for the Docker
@ -206,6 +207,7 @@ Error: fetching manifest us-docker.pkg.dev/influxdb2-artifacts/clustered/influxd
|
|||
{{% /tabs %}}
|
||||
|
||||
{{% tab-content %}}
|
||||
|
||||
<!--------------------------- BEGIN Public Registry --------------------------->
|
||||
|
||||
#### Public registry
|
||||
|
|
@ -229,8 +231,10 @@ If you change the name of this secret, you must also change the value of the
|
|||
`imagePullSecrets.name` field in your `values.yaml`.
|
||||
|
||||
<!---------------------------- END Public Registry ---------------------------->
|
||||
|
||||
{{% /tab-content %}}
|
||||
{{% tab-content %}}
|
||||
|
||||
<!--------------------------- BEGIN Private Registry -------------------------->
|
||||
|
||||
#### Private registry (air-gapped)
|
||||
|
|
@ -297,7 +301,8 @@ cat /tmp/kubit-images.txt | xargs -I% crane cp % YOUR_PRIVATE_REGISTRY/%
|
|||
|
||||
Configure your `values.yaml` to use your private registry:
|
||||
|
||||
{{% code-placeholders "REGISTRY_HOSTNAME" %}}
|
||||
{{% code-placeholders "REGISTRY\_HOSTNAME" %}}
|
||||
|
||||
```yaml
|
||||
# Configure registry override for all images
|
||||
images:
|
||||
|
@@ -315,6 +320,7 @@ kubit:

  imagePullSecrets:
    - name: your-registry-pull-secret
```

{{% /code-placeholders %}}

Replace {{% code-placeholder-key %}}`REGISTRY_HOSTNAME`{{% /code-placeholder-key %}} with your private registry hostname.
@ -344,13 +350,13 @@ To configure ingress, provide values for the following fields in your
|
|||
Provide the hostnames that Kubernetes should use to expose the InfluxDB API
|
||||
endpoints--for example: `{{< influxdb/host >}}`.
|
||||
|
||||
_You can provide multiple hostnames. The ingress layer accepts incoming
|
||||
*You can provide multiple hostnames. The ingress layer accepts incoming
|
||||
requests for all listed hostnames. This can be useful if you want to have
|
||||
distinct paths for your internal and external traffic._
|
||||
distinct paths for your internal and external traffic.*
|
||||
|
||||
> [!Note]
|
||||
> \[!Note]
|
||||
> You are responsible for configuring and managing DNS. Options include:
|
||||
>
|
||||
>
|
||||
> - Manually managing DNS records
|
||||
> - Using [external-dns](https://github.com/kubernetes-sigs/external-dns) to
|
||||
> synchronize exposed Kubernetes services and ingresses with DNS providers.
|
||||
|
|
@ -360,16 +366,16 @@ To configure ingress, provide values for the following fields in your
|
|||
(Optional): Provide the name of the secret that contains your TLS certificate
|
||||
and key. The examples in this guide use the name `ingress-tls`.
|
||||
|
||||
_The `tlsSecretName` field is optional. You may want to use it if you already
|
||||
have a TLS certificate for your DNS name._
|
||||
*The `tlsSecretName` field is optional. You may want to use it if you already
|
||||
have a TLS certificate for your DNS name.*
|
||||
|
||||
> [!Note]
|
||||
> \[!Note]
|
||||
> Writing to and querying data from InfluxDB does not require TLS.
|
||||
> For simplicity, you can wait to enable TLS before moving into production.
|
||||
> For more information, see Phase 4 of the InfluxDB Clustered installation
|
||||
> process, [Secure your cluster](/influxdb3/clustered/install/secure-cluster/).
|
||||
|
||||
{{% code-callout "ingress-tls|cluster-host\.com" "green" %}}
|
||||
{{% code-callout "ingress-tls|cluster-host.com" "green" %}}
|
||||
|
||||
```yaml
|
||||
ingress:
|
||||
|
@@ -404,14 +410,14 @@ following fields in your `values.yaml`:

- `bucket`: Object storage bucket name
- `s3`:
  - `endpoint`: Object storage endpoint URL
  - `allowHttp`: _Set to `true` to allow unencrypted HTTP connections_
  - `accessKey.value`: Object storage access key
    _(can use a `value` literal or `valueFrom` to retrieve the value from a secret)_
  - `secretKey.value`: Object storage secret key
    _(can use a `value` literal or `valueFrom` to retrieve the value from a secret)_
  - `region`: Object storage region

{{% code-placeholders "S3_(URL|ACCESS_KEY|SECRET_KEY|BUCKET_NAME|REGION)" %}}

```yml
objectStore:
@@ -441,7 +447,7 @@ objectStore:

{{% /code-placeholders %}}

---

Replace the following:
@ -451,7 +457,7 @@ Replace the following:
|
|||
- {{% code-placeholder-key %}}`S3_SECRET_KEY`{{% /code-placeholder-key %}}: Object storage secret key
|
||||
- {{% code-placeholder-key %}}`S3_REGION`{{% /code-placeholder-key %}}: Object storage region
|
||||
|
||||
---
|
||||
***
|
||||
|
||||
<!----------------------------------- END S3 ---------------------------------->
|
||||
|
||||
|
@@ -467,11 +473,11 @@ following fields in your `values.yaml`:

- `bucket`: Azure Blob Storage bucket name
- `azure`:
  - `accessKey.value`: Azure Blob Storage access key
    _(can use a `value` literal or `valueFrom` to retrieve the value from a secret)_
  - `account.value`: Azure Blob Storage account ID
    _(can use a `value` literal or `valueFrom` to retrieve the value from a secret)_

{{% code-placeholders "AZURE_(BUCKET_NAME|ACCESS_KEY|STORAGE_ACCOUNT)" %}}

```yml
objectStore:
@@ -492,7 +498,7 @@ objectStore:

{{% /code-placeholders %}}

---

Replace the following:
@ -500,7 +506,7 @@ Replace the following:
|
|||
- {{% code-placeholder-key %}}`AZURE_ACCESS_KEY`{{% /code-placeholder-key %}}: Azure Blob Storage access key
|
||||
- {{% code-placeholder-key %}}`AZURE_STORAGE_ACCOUNT`{{% /code-placeholder-key %}}: Azure Blob Storage account ID
|
||||
|
||||
---
|
||||
***
|
||||
|
||||
<!--------------------------------- END AZURE --------------------------------->
|
||||
|
||||
|
|
@ -520,7 +526,7 @@ following fields in your `values.yaml`:
|
|||
- `serviceAccountSecret.key`: the key inside of your Google IAM secret that
|
||||
contains your Google IAM account credentials
|
||||
|
||||
{{% code-placeholders "GOOGLE_(BUCKET_NAME|IAM_SECRET|CREDENTIALS_KEY)" %}}
|
||||
{{% code-placeholders "GOOGLE\_(BUCKET\_NAME|IAM\_SECRET|CREDENTIALS\_KEY)" %}}
|
||||
|
||||
```yml
|
||||
objectStore:
|
||||
|
@@ -540,7 +546,7 @@ objectStore:

{{% /code-placeholders %}}

---

Replace the following:
@ -549,11 +555,11 @@ Replace the following:
|
|||
- {{% code-placeholder-key %}}`GOOGLE_IAM_SECRET`{{% /code-placeholder-key %}}:
|
||||
the Kubernetes Secret name that contains your Google IAM service account
|
||||
credentials
|
||||
- {{% code-placeholder-key %}}`GOOGLE_CREDENTIALS_KEY`{{% /code-placeholder-key %}}:
|
||||
- {{% code-placeholder-key %}}`GOOGLE_CREDENTIALS_KEY`{{% /code-placeholder-key %}}:
|
||||
the key inside of your Google IAM secret that contains your Google IAM account
|
||||
credentials
|
||||
|
||||
---
|
||||
***
|
||||
|
||||
<!--------------------------------- END AZURE --------------------------------->
|
||||
|
||||
|
@@ -567,7 +573,7 @@ metadata about your time series data.

To connect your InfluxDB cluster to your PostgreSQL-compatible database,
provide values for the following fields in your `values.yaml`:

> [!Note]
> We recommend storing sensitive credentials, such as your PostgreSQL-compatible DSN,
> as secrets in your Kubernetes cluster.
@@ -575,7 +581,7 @@ provide values for the following fields in your `values.yaml`:

- `SecretName`: Secret name
- `SecretKey`: Key in the secret that contains the DSN

{{% code-placeholders "SECRET_(NAME|KEY)" %}}

```yml
catalog:
@@ -590,7 +596,7 @@ catalog:

{{% /code-placeholders %}}

---

Replace the following:
@ -599,58 +605,64 @@ Replace the following:
|
|||
- {{% code-placeholder-key %}}`SECRET_KEY`{{% /code-placeholder-key %}}:
|
||||
Key in the secret that references your PostgreSQL-compatible DSN
|
||||
|
||||
---
|
||||
***
|
||||
|
||||
> [!Warning]
|
||||
> \[!Warning]
|
||||
>
|
||||
> ##### Percent-encode special symbols in PostgreSQL DSNs
|
||||
>
|
||||
>
|
||||
> Special symbols in PostgreSQL DSNs should be percent-encoded to ensure they
|
||||
> are parsed correctly by InfluxDB Clustered. This is important to consider when
|
||||
> using DSNs containing auto-generated passwords which may include special
|
||||
> symbols to make passwords more secure.
|
||||
>
|
||||
>
|
||||
> A DSN with special characters that are not percent-encoded result in an error
|
||||
> similar to:
|
||||
>
|
||||
>
|
||||
> ```txt
|
||||
> Catalog DSN error: A catalog error occurred: unhandled external error: error with configuration: invalid port number
|
||||
> ```
|
||||
>
|
||||
>
|
||||
> For more information, see the [PostgreSQL Connection URI docs](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING-URIS).
|
||||
>
|
||||
> {{< expand-wrapper >}}
|
||||
{{% expand "View percent-encoded DSN example" %}}
|
||||
To use the following DSN containing special characters:
|
||||
> {{% expand "View percent-encoded DSN example" %}}
|
||||
> To use the following DSN containing special characters:
|
||||
|
||||
{{% code-callout "#" %}}
|
||||
|
||||
```txt
|
||||
postgresql://postgres:meow#meow@my-fancy.cloud-database.party:5432/postgres
|
||||
```
|
||||
|
||||
{{% /code-callout %}}
|
||||
|
||||
You must percent-encode the special characters in the connection string:
|
||||
|
||||
{{% code-callout "%23" %}}
|
||||
|
||||
```txt
|
||||
postgresql://postgres:meow%23meow@my-fancy.cloud-database.party:5432/postgres
|
||||
```
|
||||
|
||||
{{% /code-callout %}}
|
||||
|
||||
{{% /expand %}}
|
||||
{{< /expand-wrapper >}}
|
||||
|
||||
> [!Note]
|
||||
>
|
||||
> \[!Note]
|
||||
>
|
||||
> ##### PostgreSQL instances without TLS or SSL
|
||||
>
|
||||
>
|
||||
> If your PostgreSQL-compatible instance runs without TLS or SSL, you must include
|
||||
> the `sslmode=disable` parameter in the DSN. For example:
|
||||
>
|
||||
>
|
||||
> {{% code-callout "sslmode=disable" %}}
|
||||
|
||||
```
|
||||
postgres://username:passw0rd@mydomain:5432/influxdb?sslmode=disable
|
||||
```
|
||||
|
||||
{{% /code-callout %}}
|
||||
|
||||
#### Configure local storage for ingesters
|
||||
|
|
@ -665,7 +677,7 @@ following fields in your `values.yaml`:
|
|||
This differs based on the Kubernetes environment and desired storage characteristics.
|
||||
- `storage`: Storage size. We recommend a minimum of 2 gibibytes (`2Gi`).
|
||||
|
||||
{{% code-placeholders "STORAGE_(CLASS|SIZE)" %}}
|
||||
{{% code-placeholders "STORAGE\_(CLASS|SIZE)" %}}
|
||||
|
||||
```yaml
|
||||
ingesterStorage:
|
||||
|
|
@ -679,7 +691,7 @@ ingesterStorage:
|
|||
|
||||
{{% /code-placeholders %}}
|
||||
|
||||
---
|
||||
***
|
||||
|
||||
Replace the following:
|
||||
|
||||
|
|
@ -688,7 +700,7 @@ Replace the following:
|
|||
- {{% code-placeholder-key %}}`STORAGE_SIZE`{{% /code-placeholder-key %}}:
|
||||
Storage size (example: `2Gi`)
|
||||
|
||||
---
|
||||
***
|
||||
|
||||
### Deploy your cluster
|
||||
|
||||
|
|
@ -774,6 +786,7 @@ helm upgrade influxdb ./influxdb3-clustered-X.Y.Z.tgz \
|
|||
```
|
||||
|
||||
{{% note %}}
|
||||
|
||||
#### Understanding kubit's role in air-gapped environments
|
||||
|
||||
When deploying with Helm in an air-gapped environment:
|
||||
|
|
@ -795,9 +808,10 @@ This is why mirroring both the InfluxDB images and the kubit operator images is
|
|||
### Common issues
|
||||
|
||||
1. **Image pull errors**
|
||||
|
||||
|
||||
```
|
||||
Error: failed to create labeled resources: failed to create resources: failed to create resources:
|
||||
Internal error occurred: failed to create pod sandbox: rpc error: code = Unknown
|
||||
desc = failed to pull image "us-docker.pkg.dev/...": failed to pull and unpack image "...":
|
||||
failed to resolve reference "...": failed to do request: ... i/o timeout
|
||||
```
|
||||
|
|
@@ -29,9 +29,10 @@ To compare these tools and deployment methods, see [Choose the right deployment

## Prerequisites

If you haven't already set up and configured your cluster, see how to
[install InfluxDB Clustered](/influxdb3/clustered/install/).

<!-- Hidden anchor for links to the kubectl/kubit/helm tabs -->

<span id="kubectl-kubit-helm"></span>

{{< tabs-wrapper >}}
@ -41,7 +42,9 @@ If you haven't already set up and configured your cluster, see how to
|
|||
[helm](#)
|
||||
{{% /tabs %}}
|
||||
{{% tab-content %}}
|
||||
|
||||
<!------------------------------- BEGIN kubectl ------------------------------->
|
||||
|
||||
- [`kubectl` standard deployment (with internet access)](#kubectl-standard-deployment-with-internet-access)
|
||||
- [`kubectl` air-gapped deployment](#kubectl-air-gapped-deployment)
|
||||
|
||||
|
|
@ -56,21 +59,24 @@ kubectl apply \
|
|||
--namespace influxdb
|
||||
```
|
||||
|
||||
> [!Note]
|
||||
> \[!Note]
|
||||
> Due to the additional complexity and maintenance requirements, using `kubectl apply` isn't
|
||||
> recommended for air-gapped environments.
|
||||
> Instead, consider using the [`kubit` CLI approach](#kubit-cli), which is specifically designed for air-gapped deployments.
|
||||
|
||||
<!-------------------------------- END kubectl -------------------------------->
|
||||
|
||||
{{% /tab-content %}}
|
||||
{{% tab-content %}}
|
||||
|
||||
<!-------------------------------- BEGIN kubit CLI -------------------------------->
|
||||
|
||||
## Standard and air-gapped deployments
|
||||
|
||||
_This approach avoids the need for installing the kubit operator in the cluster,
|
||||
making it ideal for air-gapped clusters._
|
||||
*This approach avoids the need for installing the kubit operator in the cluster,
|
||||
making it ideal for air-gapped clusters.*
|
||||
|
||||
> [!Important]
|
||||
> \[!Important]
|
||||
> For air-gapped deployment, ensure you have [configured access to a private registry for InfluxDB images](/influxdb3/clustered/install/set-up-cluster/configure-cluster/directly/#configure-access-to-the-influxDB-container-registry).
|
||||
|
||||
1. On a machine with internet access, download the [`kubit` CLI](https://github.com/kubecfg/kubit#cli-tool)--for example:
|
||||
|
|
@ -83,7 +89,7 @@ making it ideal for air-gapped clusters._
|
|||
Replace {{% code-placeholder-key %}}`v0.0.22`{{% /code-placeholder-key%}} with the [latest release version](https://github.com/kubecfg/kubit/releases/latest).
|
||||
|
||||
2. If deploying InfluxDB in an air-gapped environment (without internet access),
|
||||
transfer the binary to your air-gapped environment.
|
||||
transfer the binary to your air-gapped environment.
|
||||
|
||||
3. Use the `kubit local apply` command to process your [custom-configured `myinfluxdb.yml`](/influxdb3/clustered/install/set-up-cluster/configure-cluster/directly/) locally
|
||||
and apply the resulting resources to your cluster:
|
||||
|
|
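
   A minimal sketch of that invocation (exact flags, such as those for registry credentials, depend on your environment; see the kubit documentation):

   ```sh
   kubit local apply myinfluxdb.yml
   ```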
@ -108,7 +114,9 @@ applies the resulting Kubernetes resources directly to your cluster.
|
|||
|
||||
{{% /tab-content %}}
|
||||
{{% tab-content %}}
|
||||
|
||||
<!-------------------------------- BEGIN Helm --------------------------------->
|
||||
|
||||
- [Helm standard deployment (with internet access)](#helm-standard-deployment-with-internet-access)
|
||||
- [Helm air-gapped deployment](#helm-air-gapped-deployment)
|
||||
|
||||
|
|
@ -145,7 +153,7 @@ helm upgrade influxdb influxdata/influxdb3-clustered \
|
|||
|
||||
## Helm air-gapped deployment
|
||||
|
||||
> [!Important]
|
||||
> \[!Important]
|
||||
> For air-gapped deployment, ensure you have [configured access to a private registry for InfluxDB images and the kubit operator](/influxdb3/clustered/install/set-up-cluster/configure-cluster/use-helm/#configure-access-to-the-influxDB-container-registry).
|
||||
|
||||
1. On a machine with internet access, download the Helm chart:
|
||||
|
|
@ -153,14 +161,14 @@ helm upgrade influxdb influxdata/influxdb3-clustered \
|
|||
```bash
|
||||
# Add the InfluxData repository
|
||||
helm repo add influxdata https://helm.influxdata.com/
|
||||
|
||||
|
||||
# Update the repositories
|
||||
helm repo update
|
||||
|
||||
|
||||
# Download the chart as a tarball
|
||||
helm pull influxdata/influxdb3-clustered --version X.Y.Z
|
||||
```
|
||||
|
||||
|
||||
Replace `X.Y.Z` with the specific chart version you want to use.
|
||||
|
||||
2. Transfer the chart tarball to your air-gapped environment using your secure file transfer method.
|
||||
|
|
@ -188,7 +196,8 @@ helm upgrade influxdb ./influxdb3-clustered-X.Y.Z.tgz \
|
|||
--namespace influxdb
|
||||
```
|
||||
|
||||
> [!Note]
|
||||
> \[!Note]
|
||||
>
|
||||
> #### kubit's role in air-gapped environments
|
||||
>
|
||||
> When deploying with Helm in an air-gapped environment:
|
||||
|
@@ -200,6 +209,7 @@ helm upgrade influxdb ./influxdb3-clustered-X.Y.Z.tgz \

> This is why you need to [mirror InfluxDB images and kubit operator images](/influxdb3/clustered/install/set-up-cluster/configure-cluster/use-helm/#mirror-influxdb-images) for air-gapped deployments.

<!--------------------------------- END Helm ---------------------------------->

{{% /tab-content %}}
{{< /tabs-wrapper >}}
@ -208,7 +218,7 @@ helm upgrade influxdb ./influxdb3-clustered-X.Y.Z.tgz \
|
|||
Kubernetes deployments take some time to complete. To check on the status of a
|
||||
deployment, use the `kubectl get` command:
|
||||
|
||||
> [!Note]
|
||||
> \[!Note]
|
||||
> The following example uses the [`yq` command-line YAML parser](https://github.com/mikefarah/yq)
|
||||
> to parse and format the YAML output.
|
||||
> You can also specify the output as `json` and use the
|
||||
|
|
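
A sketch of such a status check (the resource names here are illustrative and depend on your deployment):

```sh
kubectl get deployments --namespace influxdb --output yaml | yq '.items[].status'
```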
@@ -13,17 +13,17 @@ aliases:

- /influxdb3/clustered/install/prerequisites/
---

InfluxDB Clustered requires the following prerequisite external dependencies:

- **kubectl command line tool**
- **Kubernetes cluster**
- **kubecfg kubit operator**
- **Kubernetes ingress controller**
- **Object storage**: AWS S3 or S3-compatible storage (including Google Cloud Storage
  or Azure Blob Storage) to store the InfluxDB Parquet files.
- **PostgreSQL-compatible database** _(AWS Aurora, hosted PostgreSQL, etc.)_:
  Stores the [InfluxDB Catalog](/influxdb3/clustered/reference/internals/storage-engine/#catalog).
- **Local or attached storage**:
  Stores the Write-Ahead Log (WAL) for
  [InfluxDB Ingesters](/influxdb3/clustered/reference/internals/storage-engine/#ingester).
@ -45,7 +45,7 @@ cluster.
|
|||
|
||||
Follow instructions to install `kubectl` on your local machine:
|
||||
|
||||
> [!Note]
|
||||
> \[!Note]
|
||||
> InfluxDB Clustered Kubernetes deployments require `kubectl` 1.27 or higher.
|
||||
|
||||
- [Install kubectl on Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/)
|
||||
|
|
@ -54,35 +54,35 @@ Follow instructions to install `kubectl` on your local machine:
|
|||
|
||||
#### Set up your Kubernetes cluster
|
||||
|
||||
1. Deploy a Kubernetes cluster. The deployment process depends on your
|
||||
Kubernetes environment or Kubernetes cloud provider. Refer to the
|
||||
[Kubernetes documentation](https://kubernetes.io/docs/home/) or your cloud
|
||||
provider's documentation for information about deploying a Kubernetes cluster.
|
||||
1. Deploy a Kubernetes cluster. The deployment process depends on your
|
||||
Kubernetes environment or Kubernetes cloud provider. Refer to the
|
||||
[Kubernetes documentation](https://kubernetes.io/docs/home/) or your cloud
|
||||
provider's documentation for information about deploying a Kubernetes cluster.
|
||||
|
||||
2. Ensure `kubectl` can connect to your Kubernetes cluster.
|
||||
Your Manage [kubeconfig file](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/)
|
||||
defines cluster connection credentials.
|
||||
2. Ensure `kubectl` can connect to your Kubernetes cluster.
|
||||
Your Manage [kubeconfig file](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/)
|
||||
defines cluster connection credentials.
|
||||
|
||||
3. Create two namespaces--`influxdb` and `kubit`. Use
|
||||
[`kubectl create namespace`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_namespace/) to create the
|
||||
namespaces:
|
||||
3. Create two namespaces--`influxdb` and `kubit`. Use
|
||||
[`kubectl create namespace`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_namespace/) to create the
|
||||
namespaces:
|
||||
|
||||
<!-- pytest.mark.skip -->
|
||||
<!-- pytest.mark.skip -->
|
||||
|
||||
```bash
|
||||
kubectl create namespace influxdb && \
|
||||
kubectl create namespace kubit
|
||||
```
|
||||
```bash
|
||||
kubectl create namespace influxdb && \
|
||||
kubectl create namespace kubit
|
||||
```
|
||||
|
||||
4. Install an [ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/)
|
||||
in the cluster and a mechanism to obtain a valid TLS certificate
|
||||
(for example: [cert-manager](https://cert-manager.io/) or provide the
|
||||
certificate PEM manually out of band).
|
||||
To use the InfluxDB-specific ingress controller, install [Ingress NGINX](https://github.com/kubernetes/ingress-nginx).
|
||||
4. Install an [ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/)
|
||||
in the cluster and a mechanism to obtain a valid TLS certificate
|
||||
(for example: [cert-manager](https://cert-manager.io/) or provide the
|
||||
certificate PEM manually out of band).
|
||||
To use the InfluxDB-specific ingress controller, install [Ingress NGINX](https://github.com/kubernetes/ingress-nginx).
|
||||
|
||||
5. Ensure your Kubernetes cluster can access the InfluxDB container registry,
|
||||
or, if running in an air-gapped environment, a local container registry to
|
||||
which you can copy the InfluxDB images.
|
||||
5. Ensure your Kubernetes cluster can access the InfluxDB container registry,
|
||||
or, if running in an air-gapped environment, a local container registry to
|
||||
which you can copy the InfluxDB images.
|
||||
|
||||
### Cluster sizing recommendation
|
||||
|
||||
|
|
@ -97,10 +97,11 @@ following sizing for {{% product-name %}} components:
|
|||
[On-Prem](#)
|
||||
{{% /tabs %}}
|
||||
{{% tab-content %}}
|
||||
|
||||
<!--------------------------------- BEGIN AWS --------------------------------->
|
||||
|
||||
- **Catalog store (PostgreSQL-compatible database) (x1):**
|
||||
- _[See below](#postgresql-compatible-database-requirements)_
|
||||
- *[See below](#postgresql-compatible-database-requirements)*
|
||||
- **Ingesters and Routers (x3):**
|
||||
- EC2 m6i.2xlarge (8 CPU, 32 GB RAM)
|
||||
- Local storage: minimum of 2 GB (high-speed SSD)
|
||||
|
|
@ -112,12 +113,14 @@ following sizing for {{% product-name %}} components:
|
|||
- EC2 t3.large (2 CPU, 8 GB RAM)
|
||||
|
||||
<!---------------------------------- END AWS ---------------------------------->
|
||||
|
||||
{{% /tab-content %}}
|
||||
{{% tab-content %}}
|
||||
|
||||
<!--------------------------------- BEGIN GCP --------------------------------->
|
||||
|
||||
- **Catalog store (PostgreSQL-compatible database) (x1):**
|
||||
- _[See below](#postgresql-compatible-database-requirements)_
|
||||
- *[See below](#postgresql-compatible-database-requirements)*
|
||||
- **Ingesters and Routers (x3):**
|
||||
- GCE c2-standard-8 (8 CPU, 32 GB RAM)
|
||||
- Local storage: minimum of 2 GB (high-speed SSD)
|
||||
|
|
@ -129,25 +132,29 @@ following sizing for {{% product-name %}} components:
|
|||
- GCE c2d-standard-2 (2 CPU, 8 GB RAM)
|
||||
|
||||
<!---------------------------------- END GCP ---------------------------------->
|
||||
|
||||
{{% /tab-content %}}
|
||||
{{% tab-content %}}
|
||||
|
||||
<!-------------------------------- BEGIN Azure -------------------------------->
|
||||
|
||||
- **Catalog store (PostgreSQL-compatible database) (x1):**
|
||||
- _[See below](#postgresql-compatible-database-requirements)_
|
||||
- *[See below](#postgresql-compatible-database-requirements)*
|
||||
- **Ingesters and Routers (x3):**
|
||||
- Standard_D8s_v3 (8 CPU, 32 GB RAM)
|
||||
- Standard\_D8s\_v3 (8 CPU, 32 GB RAM)
|
||||
- Local storage: minimum of 2 GB (high-speed SSD)
|
||||
- **Queriers (x3):**
|
||||
- Standard_D8s_v3 (8 CPU, 32 GB RAM)
|
||||
- Standard\_D8s\_v3 (8 CPU, 32 GB RAM)
|
||||
- **Compactors (x1):**
|
||||
- Standard_D8s_v3 (8 CPU, 32 GB RAM)
|
||||
- Standard\_D8s\_v3 (8 CPU, 32 GB RAM)
|
||||
- **Kubernetes Control Plane (x1):**
|
||||
- Standard_B2ms (2 CPU, 8 GB RAM)
|
||||
- Standard\_B2ms (2 CPU, 8 GB RAM)
|
||||
|
||||
<!--------------------------------- END Azure --------------------------------->
|
||||
|
||||
{{% /tab-content %}}
|
||||
{{% tab-content %}}
|
||||
|
||||
<!------------------------------- BEGIN ON-PREM ------------------------------->
|
||||
|
||||
- **Catalog store (PostgreSQL-compatible database) (x1):**
|
||||
|
|
@ -168,6 +175,7 @@ following sizing for {{% product-name %}} components:
|
|||
- RAM: 8 GB
|
||||
|
||||
<!-------------------------------- END ON-PREM -------------------------------->
|
||||
|
||||
{{% /tab-content %}}
|
||||
{{< /tabs-wrapper >}}
|
||||
|
||||
|
|
@ -181,9 +189,10 @@ simplifies the installation and management of the InfluxDB Clustered package.
|
|||
It manages the application of the jsonnet templates used to install, manage, and
|
||||
update an InfluxDB cluster.
|
||||
|
||||
> [!Note]
|
||||
> \[!Note]
|
||||
>
|
||||
> #### The InfluxDB Clustered Helm chart includes the kubit operator
|
||||
>
|
||||
>
|
||||
> If using the [InfluxDB Clustered Helm chart](https://github.com/influxdata/helm-charts/tree/master/charts/influxdb3-clustered)
|
||||
> to deploy your InfluxDB cluster, you do not need to install the kubit operator
|
||||
> separately. The Helm chart installs the kubit operator.

@@ -206,7 +215,8 @@ You can provide your own ingress or you can install
[Nginx Ingress Controller](https://github.com/kubernetes/ingress-nginx) to use
the InfluxDB-defined ingress.
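
If you don't already run an ingress controller, one way to install Nginx
Ingress Controller is with Helm. This is a sketch based on the ingress-nginx
project's published Helm instructions; verify the current procedure in that
project's documentation:

```bash
# Install (or upgrade) the ingress-nginx controller into its own namespace
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```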

> [!Important]
>
> #### Allow gRPC/HTTP2
>
> InfluxDB Clustered components use gRPC/HTTP2 protocols.

@@ -232,19 +242,20 @@ that work with InfluxDB Clustered. Other S3-compatible object stores should work
as well.
{{% /caption %}}

> [!Important]
>
> #### Object storage recommendations
>
> We **strongly** recommend the following:
>
> - ##### Enable object versioning
>
>   Enable object versioning in your object store.
>   Refer to your object storage provider's documentation for information about
>   enabling object versioning.
>
> - ##### Run the object store in a separate namespace or outside of Kubernetes
>
>   Run the object store in a separate namespace from InfluxDB or external to
>   Kubernetes entirely. Doing so makes management of the InfluxDB cluster easier
>   and helps to prevent accidental data loss. While deploying everything in the

@@ -260,7 +271,8 @@ the correct permissions to allow InfluxDB to perform all the actions it needs to

The IAM role that you use to access AWS S3 should have the following policy:

{{% code-placeholders "S3_BUCKET_NAME" %}}

```json
{
  "Version": "2012-10-17",

@@ -297,6 +309,7 @@ The IAM role that you use to access AWS S3 should have the following policy:

  ]
}
```

{{% /code-placeholders %}}

Replace the following:

@@ -310,13 +323,15 @@ Replace the following:

To use Google Cloud Storage (GCS) as your object store, your [IAM principal](https://cloud.google.com/iam/docs/overview) should be granted the `roles/storage.objectUser` role.
For example, if using [Google Service Accounts](https://cloud.google.com/iam/docs/service-account-overview):

{{% code-placeholders "GCP_SERVICE_ACCOUNT|GCP_BUCKET" %}}

```bash
gcloud storage buckets add-iam-policy-binding \
  gs://GCP_BUCKET \
  --member="serviceAccount:GCP_SERVICE_ACCOUNT" \
  --role="roles/storage.objectUser"
```

{{% /code-placeholders %}}

Replace the following:

@@ -333,13 +348,15 @@ should be granted the `Storage Blob Data Contributor` role.
This is a built-in role for Azure which encompasses common permissions.
You can assign it using the following command:

{{% code-placeholders "PRINCIPAL|AZURE_SUBSCRIPTION|AZURE_RESOURCE_GROUP|AZURE_STORAGE_ACCOUNT|AZURE_STORAGE_CONTAINER" %}}

```bash
az role assignment create \
  --role "Storage Blob Data Contributor" \
  --assignee PRINCIPAL \
  --scope "/subscriptions/AZURE_SUBSCRIPTION/resourceGroups/AZURE_RESOURCE_GROUP/providers/Microsoft.Storage/storageAccounts/AZURE_STORAGE_ACCOUNT/blobServices/default/containers/AZURE_STORAGE_CONTAINER"
```

{{% /code-placeholders %}}

Replace the following:

@@ -354,7 +371,7 @@ Replace the following:

{{< /expand-wrapper >}}

> [!Note]
> To configure permissions with MinIO, use the
> [example AWS access policy](#view-example-aws-s3-access-policy).

@@ -362,7 +379,7 @@ Replace the following:

The [InfluxDB Catalog](/influxdb3/clustered/reference/internals/storage-engine/#catalog)
that stores metadata related to your time series data requires a PostgreSQL or
PostgreSQL-compatible database _(AWS Aurora, hosted PostgreSQL, etc.)_.
The process for installing and setting up your PostgreSQL-compatible database
depends on the database and database provider you use.
Refer to your database's or provider's documentation for setting up your

@@ -376,12 +393,12 @@ PostgreSQL-compatible database.

applications, ensure that your PostgreSQL-compatible instance is dedicated
exclusively to InfluxDB.
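
As a quick, hypothetical connectivity check for the catalog database (`psql`
is assumed to be available, and the connection-string values are placeholders
for your own host, credentials, and database name):

```bash
# Verify the catalog database is reachable and accepting connections
psql "postgresql://USERNAME:PASSWORD@POSTGRES_HOST:5432/influxdb" -c "SELECT version();"
```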

> [!Note]
> We **strongly** recommend running the PostgreSQL-compatible database
> in a separate namespace from InfluxDB or external to Kubernetes entirely.
> Doing so makes management of the InfluxDB cluster easier and helps to prevent
> accidental data loss.
>
> While deploying everything in the same namespace is possible, we do not
> recommend it for production environments.

@@ -0,0 +1,696 @@

---
title: Use multi-file Python code and modules in plugins
description: |
  Organize complex plugin logic across multiple Python files and modules for better code reuse, testing, and maintainability in InfluxDB 3 Processing Engine plugins.
menu:
  influxdb3_core:
    name: Use multi-file plugins
    parent: Processing engine and Python plugins
weight: 101
influxdb3/core/tags: [processing engine, plugins, python, modules]
related:
  - /influxdb3/core/plugins/
  - /influxdb3/core/plugins/extend-plugin/
  - /influxdb3/core/reference/cli/influxdb3/create/trigger/
---

As your plugin logic grows in complexity, organizing code across multiple Python files improves maintainability, enables code reuse, and makes testing easier.
The InfluxDB 3 Processing Engine supports multi-file plugin architectures using standard Python module patterns.

## Before you begin

Ensure you have:

- A working InfluxDB 3 Core instance with the Processing Engine enabled (see the sketch after this list)
- Basic understanding of [Python modules and packages](https://docs.python.org/3/tutorial/modules.html)
- Familiarity with [creating InfluxDB 3 plugins](/influxdb3/core/plugins/)
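
The following is a minimal sketch of starting the server with the Processing
Engine enabled, using `serve` options documented in the `influxdb3` CLI
reference; the node ID, object store type, and paths are example values to
adjust for your environment:

```bash
# Start InfluxDB 3 Core with the Processing Engine enabled by
# pointing --plugin-dir at the directory that holds your plugins
influxdb3 serve \
  --node-id my-node \
  --object-store file \
  --data-dir ~/.influxdb3 \
  --plugin-dir /path/to/plugins
```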

## Multi-file plugin structure

A multi-file plugin is a directory containing Python files organized as a package.
The directory must include an `__init__.py` file that serves as the entry point and contains your trigger function.

### Basic structure

```
my_plugin/
├── __init__.py      # Required - entry point with trigger function
├── processors.py    # Data processing functions
├── utils.py         # Helper utilities
└── config.py        # Configuration management
```

### Required: `__init__.py` entry point

The `__init__.py` file must contain the trigger function that InfluxDB calls when the trigger fires.
This file imports and orchestrates code from other modules in your plugin.

```python
# my_plugin/__init__.py
from .processors import process_data
from .config import load_settings
from .utils import format_output

def process_writes(influxdb3_local, table_batches, args=None):
    """Entry point for WAL trigger."""
    settings = load_settings(args)

    for table_batch in table_batches:
        processed_data = process_data(table_batch, settings)
        # format_output returns a list of line builders; write each one
        for line in format_output(processed_data):
            influxdb3_local.write(line)
```

## Organizing plugin code

### Separate concerns into modules

Organize your plugin code by functional responsibility to improve maintainability and testing.

#### processors.py - Data transformation logic

```python
# my_plugin/processors.py
"""Data processing and transformation functions."""

def process_data(table_batch, settings):
    """Transform data according to configuration settings."""
    table_name = table_batch["table_name"]
    rows = table_batch["rows"]

    transformed_rows = []
    for row in rows:
        transformed = transform_row(row, settings)
        if transformed:
            transformed_rows.append(transformed)

    return {
        "table": table_name,
        "rows": transformed_rows,
        "count": len(transformed_rows)
    }

def transform_row(row, settings):
    """Apply transformations to a single row."""
    # Apply threshold filtering
    if "value" in row and row["value"] < settings.get("min_value", 0):
        return None

    # Apply unit conversion if configured (guard against rows without a value)
    if settings.get("convert_units") and "value" in row:
        row["value"] = row["value"] * settings.get("conversion_factor", 1.0)

    return row
```

#### config.py - Configuration management

```python
# my_plugin/config.py
"""Plugin configuration parsing and validation."""

DEFAULT_SETTINGS = {
    "min_value": 0.0,
    "convert_units": False,
    "conversion_factor": 1.0,
    "output_measurement": "processed_data",
}

def load_settings(args):
    """Load and validate plugin settings from trigger arguments."""
    settings = DEFAULT_SETTINGS.copy()

    if not args:
        return settings

    # Parse numeric arguments
    if "min_value" in args:
        settings["min_value"] = float(args["min_value"])

    if "conversion_factor" in args:
        settings["conversion_factor"] = float(args["conversion_factor"])

    # Parse boolean arguments
    if "convert_units" in args:
        settings["convert_units"] = args["convert_units"].lower() in ("true", "1", "yes")

    # Parse string arguments
    if "output_measurement" in args:
        settings["output_measurement"] = args["output_measurement"]

    return settings

def validate_settings(settings):
    """Validate settings and raise exceptions for invalid configurations."""
    if settings["min_value"] < 0:
        raise ValueError("min_value must be non-negative")

    if settings["conversion_factor"] <= 0:
        raise ValueError("conversion_factor must be positive")

    return True
```

#### utils.py - Helper functions

```python
# my_plugin/utils.py
"""Utility functions for data formatting and logging."""

from datetime import datetime

def format_output(processed_data):
    """Format processed data for writing to InfluxDB."""
    from influxdb3_local import LineBuilder

    lines = []
    measurement = processed_data.get("measurement", "processed_data")

    for row in processed_data["rows"]:
        line = LineBuilder(measurement)

        # Add tags from row
        for key, value in row.items():
            if key.startswith("tag_"):
                line.tag(key.replace("tag_", ""), str(value))

        # Add fields from row
        for key, value in row.items():
            if key.startswith("field_"):
                field_name = key.replace("field_", "")
                if isinstance(value, float):
                    line.float64_field(field_name, value)
                elif isinstance(value, int):
                    line.int64_field(field_name, value)
                elif isinstance(value, str):
                    line.string_field(field_name, value)

        lines.append(line)

    return lines

def log_metrics(influxdb3_local, operation, duration_ms, record_count):
    """Log plugin performance metrics."""
    influxdb3_local.info(
        f"Operation: {operation}, "
        f"Duration: {duration_ms}ms, "
        f"Records: {record_count}"
    )
```

## Importing external libraries

Multi-file plugins can use both relative imports (for your own modules) and absolute imports (for external libraries).

### Relative imports for plugin modules

Use relative imports to reference other modules within your plugin:

```python
# my_plugin/__init__.py
from .processors import process_data   # Same package
from .config import load_settings      # Same package
from .utils import format_output       # Same package

# Relative imports from subdirectories
from .transforms.aggregators import calculate_mean
from .integrations.webhook import send_notification
```

### Absolute imports for external libraries

Use absolute imports for standard library and third-party packages:

```python
# my_plugin/processors.py
import json
import time
from datetime import datetime, timedelta
from collections import defaultdict

# Third-party libraries (must be installed with influxdb3 install package)
import pandas as pd
import numpy as np
```

### Installing third-party dependencies

Before using external libraries, install them into the Processing Engine's Python environment:

```bash
# Install packages for your plugin
influxdb3 install package pandas numpy requests
```

For Docker deployments:

```bash
docker exec -it CONTAINER_NAME influxdb3 install package pandas numpy requests
```

## Advanced plugin patterns

### Nested module structure

For complex plugins, organize code into subdirectories:

```
my_advanced_plugin/
├── __init__.py
├── config.py
├── transforms/
│   ├── __init__.py
│   ├── aggregators.py
│   └── filters.py
├── integrations/
│   ├── __init__.py
│   ├── webhook.py
│   └── email.py
└── utils/
    ├── __init__.py
    ├── logging.py
    └── validators.py
```

Import from nested modules:

```python
# my_advanced_plugin/__init__.py
from .transforms.aggregators import calculate_statistics
from .transforms.filters import apply_threshold_filter
from .integrations.webhook import send_alert
from .utils.logging import setup_logger

def process_writes(influxdb3_local, table_batches, args=None):
    logger = setup_logger(influxdb3_local)

    for table_batch in table_batches:
        # Filter data
        filtered = apply_threshold_filter(table_batch, threshold=100)

        # Calculate statistics
        stats = calculate_statistics(filtered)

        # Send alerts if needed
        if stats["max"] > 1000:
            send_alert(stats, logger)
```

### Shared code across plugins

Share common code across multiple plugins using a shared module directory:

```
plugins/
├── shared/
│   ├── __init__.py
│   ├── formatters.py
│   └── validators.py
├── plugin_a/
│   └── __init__.py
└── plugin_b/
    └── __init__.py
```

Add the shared directory to Python's module search path in your plugin:

```python
# plugin_a/__init__.py
import sys
from pathlib import Path

# Add shared directory to path
plugin_dir = Path(__file__).parent.parent
sys.path.insert(0, str(plugin_dir))

# Now import from shared
from shared.formatters import format_line_protocol
from shared.validators import validate_data

def process_writes(influxdb3_local, table_batches, args=None):
    for table_batch in table_batches:
        if validate_data(table_batch):
            formatted = format_line_protocol(table_batch)
            influxdb3_local.write(formatted)
```

## Testing multi-file plugins

### Unit testing individual modules

Test modules independently before integration:

```python
# tests/test_processors.py
import unittest
from my_plugin.processors import transform_row
from my_plugin.config import load_settings

class TestProcessors(unittest.TestCase):
    def test_transform_row_filtering(self):
        """Test that rows below threshold are filtered."""
        settings = {"min_value": 10.0}
        row = {"value": 5.0}

        result = transform_row(row, settings)

        self.assertIsNone(result)

    def test_transform_row_conversion(self):
        """Test unit conversion."""
        settings = {
            "convert_units": True,
            "conversion_factor": 2.0,
            "min_value": 0.0
        }
        row = {"value": 10.0}

        result = transform_row(row, settings)

        self.assertEqual(result["value"], 20.0)

if __name__ == "__main__":
    unittest.main()
```

### Testing with the influxdb3 CLI

Test your complete multi-file plugin before deployment:

```bash
# Test scheduled plugin
influxdb3 test schedule_plugin \
  --database testdb \
  --schedule "0 0 * * * *" \
  --plugin-dir /path/to/plugins \
  my_plugin

# Test WAL plugin with sample data
influxdb3 test wal_plugin \
  --database testdb \
  --plugin-dir /path/to/plugins \
  my_plugin
```

For more testing options, see the [influxdb3 test reference](/influxdb3/core/reference/cli/influxdb3/test/).

## Deploying multi-file plugins

### Upload plugin directory

Upload your complete plugin directory when creating a trigger:

```bash
# Upload the entire plugin directory
influxdb3 create trigger \
  --trigger-spec "table:sensor_data" \
  --path "/local/path/to/my_plugin" \
  --upload \
  --database mydb \
  sensor_processor
```

The `--upload` flag transfers all files in the directory to the server's plugin directory.

### Update plugin code

Update all files in a running plugin:

```bash
# Update the plugin with new code
influxdb3 update trigger \
  --database mydb \
  --trigger-name sensor_processor \
  --path "/local/path/to/my_plugin"
```

The update replaces all plugin files while preserving trigger configuration.

## Best practices

### Code organization

- **Single responsibility**: Each module should have one clear purpose
- **Shallow hierarchies**: Avoid deeply nested directory structures (2-3 levels maximum)
- **Descriptive names**: Use clear, descriptive module and function names
- **Module size**: Keep modules under 300-400 lines for maintainability

### Import management

- **Explicit imports**: Use explicit imports rather than `from module import *`
- **Standard library first**: Import standard library, then third-party, then local modules
- **Avoid circular imports**: Design modules to prevent circular dependencies

Example import organization:

```python
# Standard library
import json
import time
from datetime import datetime

# Third-party packages
import pandas as pd
import numpy as np

# Local modules
from .config import load_settings
from .processors import process_data
from .utils import format_output
```

### Error handling

Centralize error handling in your entry point:

```python
# my_plugin/__init__.py
from .processors import process_data
from .config import load_settings, validate_settings

def process_writes(influxdb3_local, table_batches, args=None):
    try:
        # Load and validate configuration
        settings = load_settings(args)
        validate_settings(settings)

        # Process data
        for table_batch in table_batches:
            process_data(influxdb3_local, table_batch, settings)

    except ValueError as e:
        influxdb3_local.error(f"Configuration error: {e}")
    except Exception as e:
        influxdb3_local.error(f"Unexpected error: {e}")
```

### Documentation

Document your modules with docstrings:

```python
"""
my_plugin - Data processing plugin for sensor data.

This plugin processes incoming sensor data by:
1. Filtering values below configured threshold
2. Converting units if requested
3. Writing processed data to output measurement

Modules:
- processors: Core data transformation logic
- config: Configuration parsing and validation
- utils: Helper functions for formatting and logging
"""

def process_writes(influxdb3_local, table_batches, args=None):
    """Process incoming sensor data writes.

    Args:
        influxdb3_local: InfluxDB API interface
        table_batches: List of table batches with written data
        args: Optional trigger arguments for configuration

    Trigger arguments:
        min_value (float): Minimum value threshold
        convert_units (bool): Enable unit conversion
        conversion_factor (float): Conversion multiplier
        output_measurement (str): Target measurement name
    """
    pass
```

## Example: Complete multi-file plugin

Here's a complete example of a temperature monitoring plugin with multi-file organization:

### Plugin structure

```
temperature_monitor/
├── __init__.py
├── config.py
├── processors.py
└── alerts.py
```

### `__init__.py`

```python
# temperature_monitor/__init__.py
"""Temperature monitoring plugin with alerting."""

from .config import load_config
from .processors import calculate_statistics
from .alerts import check_thresholds

def process_scheduled_call(influxdb3_local, call_time, args=None):
    """Monitor temperature data and send alerts."""
    try:
        config = load_config(args)

        # Query recent temperature data
        query = f"""
            SELECT temp_value, location
            FROM {config['measurement']}
            WHERE time > now() - INTERVAL '{config['window']}'
        """
        results = influxdb3_local.query(query)

        # Calculate statistics
        stats = calculate_statistics(results)

        # Check thresholds and alert
        check_thresholds(influxdb3_local, stats, config)

        influxdb3_local.info(
            f"Processed {len(results)} readings "
            f"from {len(stats)} locations"
        )

    except Exception as e:
        influxdb3_local.error(f"Plugin error: {e}")
```

### config.py

```python
# temperature_monitor/config.py
"""Configuration management for temperature monitor."""

DEFAULTS = {
    "measurement": "temperature",
    "window": "1 hour",
    "high_threshold": 30.0,
    "low_threshold": 10.0,
    "alert_measurement": "temperature_alerts"
}

def load_config(args):
    """Load configuration from trigger arguments."""
    config = DEFAULTS.copy()

    if args:
        for key in DEFAULTS:
            if key in args:
                if key.endswith("_threshold"):
                    config[key] = float(args[key])
                else:
                    config[key] = args[key]

    return config
```

### processors.py

```python
# temperature_monitor/processors.py
"""Data processing functions."""

from collections import defaultdict

def calculate_statistics(data):
    """Calculate statistics by location."""
    stats = defaultdict(lambda: {
        "count": 0,
        "sum": 0.0,
        "min": float('inf'),
        "max": float('-inf')
    })

    for row in data:
        location = row.get("location", "unknown")
        value = float(row.get("temp_value", 0))

        s = stats[location]
        s["count"] += 1
        s["sum"] += value
        s["min"] = min(s["min"], value)
        s["max"] = max(s["max"], value)

    # Calculate averages
    for location, s in stats.items():
        if s["count"] > 0:
            s["avg"] = s["sum"] / s["count"]

    return dict(stats)
```

### alerts.py

```python
# temperature_monitor/alerts.py
"""Alert checking and notification."""

def check_thresholds(influxdb3_local, stats, config):
    """Check temperature thresholds and write alerts."""
    from influxdb3_local import LineBuilder

    high_threshold = config["high_threshold"]
    low_threshold = config["low_threshold"]
    alert_measurement = config["alert_measurement"]

    for location, s in stats.items():
        if s["max"] > high_threshold:
            line = LineBuilder(alert_measurement)
            line.tag("location", location)
            line.tag("severity", "high")
            line.float64_field("temperature", s["max"])
            line.string_field("message",
                f"High temperature: {s['max']}°C exceeds {high_threshold}°C")

            influxdb3_local.write(line)
            influxdb3_local.warn(f"High temperature alert for {location}")

        elif s["min"] < low_threshold:
            line = LineBuilder(alert_measurement)
            line.tag("location", location)
            line.tag("severity", "low")
            line.float64_field("temperature", s["min"])
            line.string_field("message",
                f"Low temperature: {s['min']}°C below {low_threshold}°C")

            influxdb3_local.write(line)
            influxdb3_local.warn(f"Low temperature alert for {location}")
```

### Deploy the plugin

```bash
# Create trigger with configuration
influxdb3 create trigger \
  --trigger-spec "every:5m" \
  --path "/local/path/to/temperature_monitor" \
  --upload \
  --trigger-arguments high_threshold=35,low_threshold=5,window="15 minutes" \
  --database sensors \
  temp_monitor
```

## Related resources

- [Processing engine and Python plugins](/influxdb3/core/plugins/)
- [Extend plugins with API features](/influxdb3/core/plugins/extend-plugin/)
- [Plugin library](/influxdb3/core/plugins/library/)
- [influxdb3 create trigger](/influxdb3/core/reference/cli/influxdb3/create/trigger/)
- [influxdb3 test](/influxdb3/core/reference/cli/influxdb3/test/)

@@ -27,11 +27,13 @@ influxdb3 serve [OPTIONS]

- **object-store**: Determines where time series data is stored.
- Other object store parameters depending on the selected `object-store` type.

> [!NOTE]
> `--node-id` supports alphanumeric strings with optional hyphens.

> [!Important]
>
> #### Global configuration options
>
> Some configuration options (like [`--num-io-threads`](/influxdb3/core/reference/config-options/#num-io-threads)) are **global** and must be specified **before** the `serve` command:
>
> ```bash

@@ -44,95 +46,95 @@ influxdb3 serve [OPTIONS]

| Option | | Description |
| :--- | :--- | :--- |
| {{< req "\*" >}} | `--node-id` | _See [configuration options](/influxdb3/core/reference/config-options/#node-id)_ |
| {{< req "\*" >}} | `--object-store` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store)_ |
| | `--admin-token-recovery-http-bind` | _See [configuration options](/influxdb3/core/reference/config-options/#admin-token-recovery-http-bind)_ |
| | `--admin-token-recovery-tcp-listener-file-path` | _See [configuration options](/influxdb3/core/reference/config-options/#admin-token-recovery-tcp-listener-file-path)_ |
| | `--admin-token-file` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#admin-token-file)_ |
| | `--aws-access-key-id` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-access-key-id)_ |
| | `--aws-allow-http` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-allow-http)_ |
| | `--aws-credentials-file` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-credentials-file)_ |
| | `--aws-default-region` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-default-region)_ |
| | `--aws-endpoint` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-endpoint)_ |
| | `--aws-secret-access-key` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-secret-access-key)_ |
| | `--aws-session-token` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-session-token)_ |
| | `--aws-skip-signature` | _See [configuration options](/influxdb3/core/reference/config-options/#aws-skip-signature)_ |
| | `--azure-allow-http` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-allow-http)_ |
| | `--azure-endpoint` | _See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-endpoint)_ |
| | `--azure-storage-access-key` | _See [configuration options](/influxdb3/core/reference/config-options/#azure-storage-access-key)_ |
| | `--azure-storage-account` | _See [configuration options](/influxdb3/core/reference/config-options/#azure-storage-account)_ |
| | `--bucket` | _See [configuration options](/influxdb3/core/reference/config-options/#bucket)_ |
| | `--data-dir` | _See [configuration options](/influxdb3/core/reference/config-options/#data-dir)_ |
| | `--datafusion-config` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-config)_ |
| | `--datafusion-max-parquet-fanout` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-max-parquet-fanout)_ |
| | `--datafusion-num-threads` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-num-threads)_ |
| | `--datafusion-runtime-disable-lifo-slot` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-disable-lifo-slot)_ |
| | `--datafusion-runtime-event-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-event-interval)_ |
| | `--datafusion-runtime-global-queue-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-global-queue-interval)_ |
| | `--datafusion-runtime-max-blocking-threads` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-max-blocking-threads)_ |
| | `--datafusion-runtime-max-io-events-per-tick` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-max-io-events-per-tick)_ |
| | `--datafusion-runtime-thread-keep-alive` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-thread-keep-alive)_ |
| | `--datafusion-runtime-thread-priority` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-thread-priority)_ |
| | `--datafusion-runtime-type` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-runtime-type)_ |
| | `--datafusion-use-cached-parquet-loader` | _See [configuration options](/influxdb3/core/reference/config-options/#datafusion-use-cached-parquet-loader)_ |
| | `--delete-grace-period` | _See [configuration options](/influxdb3/core/reference/config-options/#delete-grace-period)_ |
| | `--disable-authz` | _See [configuration options](/influxdb3/core/reference/config-options/#disable-authz)_ |
| | `--disable-parquet-mem-cache` | _See [configuration options](/influxdb3/core/reference/config-options/#disable-parquet-mem-cache)_ |
| | `--distinct-cache-eviction-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#distinct-cache-eviction-interval)_ |
| | `--exec-mem-pool-bytes` | _See [configuration options](/influxdb3/core/reference/config-options/#exec-mem-pool-bytes)_ |
| | `--force-snapshot-mem-threshold` | _See [configuration options](/influxdb3/core/reference/config-options/#force-snapshot-mem-threshold)_ |
| | `--gen1-duration` | _See [configuration options](/influxdb3/core/reference/config-options/#gen1-duration)_ |
| | `--gen1-lookback-duration` | _See [configuration options](/influxdb3/core/reference/config-options/#gen1-lookback-duration)_ |
| | `--google-service-account` | _See [configuration options](/influxdb3/core/reference/config-options/#google-service-account)_ |
| | `--hard-delete-default-duration` | _See [configuration options](/influxdb3/core/reference/config-options/#hard-delete-default-duration)_ |
| `-h` | `--help` | Print help information |
| | `--help-all` | Print detailed help information |
| | `--http-bind` | _See [configuration options](/influxdb3/core/reference/config-options/#http-bind)_ |
| | `--last-cache-eviction-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#last-cache-eviction-interval)_ |
| | `--log-destination` | _See [configuration options](/influxdb3/core/reference/config-options/#log-destination)_ |
| | `--log-filter` | _See [configuration options](/influxdb3/core/reference/config-options/#log-filter)_ |
| | `--log-format` | _See [configuration options](/influxdb3/core/reference/config-options/#log-format)_ |
| | `--max-http-request-size` | _See [configuration options](/influxdb3/core/reference/config-options/#max-http-request-size)_ |
| | `--object-store-cache-endpoint` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store-cache-endpoint)_ |
| | `--object-store-connection-limit` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store-connection-limit)_ |
| | `--object-store-http2-max-frame-size` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store-http2-max-frame-size)_ |
| | `--object-store-http2-only` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store-http2-only)_ |
| | `--object-store-max-retries` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store-max-retries)_ |
| | `--object-store-retry-timeout` | _See [configuration options](/influxdb3/core/reference/config-options/#object-store-retry-timeout)_ |
| | `--package-manager` | _See [configuration options](/influxdb3/core/reference/config-options/#package-manager)_ |
| | `--parquet-mem-cache-prune-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#parquet-mem-cache-prune-interval)_ |
| | `--parquet-mem-cache-prune-percentage` | _See [configuration options](/influxdb3/core/reference/config-options/#parquet-mem-cache-prune-percentage)_ |
| | `--parquet-mem-cache-query-path-duration` | _See [configuration options](/influxdb3/core/reference/config-options/#parquet-mem-cache-query-path-duration)_ |
| | `--parquet-mem-cache-size` | _See [configuration options](/influxdb3/core/reference/config-options/#parquet-mem-cache-size)_ |
| | `--plugin-dir` | _See [configuration options](/influxdb3/core/reference/config-options/#plugin-dir)_ |
| | `--preemptive-cache-age` | _See [configuration options](/influxdb3/core/reference/config-options/#preemptive-cache-age)_ |
| | `--query-file-limit` | _See [configuration options](/influxdb3/core/reference/config-options/#query-file-limit)_ |
| | `--query-log-size` | _See [configuration options](/influxdb3/core/reference/config-options/#query-log-size)_ |
| | `--retention-check-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#retention-check-interval)_ |
| | `--snapshotted-wal-files-to-keep` | _See [configuration options](/influxdb3/core/reference/config-options/#snapshotted-wal-files-to-keep)_ |
| | `--table-index-cache-concurrency-limit` | _See [configuration options](/influxdb3/core/reference/config-options/#table-index-cache-concurrency-limit)_ |
| | `--table-index-cache-max-entries` | _See [configuration options](/influxdb3/core/reference/config-options/#table-index-cache-max-entries)_ |
| | `--tcp-listener-file-path` | _See [configuration options](/influxdb3/core/reference/config-options/#tcp-listener-file-path)_ |
| | `--telemetry-disable-upload` | _See [configuration options](/influxdb3/core/reference/config-options/#telemetry-disable-upload)_ |
| | `--telemetry-endpoint` | _See [configuration options](/influxdb3/core/reference/config-options/#telemetry-endpoint)_ |
| | `--tls-cert` | _See [configuration options](/influxdb3/core/reference/config-options/#tls-cert)_ |
| | `--tls-key` | _See [configuration options](/influxdb3/core/reference/config-options/#tls-key)_ |
| | `--tls-minimum-version` | _See [configuration options](/influxdb3/core/reference/config-options/#tls-minimum-version)_ |
| | `--traces-exporter` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter)_ |
| | `--traces-exporter-jaeger-agent-host` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-agent-host)_ |
| | `--traces-exporter-jaeger-agent-port` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-agent-port)_ |
| | `--traces-exporter-jaeger-service-name` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-service-name)_ |
| | `--traces-exporter-jaeger-trace-context-header-name` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-exporter-jaeger-trace-context-header-name)_ |
| | `--traces-jaeger-debug-name` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-jaeger-debug-name)_ |
| | `--traces-jaeger-max-msgs-per-second` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-jaeger-max-msgs-per-second)_ |
| | `--traces-jaeger-tags` | _See [configuration options](/influxdb3/core/reference/config-options/#traces-jaeger-tags)_ |
| | `--virtual-env-location` | _See [configuration options](/influxdb3/core/reference/config-options/#virtual-env-location)_ |
| | `--wal-flush-interval` | _See [configuration options](/influxdb3/core/reference/config-options/#wal-flush-interval)_ |
| | `--wal-max-write-buffer-size` | _See [configuration options](/influxdb3/core/reference/config-options/#wal-max-write-buffer-size)_ |
| | `--wal-replay-concurrency-limit` | _See [configuration options](/influxdb3/core/reference/config-options/#wal-replay-concurrency-limit)_ |
| | `--wal-replay-fail-on-error` | _See [configuration options](/influxdb3/core/reference/config-options/#wal-replay-fail-on-error)_ |
| | `--wal-snapshot-size` | _See [configuration options](/influxdb3/core/reference/config-options/#wal-snapshot-size)_ |
| | `--without-auth` | _See [configuration options](/influxdb3/core/reference/config-options/#without-auth)_ |

### Option environment variables

@@ -169,7 +171,8 @@ influxdb3 --object-store memory
INFLUXDB3_NODE_IDENTIFIER_PREFIX=my-node influxdb3
```

> [!Important]
>
> #### Production deployments
>
> Quick-start mode is designed for development and testing environments.

@@ -184,7 +187,7 @@ For more information about quick-start mode, see [Get started](/influxdb3/core/g

- [Run the InfluxDB 3 server](#run-the-influxdb-3-server)
- [Run the InfluxDB 3 server with extra verbose logging](#run-the-influxdb-3-server-with-extra-verbose-logging)
- [Run InfluxDB 3 with debug logging using LOG_FILTER](#run-influxdb-3-with-debug-logging-using-log_filter)

In the examples below, replace
{{% code-placeholder-key %}}`my-host-01`{{% /code-placeholder-key %}}:

@@ -215,7 +218,7 @@ influxdb3 serve \
  --verbose
```

### Run InfluxDB 3 with debug logging using LOG_FILTER

<!--pytest.mark.skip-->

@@ -228,13 +231,12 @@ LOG_FILTER=debug influxdb3 serve \

{{% /code-placeholders %}}

## Troubleshooting

### Common Issues

- **Error: "Failed to connect to object store"**\
  Verify your `--object-store` setting and ensure all required parameters for that storage type are provided.
  For a fully specified example, see the sketch after this list.

- **Permission errors when using S3, Google Cloud, or Azure storage**\
  Check that your authentication credentials are correct and have sufficient permissions.
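
For example, a minimal sketch of a fully specified S3 configuration, built
from options in the table above; all values are placeholders:

```bash
# Every S3-related parameter is provided explicitly
influxdb3 serve \
  --node-id my-node \
  --object-store s3 \
  --bucket my-bucket \
  --aws-default-region us-east-1 \
  --aws-access-key-id AWS_ACCESS_KEY_ID \
  --aws-secret-access-key AWS_SECRET_ACCESS_KEY
```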

@@ -85,7 +85,7 @@ In the examples below, replace the following:

{{% code-placeholders "my-host-01|my-cluster-01" %}}

### Quick-start influxdb3 server

<!--pytest.mark.skip-->

@@ -28,11 +28,13 @@ influxdb3 serve [OPTIONS]

- **object-store**: Determines where time series data is stored.
- Other object store parameters depending on the selected `object-store` type.

> [!NOTE]
> `--node-id` and `--cluster-id` support alphanumeric strings with optional hyphens.

> [!Important]
>
> #### Global configuration options
>
> Some configuration options (like [`--num-io-threads`](/influxdb3/enterprise/reference/config-options/#num-io-threads)) are **global** and must be specified **before** the `serve` command:
>
> ```bash
|
||||
|
|
@ -43,120 +45,120 @@ influxdb3 serve [OPTIONS]
|
|||
|
||||
## Options
|
||||
|
||||
| Option | | Description |
| :----- | :----------- | :---------- |
| | `--admin-token-recovery-http-bind` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#admin-token-recovery-http-bind)* |
| | `--admin-token-recovery-tcp-listener-file-path` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#admin-token-recovery-tcp-listener-file-path)* |
| | `--admin-token-file` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#admin-token-file)* |
| | `--aws-access-key-id` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-access-key-id)* |
| | `--aws-allow-http` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-allow-http)* |
| | `--aws-credentials-file` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-credentials-file)* |
| | `--aws-default-region` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-default-region)* |
| | `--aws-endpoint` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-endpoint)* |
| | `--aws-secret-access-key` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-secret-access-key)* |
| | `--aws-session-token` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-session-token)* |
| | `--aws-skip-signature` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#aws-skip-signature)* |
| | `--azure-allow-http` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-allow-http)* |
| | `--azure-endpoint` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-endpoint)* |
| | `--azure-storage-access-key` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-storage-access-key)* |
| | `--azure-storage-account` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#azure-storage-account)* |
| | `--bucket` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#bucket)* |
| | `--catalog-sync-interval` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#catalog-sync-interval)* |
| | `--cluster-id` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#cluster-id)* |
| | `--compaction-check-interval` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-check-interval)* |
| | `--compaction-cleanup-wait` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-cleanup-wait)* |
| | `--compaction-gen2-duration` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-gen2-duration)* |
| | `--compaction-max-num-files-per-plan` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-max-num-files-per-plan)* |
| | `--compaction-multipliers` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-multipliers)* |
| | `--compaction-row-limit` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#compaction-row-limit)* |
| | `--data-dir` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#data-dir)* |
| | `--datafusion-config` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-config)* |
| | `--datafusion-max-parquet-fanout` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-max-parquet-fanout)* |
| | `--datafusion-num-threads` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-num-threads)* |
| | `--datafusion-runtime-disable-lifo-slot` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-disable-lifo-slot)* |
| | `--datafusion-runtime-event-interval` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-event-interval)* |
| | `--datafusion-runtime-global-queue-interval` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-global-queue-interval)* |
| | `--datafusion-runtime-max-blocking-threads` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-max-blocking-threads)* |
| | `--datafusion-runtime-max-io-events-per-tick` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-max-io-events-per-tick)* |
| | `--datafusion-runtime-thread-keep-alive` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-thread-keep-alive)* |
| | `--datafusion-runtime-thread-priority` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-thread-priority)* |
| | `--datafusion-runtime-type` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-runtime-type)* |
| | `--datafusion-use-cached-parquet-loader` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#datafusion-use-cached-parquet-loader)* |
| | `--delete-grace-period` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#delete-grace-period)* |
| | `--disable-authz` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#disable-authz)* |
| | `--disable-parquet-mem-cache` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#disable-parquet-mem-cache)* |
| | `--distinct-cache-eviction-interval` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#distinct-cache-eviction-interval)* |
| | `--distinct-value-cache-disable-from-history` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#distinct-value-cache-disable-from-history)* |
| | `--exec-mem-pool-bytes` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#exec-mem-pool-bytes)* |
| | `--force-snapshot-mem-threshold` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#force-snapshot-mem-threshold)* |
| | `--gen1-duration` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#gen1-duration)* |
| | `--gen1-lookback-duration` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#gen1-lookback-duration)* |
| | `--google-service-account` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#google-service-account)* |
| | `--hard-delete-default-duration` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#hard-delete-default-duration)* |
| `-h` | `--help` | Print help information |
| | `--help-all` | Print detailed help information |
| | `--http-bind` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#http-bind)* |
| | `--last-cache-eviction-interval` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#last-cache-eviction-interval)* |
| | `--last-value-cache-disable-from-history` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#last-value-cache-disable-from-history)* |
| | `--license-email` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#license-email)* |
| | `--license-file` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#license-file)* |
| | `--log-destination` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#log-destination)* |
| | `--log-filter` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#log-filter)* |
| | `--log-format` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#log-format)* |
| | `--max-http-request-size` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#max-http-request-size)* |
| | `--mode` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#mode)* |
| | `--node-id` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#node-id)* |
| | `--node-id-from-env` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#node-id-from-env)* |
| | `--num-cores` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#num-cores)* |
| | `--num-datafusion-threads` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#num-datafusion-threads)* |
| | `--num-database-limit` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#num-database-limit)* |
| | `--num-table-limit` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#num-table-limit)* |
| | `--num-total-columns-per-table-limit` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#num-total-columns-per-table-limit)* |
| | `--object-store` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store)* |
| | `--object-store-cache-endpoint` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-cache-endpoint)* |
| | `--object-store-connection-limit` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-connection-limit)* |
| | `--object-store-http2-max-frame-size` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-http2-max-frame-size)* |
| | `--object-store-http2-only` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-http2-only)* |
| | `--object-store-max-retries` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-max-retries)* |
| | `--object-store-retry-timeout` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#object-store-retry-timeout)* |
| | `--package-manager` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#package-manager)* |
| | `--parquet-mem-cache-prune-interval` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#parquet-mem-cache-prune-interval)* |
| | `--parquet-mem-cache-prune-percentage` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#parquet-mem-cache-prune-percentage)* |
| | `--parquet-mem-cache-query-path-duration` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#parquet-mem-cache-query-path-duration)* |
| | `--parquet-mem-cache-size` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#parquet-mem-cache-size)* |
| | `--permission-tokens-file` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#permission-tokens-file)* |
| | `--plugin-dir` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#plugin-dir)* |
| | `--preemptive-cache-age` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#preemptive-cache-age)* |
| | `--query-file-limit` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#query-file-limit)* |
| | `--query-log-size` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#query-log-size)* |
| | `--replication-interval` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#replication-interval)* |
| | `--retention-check-interval` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#retention-check-interval)* |
| | `--snapshotted-wal-files-to-keep` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#snapshotted-wal-files-to-keep)* |
| | `--table-index-cache-concurrency-limit` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#table-index-cache-concurrency-limit)* |
| | `--table-index-cache-max-entries` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#table-index-cache-max-entries)* |
| | `--tcp-listener-file-path` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#tcp-listener-file-path)* |
| | `--telemetry-disable-upload` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#telemetry-disable-upload)* |
| | `--telemetry-endpoint` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#telemetry-endpoint)* |
| | `--tls-cert` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#tls-cert)* |
| | `--tls-key` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#tls-key)* |
| | `--tls-minimum-version` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#tls-minimum-version)* |
| | `--traces-exporter` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter)* |
| | `--traces-exporter-jaeger-agent-host` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter-jaeger-agent-host)* |
| | `--traces-exporter-jaeger-agent-port` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter-jaeger-agent-port)* |
| | `--traces-exporter-jaeger-service-name` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter-jaeger-service-name)* |
| | `--traces-exporter-jaeger-trace-context-header-name` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-exporter-jaeger-trace-context-header-name)* |
| | `--traces-jaeger-debug-name` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-jaeger-debug-name)* |
| | `--traces-jaeger-max-msgs-per-second` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-jaeger-max-msgs-per-second)* |
| | `--traces-jaeger-tags` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#traces-jaeger-tags)* |
| | `--use-pacha-tree` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#use-pacha-tree)* |
| | `--virtual-env-location` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#virtual-env-location)* |
| | `--wait-for-running-ingestor` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#wait-for-running-ingestor)* |
| | `--wal-flush-interval` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-flush-interval)* |
| | `--wal-max-write-buffer-size` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-max-write-buffer-size)* |
| | `--wal-replay-concurrency-limit` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-replay-concurrency-limit)* |
| | `--wal-replay-fail-on-error` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-replay-fail-on-error)* |
| | `--wal-snapshot-size` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#wal-snapshot-size)* |
| | `--without-auth` | *See [configuration options](/influxdb3/enterprise/reference/config-options/#without-auth)* |

### Option environment variables

@@ -195,7 +197,8 @@ influxdb3 --object-store memory

INFLUXDB3_NODE_IDENTIFIER_PREFIX=my-node influxdb3
```

> [!Important]
>
> #### Production deployments
>
> Quick-start mode is designed for development and testing environments.

@@ -210,15 +213,15 @@ For more information about quick-start mode, see [Get started](/influxdb3/enterp

- [Run the InfluxDB 3 server](#run-the-influxdb-3-server)
- [Run the InfluxDB 3 server with extra verbose logging](#run-the-influxdb-3-server-with-extra-verbose-logging)
- [Run InfluxDB 3 with debug logging using LOG_FILTER](#run-influxdb-3-with-debug-logging-using-log_filter)

In the examples below, replace the following:

- {{% code-placeholder-key %}}`my-host-01`{{% /code-placeholder-key %}}:
  a unique string that identifies your {{< product-name >}} server.
- {{% code-placeholder-key %}}`my-cluster-01`{{% /code-placeholder-key %}}:
  a unique string that identifies your {{< product-name >}} cluster.
  The value you use must be different from `--node-id` values in the cluster.

{{% code-placeholders "my-host-01|my-cluster-01" %}}
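Before the individual examples, a minimal `serve` invocation using these placeholders might look like the following sketch (the `file` object store and `~/.influxdb` data directory are assumptions based on the quick-start defaults described elsewhere in this reference):

```bash
# Minimal Enterprise server (illustrative defaults)
influxdb3 serve \
  --node-id my-host-01 \
  --cluster-id my-cluster-01 \
  --object-store file \
  --data-dir ~/.influxdb
```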

@@ -273,7 +276,7 @@ influxdb3 serve \
  --verbose
```

### Run InfluxDB 3 with debug logging using LOG_FILTER

<!--pytest.mark.skip-->

@@ -287,16 +290,15 @@ LOG_FILTER=debug influxdb3 serve \

{{% /code-placeholders %}}

## Troubleshooting

### Common Issues

- **Error: "cluster-id cannot match any node-id in the cluster"**\
  Ensure your `--cluster-id` value is different from all `--node-id` values in your cluster.

- **Error: "Failed to connect to object store"**\
  Verify your `--object-store` setting and ensure all required parameters for that storage type are provided.

- **Permission errors when using S3, Google Cloud, or Azure storage**\
  Check that your authentication credentials are correct and have sufficient permissions (see the S3 sketch after this list).
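For S3 in particular, a hedged sketch showing the full set of related flags from the options table above (bucket, region, and credential values are placeholders):

```bash
# S3 object store (placeholder credentials and bucket)
influxdb3 serve \
  --node-id my-host-01 \
  --cluster-id my-cluster-01 \
  --object-store s3 \
  --bucket BUCKET_NAME \
  --aws-default-region us-east-1 \
  --aws-access-key-id AWS_ACCESS_KEY_ID \
  --aws-secret-access-key AWS_SECRET_ACCESS_KEY
```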

@@ -1,5 +1,6 @@

<!--Shortcode-->
{{% product-name %}} stores data related to the database server, queries, and tables in *system tables*.
You can query the system tables for information about your running server, databases, and table schemas.

## Query system tables

@@ -11,11 +12,10 @@ You can query the system tables for information about your running server, datab

- [Recently executed queries](#recently-executed-queries)
- [Query plugin files](#query-plugin-files)

### Use the HTTP query API

Use the HTTP API `/api/v3/query_sql` endpoint to retrieve system information about your database server and table schemas in {{% product-name %}}.

To execute a query, send a `GET` or `POST` request to the endpoint:

- `GET`: Pass parameters in the URL query string (for simple queries)

@@ -23,16 +23,17 @@ To execute a query, send a `GET` or `POST` request to the endpoint:

Include the following parameters (a request sketch follows the list):

- `q`: *({{< req >}})* The SQL query to execute.
- `db`: *({{< req >}})* The database to execute the query against.
- `params`: A JSON object containing parameters to be used in a *parameterized query*.
- `format`: The format of the response (`json`, `jsonl`, `csv`, `pretty`, or `parquet`).
  JSONL (`jsonl`) is preferred because it streams results back to the client.
  `pretty` is for human-readable output. Default is `json`.
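For instance, a hedged `GET` sketch using URL-encoded parameters (the database name is a placeholder, and `system.queries` is assumed here as the table behind [Recently executed queries](#recently-executed-queries)):

```bash
# GET request: parameters travel in the URL query string
curl --get "http://localhost:8181/api/v3/query_sql" \
  --data-urlencode "db=DATABASE_NAME" \
  --data-urlencode "q=SELECT * FROM system.queries LIMIT 10" \
  --data-urlencode "format=jsonl"
```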

#### Examples

> [!Note]
>
> #### system_ sample data
>
> In examples, tables with `"table_name":"system_` are user-created tables for CPU, memory, disk,

@@ -90,8 +91,8 @@ A table has one of the following `table_schema` values:

The following query sends a `POST` request that executes an SQL query to
retrieve information about columns in the sample `system_swap` table schema:

*Note: when you send a query in JSON, you must escape single quotes
that surround field names.*

```bash
curl "http://localhost:8181/api/v3/query_sql" \

@@ -144,6 +145,7 @@ To view loaded Processing Engine plugins, query the `plugin_files` system table

The `system.plugin_files` table provides information about plugin files loaded by the Processing Engine (see the query sketch after the column list):

**Columns:**

- `plugin_name` (String): Name of a trigger using this plugin
- `file_name` (String): Plugin filename
- `file_path` (String): Full server path to the plugin file
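A hedged sketch of inspecting this table through the HTTP query API described earlier (assuming, per the `influxdb3 show plugins` note later in this reference, that the table lives in the `_internal` database):

```bash
# Query plugin files via the SQL API (db name assumed to be _internal)
curl --get "http://localhost:8181/api/v3/query_sql" \
  --data-urlencode "db=_internal" \
  --data-urlencode "q=SELECT plugin_name, file_name, file_path FROM system.plugin_files" \
  --data-urlencode "format=pretty"
```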

@@ -1,4 +1,3 @@

The `influxdb3 create trigger` command creates a new trigger for the
processing engine.

@@ -17,32 +16,31 @@ influxdb3 create trigger [OPTIONS] \

## Arguments

- **TRIGGER_NAME**: A name for the new trigger.

## Options

| Option | | Description |
| :----- | :------------------ | :---------- |
| `-H` | `--host` | Host URL of the running {{< product-name >}} server (default is `http://127.0.0.1:8181`) |
| `-d` | `--database` | *({{< req >}})* Name of the database to operate on |
| | `--token` | *({{< req >}})* Authentication token |
| `-p` | `--path` | Path to plugin file or directory (single `.py` file or directory containing `__init__.py` for multifile plugins). Can be local path (with `--upload`) or server path. Replaces `--plugin-filename`. |
| | `--upload` | Upload local plugin files to the server. Requires admin token. Use with `--path` to specify local files. |
| | `--plugin-filename` | *(Deprecated: use `--path` instead)* Name of the file, stored in the server's `plugin-dir`, that contains the Python plugin code to run |
| | `--trigger-spec` | Trigger specification: `table:<TABLE_NAME>`, `all_tables`, `every:<DURATION>`, `cron:<EXPRESSION>`, or `request:<REQUEST_PATH>` |
| | `--trigger-arguments` | Additional arguments for the trigger, in the format `key=value`, separated by commas (for example, `arg1=val1,arg2=val2`) |
| | `--disabled` | Create the trigger in disabled state |
| | `--error-behavior` | Error handling behavior: `log`, `retry`, or `disable` |
| | `--run-asynchronous` | Run the trigger asynchronously, allowing multiple triggers to run simultaneously (default is synchronous) |
{{% show-in "enterprise" %}}| | `--node-spec` | Which node(s) the trigger should be configured on. Two value formats are supported: `all` (default) - applies to all nodes, or `nodes:<node-id>[,<node-id>..]` - applies only to specified comma-separated list of nodes |{{% /show-in %}}
| | `--tls-ca` | Path to a custom TLS certificate authority (for testing or self-signed certificates) |
| `-h` | `--help` | Print help information |
| | `--help-all` | Print detailed help information |

If you want to use a plugin from the [Plugin Library](https://github.com/influxdata/influxdb3_plugins) repo, use the URL path with `gh:` specified as the prefix.
For example, to use the [System Metrics](https://github.com/influxdata/influxdb3_plugins/blob/main/influxdata/system_metrics/system_metrics.py) plugin, the plugin filename is `gh:influxdata/system_metrics/system_metrics.py`.
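Combining the `gh:` prefix with the options above, a hedged sketch of creating a scheduled trigger from the Plugin Library (the trigger name and the `every:10s` interval are illustrative assumptions, and `--path` is used in place of the deprecated `--plugin-filename`):

```bash
# Scheduled trigger from the Plugin Library (illustrative name and interval)
influxdb3 create trigger \
  --database DATABASE_NAME \
  --token AUTH_TOKEN \
  --path "gh:influxdata/system_metrics/system_metrics.py" \
  --trigger-spec "every:10s" \
  system_metrics_trigger
```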

### Option environment variables

You can use the following environment variables to set command options:

@@ -67,7 +65,7 @@ The following examples show how to use the `influxdb3 create trigger` command to

- [Create a disabled trigger](#create-a-disabled-trigger)
- [Create a trigger with error handling](#create-a-trigger-with-error-handling)

***

Replace the following placeholders with your values:

@@ -75,11 +73,11 @@ Replace the following placeholders with your values:

- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: Authentication token
- {{% code-placeholder-key %}}`PLUGIN_FILENAME`{{% /code-placeholder-key %}}: Python plugin filename
- {{% code-placeholder-key %}}`TRIGGER_NAME`{{% /code-placeholder-key %}}:
  Name of the trigger to create
- {{% code-placeholder-key %}}`TABLE_NAME`{{% /code-placeholder-key %}}:
  Name of the table to trigger on

{{% code-placeholders "(DATABASE|TRIGGER)_NAME|AUTH_TOKEN|TABLE_NAME" %}}

### Create a trigger for a specific table

@@ -137,12 +135,13 @@ second minute hour day_of_month month day_of_week
```

Fields:

- **second**: 0-59
- **minute**: 0-59
- **hour**: 0-23
- **day_of_month**: 1-31
- **month**: 1-12 or JAN-DEC
- **day_of_week**: 0-7 (0 or 7 is Sunday) or SUN-SAT

Example: Run at 6:00 AM every weekday (Monday-Friday):
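Based on the six-field order above, one expression that fits this schedule is the following sketch (derived here from the field definitions, with `*` wildcards for day of month and month):

```
0 0 6 * * MON-FRI
```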

@@ -225,6 +224,7 @@ influxdb3 create trigger \
```

The `--upload` flag transfers local files to the server's plugin directory (see the sketch after this list). This is useful for:

- Local plugin development and testing
- Deploying plugins without SSH access
- Automating plugin deployment
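A hedged sketch of such an invocation, reusing the placeholders defined earlier (the local plugin path is illustrative):

```bash
# Upload a local plugin file and attach it to a table trigger
influxdb3 create trigger \
  --database DATABASE_NAME \
  --token AUTH_TOKEN \
  --path ./plugins/my_plugin.py \
  --upload \
  --trigger-spec "table:TABLE_NAME" \
  TRIGGER_NAME
```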

@@ -245,7 +245,7 @@ influxdb3 create trigger \

### Create a disabled trigger

Create a trigger in a disabled state.

<!--pytest.mark.skip-->

@@ -1,4 +1,3 @@

The `influxdb3 show` command lists resources in your {{< product-name >}} server.

## Usage

@@ -11,14 +10,14 @@ influxdb3 show <SUBCOMMAND>

## Subcommands

| Subcommand | Description |
| :---------------------------------------------------------------------- | :--------------------------------------------- |
| [databases](/influxdb3/version/reference/cli/influxdb3/show/databases/) | List databases |
{{% show-in "enterprise" %}}| [license](/influxdb3/version/reference/cli/influxdb3/show/license/) | Display license information |{{% /show-in %}}
| [plugins](/influxdb3/version/reference/cli/influxdb3/show/plugins/) | List loaded plugins |
| [system](/influxdb3/version/reference/cli/influxdb3/show/system/) | Display system table data |
| [tokens](/influxdb3/version/reference/cli/influxdb3/show/tokens/) | List authentication tokens |
| help | Print command help or the help of a subcommand |

## Options

@@ -11,36 +11,36 @@ influxdb3 show plugins [OPTIONS]

## Options

| Option | | Description |
| :----- | :----------- | :---------- |
| `-H` | `--host` | Host URL of the running {{< product-name >}} server (default is `http://127.0.0.1:8181`) |
| | `--token` | *({{< req >}})* Authentication token |
| | `--format` | Output format (`pretty` *(default)*, `json`, `jsonl`, `csv`, or `parquet`) |
| | `--output` | Path where to save output when using the `parquet` format |
| | `--tls-ca` | Path to a custom TLS certificate authority (for testing or self-signed certificates) |
| `-h` | `--help` | Print help information |
| | `--help-all` | Print detailed help information |

### Option environment variables

You can use the following environment variables to set command options, as in the example after the table:

| Environment Variable | Option |
| :--------------------- | :-------- |
| `INFLUXDB3_HOST_URL` | `--host` |
| `INFLUXDB3_AUTH_TOKEN` | `--token` |
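For example, a one-liner that supplies both values through the environment instead of flags (the host URL and token are placeholder values):

```bash
# Environment variables replace --host and --token
INFLUXDB3_HOST_URL="http://localhost:8181" INFLUXDB3_AUTH_TOKEN="AUTH_TOKEN" influxdb3 show plugins
```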

## Output

The command returns information about loaded plugin files:

- **plugin_name**: Name of a trigger using this plugin
- **file_name**: Plugin filename
- **file_path**: Full server path to the plugin file
- **size_bytes**: File size in bytes
- **last_modified**: Last modification timestamp (milliseconds since epoch)

> [!Note]
> This command queries the `system.plugin_files` table in the `_internal` database.
> For more advanced queries and filtering, see [Query system data](/influxdb3/version/admin/query-system-data/).

@@ -81,6 +81,7 @@ influxdb3 show plugins --format csv

Use the `--output` option to specify the file where you want to save the Parquet data.

<!--pytest.mark.skip-->

```bash
influxdb3 show plugins \
  --format parquet \

@@ -11,25 +11,27 @@ influxdb3 update <SUBCOMMAND>

## Subcommands

{{% show-in "enterprise" %}}
| Subcommand | Description |
| :---------------------------------------------------------------------- | :--------------------------------------------- |
| [database](/influxdb3/version/reference/cli/influxdb3/update/database/) | Update a database |
| [table](/influxdb3/version/reference/cli/influxdb3/update/table/) | Update a table |
| [trigger](/influxdb3/version/reference/cli/influxdb3/update/trigger/) | Update a trigger |
| help | Print command help or the help of a subcommand |
{{% /show-in %}}

{{% show-in "core" %}}
| Subcommand | Description |
| :---------------------------------------------------------------------- | :--------------------------------------------- |
| [database](/influxdb3/version/reference/cli/influxdb3/update/database/) | Update a database |
| [trigger](/influxdb3/version/reference/cli/influxdb3/update/trigger/) | Update a trigger |
| help | Print command help or the help of a subcommand |
{{% /show-in %}}

## Options

| Option | | Description |
| :----- | :----------- | :------------------------------ |
| `-h` | `--help` | Print help information |
| | `--help-all` | Print detailed help information |

@@ -19,21 +19,21 @@ influxdb3 update trigger [OPTIONS] \

## Options

| Option | | Description |
| :----- | :-------------------- | :---------- |
| `-H` | `--host` | Host URL of the running {{< product-name >}} server (default is `http://127.0.0.1:8181`) |
| `-d` | `--database` | *({{< req >}})* Name of the database containing the trigger |
| | `--trigger-name` | *({{< req >}})* Name of the trigger to update |
| `-p` | `--path` | Path to plugin file or directory (single `.py` file or directory containing `__init__.py` for multifile plugins). Can be local path (with `--upload`) or server path. |
| | `--upload` | Upload local plugin files to the server. Requires admin token. Use with `--path` to specify local files. |
| | `--trigger-arguments` | Additional arguments for the trigger, in the format `key=value`, separated by commas (for example, `arg1=val1,arg2=val2`) |
| | `--disabled` | Set the trigger state to disabled |
| | `--enabled` | Set the trigger state to enabled |
| | `--error-behavior` | Error handling behavior: `log`, `retry`, or `disable` |
| | `--token` | Authentication token |
| | `--tls-ca` | Path to a custom TLS certificate authority (for testing or self-signed certificates) |
| `-h` | `--help` | Print help information |
| | `--help-all` | Print detailed help information |
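For instance, a hedged sketch that re-enables a previously disabled trigger, using only options from the table above (placeholder names as defined in the examples below):

```bash
# Flip a disabled trigger back to enabled
influxdb3 update trigger \
  --database DATABASE_NAME \
  --trigger-name TRIGGER_NAME \
  --token AUTH_TOKEN \
  --enabled
```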

### Option environment variables

@@ -56,7 +56,7 @@ The following examples show how to update triggers in different scenarios.

- [Enable or disable a trigger](#enable-or-disable-a-trigger)
- [Update error handling behavior](#update-error-handling-behavior)

***

Replace the following placeholders with your values:

@@ -64,7 +64,7 @@ Replace the following placeholders:

- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: Authentication token
- {{% code-placeholder-key %}}`TRIGGER_NAME`{{% /code-placeholder-key %}}: Name of the trigger to update

{{% code-placeholders "(DATABASE|TRIGGER)_NAME|AUTH_TOKEN" %}}

### Update trigger plugin code

@@ -1,12 +1,13 @@

<!-- TOC -->

- [Prerequisites](#prerequisites)
- [Quick-Start Mode (Development)](#quick-start-mode-development)
- [Start InfluxDB](#start-influxdb)
  - [Object store examples](#object-store-examples)
{{% show-in "enterprise" %}}
- [Set up licensing](#set-up-licensing)
  - [Available license types](#available-license-types)
{{% /show-in %}}
- [Set up authorization](#set-up-authorization)
  - [Create an operator token](#create-an-operator-token)
  - [Set your token for authorization](#set-your-token-for-authorization)

@@ -35,30 +36,36 @@ influxdb3

When you run `influxdb3` without arguments, the following values are auto-generated:

{{% show-in "enterprise" %}}

- **`node-id`**: `{hostname}-node` (or `primary-node` if hostname is unavailable)
- **`cluster-id`**: `{hostname}-cluster` (or `primary-cluster` if hostname is unavailable)
{{% /show-in %}}
{{% show-in "core" %}}

- **`node-id`**: `{hostname}-node` (or `primary-node` if hostname is unavailable)
{{% /show-in %}}

- **`object-store`**: `file`
- **`data-dir`**: `~/.influxdb`

The system displays warning messages showing the auto-generated identifiers:

{{% show-in "enterprise" %}}

```
Using auto-generated node id: mylaptop-node. For production deployments, explicitly set --node-id
Using auto-generated cluster id: mylaptop-cluster. For production deployments, explicitly set --cluster-id
```

{{% /show-in %}}
{{% show-in "core" %}}

```
Using auto-generated node id: mylaptop-node. For production deployments, explicitly set --node-id
```

{{% /show-in %}}

> [!Important]
>
> #### When to use quick-start mode
>
> Quick-start mode is designed for development, testing, and home lab environments

@@ -79,24 +86,28 @@ to start {{% product-name %}}.

Provide the following:

{{% show-in "enterprise" %}}

- `--node-id`: A string identifier that distinguishes individual server
  instances within the cluster. This forms the final part of the storage path:
  `<CONFIGURED_PATH>/<CLUSTER_ID>/<NODE_ID>`.
  In a multi-node setup, this ID is used to reference specific nodes.
- `--cluster-id`: A string identifier that determines part of the storage path
  hierarchy. All nodes within the same cluster share this identifier.
  The storage path follows the pattern `<CONFIGURED_PATH>/<CLUSTER_ID>/<NODE_ID>`.
  In a multi-node setup, this ID is used to reference the entire cluster.
{{% /show-in %}}
{{% show-in "core" %}}

- `--node-id`: A string identifier that distinguishes individual server instances.
  This forms the final part of the storage path: `<CONFIGURED_PATH>/<NODE_ID>`.
{{% /show-in %}}

- `--object-store`: Specifies the type of object store to use.
  InfluxDB supports the following:

  - `file`: local file system
  - `memory`: in memory *(no object persistence)*
  - `memory-throttled`: like `memory` but with latency and throughput that
    somewhat resembles a cloud-based object store
  - `s3`: AWS S3 and S3-compatible services like Ceph or Minio
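
To simulate object-store behavior locally without any cloud dependency, `memory-throttled` drops in anywhere `memory` does; a minimal sketch (the node ID is hypothetical, and Enterprise also requires `--cluster-id`):

```bash
influxdb3 serve \
  --node-id host01 \
  --object-store memory-throttled
```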

@@ -106,14 +117,15 @@ Provide the following:

- Other object store parameters depending on the selected `object-store` type.
  For example, if you use `s3`, you must provide the bucket name and credentials.

> [!Note]
>
> #### Diskless architecture
>
> InfluxDB 3 supports a diskless architecture that can operate with object
> storage alone, eliminating the need for locally attached disks.
> {{% product-name %}} can also work with only local disk storage when needed.
>
> {{% show-in "enterprise" %}}
> The combined path structure `<CONFIGURED_PATH>/<CLUSTER_ID>/<NODE_ID>` ensures
> proper organization of data in your object store, allowing for clean
> separation between clusters and individual nodes.

@@ -123,6 +135,7 @@ For this getting started guide, use the `file` object store to persist data to
your local disk.

{{% show-in "enterprise" %}}

```bash
# File system object store
# Provide the filesystem directory

@@ -132,8 +145,10 @@ influxdb3 serve \
  --object-store file \
  --data-dir ~/.influxdb3
```

{{% /show-in %}}
{{% show-in "core" %}}

```bash
# File system object store
# Provide the file system directory

@@ -142,6 +157,7 @@ influxdb3 serve \
  --object-store file \
  --data-dir ~/.influxdb3
```

{{% /show-in %}}

### Object store examples

@@ -155,6 +171,7 @@ This is the default object store type.
Replace the following with your values:

{{% show-in "enterprise" %}}

```bash
# Filesystem object store
# Provide the filesystem directory

@@ -164,8 +181,10 @@ influxdb3 serve \
  --object-store file \
  --data-dir ~/.influxdb3
```

{{% /show-in %}}
{{% show-in "core" %}}

```bash
# File system object store
# Provide the file system directory

@@ -174,6 +193,7 @@ influxdb3 serve \
  --object-store file \
  --data-dir ~/.influxdb3
```

{{% /show-in %}}

{{% /expand %}}

@@ -187,7 +207,9 @@ provide the following options with your `docker run` command:

- `--object-store file --data-dir /path/in/container`: Uses the volume for object storage

{{% show-in "enterprise" %}}

<!--pytest.mark.skip-->

```bash
# File system object store with Docker
# Create a mount

@@ -200,9 +222,12 @@ docker run -it \
  --object-store file \
  --data-dir /path/in/container
```

{{% /show-in %}}
{{% show-in "core" %}}

<!--pytest.mark.skip-->

```bash
# File system object store with Docker
# Create a mount

@@ -214,10 +239,11 @@ docker run -it \
  --object-store file \
  --data-dir /path/in/container
```

{{% /show-in %}}

> [!Note]
>
> The {{% product-name %}} Docker image exposes port `8181`, the `influxdb3`
> server default for HTTP connections.
> To map the exposed port to a different port when running a container, see the
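
Port mapping follows the standard Docker `-p HOST:CONTAINER` form; as a minimal sketch (the image tag is an assumption), this serves the HTTP API on host port `8086`:

```bash
# Map host port 8086 to the container's default HTTP port 8181
docker run -it -p 8086:8181 influxdb:3-core
```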

@@ -226,8 +252,9 @@ docker run -it \

{{% /expand %}}
{{% expand "Docker compose with a mounted file system object store" %}}

Open `compose.yaml` for editing and add a `services` entry for
{{% product-name %}}--for example:

{{% show-in "enterprise" %}}

```yaml
# compose.yaml
services:

@@ -257,11 +284,13 @@ services:
        # Path to store plugins in the container
        target: /var/lib/influxdb3/plugins
```

Replace `EMAIL_ADDRESS` with your email address to bypass the email prompt
when generating a trial or at-home license. For more information, see [Manage your
{{% product-name %}} license](/influxdb3/version/admin/license/).

{{% /show-in %}}
{{% show-in "core" %}}

```yaml
# compose.yaml
services:

@@ -288,11 +317,13 @@ services:
        # Path to store plugins in the container
        target: /var/lib/influxdb3/plugins
```

{{% /show-in %}}

Use the Docker Compose CLI to start the server--for example:

<!--pytest.mark.skip-->

```bash
docker compose pull && docker compose up influxdb3-{{< product-key >}}
```

@@ -301,7 +332,8 @@ The command pulls the latest {{% product-name %}} Docker image and starts
`influxdb3` in a container with host port `8181` mapped to container port
`8181`, the server default for HTTP connections.

> [!Tip]
>
> #### Custom port mapping
>
> To customize your `influxdb3` server hostname and port, specify the

@@ -318,6 +350,7 @@ This is useful for production deployments that require high availability and dur

Provide your bucket name and credentials to access the S3 object store.

{{% show-in "enterprise" %}}

```bash
# S3 object store (default is the us-east-1 region)
# Specify the object store type and associated options

@@ -344,8 +377,10 @@ influxdb3 serve \
  --aws-endpoint ENDPOINT \
  --aws-allow-http
```

{{% /show-in %}}
{{% show-in "core" %}}

```bash
# S3 object store (default is the us-east-1 region)
# Specify the object store type and associated options

@@ -370,6 +405,7 @@ influxdb3 serve \
  --aws-endpoint ENDPOINT \
  --aws-allow-http
```

{{% /show-in %}}

{{% /expand %}}

@@ -379,6 +415,7 @@ Store data in RAM without persisting it on shutdown.
It's useful for rapid testing and development.

{{% show-in "enterprise" %}}

```bash
# Memory object store
# Stores data in RAM; doesn't persist data

@@ -387,8 +424,10 @@ influxdb3 serve \
  --cluster-id cluster01 \
  --object-store memory
```

{{% /show-in %}}
{{% show-in "core" %}}

```bash
# Memory object store
# Stores data in RAM; doesn't persist data

@@ -396,6 +435,7 @@ influxdb3 serve \
  --node-id host01 \
  --object-store memory
```

{{% /show-in %}}

{{% /expand %}}

@@ -409,6 +449,7 @@ influxdb3 serve --help
```

{{% show-in "enterprise" %}}

## Set up licensing

When you first start a new instance, {{% product-name %}} prompts you to select a

@@ -426,27 +467,29 @@ InfluxDB 3 Enterprise licenses:

- **At-Home**: For at-home hobbyist use with limited access to InfluxDB 3 Enterprise capabilities.
- **Commercial**: Commercial license with full access to InfluxDB 3 Enterprise capabilities.

> [!Important]
>
> #### Trial and at-home licenses with Docker
>
> To generate the trial or home license in Docker, bypass the email prompt.
> The first time you start a new instance, provide your email address with the
> `--license-email` option or the `INFLUXDB3_ENTERPRISE_LICENSE_EMAIL` environment variable.
>
> *Currently, if you use Docker and enter your email address in the prompt, a bug may
> prevent the container from generating the license.*
>
> For more information, see [the Docker Compose example](/influxdb3/enterprise/admin/license/?t=Docker+compose#start-the-server-with-your-license-email).
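
For example, a minimal sketch of passing the email at startup so the prompt never appears (all option values are placeholders):

```bash
influxdb3 serve \
  --node-id node01 \
  --cluster-id cluster01 \
  --object-store file \
  --data-dir ~/.influxdb3 \
  --license-email you@example.com
```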

{{% /show-in %}}

> [!Tip]
>
> #### Use the InfluxDB 3 Explorer query interface
>
> You can complete the remaining steps in this guide using InfluxDB 3 Explorer,
> the web-based query and administrative interface for InfluxDB 3.
> Explorer provides visual management of databases and tokens and an
> easy way to write and query your time series data.
>
> For more information, see the [InfluxDB 3 Explorer documentation](/influxdb3/explorer/).

## Set up authorization

@@ -467,17 +510,17 @@ commands and HTTP API requests.
  database
- A system token grants read access to system information endpoints and
  metrics for the server
{{% /show-in %}}
{{% show-in "core" %}}
{{% product-name %}} supports *admin* tokens, which grant access to all CLI actions and API endpoints.
{{% /show-in %}}

For more information about tokens and authorization, see [Manage tokens](/influxdb3/version/admin/tokens/).

### Create an operator token

After you start the server, create your first admin token.
The first admin token you create is the *operator* token for the server.

Use the [`influxdb3 create token` command](/influxdb3/version/reference/cli/influxdb3/create/token/)
with the `--admin` option to create your operator token:

@@ -496,11 +539,13 @@ influxdb3 create token --admin

{{% /code-tab-content %}}
{{% code-tab-content %}}

{{% code-placeholders "CONTAINER_NAME" %}}

```bash
# With Docker — in a new terminal:
docker exec -it CONTAINER_NAME influxdb3 create token --admin
```

{{% /code-placeholders %}}

Replace {{% code-placeholder-key %}}`CONTAINER_NAME`{{% /code-placeholder-key %}} with the name of your running Docker container.

@@ -510,9 +555,10 @@ Replace {{% code-placeholder-key %}}`CONTAINER_NAME`{{% /code-placeholder-key %}

The command returns a token string for authenticating CLI commands and API requests.

> [!Important]
>
> #### Store your token securely
>
> InfluxDB displays the token string only when you create it.
> Store your token securely—you cannot retrieve it from the database later.

@@ -537,10 +583,12 @@ In your command, replace {{% code-placeholder-key %}}`YOUR_AUTH_TOKEN`{{% /code-

Set the `INFLUXDB3_AUTH_TOKEN` environment variable to have the CLI use your
token automatically:

{{% code-placeholders "YOUR_AUTH_TOKEN" %}}

```bash
export INFLUXDB3_AUTH_TOKEN=YOUR_AUTH_TOKEN
```

{{% /code-placeholders %}}

{{% /tab-content %}}

@@ -548,10 +596,12 @@ export INFLUXDB3_AUTH_TOKEN=YOUR_AUTH_TOKEN

Include the `--token` option with CLI commands:

{{% code-placeholders "YOUR_AUTH_TOKEN" %}}

```bash
influxdb3 show databases --token YOUR_AUTH_TOKEN
```

{{% /code-placeholders %}}

{{% /tab-content %}}

@@ -559,37 +609,41 @@ influxdb3 show databases --token YOUR_AUTH_TOKEN

For HTTP API requests, include your token in the `Authorization` header--for example:

{{% code-placeholders "YOUR_AUTH_TOKEN" %}}

```bash
curl "http://{{< influxdb/host >}}/api/v3/configure/database" \
  --header "Authorization: Bearer YOUR_AUTH_TOKEN"
```

{{% /code-placeholders %}}

#### Learn more about tokens and permissions

- [Manage admin tokens](/influxdb3/version/admin/tokens/admin/) - Understand and
  manage operator and named admin tokens
{{% show-in "enterprise" %}}
- [Manage resource tokens](/influxdb3/version/admin/tokens/resource/) - Create,
  list, and delete resource tokens
{{% /show-in %}}
- [Authentication](/influxdb3/version/reference/internals/authentication/) -
  Understand authentication, authorizations, and permissions in {{% product-name %}}

<!-- //TODO - Authenticate with compatibility APIs -->

{{% show-in "core" %}}
|
||||
{{% page-nav
|
||||
prev="/influxdb3/version/get-started/"
|
||||
prevText="Get started"
|
||||
next="/influxdb3/version/get-started/write/"
|
||||
nextText="Write data"
|
||||
prev="/influxdb3/version/get-started/"
|
||||
prevText="Get started"
|
||||
next="/influxdb3/version/get-started/write/"
|
||||
nextText="Write data"
|
||||
%}}
|
||||
{{% /show-in %}}
|
||||
{{% show-in "enterprise" %}}
|
||||
{{% page-nav
|
||||
prev="/influxdb3/version/get-started/"
|
||||
prevText="Get started"
|
||||
next="/influxdb3/version/get-started/multi-server/"
|
||||
nextText="Create a multi-node cluster"
|
||||
prev="/influxdb3/version/get-started/"
|
||||
prevText="Get started"
|
||||
next="/influxdb3/version/get-started/multi-server/"
|
||||
nextText="Create a multi-node cluster"
|
||||
%}}
|
||||
{{% /show-in %}}
|
||||
|
|
|
|||
|
|
@@ -1,8 +1,8 @@

Use the Processing Engine in {{% product-name %}} to extend your database with custom Python code. Trigger your code on write, on a schedule, or on demand to automate workflows, transform data, and create API endpoints.

## What is the Processing Engine?

The Processing Engine is an embedded Python virtual machine that runs inside your {{% product-name %}} database. You configure *triggers* to run your Python *plugin* code in response to:

- **Data writes** - Process and transform data as it enters the database
- **Scheduled events** - Run code at defined intervals or specific times

@@ -14,7 +14,8 @@ This guide walks you through setting up the Processing Engine, creating your fir

## Before you begin

Ensure you have:

- A working {{% product-name %}} instance
- Access to the command line
- Python installed if you're writing your own plugin

@@ -30,19 +31,21 @@ Once you have all the prerequisites in place, follow these steps to implement th

- [Set up a trigger](#set-up-a-trigger)
- [Manage plugin dependencies](#manage-plugin-dependencies)
- [Plugin security](#plugin-security)
{{% show-in "enterprise" %}}
- [Distributed cluster considerations](#distributed-cluster-considerations)
{{% /show-in %}}

## Set up the Processing Engine

To activate the Processing Engine, start your {{% product-name %}} server with the `--plugin-dir` flag. This flag tells InfluxDB where to load your plugin files.

> [!Important]
>
> #### Keep the influxdb3 binary with its python directory
>
> The influxdb3 binary requires the adjacent `python/` directory to function.
> If you manually extract the tar.gz archive, keep them in the same parent directory:
>
> ```
> your-install-location/
> ├── influxdb3
@@ -51,7 +54,7 @@ To activate the Processing Engine, start your {{% product-name %}} server with t
>
> Add the parent directory to your PATH; do not move the binary out of this directory.
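>
> For example (the install location is hypothetical):
>
> ```bash
> # Make influxdb3 available on your PATH without moving it away from its python/ directory
> export PATH="$PATH:/path/to/your-install-location"
> ```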

{{% code-placeholders "NODE_ID|OBJECT_STORE_TYPE|PLUGIN_DIR" %}}

```bash
influxdb3 serve \

@@ -68,11 +71,12 @@ In the example above, replace the following:

- {{% code-placeholder-key %}}`OBJECT_STORE_TYPE`{{% /code-placeholder-key %}}: Type of object store (for example, file or s3)
- {{% code-placeholder-key %}}`PLUGIN_DIR`{{% /code-placeholder-key %}}: Absolute path to the directory where plugin files are stored. Store all plugin files in this directory or its subdirectories.

> [!Note]
>
> #### Use custom plugin repositories
>
> By default, plugins referenced with the `gh:` prefix are fetched from the official
> [influxdata/influxdb3_plugins](https://github.com/influxdata/influxdb3_plugins) repository.
> To use a custom repository, add the `--plugin-repo` flag when starting the server.
> See [Use a custom plugin repository](#option-3-use-a-custom-plugin-repository) for details.

@@ -88,7 +92,8 @@ When running {{% product-name %}} in a distributed setup, follow these steps to

3. Maintain identical plugin files across all instances where plugins run
   - Use shared storage or file synchronization tools to keep plugins consistent
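
   For example, a minimal sketch using `rsync` (the hostname and paths are hypothetical):

   ```bash
   # Push the local plugin directory to another node
   rsync -av /path/to/plugins/ node02:/path/to/plugins/
   ```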

> [!Note]
>
> #### Provide plugins to nodes that run them
>
> Configure your plugin directory on the same system as the nodes that run the triggers and plugins.

@@ -99,7 +104,7 @@ For more information about configuring distributed environments, see the [Distri

## Add a Processing Engine plugin

A plugin is a Python script that defines a specific function signature for a trigger (*trigger spec*). When the specified event occurs, InfluxDB runs the plugin.

### Choose a plugin strategy

@@ -114,13 +119,13 @@ InfluxData maintains a repository of official and community plugins that you can

Browse the [plugin library](/influxdb3/version/plugins/library/) to find examples and InfluxData official plugins for:

- **Data transformation**: Process and transform incoming data
- **Alerting**: Send notifications based on data thresholds
- **Aggregation**: Calculate statistics on time series data
- **Integration**: Connect to external services and APIs
- **System monitoring**: Track resource usage and health metrics

For community contributions, see the [influxdb3_plugins repository](https://github.com/influxdata/influxdb3_plugins) on GitHub.

#### Add example plugins

@@ -193,17 +198,17 @@ influxdb3 create trigger \

The `--plugin-repo` option accepts any HTTP/HTTPS URL that serves raw plugin files.
See the [plugin-repo configuration option](/influxdb3/version/reference/config-options/#plugin-repo) for more details.

Plugins have various functions such as:

- Receive plugin-specific arguments (such as written data, call time, or an HTTP request)
- Access keyword arguments (as `args`) passed from *trigger arguments* configurations
- Access the `influxdb3_local` shared API to write data, query data, and manage state between executions

For more information about available functions, arguments, and how plugins interact with InfluxDB, see how to [Extend plugins](/influxdb3/version/extend-plugin/).

### Create a custom plugin

To build custom functionality, you can create your own Processing Engine plugin.

#### Prerequisites

@@ -234,11 +239,13 @@ Choose a plugin type based on your automation goals:

Plugins now support both single-file and multifile architectures:

**Single-file plugins:**

- Create a `.py` file in your plugins directory
- Add the appropriate function signature based on your chosen plugin type
- Write your processing logic inside the function

**Multifile plugins:**

- Create a directory in your plugins directory
- Add an `__init__.py` file as the entry point (required)
- Organize supporting modules in additional `.py` files
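
For example, a minimal sketch of a multifile plugin skeleton (names and paths are hypothetical):

```bash
# Create a multifile plugin inside the server's plugin directory
mkdir -p /path/to/plugins/my_plugin
touch /path/to/plugins/my_plugin/__init__.py   # required entry point
touch /path/to/plugins/my_plugin/helpers.py    # supporting module
```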

@@ -382,12 +389,14 @@ influxdb3 create trigger \
  complex_trigger
```

> [!Important]
>
> #### Admin privileges required
>
> Plugin uploads require an admin token. This security measure prevents unauthorized code execution on the server.
**When to use plugin upload:**

- Local plugin development and testing
- Deploying plugins without SSH access to the server
- Rapid iteration on plugin code

@@ -416,6 +425,7 @@ influxdb3 update trigger \
```

The update operation:

- Replaces plugin files immediately
- Preserves trigger configuration (spec, schedule, arguments)
- Requires an admin token for security

@@ -449,6 +459,7 @@ influxdb3 query \
```

**Available columns:**

- `plugin_name` (String): Trigger name
- `file_name` (String): Plugin file name
- `file_path` (String): Full server path
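
For example, a minimal sketch of selecting these columns with SQL (the database name is a placeholder, and the exact query shape may differ from the elided example above):

```bash
influxdb3 query \
  --database DATABASE_NAME \
  "SELECT plugin_name, file_name, file_path FROM system.plugin_files"
```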

@@ -478,17 +489,17 @@ For more information, see the [`influxdb3 show plugins` reference](/influxdb3/ve

### Understand trigger types

| Plugin Type  | Trigger Specification                     | When Plugin Runs                |
| ------------ | ----------------------------------------- | ------------------------------- |
| Data write   | `table:<TABLE_NAME>` or `all_tables`      | When data is written to tables  |
| Scheduled    | `every:<DURATION>` or `cron:<EXPRESSION>` | At specified time intervals     |
| HTTP request | `request:<REQUEST_PATH>`                  | When HTTP requests are received |

### Use the create trigger command

Use the `influxdb3 create trigger` command with the appropriate trigger specification:

{{% code-placeholders "SPECIFICATION|PLUGIN_FILE|DATABASE_NAME|TRIGGER_NAME" %}}

```bash
influxdb3 create trigger \

@@ -496,7 +507,7 @@ influxdb3 create trigger \
  --plugin-filename PLUGIN_FILE \
  --database DATABASE_NAME \
  TRIGGER_NAME
```

{{% /code-placeholders %}}

@@ -507,14 +518,14 @@ In the example above, replace the following:

- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: Name of the database
- {{% code-placeholder-key %}}`TRIGGER_NAME`{{% /code-placeholder-key %}}: Name of the new trigger

> [!Note]
> When specifying a local plugin file, the `--plugin-filename` parameter
> *is relative to* the `--plugin-dir` configured for the server.
> You don't need to provide an absolute path.

### Trigger specification examples

#### Trigger on data writes

```bash
# Trigger on writes to a specific table

@@ -542,7 +553,8 @@ The plugin receives the written data and table information.

If you want to use a single trigger for all tables but exclude specific tables,
you can use trigger arguments and your plugin code to filter out unwanted tables--for example:

{{% code-placeholders "DATABASE_NAME|AUTH_TOKEN" %}}

```bash
influxdb3 create trigger \
  --database DATABASE_NAME \

@@ -552,13 +564,14 @@ influxdb3 create trigger \
  --trigger-arguments "exclude_tables=temp_data,debug_info,system_logs" \
  data_processor
```

{{% /code-placeholders %}}

Replace the following:

- {{% code-placeholder-key %}}`DATABASE_NAME`{{% /code-placeholder-key %}}: the name of the database
- {{% code-placeholder-key %}}`AUTH_TOKEN`{{% /code-placeholder-key %}}: your {{% token-link "database" %}}{{% show-in "enterprise" %}} with write permissions on the specified database{{% /show-in %}}

Then, in your plugin:

@@ -584,7 +597,7 @@ def on_write(self, database, table_name, batch):

triggers instead of filtering within plugin code.
See HTTP API [Processing engine endpoints](/influxdb3/version/api/v3/#tag/Processing-engine) for managing triggers.

#### Trigger on a schedule

```bash
# Run every 5 minutes

@@ -706,12 +719,10 @@ influxdb3 create trigger \

## Manage plugin dependencies

Use the `influxdb3 install package` command to add third-party libraries (like `pandas`, `requests`, or `influxdb3-python`) to your plugin environment.
This installs packages into the Processing Engine's embedded Python environment to ensure compatibility with your InfluxDB instance.
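
For example, a minimal sketch of installing one package on a local (non-Docker) instance, assuming the package name is passed as the final argument:

```bash
influxdb3 install package pandas
```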

{{% code-placeholders "CONTAINER_NAME|PACKAGE_NAME" %}}

{{< code-tabs-wrapper >}}

@@ -746,13 +757,15 @@ These examples install the specified Python package (for example, pandas) into t

- Use the CLI command when running InfluxDB directly on your system.
- Use the Docker variant if you're running InfluxDB in a containerized environment.

> [!Important]
>
> #### Use bundled Python for plugins
>
> When you start the server with the `--plugin-dir` option, InfluxDB 3 creates a Python virtual environment (`<PLUGIN_DIR>/venv`) for your plugins.
> If you need to create a custom virtual environment, use the Python interpreter bundled with InfluxDB 3. Don't use the system Python.
> Creating a virtual environment with the system Python (for example, using `python -m venv`) can lead to runtime errors and plugin failures.
>
> For more information, see the [processing engine README](https://github.com/influxdata/influxdb/blob/main/README_processing_engine.md).

{{% /code-placeholders %}}

@@ -774,6 +787,7 @@ influxdb3 serve \
```

When package installation is disabled:

- The Processing Engine continues to function normally for triggers
- Plugin code executes without restrictions
- Package installation commands are blocked

@@ -794,6 +808,7 @@ influxdb3 serve \
```

**Use cases for disabled package management:**

- Air-gapped environments without internet access
- Compliance requirements prohibiting runtime package installation
- Centrally managed dependency environments

@@ -854,11 +869,13 @@ This security model ensures only administrators can introduce or modify executab

### Best practices

**For development:**

- Use the `--upload` flag to deploy plugins during development
- Test plugins in non-production environments first
- Review plugin code before deployment

**For production:**

- Pre-deploy plugins to the server's plugin directory via secure file transfer
- Use custom plugin repositories for vetted, approved plugins
- Disable package installation (`--package-manager disabled`) in locked-down environments

@@ -877,20 +894,21 @@ When you deploy {{% product-name %}} in a multi-node environment, configure each

Each plugin must run on a node that supports its trigger type:

| Plugin type  | Trigger spec             | Runs on                      |
| ------------ | ------------------------ | ---------------------------- |
| Data write   | `table:` or `all_tables` | Ingester nodes               |
| Scheduled    | `every:` or `cron:`      | Any node with scheduler      |
| HTTP request | `request:`               | Nodes that serve API traffic |

For example:

- Run write-ahead log (WAL) plugins on ingester nodes.
- Run scheduled plugins on any node configured to execute them.
- Run HTTP-triggered plugins on querier nodes or any node that handles HTTP endpoints.

Place all plugin files in the `--plugin-dir` directory configured for each node.

> [!Note]
> Triggers fail if the plugin file isn't available on the node where it runs.

### Route third-party clients to querier nodes

@@ -900,7 +918,7 @@ External tools—such as Grafana, custom dashboards, or REST clients—must conn

#### Examples

- **Grafana**: When adding InfluxDB 3 as a Grafana data source, use a querier node URL, such as:
  `https://querier.example.com:8086`
- **REST clients**: Applications using `POST /api/v3/query/sql` or similar endpoints must target a querier node.

{{% /show-in %}}

@@ -1,4 +1,5 @@

> [!Note]
>
> #### InfluxDB 3 Core and Enterprise relationship
>
> InfluxDB 3 Enterprise is a superset of InfluxDB 3 Core.

@@ -14,12 +15,12 @@

- **Quick-Start Developer Experience**:
  - `influxdb3` now supports running without arguments for instant database startup, automatically generating IDs and storage flag values based on your system's setup.
- **Processing Engine**:
  - Plugins now support multiple files instead of single-file limitations.
  - When creating a trigger, you can upload a plugin directly from your local machine using the `--upload` flag.
  - Existing plugin files can now be updated at runtime without recreating triggers.
  - New `system.plugin_files` table and `show plugins` CLI command now provide visibility into all loaded plugin files.
  - Custom plugin repositories are now supported via the `--plugin-repo` CLI flag.
  - Python package installation can now be disabled with `--package-manager disabled` for locked-down environments.
  - Plugin file path validation now prevents directory traversal attacks by blocking relative and absolute path patterns.

#### Bug fixes

@@ -42,7 +43,7 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat

#### Features

- **Custom Plugin Repository**:
  - Use the `--plugin-repo` option with `influxdb3 serve` to specify custom plugin repositories. This enables loading plugins from personal repos or disabling remote repo access.

#### Bug fixes

@@ -50,9 +51,9 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat

- **Database reliability**:
  - Table index updates now complete atomically before creating new indices, preventing race conditions that could corrupt database state ([#26838](https://github.com/influxdata/influxdb/pull/26838))
  - Delete operations are now idempotent, preventing errors during object store cleanup ([#26839](https://github.com/influxdata/influxdb/pull/26839))
- **Write path**:
  - Write operations to soft-deleted databases are now rejected, preventing data loss ([#26722](https://github.com/influxdata/influxdb/pull/26722))
- **Runtime stability**:
  - Fixed a compatibility issue that could cause deadlocks for concurrent operations ([#26804](https://github.com/influxdata/influxdb/pull/26804))
- Other bug fixes and performance improvements

@@ -66,12 +67,12 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat

#### Features

- **Cache optimization**:
  - Last Value Cache (LVC) and Distinct Value Cache (DVC) now populate on creation and only on query nodes, reducing resource usage on ingest nodes.

#### Bug fixes

- **Object store reliability**:
  - Object store operations now use retryable mechanisms with better error handling

#### Operational improvements

@@ -79,7 +80,7 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat

- **Compaction optimizations**:
  - Compaction producer now waits 10 seconds before starting cycles, reducing resource contention during startup
  - Enhanced scheduling algorithms distribute compaction work more efficiently across available resources
- **System tables**:
  - System tables now provide consistent data across different node modes (ingest, query, compact), enabling better monitoring in multi-node deployments

## v3.4.2 {date="2025-09-11"}

@@ -123,8 +124,8 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat

### Core

#### Bug Fixes

- Upgrading from 3.3.0 to 3.4.x no longer causes possible catalog migration issues ([#26756](https://github.com/influxdata/influxdb/pull/26756))

## v3.4.0 {date="2025-08-27"}

@@ -138,21 +139,22 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat

  ([#26734](https://github.com/influxdata/influxdb/pull/26734))
- **Azure Endpoint**:
  - Use the `--azure-endpoint` option with `influxdb3 serve` to specify the Azure Blob Storage endpoint for object store connections. ([#26687](https://github.com/influxdata/influxdb/pull/26687))
- **No_Sync via CLI**:
  - Use the `--no-sync` option with `influxdb3 write` to skip waiting for WAL persistence on write and immediately return a response to the write request. ([#26703](https://github.com/influxdata/influxdb/pull/26703))

#### Bug Fixes

- Validate tag and field names when creating tables ([#26641](https://github.com/influxdata/influxdb/pull/26641))
- Using GROUP BY twice on the same column no longer causes incorrect data ([#26732](https://github.com/influxdata/influxdb/pull/26732))

#### Security & Misc

- Reduce verbosity of the TableIndexCache log. ([#26709](https://github.com/influxdata/influxdb/pull/26709))
- WAL replay concurrency limit defaults to the number of CPU cores, preventing possible OOMs. ([#26715](https://github.com/influxdata/influxdb/pull/26715))
- Remove unsafe signal_handler code. ([#26685](https://github.com/influxdata/influxdb/pull/26685))
- Upgrade Python version to 3.13.7-20250818. ([#26686](https://github.com/influxdata/influxdb/pull/26686), [#26700](https://github.com/influxdata/influxdb/pull/26700))
- Tags with `/` in the name no longer break the primary key.

### Enterprise

All Core updates are included in Enterprise. Additional Enterprise-specific features and fixes:

|
|||
#### Features
|
||||
|
||||
- **Token Provisioning**:
|
||||
- Generate _resource_ and _admin_ tokens offline and use them when starting the database.
|
||||
- Generate *resource* and *admin* tokens offline and use them when starting the database.
|
||||
|
||||
- Select a home or trial license without using an interactive terminal.
|
||||
Use `--license-type` [home | trial | commercial] option to the `influxdb3 serve` command to automate the selection of the license type.
|
||||
Use `--license-type` \[home | trial | commercial] option to the `influxdb3 serve` command to automate the selection of the license type.
|
||||
|
||||
#### Bug Fixes
|
||||
|
||||
- Don't initialize the Processing Engine when the specified `--mode` does not require it.
|
||||
- Don't panic when `INFLUXDB3_PLUGIN_DIR` is set in containers without the Processing Engine enabled.
|
||||
|
||||
|
||||
|
||||
## v3.3.0 {date="2025-07-29"}
|
||||
|
||||
### Core
|
||||
|
|
@ -257,7 +257,7 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat
|
|||
|
||||
## v3.2.0 {date="2025-06-25"}
|
||||
|
||||
**Core**: revision 1ca3168bee
|
||||
**Core**: revision 1ca3168bee\
|
||||
**Enterprise**: revision 1ca3168bee
|
||||
|
||||
### Core
|
||||
|
|
@ -290,7 +290,7 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat
|
|||
|
||||
#### Features
|
||||
|
||||
- **License management improvements**:
|
||||
- **License management improvements**:
|
||||
- New `influxdb3 show license` command to display current license information
|
||||
- **Table-level retention period support**: Add retention period support for individual tables in addition to database-level retention, providing granular data lifecycle management
|
||||
- New CLI commands: `create table --retention-period` and `update table --retention-period`
|
||||
|
|
@ -307,6 +307,7 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat
|
|||
- **License handling**: Trim whitespace from license file contents after reading to prevent validation issues
|
||||
|
||||
## v3.1.0 {date="2025-05-29"}
|
||||
|
||||
**Core**: revision 482dd8aac580c04f37e8713a8fffae89ae8bc264
|
||||
|
||||
**Enterprise**: revision 2cb23cf32b67f9f0d0803e31b356813a1a151b00
|
||||
|
|
@@ -314,6 +315,7 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat

### Core

#### Token and Security Updates

- Named admin tokens can now be created, with configurable expirations
- `health`, `ping`, and `metrics` endpoints can now be opted out of authorization
- `Basic $TOKEN` is now supported for all APIs

@@ -321,6 +323,7 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat

- Additional info available when starting InfluxDB using `--without-auth`

#### Additional Updates

- New catalog metrics available for count operations
- New object store metrics available for transfer latencies and transfer sizes
- New query duration metrics available for Last Value caches

@@ -328,6 +331,7 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat

- Other performance improvements

#### Fixes

- New tags are now backfilled with NULL instead of empty strings
- Bitcode deserialization error fixed
- Series key metadata not persisting to Parquet is now fixed

|
|||
### Enterprise
|
||||
|
||||
#### Token and Security Updates
|
||||
|
||||
- Resource tokens now use resource names in `show tokens`
|
||||
- Tokens can now be granted `CREATE` permission for creating databases
|
||||
|
||||
#### Additional Updates
|
||||
|
||||
- Last value caches reload on restart
|
||||
- Distinct value caches reload on restart
|
||||
- Other performance improvements
|
||||
- Replaces remaining "INFLUXDB_IOX" Dockerfile environment variables with the following:
|
||||
- Replaces remaining "INFLUXDB\_IOX" Dockerfile environment variables with the following:
|
||||
- `ENV INFLUXDB3_OBJECT_STORE=file`
|
||||
- `ENV INFLUXDB3_DB_DIR=/var/lib/influxdb3`
|
||||
|
||||
#### Fixes
|
||||
|
||||
- Improvements and fixes for license validations
|
||||
- False positive fixed for catalog error on shutdown
|
||||
- UX improvements for error and onboarding messages
|
||||
- Other general fixes and corrections
|
||||
|
||||
## v3.0.3 {date="2025-05-16"}
|
||||
|
||||
**Core**: revision 384c457ef5f0d5ca4981b22855e411d8cac2688e
|
||||
|
||||
**Enterprise**: revision 34f4d28295132b9efafebf654e9f6decd1a13caf
|
||||
|
|
@ -362,20 +370,19 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat
|
|||
|
||||
#### Fixes
|
||||
|
||||
- Prevent operator token, `_admin`, from being deleted.
|
||||
- Prevent operator token, `_admin`, from being deleted.
|
||||
|
||||
### Enterprise
|
||||
|
||||
#### Fixes
|
||||
|
||||
- Fix object store info digest that is output during onboarding.
|
||||
- Fix object store info digest that is output during onboarding.
|
||||
- Fix issues with false positive catalog error on shutdown.
|
||||
- Fix licensing validation issues.
|
||||
- Other fixes and performance improvements.
|
||||
|
||||
|
||||
|
||||
## v3.0.2 {date="2025-05-01"}
|
||||
|
||||
**Core**: revision d80d6cd60049c7b266794a48c97b1b6438ac5da9
|
||||
|
||||
**Enterprise**: revision e9d7e03c2290d0c3e44d26e3eeb60aaf12099f29
|
||||
|
|
@ -384,39 +391,40 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat
|
|||
|
||||
#### Security updates
|
||||
|
||||
- Generate testing TLS certificates on the fly.
|
||||
- Set the TLS CA via the INFLUXDB3_TLS_CA environment variable.
|
||||
- Enforce a minimum TLS version for enhanced security.
|
||||
- Allow CORS requests from browsers.
|
||||
- Generate testing TLS certificates on the fly.
|
||||
- Set the TLS CA via the INFLUXDB3\_TLS\_CA environment variable.
|
||||
- Enforce a minimum TLS version for enhanced security.
|
||||
- Allow CORS requests from browsers.
|
||||
|
||||
#### General updates
|
||||
|
||||
- Support the `--format json` option in the token creation output.
|
||||
- Remove the Last Values Cache size limitation to improve performance and flexibility.
|
||||
- Incorporate additional performance improvements.
|
||||
- Support the `--format json` option in the token creation output.
|
||||
- Remove the Last Values Cache size limitation to improve performance and flexibility.
|
||||
- Incorporate additional performance improvements.
|
||||
|
||||
#### Fixes
|
||||
|
||||
- Fix a counting bug in the distinct cache.
|
||||
- Fix how the distinct cache handles rows with null values.
|
||||
- Fix handling of `group by` tag columns that use escape quotes.
|
||||
- Sort the IOx table schema consistently in the `SHOW TABLES` command.
|
||||
- Fix a counting bug in the distinct cache.
|
||||
- Fix how the distinct cache handles rows with null values.
|
||||
- Fix handling of `group by` tag columns that use escape quotes.
|
||||
- Sort the IOx table schema consistently in the `SHOW TABLES` command.
|
||||
|
||||
### Enterprise
|
||||
|
||||
#### Updates
|
||||
|
||||
- Introduce a command and system table to list cluster nodes.
|
||||
- Support multiple custom permission argument matches.
|
||||
- Improve overall performance.
|
||||
- Introduce a command and system table to list cluster nodes.
|
||||
- Support multiple custom permission argument matches.
|
||||
- Improve overall performance.
|
||||
|
||||
#### Fixes
|
||||
|
||||
- Initialize the object store only once.
|
||||
- Prevent the Home license server from crashing on restart.
|
||||
- Enforce the `--num-cores` thread allocation limit.
|
||||
- Initialize the object store only once.
|
||||
- Prevent the Home license server from crashing on restart.
|
||||
- Enforce the `--num-cores` thread allocation limit.
|
||||
|
||||
## v3.0.1 {date="2025-04-16"}
|
||||
|
||||
**Core**: revision d7c071e0c4959beebc7a1a433daf8916abd51214
|
||||
|
||||
**Enterprise**: revision 96e4aad870b44709e149160d523b4319ea91b54c
|
||||
|
|
@@ -424,15 +432,18 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat

### Core

#### Updates

- TLS CA can now be set with an environment variable: `INFLUXDB3_TLS_CA`
- Other general performance improvements

#### Fixes

- The `--tags` argument is now optional for creating a table, and additionally now requires at least one tag *if* specified

### Enterprise

#### Updates

- Catalog limits for databases, tables, and columns are now configurable using `influxdb3 serve` options:
  - `--num-database-limit`
  - `--num-table-limit`

@@ -441,8 +452,8 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat

- Other general performance improvements

#### Fixes

- **Home** license thread count log errors

## v3.0.0 {date="2025-04-14"}

@ -471,50 +482,59 @@ All Core updates are included in Enterprise. Additional Enterprise-specific feat
|
|||
|
||||
- You can now use Commercial, Trial, and At-Home licenses.
|
||||
|
||||
|
||||
## v3.0.0-0.beta.3 {date="2025-04-01"}
|
||||
|
||||
**Core**: revision f881c5844bec93a85242f26357a1ef3ebf419dd3
|
||||
|
||||
**Enterprise**: revision 6bef9e700a59c0973b0cefdc6baf11583933e262
|
||||
|
||||
### Core
|
||||
|
||||
#### General Improvements
|
||||
|
||||
- InfluxDB 3 now supports graceful shutdowns when sending the interrupt signal to the service.
|
||||
|
||||
#### Bug fixes
|
||||
|
||||
- Empty batches in JSON format results are now handled properly
|
||||
- The Processing Engine now properly extracts data from DictionaryArrays
|
||||
|
||||
### Enterprise
|
||||
|
||||
##### Multi-node improvements
|
||||
|
||||
- Query nodes now automatically detect new ingest nodes
|
||||
|
||||
#### Bug fixes
|
||||
- Several fixes for compaction planning and processing
|
||||
|
||||
- Several fixes for compaction planning and processing
|
||||
- The Processing Engine now properly extracts data from DictionaryArrays
|
||||
|
||||
|
||||
## v3.0.0-0.beta.2 {date="2025-03-24"}
|
||||
|
||||
**Core**: revision 033e1176d8c322b763b4aefb24686121b1b24f7c
|
||||
|
||||
**Enterprise**: revision e530fcd498c593cffec2b56d4f5194afc717d898
|
||||
|
||||
This update brings several backend performance improvements to both Core and Enterprise in preparation for additional new features over the next several weeks.
|
||||
|
||||
This update brings several backend performance improvements to both Core and Enterprise in preparation for additional new features over the next several weeks.
|
||||
|
||||
## v3.0.0-0.beta.1 {date="2025-03-17"}
|
||||
|
||||
### Core
|
||||
|
||||
#### Features
|
||||
|
||||
##### Query and storage enhancements
|
||||
|
||||
- New ability to stream response data for CSV and JSON queries, similar to how JSONL streaming works
|
||||
- Parquet files are now cached on the query path, improving performance
|
||||
- Query buffer is incrementally cleared when snapshotting, lowering memory spikes
|
||||
|
||||
##### Processing engine improvements
|
||||
|
||||
- New Trigger Types:
|
||||
- _Scheduled_: Run Python plugins on custom, time-defined basis
|
||||
- _Request_: Call Python plugins via HTTP requests
|
||||
- *Scheduled*: Run Python plugins on custom, time-defined basis
|
||||
- *Request*: Call Python plugins via HTTP requests
|
||||
- New in-memory cache for storing data temporarily; cached data can be stored for a single trigger or across all triggers
|
||||
- Integration with virtual environments and install packages:
|
||||
- Specify Python virtual environment via CLI or `VIRTUAL_ENV` variable
|
||||
|
|
@ -524,11 +544,13 @@ This update brings several backend performance improvements to both Core and Ent
|
|||
- Write to logs from within the Processing Engine
|
||||
|
||||
##### Database and CLI improvements
|
||||
|
||||
- You can now specify the precision on your timestamps for writes using the `--precision` flag. Includes nano/micro/milli/seconds (ns/us/ms/s)
|
||||
- Added a new `show` system subcommand to display system tables with different options via SQL (default limit: 100)
|
||||
- Clearer table creation error messages

##### Bug fixes

- If a database was created and the service was killed before any data was written, the database would not be retained
- A last cache with specific "value" columns could not be queried
- Running CTRL-C did not stop an InfluxDB process when a Python trigger was running

@ -539,14 +561,15 @@ This update brings several backend performance improvements to both Core and Ent

For Core and Enterprise, there are parameter changes for simplicity:

| Old Parameter | New Parameter |
|---------------|---------------|
| `--writer-id`<br>`--host-id` | `--node-id` |

| Old Parameter                | New Parameter |
| ---------------------------- | ------------- |
| `--writer-id`<br>`--host-id` | `--node-id` |
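
In practice the rename looks like this hedged sketch of a `serve` invocation (other flags elided; values are placeholders):

```bash
# Before (deprecated):
influxdb3 serve --writer-id node0 --object-store file --data-dir ~/.influxdb3/data
# After:
influxdb3 serve --node-id node0 --object-store file --data-dir ~/.influxdb3/data
```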

### Enterprise features

#### Cluster management

- Nodes are now associated with _clusters_, simplifying compaction, read replication, and processing
- Nodes are now associated with *clusters*, simplifying compaction, read replication, and processing
- Node specs are now available for simpler management of cache creation

#### Mode types

@ -557,9 +580,9 @@ For Core and Enterprise, there are parameter changes for simplicity:

For Enterprise, additional parameters for the `serve` command have been consolidated for simplicity:

| Old Parameter | New Parameter |
|---------------|---------------|
| `--read-from-node-ids`<br>`--compact-from-node-ids` | `--cluster-id` |
| `--run-compactions`<br>`--mode=compactor` | `--mode=compact`<br>`--mode=compact` |

| Old Parameter                                       | New Parameter                        |
| --------------------------------------------------- | ------------------------------------ |
| `--read-from-node-ids`<br>`--compact-from-node-ids` | `--cluster-id` |
| `--run-compactions`<br>`--mode=compactor` | `--mode=compact`<br>`--mode=compact` |

In addition to the above changes, `--cluster-id` is now a required parameter for all new instances.
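
Putting the consolidated parameters together, a new Enterprise node might be started like this (a sketch only; the exact set of modes and flags for your deployment may differ):

```bash
# --cluster-id is now required; --mode replaces the older compaction
# flags. Node ID, cluster ID, and mode list are placeholders.
influxdb3 serve --node-id node0 --cluster-id cluster0 \
  --mode ingest,query,compact \
  --object-store file --data-dir ~/.influxdb3/data
```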

@ -212,19 +212,6 @@ influxdb_cloud:
    - How is Cloud 2 different from Cloud Serverless?
    - How do I manage auth tokens in InfluxDB Cloud 2?

explorer:
  name: InfluxDB 3 Explorer
  namespace: explorer
  menu_category: other
  list_order: 4
  versions: [v1]
  latest: explorer
  latest_patch: 1.1.0
  ai_sample_questions:
    - How do I use InfluxDB 3 Explorer to visualize data?
    - How do I create a dashboard in InfluxDB 3 Explorer?
    - How do I query data using InfluxDB 3 Explorer?

telegraf:
  name: Telegraf
  namespace: telegraf

@ -4,6 +4,9 @@
  "version": "1.0.0",
  "description": "InfluxDB documentation",
  "license": "MIT",
  "bin": {
    "docs": "scripts/docs-cli.js"
  },
  "resolutions": {
    "serialize-javascript": "^6.0.2"
  },

@ -40,6 +43,7 @@
    "vanillajs-datepicker": "^1.3.4"
  },
  "scripts": {
    "postinstall": "node scripts/setup-local-bin.js",
    "docs:create": "node scripts/docs-create.js",
    "docs:edit": "node scripts/docs-edit.js",
    "docs:add-placeholders": "node scripts/add-placeholders.js",

@ -78,5 +82,8 @@
    "test": "test"
  },
  "keywords": [],
  "author": ""
  "author": "",
  "optionalDependencies": {
    "copilot": "^0.0.2"
  }
}
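
With the new `bin` entry and the `postinstall` hook, the `docs` CLI becomes available locally after an install (a quick check; assumes dependencies install cleanly):

```bash
yarn install     # postinstall runs scripts/setup-local-bin.js
npx docs --help  # resolves to scripts/docs-cli.js via the "docs" bin entry
```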

@ -1,108 +0,0 @@
# Add Placeholders Script

Automatically adds placeholder syntax to code blocks and placeholder descriptions in markdown files.

## What it does

This script finds UPPERCASE placeholders in code blocks and:

1. **Adds `{ placeholders="PATTERN1|PATTERN2" }` attribute** to code block fences
2. **Wraps placeholder descriptions** with `{{% code-placeholder-key %}}` shortcodes

## Usage

### Direct usage

```bash
# Process a single file
node scripts/add-placeholders.js <file.md>

# Dry run to preview changes
node scripts/add-placeholders.js <file.md> --dry

# Example
node scripts/add-placeholders.js content/influxdb3/enterprise/admin/upgrade.md
```

### Using npm script

```bash
# Process a file
yarn docs:add-placeholders <file.md>

# Dry run
yarn docs:add-placeholders <file.md> --dry
```

## Example transformations

### Before

````markdown
```bash
influxdb3 query \
  --database SYSTEM_DATABASE \
  --token ADMIN_TOKEN \
  "SELECT * FROM system.version"
```

Replace the following:

- **`SYSTEM_DATABASE`**: The name of your system database
- **`ADMIN_TOKEN`**: An admin token with read permissions
````

### After

````markdown
```bash { placeholders="ADMIN_TOKEN|SYSTEM_DATABASE" }
influxdb3 query \
  --database SYSTEM_DATABASE \
  --token ADMIN_TOKEN \
  "SELECT * FROM system.version"
```

Replace the following:

- {{% code-placeholder-key %}}`SYSTEM_DATABASE`{{% /code-placeholder-key %}}: The name of your system database
- {{% code-placeholder-key %}}`ADMIN_TOKEN`{{% /code-placeholder-key %}}: An admin token with read permissions
````

## How it works

### Placeholder detection

The script automatically detects UPPERCASE placeholders in code blocks using these rules:

- **Pattern**: Matches words with 2+ characters, all uppercase, can include underscores
- **Excludes common words**: HTTP verbs (GET, POST), protocols (HTTP, HTTPS), SQL keywords (SELECT, FROM), etc.
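
For intuition, the detection rule is roughly equivalent to this shell one-liner (an approximation; the script's actual exclusion list is longer than shown):

```bash
# Candidate placeholders: 2+ uppercase characters (underscores allowed),
# minus a few common words the script also excludes.
grep -oE '\b[A-Z][A-Z_]+\b' file.md \
  | grep -vwE 'GET|POST|HTTP|HTTPS|SELECT|FROM' \
  | sort -u
```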

### Code block processing

1. Finds all code blocks (including indented ones)
2. Extracts UPPERCASE placeholders
3. Adds `{ placeholders="..." }` attribute to the fence line
4. Preserves indentation and language identifiers

### Description wrapping

1. Detects "Replace the following:" sections
2. Wraps placeholder descriptions matching `- **`PLACEHOLDER`**: description`
3. Preserves indentation and formatting
4. Skips already-wrapped descriptions

## Options

- `--dry` or `-d`: Preview changes without modifying files

## Notes

- The script is idempotent: running it multiple times on the same file won't duplicate syntax
- Preserves existing `placeholders` attributes in code blocks
- Works with both indented and non-indented code blocks
- Handles multiple "Replace the following:" sections in a single file

## Related documentation

- [DOCS-SHORTCODES.md](../DOCS-SHORTCODES.md) - Complete shortcode reference
- [DOCS-CONTRIBUTING.md](../DOCS-CONTRIBUTING.md) - Placeholder conventions and style guidelines

@ -16,7 +16,7 @@ import { readFileSync, writeFileSync } from 'fs';
import { parseArgs } from 'node:util';

// Parse command-line arguments
const { positionals } = parseArgs({
const { positionals, values } = parseArgs({
  allowPositionals: true,
  options: {
    dry: {

@ -24,19 +24,47 @@ const { positionals } = parseArgs({
      short: 'd',
      default: false,
    },
    help: {
      type: 'boolean',
      short: 'h',
      default: false,
    },
  },
});

// Show help if requested
if (values.help) {
  console.log(`
Add placeholder syntax to code blocks

Usage:
  docs placeholders <file.md> [options]

Options:
  --dry, -d   Preview changes without modifying files
  --help, -h  Show this help message

Examples:
  docs placeholders content/influxdb3/enterprise/admin/upgrade.md
  docs placeholders content/influxdb3/core/admin/databases/create.md --dry

What it does:
  1. Finds UPPERCASE placeholders in code blocks
  2. Adds { placeholders="PATTERN1|PATTERN2" } attribute to code fences
  3. Wraps placeholder descriptions with {{% code-placeholder-key %}} shortcodes
`);
  process.exit(0);
}

if (positionals.length === 0) {
  console.error('Usage: node scripts/add-placeholders.js <file.md> [--dry]');
  console.error(
    'Example: node scripts/add-placeholders.js content/influxdb3/enterprise/admin/upgrade.md'
  );
  console.error('Error: Missing file path argument');
  console.error('Usage: docs placeholders <file.md> [--dry]');
  console.error('Run "docs placeholders --help" for more information');
  process.exit(1);
}

const filePath = positionals[0];
const isDryRun = process.argv.includes('--dry') || process.argv.includes('-d');
const isDryRun = values.dry;

/**
 * Extract UPPERCASE placeholders from a code block

@ -0,0 +1,82 @@
#!/usr/bin/env node

/**
 * Main CLI entry point for docs tools
 * Supports subcommands: create, edit, placeholders
 *
 * Usage:
 *   docs create <draft-path> [options]
 *   docs edit <url> [options]
 *   docs placeholders <file.md> [options]
 */

import { fileURLToPath } from 'url';
import { dirname, join } from 'path';
import { spawn } from 'child_process';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

// Get subcommand and remaining arguments
const subcommand = process.argv[2];
const args = process.argv.slice(3);

// Map subcommands to script files
const subcommands = {
  create: 'docs-create.js',
  edit: 'docs-edit.js',
  placeholders: 'add-placeholders.js',
};

/**
 * Print usage information
 */
function printUsage() {
  console.log(`
Usage: docs <command> [options]

Commands:
  create <draft-path>       Create new documentation from draft
  edit <url>                Edit existing documentation
  placeholders <file.md>    Add placeholder syntax to code blocks

Examples:
  docs create drafts/new-feature.md --products influxdb3_core
  docs edit https://docs.influxdata.com/influxdb3/core/admin/
  docs placeholders content/influxdb3/core/admin/upgrade.md

For command-specific help:
  docs create --help
  docs edit --help
  docs placeholders --help
`);
}

// Handle no subcommand or help
if (!subcommand || subcommand === '--help' || subcommand === '-h') {
  printUsage();
  process.exit(subcommand ? 0 : 1);
}

// Validate subcommand
if (!subcommands[subcommand]) {
  console.error(`Error: Unknown command '${subcommand}'`);
  console.error(`Run 'docs --help' for usage information`);
  process.exit(1);
}

// Execute the appropriate script
const scriptPath = join(__dirname, subcommands[subcommand]);
const child = spawn('node', [scriptPath, ...args], {
  stdio: 'inherit',
  env: process.env,
});

child.on('exit', (code) => {
  process.exit(code || 0);
});

child.on('error', (err) => {
  console.error(`Failed to execute ${subcommand}:`, err.message);
  process.exit(1);
});

@ -23,7 +23,12 @@ import {
  loadProducts,
  analyzeStructure,
} from './lib/content-scaffolding.js';
import { writeJson, readJson, fileExists } from './lib/file-operations.js';
import {
  writeJson,
  readJson,
  fileExists,
  readDraft,
} from './lib/file-operations.js';
import { parseMultipleURLs } from './lib/url-parser.js';

const __filename = fileURLToPath(import.meta.url);

@ -36,6 +41,7 @@ const REPO_ROOT = join(__dirname, '..');
const TMP_DIR = join(REPO_ROOT, '.tmp');
const CONTEXT_FILE = join(TMP_DIR, 'scaffold-context.json');
const PROPOSAL_FILE = join(TMP_DIR, 'scaffold-proposal.yml');
const PROMPT_FILE = join(TMP_DIR, 'scaffold-prompt.txt');

// Colors for console output
const colors = {

@ -49,25 +55,53 @@ const colors = {
};

/**
 * Print colored output
 * Print colored output to stderr (so it doesn't interfere with piped output)
 */
function log(message, color = 'reset') {
  console.log(`${colors[color]}${message}${colors.reset}`);
  // Write to stderr so logs don't interfere with stdout (prompt path/text)
  console.error(`${colors[color]}${message}${colors.reset}`);
}

/**
 * Check if running in Claude Code environment
 * @returns {boolean} True if Task function is available (Claude Code)
 */
function isClaudeCode() {
  return typeof Task !== 'undefined';
}

/**
 * Output prompt for use with external tools
 * @param {string} prompt - The generated prompt text
 * @param {boolean} printPrompt - If true, force print to stdout
 */
function outputPromptForExternalUse(prompt, printPrompt = false) {
  // Auto-detect if stdout is being piped
  const isBeingPiped = !process.stdout.isTTY;

  // Print prompt text if explicitly requested OR if being piped
  const shouldPrintText = printPrompt || isBeingPiped;

  if (shouldPrintText) {
    // Output prompt text to stdout
    console.log(prompt);
  } else {
    // Write prompt to file and output file path
    writeFileSync(PROMPT_FILE, prompt, 'utf8');
    console.log(PROMPT_FILE);
  }
  process.exit(0);
}

/**
 * Prompt user for input (works in TTY and non-TTY environments)
 */
async function promptUser(question) {
  // For non-TTY environments, return empty string
  if (!process.stdin.isTTY) {
    return '';
  }

  const readline = await import('readline');
  const rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout,
    terminal: process.stdin.isTTY !== undefined ? process.stdin.isTTY : true,
  });

  return new Promise((resolve) => {

@ -91,30 +125,28 @@ function divider() {
function parseArguments() {
  const { values, positionals } = parseArgs({
    options: {
      draft: { type: 'string' },
      from: { type: 'string' },
      'from-draft': { type: 'string' },
      url: { type: 'string', multiple: true },
      urls: { type: 'string' },
      products: { type: 'string' },
      ai: { type: 'string', default: 'claude' },
      execute: { type: 'boolean', default: false },
      'context-only': { type: 'boolean', default: false },
      'print-prompt': { type: 'boolean', default: false },
      proposal: { type: 'string' },
      'dry-run': { type: 'boolean', default: false },
      yes: { type: 'boolean', default: false },
      help: { type: 'boolean', default: false },
      'follow-external': { type: 'boolean', default: false },
    },
    allowPositionals: true,
  });

  // First positional argument is treated as draft path
  if (positionals.length > 0 && !values.draft && !values.from) {
  if (positionals.length > 0 && !values['from-draft']) {
    values.draft = positionals[0];
  }

  // --from is an alias for --draft
  if (values.from && !values.draft) {
    values.draft = values.from;
  } else if (values['from-draft']) {
    values.draft = values['from-draft'];
  }

  // Normalize URLs into array

@ -141,63 +173,101 @@ function printUsage() {
${colors.bright}Documentation Content Scaffolding${colors.reset}

${colors.bright}Usage:${colors.reset}
  yarn docs:create <draft-path>                Create from draft
  yarn docs:create --url <url> --draft <path>  Create at URL with draft content
  docs create <draft-path>                     Create from draft
  docs create --url <url> --from-draft <path>  Create at URL with draft

  # Or use with yarn:
  yarn docs:create <draft-path>
  yarn docs:create --url <url> --from-draft <path>

${colors.bright}Options:${colors.reset}
  <draft-path>        Path to draft markdown file (positional argument)
  --draft <path>      Path to draft markdown file
  --from <path>       Alias for --draft
  --url <url>         Documentation URL for new content location
  --context-only      Stop after context preparation
                      (for non-Claude tools)
  --proposal <path>   Import and execute proposal from JSON file
  --dry-run           Show what would be created without creating
  --yes               Skip confirmation prompt
  --help              Show this help message
  <draft-path>        Path to draft markdown file (positional argument)
  --from-draft <path> Path to draft markdown file
  --url <url>         Documentation URL for new content location
  --products <list>   Comma-separated product keys (required for stdin)
                      Examples: influxdb3_core, influxdb3_enterprise
  --follow-external   Include external (non-docs.influxdata.com) URLs
                      when extracting links from draft. Without this flag,
                      only local documentation links are followed.
  --context-only      Stop after context preparation
                      (for non-Claude tools)
  --print-prompt      Force prompt text output (auto-enabled when piping)
  --proposal <path>   Import and execute proposal from JSON file
  --dry-run           Show what would be created without creating
  --yes               Skip confirmation prompt
  --help              Show this help message

${colors.bright}Workflow (Create from draft):${colors.reset}
${colors.bright}Stdin Support:${colors.reset}
  When piping content from stdin, you must specify target products:

  cat draft.md | docs create --products influxdb3_core
  echo "# Content" | docs create --products influxdb3_core,influxdb3_enterprise

${colors.bright}Link Following:${colors.reset}
  By default, the script extracts links from your draft and prompts you
  to select which ones to include as context. This helps the AI:
  - Maintain consistent terminology
  - Avoid duplicating content
  - Add appropriate \`related\` frontmatter links

  Local documentation links are always available for selection.
  Use --follow-external to also include external URLs (GitHub, etc.)

${colors.bright}Workflow (Inside Claude Code):${colors.reset}
  1. Create a draft markdown file with your content
  2. Run: yarn docs:create drafts/new-feature.md
  2. Run: docs create drafts/new-feature.md
  3. Script runs all agents automatically
  4. Review and confirm to create files

${colors.bright}Workflow (Create at specific URL):${colors.reset}
${colors.bright}Workflow (Pipe to external agent):${colors.reset}
  1. Create draft: vim drafts/new-feature.md
  2. Run: yarn docs:create \\
       --url https://docs.influxdata.com/influxdb3/core/admin/new-feature/ \\
       --draft drafts/new-feature.md
  3. Script determines structure from URL and uses draft content
  4. Review and confirm to create files

${colors.bright}Workflow (Manual - for non-Claude tools):${colors.reset}
  1. Prepare context:
     yarn docs:create --context-only drafts/new-feature.md
  2. Run your AI tool with templates from scripts/templates/
  3. Save proposal to .tmp/scaffold-proposal.json
  4. Execute:
     yarn docs:create --proposal .tmp/scaffold-proposal.json
  2. Pipe to your AI tool (prompt auto-detected):
     docs create drafts/new-feature.md --products X | claude -p
     docs create drafts/new-feature.md --products X | copilot -p
  3. AI generates files based on prompt

${colors.bright}Examples:${colors.reset}
  # Create from draft (AI determines location)
  # Inside Claude Code - automatic execution
  docs create drafts/new-feature.md

  # Pipe to external AI tools - prompt auto-detected
  docs create drafts/new-feature.md --products influxdb3_core | claude -p
  docs create drafts/new-feature.md --products influxdb3_core | copilot -p

  # Pipe from stdin
  cat drafts/quick-note.md | docs create --products influxdb3_core | claude -p
  echo "# Quick note" | docs create --products influxdb3_core | copilot -p

  # Get prompt file path (when not piping)
  docs create drafts/new-feature.md  # Outputs: .tmp/scaffold-prompt.txt

  # Still works with yarn
  yarn docs:create drafts/new-feature.md

  # Create at specific URL with draft content
  yarn docs:create --url /influxdb3/core/admin/new-feature/ \\
    --draft drafts/new-feature.md
  # Include external links for context selection
  docs create --follow-external drafts/api-guide.md

  # Preview changes
  yarn docs:create --draft drafts/new-feature.md --dry-run
${colors.bright}Smart Behavior:${colors.reset}
  INSIDE Claude Code:
    → Automatically runs Task() agent to generate files

  PIPING to another tool:
    → Auto-detects piping and outputs prompt text
    → No --print-prompt flag needed

  INTERACTIVE (not piping):
    → Outputs prompt file path: .tmp/scaffold-prompt.txt
    → Use with: code .tmp/scaffold-prompt.txt

${colors.bright}Note:${colors.reset}
  To edit existing pages, use: yarn docs:edit <url>
  To edit existing pages, use: docs edit <url>
`);
}

/**
 * Phase 1a: Prepare context from URLs
 */
async function prepareURLPhase(urls, draftPath, options) {
async function prepareURLPhase(urls, draftPath, options, stdinContent = null) {
  log('\n🔍 Analyzing URLs and finding files...', 'bright');

  try {

@ -258,9 +328,18 @@ async function prepareURLPhase(urls, draftPath, options) {

    // Build context (include URL analysis)
    let context = null;
    if (draftPath) {
    let draft;

    if (stdinContent) {
      // Use stdin content
      draft = stdinContent;
      log('✓ Using draft from stdin', 'green');
      context = prepareContext(draft);
    } else if (draftPath) {
      // Use draft content if provided
      context = prepareContext(draftPath);
      draft = readDraft(draftPath);
      draft.path = draftPath;
      context = prepareContext(draft);
    } else {
      // Minimal context for editing existing pages
      const products = loadProducts();

@ -351,18 +430,83 @@

/**
 * Phase 1b: Prepare context from draft
 */
async function preparePhase(draftPath, options) {
async function preparePhase(draftPath, options, stdinContent = null) {
  log('\n🔍 Analyzing draft and repository structure...', 'bright');

  // Validate draft exists
  if (!fileExists(draftPath)) {
    log(`✗ Draft file not found: ${draftPath}`, 'red');
    process.exit(1);
  let draft;

  // Handle stdin vs file
  if (stdinContent) {
    draft = stdinContent;
    log('✓ Using draft from stdin', 'green');
  } else {
    // Validate draft exists
    if (!fileExists(draftPath)) {
      log(`✗ Draft file not found: ${draftPath}`, 'red');
      process.exit(1);
    }
    draft = readDraft(draftPath);
    draft.path = draftPath;
  }

  try {
    // Prepare context
    const context = prepareContext(draftPath);
    const context = prepareContext(draft);

    // Extract links from draft
    const { extractLinks, followLocalLinks, fetchExternalLinks } = await import(
      './lib/content-scaffolding.js'
    );

    const links = extractLinks(draft.content);

    if (links.localFiles.length > 0 || links.external.length > 0) {
      // Filter external links if flag not set
      if (!options['follow-external']) {
        links.external = [];
      }

      // Let user select which external links to follow
      // (local files are automatically included)
      const selected = await selectLinksToFollow(links);

      // Follow selected links
      const linkedContent = [];

      if (selected.selectedLocal.length > 0) {
        log('\n📄 Loading local files...', 'cyan');
        // Determine base path for resolving relative links
        const basePath = draft.path
          ? dirname(join(REPO_ROOT, draft.path))
          : REPO_ROOT;
        const localResults = followLocalLinks(selected.selectedLocal, basePath);
        linkedContent.push(...localResults);
        const successCount = localResults.filter((r) => !r.error).length;
        log(`✓ Loaded ${successCount} local file(s)`, 'green');
      }

      if (selected.selectedExternal.length > 0) {
        log('\n🌐 Fetching external URLs...', 'cyan');
        const externalResults = await fetchExternalLinks(
          selected.selectedExternal
        );
        linkedContent.push(...externalResults);
        const successCount = externalResults.filter((r) => !r.error).length;
        log(`✓ Fetched ${successCount} external page(s)`, 'green');
      }

      // Add to context
      if (linkedContent.length > 0) {
        context.linkedContent = linkedContent;

        // Show any errors
        const errors = linkedContent.filter((lc) => lc.error);
        if (errors.length > 0) {
          log('\n⚠️  Some links could not be loaded:', 'yellow');
          errors.forEach((e) => log(`  • ${e.url}: ${e.error}`, 'yellow'));
        }
      }
    }

    // Write context to temp file
    writeJson(CONTEXT_FILE, context);

@ -382,6 +526,12 @@
      `✓ Found ${context.structure.existingPaths.length} existing pages`,
      'green'
    );
    if (context.linkedContent) {
      log(
        `✓ Included ${context.linkedContent.length} linked page(s) as context`,
        'green'
      );
    }
    log(
      `✓ Prepared context → ${CONTEXT_FILE.replace(REPO_ROOT, '.')}`,
      'green'

@ -441,25 +591,69 @@ async function selectProducts(context, options) {
    }
  }

  // Sort products: detected first, then alphabetically within each group
  allProducts.sort((a, b) => {
    const aDetected = detected.includes(a);
    const bDetected = detected.includes(b);

    // Detected products first
    if (aDetected && !bDetected) return -1;
    if (!aDetected && bDetected) return 1;

    // Then alphabetically
    return a.localeCompare(b);
  });

  // Case 1: Explicit flag provided
  if (options.products) {
    const requested = options.products.split(',').map((p) => p.trim());
    const invalid = requested.filter((p) => !allProducts.includes(p));
    const requestedKeys = options.products.split(',').map((p) => p.trim());

    if (invalid.length > 0) {
    // Map product keys to display names
    const requestedNames = [];
    const invalidKeys = [];

    for (const key of requestedKeys) {
      const product = context.products[key];

      if (product) {
        // Valid product key found
        if (product.versions && product.versions.length > 1) {
          // Multi-version product: add all versions
          product.versions.forEach((version) => {
            const displayName = `${product.name} ${version}`;
            if (allProducts.includes(displayName)) {
              requestedNames.push(displayName);
            }
          });
        } else {
          // Single version product
          if (allProducts.includes(product.name)) {
            requestedNames.push(product.name);
          }
        }
      } else if (allProducts.includes(key)) {
        // It's already a display name (backwards compatibility)
        requestedNames.push(key);
      } else {
        invalidKeys.push(key);
      }
    }

    if (invalidKeys.length > 0) {
      const validKeys = Object.keys(context.products).join(', ');
      log(
        `\n✗ Invalid products: ${invalid.join(', ')}\n` +
          `Valid products: ${allProducts.join(', ')}`,
        `\n✗ Invalid product keys: ${invalidKeys.join(', ')}\n` +
          `Valid keys: ${validKeys}`,
        'red'
      );
      process.exit(1);
    }

    log(
      `✓ Using products from --products flag: ${requested.join(', ')}`,
      `✓ Using products from --products flag: ${requestedNames.join(', ')}`,
      'green'
    );
    return requested;
    return requestedNames;
  }

  // Case 2: Unambiguous (single product detected)

@ -514,6 +708,74 @@
  return selected;
}

/**
 * Prompt user to select which external links to include
 * Local file paths are automatically followed
 * @param {object} links - {localFiles, external} from extractLinks
 * @returns {Promise<object>} {selectedLocal, selectedExternal}
 */
async function selectLinksToFollow(links) {
  // Local files are followed automatically (no user prompt)
  // External links require user selection
  if (links.external.length === 0) {
    return {
      selectedLocal: links.localFiles || [],
      selectedExternal: [],
    };
  }

  log('\n🔗 Found external links in draft:\n', 'bright');

  const allLinks = [];
  let index = 1;

  // Show external links for selection
  links.external.forEach((link) => {
    log(`  ${index}. ${link}`, 'yellow');
    allLinks.push({ type: 'external', url: link });
    index++;
  });

  const answer = await promptUser(
    '\nSelect external links to include as context ' +
      '(comma-separated numbers, or "all"): '
  );

  if (!answer || answer.toLowerCase() === 'none') {
    return {
      selectedLocal: links.localFiles || [],
      selectedExternal: [],
    };
  }

  let selectedIndices;
  if (answer.toLowerCase() === 'all') {
    selectedIndices = Array.from({ length: allLinks.length }, (_, i) => i);
  } else {
    selectedIndices = answer
      .split(',')
      .map((s) => parseInt(s.trim()) - 1)
      .filter((i) => i >= 0 && i < allLinks.length);
  }

  const selectedExternal = [];

  selectedIndices.forEach((i) => {
    const link = allLinks[i];
    selectedExternal.push(link.url);
  });

  log(
    `\n✓ Following ${links.localFiles?.length || 0} local file(s) ` +
      `and ${selectedExternal.length} external link(s)`,
    'green'
  );
  return {
    selectedLocal: links.localFiles || [],
    selectedExternal,
  };
}

/**
 * Run single content generator agent with direct file generation (Claude Code)
 */

@ -577,6 +839,30 @@ function generateClaudePrompt(
**Target Products**: Use \`context.selectedProducts\` field (${selectedProducts.join(', ')})
**Mode**: ${mode === 'edit' ? 'Edit existing content' : 'Create new documentation'}
${isURLBased ? `**URLs**: ${context.urls.map((u) => u.url).join(', ')}` : ''}
${
  context.linkedContent?.length > 0
    ? `
**Linked References**: The draft references ${context.linkedContent.length} page(s) from existing documentation.

These are provided for context to help you:
- Maintain consistent terminology and style
- Avoid duplicating existing content
- Understand related concepts and their structure
- Add appropriate links to the \`related\` frontmatter field

Linked content details available in \`context.linkedContent\`:
${context.linkedContent
  .map((lc) =>
    lc.error
      ? `- ❌ ${lc.url} (${lc.error})`
      : `- ✓ [${lc.type}] ${lc.title} (${lc.path || lc.url})`
  )
  .join('\n')}

**Important**: Use this content for context and reference, but do not copy it verbatim. Consider adding relevant pages to the \`related\` field in frontmatter.
`
    : ''
}

**Your Task**: Generate complete documentation files directly (no proposal step).

@ -908,16 +1194,40 @@ async function executePhase(options) {
async function main() {
  const options = parseArguments();

  // Show help
  // Show help first (don't wait for stdin)
  if (options.help) {
    printUsage();
    process.exit(0);
  }

  // Check for stdin only if no draft file was provided
  const hasStdin = !process.stdin.isTTY;
  let stdinContent = null;

  if (hasStdin && !options.draft) {
    // Stdin requires --products option
    if (!options.products) {
      log(
        '\n✗ Error: --products is required when piping content from stdin',
        'red'
      );
      log(
        'Example: echo "# Content" | yarn docs:create --products influxdb3_core',
        'yellow'
      );
      process.exit(1);
    }

    // Import readDraftFromStdin
    const { readDraftFromStdin } = await import('./lib/file-operations.js');
    log('📥 Reading draft from stdin...', 'cyan');
    stdinContent = await readDraftFromStdin();
  }

  // Determine workflow
  if (options.url && options.url.length > 0) {
    // URL-based workflow requires draft content
    if (!options.draft) {
    if (!options.draft && !stdinContent) {
      log('\n✗ Error: --url requires --draft <path>', 'red');
      log('The --url option specifies WHERE to create content.', 'yellow');
      log(

@ -934,29 +1244,75 @@ async function main() {
      process.exit(1);
    }

    const context = await prepareURLPhase(options.url, options.draft, options);
    const context = await prepareURLPhase(
      options.url,
      options.draft,
      options,
      stdinContent
    );

    if (options['context-only']) {
      // Stop after context preparation
      process.exit(0);
    }

    // Continue with AI analysis (Phase 2)
    // Generate prompt for product selection
    const selectedProducts = await selectProducts(context, options);
    const mode = context.urls?.length > 0 ? 'create' : 'create';
    const isURLBased = true;
    const hasExistingContent =
      context.existingContent &&
      Object.keys(context.existingContent).length > 0;

    const prompt = generateClaudePrompt(
      context,
      selectedProducts,
      mode,
      isURLBased,
      hasExistingContent
    );

    // Check environment and handle prompt accordingly
    if (!isClaudeCode()) {
      // Not in Claude Code: output prompt for external use
      outputPromptForExternalUse(prompt, options['print-prompt']);
    }

    // In Claude Code: continue with AI analysis (Phase 2)
    log('\n🤖 Running AI analysis with specialized agents...\n', 'bright');
    await runAgentAnalysis(context, options);

    // Execute proposal (Phase 3)
    await executePhase(options);
  } else if (options.draft) {
    // Draft-based workflow
    const context = await preparePhase(options.draft, options);
  } else if (options.draft || stdinContent) {
    // Draft-based workflow (from file or stdin)
    const context = await preparePhase(options.draft, options, stdinContent);

    if (options['context-only']) {
      // Stop after context preparation
      process.exit(0);
    }

    // Continue with AI analysis (Phase 2)
    // Generate prompt for product selection
    const selectedProducts = await selectProducts(context, options);
    const mode = 'create';
    const isURLBased = false;

    const prompt = generateClaudePrompt(
      context,
      selectedProducts,
      mode,
      isURLBased,
      false
    );

    // Check environment and handle prompt accordingly
    if (!isClaudeCode()) {
      // Not in Claude Code: output prompt for external use
      outputPromptForExternalUse(prompt, options['print-prompt']);
    }

    // In Claude Code: continue with AI analysis (Phase 2)
    log('\n🤖 Running AI analysis with specialized agents...\n', 'bright');
    await runAgentAnalysis(context, options);

@ -4,7 +4,7 @@
 */

import { readdirSync, readFileSync, existsSync, statSync } from 'fs';
import { join, dirname } from 'path';
import { join, dirname, resolve } from 'path';
import { fileURLToPath } from 'url';
import yaml from 'js-yaml';
import matter from 'gray-matter';

@ -314,12 +314,19 @@ export function findSiblingWeights(dirPath) {

/**
 * Prepare complete context for AI analysis
 * @param {string} draftPath - Path to draft file
 * @param {string|object} draftPathOrObject - Path to draft file or draft object
 * @returns {object} Context object
 */
export function prepareContext(draftPath) {
  // Read draft
  const draft = readDraft(draftPath);
export function prepareContext(draftPathOrObject) {
  // Read draft - handle both file path and draft object
  let draft;
  if (typeof draftPathOrObject === 'string') {
    draft = readDraft(draftPathOrObject);
    draft.path = draftPathOrObject;
  } else {
    // Already a draft object from stdin
    draft = draftPathOrObject;
  }

  // Load products
  const products = loadProducts();

@ -349,7 +356,7 @@ export function prepareContext(draftPath) {
  // Build context
  const context = {
    draft: {
      path: draftPath,
      path: draft.path || draftPathOrObject,
      content: draft.content,
      existingFrontmatter: draft.frontmatter,
    },

@ -616,7 +623,7 @@ export function detectSharedContent(filePath) {
    if (parsed.data && parsed.data.source) {
      return parsed.data.source;
    }
  } catch (error) {
  } catch (_error) {
    // Can't parse, assume not shared
    return null;
  }

@ -663,13 +670,13 @@ export function findSharedContentVariants(sourcePath) {
        const relativePath = fullPath.replace(REPO_ROOT + '/', '');
        variants.push(relativePath);
      }
      } catch (error) {
      } catch (_error) {
        // Skip files that can't be parsed
        continue;
      }
    }
  }
  } catch (error) {
  } catch (_error) {
    // Skip directories we can't read
  }
}

@ -758,3 +765,127 @@ export function analyzeURLs(parsedURLs) {

  return results;
}

/**
 * Extract and categorize links from markdown content
 * @param {string} content - Markdown content
 * @returns {object} {localFiles: string[], external: string[]}
 */
export function extractLinks(content) {
  const localFiles = [];
  const external = [];

  // Match markdown links: [text](url)
  const linkRegex = /\[([^\]]+)\]\(([^)]+)\)/g;
  let match;

  while ((match = linkRegex.exec(content)) !== null) {
    const url = match[2];

    // Skip anchor links and mailto
    if (url.startsWith('#') || url.startsWith('mailto:')) {
      continue;
    }

    // Local file paths (relative paths) - automatically followed
    if (url.startsWith('../') || url.startsWith('./')) {
      localFiles.push(url);
    }
    // All HTTP/HTTPS URLs (including docs.influxdata.com) - user selects
    else if (url.startsWith('http://') || url.startsWith('https://')) {
      external.push(url);
    }
    // Absolute paths starting with / are ignored (no base context to resolve)
  }

  return {
    localFiles: [...new Set(localFiles)],
    external: [...new Set(external)],
  };
}

/**
 * Follow local file links (relative paths)
 * @param {string[]} links - Array of relative file paths
 * @param {string} basePath - Base path to resolve relative links from
 * @returns {object[]} Array of {url, title, content, path, frontmatter}
 */
export function followLocalLinks(links, basePath = REPO_ROOT) {
  const results = [];

  for (const link of links) {
    try {
      // Resolve relative path from base path
      const filePath = resolve(basePath, link);

      // Check if file exists
      if (existsSync(filePath)) {
        const fileContent = readFileSync(filePath, 'utf8');
        const parsed = matter(fileContent);

        results.push({
          url: link,
          title: parsed.data?.title || 'Untitled',
          content: parsed.content,
          path: filePath.replace(REPO_ROOT + '/', ''),
          frontmatter: parsed.data,
          type: 'local',
        });
      } else {
        results.push({
          url: link,
          error: 'File not found',
          type: 'local',
        });
      }
    } catch (error) {
      results.push({
        url: link,
        error: error.message,
        type: 'local',
      });
    }
  }

  return results;
}

/**
 * Fetch external URLs
 * @param {string[]} urls - Array of external URLs
 * @returns {Promise<object[]>} Array of {url, title, content, type}
 */
export async function fetchExternalLinks(urls) {
  // Dynamic import axios
  const axios = (await import('axios')).default;
  const results = [];

  for (const url of urls) {
    try {
      const response = await axios.get(url, {
        timeout: 10000,
        headers: { 'User-Agent': 'InfluxData-Docs-Bot/1.0' },
      });

      // Extract title from HTML or use URL
      const titleMatch = response.data.match(/<title>([^<]+)<\/title>/i);
      const title = titleMatch ? titleMatch[1] : url;

      results.push({
        url,
        title,
        content: response.data,
        type: 'external',
        contentType: response.headers['content-type'],
      });
    } catch (error) {
      results.push({
        url,
        error: error.message,
        type: 'external',
      });
    }
  }

  return results;
}

@ -28,6 +28,38 @@ export function readDraft(filePath) {
  };
}

/**
 * Read draft content from stdin
 * @returns {Promise<{content: string, frontmatter: object, raw: string, path: string}>}
 */
export async function readDraftFromStdin() {
  return new Promise((resolve, reject) => {
    let data = '';
    process.stdin.setEncoding('utf8');

    process.stdin.on('data', (chunk) => {
      data += chunk;
    });

    process.stdin.on('end', () => {
      try {
        // Parse with gray-matter to extract frontmatter if present
        const parsed = matter(data);
        resolve({
          content: parsed.content,
          frontmatter: parsed.data || {},
          raw: data,
          path: '<stdin>',
        });
      } catch (error) {
        reject(error);
      }
    });

    process.stdin.on('error', reject);
  });
}

/**
 * Write a markdown file with frontmatter
 * @param {string} filePath - Path to write to

@ -0,0 +1,43 @@
#!/usr/bin/env node

/**
 * Setup script to make the `docs` command available locally after yarn install.
 * Creates a symlink in node_modules/.bin/docs pointing to scripts/docs-cli.js
 */

import { fileURLToPath } from 'url';
import { dirname, join } from 'path';
import { existsSync, mkdirSync, symlinkSync, unlinkSync, chmodSync } from 'fs';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const rootDir = join(__dirname, '..');

const binDir = join(rootDir, 'node_modules', '.bin');
const binLink = join(binDir, 'docs');
const targetScript = join(rootDir, 'scripts', 'docs-cli.js');

try {
  // Ensure node_modules/.bin directory exists
  if (!existsSync(binDir)) {
    mkdirSync(binDir, { recursive: true });
  }

  // Remove existing symlink if it exists
  if (existsSync(binLink)) {
    unlinkSync(binLink);
  }

  // Create symlink
  symlinkSync(targetScript, binLink, 'file');

  // Ensure the target script is executable
  chmodSync(targetScript, 0o755);

  console.log('✓ Created local `docs` command in node_modules/.bin/');
  console.log('  You can now use: npx docs <command>');
  console.log('  Or add node_modules/.bin to your PATH for direct access');
} catch (error) {
  console.error('Failed to setup local docs command:', error.message);
  process.exit(1);
}

@ -7,6 +7,7 @@ You are analyzing a documentation draft to generate an intelligent file structur
**Context file**: `.tmp/scaffold-context.json`

Read and analyze the context file, which contains:

- **draft**: The markdown content and any existing frontmatter
- **products**: Available InfluxDB products (Core, Enterprise, Cloud, etc.)
- **productHints**: Products mentioned or suggested based on content analysis

@ -49,11 +50,12 @@ For each file, create complete frontmatter with:
- **weight**: Sequential weight based on siblings
- **source**: (for frontmatter-only files) Path to shared content
- **related**: 3-5 relevant related articles from `structure.existingPaths`
- **alt_links**: Map equivalent pages across products for cross-product navigation
- **alt\_links**: Map equivalent pages across products for cross-product navigation

### 4. Code Sample Considerations

Based on `versionInfo`:

- Use version-specific CLI commands (influxdb3, influx, influxctl)
- Reference appropriate API endpoints (/api/v3, /api/v2)
- Note testing requirements from `conventions.testing`

@ -61,6 +63,7 @@ Based on `versionInfo`:
### 5. Style Compliance

Follow conventions from `conventions.namingRules`:

- Files: Use lowercase with hyphens (e.g., `manage-databases.md`)
- Directories: Use lowercase with hyphens
- Shared content: Place in appropriate `/content/shared/` subdirectory

@ -133,4 +136,8 @@ Generate a JSON proposal matching the schema in `scripts/schemas/scaffold-propos
4. Generate complete frontmatter for all files
5. Save the proposal to `.tmp/scaffold-proposal.json`

The proposal will be validated and used by `yarn docs:create --proposal .tmp/scaffold-proposal.json` to create the files.
The following command validates and creates files from the proposal:

```bash
npx docs create --proposal .tmp/scaffold-proposal.json
```

yarn.lock

@ -194,6 +194,11 @@
  resolved "https://registry.yarnpkg.com/@evilmartians/lefthook/-/lefthook-1.12.3.tgz#081eca59a6d33646616af844244ce6842cd6b5a5"
  integrity sha512-MtXIt8h+EVTv5tCGLzh9UwbA/LRv6esdPJOHlxr8NDKHbFnbo8PvU5uVQcm3PAQTd4DZN3HoyokqrwGwntoq6w==

"@github/copilot@latest":
  version "0.0.353"
  resolved "https://registry.yarnpkg.com/@github/copilot/-/copilot-0.0.353.tgz#3c8d8a072b3defbd2200c9fe4fb636d633ac7f1e"
  integrity sha512-OYgCB4Jf7Y/Wor8mNNQcXEt1m1koYm/WwjGsr5mwABSVYXArWUeEfXqVbx+7O87ld5b+aWy2Zaa2bzKV8dmqaw==

"@humanfs/core@^0.19.1":
  version "0.19.1"
  resolved "https://registry.yarnpkg.com/@humanfs/core/-/core-0.19.1.tgz#17c55ca7d426733fe3c561906b8173c336b40a77"

@ -1364,6 +1369,13 @@ confbox@^0.2.2:
  resolved "https://registry.yarnpkg.com/confbox/-/confbox-0.2.2.tgz#8652f53961c74d9e081784beed78555974a9c110"
  integrity sha512-1NB+BKqhtNipMsov4xI/NnhCKp9XG9NamYp5PVm9klAT0fsrNPjaFICsCFhNhwZJKNh7zB/3q8qXz0E9oaMNtQ==

copilot@^0.0.2:
  version "0.0.2"
  resolved "https://registry.yarnpkg.com/copilot/-/copilot-0.0.2.tgz#4712810c9182cd784820ed44627bedd32dd377f9"
  integrity sha512-nedf34AaYj9JnFhRmiJEZemAno2WDXMypq6FW5aCVR0N+QdpQ6viukP1JpvJDChpaMEVvbUkMjmjMifJbO/AgQ==
  dependencies:
    "@github/copilot" latest

core-util-is@1.0.2:
  version "1.0.2"
  resolved "https://registry.yarnpkg.com/core-util-is/-/core-util-is-1.0.2.tgz#b5fd54220aa2bc5ab57aab7140c940754503c1a7"