Merge branch 'master' into jts-dar-515-naming

pull/6214/head
Jason Stirnaman 2025-07-17 15:23:31 -05:00 committed by GitHub
commit b8598c7007
12 changed files with 503 additions and 108 deletions


@@ -8,6 +8,11 @@ menu:
parent: Install
related:
- /enterprise_influxdb/v1/introduction/installation/docker/docker-troubleshooting/
alt_links:
core: /influxdb3/core/get-started/setup/
enterprise: /influxdb3/enterprise/get-started/setup/
v1: /influxdb/v1/introduction/install/docker/
v2: /influxdb/v2/install/use-docker-compose/
---
InfluxDB v1 Enterprise provides Docker images for both meta nodes and data nodes to simplify cluster deployment and management.
@@ -18,6 +23,14 @@ Using Docker allows you to quickly set up and run InfluxDB Enterprise clusters w
> You must have a valid license to run InfluxDB Enterprise.
> Contact <sales@influxdata.com> for licensing information or obtain a 14-day demo license via the [InfluxDB Enterprise portal](https://portal.influxdata.com/users/new).
- [Docker image variants](#docker-image-variants)
- [Requirements](#requirements)
- [Set up an InfluxDB Enterprise cluster with Docker](#set-up-an-influxdb-enterprise-cluster-with-docker)
- [Configuration options](#configuration-options)
- [Exposing ports](#exposing-ports)
- [Persistent data storage](#persistent-data-storage)
- [Next steps](#next-steps)
## Docker image variants
InfluxDB Enterprise provides two specialized Docker images:
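To pull both variants ahead of time, you can fetch them explicitly--a minimal sketch using the image tags shown in the examples below:
```bash
# Pull the meta-node and data-node image variants
docker pull influxdb:meta
docker pull influxdb:data
```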
@@ -35,15 +48,23 @@ InfluxDB Enterprise provides two specialized Docker images:
## Set up an InfluxDB Enterprise cluster with Docker
1. [Create a Docker network](#create-a-docker-network)
2. [Start meta nodes](#start-meta-nodes)
3. [Configure meta nodes to know each other](#configure-meta-nodes-to-know-each-other)
4. [Start data nodes](#start-data-nodes)
5. [Add data nodes to the cluster](#add-data-nodes-to-the-cluster)
6. [Verify the cluster](#verify-the-cluster)
7. [Stop and restart InfluxDB v1 Enterprise Containers](#stop-and-restart-influxdb-v1-enterprise-containers)
### Create a Docker network
Create a custom Docker network to allow communication between meta and data nodes:
```bash
docker network create influxdb
```
### Start meta nodes
Start three meta nodes using the `influxdb:meta` image.
Each meta node requires a unique hostname and the Enterprise license key:
@@ -74,7 +95,7 @@ docker run -d \
influxdb:meta
```
### Configure meta nodes to know each other
From the first meta node, add the other meta nodes to the cluster:
@@ -88,7 +109,7 @@ docker exec influxdb-meta-0 \
influxd-ctl add-meta influxdb-meta-2:8091
```
### Start data nodes
Start two or more data nodes using the `influxdb:data` image:
@@ -110,7 +131,7 @@ docker run -d \
influxdb:data
```
### Add data nodes to the cluster
From the first meta node, register each data node with the cluster:
@@ -124,7 +145,7 @@ docker exec influxdb-meta-0 \
influxd-ctl add-data influxdb-data-1:8088
```
### Verify the cluster
Check that all nodes are properly added to the cluster:
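For example, a quick check from the first meta node (container name as created above):
```bash
# Show the meta and data nodes that have joined the cluster
docker exec influxdb-meta-0 influxd-ctl show
```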


@@ -16,12 +16,26 @@ related:
- /influxdb/v1/administration/config/, Configure InfluxDB OSS v1
alt_links:
core: /influxdb3/core/install/
enterprise: /influxdb3/enterprise/install/
v2: /influxdb/v2/install/use-docker-compose/
---
Install and run InfluxDB OSS v1.x using Docker containers.
This guide covers Docker installation, configuration, and initialization options.
- [Install and run InfluxDB](#install-and-run-influxdb)
  - [Pull the InfluxDB v1.x image](#pull-the-influxdb-v1x-image)
  - [Start InfluxDB](#start-influxdb)
- [Configure InfluxDB](#configure-influxdb)
  - [Using environment variables](#using-environment-variables)
  - [Using a configuration file](#using-a-configuration-file)
- [Initialize InfluxDB](#initialize-influxdb)
  - [Automatic initialization (for development)](#automatic-initialization-for-development)
  - [Custom initialization scripts](#custom-initialization-scripts)
- [Access the InfluxDB CLI](#access-the-influxdb-cli)
- [Next steps](#next-steps)
## Install and run InfluxDB
### Pull the InfluxDB v1.x image
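As a minimal sketch (the tag shown is illustrative--pick the 1.x release you need):
```bash
# Pull a specific InfluxDB 1.x image from Docker Hub
docker pull influxdb:1.11
```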


@@ -17,18 +17,22 @@ Use the automated upgrade process built into the [InfluxDB 2.x Docker image](htt
to update InfluxDB 1.x Docker deployments to InfluxDB 2.x.
- [Upgrade requirements](#upgrade-requirements)
- [InfluxDB 2.x initialization credentials](#influxdb-2x-initialization-credentials)
- [File system mounts](#file-system-mounts)
- [Upgrade initialization mode](#upgrade-initialization-mode)
- [Minimal upgrade](#minimal-upgrade)
- [Upgrade with a custom InfluxDB 1.x configuration file](#upgrade-with-a-custom-influxdb-1x-configuration-file)
- [Upgrade with custom paths](#upgrade-with-custom-paths)
- [Use new InfluxDB tools](#use-new-influxdb-tools)
- [Migrate continuous queries to tasks](#migrate-continuous-queries-to-tasks)
- [Use the interactive InfluxQL shell](#use-the-interactive-influxql-shell)
> [!Note]
> #### Export continuous queries before upgrading
> The automated upgrade process **does not** migrate InfluxDB 1.x continuous queries (CQs)
> to InfluxDB 2.x tasks (the 2.x equivalent). Export all of your CQs before upgrading to InfluxDB 2.x.
> For information about exporting and migrating CQs to tasks, see
> [Migrate continuous queries to tasks](/influxdb/v2/upgrade/v1-to-v2/migrate-cqs/).
## Upgrade requirements
InfluxDB 2.x provides a 1.x compatibility API, but expects a different storage layout on disk.
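For orientation, the minimal upgrade covered below amounts to running the 2.x image in upgrade mode against your 1.x data--a sketch, with illustrative volume names and credentials:
```bash
# Mount the 1.x data directory and a new 2.x data directory, then let the
# image's documented upgrade initialization mode perform the migration
docker run -p 8086:8086 \
  -v influxdb:/var/lib/influxdb \
  -v influxdb2:/var/lib/influxdb2 \
  -e DOCKER_INFLUXDB_INIT_MODE=upgrade \
  -e DOCKER_INFLUXDB_INIT_USERNAME=my-user \
  -e DOCKER_INFLUXDB_INIT_PASSWORD=my-password \
  -e DOCKER_INFLUXDB_INIT_ORG=my-org \
  -e DOCKER_INFLUXDB_INIT_BUCKET=my-bucket \
  influxdb:2
```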
@@ -253,21 +257,8 @@ docker run -p 8086:8086 \
<!--------------------------- END USE 2.x DEFAULTS ---------------------------->
{{< /tabs-wrapper >}}
## Use the interactive InfluxQL shell
The InfluxDB {{< current-version >}} `influx` CLI includes an interactive **InfluxQL shell** for executing InfluxQL queries.
The InfluxDB {{< current-version >}} Docker image includes the `influx` CLI.
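For example, you can open the shell inside a running container (the container name `influxdb2` is illustrative):
```bash
# Launch the interactive InfluxQL shell bundled with the influx CLI
docker exec -it influxdb2 influx v1 shell
```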


@@ -9,21 +9,31 @@ weight: 2
influxdb/v2/tags: [install]
related:
- /influxdb/v2/install/
- /influxdb/v2/install/upgrade/v1-to-v2/docker/
- /influxdb/v2/reference/cli/influx/auth/
- /influxdb/v2/reference/cli/influx/config/
- /influxdb/v2/reference/cli/influx/
- /influxdb/v2/admin/tokens/
alt_links:
v1: /influxdb/v1/introduction/install/docker/
core: /influxdb3/core/get-started/setup/
enterprise: /influxdb3/enterprise/get-started/setup/
---
Use Docker Compose to install and set up InfluxDB v2, the time series platform
purpose-built to collect, store, process and visualize metrics and events.
- [Set up using Docker Compose secrets](#set-up-using-docker-compose-secrets)
- [Run InfluxDB CLI commands in a container](#run-influxdb-cli-commands-in-a-container)
- [Manage files in mounted volumes](#manage-files-in-mounted-volumes)
## Set up using Docker Compose secrets
> [!Tip]
> When you use Docker Compose to create an InfluxDB container, you can use
> Compose [`secrets`](https://docs.docker.com/compose/use-secrets/) to control
> access to sensitive credentials such as username, password, and token and
> prevent leaking them in your `docker inspect` output.
The `influxdb` Docker image provides the following environment
variables to use with Compose `secrets`:
@@ -37,8 +47,6 @@ variables to use with Compose `secrets`:
[Operator token](/influxdb/v2/admin/tokens/#operator-token).
If you don't specify an initial token, InfluxDB generates one for you.
Follow these steps to set up and run InfluxDB using Docker Compose and `secrets`:
1. If you haven't already, install
@@ -83,6 +91,8 @@ Follow steps to set up and run InfluxDB using Docker Compose and `secrets`:
influxdb2-config:
```
_For more information about initialization environment variables, see the [upgrade guide](/influxdb/v2/install/upgrade/v1-to-v2/docker/)._
3. For each secret in `compose.yaml`, create a file that contains the secret
value--for example:
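A minimal sketch--the file names are illustrative and must match the paths your `compose.yaml` secrets point to:
```bash
# One file per Compose secret; each file contains only the secret value
echo 'admin' > ~/.env.influxdb2-admin-username
echo 'MySecurePassword' > ~/.env.influxdb2-admin-password
echo 'MyInitialAdminToken0==' > ~/.env.influxdb2-admin-token
```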


@@ -0,0 +1,17 @@
---
title: Configure object storage
description: |
Configure {{% product-name %}} to connect to and use different object storage
providers.
menu:
influxdb3_core:
name: Configure object storage
parent: Administer InfluxDB
weight: 110
influxdb3/core/tags: [object storage, S3]
related:
- /influxdb3/core/reference/config-options/
source: /shared/influxdb3-admin/object-storage/_index.md
---
<!-- //SOURCE content/shared/influxdb3-admin/object-storage/_index.md -->


@@ -0,0 +1,20 @@
---
title: Use MinIO for object storage
list_title: MinIO
description: |
Use [MinIO](https://min.io) as the object store for your {{% product-name %}} instance.
InfluxDB uses the MinIO S3-compatible API to interact with your MinIO server or
cluster.
menu:
influxdb3_core:
name: MinIO
parent: Configure object storage
weight: 205
influxdb3/core/tags: [object storage, S3]
related:
- https://min.io/docs/minio/linux/operations/installation.html, Install and Deploy MinIO
- /influxdb3/core/reference/config-options/
source: /shared/influxdb3-admin/object-storage/minio.md
---
<!-- //SOURCE content/shared/influxdb3-admin/object-storage/minio.md -->


@@ -0,0 +1,17 @@
---
title: Configure object storage
description: |
Configure {{% product-name %}} to connect to and use different object storage
providers.
menu:
influxdb3_enterprise:
name: Configure object storage
parent: Administer InfluxDB
weight: 110
influxdb3/enterprise/tags: [object storage, S3]
related:
- /influxdb3/enterprise/reference/config-options/
source: /shared/influxdb3-admin/object-storage/_index.md
---
<!-- //SOURCE content/shared/influxdb3-admin/object-storage/_index.md -->


@@ -0,0 +1,20 @@
---
title: Use MinIO for object storage
list_title: MinIO
description: |
Use [MinIO](https://min.io) as the object store for your {{% product-name %}} instance.
InfluxDB uses the MinIO S3-compatible API to interact with your MinIO server or
cluster.
menu:
influxdb3_enterprise:
name: MinIO
parent: Configure object storage
weight: 205
influxdb3/enterprise/tags: [object storage, S3]
related:
- https://min.io/docs/minio/linux/operations/installation.html, Install and Deploy MinIO
- /influxdb3/enterprise/reference/config-options/
source: /shared/influxdb3-admin/object-storage/minio.md
---
<!-- //SOURCE content/shared/influxdb3-admin/object-storage/minio.md -->


@@ -0,0 +1,8 @@
<!-- Comment to support shortcode -->
{{% product-name %}} can be configured to use different object storage providers
to store time series data in Parquet format. The process of configuring and
connecting to different object storage providers varies.
The following guides walk through configuring, connecting to, and using
different object storage providers as your {{% product-name %}} object store.
{{< children >}}


@@ -0,0 +1,320 @@
Use [MinIO](https://min.io) as the object store for your {{% product-name %}} instance.
InfluxDB uses the MinIO S3-compatible API to interact with your MinIO server or
cluster.
> MinIO is a high-performance, S3-compatible object storage solution released
> under the GNU AGPL v3.0 license. Designed for speed and scalability, it powers
> AI/ML, analytics, and data-intensive workloads with industry-leading performance.
>
> {{% cite %}}[MinIO GitHub repository](https://github.com/minio/minio?tab=readme-ov-file#readme){{% /cite %}}
MinIO provides both an open source version ([MinIO Community Edition](https://min.io/open-source/download))
and an enterprise version ([MinIO AIStor](https://min.io/download)).
While both can be used as your {{% product-name %}} object store,
**this guide walks through using MinIO Community Edition**.
- [Set up MinIO](#set-up-minio)
- [Configure InfluxDB to connect to MinIO](#configure-influxdb-to-connect-to-minio)
- [Confirm the object store is working](#confirm-the-object-store-is-working)
## Set up MinIO
1. **Install and deploy a MinIO server or cluster**.
You can install MinIO locally for testing and development or you can deploy
a production MinIO cluster across multiple machines. The MinIO documentation
provides detailed instructions for installing and deploying MinIO based on
your target operating system:
- [Install and deploy MinIO on **Linux**](https://min.io/docs/minio/linux/operations/installation.html)
<em class="op60">(recommended for production deployments)</em>
- [Install and deploy MinIO with **Kubernetes**](https://min.io/docs/minio/kubernetes/upstream/operations/installation.html)
- [Install and deploy MinIO with **Docker**](https://min.io/docs/minio/container/operations/installation.html)
- [Install and deploy MinIO on **macOS**](https://min.io/docs/minio/macos/operations/installation.html)
- [Install and deploy MinIO on **Windows**](https://min.io/docs/minio/windows/operations/installation.html)
2. **Download and install the MinIO Client (`mc`)**.
The MinIO client, or `mc` CLI, lets you perform administrative tasks on your
MinIO server or cluster like creating users, assigning access policies, and
more. Download and install the `mc` CLI for your local operating system
and architecture.
[See the **MinIO Client** section of the MinIO downloads page](https://min.io/open-source/download).
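For example, on Linux amd64 you can fetch the binary directly (URL per the MinIO downloads page; adjust for your platform and architecture):
```bash
# Download the mc binary, make it executable, and move it onto your PATH
curl -O https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc
sudo mv mc /usr/local/bin/
```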
3. **Configure the `mc` CLI to connect to your MinIO server or cluster**.
{#configure-alias}
The `mc` CLI uses "aliases" to connect to a MinIO server or cluster.
The alias refers to a set of connection credentials used to connect to and
authorize with your MinIO server.
Use the `mc alias set` command and provide the following:
- **Alias**: A unique name or identifier for this credential set
({{% code-placeholder-key %}}`ALIAS`{{% /code-placeholder-key %}})
- **MinIO URL**: The URL of your MinIO server or cluster
({{% code-placeholder-key %}}`http://localhost:9000`{{% /code-placeholder-key %}}
if running locally)
- **Root username:** The root username you specified when setting up your
MinIO server or cluster
({{% code-placeholder-key %}}`ROOT_USERNAME`{{% /code-placeholder-key %}})
- **Root password**: The root password you specified when setting up your
MinIO server or cluster
({{% code-placeholder-key %}}`ROOT_PASSWORD`{{% /code-placeholder-key %}})
<!-- pytest.mark.skip -->
```bash { placeholders="ALIAS|http://localhost:9000|ROOT_(USERNAME|PASSWORD)" }
mc alias set ALIAS http://localhost:9000 ROOT_USERNAME ROOT_PASSWORD
```
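To confirm the alias was saved, list your configured aliases:
<!-- pytest.mark.skip -->
```bash { placeholders="ALIAS" }
mc alias ls ALIAS
```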
4. **Create a MinIO bucket**.
Use the _MinIO Console_ or the _`mc mb` command_ to create a new bucket
in your MinIO server or cluster.
{{< tabs-wrapper >}}
{{% tabs "medium" %}}
[MinIO Console](#)
[mc CLI](#)
{{% /tabs %}}
{{% tab-content %}}
<!---------------------------- BEGIN MinIO Console ---------------------------->
The MinIO Console is a graphical user interface that lets you manage and browse
buckets in your MinIO server or cluster. By default, the console is served on port
`9001`.
If running MinIO on your local machine, visit <http://localhost:9001>
to access the MinIO Console. If MinIO is running on a remote server, use your
custom domain or IP to access the MinIO console.
1. In the MinIO Console, click **Create Bucket**.
2. Enter a bucket name. For this guide, use `influxdb3`.
3. Click **Create Bucket**.
<!----------------------------- END MinIO Console ----------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!-------------------------------- BEGIN mc CLI ------------------------------->
Use the `mc mb` command to create a new MinIO bucket named `influxdb3`.
Provide the MinIO alias configured in [step 3](#configure-alias) and the bucket
name using the `ALIAS/BUCKET_NAME` syntax--for example:
<!-- pytest.mark.skip -->
```bash { placeholders="ALIAS" }
mc mb ALIAS/influxdb3
```
<!--------------------------------- END mc CLI -------------------------------->
{{% /tab-content %}}
{{< /tabs-wrapper >}}
5. **Create a MinIO user**.
Use the `mc admin user add` command to create a new user.
Provide the following:
- **MinIO alias**: The MinIO server alias (created in [step 3](#configure-alias))
to add the user to ({{% code-placeholder-key %}}`ALIAS`{{% /code-placeholder-key %}})
- **Username**: A unique username for the user
({{% code-placeholder-key %}}`MINIO_USERNAME`{{% /code-placeholder-key %}})
- **Password**: A password for the user
({{% code-placeholder-key %}}`MINIO_PASSWORD`{{% /code-placeholder-key %}})
```bash { placeholders="ALIAS|MINIO_(USERNAME|PASSWORD)" }
mc admin user add ALIAS MINIO_USERNAME MINIO_PASSWORD
```
> [!Note]
> MinIO user credentials are equivalent to credentials you would typically
> use to authorize with AWS S3:
>
> - A MinIO username is equivalent to an AWS access key ID
> - A MinIO password is equivalent to an AWS secret key
6. **Create an access policy that grants full access to the `influxdb3` bucket**.
MinIO uses S3 compatible access policies to authorize access to buckets.
To create a new access policy:
1. Create a file named `influxdb3-policy.json` that contains the following
JSON:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::influxdb3"]
    },
    {
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::influxdb3/*"]
    }
  ]
}
```
2. Use the `mc admin policy create` command to create the new access policy
in your MinIO server or cluster. Provide the following:
- **MinIO alias**: The MinIO server alias (created in [step 3](#configure-alias)) to add
the access policy to ({{% code-placeholder-key %}}`ALIAS`{{% /code-placeholder-key %}})
- **Policy name**: A unique name for the policy
({{% code-placeholder-key %}}`POLICY_NAME`{{% /code-placeholder-key %}})
- **Policy file**: The relative or absolute file path of your
`influxdb3-policy.json` policy file
({{% code-placeholder-key %}}`/path/to/influxdb3-policy.json`{{% /code-placeholder-key %}})
```bash { placeholders="ALIAS|POLICY_NAME|/path/to/influxdb3-policy\.json" }
mc admin policy create \
  ALIAS \
  POLICY_NAME \
  /path/to/influxdb3-policy.json
```
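To verify the policy after creating it, you can print it back:
<!-- pytest.mark.skip -->
```bash { placeholders="ALIAS|POLICY_NAME" }
mc admin policy info ALIAS POLICY_NAME
```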
7. **Attach the access policy to your user.**
Use the `mc admin policy attach` command to attach the access policy to your
user.
> [!Note]
> MinIO supports attaching access policies to both users and user groups.
> All users in a user group inherit policies attached to the group.
> For information about managing MinIO user groups, see
> [MinIO Group Management](https://min.io/docs/minio/linux/administration/identity-access-management/minio-group-management.html).
Provide the following:
- **MinIO alias**: The MinIO server alias created in [step 3](#configure-alias)
({{% code-placeholder-key %}}`ALIAS`{{% /code-placeholder-key %}})
- **Policy name**: The name of the access policy to attach
({{% code-placeholder-key %}}`POLICY_NAME`{{% /code-placeholder-key %}})
- **Username** or **group name**: The user or user group to assign the policy to
({{% code-placeholder-key %}}`MINIO_USERNAME`{{% /code-placeholder-key %}} or
{{% code-placeholder-key %}}`MINIO_GROUP_NAME`{{% /code-placeholder-key %}})
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[user](#)
[group](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
<!-- pytest.mark.skip -->
```bash { placeholders="ALIAS|POLICY_NAME|MINIO_USERNAME" }
mc admin policy attach ALIAS POLICY_NAME --user MINIO_USERNAME
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
<!-- pytest.mark.skip -->
```bash { placeholders="ALIAS|POLICY_NAME|MINIO_GROUP_NAME" }
mc admin policy attach ALIAS POLICY_NAME --group MINIO_GROUP_NAME
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
Your MinIO server or cluster is now set up and ready to be used with {{% product-name %}}.
## Configure InfluxDB to connect to MinIO
To use your MinIO server or cluster as the object store for your {{% product-name %}}
instance, provide the following options or environment variables with the
`influxdb3 serve` command:
{{< tabs-wrapper >}}
{{% tabs "medium" %}}
[Command options](#)
[Environment variables](#)
{{% /tabs %}}
{{% tab-content %}}
<!--------------------------- BEGIN COMMAND OPTIONS --------------------------->
{{% show-in "enterprise" %}}- `--cluster-id`: Your {{% product-name %}} cluster ID ({{% code-placeholder-key %}}`INFLUXDB_CLUSTER_ID`{{% /code-placeholder-key %}}){{% /show-in %}}
- `--node-id`: Your {{% product-name %}} node ID ({{% code-placeholder-key %}}`INFLUXDB_NODE_ID`{{% /code-placeholder-key %}})
- `--object-store`: `s3`
- `--bucket`: `influxdb3`
- `--aws-endpoint`: Your MinIO URL ({{% code-placeholder-key %}}`http://localhost:9000`{{% /code-placeholder-key %}} if running locally)
- `--aws-access-key-id`: Your MinIO username ({{% code-placeholder-key %}}`MINIO_USERNAME`{{% /code-placeholder-key %}})
- `--aws-secret-access-key`: Your MinIO password ({{% code-placeholder-key %}}`MINIO_PASSWORD`{{% /code-placeholder-key %}})
- `--aws-allow-http`: _(Optional)_ Include if _not_ using HTTPS to connect to
your MinIO server or cluster
<!-- pytest.mark.skip -->
```bash { placeholders="INFLUXDB_(CLUSTER|NODE)_ID|http://localhost:9000|MINIO_(USERNAME|PASSWORD)" }
influxdb3 serve \
{{< show-in "enterprise" >}}--cluster-id INFLUXDB_CLUSTER_ID \
{{< /show-in >}}--node-id INFLUXDB_NODE_ID \
--object-store s3 \
--bucket influxdb3 \
--aws-endpoint http://localhost:9000 \
--aws-access-key-id MINIO_USERNAME \
--aws-secret-access-key MINIO_PASSWORD \
--aws-allow-http
```
<!---------------------------- END COMMAND OPTIONS ---------------------------->
{{% /tab-content %}}
{{% tab-content %}}
<!------------------------ BEGIN ENVIRONMENT VARIABLES ------------------------>
{{% show-in "enterprise" %}}- `INFLUXDB3_ENTERPRISE_CLUSTER_ID`: Your {{% product-name %}} cluster ID ({{% code-placeholder-key %}}`INFLUXDB_CLUSTER_ID`{{% /code-placeholder-key %}}){{% /show-in %}}
- `INFLUXDB3_NODE_IDENTIFIER_PREFIX`: Your {{% product-name %}} node ID ({{% code-placeholder-key %}}`INFLUXDB_NODE_ID`{{% /code-placeholder-key %}})
- `INFLUXDB3_OBJECT_STORE`: `s3`
- `INFLUXDB3_BUCKET`: `influxdb3`
- `AWS_ENDPOINT`: Your MinIO URL ({{% code-placeholder-key %}}`http://localhost:9000`{{% /code-placeholder-key %}} if running locally)
- `AWS_ACCESS_KEY_ID`: Your MinIO username ({{% code-placeholder-key %}}`MINIO_USERNAME`{{% /code-placeholder-key %}})
- `AWS_SECRET_ACCESS_KEY`: Your MinIO password ({{% code-placeholder-key %}}`MINIO_PASSWORD`{{% /code-placeholder-key %}})
- `AWS_ALLOW_HTTP`: _(Optional)_ Set to `true` if _not_ using HTTPS to connect to
your MinIO server or cluster (default is `false`)
<!-- pytest.mark.skip -->
```bash { placeholders="INFLUXDB_(CLUSTER|NODE)_ID|http://localhost:9000|MINIO_(USERNAME|PASSWORD)" }
{{< show-in "enterprise" >}}export INFLUXDB3_ENTERPRISE_CLUSTER_ID=INFLUXDB_CLUSTER_ID
{{< /show-in >}}export INFLUXDB3_NODE_IDENTIFIER_PREFIX=INFLUXDB_NODE_ID
export INFLUXDB3_OBJECT_STORE=s3
export INFLUXDB3_BUCKET=influxdb3
export AWS_ENDPOINT=http://localhost:9000
export AWS_ACCESS_KEY_ID=MINIO_USERNAME
export AWS_SECRET_ACCESS_KEY=MINIO_PASSWORD
export AWS_ALLOW_HTTP=true
influxdb3 serve
```
<!------------------------- END ENVIRONMENT VARIABLES ------------------------->
{{% /tab-content %}}
{{< /tabs-wrapper >}}
## Confirm the object store is working
When {{% product-name %}} starts, it seeds your MinIO object store with the
necessary directory structure and begins storing data there. To confirm that the
object store is functioning properly:
1. View the `influxdb3 serve` log output to confirm that the server is running correctly.
2. Inspect the contents of your MinIO `influxdb3` bucket to confirm that the
necessary directory structure is created. You can use the **MinIO Console**
or the **`mc ls` command** to view the contents of a bucket--for example:
```bash { placeholders="ALIAS" }
mc ls ALIAS/influxdb3
```


@@ -0,0 +1,9 @@
{{ $result := transform.HighlightCodeBlock . }}
{{ if .Attributes.placeholders }}
{{ $elReplace := print "<div data-component='code-placeholder' class='code-placeholder-wrapper'><var title='Edit $0' class='code-placeholder' data-code-var='$0' data-code-var-value='$0' data-code-var-escaped=\"$0\">$0<span class='code-placeholder-edit-icon cf-icon Pencil'></span></var></div>" }}
{{ $highlightedCode := highlight .Inner .Type }}
{{ $withPlaceholders := replaceRE .Attributes.placeholders $elReplace $highlightedCode }}
{{ $withPlaceholders | safeHTML }}
{{ else }}
{{ $result.Wrapped }}
{{ end }}
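For reference, this render hook fires for fenced code blocks that declare a `placeholders` attribute, as used elsewhere in this commit--for example:

````markdown
```bash { placeholders="ALIAS" }
mc mb ALIAS/influxdb3
```
````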


@@ -1,74 +1,22 @@
{{ define "recursiveMenu" }}
{{ $menuContext := .menu }}
{{ $currentPage := .currentPage }}
{{ $depth := add .depth 1 }}
{{ $navClass := cond (gt $depth 1) "item" "category" }}
{{ range $menuContext }}
<li class="nav-{{ $navClass }} {{ if eq $currentPage.RelPermalink .URL }}active{{end}}">
{{ if .HasChildren }}<a href="#" class="children-toggle {{ if or ($currentPage.IsMenuCurrent .Menu .) ($currentPage.HasMenuCurrent .Menu .) }}open{{end}}"></a>{{ end }}
<a href='{{ default .URL .Params.url }}'>{{ .Name }}</a>
{{ if .HasChildren }}
<ul class="children {{ if or ($currentPage.IsMenuCurrent .Menu .) ($currentPage.HasMenuCurrent .Menu .) }}open{{end}}">
{{ template "recursiveMenu" (dict "menu" .Children "currentPage" $currentPage "depth" $depth) }}
</ul>
{{ end }}
</li>
{{ end }}
{{ end }}
{{ template "recursiveMenu" (dict "menu" .menu "currentPage" .page "depth" 0) }}