Merge branch 'master' into pbarnett/3.0.1-release-notes

pbarnett/3.0.1-release-notes
peterbarnett03 2025-04-18 09:31:12 -04:00 committed by GitHub
commit 5d2e61eb85
40 changed files with 1661 additions and 961 deletions

View File

@ -9,6 +9,60 @@ menu:
parent: About the project parent: About the project
--- ---
## v1.12.0 {date="2025-04-15"}
### Features
- Add additional log output when using
[`influx_inspect buildtsi`](/enterprise_influxdb/v1/tools/influx_inspect/#buildtsi) to
rebuild the TSI index.
- Use [`influx_inspect export`](/enterprise_influxdb/v1/tools/influx_inspect/#export) with
[`-tsmfile` option](/enterprise_influxdb/v1/tools/influx_inspect/#--tsmfile-tsm_file-) to
export a single TSM file.
- Add `-m` flag to the [`influxd-ctl show-shards` command](/enterprise_influxdb/v1/tools/influxd-ctl/show-shards/)
to output inconsistent shards.
- Allow the specification of a write window for retention policies.
- Add `fluxQueryRespBytes` metric to the `/debug/vars` metrics endpoint.
- Log whenever meta gossip times exceed expiration.
- Add [`query-log-path` configuration option](/enterprise_influxdb/v1/administration/configure/config-data-nodes/#query-log-path)
to data nodes.
- Add [`aggressive-points-per-block` configuration option](/influxdb/v1/administration/config/#aggressive-points-per-block)
to help ensure TSM files are fully compacted.
- Log TLS configuration settings on startup.
- Check for TLS certificate and private key permissions.
- Add a warning if the TLS certificate is expired.
- Add authentication to the Raft portal and add the following related _data_
node configuration options:
- [`[meta].raft-portal-auth-required`](/enterprise_influxdb/v1/administration/configure/config-data-nodes/#raft-portal-auth-required)
- [`[meta].raft-dialer-auth-required`](/enterprise_influxdb/v1/administration/configure/config-data-nodes/#raft-dialer-auth-required)
- Improve error handling.
- InfluxQL updates:
- Delete series by retention policy.
- Allow retention policies to discard writes that fall within their range, but
outside of [`FUTURE LIMIT`](/enterprise_influxdb/v1/query_language/manage-database/#future-limit)
and [`PAST LIMIT`](/enterprise_influxdb/v1/query_language/manage-database/#past-limit).
### Bug fixes
- Log rejected writes to subscriptions.
- Update `xxhash` and avoid `stringtoslicebyte` in the cache.
- Prevent a panic when a shard group has no shards.
- Fix file handle leaks in `Compactor.write`.
- Ensure fields in memory match the fields on disk.
- Ensure temporary files are removed after failed compactions.
- Do not panic on invalid multiple subqueries.
- Update the `/shard-status` API to return the correct result and use a
consistent "idleness" definition for shards.
### Other
- Update Go to 1.23.5.
- Upgrade Flux to v0.196.1.
- Upgrade InfluxQL to v1.4.1.
- Various other dependency updates.
---
{{% note %}} {{% note %}}
#### InfluxDB Enterprise and FIPS-compliance #### InfluxDB Enterprise and FIPS-compliance
@ -21,6 +75,10 @@ InfluxDB Enterprise builds are available. For more information, see
## v1.11.8 {date="2024-11-15"} ## v1.11.8 {date="2024-11-15"}
### Features
- Add a startup logger to InfluxDB Enterprise data nodes.
### Bug Fixes ### Bug Fixes
- Strip double quotes from measurement names in the [`/api/v2/delete` compatibility - Strip double quotes from measurement names in the [`/api/v2/delete` compatibility
@ -28,6 +86,8 @@ InfluxDB Enterprise builds are available. For more information, see
string comparisons (e.g. to allow special characters in measurement names). string comparisons (e.g. to allow special characters in measurement names).
- Enable SHA256 for FIPS RPMs. - Enable SHA256 for FIPS RPMs.
---
## v1.11.7 {date="2024-09-19"} ## v1.11.7 {date="2024-09-19"}
### Bug Fixes ### Bug Fixes
@ -581,7 +641,7 @@ in that there is no corresponding InfluxDB OSS release.
### Features ### Features
- Upgrade to Go 1.15.10. - Upgrade to Go 1.15.10.
- Support user-defined *node labels*. - Support user-defined _node labels_.
Node labels let you assign arbitrary key-value pairs to meta and data nodes in a cluster. Node labels let you assign arbitrary key-value pairs to meta and data nodes in a cluster.
For instance, an operator might want to label nodes with the availability zone in which they're located. For instance, an operator might want to label nodes with the availability zone in which they're located.
- Improve performance of `SHOW SERIES CARDINALITY` and `SHOW SERIES CARDINALITY from <measurement>` InfluxQL queries. - Improve performance of `SHOW SERIES CARDINALITY` and `SHOW SERIES CARDINALITY from <measurement>` InfluxQL queries.
@ -756,11 +816,15 @@ For details on changes incorporated from the InfluxDB OSS release, see
### Features ### Features
#### **Back up meta data only** #### Back up meta data only
- Add option to back up **meta data only** (users, roles, databases, continuous queries, and retention policies) using the new `-strategy` flag and `only meta` option: `influx ctl backup -strategy only meta </your-backup-directory>`. - Add option to back up **meta data only** (users, roles, databases, continuous
queries, and retention policies) using the new `-strategy` flag and `only meta`
option: `influxd-ctl backup -strategy only meta </your-backup-directory>`.
> **Note:** To restore a meta data backup, use the `restore -full` command and specify your backup manifest: `influxd-ctl restore -full </backup-directory/backup.manifest>`. > [!Note]
> To restore a meta data backup, use the `restore -full` command and specify
> your backup manifest: `influxd-ctl restore -full </backup-directory/backup.manifest>`.
For more information, see [Perform a metastore only backup](/enterprise_influxdb/v1/administration/backup-and-restore/#perform-a-metastore-only-backup). For more information, see [Perform a metastore only backup](/enterprise_influxdb/v1/administration/backup-and-restore/#perform-a-metastore-only-backup).
@ -1007,7 +1071,10 @@ The following summarizes the expected settings for proper configuration of JWT a
`""`. `""`.
- A long pass phrase is recommended for better security. - A long pass phrase is recommended for better security.
>**Note:** To provide encrypted internode communication, you must enable HTTPS. Although the JWT signature is encrypted, the the payload of a JWT token is encoded, but is not encrypted. > [!Note]
> To provide encrypted internode communication, you must enable HTTPS. Although
> the JWT signature is encrypted, the payload of a JWT token is encoded, but
> is not encrypted.
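For example, a minimal sketch of generating a suitably long random pass phrase to use as the shared secret (the `openssl` command is only one option; any long random string works):
```bash
# Generate a 64-character random pass phrase for the shared secret.
openssl rand -base64 48
```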
### Bug fixes ### Bug fixes
@ -1082,8 +1149,10 @@ Please see the [InfluxDB OSS release notes](/influxdb/v1/about_the_project/relea
## v1.5.0 {date="2018-03-06"} ## v1.5.0 {date="2018-03-06"}
> ***Note:*** This release builds off of the 1.5 release of InfluxDB OSS. Please see the [InfluxDB OSS release > [!Note]
> notes](/influxdb/v1/about_the_project/release-notes/) for more information about the InfluxDB OSS release. > This release builds off of the 1.5 release of InfluxDB OSS.
> Please see the [InfluxDB OSS release notes](/influxdb/v1/about_the_project/release-notes/)
> for more information about the InfluxDB OSS release.
For highlights of the InfluxDB 1.5 release, see [What's new in InfluxDB 1.5](/influxdb/v1/about_the_project/whats_new/). For highlights of the InfluxDB 1.5 release, see [What's new in InfluxDB 1.5](/influxdb/v1/about_the_project/whats_new/).

View File

@ -259,6 +259,29 @@ For detailed configuration information, see [`meta.ensure-fips`](/enterprise_inf
Environment variable: `INFLUXDB_META_ENSURE_FIPS` Environment variable: `INFLUXDB_META_ENSURE_FIPS`
#### raft-portal-auth-required {metadata="v1.12.0+"}
Default is `false`.
Require Raft clients to authenticate with the server using the
[`meta-internal-shared-secret`](#meta-internal-shared-secret).
This requires that all meta nodes are running InfluxDB Enterprise v1.12.0+ and
are configured with the correct `meta-internal-shared-secret`.
Environment variable: `INFLUXDB_META_RAFT_PORTAL_AUTH_REQUIRED`
#### raft-dialer-auth-required {metadata="v1.12.0+"}
Default is `false`.
Require Raft servers to authenticate Raft clients using the
[`meta-internal-shared-secret`](#meta-internal-shared-secret).
This requires that all meta nodes are running InfluxDB Enterprise v1.12.0+, have
`raft-portal-auth-required=true`, and are configured with the correct
`meta-internal-shared-secret`.
Environment variable: `INFLUXDB_META_RAFT_DIALER_AUTH_REQUIRED`
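For example, a minimal sketch that enables both checks on a data node through the documented environment variables before starting the service (the configuration file path is an assumption; adjust it for your deployment):
```bash
# Require Raft portal and dialer authentication on this data node.
# The meta-internal-shared-secret must also be configured on all meta nodes.
export INFLUXDB_META_RAFT_PORTAL_AUTH_REQUIRED=true
export INFLUXDB_META_RAFT_DIALER_AUTH_REQUIRED=true
influxd -config /etc/influxdb/influxdb.conf
```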
----- -----
## Data settings ## Data settings
@ -305,6 +328,8 @@ Environment variable: `INFLUXDB_DATA_QUERY_LOG_ENABLED`
#### query-log-path #### query-log-path
Default is `""`.
An absolute path to the query log file. An absolute path to the query log file.
The default is `""` (queries aren't logged to a file). The default is `""` (queries aren't logged to a file).
@ -326,6 +351,8 @@ The following is an example of a `logrotate` configuration:
} }
``` ```
Environment variable: `INFLUXDB_DATA_QUERY_LOG_PATH`
#### wal-fsync-delay #### wal-fsync-delay
Default is `"0s"`. Default is `"0s"`.
@ -422,6 +449,16 @@ The duration at which to compact all TSM and TSI files in a shard if it has not
Environment variable: `INFLUXDB_DATA_COMPACT_FULL_WRITE_COLD_DURATION` Environment variable: `INFLUXDB_DATA_COMPACT_FULL_WRITE_COLD_DURATION`
#### aggressive-points-per-block {metadata="v1.12.0+"}
Default is `10000`.
The number of points per block to use during aggressive compaction. In certain
cases, TSM files do not get fully compacted; this option adjusts an internal
parameter to help ensure those files are fully compacted.
Environment variable: `INFLUXDB_DATA_AGGRESSIVE_POINTS_PER_BLOCK`
#### index-version #### index-version
Default is `"inmem"`. Default is `"inmem"`.

View File

@ -62,17 +62,22 @@ Creates a new database.
#### Syntax #### Syntax
```sql ```sql
CREATE DATABASE <database_name> [WITH [DURATION <duration>] [REPLICATION <n>] [SHARD DURATION <duration>] [NAME <retention-policy-name>]] CREATE DATABASE <database_name> [WITH [DURATION <duration>] [REPLICATION <n>] [SHARD DURATION <duration>] [PAST LIMIT <duration>] [FUTURE LIMIT <duration>] [NAME <retention-policy-name>]]
``` ```
#### Description of syntax #### Description of syntax
`CREATE DATABASE` requires a database [name](/enterprise_influxdb/v1/troubleshooting/frequently-asked-questions/#what-words-and-characters-should-i-avoid-when-writing-data-to-influxdb). `CREATE DATABASE` requires a database [name](/enterprise_influxdb/v1/troubleshooting/frequently-asked-questions/#what-words-and-characters-should-i-avoid-when-writing-data-to-influxdb).
The `WITH`, `DURATION`, `REPLICATION`, `SHARD DURATION`, and `NAME` clauses are optional and create a single [retention policy](/enterprise_influxdb/v1/concepts/glossary/#retention-policy-rp) associated with the created database. The `WITH`, `DURATION`, `REPLICATION`, `SHARD DURATION`, `PAST LIMIT`,
If you do not specify one of the clauses after `WITH`, the relevant behavior defaults to the `autogen` retention policy settings. `FUTURE LIMIT`, and `NAME` clauses are optional and create a single
[retention policy](/enterprise_influxdb/v1/concepts/glossary/#retention-policy-rp)
associated with the created database.
If you do not specify one of the clauses after `WITH`, the relevant behavior
defaults to the `autogen` retention policy settings.
The created retention policy automatically serves as the database's default retention policy. The created retention policy automatically serves as the database's default retention policy.
For more information about those clauses, see [Retention Policy Management](/enterprise_influxdb/v1/query_language/manage-database/#retention-policy-management). For more information about those clauses, see
[Retention Policy Management](/enterprise_influxdb/v1/query_language/manage-database/#retention-policy-management).
A successful `CREATE DATABASE` query returns an empty result. A successful `CREATE DATABASE` query returns an empty result.
If you attempt to create a database that already exists, InfluxDB does nothing and does not return an error. If you attempt to create a database that already exists, InfluxDB does nothing and does not return an error.
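For example, a minimal sketch using the `influx` CLI (the database name, retention policy name, and durations are illustrative):
```bash
# Create a database whose default retention policy keeps 30 days of data and
# rejects points with timestamps more than 6 hours in the past or future.
influx -execute 'CREATE DATABASE "mydb" WITH DURATION 30d PAST LIMIT 6h FUTURE LIMIT 6h NAME "myrp"'
```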
@ -122,21 +127,25 @@ The `DROP SERIES` query deletes all points from a [series](/enterprise_influxdb/
and it drops the series from the index. and it drops the series from the index.
The query takes the following form, where you must specify either the `FROM` clause or the `WHERE` clause: The query takes the following form, where you must specify either the `FROM` clause or the `WHERE` clause:
```sql ```sql
DROP SERIES FROM <measurement_name[,measurement_name]> WHERE <tag_key>='<tag_value>' DROP SERIES FROM <measurement_name[,measurement_name]> WHERE <tag_key>='<tag_value>'
``` ```
Drop all series from a single measurement: Drop all series from a single measurement:
```sql ```sql
> DROP SERIES FROM "h2o_feet" > DROP SERIES FROM "h2o_feet"
``` ```
Drop series with a specific tag pair from a single measurement: Drop series with a specific tag pair from a single measurement:
```sql ```sql
> DROP SERIES FROM "h2o_feet" WHERE "location" = 'santa_monica' > DROP SERIES FROM "h2o_feet" WHERE "location" = 'santa_monica'
``` ```
Drop all points in the series that have a specific tag pair from all measurements in the database: Drop all points in the series that have a specific tag pair from all measurements in the database:
```sql ```sql
> DROP SERIES WHERE "location" = 'santa_monica' > DROP SERIES WHERE "location" = 'santa_monica'
``` ```
@ -152,35 +161,49 @@ Unlike
You must include either the `FROM` clause, the `WHERE` clause, or both: You must include either the `FROM` clause, the `WHERE` clause, or both:
``` ```sql
DELETE FROM <measurement_name> WHERE [<tag_key>='<tag_value>'] | [<time interval>] DELETE FROM <measurement_name> WHERE [<tag_key>='<tag_value>'] | [<time interval>]
``` ```
Delete all data associated with the measurement `h2o_feet`: Delete all data associated with the measurement `h2o_feet`:
```
```sql
> DELETE FROM "h2o_feet" > DELETE FROM "h2o_feet"
``` ```
Delete all data associated with the measurement `h2o_quality` and where the tag `randtag` equals `3`: Delete all data associated with the measurement `h2o_quality` and where the tag `randtag` equals `3`:
```
```sql
> DELETE FROM "h2o_quality" WHERE "randtag" = '3' > DELETE FROM "h2o_quality" WHERE "randtag" = '3'
``` ```
Delete all data in the database that occur before January 01, 2020: Delete all data in the database that occur before January 01, 2020:
```
```sql
> DELETE WHERE time < '2020-01-01' > DELETE WHERE time < '2020-01-01'
``` ```
Delete all data associated with the measurement `h2o_feet` in retention policy `one_day`:
```sql
> DELETE FROM "one_day"."h2o_feet"
```
A successful `DELETE` query returns an empty result. A successful `DELETE` query returns an empty result.
Things to note about `DELETE`: Things to note about `DELETE`:
* `DELETE` supports * `DELETE` supports
[regular expressions](/enterprise_influxdb/v1/query_language/explore-data/#regular-expressions) [regular expressions](/enterprise_influxdb/v1/query_language/explore-data/#regular-expressions)
in the `FROM` clause when specifying measurement names and in the `WHERE` clause in the `FROM` clause when specifying measurement names and in the `WHERE` clause
when specifying tag values. when specifying tag values. It *does not* support regular expressions for the
* `DELETE` does not support [fields](/enterprise_influxdb/v1/concepts/glossary/#field) in the `WHERE` clause. retention policy in the `FROM` clause.
* If you need to delete points in the future, you must specify that time period as `DELETE SERIES` runs for `time < now()` by default. [Syntax](https://github.com/influxdata/influxdb/issues/8007) If deleting a series in a retention policy, `DELETE` requires that you define
*only one* retention policy in the `FROM` clause.
* `DELETE` does not support [fields](/enterprise_influxdb/v1/concepts/glossary/#field)
in the `WHERE` clause.
* If you need to delete points in the future, you must specify that time period
as `DELETE SERIES` runs for `time < now()` by default.
### Delete measurements with DROP MEASUREMENT ### Delete measurements with DROP MEASUREMENT
@ -234,8 +257,9 @@ You may disable its auto-creation in the [configuration file](/enterprise_influx
### Create retention policies with CREATE RETENTION POLICY ### Create retention policies with CREATE RETENTION POLICY
#### Syntax #### Syntax
```
CREATE RETENTION POLICY <retention_policy_name> ON <database_name> DURATION <duration> REPLICATION <n> [SHARD DURATION <duration>] [DEFAULT] ```sql
CREATE RETENTION POLICY <retention_policy_name> ON <database_name> DURATION <duration> REPLICATION <n> [SHARD DURATION <duration>] [PAST LIMIT <duration>] [FUTURE LIMIT <duration>] [DEFAULT]
``` ```
#### Description of syntax #### Description of syntax
@ -283,6 +307,28 @@ See
[Shard group duration management](/enterprise_influxdb/v1/concepts/schema_and_data_layout/#shard-group-duration-management) [Shard group duration management](/enterprise_influxdb/v1/concepts/schema_and_data_layout/#shard-group-duration-management)
for recommended configurations. for recommended configurations.
##### `PAST LIMIT`
The `PAST LIMIT` clause defines a time boundary before and relative to _now_
in which points written to the retention policy are accepted. If a point has a
timestamp before the specified boundary, the point is rejected and the write
request returns a partial write error.
For example, if a write request tries to write data to a retention policy with a
`PAST LIMIT 6h` and there are points in the request with timestamps older than
6 hours, those points are rejected.
##### `FUTURE LIMIT`
The `FUTURE LIMIT` clause defines a time boundary after and relative to _now_
in which points written to the retention policy are accepted. If a point has a
timestamp after the specified boundary, the point is rejected and the write
request returns a partial write error.
For example, if a write request tries to write data to a retention policy with a
`FUTURE LIMIT 6h` and there are points in the request with future timestamps
greater than 6 hours from now, those points are rejected.
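For example, a sketch of both clauses in action (the database, retention policy, measurement, and a local instance listening on port 8086 are assumptions):
```bash
# Create a retention policy that rejects points more than 6 hours in the
# past or future.
influx -execute 'CREATE RETENTION POLICY "recent" ON "mydb" DURATION 7d REPLICATION 1 PAST LIMIT 6h FUTURE LIMIT 6h'
# Writing a point with a 2017 timestamp to that retention policy returns a
# partial write error and the point is dropped.
curl -i -XPOST "http://localhost:8086/write?db=mydb&rp=recent" \
  --data-binary 'cpu,host=server01 value=0.5 1500000000000000000'
```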
##### `DEFAULT` ##### `DEFAULT`
Sets the new retention policy as the default retention policy for the database. Sets the new retention policy as the default retention policy for the database.

View File

@ -122,15 +122,15 @@ ALL ALTER ANY AS ASC BEGIN
BY CREATE CONTINUOUS DATABASE DATABASES DEFAULT BY CREATE CONTINUOUS DATABASE DATABASES DEFAULT
DELETE DESC DESTINATIONS DIAGNOSTICS DISTINCT DROP DELETE DESC DESTINATIONS DIAGNOSTICS DISTINCT DROP
DURATION END EVERY EXPLAIN FIELD FOR DURATION END EVERY EXPLAIN FIELD FOR
FROM GRANT GRANTS GROUP GROUPS IN FROM FUTURE GRANT GRANTS GROUP GROUPS
INF INSERT INTO KEY KEYS KILL IN INF INSERT INTO KEY KEYS
LIMIT SHOW MEASUREMENT MEASUREMENTS NAME OFFSET KILL LIMIT SHOW MEASUREMENT MEASUREMENTS NAME
ON ORDER PASSWORD POLICY POLICIES PRIVILEGES OFFSET ON ORDER PASSWORD PAST POLICY
QUERIES QUERY READ REPLICATION RESAMPLE RETENTION POLICIES PRIVILEGES QUERIES QUERY READ REPLICATION
REVOKE SELECT SERIES SET SHARD SHARDS RESAMPLE RETENTION REVOKE SELECT SERIES SET
SLIMIT SOFFSET STATS SUBSCRIPTION SUBSCRIPTIONS TAG SHARD SHARDS SLIMIT SOFFSET STATS SUBSCRIPTION
TO USER USERS VALUES WHERE WITH SUBSCRIPTIONS TAG TO USER USERS VALUES
WRITE WHERE WITH WRITE
``` ```
If you use an InfluxQL keyword as an If you use an InfluxQL keyword as an
@ -380,12 +380,14 @@ create_database_stmt = "CREATE DATABASE" db_name
[ retention_policy_duration ] [ retention_policy_duration ]
[ retention_policy_replication ] [ retention_policy_replication ]
[ retention_policy_shard_group_duration ] [ retention_policy_shard_group_duration ]
[ retention_past_limit ]
[ retention_future_limit ]
[ retention_policy_name ] [ retention_policy_name ]
] . ] .
``` ```
{{% warn %}} Replication factors do not serve a purpose with single node instances. > [!Warning]
{{% /warn %}} > Replication factors do not serve a purpose with single node instances.
#### Examples #### Examples
@ -393,11 +395,17 @@ create_database_stmt = "CREATE DATABASE" db_name
-- Create a database called foo -- Create a database called foo
CREATE DATABASE "foo" CREATE DATABASE "foo"
-- Create a database called bar with a new DEFAULT retention policy and specify the duration, replication, shard group duration, and name of that retention policy -- Create a database called bar with a new DEFAULT retention policy and specify
-- the duration, replication, shard group duration, and name of that retention policy
CREATE DATABASE "bar" WITH DURATION 1d REPLICATION 1 SHARD DURATION 30m NAME "myrp" CREATE DATABASE "bar" WITH DURATION 1d REPLICATION 1 SHARD DURATION 30m NAME "myrp"
-- Create a database called mydb with a new DEFAULT retention policy and specify the name of that retention policy -- Create a database called mydb with a new DEFAULT retention policy and specify
-- the name of that retention policy
CREATE DATABASE "mydb" WITH NAME "myrp" CREATE DATABASE "mydb" WITH NAME "myrp"
-- Create a database called bar with a new retention policy named "myrp", and
-- specify the duration, past and future limits, and name of that retention policy
CREATE DATABASE "bar" WITH DURATION 1d PAST LIMIT 6h FUTURE LIMIT 6h NAME "myrp"
``` ```
### CREATE RETENTION POLICY ### CREATE RETENTION POLICY
@ -407,11 +415,13 @@ create_retention_policy_stmt = "CREATE RETENTION POLICY" policy_name on_clause
retention_policy_duration retention_policy_duration
retention_policy_replication retention_policy_replication
[ retention_policy_shard_group_duration ] [ retention_policy_shard_group_duration ]
[ retention_past_limit ]
[ retention_future_limit ]
[ "DEFAULT" ] . [ "DEFAULT" ] .
``` ```
{{% warn %}} Replication factors do not serve a purpose with single node instances. > [!Warning]
{{% /warn %}} > Replication factors do not serve a purpose with single node instances.
#### Examples #### Examples
@ -424,6 +434,9 @@ CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 60m REPLICATION 2 DEFA
-- Create a retention policy and specify the shard group duration. -- Create a retention policy and specify the shard group duration.
CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 60m REPLICATION 2 SHARD DURATION 30m CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 60m REPLICATION 2 SHARD DURATION 30m
-- Create a retention policy and specify past and future limits.
CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 12h PAST LIMIT 6h FUTURE LIMIT 6h
``` ```
### CREATE SUBSCRIPTION ### CREATE SUBSCRIPTION

View File

@ -6,6 +6,15 @@ menu:
enterprise_influxdb_v1: enterprise_influxdb_v1:
name: Tools name: Tools
weight: 72 weight: 72
aliases:
- /enterprise_influxdb/v1/tools/flux-vscode/
prepend: |
> [!Important]
> #### Flux VS Code extension no longer available
>
> The `vsflux` extension is no longer available in the Visual Studio Marketplace.
> `vsflux` and the `flux-lsp` Flux Language Server Protocol plugin are no longer maintained.
> Their repositories have been archived and are no longer receiving updates.
--- ---
Use the following tools to work with InfluxDB Enterprise: Use the following tools to work with InfluxDB Enterprise:

View File

@ -10,6 +10,14 @@ menu:
enterprise_influxdb_v1: enterprise_influxdb_v1:
name: Flux VS Code extension name: Flux VS Code extension
parent: Tools parent: Tools
draft: true
prepend: |
> [!Important]
> #### Flux VS Code extension no longer available
>
> The `vsflux` extension is no longer available in the Visual Studio Marketplace.
> `vsflux` and the `flux-lsp` Flux Language Server Protocol plugin are no longer maintained.
> Their repositories have been archived and are no longer receiving updates.
--- ---
The [Flux Visual Studio Code (VS Code) extension](https://marketplace.visualstudio.com/items?itemName=influxdata.flux) The [Flux Visual Studio Code (VS Code) extension](https://marketplace.visualstudio.com/items?itemName=influxdata.flux)

View File

@ -10,9 +10,10 @@ menu:
Influx Inspect is an InfluxDB disk utility that can be used to: Influx Inspect is an InfluxDB disk utility that can be used to:
* View detailed information about disk shards. - View detailed information about disk shards.
* Export data from a shard to [InfluxDB line protocol](/enterprise_influxdb/v1/concepts/glossary/#influxdb-line-protocol) that can be inserted back into the database. - Export data from a shard to [InfluxDB line protocol](/enterprise_influxdb/v1/concepts/glossary/#influxdb-line-protocol)
* Convert TSM index shards to TSI index shards. that can be inserted back into the database.
- Convert TSM index shards to TSI index shards.
## `influx_inspect` utility ## `influx_inspect` utility
@ -38,8 +39,8 @@ The `influx_inspect` commands are summarized here, with links to detailed inform
- [`merge-schema`](#merge-schema): Merges a set of schema files from the `check-schema` command. - [`merge-schema`](#merge-schema): Merges a set of schema files from the `check-schema` command.
- [`report`](#report): Displays a shard level report. - [`report`](#report): Displays a shard level report.
- [`report-db`](#report-db): Estimates InfluxDB Cloud (TSM) cardinality for a database. - [`report-db`](#report-db): Estimates InfluxDB Cloud (TSM) cardinality for a database.
- [`report-disk`](#report-disk): Reports disk usage by shard and measurement. - [`report-disk`](#report-disk): Reports disk usage by shards and measurements.
- [`reporttsi`](#reporttsi): Reports on cardinality for measurements and shards. - [`reporttsi`](#reporttsi): Reports on cardinality for shards and measurements.
- [`verify`](#verify): Verifies the integrity of TSM files. - [`verify`](#verify): Verifies the integrity of TSM files.
- [`verify-seriesfile`](#verify-seriesfile): Verifies the integrity of series files. - [`verify-seriesfile`](#verify-seriesfile): Verifies the integrity of series files.
- [`verify-tombstone`](#verify-tombstone): Verifies the integrity of tombstones. - [`verify-tombstone`](#verify-tombstone): Verifies the integrity of tombstones.
@ -50,7 +51,9 @@ Builds TSI (Time Series Index) disk-based shard index files and associated serie
The index is written to a temporary location until complete and then moved to a permanent location. The index is written to a temporary location until complete and then moved to a permanent location.
If an error occurs, then this operation will fall back to the original in-memory index. If an error occurs, then this operation will fall back to the original in-memory index.
> ***Note:*** **For offline conversion only.** > [!Note]
> #### For offline conversion only
>
> When TSI is enabled, new shards use the TSI indexes. > When TSI is enabled, new shards use the TSI indexes.
> Existing shards continue as TSM-based shards until > Existing shards continue as TSM-based shards until
> converted offline. > converted offline.
@ -60,7 +63,9 @@ If an error occurs, then this operation will fall back to the original in-memory
``` ```
influx_inspect buildtsi -datadir <data_dir> -waldir <wal_dir> [ options ] influx_inspect buildtsi -datadir <data_dir> -waldir <wal_dir> [ options ]
``` ```
> **Note:** Use the `buildtsi` command with the user account that you are going to run the database as,
> [!Note]
> Use the `buildtsi` command with the user account that you are going to run the database as,
> or ensure that the permissions match after running the command. > or ensure that the permissions match after running the command.
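For example, a sketch that runs the conversion as the system user the database runs as (the `influxdb` user and directory paths are assumptions based on a typical package install):
```bash
sudo -u influxdb influx_inspect buildtsi \
  -datadir /var/lib/influxdb/data \
  -waldir /var/lib/influxdb/wal
```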
#### Options #### Options
@ -71,9 +76,8 @@ Optional arguments are in brackets.
The size of the batches written to the index. Default value is `10000`. The size of the batches written to the index. Default value is `10000`.
{{% warn %}} > [!Warning]
**Warning:** Setting this value can have adverse effects on performance and heap size. > Setting this value can have adverse effects on performance and heap size.
{{% /warn %}}
##### `[ -compact-series-file ]` ##### `[ -compact-series-file ]`
@ -90,10 +94,11 @@ The name of the database.
##### `-datadir <data_dir>` ##### `-datadir <data_dir>`
The path to the `data` directory. The path to the [`data` directory](/enterprise_influxdb/v1/concepts/file-system-layout/#data-directory).
Default value is `$HOME/.influxdb/data`. Default value is `$HOME/.influxdb/data`.
See the [file system layout](/enterprise_influxdb/v1/concepts/file-system-layout/) for InfluxDB on your system. See the [file system layout](/enterprise_influxdb/v1/concepts/file-system-layout/#file-system-layout)
for InfluxDB on your system.
##### `[ -max-cache-size ]` ##### `[ -max-cache-size ]`
@ -120,31 +125,32 @@ Flag to enable output in verbose mode.
##### `-waldir <wal_dir>` ##### `-waldir <wal_dir>`
The directory for the WAL (Write Ahead Log) files. The directory for the [WAL (Write Ahead Log)](/enterprise_influxdb/v1/concepts/file-system-layout/#wal-directory) files.
Default value is `$HOME/.influxdb/wal`. Default value is `$HOME/.influxdb/wal`.
See the [file system layout](/enterprise_influxdb/v1/concepts/file-system-layout/) for InfluxDB on your system. See the [file system layout](/enterprise_influxdb/v1/concepts/file-system-layout/#file-system-layout)
for InfluxDB on your system.
#### Examples #### Examples
##### Converting all shards on a node ##### Converting all shards on a node
``` ```
$ influx_inspect buildtsi -datadir /var/lib/influxdb/data -waldir /var/lib/influxdb/wal influx_inspect buildtsi -datadir ~/.influxdb/data -waldir ~/.influxdb/wal
``` ```
##### Converting all shards for a database ##### Converting all shards for a database
``` ```
$ influx_inspect buildtsi -database mydb datadir /var/lib/influxdb/data -waldir /var/lib/influxdb/wal influx_inspect buildtsi -database mydb -datadir ~/.influxdb/data -waldir ~/.influxdb/wal
``` ```
##### Converting a specific shard ##### Converting a specific shard
``` ```
$ influx_inspect buildtsi -database stress -shard 1 datadir /var/lib/influxdb/data -waldir /var/lib/influxdb/wal influx_inspect buildtsi -database stress -shard 1 -datadir ~/.influxdb/data -waldir ~/.influxdb/wal
``` ```
### `check-schema` ### `check-schema`
@ -161,7 +167,7 @@ influx_inspect check-schema [ options ]
##### [ `-conflicts-file <string>` ] ##### [ `-conflicts-file <string>` ]
Filename conflicts data should be written to. Default is `conflicts.json`. The filename where conflicts data should be written. Default is `conflicts.json`.
##### [ `-path <string>` ] ##### [ `-path <string>` ]
@ -170,17 +176,16 @@ working directory `.`.
##### [ `-schema-file <string>` ] ##### [ `-schema-file <string>` ]
Filename schema data should be written to. Default is `schema.json`. The filename where schema data should be written. Default is `schema.json`.
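For example, a sketch that scans a data directory and writes the results to custom file names (paths are illustrative):
```bash
influx_inspect check-schema \
  -path /var/lib/influxdb/data \
  -schema-file schema.json \
  -conflicts-file conflicts.json
```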
### `deletetsm` ### `deletetsm`
Use `deletetsm -measurement` to delete a measurement in a raw TSM file (from specified shards). Use `deletetsm -measurement` to delete a measurement in a raw TSM file (from specified shards).
Use `deletetsm -sanitize` to remove all tag and field keys containing non-printable Unicode characters in a raw TSM file (from specified shards). Use `deletetsm -sanitize` to remove all tag and field keys containing non-printable Unicode characters in a raw TSM file (from specified shards).
{{% warn %}} > [!Warning]
**Warning:** Use the `deletetsm` command only when your InfluxDB instance is > Use the `deletetsm` command only when your InfluxDB instance is
offline (`influxd` service is not running). > offline (`influxd` service is not running).
{{% /warn %}}
#### Syntax #### Syntax
@ -244,7 +249,7 @@ Optional arguments are in brackets.
##### `-series-file <series_path>` ##### `-series-file <series_path>`
Path to the `_series` directory under the database `data` directory. Required. The path to the `_series` directory under the database `data` directory. Required.
##### [ `-series` ] ##### [ `-series` ]
@ -283,18 +288,18 @@ Filter data by tag value regular expression.
##### Specifying paths to the `_series` and `index` directories ##### Specifying paths to the `_series` and `index` directories
``` ```
$ influx_inspect dumptsi -series-file /path/to/db/_series /path/to/index influx_inspect dumptsi -series-file /path/to/db/_series /path/to/index
``` ```
##### Specifying paths to the `_series` directory and an `index` file ##### Specifying paths to the `_series` directory and an `index` file
``` ```
$ influx_inspect dumptsi -series-file /path/to/db/_series /path/to/index/file0 influx_inspect dumptsi -series-file /path/to/db/_series /path/to/index/file0
``` ```
##### Specifying paths to the `_series` directory and multiple `index` files ##### Specifying paths to the `_series` directory and multiple `index` files
``` ```
$ influx_inspect dumptsi -series-file /path/to/db/_series /path/to/index/file0 /path/to/index/file1 ... influx_inspect dumptsi -series-file /path/to/db/_series /path/to/index/file0 /path/to/index/file1 ...
``` ```
### `dumptsm` ### `dumptsm`
@ -309,7 +314,7 @@ influx_inspect dumptsm [ options ] <path>
##### `<path>` ##### `<path>`
Path to the `.tsm` file, located by default in the `data` directory. The path to the `.tsm` file, located by default in the `data` directory.
#### Options #### Options
@ -317,17 +322,17 @@ Optional arguments are in brackets.
##### [ `-index` ] ##### [ `-index` ]
Flag to dump raw index data. The flag to dump raw index data.
Default value is `false`. Default value is `false`.
##### [ `-blocks` ] ##### [ `-blocks` ]
Flag to dump raw block data. The flag to dump raw block data.
Default value is `false`. Default value is `false`.
##### [ `-all` ] ##### [ `-all` ]
Flag to dump all data. Caution: This may print a lot of information. The flag to dump all data. Caution: This may print a lot of information.
Default value is `false`. Default value is `false`.
##### [ `-filter-key <key_name>` ] ##### [ `-filter-key <key_name>` ]
@ -351,14 +356,14 @@ Optional arguments are in brackets.
##### [ `-show-duplicates` ] ##### [ `-show-duplicates` ]
Flag to show keys which have duplicate or out-of-order timestamps. The flag to show keys which have duplicate or out-of-order timestamps.
If a user writes points with timestamps set by the client, then multiple points with the same timestamp (or with time-descending timestamps) can be written. If a user writes points with timestamps set by the client, then multiple points with the same timestamp (or with time-descending timestamps) can be written.
### `export` ### `export`
Exports all TSM files in InfluxDB line protocol data format. Exports all TSM files or a single TSM file in InfluxDB line protocol data format.
This output file can be imported using the The output file can be imported using the
[influx](/enterprise_influxdb/v1/tools/influx-cli/use-influx/#import-data-from-a-file-with-import) command. [influx](/enterprise_influxdb/v1/tools/influx-cli/use-influx-cli/) command.
#### Syntax #### Syntax
@ -382,10 +387,11 @@ Default value is `""`.
##### `-datadir <data_dir>` ##### `-datadir <data_dir>`
The path to the `data` directory. The path to the [`data` directory](/enterprise_influxdb/v1/concepts/file-system-layout/#data-directory).
Default value is `$HOME/.influxdb/data`. Default value is `$HOME/.influxdb/data`.
See the [file system layout](/enterprise_influxdb/v1/concepts/file-system-layout/) for InfluxDB on your system. See the [file system layout](/enterprise_influxdb/v1/concepts/file-system-layout/#file-system-layout)
for InfluxDB on your system.
##### [ `-end <timestamp>` ] ##### [ `-end <timestamp>` ]
@ -408,15 +414,20 @@ YYYY-MM-DDTHH:MM:SS-08:00
YYYY-MM-DDTHH:MM:SS+07:00 YYYY-MM-DDTHH:MM:SS+07:00
``` ```
> **Note:** With offsets, avoid replacing the + or - sign with a Z. It may cause an error or print Z (ISO 8601 behavior) instead of the time zone offset. > [!Note]
> With offsets, avoid replacing the + or - sign with a Z. It may cause an error
> or print Z (ISO 8601 behavior) instead of the time zone offset.
##### [ `-lponly` ] ##### [ `-lponly` ]
Output data in line protocol format only. Output data in line protocol format only.
Does not output data definition language (DDL) statements (such as `CREATE DATABASE`) or DML context metadata (such as `# CONTEXT-DATABASE`). Does not output data definition language (DDL) statements (such as `CREATE DATABASE`)
or DML context metadata (such as `# CONTEXT-DATABASE`).
##### [ `-out <export_dir>` ] ##### [ `-out <export_dir>` or `-out -` ]
Location to export shard data. Specify an export directory to export to a file, or use a hyphen (`-out -`) to export shard data to standard output (`stdout`) and send status messages to standard error (`stderr`).
The location for the export file.
Default value is `$HOME/.influxdb/export`. Default value is `$HOME/.influxdb/export`.
##### [ `-retention <rp_name> ` ] ##### [ `-retention <rp_name> ` ]
@ -433,7 +444,13 @@ The timestamp string must be in [RFC3339 format](https://tools.ietf.org/html/rfc
Path to the [WAL](/enterprise_influxdb/v1/concepts/glossary/#wal-write-ahead-log) directory. Path to the [WAL](/enterprise_influxdb/v1/concepts/glossary/#wal-write-ahead-log) directory.
Default value is `$HOME/.influxdb/wal`. Default value is `$HOME/.influxdb/wal`.
See the [file system layout](/enterprise_influxdb/v1/concepts/file-system-layout/) for InfluxDB on your system. See the [file system layout](/enterprise_influxdb/v1/concepts/file-system-layout/#file-system-layout)
for InfluxDB on your system.
##### [ `-tsmfile <tsm_file>` ]
The path to a single TSM file to export. This requires both `-database` and
`-retention` to be specified.
#### Examples #### Examples
@ -449,6 +466,15 @@ influx_inspect export -compress
influx_inspect export -database DATABASE_NAME -retention RETENTION_POLICY influx_inspect export -database DATABASE_NAME -retention RETENTION_POLICY
``` ```
##### Export data from a single TSM file
```bash
influx_inspect export \
-database DATABASE_NAME \
-retention RETENTION_POLICY \
-tsmfile TSM_FILE_NAME
```
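##### Export line protocol to standard output
A sketch combining the `-lponly` and `-out -` options described above; because status messages go to `stderr`, the exported line protocol can be piped directly to another command.
```bash
influx_inspect export -lponly -out - | head -n 5
```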
##### Output file ##### Output file
```bash ```bash
@ -522,7 +548,7 @@ Note: This can use a lot of memory.
Use the `report-db` command to estimate the series cardinality of data in a Use the `report-db` command to estimate the series cardinality of data in a
database when migrated to InfluxDB Cloud (TSM). InfluxDB Cloud (TSM) includes database when migrated to InfluxDB Cloud (TSM). InfluxDB Cloud (TSM) includes
field keys in the series key so unique field keys affect the total cardinality. field keys in the series key so unique field keys affect the total cardinality.
The total series cardinality of data in an InfluxDB 1.x database may differ The total series cardinality of data in an InfluxDB 1.x database may differ
from the series cardinality of that same data when migrated to InfluxDB Cloud (TSM). from the series cardinality of that same data when migrated to InfluxDB Cloud (TSM).
@ -562,33 +588,87 @@ Specify the cardinality "rollup" level--the granularity of the cardinality repor
### `report-disk` ### `report-disk`
Use the `report-disk` command to review TSM file disk usage per shard and measurement in a specified directory. Useful for capacity planning and identifying which measurement or shard is using the most disk space. The default directory path `~/.influxdb/data/`. Use the `report-disk` command to review disk usage by shards and measurements for TSM files in a specified directory. Useful for determining disk usage for capacity planning and identifying which measurements or shards are using the most space.
Calculates the total disk size by database (`db`), retention policy (`rp`), shard (`shard`), tsm file (`tsm_file`), and measurement (`measurement`). Calculates the total disk size (`total_tsm_size`) in bytes, the number of shards (`shards`), and the number of tsm files (`tsm_files`) for the specified directory. Also calculates the disk size (`size`) and number of tsm files (`tsm_files`) for each shard. Use the `-detailed` flag to report disk usage (`size`) by database (`db`), retention policy (`rp`), and measurement (`measurement`).
#### Syntax #### Syntax
``` ```
influx_inspect report-disk [ options ] <data_dir> influx_inspect report-disk [ options ] <path>
``` ```
##### `<path>`
Path to the directory with `.tsm` file(s) to report disk usage for. Default location is `$HOME/.influxdb/data`.
When specifying the path, wildcards (`*`) can replace one or more characters.
#### Options #### Options
Optional arguments are in brackets. Optional arguments are in brackets.
##### [ `-detailed` ] ##### [ `-detailed` ]
Report disk usage by measurement. Include this flag to report disk usage by measurement.
#### Examples
##### Report on disk size by shard
```bash
influx_inspect report-disk ~/.influxdb/data/
```
##### Output
```bash
{
"Summary": {"shards": 2, "tsm_files": 8, "total_tsm_size": 149834637 },
"Shard": [
{"db": "stress", "rp": "autogen", "shard": "3", "tsm_files": 7, "size": 147022321},
{"db": "telegraf", "rp": "autogen", "shard": "2", "tsm_files": 1, "size": 2812316}
]
}
```
##### Report on disk size by measurement
```bash
influx_inspect report-disk -detailed ~/.influxdb/data/
```
##### Output
```bash
{
"Summary": {"shards": 2, "tsm_files": 8, "total_tsm_size": 149834637 },
"Shard": [
{"db": "stress", "rp": "autogen", "shard": "3", "tsm_files": 7, "size": 147022321},
{"db": "telegraf", "rp": "autogen", "shard": "2", "tsm_files": 1, "size": 2812316}
],
"Measurement": [
{"db": "stress", "rp": "autogen", "measurement": "ctr", "size": 107900000},
{"db": "telegraf", "rp": "autogen", "measurement": "cpu", "size": 1784211},
{"db": "telegraf", "rp": "autogen", "measurement": "disk", "size": 374121},
{"db": "telegraf", "rp": "autogen", "measurement": "diskio", "size": 254453},
{"db": "telegraf", "rp": "autogen", "measurement": "mem", "size": 171120},
{"db": "telegraf", "rp": "autogen", "measurement": "processes", "size": 59691},
{"db": "telegraf", "rp": "autogen", "measurement": "swap", "size": 42310},
{"db": "telegraf", "rp": "autogen", "measurement": "system", "size": 59561}
]
}
```
### `reporttsi` ### `reporttsi`
The report does the following: The report does the following:
* Calculates the total exact series cardinality in the database. - Calculates the total exact series cardinality in the database.
* Segments that cardinality by measurement, and emits those cardinality values. - Segments that cardinality by measurement, and emits those cardinality values.
* Emits total exact cardinality for each shard in the database. - Emits total exact cardinality for each shard in the database.
* Segments for each shard the exact cardinality for each measurement in the shard. - Segments for each shard the exact cardinality for each measurement in the shard.
* Optionally limits the results in each shard to the "top n". - Optionally limits the results in each shard to the "top n".
The `reporttsi` command is primarily useful when there has been a change in cardinality The `reporttsi` command is primarily useful when there has been a change in cardinality
and it's not clear which measurement is responsible for this change, and further, _when_ and it's not clear which measurement is responsible for this change, and further, _when_
@ -703,7 +783,8 @@ Enables very verbose logging. Displays progress for every series key and time ra
Enables very very verbose logging. Displays progress for every series key and time range in the tombstone files. Timestamps are displayed in [RFC3339 format](https://tools.ietf.org/html/rfc3339) with nanosecond precision. Enables very very verbose logging. Displays progress for every series key and time range in the tombstone files. Timestamps are displayed in [RFC3339 format](https://tools.ietf.org/html/rfc3339) with nanosecond precision.
> **Note on verbose logging:** Higher verbosity levels override lower levels. > [!Note]
> Higher verbosity levels override lower levels.
## Caveats ## Caveats

View File

@ -44,14 +44,16 @@ ID Database Retention Policy Desired Replicas Shard Group Start
{{% /expand %}} {{% /expand %}}
{{< /expand-wrapper >}} {{< /expand-wrapper >}}
You can also use the `-m` flag to output "inconsistent" shards: shards that
exist in metadata but not on disk, or on disk but not in metadata.
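For example, a minimal sketch (output depends on the state of your cluster):
```bash
influxd-ctl show-shards -m
```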
## Flags ## Flags
| Flag | Description | | Flag | Description |
| :--- | :-------------------------------- | | :--- | :-------------------------------- |
| `-v` | Return detailed shard information | | `-v` | Return detailed shard information |
| `-m` | Return inconsistent shards |
{{% caption %}} {{% caption %}}
_Also see [`influxd-ctl` global flags](/enterprise_influxdb/v1/tools/influxd-ctl/#influxd-ctl-global-flags)._ _Also see [`influxd-ctl` global flags](/enterprise_influxdb/v1/tools/influxd-ctl/#influxd-ctl-global-flags)._
{{% /caption %}} {{% /caption %}}
## Examples

View File

@ -14,6 +14,14 @@ aliases:
InfluxDB Cloud updates occur frequently. Find a compilation of recent updates below. InfluxDB Cloud updates occur frequently. Find a compilation of recent updates below.
To find information about the latest Flux updates in InfluxDB Cloud, see [Flux release notes](/influxdb/cloud/reference/release-notes/flux/). To find information about the latest Flux updates in InfluxDB Cloud, see [Flux release notes](/influxdb/cloud/reference/release-notes/flux/).
## April 2025
### Flux VS Code extension no longer maintained
`vsflux` is no longer available in the Visual Studio Marketplace.
The `vsflux` Visual Studio Code extension and the `flux-lsp` Flux Language Server Protocol plugin are no longer maintained.
Their repositories have been archived and are no longer receiving updates.
## October 2022 ## October 2022
### Custom data retention periods ### Custom data retention periods

View File

@ -6,6 +6,15 @@ weight: 13
menu: menu:
influxdb_cloud: influxdb_cloud:
name: Tools & integrations name: Tools & integrations
aliases:
- /influxdb/cloud/tools/flux-vscode/
prepend: |
> [!Important]
> #### Flux VS Code extension no longer available
>
> The `vsflux` extension is no longer available in the Visual Studio Marketplace.
> `vsflux` and the `flux-lsp` Flux Language Server Protocol plugin are no longer maintained.
> Their repositories have been archived and are no longer receiving updates.
--- ---
{{< children >}} {{< children >}}

View File

@ -11,6 +11,7 @@ menu:
name: Flux VS Code extension name: Flux VS Code extension
parent: Tools & integrations parent: Tools & integrations
source: /shared/influxdb-v2/tools/flux-vscode.md source: /shared/influxdb-v2/tools/flux-vscode.md
draft: true
--- ---
<!-- The content of this file is at <!-- The content of this file is at

View File

@ -12,6 +12,45 @@ alt_links:
v2: /influxdb/v2/reference/release-notes/influxdb/ v2: /influxdb/v2/reference/release-notes/influxdb/
--- ---
## v1.12.0 {date="2025-04-15"}
### Features
- Add additional log output when using
[`influx_inspect buildtsi`](/influxdb/v1/tools/influx_inspect/#buildtsi) to
rebuild the TSI index.
- Use [`influx_inspect export`](/influxdb/v1/tools/influx_inspect/#export) with
[`-tsmfile` option](/influxdb/v1/tools/influx_inspect/#--tsmfile-tsm_file-) to
export a single TSM file.
- Add `fluxQueryRespBytes` metric to the `/debug/vars` metrics endpoint.
- Add [`aggressive-points-per-block` configuration option](/influxdb/v1/administration/config/#aggressive-points-per-block)
to help ensure TSM files are fully compacted.
- Improve error handling.
- InfluxQL updates:
- Delete series by retention policy.
- Allow retention policies to discard writes that fall within their range, but
outside of [`FUTURE LIMIT`](/influxdb/v1/query_language/manage-database/#future-limit)
and [`PAST LIMIT`](/influxdb/v1/query_language/manage-database/#past-limit).
### Bug fixes
- Log rejected writes to subscriptions.
- Update `xxhash` and avoid `stringtoslicebyte` in the cache.
- Prevent a panic when a shard group has no shards.
- Fix file handle leaks in `Compactor.write`.
- Ensure fields in memory match the fields on disk.
- Ensure temporary files are removed after failed compactions.
- Do not panic on invalid multiple subqueries.
### Other
- Update Go to 1.23.5.
- Upgrade Flux to v0.196.1.
- Upgrade InfluxQL to v1.4.1.
- Various other dependency updates.
---
## v1.11.8 {date="2024-11-15"} ## v1.11.8 {date="2024-11-15"}
### Bug Fixes ### Bug Fixes
@ -20,6 +59,8 @@ alt_links:
compatibility API](/influxdb/v1/tools/api/#apiv2delete-http-endpoint) before compatibility API](/influxdb/v1/tools/api/#apiv2delete-http-endpoint) before
string comparisons (e.g. to allow special characters in measurement names). string comparisons (e.g. to allow special characters in measurement names).
---
## v1.11.7 {date="2024-10-10"} ## v1.11.7 {date="2024-10-10"}
This release represents the first public release of InfluxDB OSS v1 since 2021 This release represents the first public release of InfluxDB OSS v1 since 2021
@ -72,17 +113,17 @@ All official build packages are for 64-bit architectures.
and [`influx_inspect merge-schema`](/influxdb/v1/tools/influx_inspect/#merge-schema) and [`influx_inspect merge-schema`](/influxdb/v1/tools/influx_inspect/#merge-schema)
commands to check for type conflicts between shards. commands to check for type conflicts between shards.
- **New configuration options:** - **New configuration options:**
- Add [`total-buffer-bytes`](/influxdb/v1/administration/config/#total-buffer-bytes--0) - Add [`total-buffer-bytes`](/influxdb/v1/administration/config/#total-buffer-bytes)
configuration option to set the total number of bytes to allocate to configuration option to set the total number of bytes to allocate to
subscription buffers. subscription buffers.
- Add [`termination-query-log`](/influxdb/v1/administration/config/#termination-query-log--false) - Add [`termination-query-log`](/influxdb/v1/administration/config/#termination-query-log)
configuration option to enable dumping running queries to log on `SIGTERM`. configuration option to enable dumping running queries to log on `SIGTERM`.
- Add [`max-concurrent-deletes`](/influxdb/v1/administration/config/#max-concurrent-deletes--1) - Add [`max-concurrent-deletes`](/influxdb/v1/administration/config/#max-concurrent-deletes)
configuration option to set delete concurrency. configuration option to set delete concurrency.
- Add [Flux query configuration settings](/influxdb/v1/administration/config/#flux-query-management-settings). - Add [Flux query configuration settings](/influxdb/v1/administration/config/#flux-query-management-settings).
- Add [`compact-series-file`](/influxdb/v1/administration/config/#compact-series-file--false) - Add [`compact-series-file`](/influxdb/v1/administration/config/#compact-series-file)
configuration option to enable or disable series file compaction on startup. configuration option to enable or disable series file compaction on startup.
- Add [`prom-read-auth-enabled` configuration option](/influxdb/v1/administration/config/#prom-read-auth-enabled--false) - Add [`prom-read-auth-enabled` configuration option](/influxdb/v1/administration/config/#prom-read-auth-enabled)
to authenticate Prometheus remote read. to authenticate Prometheus remote read.
- **Flux improvements:** - **Flux improvements:**
- Upgrade Flux to v0.194.5. - Upgrade Flux to v0.194.5.
@ -243,7 +284,7 @@ This release is for InfluxDB Enterprise 1.8.6 customers only. No OSS-specific ch
### Bug fixes ### Bug fixes
- Update meta queries (for example, SHOW TAG VALUES, SHOW TAG KEYS, SHOW SERIES CARDINALITY, SHOW MEASUREMENT CARDINALITY, and SHOW MEASUREMENTS) to check the query context when possible to respect timeout values set in the [`query-timeout` configuration parameter](/influxdb/v1/administration/config/#query-timeout--0s). Note, meta queries will check the context less frequently than regular queries, which use iterators, because meta queries return data in batches. - Update meta queries (for example, SHOW TAG VALUES, SHOW TAG KEYS, SHOW SERIES CARDINALITY, SHOW MEASUREMENT CARDINALITY, and SHOW MEASUREMENTS) to check the query context when possible to respect timeout values set in the [`query-timeout` configuration parameter](/influxdb/v1/administration/config/#query-timeout). Note, meta queries will check the context less frequently than regular queries, which use iterators, because meta queries return data in batches.
- Previously, successful writes were incorrectly incrementing the `WriteErr` statistics. Now, successful writes correctly increment the `writeOK` statistics. - Previously, successful writes were incorrectly incrementing the `WriteErr` statistics. Now, successful writes correctly increment the `writeOK` statistics.
- Correct JSON marshalling error format. - Correct JSON marshalling error format.
- Previously, a GROUP BY query with an offset that caused an interval to cross a daylight savings change inserted an extra output row off by one hour. Now, the correct GROUP BY interval start time is set before the time zone offset is calculated. - Previously, a GROUP BY query with an offset that caused an interval to cross a daylight savings change inserted an extra output row off by one hour. Now, the correct GROUP BY interval start time is set before the time zone offset is calculated.
@ -326,9 +367,9 @@ features, performance improvements, and bug fixes below.
This release updates support for the Flux language and queries. To learn about Flux design principles and see how to get started with Flux, see [Introduction to Flux](/influxdb/v1/flux/). This release updates support for the Flux language and queries. To learn about Flux design principles and see how to get started with Flux, see [Introduction to Flux](/influxdb/v1/flux/).
* Use the new [`influx -type=flux`](/influxdb/v1/tools/influx-cli/#flags) option to enable the Flux REPL shell for creating Flux queries. - Use the new [`influx -type=flux`](/influxdb/v1/tools/influx-cli/#flags) option to enable the Flux REPL shell for creating Flux queries.
* Flux v0.65 includes the following capabilities: - Flux v0.65 includes the following capabilities:
- Join data residing in multiple measurements, buckets, or data sources - Join data residing in multiple measurements, buckets, or data sources
- Perform mathematical operations using data gathered across measurements/buckets - Perform mathematical operations using data gathered across measurements/buckets
- Manipulate Strings through an extensive library of string related functions - Manipulate Strings through an extensive library of string related functions
@ -564,7 +605,7 @@ Chunked query was added into the Go client v2 interface. If you compiled against
Support for the Flux language and queries has been added in this release. To begin exploring Flux 0.7 (technical preview): Support for the Flux language and queries has been added in this release. To begin exploring Flux 0.7 (technical preview):
* Enable Flux using the new configuration setting [`[http] flux-enabled = true`](/influxdb/v1/administration/config/#flux-enabled-false). * Enable Flux using the new configuration setting [`[http] flux-enabled = true`](/influxdb/v1/administration/config/#flux-enabled).
* Use the new [`influx -type=flux`](/influxdb/v1/tools/shell/#type) option to enable the Flux REPL shell for creating Flux queries. * Use the new [`influx -type=flux`](/influxdb/v1/tools/shell/#type) option to enable the Flux REPL shell for creating Flux queries.
* Read about Flux and the Flux language, enabling Flux, or jump into the getting started and other guides. * Read about Flux and the Flux language, enabling Flux, or jump into the getting started and other guides.
@ -1101,7 +1142,7 @@ With TSI, the number of series should be unbounded by the memory on the server h
See Paul Dix's blogpost [Path to 1 Billion Time Series: InfluxDB High Cardinality Indexing Ready for Testing](https://www.influxdata.com/path-1-billion-time-series-influxdb-high-cardinality-indexing-ready-testing/) for additional information. See Paul Dix's blogpost [Path to 1 Billion Time Series: InfluxDB High Cardinality Indexing Ready for Testing](https://www.influxdata.com/path-1-billion-time-series-influxdb-high-cardinality-indexing-ready-testing/) for additional information.
TSI is disabled by default in version 1.3. TSI is disabled by default in version 1.3.
To enable TSI, uncomment the [`index-version` setting](/influxdb/v1/administration/config#index-version-inmem) and set it to `tsi1`. To enable TSI, uncomment the [`index-version` setting](/influxdb/v1/administration/config#index-version) and set it to `tsi1`.
The `index-version` setting is in the `[data]` section of the configuration file. The `index-version` setting is in the `[data]` section of the configuration file.
Next, restart your InfluxDB instance. Next, restart your InfluxDB instance.
@ -1250,14 +1291,14 @@ The following new configuration options are available.
#### `[http]` Section #### `[http]` Section
* [`max-row-limit`](/influxdb/v1/administration/config#max-row-limit-0) now defaults to `0`. * [`max-row-limit`](/influxdb/v1/administration/config#max-row-limit) now defaults to `0`.
In versions 1.0 and 1.1, the default setting was `10000`, but due to a bug, the value in use in versions 1.0 and 1.1 was effectively `0`. In versions 1.0 and 1.1, the default setting was `10000`, but due to a bug, the value in use in versions 1.0 and 1.1 was effectively `0`.
In versions 1.2.0 through 1.2.1, we fixed that bug, but the fix caused a breaking change for Grafana and Kapacitor users; users who had not set `max-row-limit` to `0` experienced truncated/partial data due to the `10000` row limit. In versions 1.2.0 through 1.2.1, we fixed that bug, but the fix caused a breaking change for Grafana and Kapacitor users; users who had not set `max-row-limit` to `0` experienced truncated/partial data due to the `10000` row limit.
In version 1.2.2, we've changed the default `max-row-limit` setting to `0` to match the behavior in versions 1.0 and 1.1. In version 1.2.2, we've changed the default `max-row-limit` setting to `0` to match the behavior in versions 1.0 and 1.1.
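A minimal sketch of the resulting default, assuming the standard `[http]` section layout:

```toml
[http]
  # 0 = no limit on the number of rows returned per response
  # (matches the effective behavior of versions 1.0 and 1.1)
  max-row-limit = 0
```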
### Bug fixes ### Bug fixes
- Change the default [`max-row-limit`](/influxdb/v1/administration/config#max-row-limit-0) setting from `10000` to `0` to prevent the absence of data in Grafana or Kapacitor. - Change the default [`max-row-limit`](/influxdb/v1/administration/config#max-row-limit) setting from `10000` to `0` to prevent the absence of data in Grafana or Kapacitor.
## v1.2.1 {date="2017-03-08"} ## v1.2.1 {date="2017-03-08"}

View File

@ -683,7 +683,7 @@ are `127.0.0.1:8088`.
**To customize the TCP IP and port the backup and restore services use**, **To customize the TCP IP and port the backup and restore services use**,
uncomment and update the uncomment and update the
[`bind-address` configuration setting](/influxdb/v1/administration/config#bind-address-127-0-0-1-8088) [`bind-address` configuration setting](/influxdb/v1/administration/config#bind-address)
at the root level of your InfluxDB configuration file (`influxdb.conf`). at the root level of your InfluxDB configuration file (`influxdb.conf`).
```toml ```toml

File diff suppressed because it is too large
File diff suppressed because it is too large

View File

@ -12,14 +12,14 @@ menu:
### `8086` ### `8086`
The default port that runs the InfluxDB HTTP service. The default port that runs the InfluxDB HTTP service.
[Configure this port](/influxdb/v1/administration/config#bind-address-8086) [Configure this port](/influxdb/v1/administration/config#http-bind-address)
in the configuration file. in the configuration file.
**Resources** [API Reference](/influxdb/v1/tools/api/) **Resources** [API Reference](/influxdb/v1/tools/api/)
### 8088 ### 8088
The default port used by the RPC service for RPC calls made by the CLI for backup and restore operations (`influxdb backup` and `influxd restore`). The default port used by the RPC service for RPC calls made by the CLI for backup and restore operations (`influxdb backup` and `influxd restore`).
[Configure this port](/influxdb/v1/administration/config#bind-address-127-0-0-1-8088) [Configure this port](/influxdb/v1/administration/config#rpc-bind-address)
in the configuration file. in the configuration file.
**Resources** [Backup and Restore](/influxdb/v1/administration/backup_and_restore/) **Resources** [Backup and Restore](/influxdb/v1/administration/backup_and_restore/)
@ -29,7 +29,7 @@ in the configuration file.
### 2003 ### 2003
The default port that runs the Graphite service. The default port that runs the Graphite service.
[Enable and configure this port](/influxdb/v1/administration/config#bind-address-2003) [Enable and configure this port](/influxdb/v1/administration/config#graphite-bind-address)
in the configuration file. in the configuration file.
**Resources** [Graphite README](https://github.com/influxdata/influxdb/tree/1.8/services/graphite/README.md) **Resources** [Graphite README](https://github.com/influxdata/influxdb/tree/1.8/services/graphite/README.md)
@ -37,7 +37,7 @@ in the configuration file.
### 4242 ### 4242
The default port that runs the OpenTSDB service. The default port that runs the OpenTSDB service.
[Enable and configure this port](/influxdb/v1/administration/config#bind-address-4242) [Enable and configure this port](/influxdb/v1/administration/config#opentsdb-bind-address)
in the configuration file. in the configuration file.
**Resources** [OpenTSDB README](https://github.com/influxdata/influxdb/tree/1.8/services/opentsdb/README.md) **Resources** [OpenTSDB README](https://github.com/influxdata/influxdb/tree/1.8/services/opentsdb/README.md)
@ -45,7 +45,7 @@ in the configuration file.
### 8089 ### 8089
The default port that runs the UDP service. The default port that runs the UDP service.
[Enable and configure this port](/influxdb/v1/administration/config#bind-address-8089) [Enable and configure this port](/influxdb/v1/administration/config#udp-bind-address)
in the configuration file. in the configuration file.
**Resources** [UDP README](https://github.com/influxdata/influxdb/tree/1.8/services/udp/README.md) **Resources** [UDP README](https://github.com/influxdata/influxdb/tree/1.8/services/udp/README.md)
@ -53,7 +53,7 @@ in the configuration file.
### 25826 ### 25826
The default port that runs the Collectd service. The default port that runs the Collectd service.
[Enable and configure this port](/influxdb/v1/administration/config#bind-address-25826) [Enable and configure this port](/influxdb/v1/administration/config#collectd-bind-address)
in the configuration file. in the configuration file.
**Resources** [Collectd README](https://github.com/influxdata/influxdb/tree/1.8/services/collectd/README.md) **Resources** [Collectd README](https://github.com/influxdata/influxdb/tree/1.8/services/collectd/README.md)
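The optional service ports above are enabled and configured in per-service sections of `influxdb.conf`. The sketch below shows the general shape; all of these services are disabled by default, and the bind addresses shown are the documented defaults:

```toml
[[graphite]]
  enabled = false
  bind-address = ":2003"

[[opentsdb]]
  enabled = false
  bind-address = ":4242"

[[udp]]
  enabled = false
  bind-address = ":8089"

[[collectd]]
  enabled = false
  bind-address = ":25826"
```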

View File

@ -21,7 +21,7 @@ HTTP, HTTPS, or UDP in [line protocol](/influxdb/v1/write_protocols/line_protoco
the InfluxDB subscriber service creates multiple "writers" ([goroutines](https://golangbot.com/goroutines/)) the InfluxDB subscriber service creates multiple "writers" ([goroutines](https://golangbot.com/goroutines/))
which send writes to the subscription endpoints. which send writes to the subscription endpoints.
_The number of writer goroutines is defined by the [`write-concurrency`](/influxdb/v1/administration/config#write-concurrency-40) configuration._ _The number of writer goroutines is defined by the [`write-concurrency`](/influxdb/v1/administration/config#write-concurrency) configuration._
As writes occur in InfluxDB, each subscription writer sends the written data to the As writes occur in InfluxDB, each subscription writer sends the written data to the
specified subscription endpoints. specified subscription endpoints.
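As a rough illustration, the writer pool is sized in the `[subscriber]` section; the value shown below is the documented default and only an example:

```toml
[subscriber]
  enabled = true
  # Number of goroutines writing to subscription endpoints
  write-concurrency = 40
```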

View File

@ -21,18 +21,18 @@ The InfluxDB file structure includes of the following:
### Data directory ### Data directory
Directory path where InfluxDB stores time series data (TSM files). Directory path where InfluxDB stores time series data (TSM files).
To customize this path, use the [`[data].dir`](/influxdb/v1/administration/config/#dir--varlibinfluxdbdata) To customize this path, use the [`[data].dir`](/influxdb/v1/administration/config/#dir-1)
configuration option. configuration option.
### WAL directory ### WAL directory
Directory path where InfluxDB stores Write Ahead Log (WAL) files. Directory path where InfluxDB stores Write Ahead Log (WAL) files.
To customize this path, use the [`[data].wal-dir`](/influxdb/v1/administration/config/#wal-dir--varlibinfluxdbwal) To customize this path, use the [`[data].wal-dir`](/influxdb/v1/administration/config/#wal-dir)
configuration option. configuration option.
### Metastore directory ### Metastore directory
Directory path of the InfluxDB metastore, which stores information about users, Directory path of the InfluxDB metastore, which stores information about users,
databases, retention policies, shards, and continuous queries. databases, retention policies, shards, and continuous queries.
To customize this path, use the [`[meta].dir`](/influxdb/v1/administration/config/#dir--varlibinfluxdbmeta) To customize this path, use the [`[meta].dir`](/influxdb/v1/administration/config/#dir)
configuration option. configuration option.
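Taken together, the three directories above are set in the `[meta]` and `[data]` sections of `influxdb.conf`. The paths below are the common Linux package defaults and are only illustrative:

```toml
[meta]
  dir = "/var/lib/influxdb/meta"

[data]
  dir = "/var/lib/influxdb/data"
  wal-dir = "/var/lib/influxdb/wal"
```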
## InfluxDB configuration files ## InfluxDB configuration files

View File

@ -66,13 +66,13 @@ Deletes sent to the Cache will clear out the given key or the specific time rang
The Cache exposes a few controls for snapshotting behavior. The Cache exposes a few controls for snapshotting behavior.
The two most important controls are the memory limits. The two most important controls are the memory limits.
There is a lower bound, [`cache-snapshot-memory-size`](/influxdb/v1/administration/config#cache-snapshot-memory-size-25m), which when exceeded will trigger a snapshot to TSM files and remove the corresponding WAL segments. There is a lower bound, [`cache-snapshot-memory-size`](/influxdb/v1/administration/config#cache-snapshot-memory-size), which when exceeded will trigger a snapshot to TSM files and remove the corresponding WAL segments.
There is also an upper bound, [`cache-max-memory-size`](/influxdb/v1/administration/config#cache-max-memory-size-1g), which when exceeded will cause the Cache to reject new writes. There is also an upper bound, [`cache-max-memory-size`](/influxdb/v1/administration/config#cache-max-memory-size), which when exceeded will cause the Cache to reject new writes.
These configurations are useful to prevent out of memory situations and to apply back pressure to clients writing data faster than the instance can persist it. These configurations are useful to prevent out of memory situations and to apply back pressure to clients writing data faster than the instance can persist it.
The checks for memory thresholds occur on every write. The checks for memory thresholds occur on every write.
The other snapshot controls are time based. The other snapshot controls are time based.
The idle threshold, [`cache-snapshot-write-cold-duration`](/influxdb/v1/administration/config#cache-snapshot-write-cold-duration-10m), forces the Cache to snapshot to TSM files if it hasn't received a write within the specified interval. The idle threshold, [`cache-snapshot-write-cold-duration`](/influxdb/v1/administration/config#cache-snapshot-write-cold-duration), forces the Cache to snapshot to TSM files if it hasn't received a write within the specified interval.
The in-memory Cache is recreated on restart by re-reading the WAL files on disk. The in-memory Cache is recreated on restart by re-reading the WAL files on disk.
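A sketch of the cache controls discussed above as they appear in the `[data]` section (values shown are the documented defaults; confirm them against your version's configuration reference):

```toml
[data]
  # Lower bound: snapshot the cache to TSM files and drop the corresponding WAL segments
  cache-snapshot-memory-size = "25m"
  # Upper bound: reject new writes until the cache is snapshotted and drained
  cache-max-memory-size = "1g"
  # Idle threshold: snapshot if no writes arrive within this window
  cache-snapshot-write-cold-duration = "10m"
```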

View File

@ -215,7 +215,7 @@ data that reside in an RP other than the `DEFAULT` RP.
Between checks, `orders` may have data that are older than two hours. Between checks, `orders` may have data that are older than two hours.
The rate at which InfluxDB checks to enforce an RP is a configurable setting, The rate at which InfluxDB checks to enforce an RP is a configurable setting,
see see
[Database Configuration](/influxdb/v1/administration/config#check-interval-30m0s). [Database Configuration](/influxdb/v1/administration/config#check-interval).
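The enforcement interval referenced above lives in the `[retention]` section; a minimal sketch with the documented default:

```toml
[retention]
  enabled = true
  # How often InfluxDB checks retention policies and drops expired data
  check-interval = "30m"
```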
Using a combination of RPs and CQs, we've successfully set up our database to Using a combination of RPs and CQs, we've successfully set up our database to
automatically keep the high precision raw data for a limited time, create lower automatically keep the high precision raw data for a limited time, create lower

View File

@ -62,17 +62,22 @@ Creates a new database.
#### Syntax #### Syntax
```sql ```sql
CREATE DATABASE <database_name> [WITH [DURATION <duration>] [REPLICATION <n>] [SHARD DURATION <duration>] [NAME <retention-policy-name>]] CREATE DATABASE <database_name> [WITH [DURATION <duration>] [REPLICATION <n>] [SHARD DURATION <duration>] [PAST LIMIT <duration>] [FUTURE LIMIT <duration>] [NAME <retention-policy-name>]]
``` ```
#### Description of syntax #### Description of syntax
`CREATE DATABASE` requires a database [name](/influxdb/v1/troubleshooting/frequently-asked-questions/#what-words-and-characters-should-i-avoid-when-writing-data-to-influxdb). `CREATE DATABASE` requires a database [name](/influxdb/v1/troubleshooting/frequently-asked-questions/#what-words-and-characters-should-i-avoid-when-writing-data-to-influxdb).
The `WITH`, `DURATION`, `REPLICATION`, `SHARD DURATION`, and `NAME` clauses are optional and create a single [retention policy](/influxdb/v1/concepts/glossary/#retention-policy-rp) associated with the created database. The `WITH`, `DURATION`, `REPLICATION`, `SHARD DURATION`, `PAST LIMIT`,
If you do not specify one of the clauses after `WITH`, the relevant behavior defaults to the `autogen` retention policy settings. `FUTURE LIMIT`, and `NAME` clauses are optional and create a single
[retention policy](/influxdb/v1/concepts/glossary/#retention-policy-rp)
associated with the created database.
If you do not specify one of the clauses after `WITH`, the relevant behavior
defaults to the `autogen` retention policy settings.
The created retention policy automatically serves as the database's default retention policy. The created retention policy automatically serves as the database's default retention policy.
For more information about those clauses, see [Retention Policy Management](/influxdb/v1/query_language/manage-database/#retention-policy-management). For more information about those clauses, see
[Retention Policy Management](/influxdb/v1/query_language/manage-database/#retention-policy-management).
A successful `CREATE DATABASE` query returns an empty result. A successful `CREATE DATABASE` query returns an empty result.
If you attempt to create a database that already exists, InfluxDB does nothing and does not return an error. If you attempt to create a database that already exists, InfluxDB does nothing and does not return an error.
@ -87,7 +92,7 @@ If you attempt to create a database that already exists, InfluxDB does nothing a
``` ```
The query creates a database called `NOAA_water_database`. The query creates a database called `NOAA_water_database`.
[By default](/influxdb/v1/administration/config/#retention-autocreate-true), InfluxDB also creates the `autogen` retention policy and associates it with the `NOAA_water_database`. [By default](/influxdb/v1/administration/config/#retention-autocreate), InfluxDB also creates the `autogen` retention policy and associates it with the `NOAA_water_database`.
##### Create a database with a specific retention policy ##### Create a database with a specific retention policy
@ -122,21 +127,25 @@ The `DROP SERIES` query deletes all points from a [series](/influxdb/v1/concepts
and it drops the series from the index. and it drops the series from the index.
The query takes the following form, where you must specify either the `FROM` clause or the `WHERE` clause: The query takes the following form, where you must specify either the `FROM` clause or the `WHERE` clause:
```sql ```sql
DROP SERIES FROM <measurement_name[,measurement_name]> WHERE <tag_key>='<tag_value>' DROP SERIES FROM <measurement_name[,measurement_name]> WHERE <tag_key>='<tag_value>'
``` ```
Drop all series from a single measurement: Drop all series from a single measurement:
```sql ```sql
> DROP SERIES FROM "h2o_feet" > DROP SERIES FROM "h2o_feet"
``` ```
Drop series with a specific tag pair from a single measurement: Drop series with a specific tag pair from a single measurement:
```sql ```sql
> DROP SERIES FROM "h2o_feet" WHERE "location" = 'santa_monica' > DROP SERIES FROM "h2o_feet" WHERE "location" = 'santa_monica'
``` ```
Drop all points in the series that have a specific tag pair from all measurements in the database: Drop all points in the series that have a specific tag pair from all measurements in the database:
```sql ```sql
> DROP SERIES WHERE "location" = 'santa_monica' > DROP SERIES WHERE "location" = 'santa_monica'
``` ```
@ -152,27 +161,31 @@ Unlike
You must include either the `FROM` clause, the `WHERE` clause, or both: You must include either the `FROM` clause, the `WHERE` clause, or both:
``` ```sql
DELETE FROM <measurement_name> WHERE [<tag_key>='<tag_value>'] | [<time interval>] DELETE FROM <measurement_name> WHERE [<tag_key>='<tag_value>'] | [<time interval>]
``` ```
Delete all data associated with the measurement `h2o_feet`: Delete all data associated with the measurement `h2o_feet`:
```
```sql
> DELETE FROM "h2o_feet" > DELETE FROM "h2o_feet"
``` ```
Delete all data associated with the measurement `h2o_quality` and where the tag `randtag` equals `3`: Delete all data associated with the measurement `h2o_quality` and where the tag `randtag` equals `3`:
```
```sql
> DELETE FROM "h2o_quality" WHERE "randtag" = '3' > DELETE FROM "h2o_quality" WHERE "randtag" = '3'
``` ```
Delete all data in the database that occur before January 01, 2020: Delete all data in the database that occur before January 01, 2020:
```
```sql
> DELETE WHERE time < '2020-01-01' > DELETE WHERE time < '2020-01-01'
``` ```
Delete all data associated with the measurement `h2o_feet` in retention policy `one_day`: Delete all data associated with the measurement `h2o_feet` in retention policy `one_day`:
```
```sql
> DELETE FROM "one_day"."h2o_feet" > DELETE FROM "one_day"."h2o_feet"
``` ```
@ -181,12 +194,16 @@ A successful `DELETE` query returns an empty result.
Things to note about `DELETE`: Things to note about `DELETE`:
* `DELETE` supports * `DELETE` supports
[regular expressions](/influxdb/v1/query_language/explore-data/#regular-expressions) [regular expressions](/enterprise_influxdb/v1/query_language/explore-data/#regular-expressions)
in the `FROM` clause when specifying measurement names and in the `WHERE` clause in the `FROM` clause when specifying measurement names and in the `WHERE` clause
when specifying tag values. It *does not* support regular expressions for the retention policy in the `FROM` clause. when specifying tag values. It *does not* support regular expressions for the
`DELETE` requires that you define *one* retention policy in the `FROM` clause. retention policy in the `FROM` clause.
* `DELETE` does not support [fields](/influxdb/v1/concepts/glossary/#field) in the `WHERE` clause. If deleting a series in a retention policy, `DELETE` requires that you define
* If you need to delete points in the future, you must specify that time period as `DELETE SERIES` runs for `time < now()` by default. [Syntax](https://github.com/influxdata/influxdb/issues/8007) *only one* retention policy in the `FROM` clause.
* `DELETE` does not support [fields](/influxdb/v1/concepts/glossary/#field) in
the `WHERE` clause.
* If you need to delete points in the future, you must specify that time period
as `DELETE SERIES` runs for `time < now()` by default.
### Delete measurements with DROP MEASUREMENT ### Delete measurements with DROP MEASUREMENT
@ -240,8 +257,9 @@ You may disable its auto-creation in the [configuration file](/influxdb/v1/admin
### Create retention policies with CREATE RETENTION POLICY ### Create retention policies with CREATE RETENTION POLICY
#### Syntax #### Syntax
```
CREATE RETENTION POLICY <retention_policy_name> ON <database_name> DURATION <duration> REPLICATION <n> [SHARD DURATION <duration>] [DEFAULT] ```sql
CREATE RETENTION POLICY <retention_policy_name> ON <database_name> DURATION <duration> REPLICATION <n> [SHARD DURATION <duration>] [PAST LIMIT <duration>] [FUTURE LIMIT <duration>] [DEFAULT]
``` ```
#### Description of syntax #### Description of syntax
@ -289,6 +307,28 @@ See
[Shard group duration management](/influxdb/v1/concepts/schema_and_data_layout/#shard-group-duration-management) [Shard group duration management](/influxdb/v1/concepts/schema_and_data_layout/#shard-group-duration-management)
for recommended configurations. for recommended configurations.
##### `PAST LIMIT`
The `PAST LIMIT` clause defines a time boundary before and relative to _now_
in which points written to the retention policy are accepted. If a point has a
timestamp before the specified boundary, the point is rejected and the write
request returns a partial write error.
For example, if a write request tries to write data to a retention policy with a
`PAST LIMIT 6h` and there are points in the request with timestamps older than
6 hours, those points are rejected.
##### `FUTURE LIMIT`
The `FUTURE LIMIT` clause defines a time boundary after and relative to _now_
in which points written to the retention policy are accepted. If a point has a
timestamp after the specified boundary, the point is rejected and the write
request returns a partial write error.
For example, if a write request tries to write data to a retention policy with a
`FUTURE LIMIT 6h` and there are points in the request with future timestamps
greater than 6 hours from now, those points are rejected.
##### `DEFAULT` ##### `DEFAULT`
Sets the new retention policy as the default retention policy for the database. Sets the new retention policy as the default retention policy for the database.

View File

@ -8,11 +8,6 @@ menu:
parent: InfluxQL parent: InfluxQL
aliases: aliases:
- /influxdb/v2/query_language/spec/ - /influxdb/v2/query_language/spec/
- /influxdb/v2/query_language/spec/
- /influxdb/v2/query_language/spec/
- /influxdb/v2/query_language/spec/
- /influxdb/v2/query_language/spec/
- /influxdb/v2/query_language/spec/
--- ---
## Introduction ## Introduction
@ -123,15 +118,15 @@ ALL ALTER ANY AS ASC BEGIN
BY CREATE CONTINUOUS DATABASE DATABASES DEFAULT BY CREATE CONTINUOUS DATABASE DATABASES DEFAULT
DELETE DESC DESTINATIONS DIAGNOSTICS DISTINCT DROP DELETE DESC DESTINATIONS DIAGNOSTICS DISTINCT DROP
DURATION END EVERY EXPLAIN FIELD FOR DURATION END EVERY EXPLAIN FIELD FOR
FROM GRANT GRANTS GROUP GROUPS IN FROM FUTURE GRANT GRANTS GROUP GROUPS
INF INSERT INTO KEY KEYS KILL IN INF INSERT INTO KEY KEYS
LIMIT SHOW MEASUREMENT MEASUREMENTS NAME OFFSET KILL LIMIT SHOW MEASUREMENT MEASUREMENTS NAME
ON ORDER PASSWORD POLICY POLICIES PRIVILEGES OFFSET ON ORDER PASSWORD PAST POLICY
QUERIES QUERY READ REPLICATION RESAMPLE RETENTION POLICIES PRIVILEGES QUERIES QUERY READ REPLICATION
REVOKE SELECT SERIES SET SHARD SHARDS RESAMPLE RETENTION REVOKE SELECT SERIES SET
SLIMIT SOFFSET STATS SUBSCRIPTION SUBSCRIPTIONS TAG SHARD SHARDS SLIMIT SOFFSET STATS SUBSCRIPTION
TO USER USERS VALUES WHERE WITH SUBSCRIPTIONS TAG TO USER USERS VALUES
WRITE WHERE WITH WRITE
``` ```
If you use an InfluxQL keyword as an If you use an InfluxQL keyword as an
@ -383,12 +378,14 @@ create_database_stmt = "CREATE DATABASE" db_name
[ retention_policy_duration ] [ retention_policy_duration ]
[ retention_policy_replication ] [ retention_policy_replication ]
[ retention_policy_shard_group_duration ] [ retention_policy_shard_group_duration ]
[ retention_past_limit ]
[ retention_future_limit ]
[ retention_policy_name ] [ retention_policy_name ]
] . ] .
``` ```
{{% warn %}} Replication factors do not serve a purpose with single node instances. > [!Warning]
{{% /warn %}} > Replication factors do not serve a purpose with single node instances.
#### Examples #### Examples
@ -396,11 +393,17 @@ create_database_stmt = "CREATE DATABASE" db_name
-- Create a database called foo -- Create a database called foo
CREATE DATABASE "foo" CREATE DATABASE "foo"
-- Create a database called bar with a new DEFAULT retention policy and specify the duration, replication, shard group duration, and name of that retention policy -- Create a database called bar with a new DEFAULT retention policy and specify
-- the duration, replication, shard group duration, and name of that retention policy
CREATE DATABASE "bar" WITH DURATION 1d REPLICATION 1 SHARD DURATION 30m NAME "myrp" CREATE DATABASE "bar" WITH DURATION 1d REPLICATION 1 SHARD DURATION 30m NAME "myrp"
-- Create a database called mydb with a new DEFAULT retention policy and specify the name of that retention policy -- Create a database called mydb with a new DEFAULT retention policy and specify
-- the name of that retention policy
CREATE DATABASE "mydb" WITH NAME "myrp" CREATE DATABASE "mydb" WITH NAME "myrp"
-- Create a database called bar with a new retention policy named "myrp", and
-- specify the duration, past and future limits, and name of that retention policy
CREATE DATABASE "bar" WITH DURATION 1d PAST LIMIT 6h FUTURE LIMIT 6h NAME "myrp"
``` ```
### CREATE RETENTION POLICY ### CREATE RETENTION POLICY
@ -410,11 +413,13 @@ create_retention_policy_stmt = "CREATE RETENTION POLICY" policy_name on_clause
retention_policy_duration retention_policy_duration
retention_policy_replication retention_policy_replication
[ retention_policy_shard_group_duration ] [ retention_policy_shard_group_duration ]
[ retention_past_limit ]
[ retention_future_limit ]
[ "DEFAULT" ] . [ "DEFAULT" ] .
``` ```
{{% warn %}} Replication factors do not serve a purpose with single node instances. > [!Warning]
{{% /warn %}} > Replication factors do not serve a purpose with single node instances.
#### Examples #### Examples
@ -427,6 +432,9 @@ CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 60m REPLICATION 2 DEFA
-- Create a retention policy and specify the shard group duration. -- Create a retention policy and specify the shard group duration.
CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 60m REPLICATION 2 SHARD DURATION 30m CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 60m REPLICATION 2 SHARD DURATION 30m
-- Create a retention policy and specify past and future limits.
CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 12h PAST LIMIT 6h FUTURE LIMIT 6h
``` ```
### CREATE SUBSCRIPTION ### CREATE SUBSCRIPTION

View File

@ -89,7 +89,7 @@ made to match the InfluxDB data structure:
* Prometheus labels become InfluxDB tags. * Prometheus labels become InfluxDB tags.
* All `# HELP` and `# TYPE` lines are ignored. * All `# HELP` and `# TYPE` lines are ignored.
* [v1.8.6 and later] Prometheus remote write endpoint drops unsupported Prometheus values (`NaN`,`-Inf`, and `+Inf`) rather than reject the entire batch. * [v1.8.6 and later] Prometheus remote write endpoint drops unsupported Prometheus values (`NaN`,`-Inf`, and `+Inf`) rather than reject the entire batch.
* If [write trace logging is enabled (`[http] write-tracing = true`)](/influxdb/v1/administration/config/#write-tracing-false), then summaries of dropped values are logged. * If [write trace logging is enabled (`[http] write-tracing = true`)](/influxdb/v1/administration/config/#write-tracing), then summaries of dropped values are logged.
* If a batch of values contains values that are subsequently dropped, HTTP status code `204` is returned. * If a batch of values contains values that are subsequently dropped, HTTP status code `204` is returned.
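If you need those dropped-value summaries, write tracing is toggled in the `[http]` section. A sketch only; tracing is verbose and intended for debugging rather than production use:

```toml
[http]
  # Log detailed write traces, including summaries of dropped Prometheus values
  write-tracing = true
```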
### Example: Parse Prometheus to InfluxDB ### Example: Parse Prometheus to InfluxDB

View File

@ -4,12 +4,20 @@ description: Tools and utilities for interacting with InfluxDB.
aliases: aliases:
- /influxdb/v1/clients/ - /influxdb/v1/clients/
- /influxdb/v1/write_protocols/json/ - /influxdb/v1/write_protocols/json/
- /influxdb/v1/tools/flux-vscode/
menu: menu:
influxdb_v1: influxdb_v1:
name: Tools name: Tools
weight: 60 weight: 60
alt_links: alt_links:
v2: /influxdb/v2/tools/ v2: /influxdb/v2/tools/
prepend: |
> [!Important]
> #### Flux VS Code extension no longer available
>
> The `vsflux` extension is no longer available in the Visual Studio Marketplace.
> `vsflux` and the `flux-lsp` Flux Language Server Protocol plugin are no longer maintained.
> Their repositories have been archived and are no longer receiving updates.
--- ---
This section covers the available tools for interacting with InfluxDB. This section covers the available tools for interacting with InfluxDB.
@ -35,6 +43,7 @@ The list of [client libraries](/influxdb/v1/tools/api_client_libraries/) for int
Use the [InfluxDB `inch` tool](/influxdb/v1/tools/inch/) to test InfluxDB performance. Adjust metrics such as the batch size, tag values, and concurrent write streams to test how ingesting different tag cardinalities and metrics affects performance. Use the [InfluxDB `inch` tool](/influxdb/v1/tools/inch/) to test InfluxDB performance. Adjust metrics such as the batch size, tag values, and concurrent write streams to test how ingesting different tag cardinalities and metrics affects performance.
## Graphs and dashboards ## Graphs and dashboards
Use [Chronograf](/chronograf/v1/) or [Grafana](https://grafana.com/docs/grafana/latest/features/datasources/influxdb/) dashboards to visualize your time series data. Use [Chronograf](/chronograf/v1/) or [Grafana](https://grafana.com/docs/grafana/latest/features/datasources/influxdb/) dashboards to visualize your time series data.
@ -60,3 +69,12 @@ SHOW TAG VALUES FROM "your.system"."host_info" WITH KEY = “host”
``` ```
> **Note:** In Chronograf, you can also filter meta query results for a specified time range by [creating a `custom meta query` template variable](/chronograf/v1/guides/dashboard-template-variables/#create-custom-template-variables) and adding a time range filter. > **Note:** In Chronograf, you can also filter meta query results for a specified time range by [creating a `custom meta query` template variable](/chronograf/v1/guides/dashboard-template-variables/#create-custom-template-variables) and adding a time range filter.
## Flux tools
> [!NOTE]
> #### vsflux and Flux-LSP no longer maintained
>
> The `vsflux` Flux VS Code extension and the `flux-lsp` language server plugin for Vim are no longer maintained.
> Their repositories have been archived and are no longer receiving updates.
> `vsflux` is no longer available in the Visual Studio Marketplace.

View File

@ -554,7 +554,7 @@ A successful [`CREATE DATABASE` query](/influxdb/v1/query_language/manage-databa
| u=\<username> | Optional if you haven't [enabled authentication](/influxdb/v1/administration/authentication_and_authorization/#set-up-authentication). Required if you've enabled authentication.* | Sets the username for authentication if you've enabled authentication. The user must have read access to the database. Use with the query string parameter `p`. | | u=\<username> | Optional if you haven't [enabled authentication](/influxdb/v1/administration/authentication_and_authorization/#set-up-authentication). Required if you've enabled authentication.* | Sets the username for authentication if you've enabled authentication. The user must have read access to the database. Use with the query string parameter `p`. |
\* InfluxDB does not truncate the number of rows returned for requests without the `chunked` parameter. \* InfluxDB does not truncate the number of rows returned for requests without the `chunked` parameter.
That behavior is configurable; see the [`max-row-limit`](/influxdb/v1/administration/config/#max-row-limit-0) configuration option for more information. That behavior is configurable; see the [`max-row-limit`](/influxdb/v1/administration/config/#max-row-limit) configuration option for more information.
\** The InfluxDB API also supports basic authentication. \** The InfluxDB API also supports basic authentication.
Use basic authentication if you've [enabled authentication](/influxdb/v1/administration/authentication_and_authorization/#set-up-authentication) Use basic authentication if you've [enabled authentication](/influxdb/v1/administration/authentication_and_authorization/#set-up-authentication)
@ -1077,7 +1077,7 @@ Errors are returned in JSON.
| 400 Bad Request | Unacceptable request. Can occur with an InfluxDB line protocol syntax error or if a user attempts to write values to a field that previously accepted a different value type. The returned JSON offers further information. | | 400 Bad Request | Unacceptable request. Can occur with an InfluxDB line protocol syntax error or if a user attempts to write values to a field that previously accepted a different value type. The returned JSON offers further information. |
| 401 Unauthorized | Unacceptable request. Can occur with invalid authentication credentials. | | 401 Unauthorized | Unacceptable request. Can occur with invalid authentication credentials. |
| 404 Not Found | Unacceptable request. Can occur if a user attempts to write to a database that does not exist. The returned JSON offers further information. | | 404 Not Found | Unacceptable request. Can occur if a user attempts to write to a database that does not exist. The returned JSON offers further information. |
| 413 Request Entity Too Large | Unaccetable request. It will occur if the payload of the POST request is bigger than the maximum size allowed. See [`max-body-size`](/influxdb/v1/administration/config/#max-body-size-25000000) parameter for more details. | 413 Request Entity Too Large | Unacceptable request. It will occur if the payload of the POST request is bigger than the maximum size allowed. See [`max-body-size`](/influxdb/v1/administration/config/#max-body-size) parameter for more details.
| 500 Internal Server Error | The system is overloaded or significantly impaired. Can occur if a user attempts to write to a retention policy that does not exist. The returned JSON offers further information. | | 500 Internal Server Error | The system is overloaded or significantly impaired. Can occur if a user attempts to write to a retention policy that does not exist. The returned JSON offers further information. |
#### Examples #### Examples

View File

@ -12,6 +12,14 @@ menu:
parent: Tools parent: Tools
alt_links: alt_links:
v2: /influxdb/v2/tools/flux-vscode/ v2: /influxdb/v2/tools/flux-vscode/
draft: true
prepend: |
> [!Important]
> #### Flux VS Code extension no longer available
>
> The `vsflux` extension is no longer available in the Visual Studio Marketplace.
> `vsflux` and the `flux-lsp` Flux Language Server Protocol plugin are no longer maintained.
> Their repositories have been archived and are no longer receiving updates.
--- ---
The [Flux Visual Studio Code (VS Code) extension](https://marketplace.visualstudio.com/items?itemName=influxdata.flux) The [Flux Visual Studio Code (VS Code) extension](https://marketplace.visualstudio.com/items?itemName=influxdata.flux)

View File

@ -12,9 +12,10 @@ alt_links:
Influx Inspect is an InfluxDB disk utility that can be used to: Influx Inspect is an InfluxDB disk utility that can be used to:
* View detailed information about disk shards. - View detailed information about disk shards.
* Export data from a shard to [InfluxDB line protocol](/influxdb/v1/concepts/glossary/#influxdb-line-protocol) that can be inserted back into the database. - Export data from a shard to [InfluxDB line protocol](/influxdb/v1/concepts/glossary/#influxdb-line-protocol)
* Convert TSM index shards to TSI index shards. that can be inserted back into the database.
- Convert TSM index shards to TSI index shards.
## `influx_inspect` utility ## `influx_inspect` utility
@ -52,7 +53,9 @@ Builds TSI (Time Series Index) disk-based shard index files and associated serie
The index is written to a temporary location until complete and then moved to a permanent location. The index is written to a temporary location until complete and then moved to a permanent location.
If an error occurs, then this operation will fall back to the original in-memory index. If an error occurs, then this operation will fall back to the original in-memory index.
> ***Note:*** **For offline conversion only.** > [!Note]
> #### For offline conversion only
>
> When TSI is enabled, new shards use the TSI indexes. > When TSI is enabled, new shards use the TSI indexes.
> Existing shards continue as TSM-based shards until > Existing shards continue as TSM-based shards until
> converted offline. > converted offline.
@ -62,7 +65,9 @@ If an error occurs, then this operation will fall back to the original in-memory
``` ```
influx_inspect buildtsi -datadir <data_dir> -waldir <wal_dir> [ options ] influx_inspect buildtsi -datadir <data_dir> -waldir <wal_dir> [ options ]
``` ```
> **Note:** Use the `buildtsi` command with the user account that you are going to run the database as,
> [!Note]
> Use the `buildtsi` command with the user account that you are going to run the database as,
> or ensure that the permissions match after running the command. > or ensure that the permissions match after running the command.
#### Options #### Options
@ -73,9 +78,8 @@ Optional arguments are in brackets.
The size of the batches written to the index. Default value is `10000`. The size of the batches written to the index. Default value is `10000`.
{{% warn %}} > [!Warning]
**Warning:** Setting this value can have adverse effects on performance and heap size. > Setting this value can have adverse effects on performance and heap size.
{{% /warn %}}
##### `[ -compact-series-file ]` ##### `[ -compact-series-file ]`
@ -123,7 +127,7 @@ Flag to enable output in verbose mode.
##### `-waldir <wal_dir>` ##### `-waldir <wal_dir>`
The directory for the (WAL (Write Ahead Log)](/influxdb/v1/concepts/file-system-layout/#wal-directory) files. The directory for the [WAL (Write Ahead Log)](/influxdb/v1/concepts/file-system-layout/#wal-directory) files.
Default value is `$HOME/.influxdb/wal`. Default value is `$HOME/.influxdb/wal`.
See the [file system layout](/influxdb/v1/concepts/file-system-layout/#file-system-layout) See the [file system layout](/influxdb/v1/concepts/file-system-layout/#file-system-layout)
@ -181,10 +185,9 @@ The filename where schema data should be written. Default is `schema.json`.
Use `deletetsm -measurement` to delete a measurement in a raw TSM file (from specified shards). Use `deletetsm -measurement` to delete a measurement in a raw TSM file (from specified shards).
Use `deletetsm -sanitize` to remove all tag and field keys containing non-printable Unicode characters in a raw TSM file (from specified shards). Use `deletetsm -sanitize` to remove all tag and field keys containing non-printable Unicode characters in a raw TSM file (from specified shards).
{{% warn %}} > [!Warning]
**Warning:** Use the `deletetsm` command only when your InfluxDB instance is > Use the `deletetsm` command only when your InfluxDB instance is
offline (`influxd` service is not running). > offline (`influxd` service is not running).
{{% /warn %}}
#### Syntax #### Syntax
@ -287,18 +290,18 @@ Filter data by tag value regular expression.
##### Specifying paths to the `_series` and `index` directories ##### Specifying paths to the `_series` and `index` directories
``` ```
$ influx_inspect dumptsi -series-file /path/to/db/_series /path/to/index influx_inspect dumptsi -series-file /path/to/db/_series /path/to/index
``` ```
##### Specifying paths to the `_series` directory and an `index` file ##### Specifying paths to the `_series` directory and an `index` file
``` ```
$ influx_inspect dumptsi -series-file /path/to/db/_series /path/to/index/file0 influx_inspect dumptsi -series-file /path/to/db/_series /path/to/index/file0
``` ```
##### Specifying paths to the `_series` directory and multiple `index` files ##### Specifying paths to the `_series` directory and multiple `index` files
``` ```
$ influx_inspect dumptsi -series-file /path/to/db/_series /path/to/index/file0 /path/to/index/file1 ... influx_inspect dumptsi -series-file /path/to/db/_series /path/to/index/file0 /path/to/index/file1 ...
``` ```
### `dumptsm` ### `dumptsm`
@ -360,8 +363,8 @@ If a user writes points with timestamps set by the client, then multiple points
### `export` ### `export`
Exports all TSM files in InfluxDB line protocol data format. Exports all TSM files or a single TSM file in InfluxDB line protocol data format.
This output file can be imported using the The output file can be imported using the
[influx](/influxdb/v1/tools/shell/#import-data-from-a-file-with-import) command. [influx](/influxdb/v1/tools/shell/#import-data-from-a-file-with-import) command.
#### Syntax #### Syntax
@ -413,9 +416,12 @@ YYYY-MM-DDTHH:MM:SS-08:00
YYYY-MM-DDTHH:MM:SS+07:00 YYYY-MM-DDTHH:MM:SS+07:00
``` ```
> **Note:** With offsets, avoid replacing the + or - sign with a Z. It may cause an error or print Z (ISO 8601 behavior) instead of the time zone offset. > [!Note]
> With offsets, avoid replacing the + or - sign with a Z. It may cause an error
> or print Z (ISO 8601 behavior) instead of the time zone offset.
##### [ `-lponly` ] ##### [ `-lponly` ]
Output data in line protocol format only. Output data in line protocol format only.
Does not output data definition language (DDL) statements (such as `CREATE DATABASE`) Does not output data definition language (DDL) statements (such as `CREATE DATABASE`)
or DML context metadata (such as `# CONTEXT-DATABASE`). or DML context metadata (such as `# CONTEXT-DATABASE`).
@ -443,6 +449,11 @@ Default value is `$HOME/.influxdb/wal`.
See the [file system layout](/influxdb/v1/concepts/file-system-layout/#file-system-layout) See the [file system layout](/influxdb/v1/concepts/file-system-layout/#file-system-layout)
for InfluxDB on your system. for InfluxDB on your system.
##### [ `-tsmfile <tsm_file>` ]
Path to a single TSM file to export. This requires both `-database` and
`-retention` to be specified.
#### Examples #### Examples
##### Export all databases and compress the output ##### Export all databases and compress the output
@ -457,6 +468,15 @@ influx_inspect export -compress
influx_inspect export -database DATABASE_NAME -retention RETENTION_POLICY influx_inspect export -database DATABASE_NAME -retention RETENTION_POLICY
``` ```
##### Export data from a single TSM file
```bash
influx_inspect export \
-database DATABASE_NAME \
-retention RETENTION_POLICY \
-tsmfile TSM_FILE_NAME
```
##### Output file ##### Output file
```bash ```bash
@ -650,11 +670,11 @@ influx_inspect report-disk -detailed ~/.influxdb/data/
The report does the following: The report does the following:
* Calculates the total exact series cardinality in the database. - Calculates the total exact series cardinality in the database.
* Segments that cardinality by measurement, and emits those cardinality values. - Segments that cardinality by measurement, and emits those cardinality values.
* Emits total exact cardinality for each shard in the database. - Emits total exact cardinality for each shard in the database.
* Segments for each shard the exact cardinality for each measurement in the shard. - Segments for each shard the exact cardinality for each measurement in the shard.
* Optionally limits the results in each shard to the "top n". - Optionally limits the results in each shard to the "top n".
The `reporttsi` command is primarily useful when there has been a change in cardinality The `reporttsi` command is primarily useful when there has been a change in cardinality
and it's not clear which measurement is responsible for this change, and further, _when_ and it's not clear which measurement is responsible for this change, and further, _when_
@ -769,7 +789,8 @@ Enables very verbose logging. Displays progress for every series key and time ra
Enables very very verbose logging. Displays progress for every series key and time range in the tombstone files. Timestamps are displayed in [RFC3339 format](https://tools.ietf.org/html/rfc3339) with nanosecond precision. Enables very very verbose logging. Displays progress for every series key and time range in the tombstone files. Timestamps are displayed in [RFC3339 format](https://tools.ietf.org/html/rfc3339) with nanosecond precision.
> **Note on verbose logging:** Higher verbosity levels override lower levels. > [!Note]
> Higher verbosity levels override lower levels.
## Caveats ## Caveats

View File

@ -47,7 +47,7 @@ By default `max-series-per-database` is set to one million.
Changing the setting to `0` allows an unlimited number of series per database. Changing the setting to `0` allows an unlimited number of series per database.
**Resources:** **Resources:**
[Database Configuration](/influxdb/v1/administration/config/#max-series-per-database-1000000) [Database Configuration](/influxdb/v1/administration/config/#max-series-per-database)
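A sketch of the setting in the `[data]` section (the default is the documented one million; `0` removes the limit):

```toml
[data]
  # Maximum series per database before new series are rejected; 0 = unlimited
  max-series-per-database = 1000000
```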
## `error parsing query: found < >, expected identifier at line < >, char < >` ## `error parsing query: found < >, expected identifier at line < >, char < >`
@ -326,7 +326,7 @@ The maximum valid timestamp is `9223372036854775806` or `2262-04-11T23:47:16.854
The `cache maximum memory size exceeded` error occurs when the cached The `cache maximum memory size exceeded` error occurs when the cached
memory size increases beyond the memory size increases beyond the
[`cache-max-memory-size` setting](/influxdb/v1/administration/config/#cache-max-memory-size-1g) [`cache-max-memory-size` setting](/influxdb/v1/administration/config/#cache-max-memory-size)
in the configuration file. in the configuration file.
By default, `cache-max-memory-size` is set to 512mb. By default, `cache-max-memory-size` is set to 512mb.
@ -398,11 +398,15 @@ This error occurs when the Docker container cannot read files on the host machin
#### Make host machine files readable to Docker #### Make host machine files readable to Docker
1. Create a directory, and then copy files to import into InfluxDB to this directory. 1. Create a directory, and then copy files to import into InfluxDB to this directory.
2. When you launch the Docker container, mount the new directory on the InfluxDB container by running the following command: 2. When you launch the Docker container, mount the new directory on the InfluxDB container by running the following command:
```bash
docker run -v /dir/path/on/host:/dir/path/in/container docker run -v /dir/path/on/host:/dir/path/in/container
```
3. Verify the Docker container can read host machine files by running the following command: 3. Verify the Docker container can read host machine files by running the following command:
```bash
influx -import -path=/path/in/container influx -import -path=/path/in/container
```

View File

@ -164,7 +164,7 @@ an RP every 30 minutes.
You may need to wait for the next RP check for InfluxDB to drop data that are You may need to wait for the next RP check for InfluxDB to drop data that are
outside the RP's new `DURATION` setting. outside the RP's new `DURATION` setting.
The 30 minute interval is The 30 minute interval is
[configurable](/influxdb/v1/administration/config/#check-interval-30m0s). [configurable](/influxdb/v1/administration/config/#check-interval).
Second, altering both the `DURATION` and `SHARD DURATION` of an RP can result in Second, altering both the `DURATION` and `SHARD DURATION` of an RP can result in
unexpected data retention. unexpected data retention.
@ -1093,7 +1093,7 @@ time az hostname val_1 val_2
To store both points: To store both points:
* Introduce an arbitrary new tag to enforce uniqueness. - Introduce an arbitrary new tag to enforce uniqueness.
Old point: `cpu_load,hostname=server02,az=us_west,uniq=1 val_1=24.5,val_2=7 1234567890000000` Old point: `cpu_load,hostname=server02,az=us_west,uniq=1 val_1=24.5,val_2=7 1234567890000000`
@ -1101,16 +1101,16 @@ To store both points:
After writing the new point to InfluxDB: After writing the new point to InfluxDB:
```sql ```sql
> SELECT * FROM "cpu_load" WHERE time = 1234567890000000 > SELECT * FROM "cpu_load" WHERE time = 1234567890000000
name: cpu_load name: cpu_load
-------------- --------------
time az hostname uniq val_1 val_2 time az hostname uniq val_1 val_2
1970-01-15T06:56:07.89Z us_west server02 1 24.5 7 1970-01-15T06:56:07.89Z us_west server02 1 24.5 7
1970-01-15T06:56:07.89Z us_west server02 2 5.24 1970-01-15T06:56:07.89Z us_west server02 2 5.24
``` ```
* Increment the timestamp by a nanosecond. - Increment the timestamp by a nanosecond.
Old point: `cpu_load,hostname=server02,az=us_west val_1=24.5,val_2=7 1234567890000000` Old point: `cpu_load,hostname=server02,az=us_west val_1=24.5,val_2=7 1234567890000000`
@ -1118,14 +1118,14 @@ time az hostname uniq val_1 val_2
After writing the new point to InfluxDB: After writing the new point to InfluxDB:
```sql ```sql
> SELECT * FROM "cpu_load" WHERE time >= 1234567890000000 and time <= 1234567890000001 > SELECT * FROM "cpu_load" WHERE time >= 1234567890000000 and time <= 1234567890000001
name: cpu_load name: cpu_load
-------------- --------------
time az hostname val_1 val_2 time az hostname val_1 val_2
1970-01-15T06:56:07.89Z us_west server02 24.5 7 1970-01-15T06:56:07.89Z us_west server02 24.5 7
1970-01-15T06:56:07.890000001Z us_west server02 5.24 1970-01-15T06:56:07.890000001Z us_west server02 5.24
``` ```
## What newline character does the InfluxDB API require? ## What newline character does the InfluxDB API require?
@ -1207,8 +1207,10 @@ To keep regular expressions and quoting simple, avoid using the following charac
## When should I single quote and when should I double quote when writing data? ## When should I single quote and when should I double quote when writing data?
* Avoid single quoting and double quoting identifiers when writing data via the line protocol; see the examples below for how writing identifiers with quotes can complicate queries. - Avoid single quoting and double quoting identifiers when writing data via the
Identifiers are database names, retention policy names, user names, measurement names, tag keys, and field keys. line protocol; see the examples below for how writing identifiers with quotes
can complicate queries. Identifiers are database names, retention policy
names, user names, measurement names, tag keys, and field keys.
Write with a double-quoted measurement: `INSERT "bikes" bikes_available=3` Write with a double-quoted measurement: `INSERT "bikes" bikes_available=3`
Applicable query: `SELECT * FROM "\"bikes\""` Applicable query: `SELECT * FROM "\"bikes\""`
@ -1219,12 +1221,12 @@ Identifiers are database names, retention policy names, user names, measurement
Write with an unquoted measurement: `INSERT bikes bikes_available=3` Write with an unquoted measurement: `INSERT bikes bikes_available=3`
Applicable query: `SELECT * FROM "bikes"` Applicable query: `SELECT * FROM "bikes"`
* Double quote field values that are strings. - Double quote field values that are strings.
Write: `INSERT bikes happiness="level 2"` Write: `INSERT bikes happiness="level 2"`
Applicable query: `SELECT * FROM "bikes" WHERE "happiness"='level 2'` Applicable query: `SELECT * FROM "bikes" WHERE "happiness"='level 2'`
* Special characters should be escaped with a backslash and not placed in quotes. - Special characters should be escaped with a backslash and not placed in quotes.
Write: `INSERT wacky va\"ue=4` Write: `INSERT wacky va\"ue=4`
Applicable query: `SELECT "va\"ue" FROM "wacky"` Applicable query: `SELECT "va\"ue" FROM "wacky"`
@ -1255,6 +1257,6 @@ The default shard group duration is one week and if your data cover several hund
Having an extremely high number of shards is inefficient for InfluxDB. Having an extremely high number of shards is inefficient for InfluxDB.
Increase the shard group duration for your data's retention policy with the [`ALTER RETENTION POLICY` query](/influxdb/v1/query_language/manage-database/#modify-retention-policies-with-alter-retention-policy). Increase the shard group duration for your data's retention policy with the [`ALTER RETENTION POLICY` query](/influxdb/v1/query_language/manage-database/#modify-retention-policies-with-alter-retention-policy).
Second, temporarily lowering the [`cache-snapshot-write-cold-duration` configuration setting](/influxdb/v1/administration/config/#cache-snapshot-write-cold-duration-10m). Second, temporarily lowering the [`cache-snapshot-write-cold-duration` configuration setting](/influxdb/v1/administration/config/#cache-snapshot-write-cold-duration).
If you're writing a lot of historical data, the default setting (`10m`) can cause the system to hold all of your data in cache for every shard. If you're writing a lot of historical data, the default setting (`10m`) can cause the system to hold all of your data in cache for every shard.
Temporarily lowering the `cache-snapshot-write-cold-duration` setting to `10s` while you write the historical data makes the process more efficient. Temporarily lowering the `cache-snapshot-write-cold-duration` setting to `10s` while you write the historical data makes the process more efficient.
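As a hedged example of that workaround, lower the value in the `[data]` section for the duration of the backfill and restore the default afterward:

```toml
[data]
  # Temporary value while bulk-loading historical data; restore to "10m" afterward
  cache-snapshot-write-cold-duration = "10s"
```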

View File

@ -6,6 +6,15 @@ weight: 13
menu: menu:
influxdb_v2: influxdb_v2:
name: Tools & integrations name: Tools & integrations
aliases:
- /influxdb/v2/tools/flux-vscode/
prepend: |
> [!Important]
> #### Flux VS Code extension no longer available
>
> The `vsflux` extension is no longer available in the Visual Studio Marketplace.
> `vsflux` and the `flux-lsp` Flux Language Server Protocol plugin are no longer maintained.
> Their repositories have been archived and are no longer receiving updates.
--- ---
{{< children >}} {{< children >}}

View File

@ -11,6 +11,7 @@ menu:
name: Flux VS Code extension name: Flux VS Code extension
parent: Tools & integrations parent: Tools & integrations
source: /shared/influxdb-v2/tools/flux-vscode.md source: /shared/influxdb-v2/tools/flux-vscode.md
draft: true
--- ---
<!-- The content for this file is located at <!-- The content for this file is located at

View File

@ -91,13 +91,13 @@ source ~/.zshrc
<!-------------------------------- BEGIN LINUX --------------------------------> <!-------------------------------- BEGIN LINUX -------------------------------->
- [{{< product-name >}} • Linux (x86) • GNU](https://download.influxdata.com/influxdb/snapshots/influxdb3-{{< product-key >}}_x86_64-unknown-linux-gnu.tar.gz) - [{{< product-name >}} • Linux (AMD64, x86_64) • GNU](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_linux_amd64.tar.gz)
[sha256](https://dl.influxdata.com/influxdb/snapshots/influxdb3-{{< product-key >}}_x86_64-unknown-linux-gnu.tar.gz.sha256) [sha256](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_linux_amd64.tar.gz.sha256)
- [{{< product-name >}} • Linux (ARM) • GNU](https://download.influxdata.com/influxdb/snapshots/influxdb3-{{< product-key >}}_aarch64-unknown-linux-gnu.tar.gz) - [{{< product-name >}} • Linux (ARM64, AArch64) • GNU](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_linux_arm64.tar.gz)
[sha256](https://dl.influxdata.com/influxdb/snapshots/influxdb3-{{< product-key >}}_aarch64-unknown-linux-gnu.tar.gz.sha256) [sha256](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_linux_arm64.tar.gz.sha256)
<!--------------------------------- END LINUX ---------------------------------> <!--------------------------------- END LINUX --------------------------------->
@ -106,9 +106,9 @@ source ~/.zshrc
<!-------------------------------- BEGIN MACOS --------------------------------> <!-------------------------------- BEGIN MACOS -------------------------------->
- [{{< product-name >}} • macOS (Silicon)](https://download.influxdata.com/influxdb/snapshots/influxdb3-{{< product-key >}}_aarch64-apple-darwin.tar.gz) - [{{< product-name >}} • macOS (Silicon, ARM64)](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_darwin_arm64.tar.gz)
[sha256](https://dl.influxdata.com/influxdb/snapshots/influxdb3-{{< product-key >}}_aarch64-apple-darwin.tar.gz.sha256) [sha256](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_darwin_arm64.tar.gz.sha256)
> [!Note] > [!Note]
> macOS Intel builds are coming soon. > macOS Intel builds are coming soon.
@ -120,9 +120,9 @@ source ~/.zshrc
<!------------------------------- BEGIN WINDOWS -------------------------------> <!------------------------------- BEGIN WINDOWS ------------------------------->
- [{{< product-name >}} • Windows (x86)](https://dl.influxdata.com/influxdb/snapshots/influxdb3-{{< product-key >}}_x86_64-pc-windows-gnu.tar.gz) - [{{< product-name >}} • Windows (AMD64, x86_64)](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}-windows_amd64.zip)
[sha256](https://dl.influxdata.com/influxdb/snapshots/influxdb3-{{< product-key >}}_x86_64-pc-windows-gnu.tar.gz.sha256) [sha256](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}-windows_amd64.zip.sha256)
<!-------------------------------- END WINDOWS --------------------------------> <!-------------------------------- END WINDOWS -------------------------------->

View File

@ -91,13 +91,13 @@ source ~/.zshrc
<!-------------------------------- BEGIN LINUX -------------------------------->
-- [{{< product-name >}} • Linux (x86) • GNU](https://download.influxdata.com/influxdb/snapshots/influxdb3-{{< product-key >}}_x86_64-unknown-linux-gnu.tar.gz)
-[sha256](https://dl.influxdata.com/influxdb/snapshots/influxdb3-{{< product-key >}}_x86_64-unknown-linux-gnu.tar.gz.sha256)
-- [{{< product-name >}} • Linux (ARM) • GNU](https://download.influxdata.com/influxdb/snapshots/influxdb3-{{< product-key >}}_aarch64-unknown-linux-gnu.tar.gz)
-[sha256](https://dl.influxdata.com/influxdb/snapshots/influxdb3-{{< product-key >}}_aarch64-unknown-linux-gnu.tar.gz.sha256)
+- [{{< product-name >}} • Linux (AMD64, x86_64) • GNU](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_linux_amd64.tar.gz)
+[sha256](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_linux_amd64.tar.gz.sha256)
+- [{{< product-name >}} • Linux (ARM64, AArch64) • GNU](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_linux_arm64.tar.gz)
+[sha256](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_linux_arm64.tar.gz.sha256)
<!--------------------------------- END LINUX --------------------------------->
@ -106,9 +106,9 @@ source ~/.zshrc
<!-------------------------------- BEGIN MACOS -------------------------------->
-- [{{< product-name >}} • macOS (Silicon)](https://download.influxdata.com/influxdb/snapshots/influxdb3-{{< product-key >}}_aarch64-apple-darwin.tar.gz)
-[sha256](https://dl.influxdata.com/influxdb/snapshots/influxdb3-{{< product-key >}}_aarch64-apple-darwin.tar.gz.sha256)
+- [{{< product-name >}} • macOS (Silicon, ARM64)](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_darwin_arm64.tar.gz)
+[sha256](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_darwin_arm64.tar.gz.sha256)
> [!Note]
> macOS Intel builds are coming soon.
@ -120,9 +120,9 @@ source ~/.zshrc
<!------------------------------- BEGIN WINDOWS ------------------------------->
-- [{{< product-name >}} • Windows (x86)](https://dl.influxdata.com/influxdb/snapshots/influxdb3-{{< product-key >}}_x86_64-pc-windows-gnu.tar.gz)
-[sha256](https://dl.influxdata.com/influxdb/snapshots/influxdb3-{{< product-key >}}_x86_64-pc-windows-gnu.tar.gz.sha256)
+- [{{< product-name >}} • Windows (AMD64, x86_64)](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}-windows_amd64.zip)
+[sha256](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}-windows_amd64.zip.sha256)
<!-------------------------------- END WINDOWS -------------------------------->

View File

@ -541,6 +541,9 @@ The number of Flux query requests served.
#### fluxQueryReqDurationNs
The duration (wall-time), in nanoseconds, spent executing Flux query requests.
+#### fluxQueryRespBytes
+The sum of all bytes returned in Flux query responses.
#### pingReq
The number of times InfluxDB HTTP server served the `/ping` HTTP endpoint.
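The new `fluxQueryRespBytes` counter is reported alongside the existing Flux query metrics in the `/debug/vars` output. A minimal spot-check, assuming a data node listening on the default `8086` HTTP port:

```bash
# Pull the expvar-style metrics from a data node and extract the Flux
# query counters, including the new fluxQueryRespBytes total.
curl -s http://localhost:8086/debug/vars \
  | grep -oE '"fluxQuery[A-Za-z]+": ?[0-9]+'
```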

View File

@ -1,4 +1,9 @@
+> [!Important]
+> #### Flux-LSP no longer maintained
+> The `flux-lsp` Flux Language Server Protocol plugin is no longer maintained.
+> The [`flux-lsp` repo](https://github.com/influxdata/flux-lsp) has been archived and is no longer receiving updates.
## Requirements
- Vim 8+

View File

@ -1,4 +1,10 @@
+> [!Important]
+> #### vsflux and Flux-LSP no longer maintained
+> `vsflux` is no longer available in the Visual Studio Marketplace.
+> The `vsflux` Visual Studio Code extension and the `flux-lsp` Flux Language Server Protocol plugin are no longer maintained.
+> Their repositories have been archived and are no longer receiving updates.
The [Flux Visual Studio Code (VS Code) extension](https://marketplace.visualstudio.com/items?itemName=influxdata.flux)
provides Flux syntax highlighting, autocompletion, and a direct InfluxDB server
integration that lets you run Flux scripts natively and show results in VS Code.

View File

@ -30,7 +30,7 @@
- Other general performance improvements
#### Fixes
-- A **Home** license thread count log errors cleared up
+- **Home** license thread count log errors
## v3.0.0 {date="2025-04-14"}

View File

@ -67,15 +67,15 @@ curl -O https://www.influxdata.com/d/install_influxdb3.sh \
Or, download and install [build artifacts](/influxdb3/core/install/#download-influxdb-3-core-binaries):
-- [Linux | x86 | gnu](https://dl.influxdata.com/influxdb/snapshots/influxdb3-core_x86_64-unknown-linux-gnu.tar.gz)
-[sha256](https://dl.influxdata.com/influxdb/snapshots/influxdb3-core_x86_64-unknown-linux-gnu.tar.gz.sha256)
-- [Linux | ARM | gnu](https://dl.influxdata.com/influxdb/snapshots/influxdb3-core_aarch64-unknown-linux-gnu.tar.gz)
-[sha256](https://dl.influxdata.com/influxdb/snapshots/influxdb3-core_aarch64-unknown-linux-gnu.tar.gz.sha256)
-- [macOS | Darwin](https://dl.influxdata.com/influxdb/snapshots/influxdb3-core_aarch64-apple-darwin.tar.gz)
-[sha256](https://dl.influxdata.com/influxdb/snapshots/influxdb3-enterprise_aarch64-apple-darwin.tar.gz.sha256)
+- [Linux | AMD64 (x86_64) | GNU](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_linux_amd64.tar.gz)
+[sha256](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_linux_amd64.tar.gz.sha256)
+- [Linux | ARM64 (AArch64) | GNU](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_linux_arm64.tar.gz)
+[sha256](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_linux_arm64.tar.gz.sha256)
+- [macOS | Silicon (ARM64)](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_darwin_arm64.tar.gz)
+[sha256](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_darwin_arm64.tar.gz.sha256)
> [!Note]
> macOS Intel builds are coming soon.
@ -84,10 +84,9 @@ Or, download and install [build artifacts](/influxdb3/core/install/#download-inf
{{% /tab-content %}}
{{% tab-content %}}
<!--------------- BEGIN WINDOWS -------------->
-Download and install the {{% product-name %}} [Windows (x86) binary](https://dl.influxdata.com/influxdb/snapshots/influxdb3-core_x86_64-pc-windows-gnu.tar.gz)
-[sha256](https://dl.influxdata.com/influxdb/snapshots/influxdb3-core_x86_64-pc-windows-gnu.tar.gz.sha256)
+Download and install the {{% product-name %}} [Windows (AMD64, x86_64) binary](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}-windows_amd64.zip)
+[sha256](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}-windows_amd64.zip.sha256)
<!--------------- END WINDOWS -------------->
{{% /tab-content %}}
{{% tab-content %}}

View File

@ -66,15 +66,15 @@ curl -O https://www.influxdata.com/d/install_influxdb3.sh \
Or, download and install [build artifacts](/influxdb3/enterprise/install/#download-influxdb-3-enterprise-binaries):
-- [Linux | x86_64 | GNU](https://dl.influxdata.com/influxdb/snapshots/influxdb3-enterprise_x86_64-unknown-linux-gnu.tar.gz)
-[sha256](https://dl.influxdata.com/influxdb/snapshots/influxdb3-enterprise_x86_64-unknown-linux-gnu.tar.gz.sha256)
-- [Linux | ARM64 | GNU](https://dl.influxdata.com/influxdb/snapshots/influxdb3-enterprise_aarch64-unknown-linux-gnu.tar.gz)
-[sha256](https://dl.influxdata.com/influxdb/snapshots/influxdb3-enterprise_aarch64-unknown-linux-gnu.tar.gz.sha256)
-- [macOS | ARM64](https://dl.influxdata.com/influxdb/snapshots/influxdb3-enterprise_aarch64-apple-darwin.tar.gz)
-[sha256](https://dl.influxdata.com/influxdb/snapshots/influxdb3-enterprise_aarch64-apple-darwin.tar.gz.sha256)
+- [Linux | AMD64 (x86_64) | GNU](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_linux_amd64.tar.gz)
+[sha256](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_linux_amd64.tar.gz.sha256)
+- [Linux | ARM64 (AArch64) | GNU](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_linux_arm64.tar.gz)
+[sha256](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_linux_arm64.tar.gz.sha256)
+- [macOS | Silicon (ARM64)](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_darwin_arm64.tar.gz)
+[sha256](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}_darwin_arm64.tar.gz.sha256)
> [!Note]
> macOS Intel builds are coming soon.
@ -83,10 +83,9 @@ Or, download and install [build artifacts](/influxdb3/enterprise/install/#downlo
{{% /tab-content %}}
{{% tab-content %}}
<!--------------- BEGIN WINDOWS -------------->
-Download and install the {{% product-name %}} [Windows (x86) binary](https://dl.influxdata.com/influxdb/snapshots/influxdb3-enterprise_x86_64-pc-windows-gnu.tar.gz)
-[sha256](https://dl.influxdata.com/influxdb/snapshots/influxdb3-enterprise_x86_64-pc-windows-gnu.tar.gz.sha256)
+Download and install the {{% product-name %}} [Windows (AMD64, x86_64) binary](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}-windows_amd64.zip)
+[sha256](https://dl.influxdata.com/influxdb/releases/influxdb3-{{< product-key >}}-{{< latest-patch >}}-windows_amd64.zip.sha256)
<!--------------- END WINDOWS -------------->
{{% /tab-content %}}
{{% tab-content %}}

View File

@ -6,6 +6,7 @@ influxdb3_core:
versions: [core]
list_order: 2
latest: core
+latest_patch: 3.0.1
placeholder_host: localhost:8181
ai_sample_questions:
- How do I install and run InfluxDB 3 Core?
@ -20,6 +21,7 @@ influxdb3_enterprise:
versions: [enterprise]
list_order: 2
latest: enterprise
+latest_patch: 3.0.1
placeholder_host: localhost:8181
ai_sample_questions:
- How do I install and run InfluxDB 3 Enterprise?
@ -82,9 +84,8 @@ influxdb:
- v1
latest: v2.7
latest_patches:
-v3: 3.0.0alpha
v2: 2.7.11
-v1: 1.11.8
+v1: 1.12.0
latest_cli:
v2: 2.7.5
ai_sample_questions:
@ -154,9 +155,9 @@ enterprise_influxdb:
menu_category: self-managed
list_order: 5
versions: [v1]
-latest: v1.11
+latest: v1.12
latest_patches:
-v1: 1.11.8
+v1: 1.12.0
ai_sample_questions:
- How can I configure my InfluxDB v1 Enterprise server?
- How do I replicate data between InfluxDB v1 Enterprise and OSS?

View File

@ -1,28 +1,28 @@
-{{- $scratch := newScratch -}}
{{- $cli := .Get "cli" | default false }}
-{{- $productPathData := findRE "[^/]+.*?" .Page.RelPermalink -}}
-{{- $parsedProduct := index $productPathData 0 | default "influxdb" -}}
-{{- $parsedVersion := index $productPathData 1 -}}
+{{- $productPathData := split .Page.RelPermalink "/" -}}
+{{- $parsedProduct := index $productPathData 1 | default "influxdb" -}}
+{{- $parsedVersion := index $productPathData 2 -}}
{{- $productArg := .Get "product" | default "" -}}
{{- $versionArg := .Get "version" | default "" -}}
-{{- $minorVersionOffset := .Get "minorVersionOffset" | default 0 -}}
{{- $product := cond (gt (len $productArg) 0) $productArg $parsedProduct -}}
{{- $latestVersion := replaceRE `\..*$` "" (index (index .Site.Data.products $product) "latest") -}}
-{{- $versionNoOffset := cond (gt (len $versionArg) 0) $versionArg (cond (ne $product $parsedProduct) $latestVersion $parsedVersion) -}}
-{{- $version := replaceRE `\d+$` (add (int (index (findRE `\d+$` $versionNoOffset) 0)) $minorVersionOffset) $versionNoOffset -}}
+{{- $version := cond (gt (len $versionArg) 0) $versionArg $parsedVersion -}}
{{- $patchVersions := index (index .Site.Data.products $product) "latest_patches" -}}
{{- $cliVersions := index .Site.Data.products.influxdb "latest_cli" -}}
+{{- $isInfluxDB3 := eq $product "influxdb3" -}}
{{- if $cli }}
{{- if eq $version "cloud" -}}
-{{- $scratch.Set "patchVersion" (index $cliVersions $latestVersion) -}}
+{{- .Store.Set "patchVersion" (index $cliVersions $latestVersion) -}}
{{- else -}}
-{{- $scratch.Set "patchVersion" (index $cliVersions $version) -}}
+{{- .Store.Set "patchVersion" (index $cliVersions $version) -}}
{{- end -}}
{{- else -}}
{{- if eq $version "cloud" -}}
-{{- $scratch.Set "patchVersion" (index $patchVersions $latestVersion) -}}
+{{- .Store.Set "patchVersion" (index $patchVersions $latestVersion) -}}
+{{- else if $isInfluxDB3 -}}
+{{- .Store.Set "patchVersion" (index .Site.Data.products (print $product "_" $version)).latest_patch -}}
{{- else -}}
-{{- $scratch.Set "patchVersion" (index $patchVersions $version) -}}
+{{- .Store.Set "patchVersion" (index $patchVersions $version) -}}
{{- end -}}
{{- end -}}
-{{- $scratch.Get "patchVersion" -}}
+{{- .Store.Get "patchVersion" -}}
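Taken together with the `latest_patch` keys added to the product data above, the rewritten `latest-patch` shortcode now derives the product and version from the page path (using `split` and `.Store` in place of the removed `newScratch` scratchpad) and, for InfluxDB 3 pages, reads `latest_patch` from the matching `influxdb3_<version>` data entry. A hypothetical usage sketch — the page path and rendered output below are illustrative, not taken from this diff:

```md
<!-- Markdown source on a page under /influxdb3/core/ -->
Download InfluxDB 3 Core {{< latest-patch >}} for Linux (AMD64):
https://dl.influxdata.com/influxdb/releases/influxdb3-core-{{< latest-patch >}}_linux_amd64.tar.gz

<!-- With influxdb3_core.latest_patch set to 3.0.1 in the product data,
     the shortcode should render the page as: -->
Download InfluxDB 3 Core 3.0.1 for Linux (AMD64):
https://dl.influxdata.com/influxdb/releases/influxdb3-core-3.0.1_linux_amd64.tar.gz
```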