feat: InfluxDB OSS and Enterprise 1.12.1 (#6250)

* InfluxDB OSS and Enterprise 1.12.1

* add message to enterprise 1.12 release notes

* Update content/influxdb/v1/query_language/manage-database.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Jason Stirnaman <stirnamanj@gmail.com>

* Apply suggestions from code review

Co-authored-by: Jason Stirnaman <stirnamanj@gmail.com>

* Apply suggestions from code review

Co-authored-by: Jason Stirnaman <stirnamanj@gmail.com>

* fix: update to address PR feedback

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Jason Stirnaman <stirnamanj@gmail.com>
Scott Anderson 2025-07-28 09:47:31 -06:00 committed by GitHub
parent 1ca63a20a2
commit 09d1414e22
24 changed files with 1895 additions and 1222 deletions


@ -9,18 +9,81 @@ menu:
parent: About the project
---
## v1.12.1 {date="2025-06-26"}
> [!Important]
> #### Upgrade meta nodes first
>
> When upgrading to InfluxDB Enterprise 1.12.1+, upgrade meta nodes before
> upgrading data nodes.
### Features
- Add additional log output when using
[`influx_inspect buildtsi`](/enterprise_influxdb/v1/tools/influx_inspect/#buildtsi) to
rebuild the TSI index.
- Use [`influx_inspect export`](/enterprise_influxdb/v1/tools/influx_inspect/#export) with
[`-tsmfile` option](/enterprise_influxdb/v1/tools/influx_inspect/#--tsmfile-tsm_file-) to
export a single TSM file.
- Add `-m` flag to the [`influxd-ctl show-shards` command](/enterprise_influxdb/v1/tools/influxd-ctl/show-shards/)
to output inconsistent shards.
- Allow the specification of a write window for retention policies.
- Add `fluxQueryRespBytes` metric to the `/debug/vars` metrics endpoint.
- Log whenever meta gossip times exceed expiration.
- Add [`query-log-path` configuration option](/enterprise_influxdb/v1/administration/configure/config-data-nodes/#query-log-path)
to data nodes.
- Add [`aggressive-points-per-block` configuration option](/influxdb/v1/administration/config/#aggressive-points-per-block)
to help ensure TSM files are fully compacted.
- Log TLS configuration settings on startup.
- Check for TLS certificate and private key permissions.
- Add a warning if the TLS certificate is expired.
- Add authentication to the Raft portal and add the following related _data_
node configuration options:
- [`[meta].raft-portal-auth-required`](/enterprise_influxdb/v1/administration/configure/config-data-nodes/#raft-portal-auth-required)
- [`[meta].raft-dialer-auth-required`](/enterprise_influxdb/v1/administration/configure/config-data-nodes/#raft-dialer-auth-required)
- Improve error handling.
- InfluxQL updates:
- Delete series by retention policy.
- Allow retention policies to discard writes that fall within their range, but
outside of [`FUTURE LIMIT`](/enterprise_influxdb/v1/query_language/manage-database/#future-limit)
and [`PAST LIMIT`](/enterprise_influxdb/v1/query_language/manage-database/#past-limit).
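
  A minimal InfluxQL sketch of these two changes (the database, measurement, and
  retention policy names are hypothetical):

  ```sql
  -- Delete a series from a single retention policy
  DELETE FROM "one_day"."h2o_feet" WHERE "location" = 'santa_monica'

  -- Create a retention policy that rejects points timestamped more than
  -- 6 hours in the past or future relative to now
  CREATE RETENTION POLICY "recent" ON "mydb" DURATION 24h REPLICATION 1 PAST LIMIT 6h FUTURE LIMIT 6h
  ```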
### Bug fixes
- Log rejected writes to subscriptions.
- Update `xxhash` and avoid `stringtoslicebyte` in the cache.
- Prevent a panic when a shard group has no shards.
- Fix file handle leaks in `Compactor.write`.
- Ensure fields in memory match the fields on disk.
- Ensure temporary files are removed after failed compactions.
- Do not panic on invalid multiple subqueries.
- Update the `/shard-status` API to return the correct result and use a
consistent "idleness" definition for shards.
### Other
- Update Go to 1.23.5.
- Upgrade Flux to v0.196.1.
- Upgrade InfluxQL to v1.4.1.
- Various other dependency updates.
---
> [!Note]
> #### InfluxDB Enterprise and FIPS-compliance
>
> **InfluxDB Enterprise 1.11+** introduces builds that are compliant with
> [Federal Information Processing Standards (FIPS)](https://www.nist.gov/standardsgov/compliance-faqs-federal-information-processing-standards-fips)
> and adhere to a strict set of security standards. Both standard and FIPS-compliant
> InfluxDB Enterprise builds are available. For more information, see
> [FIPS-compliant InfluxDB Enterprise builds](/enterprise_influxdb/v1/introduction/installation/fips-compliant/).
## v1.11.8 {date="2024-11-15"}
### Features
- Add a startup logger to InfluxDB Enterprise data nodes.
### Bug Fixes
- Strip double quotes from measurement names in the [`/api/v2/delete` compatibility
@ -28,6 +91,8 @@ InfluxDB Enterprise builds are available. For more information, see
string comparisons (e.g. to allow special characters in measurement names).
- Enable SHA256 for FIPS RPMs.
---
## v1.11.7 {date="2024-09-19"}
### Bug Fixes
@ -79,14 +144,13 @@ InfluxDB Enterprise builds are available. For more information, see
## v1.11.5 {date="2024-02-14"}
> [!Note]
> #### Upgrading from InfluxDB Enterprise v1.11.3
>
> If upgrading from InfluxDB Enterprise v1.11.3+ to {{< latest-patch >}}, you can
> now configure whether or not InfluxDB compacts series files on startup using the
> [`compact-series-file` configuration option](/enterprise_influxdb/v1/administration/configure/config-data-nodes/#compact-series-file)
> in your [InfluxDB Enterprise data node configuration file](/enterprise_influxdb/v1/administration/configure/config-data-nodes/).
### Bug Fixes
@ -101,29 +165,28 @@ in your [InfluxDB Enterprise data node configuration file](/enterprise_influxdb/
## v1.11.4 {date="2023-12-14"}
> [!Note]
> #### Series file compaction
>
> With InfluxDB Enterprise v1.11.4+, InfluxDB can be configured to optionally
> [compact series files](/enterprise_influxdb/v1/tools/influx_inspect/#--compact-series-file-)
> before data nodes are started.
> Series files are stored in `_series` directories inside the
> [InfluxDB data directory](/enterprise_influxdb/v1/concepts/file-system-layout/#data-node-file-system-layout).
> Default: `/var/lib/data/<db-name>/_series`.
>
> To compact series files on startup, set the [`compact-series-file` configuration option](/enterprise_influxdb/v1/administration/configure/config-data-nodes/#compact-series-file)
> to `true` in your [InfluxDB Enterprise data node configuration file](/enterprise_influxdb/v1/administration/configure/config-data-nodes/).
>
> - If any series files are corrupt, the `influx_inspect` or `influxd` processes on
> the data node may fail to start. In both cases, delete the series file
> directories before restarting the database. InfluxDB automatically
> regenerates the necessary series directories and files when restarting.
> - To check if series files are corrupt before starting the database, run the
> [`influx_inspect verify-seriesfile` command](/enterprise_influxdb/v1/tools/influx_inspect/#verify-seriesfile)
> while the database is offline.
> - If series files are large (20+ gigabytes), it may be faster to delete the
> series file directories before starting the database.
### Bug Fixes
@ -448,8 +511,10 @@ An edge case regression was introduced into this version that may cause a consta
## v1.9.6 {date="2022-02-16"}
> [!Note]
> InfluxDB Enterprise offerings are no longer available on AWS, Azure, and GCP
> marketplaces. Please [contact Sales](https://www.influxdata.com/contact-sales/)
> to request an license key to [install InfluxDB Enterprise in your own environment](/enterprise_influxdb/v1/introduction/installation/).
### Features
@ -495,10 +560,9 @@ An edge case regression was introduced into this version that may cause a consta
## v1.9.5 {date="2021-10-11"}
> [!Note]
> InfluxDB Enterprise 1.9.4 was not released.
> Changes below are included in InfluxDB Enterprise 1.9.5.
### Features
@ -581,7 +645,7 @@ in that there is no corresponding InfluxDB OSS release.
### Features
- Upgrade to Go 1.15.10.
- Support user-defined _node labels_.
Node labels let you assign arbitrary key-value pairs to meta and data nodes in a cluster.
For instance, an operator might want to label nodes with the availability zone in which they're located.
- Improve performance of `SHOW SERIES CARDINALITY` and `SHOW SERIES CARDINALITY from <measurement>` InfluxQL queries.
@ -646,10 +710,9 @@ in that there is no corresponding InfluxDB OSS release.
Instead, use [`inch`](https://github.com/influxdata/inch)
or [`influx-stress`](https://github.com/influxdata/influx-stress) (not to be confused with `influx_stress`).
> [!Note]
> InfluxDB Enterprise 1.9.0 and 1.9.1 were not released.
> Bug fixes intended for 1.9.0 and 1.9.1 were rolled into InfluxDB Enterprise 1.9.2.
---
@ -756,11 +819,15 @@ For details on changes incorporated from the InfluxDB OSS release, see
### Features
#### Back up meta data only
- Add option to back up **meta data only** (users, roles, databases, continuous
queries, and retention policies) using the new `-strategy` flag and `only-meta`
option: `influxd-ctl backup -strategy only-meta </your-backup-directory>`.
> [!Note]
> To restore a meta data backup, use the `restore -full` command and specify
> your backup manifest: `influxd-ctl restore -full </backup-directory/backup.manifest>`.
For more information, see [Perform a metastore only backup](/enterprise_influxdb/v1/administration/backup-and-restore/#perform-a-metastore-only-backup).
@ -1007,7 +1074,10 @@ The following summarizes the expected settings for proper configuration of JWT a
`""`.
- A long pass phrase is recommended for better security.
> [!Note]
> To provide encrypted internode communication, you must enable HTTPS. Although
> the JWT signature is encrypted, the payload of a JWT token is encoded, but
> is not encrypted.
### Bug fixes


@ -259,6 +259,29 @@ For detailed configuration information, see [`meta.ensure-fips`](/enterprise_inf
Environment variable: `INFLUXDB_META_ENSURE_FIPS`
#### raft-portal-auth-required {metadata="v1.12.0+"}
Default is `false`.
Require Raft clients to authenticate with server using the
[`meta-internal-shared-secret`](#meta-internal-shared-secret).
This requires that all meta nodes are running InfluxDB Enterprise v1.12.0+ and
are configured with the correct `meta-internal-shared-secret`.
Environment variable: `INFLUXDB_META_RAFT_PORTAL_AUTH_REQUIRED`
#### raft-dialer-auth-required {metadata="v1.12.0+"}
Default is `false`.
Require Raft servers to authenticate Raft clients using the
[`meta-internal-shared-secret`](#meta-internal-shared-secret).
This requires that all meta nodes are running InfluxDB Enterprise v1.12.0+, have
`raft-portal-auth-required=true`, and are configured with the correct
`meta-internal-shared-secret`.
Environment variable: `INFLUXDB_META_RAFT_DIALER_AUTH_REQUIRED`
---
## Data settings
@ -305,6 +328,8 @@ Environment variable: `INFLUXDB_DATA_QUERY_LOG_ENABLED`
#### query-log-path
Default is `""`.
An absolute path to the query log file.
If empty, queries aren't logged to a file.
@ -326,6 +351,8 @@ The following is an example of a `logrotate` configuration:
}
```
Environment variable: `INFLUXDB_DATA_QUERY_LOG_PATH`
#### wal-fsync-delay
Default is `"0s"`.
@ -422,6 +449,16 @@ The duration at which to compact all TSM and TSI files in a shard if it has not
Environment variable: `INFLUXDB_DATA_COMPACT_FULL_WRITE_COLD_DURATION`
#### aggressive-points-per-block {metadata="v1.12.0+"}
Default is `10000`.
The number of points per block to use during aggressive compaction. There are
certain cases where TSM files do not get fully compacted. This adjusts an
internal parameter to help ensure these files do get fully compacted.
Environment variable: `INFLUXDB_DATA_AGGRESSIVE_POINTS_PER_BLOCK`
#### index-version
Default is `"inmem"`.


@ -62,17 +62,22 @@ Creates a new database.
#### Syntax
```sql
CREATE DATABASE <database_name> [WITH [DURATION <duration>] [REPLICATION <n>] [SHARD DURATION <duration>] [PAST LIMIT <duration>] [FUTURE LIMIT <duration>] [NAME <retention-policy-name>]]
```
#### Description of syntax
`CREATE DATABASE` requires a database [name](/enterprise_influxdb/v1/troubleshooting/frequently-asked-questions/#what-words-and-characters-should-i-avoid-when-writing-data-to-influxdb).
The `WITH`, `DURATION`, `REPLICATION`, `SHARD DURATION`, `PAST LIMIT`,
`FUTURE LIMIT`, and `NAME` clauses are optional and create a single
[retention policy](/enterprise_influxdb/v1/concepts/glossary/#retention-policy-rp)
associated with the created database.
If you do not specify one of the clauses after `WITH`, the relevant behavior
defaults to the `autogen` retention policy settings.
The created retention policy automatically serves as the database's default retention policy.
For more information about those clauses, see
[Retention Policy Management](/enterprise_influxdb/v1/query_language/manage-database/#retention-policy-management).
A successful `CREATE DATABASE` query returns an empty result.
If you attempt to create a database that already exists, InfluxDB does nothing and does not return an error.
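
For example, the following statement (names hypothetical) creates a database
whose default retention policy keeps data for 30 days and rejects points
timestamped more than 6 hours before or after the write time:

```sql
CREATE DATABASE "sensors" WITH DURATION 30d REPLICATION 1 PAST LIMIT 6h FUTURE LIMIT 6h NAME "rp_30d"
```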
@ -122,21 +127,25 @@ The `DROP SERIES` query deletes all points from a [series](/enterprise_influxdb/
and it drops the series from the index.
The query takes the following form, where you must specify either the `FROM` clause or the `WHERE` clause:
```sql
DROP SERIES FROM <measurement_name[,measurement_name]> WHERE <tag_key>='<tag_value>'
```
Drop all series from a single measurement:
```sql
> DROP SERIES FROM "h2o_feet"
```
Drop series with a specific tag pair from a single measurement:
```sql
> DROP SERIES FROM "h2o_feet" WHERE "location" = 'santa_monica'
```
Drop all points in the series that have a specific tag pair from all measurements in the database:
```sql
> DROP SERIES WHERE "location" = 'santa_monica'
```
@ -152,35 +161,48 @@ Unlike
You must include either the `FROM` clause, the `WHERE` clause, or both:
```sql
DELETE FROM <measurement_name> WHERE [<tag_key>='<tag_value>'] | [<time interval>]
```
Delete all data associated with the measurement `h2o_feet`:
```sql
> DELETE FROM "h2o_feet"
```
Delete all data associated with the measurement `h2o_quality` and where the tag `randtag` equals `3`:
```sql
> DELETE FROM "h2o_quality" WHERE "randtag" = '3'
```
Delete all data in the database that occur before January 01, 2020:
```sql
> DELETE WHERE time < '2020-01-01'
```
Delete all data associated with the measurement `h2o_feet` in retention policy `one_day`:
```sql
> DELETE FROM "one_day"."h2o_feet"
```
A successful `DELETE` query returns an empty result.
Things to note about `DELETE`:
- `DELETE` supports [regular expressions](/enterprise_influxdb/v1/query_language/explore-data/#regular-expressions)
in the `FROM` clause when specifying measurement names and in the `WHERE` clause
when specifying tag values. It _does not_ support regular expressions for the
retention policy in the `FROM` clause.
If deleting a series in a retention policy, `DELETE` requires that you define
_only one_ retention policy in the `FROM` clause.
- `DELETE` does not support [fields](/enterprise_influxdb/v1/concepts/glossary/#field)
in the `WHERE` clause.
- If you need to delete points in the future, you must specify that time period
as `DELETE SERIES` runs for `time < now()` by default.
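
For example, to also delete future-dated points, specify an explicit upper time
bound (the measurement name and date here are illustrative):

```sql
> DELETE FROM "h2o_feet" WHERE time < '2030-01-01'
```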
### Delete measurements with DROP MEASUREMENT
@ -234,8 +256,9 @@ You may disable its auto-creation in the [configuration file](/enterprise_influx
### Create retention policies with CREATE RETENTION POLICY
#### Syntax
```sql
CREATE RETENTION POLICY <retention_policy_name> ON <database_name> DURATION <duration> REPLICATION <n> [SHARD DURATION <duration>] [PAST LIMIT <duration>] [FUTURE LIMIT <duration>] [DEFAULT]
```
#### Description of syntax
@ -283,6 +306,28 @@ See
[Shard group duration management](/enterprise_influxdb/v1/concepts/schema_and_data_layout/#shard-group-duration-management)
for recommended configurations.
##### `PAST LIMIT`
The `PAST LIMIT` clause defines a time boundary before and relative to _now_
in which points written to the retention policy are accepted. If a point has a
timestamp before the specified boundary, the point is rejected and the write
request returns a partial write error.
For example, if a write request tries to write data to a retention policy with a
`PAST LIMIT 6h` and there are points in the request with timestamps older than
6 hours, those points are rejected.
##### `FUTURE LIMIT`
The `FUTURE LIMIT` clause defines a time boundary after and relative to _now_
in which points written to the retention policy are accepted. If a point has a
timestamp after the specified boundary, the point is rejected and the write
request returns a partial write error.
For example, if a write request tries to write data to a retention policy with a
`FUTURE LIMIT 6h` and there are points in the request with future timestamps
greater than 6 hours from now, those points are rejected.
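
The two clauses can be combined in a single statement; a sketch with
hypothetical database and policy names:

```sql
CREATE RETENTION POLICY "bounded" ON "mydb" DURATION 12h REPLICATION 1 PAST LIMIT 6h FUTURE LIMIT 6h
```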
##### `DEFAULT`
Sets the new retention policy as the default retention policy for the database.


@ -122,15 +122,15 @@ ALL ALTER ANY AS ASC BEGIN
BY CREATE CONTINUOUS DATABASE DATABASES DEFAULT
DELETE DESC DESTINATIONS DIAGNOSTICS DISTINCT DROP
DURATION END EVERY EXPLAIN FIELD FOR
FROM FUTURE GRANT GRANTS GROUP GROUPS
IN INF INSERT INTO KEY KEYS
KILL LIMIT SHOW MEASUREMENT MEASUREMENTS NAME
OFFSET ON ORDER PASSWORD PAST POLICY
POLICIES PRIVILEGES QUERIES QUERY READ REPLICATION
RESAMPLE RETENTION REVOKE SELECT SERIES SET
SHARD SHARDS SLIMIT SOFFSET STATS SUBSCRIPTION
SUBSCRIPTIONS TAG TO USER USERS VALUES
WHERE WITH WRITE
```
If you use an InfluxQL keyword as an
@ -379,13 +379,15 @@ create_database_stmt = "CREATE DATABASE" db_name
[ WITH
[ retention_policy_duration ]
[ retention_policy_replication ]
[ retention_past_limit ]
[ retention_future_limit ]
[ retention_policy_shard_group_duration ]
[ retention_policy_name ]
] .
```
> [!Warning]
> Replication factors do not serve a purpose with single node instances.
#### Examples
@ -393,11 +395,17 @@ create_database_stmt = "CREATE DATABASE" db_name
-- Create a database called foo
CREATE DATABASE "foo"
-- Create a database called bar with a new DEFAULT retention policy and specify
-- the duration, replication, shard group duration, and name of that retention policy
CREATE DATABASE "bar" WITH DURATION 1d REPLICATION 1 SHARD DURATION 30m NAME "myrp"
-- Create a database called mydb with a new DEFAULT retention policy and specify
-- the name of that retention policy
CREATE DATABASE "mydb" WITH NAME "myrp"
-- Create a database called bar with a new retention policy named "myrp", and
-- specify the duration, past and future limits, and name of that retention policy
CREATE DATABASE "bar" WITH DURATION 1d PAST LIMIT 6h FUTURE LIMIT 6h NAME "myrp"
```
### CREATE RETENTION POLICY
@ -407,11 +415,13 @@ create_retention_policy_stmt = "CREATE RETENTION POLICY" policy_name on_clause
retention_policy_duration
retention_policy_replication
[ retention_policy_shard_group_duration ]
[ retention_past_limit ]
[ retention_future_limit ]
[ "DEFAULT" ] .
```
> [!Warning]
> Replication factors do not serve a purpose with single node instances.
#### Examples
@ -424,6 +434,9 @@ CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 60m REPLICATION 2 DEFA
-- Create a retention policy and specify the shard group duration.
CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 60m REPLICATION 2 SHARD DURATION 30m
-- Create a retention policy and specify past and future limits.
CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 12h PAST LIMIT 6h FUTURE LIMIT 6h
```
### CREATE SUBSCRIPTION


@ -10,9 +10,10 @@ menu:
Influx Inspect is an InfluxDB disk utility that can be used to:
- View detailed information about disk shards.
- Export data from a shard to [InfluxDB line protocol](/enterprise_influxdb/v1/concepts/glossary/#influxdb-line-protocol)
that can be inserted back into the database.
- Convert TSM index shards to TSI index shards.
## `influx_inspect` utility
@ -38,8 +39,8 @@ The `influx_inspect` commands are summarized here, with links to detailed inform
- [`merge-schema`](#merge-schema): Merges a set of schema files from the `check-schema` command.
- [`report`](#report): Displays a shard level report.
- [`report-db`](#report-db): Estimates InfluxDB Cloud (TSM) cardinality for a database.
- [`report-disk`](#report-disk): Reports disk usage by shards and measurements.
- [`reporttsi`](#reporttsi): Reports on cardinality for shards and measurements.
- [`verify`](#verify): Verifies the integrity of TSM files.
- [`verify-seriesfile`](#verify-seriesfile): Verifies the integrity of series files.
- [`verify-tombstone`](#verify-tombstone): Verifies the integrity of tombstones.
@ -50,7 +51,9 @@ Builds TSI (Time Series Index) disk-based shard index files and associated serie
The index is written to a temporary location until complete and then moved to a permanent location.
If an error occurs, then this operation will fall back to the original in-memory index.
> [!Note]
> #### For offline conversion only
>
> When TSI is enabled, new shards use the TSI indexes.
> Existing shards continue as TSM-based shards until
> converted offline.
@ -60,7 +63,9 @@ If an error occurs, then this operation will fall back to the original in-memory
```
influx_inspect buildtsi -datadir <data_dir> -waldir <wal_dir> [ options ]
```
> [!Note]
> Use the `buildtsi` command with the user account that you are going to run the database as,
> or ensure that the permissions match after running the command.
#### Options
@ -71,9 +76,8 @@ Optional arguments are in brackets.
The size of the batches written to the index. Default value is `10000`.
> [!Warning]
> Setting this value can have adverse effects on performance and heap size.
##### `[ -compact-series-file ]`
@ -90,10 +94,11 @@ The name of the database.
##### `-datadir <data_dir>`
The path to the [`data` directory](/enterprise_influxdb/v1/concepts/file-system-layout/#data-directory).
Default value is `$HOME/.influxdb/data`.
See the [file system layout](/enterprise_influxdb/v1/concepts/file-system-layout/#file-system-layout)
for InfluxDB on your system.
##### `[ -max-cache-size ]`
@ -120,31 +125,30 @@ Flag to enable output in verbose mode.
##### `-waldir <wal_dir>`
The directory for the [WAL (Write Ahead Log)](/enterprise_influxdb/v1/concepts/file-system-layout/#wal-directory) files.
Default value is `$HOME/.influxdb/wal`.
See the [file system layout](/enterprise_influxdb/v1/concepts/file-system-layout/)
for InfluxDB on your system.
#### Examples
##### Converting all shards on a node
```bash
influx_inspect buildtsi -datadir ~/.influxdb/data -waldir ~/.influxdb/wal
```
##### Converting all shards for a database
```bash
influx_inspect buildtsi -database mydb -datadir ~/.influxdb/data -waldir ~/.influxdb/wal
```
##### Converting a specific shard
```bash
influx_inspect buildtsi -database stress -shard 1 -datadir ~/.influxdb/data -waldir ~/.influxdb/wal
```
### `check-schema`
@ -161,7 +165,7 @@ influx_inspect check-schema [ options ]
##### [ `-conflicts-file <string>` ]
The filename where conflicts data should be written. Default is `conflicts.json`.
##### [ `-path <string>` ]
@ -170,23 +174,23 @@ working directory `.`.
##### [ `-schema-file <string>` ]
The filename where schema data should be written. Default is `schema.json`.
### `deletetsm`
Use `deletetsm -measurement` to delete a measurement in a raw TSM file (from specified shards).
Use `deletetsm -sanitize` to remove all tag and field keys containing non-printable Unicode characters in a raw TSM file (from specified shards).
> [!Warning]
> Use the `deletetsm` command only when your InfluxDB instance is
> offline (`influxd` service is not running).
#### Syntax
```
influx_inspect deletetsm -measurement <measurement_name> [ arguments ] <path>
```
##### `<path>`
Path to the `.tsm` file, located by default in the `data` directory.
@ -244,7 +248,7 @@ Optional arguments are in brackets.
##### `-series-file <series_path>`
The path to the `_series` directory under the database `data` directory. Required.
##### [ `-series` ]
@ -282,19 +286,20 @@ Filter data by tag value regular expression.
##### Specifying paths to the `_series` and `index` directories
```bash
influx_inspect dumptsi -series-file /path/to/db/_series /path/to/index
```
##### Specifying paths to the `_series` directory and an `index` file
```bash
influx_inspect dumptsi -series-file /path/to/db/_series /path/to/index/file0
```
##### Specifying paths to the `_series` directory and multiple `index` files
```bash
influx_inspect dumptsi -series-file /path/to/db/_series /path/to/index/file0 /path/to/index/file1 ...
```
### `dumptsm`
@ -309,7 +314,7 @@ influx_inspect dumptsm [ options ] <path>
##### `<path>`
The path to the `.tsm` file, located by default in the `data` directory.
#### Options
@ -317,17 +322,17 @@ Optional arguments are in brackets.
##### [ `-index` ]
The flag to dump raw index data.
Default value is `false`.
##### [ `-blocks` ]
The flag to dump raw block data.
Default value is `false`.
##### [ `-all` ]
The flag to dump all data. Caution: This may print a lot of information.
Default value is `false`.
##### [ `-filter-key <key_name>` ]
@ -351,14 +356,15 @@ Optional arguments are in brackets.
##### [ `-show-duplicates` ]
The flag to show keys which have duplicate or out-of-order timestamps.
If a user writes points with timestamps set by the client, then multiple points
with the same timestamp (or with time-descending timestamps) can be written.
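As an illustrative sketch (not part of `influx_inspect`), detecting duplicate or time-descending timestamps in a sequence of client-assigned timestamps can be as simple as:

```python
def out_of_order_indexes(timestamps):
    """Return indexes whose timestamp is <= the previous one
    (duplicate or time-descending points)."""
    return [i for i in range(1, len(timestamps))
            if timestamps[i] <= timestamps[i - 1]]

# Client-assigned timestamps (nanoseconds); the third point repeats a
# timestamp and the fourth goes backward in time.
ts = [1000, 2000, 2000, 1500]
print(out_of_order_indexes(ts))  # [2, 3]
```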
### `export`
Exports all TSM files or a single TSM file in InfluxDB line protocol data format.
The output file can be imported using the
[influx](/enterprise_influxdb/v1/tools/influx-cli/use-influx-cli) command.
#### Syntax
@ -382,16 +388,19 @@ Default value is `""`.
##### `-datadir <data_dir>`
The path to the [`data` directory](/enterprise_influxdb/v1/concepts/file-system-layout/#data-directory).
Default value is `$HOME/.influxdb/data`.
See the [file system layout](/enterprise_influxdb/v1/concepts/file-system-layout/)
for InfluxDB on your system.
##### [ `-end <timestamp>` ]
The timestamp for the end of the time range. Must be in [RFC3339 format](https://tools.ietf.org/html/rfc3339).
RFC3339 requires very specific formatting. For example, to indicate no time zone
offset (UTC+0), you must include Z or +00:00 after seconds.
Examples of valid RFC3339 formats include:
**No offset**
@ -408,20 +417,28 @@ YYYY-MM-DDTHH:MM:SS-08:00
YYYY-MM-DDTHH:MM:SS+07:00
```
> [!Note]
> With offsets, avoid replacing the + or - sign with a Z. It may cause an error
> or print Z (ISO 8601 behavior) instead of the time zone offset.
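A quick way to sanity-check `-start` and `-end` values before running an export is to parse them yourself (an illustrative Python sketch, not part of the InfluxDB toolchain; the timestamps are example values):

```python
from datetime import datetime

def parse_rfc3339(ts: str) -> datetime:
    """Parse an RFC3339 timestamp; accepts a trailing 'Z' for UTC."""
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

utc = parse_rfc3339("2024-01-01T00:00:00Z")
offset = parse_rfc3339("2024-01-01T00:00:00-08:00")

print(utc.utcoffset())     # 0:00:00
print(offset.utcoffset())  # -1 day, 16:00:00 (that is, UTC-8)
```

A value that raises `ValueError` here (for example, one missing the `Z` or `+00:00` suffix) is also likely to be rejected by `influx_inspect export`.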
##### [ `-lponly` ]
Output data in line protocol format only.
Does not output data definition language (DDL) statements (such as `CREATE DATABASE`)
or DML context metadata (such as `# CONTEXT-DATABASE`).
##### [ `-out <export_dir>` or `-out -`]
Location to export shard data. Specify an export directory to export a file, or
add a hyphen after out (`-out -`) to export shard data to standard out (`stdout`)
and send status messages to standard error (`stderr`).
Default value is `$HOME/.influxdb/export`.
##### [ `-retention <rp_name> ` ]
The name of the [retention policy](/enterprise_influxdb/v1/concepts/glossary/#retention-policy-rp)
to export. Default value is `""`.
##### [ `-start <timestamp>` ]
@ -433,7 +450,13 @@ The timestamp string must be in [RFC3339 format](https://tools.ietf.org/html/rfc
Path to the [WAL](/enterprise_influxdb/v1/concepts/glossary/#wal-write-ahead-log) directory.
Default value is `$HOME/.influxdb/wal`.
See the [file system layout](/enterprise_influxdb/v1/concepts/file-system-layout/#file-system-layout)
for InfluxDB on your system.
##### [ `-tsmfile <tsm_file>` ]
Path to a single tsm file to export. This requires both `-database` and
`-retention` to be specified.
#### Examples
@ -449,6 +472,15 @@ influx_inspect export -compress
influx_inspect export -database DATABASE_NAME -retention RETENTION_POLICY
```
##### Export data from a single TSM file
```bash
influx_inspect export \
-database DATABASE_NAME \
-retention RETENTION_POLICY \
-tsmfile TSM_FILE_NAME
```
##### Output file
```bash
@ -562,33 +594,95 @@ Specify the cardinality "rollup" level--the granularity of the cardinality repor
### `report-disk`
Use the `report-disk` command to review disk usage by shards and measurements
for TSM files in a specified directory. Useful for determining disk usage for
capacity planning and identifying which measurements or shards are using the
most space.
Calculates the total disk size (`total_tsm_size`) in bytes, the number of
shards (`shards`), and the number of tsm files (`tsm_files`) for the specified
directory. Also calculates the disk size (`size`) and number of tsm files
(`tsm_files`) for each shard. Use the `-detailed` flag to report disk usage
(`size`) by database (`db`), retention policy (`rp`), and measurement (`measurement`).
#### Syntax
```
influx_inspect report-disk [ options ] <path>
```
##### `<path>`
Path to the directory with `.tsm` file(s) to report disk usage for.
Default location is `$HOME/.influxdb/data`.
When specifying the path, wildcards (`*`) can replace one or more characters.
#### Options
Optional arguments are in brackets.
##### [ `-detailed` ]
Include this flag to report disk usage by measurement.
#### Examples
##### Report on disk size by shard
```bash
influx_inspect report-disk ~/.influxdb/data/
```
##### Output
```json
{
"Summary": {"shards": 2, "tsm_files": 8, "total_tsm_size": 149834637 },
"Shard": [
{"db": "stress", "rp": "autogen", "shard": "3", "tsm_files": 7, "size": 147022321},
{"db": "telegraf", "rp": "autogen", "shard": "2", "tsm_files": 1, "size": 2812316}
]
}
```
##### Report on disk size by measurement
```bash
influx_inspect report-disk -detailed ~/.influxdb/data/
```
##### Output
```json
{
"Summary": {"shards": 2, "tsm_files": 8, "total_tsm_size": 149834637 },
"Shard": [
{"db": "stress", "rp": "autogen", "shard": "3", "tsm_files": 7, "size": 147022321},
{"db": "telegraf", "rp": "autogen", "shard": "2", "tsm_files": 1, "size": 2812316}
],
"Measurement": [
{"db": "stress", "rp": "autogen", "measurement": "ctr", "size": 107900000},
{"db": "telegraf", "rp": "autogen", "measurement": "cpu", "size": 1784211},
{"db": "telegraf", "rp": "autogen", "measurement": "disk", "size": 374121},
{"db": "telegraf", "rp": "autogen", "measurement": "diskio", "size": 254453},
{"db": "telegraf", "rp": "autogen", "measurement": "mem", "size": 171120},
{"db": "telegraf", "rp": "autogen", "measurement": "processes", "size": 59691},
{"db": "telegraf", "rp": "autogen", "measurement": "swap", "size": 42310},
{"db": "telegraf", "rp": "autogen", "measurement": "system", "size": 59561}
]
}
```
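The JSON report is easy to post-process. For example, this sketch (using an abridged copy of the sample output above) finds the measurement consuming the most disk space:

```python
import json

# Abridged sample of `influx_inspect report-disk -detailed` output.
report_json = """
{
  "Summary": {"shards": 2, "tsm_files": 8, "total_tsm_size": 149834637},
  "Measurement": [
    {"db": "stress", "rp": "autogen", "measurement": "ctr", "size": 107900000},
    {"db": "telegraf", "rp": "autogen", "measurement": "cpu", "size": 1784211},
    {"db": "telegraf", "rp": "autogen", "measurement": "mem", "size": 171120}
  ]
}
"""

report = json.loads(report_json)

# Largest measurement by on-disk TSM size.
largest = max(report["Measurement"], key=lambda m: m["size"])
print(largest["measurement"], largest["size"])  # ctr 107900000
```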
### `reporttsi`
The report does the following:
- Calculates the total exact series cardinality in the database.
- Segments that cardinality by measurement, and emits those cardinality values.
- Emits total exact cardinality for each shard in the database.
- Segments for each shard the exact cardinality for each measurement in the shard.
- Optionally limits the results in each shard to the "top n".
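The per-measurement segmentation the report performs can be sketched as follows (illustrative only, using simplified series keys of the form `<measurement>,<tags>`):

```python
from collections import Counter

# Simplified series keys: "<measurement>,<tags>"
series_keys = {
    "cpu,host=a",
    "cpu,host=b",
    "mem,host=a",
}

# Total exact series cardinality.
total = len(series_keys)

# Exact cardinality segmented by measurement.
by_measurement = Counter(key.split(",", 1)[0] for key in series_keys)

print(total)                  # 3
print(by_measurement["cpu"])  # 2
```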
The `reporttsi` command is primarily useful when there has been a change in cardinality
and it's not clear which measurement is responsible for this change, and further, _when_
@ -703,7 +797,8 @@ Enables very verbose logging. Displays progress for every series key and time ra
Enables very very verbose logging. Displays progress for every series key and time range in the tombstone files. Timestamps are displayed in [RFC3339 format](https://tools.ietf.org/html/rfc3339) with nanosecond precision.
> [!Note]
> Higher verbosity levels override lower levels.
## Caveats


@ -44,11 +44,15 @@ ID Database Retention Policy Desired Replicas Shard Group Start
{{% /expand %}}
{{< /expand-wrapper >}}
You can also use the `-m` flag to output "inconsistent" shards: shards that are
either in metadata but not on disk, or on disk but not in metadata.
## Flags
| Flag | Description |
| :--- | :-------------------------------- |
| `-v` | Return detailed shard information |
| `-m` | Return inconsistent shards |
{{% caption %}}
_Also see [`influxd-ctl` global flags](/enterprise_influxdb/v1/tools/influxd-ctl/#influxd-ctl-global-flags)._


@ -1220,29 +1220,31 @@ To keep regular expressions and quoting simple, avoid using the following charac
## When should I single quote and when should I double quote when writing data?
- Avoid single quoting and double quoting identifiers when writing data via the
line protocol; see the examples below for how writing identifiers with quotes
can complicate queries. Identifiers are database names, retention policy
names, user names, measurement names, tag keys, and field keys.
Write with a double-quoted measurement: `INSERT "bikes" bikes_available=3`
Applicable query: `SELECT * FROM "\"bikes\""`
Write with a single-quoted measurement: `INSERT 'bikes' bikes_available=3`
Applicable query: `SELECT * FROM "\'bikes\'"`
Write with an unquoted measurement: `INSERT bikes bikes_available=3`
Applicable query: `SELECT * FROM "bikes"`
- Double quote field values that are strings.
Write: `INSERT bikes happiness="level 2"`
Applicable query: `SELECT * FROM "bikes" WHERE "happiness"='level 2'`
- Special characters should be escaped with a backslash and not placed in quotes--for example:
Write: `INSERT wacky va\"ue=4`
Applicable query: `SELECT "va\"ue" FROM "wacky"`
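A quick way to see why quoting matters is to build the line protocol strings programmatically (an illustrative sketch using the examples from this FAQ): quotes written as part of an identifier become part of its name.

```python
# Unquoted measurement: the measurement name is simply "bikes".
line_unquoted = 'bikes bikes_available=3'

# Double-quoted measurement: the quotes become part of the measurement name,
# so queries must escape them: SELECT * FROM "\"bikes\""
line_quoted = '"bikes" bikes_available=3'

# String field values belong in double quotes.
line_string_field = 'bikes happiness="level 2"'

measurement = line_quoted.split(" ", 1)[0]
print(measurement)  # "bikes"  (quotes included in the name)
```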
For more information, see [Line protocol](/influxdb/v1/write_protocols/).
## Does the precision of the timestamp matter?


@ -12,6 +12,45 @@ alt_links:
v2: /influxdb/v2/reference/release-notes/influxdb/
---
## v1.12.1 {date="2025-06-26"}
### Features
- Add additional log output when using
[`influx_inspect buildtsi`](/influxdb/v1/tools/influx_inspect/#buildtsi) to
rebuild the TSI index.
- Use [`influx_inspect export`](/influxdb/v1/tools/influx_inspect/#export) with
[`-tsmfile` option](/influxdb/v1/tools/influx_inspect/#--tsmfile-tsm_file-) to
export a single TSM file.
- Add `fluxQueryRespBytes` metric to the `/debug/vars` metrics endpoint.
- Add [`aggressive-points-per-block` configuration option](/influxdb/v1/administration/config/#aggressive-points-per-block)
  to help ensure TSM files get fully compacted.
- Improve error handling.
- InfluxQL updates:
- Delete series by retention policy.
- Allow retention policies to discard writes that fall within their range, but
outside of [`FUTURE LIMIT`](/influxdb/v1/query_language/manage-database/#future-limit)
and [`PAST LIMIT`](/influxdb/v1/query_language/manage-database/#past-limit).
### Bug fixes
- Log rejected writes to subscriptions.
- Update `xxhash` and avoid `stringtoslicebyte` in the cache.
- Prevent a panic when a shard group has no shards.
- Fix file handle leaks in `Compactor.write`.
- Ensure fields in memory match the fields on disk.
- Ensure temporary files are removed after failed compactions.
- Do not panic on invalid multiple subqueries.
### Other
- Update Go to 1.23.5.
- Upgrade Flux to v0.196.1.
- Upgrade InfluxQL to v1.4.1.
- Various other dependency updates.
---
## v1.11.8 {date="2024-11-15"}
### Bug Fixes
@ -20,6 +59,8 @@ alt_links:
compatibility API](/influxdb/v1/tools/api/#apiv2delete-http-endpoint) before
string comparisons (e.g. to allow special characters in measurement names).
---
## v1.11.7 {date="2024-10-10"}
This release represents the first public release of InfluxDB OSS v1 since 2021
@ -28,24 +69,23 @@ then back-ported to InfluxDB OSS v1. Many of these enhancements improve
compatibility between InfluxDB v1 and InfluxDB 3 and help to ease the migration
of InfluxDB v1 workloads to InfluxDB 3.
> [!Warning]
> #### Before upgrading to InfluxDB 1.11
>
> The last public release of InfluxDB v1 was v1.8.10. Upgrading from v1.8.10 to
> v1.11.7 is a large jump and should be done with care. Consider doing
> one or more of the following before upgrading:
>
> - [Back up your data](/influxdb/v1/administration/backup_and_restore/)
> - Create a clone of your current InfluxDB using InfluxDB 1.11 with identical
> configuration options. Dual-write to your current InfluxDB
> instance and your new 1.11 instance. Test writing and querying data with
> InfluxDB 1.11.
>
> #### No 32-bit builds
>
> InfluxData no longer provides builds of InfluxDB v1 for 32-bit architectures.
> All official build packages are for 64-bit architectures.
### Features
@ -72,17 +112,17 @@ All official build packages are for 64-bit architectures.
and [`influx_inspect merge-schema`](/influxdb/v1/tools/influx_inspect/#merge-schema)
commands to check for type conflicts between shards.
- **New configuration options:**
- Add [`total-buffer-bytes`](/influxdb/v1/administration/config/#total-buffer-bytes)
configuration option to set the total number of bytes to allocate to
subscription buffers.
- Add [`termination-query-log`](/influxdb/v1/administration/config/#termination-query-log)
configuration option to enable dumping running queries to log on `SIGTERM`.
- Add [`max-concurrent-deletes`](/influxdb/v1/administration/config/#max-concurrent-deletes)
configuration option to set delete concurrency.
- Add [Flux query configuration settings](/influxdb/v1/administration/config/#flux-query-management-settings).
- Add [`compact-series-file`](/influxdb/v1/administration/config/#compact-series-file)
configuration option to enable or disable series file compaction on startup.
- Add [`prom-read-auth-enabled` configuration option](/influxdb/v1/administration/config/#prom-read-auth-enabled)
to authenticate Prometheus remote read.
- **Flux improvements:**
- Upgrade Flux to v0.194.5.
@ -230,32 +270,53 @@ Due to encountering several issues with build dependencies in v.1.8.8, this vers
## v1.8.6 {date="2021-05-21"}
This release is for InfluxDB Enterprise 1.8.6 customers only. No OSS-specific
changes were made for InfluxDB 1.8.6--updates were made to the code base to
support [InfluxDB Enterprise 1.8.6](/enterprise_influxdb/v1/about-the-project/release-notes/#v186).
## v1.8.5 {date="2021-04-20"}
### Features
- Add the ability to find which measurements or shards are contributing to disk
size with the new [`influx_inspect report-disk`](/influxdb/v1/tools/influx_inspect/#report-disk)
command. Useful for capacity planning and managing storage requirements.
- Add support to [`influx_inspect export`](/influxdb/v1/tools/influx_inspect/#export)
to write to standard out (`stdout`) by adding a hyphen after the
[`-out`](/influxdb/v1/tools/influx_inspect/#--out-export_dir-or--out--) flag.
Using this option writes to `stdout`, and sends error and status messages to
standard error (`stderr`).
- Update HTTP handler for `/query` to
[log query text for POST requests](/influxdb/v1/administration/logs/#http-access-log-format).
- Optimize shard lookups in groups containing only one shard.
### Bug fixes
- Update meta queries (for example, `SHOW TAG VALUES`, `SHOW TAG KEYS`,
`SHOW SERIES CARDINALITY`, `SHOW MEASUREMENT CARDINALITY`, and `SHOW MEASUREMENTS`)
to check the query context when possible to respect timeout values set in the
[`query-timeout` configuration parameter](/influxdb/v1/administration/config/#query-timeout).
Note, meta queries will check the context less frequently than regular queries,
which use iterators, because meta queries return data in batches.
- Previously, successful writes were incorrectly incrementing the `WriteErr`
statistics. Now, successful writes correctly increment the `writeOK` statistics.
- Correct JSON marshalling error format.
- Previously, a GROUP BY query with an offset that caused an interval to cross a
daylight savings change inserted an extra output row off by one hour. Now, the
correct GROUP BY interval start time is set before the time zone offset is calculated.
- Improved error logging for TCP connection closures.
- Fix `regexp` handling to comply with PromQL.
- Previously, when a SELECT INTO query generated an unsupported value, for
example, `+/- Inf`, the query failed silently. Now, an error occurs to notify
that the value cannot be inserted.
- Resolve the "snapshot in progress" error that occurred during a backup.
- Fix data race when accessing tombstone statistics (`TombstoneStat`).
- Minimize lock contention when adding new fields or measurements.
- Resolve a bug causing excess resource usage when an error occurs while
reporting an earlier error.
## v1.8.4 {date="2021-02-01"}
### Features
- Add `stat_total_allocated` to Flux logging.
@ -294,11 +355,10 @@ This release is for InfluxDB Enterprise 1.8.6 customers only. No OSS-specific ch
## v1.8.1 {date="2020-07-14"}
> [!Warning]
> InfluxDB 1.8.1 introduced a bug that could potentially increase memory usage.
> **If you installed this release**, install [v1.8.2](#v182), which includes the
> features, performance improvements, and bug fixes below.
### Features
@ -324,23 +384,32 @@ features, performance improvements, and bug fixes below.
#### Flux v0.65 ready for production use
This release updates support for the Flux language and queries. To learn about
Flux design principles and see how to get started with Flux, see
[Introduction to Flux](/influxdb/v1/flux/).
- Use the new [`influx -type=flux`](/influxdb/v1/tools/influx-cli/#flags) option
to enable the Flux REPL shell for creating Flux queries.
- Flux v0.65 includes the following capabilities:
- Join data residing in multiple measurements, buckets, or data sources
- Perform mathematical operations using data gathered across measurements/buckets
- Manipulate Strings through an extensive library of string related functions
- Shape data through `pivot()` and other functions
- Group based on any data column: tags, fields, etc.
- Window and aggregate based on calendar months, years
- Join data across Influx and non-Influx sources
- Cast booleans to integers
- Query geo-temporal data (experimental)
- Many additional functions for working with data
> [!Note]
> We're evaluating the need for Flux query management controls equivalent to
> existing InfluxQL [query management controls](/influxdb/v1/troubleshooting/query_management/#configuration-settings-for-query-management)
> based on your feedback. Please join the discussion on
> [InfluxCommunity](https://community.influxdata.com/),
> [Slack](https://influxcommunity.slack.com/), or [GitHub](https://github.com/influxdata/flux).
> InfluxDB Enterprise customers, please contact <support@influxdata.com>.
#### Forward compatibility
@ -516,41 +585,41 @@ If you have not installed this release, then install the 1.7.4 release.
### Bug fixes
- Limit force-full and cold compaction size.
- Add user authentication and authorization support for Flux HTTP requests.
- Call `storage.Group` API to correctly map group mode.
- Marked functions that always return floats as always returning floats.
- Add support for optionally logging Flux queries.
- Fix cardinality estimation error.
## 1.7.2 {date="2018-12-11"}
### Bug fixes
- Update to Flux 0.7.1.
- Conflict-based concurrency resolution adds guards and an epoch-based system to
coordinate modifications when deletes happen against writes to the same points
at the same time.
- Skip and warn that series file should not be in a retention policy directory.
- Checks if measurement was removed from index, and if it was, then cleans up out
of fields index. Also fix cleanup issue where only prefix was checked when
matching measurements like "m1" and "m10".
- Error message to user that databases must be run in non-mixed index mode
to allow deletes.
- Update platform dependency to simplify Flux support in Enterprise.
- Verify series file in presence of tombstones.
- Fix `ApplyEnvOverrides` when a type that implements Unmarshaler is in a slice to
not call `UnMarshaltext` when the environment variable is set to empty.
- Drop NaN values when writing back points and fix the point writer to report the
number of points actually written and omits the ones that were dropped.
- Query authorizer was not properly passed to subqueries so rejections did not
happen when a subquery was the one reading the value. Max series limit was not propagated downward.
## 1.7.1 {date="2018-11-14"}
### Bug fixes
- Simple8B `EncodeAll` incorrectly encodes entries: For a run of `1s`, if the 120th or 240th entry is not a `1`, the run will be incorrectly encoded as selector `0` (`240 1s`) or selector `1` (`120 1s`), resulting in a loss of data for the 120th or 240th value. Manifests itself as consuming significant CPU resources and as compactions running indefinitely.
## 1.7.0 {date="2018-11-06"}
@ -564,92 +633,93 @@ Chunked query was added into the Go client v2 interface. If you compiled against
Support for the Flux language and queries has been added in this release. To begin exploring Flux 0.7 (technical preview):
- Enable Flux using the new configuration setting
[`[http] flux-enabled = true`](/influxdb/v1/administration/config/#flux-enabled).
- Use the new [`influx -type=flux`](/influxdb/v1/tools/shell/#type) option to enable the Flux REPL shell for creating Flux queries.
- Read about Flux and the Flux language, enabling Flux, or jump into the getting started and other guides.
#### Time Series Index (TSI) query performance and throughputs improvements
- Faster index planning for queries against indexes with many series that share tag pairs.
- Reduced index planning for queries that include previously queried tag pairs — the TSI
index now caches partial index results for later reuse.
- Performance improvements required a change in on-disk TSI format to be used.
- **To take advantage of these improvements**:
- Rebuild your indexes or wait for a TSI compaction of your indexes,
at which point the new TSI format will be applied.
- Hot shards and new shards immediately use the new TSI format.
#### Other features
- Enable the storage service by default.
- Ensure read service regular expressions get optimized.
- Add chunked query into the Go client v2.
- Add `access-log-status-filters` config setting to create an access log filter.
- Compaction performance improvements for Time Series Index (TSI).
- Add roaring bitmaps to TSI index files.
### Bug fixes
- Missing `hardwareAddr` in `uuid` v1 generation.
- Fix the inherited interval for derivative and others.
- Fix subquery functionality when a function references a tag from the subquery.
- Strip tags from a subquery when the outer query does not group by that tag.
## 1.6.6 {date="2019-02-28"}
### Bug fixes
- Marked functions that always return floats as always returning floats.
- Fix cardinality estimation error.
- Update `tagKeyValue` mutex to write lock.
## 1.6.5 {date="2019-01-10"}
### Features
- Reduce allocations in TSI `TagSets` implementation.
### Bug fixes
- Fix panic in `IndexSet`.
- Pass the query authorizer to subqueries.
- Fix TSM1 panic on reader error.
- Limit database and retention policy names to 255 characters.
- Update Go runtime to 1.10.6.
## 1.6.4 {date="2018-10-16"}
### Features
- Set maximum cache size using `-max-cache-size` in `buildtsi` when building TSI index.
### Bug fixes
- Fix `tsi1` sketch locking.
- Fix subquery functionality when a function references a tag from the subquery.
- Strip tags from a subquery when the outer query does not group by that tag.
- Add `-series-file` flag to `dumptsi` command help.
- Cleanup failed TSM snapshots.
- Fix TSM1 panic on reader error.
- Fix series file tombstoning.
- Fix the stream iterator to not ignore errors.
- Do not panic when a series ID iterator is nil.
- Fix append of possible nil iterator.
## 1.6.3 {date="2018-09-14"}
### Features
- Remove TSI1 HLL sketches from heap.
### Bug fixes
- Fix the inherited interval for derivative and others. The inherited interval from an outer query should not have caused
an inner query to fail because inherited intervals are only implicitly passed to inner queries that support group
by time functionality. Since an inner query with a derivative doesn't support grouping by time and the inner query itself
doesn't specify a time, the outer query shouldn't have invalidated the inner query.
- Fix the derivative and others time ranges for aggregate data. The derivative function and others similar to it would
preload themselves with data so that the first interval would be the start of the time range. That meant reading data outside
of the time range. One change to the shard mapper made in v1.4.0 caused the shard mapper to constrict queries to the
intervals given to the shard mapper. This was correct because the shard mapper can only deal with times it has mapped,
### Features
- Reduce allocations in TSI TagSets implementation.
### Bug fixes
- Ensure orphaned series cleaned up with shard drop.
## 1.6.1 {date="2018-08-03"}
### Features
- Improve LogFile performance with bitset iterator.
- Add TSI index cardinality report to `influx_inspect`.
- Update to Go 1.10.
- Improve performance of `buildtsi` and TSI planning.
- Improve performance of read service for single measurements.
- Remove max concurrent compaction limit.
- Provide configurable TLS options.
- Add option to hint MADV_WILLNEED to kernel.
### Bug fixes
- Improve series segment recovery.
- Fix windows mmap on zero length file.
- Ensure Filter iterators executed as late as possible.
- Document UDP precision setting in config.
- Allow tag keys to contain underscores.
- Fix a panic when matching on a specific type of regular expression.
## 1.6.0 {date="2018-07-05"}
### Breaking changes
- If math is used with the same selector multiple times, it will now act as a selector
rather than an aggregate. See [#9563](https://github.com/influxdata/influxdb/pull/9563) for details.
- For data received from Prometheus endpoints, every Prometheus measurement is now
stored in its own InfluxDB measurement rather than storing everything in the `_` measurement
using the Prometheus measurement name as the `__name__` label.
### Features
- Support proxy environment variables in the `influx` client.
- Implement basic trigonometry functions.
- Add ability to delete many series with predicate.
- Implement `floor`, `ceil`, and `round` functions.
- Add more math functions to InfluxQL.
- Allow customizing the unix socket group and permissions created by the server.
- Add `suppress-write-log` option to disable the write log when the log is enabled.
- Add additional technical analysis algorithms.
- Validate points on input.
- Log information about index version during startup.
- Add key sanitization to `deletetsm` command in `influx_inspect` utility.
- Optimize the `spread` function to process points iteratively instead of in batch.
- Allow math functions to be used in the condition.
- Add HTTP write throttle settings: `max-concurrent-write-limit`, `max-enqueued-write-limit`, and `enqueued-write-timeout`.
- Implement `SHOW STATS FOR indexes`.
- Add `dumptsmwal` command to `influx_inspect` utility.
- Improve the number of regex patterns that are optimized to static OR conditions.
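As a sketch, the new HTTP write throttle settings all live in the `[http]` section; the values below are illustrative only (check the configuration reference for your version's defaults and value formats):

```toml
[http]
  # Maximum number of concurrent write requests; 0 disables the limit.
  max-concurrent-write-limit = 0
  # Maximum number of writes queued while waiting to be processed; 0 disables the limit.
  max-enqueued-write-limit = 0
  # How long a queued write waits (in nanoseconds) before it is rejected.
  enqueued-write-timeout = 30000000000
```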
### Bug fixes
- Support setting the log level through the environment variable.
- Fix panic when checking fieldsets.
- Ensure correct number of tags parsed when commas used.
- Fix data race in WAL.
- Allow `SHOW SERIES` kill.
- Revert "Use MADV_WILLNEED when loading TSM files".
- Fix regression to allow now() to be used as the group by offset again.
- Delete deleted shards in retention service.
- Ignore index size in `Engine.DiskSize()`.
- Enable casting values from a subquery.
- Avoid a panic when using show diagnostics with text/csv.
- Properly track the response bytes written for queries in all format types.
- Remove error for series file when no shards exist.
- Fix the validation for multiple nested distinct calls.
- TSM: `TSMReader.Close` blocks until reads complete.
- Return the correct auxiliary values for `top` and `bottom`.
- Close TSMReaders from `FileStore.Close` after releasing FileStore mutex.
## 1.5.5 {date="2018-12-19"}
### Features
- Reduce allocations in TSI `TagSets` implementation.
### Bug fixes
- Copy return value of `IndexSet.MeasurementNamesByExpr`.
- Ensure orphaned series cleaned up with shard drop.
- Fix the derivative and others time ranges for aggregate data.
- Fix the stream iterator to not ignore errors.
- Do not panic when a series ID iterator is `nil`.
- Fix panic in `IndexSet`.
- Pass the query authorizer to subqueries.
- Fix TSM1 panic on reader error.
## 1.5.4 {date="2018-06-21"}
### Features
- Add `influx_inspect deletetsm` command for bulk deletes of measurements in raw TSM files.
### Bug fixes
- Fix panic in readTombstoneV4.
- buildtsi: Do not escape measurement names.
## 1.5.3 {date="2018-05-25"}
### Features
- Add `[http] debug-pprof-enabled` configuration setting immediately on startup. Useful for debugging startup performance issues.
### Bug fixes
- Fix the validation for multiple nested `DISTINCT` calls.
- Return the correct auxiliary values for `TOP` and `BOTTOM`.
## 1.5.2 {date="2018-04-12"}
### Features
- Check for root user when running `buildtsi`.
- Adjustable TSI Compaction Threshold.
### Bug fixes
- backport: check for failure case where backup directory has no manifest files.
- Fix regression to allow `now()` to be used as the group by offset again.
- Revert `Use MADV_WILLNEED when loading TSM files`.
- Ignore index size in `Engine.DiskSize()`.
- Fix `buildtsi` partition key.
- Ensure that conditions are encoded correctly even if the AST is not properly formed.
## 1.5.1 {date="2018-03-20"}
#### `[collectd]` Section
- `parse-multivalue-plugin` option was added with a default of `split`. When set to `split`, multivalue plugin data (e.g. `df free:5000,used:1000`) will be split into separate measurements (e.g., `df_free, value=5000` and `df_used, value=1000`). When set to `join`, multivalue plugin will be stored as a single multi-value measurement (e.g., `df, free=5000,used=1000`).
### Features
Version 1.3.0 marks the first official release of the new InfluxDB time series index (TSI) engine.
The TSI engine is a significant technical advancement in InfluxDB.
It offers a solution to the [time-structured merge tree](/influxdb/v1/concepts/storage_engine/)
engine's [high series cardinality issue](/influxdb/v1/troubleshooting/frequently-asked-questions/#why-does-series-cardinality-matter).
With TSI, the number of series should be unbounded by the memory on the server
hardware and the number of existing series will have a negligible impact on
database startup time.
See Paul Dix's blogpost [Path to 1 Billion Time Series: InfluxDB High Cardinality Indexing Ready for Testing](https://www.influxdata.com/path-1-billion-time-series-influxdb-high-cardinality-indexing-ready-testing/)
for additional information.
TSI is disabled by default in version 1.3.
To enable TSI, uncomment the [`index-version` setting](/influxdb/v1/administration/config#index-version)
and set it to `tsi1`.
The `index-version` setting is in the `[data]` section of the configuration file.
Next, restart your InfluxDB instance.
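Putting the steps above together, the relevant fragment of `influxdb.conf` is:

```toml
[data]
  # Use the new Time Series Index; the default in 1.3 is the in-memory index.
  index-version = "tsi1"
```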
| `pointsWrittenOK` | number of points written to the target measurement |
- `startTime` and `endTime` are UNIX timestamps, in nanoseconds.
- The number of points written is also included in CQ log messages.
### Removals
### Configuration Changes
- The top-level config `bind-address` now defaults to `localhost:8088`.
The previous default was just `:8088`, causing the backup and restore port to be bound on all available interfaces (i.e. including interfaces on the public internet).
The following new configuration options are available.
#### `[http]` Section
- `max-body-size` was added with a default of 25,000,000, but can be disabled by setting it to 0.
Specifies the maximum size (in bytes) of a client request body. When a client sends data that exceeds
the configured maximum size, a `413 Request Entity Too Large` HTTP response is returned.
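For example, to keep the default limit explicitly, or disable it, in `influxdb.conf`:

```toml
[http]
  # Maximum client request body size in bytes; set to 0 to disable the check.
  max-body-size = 25000000
```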
#### `[continuous_queries]` Section
- `query-stats-enabled` was added with a default of `false`. When set to `true`, continuous query execution statistics are written to the default monitor store.
### Features
#### `[http]` Section
- [`max-row-limit`](/influxdb/v1/administration/config#max-row-limit) now defaults to `0`.
In versions 1.0 and 1.1, the default setting was `10000`, but due to a bug, the value in use in versions 1.0 and 1.1 was effectively `0`.
In versions 1.2.0 through 1.2.1, we fixed that bug, but the fix caused a breaking change for Grafana and Kapacitor users; users who had not set `max-row-limit` to `0` experienced truncated/partial data due to the `10000` row limit.
In version 1.2.2, we've changed the default `max-row-limit` setting to `0` to match the behavior in versions 1.0 and 1.1.
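The corresponding fragment of `influxdb.conf` with the new default:

```toml
[http]
  # 0 = unlimited rows per response (the 1.2.2 default).
  max-row-limit = 0
```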
### Bug fixes
- Change the default [`max-row-limit`](/influxdb/v1/administration/config#max-row-limit)
setting from `10000` to `0` to prevent the absence of data in Grafana or Kapacitor.
## v1.2.1 {date="2017-03-08"}
#### `[[collectd]]` Section
- `security-level` which defaults to `"none"`. This field also accepts `"sign"` and `"encrypt"` and enables different levels of transmission security for the collectd plugin.
- `auth-file` which defaults to `"/etc/collectd/auth_file"`. Specifies where to locate the authentication file used to authenticate clients when using signed or encrypted mode.
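A sketch of a `[[collectd]]` section using signed transmission (values other than the two options described above are illustrative):

```toml
[[collectd]]
  enabled = true
  # Accepts "none" (default), "sign", or "encrypt".
  security-level = "sign"
  auth-file = "/etc/collectd/auth_file"
```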
### Deprecations
#### `[admin]` Section
- `enabled` now defaults to `false`. If you are currently using the admin interface, you will need to change this value to `true` to re-enable it. The admin interface is currently deprecated and will be removed in a subsequent release.
#### `[data]` Section
- `max-values-per-tag` was added with a default of 100,000, but can be disabled by setting it to `0`. Existing measurements with tags that exceed this limit will continue to load, but writes that would cause the tags cardinality to increase will be dropped and a `partial write` error will be returned to the caller. This limit can be used to prevent high cardinality tag values from being written to a measurement.
- `cache-max-memory-size` has been increased from `524288000` to `1048576000`. This setting is the maximum amount of RAM, in bytes, a shard cache can use before it rejects writes with an error. Setting this value to `0` disables the limit.
- `cache-snapshot-write-cold-duration` has been decreased from `1h` to `10m`. This setting determines how long values will stay in the shard cache while the shard is cold for writes.
- `compact-full-write-cold-duration` has been decreased from `24h` to `4h`. The shorter duration allows cold shards to be compacted to an optimal state more quickly.
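Expressed as a configuration fragment, the new `[data]` defaults described above are:

```toml
[data]
  max-values-per-tag = 100000
  cache-max-memory-size = 1048576000
  cache-snapshot-write-cold-duration = "10m"
  compact-full-write-cold-duration = "4h"
```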
### Features
### Breaking changes
- `max-series-per-database` was added with a default of 1M but can be disabled by setting it to `0`. Existing databases with series that exceed this limit will continue to load but writes that would create new series will fail.
- Config option `[cluster]` has been replaced with `[coordinator]`.
- Support for config options `[collectd]` and `[opentsdb]` has been removed; use `[[collectd]]` and `[[opentsdb]]` instead.
- Config option `data-logging-enabled` within the `[data]` section, has been renamed to `trace-logging-enabled`, and defaults to `false`.
- The keywords `IF`, `EXISTS`, and `NOT` were removed for this release. This means you no longer need to specify `IF NOT EXISTS` for `DROP DATABASE` or `IF EXISTS` for `CREATE DATABASE`. If these are specified, a query parse error is returned.
- The Shard `writePointsFail` stat has been renamed to `writePointsErr` for consistency with other stats.
With this release the systemd configuration files for InfluxDB will use the system configured default for logging and will no longer write files to `/var/log/influxdb` by default. On most systems, the logs will be directed to the systemd journal and can be accessed by `journalctl -u influxdb.service`. Consult the systemd journald documentation for configuring journald.

For more information, see
[How does InfluxDB handle duplicate points?](/influxdb/v1/troubleshooting/frequently-asked-questions/#how-does-influxdb-handle-duplicate-points)
3. Use InfluxQL to delete the temporary database.
```sql
DROP DATABASE "example-tmp-db"
```
**To customize the TCP IP and port the backup and restore services use**,
uncomment and update the
[`bind-address` configuration setting](/influxdb/v1/administration/config#rpc-bind-address)
at the root level of your InfluxDB configuration file (`influxdb.conf`).
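For example, to keep the service bound to loopback only (the sketch below assumes the default port):

```toml
# Root level of influxdb.conf: bind the backup/restore RPC service to loopback only.
bind-address = "127.0.0.1:8088"
```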

### `8086`
The default port that runs the InfluxDB HTTP service.
[Configure this port](/influxdb/v1/administration/config#http-bind-address)
in the configuration file.
**Resources** [API Reference](/influxdb/v1/tools/api/)
### 8088
The default port used by the RPC service for RPC calls made by the CLI for backup and restore operations (`influxdb backup` and `influxd restore`).
[Configure this port](/influxdb/v1/administration/config#rpc-bind-address)
in the configuration file.
**Resources** [Backup and Restore](/influxdb/v1/administration/backup_and_restore/)
### 2003
The default port that runs the Graphite service.
[Enable and configure this port](/influxdb/v1/administration/config#graphite-bind-address)
in the configuration file.
**Resources** [Graphite README](https://github.com/influxdata/influxdb/tree/1.8/services/graphite/README.md)
### 4242
The default port that runs the OpenTSDB service.
[Enable and configure this port](/influxdb/v1/administration/config#opentsdb-bind-address)
in the configuration file.
**Resources** [OpenTSDB README](https://github.com/influxdata/influxdb/tree/1.8/services/opentsdb/README.md)
### 8089
The default port that runs the UDP service.
[Enable and configure this port](/influxdb/v1/administration/config#udp-bind-address)
in the configuration file.
**Resources** [UDP README](https://github.com/influxdata/influxdb/tree/1.8/services/udp/README.md)
### 25826
The default port that runs the Collectd service.
[Enable and configure this port](/influxdb/v1/administration/config#collectd-bind-address)
in the configuration file.
**Resources** [Collectd README](https://github.com/influxdata/influxdb/tree/1.8/services/collectd/README.md)

the InfluxDB subscriber service creates multiple "writers" ([goroutines](https://golangbot.com/goroutines/))
which send writes to the subscription endpoints.
_The number of writer goroutines is defined by the [`write-concurrency`](/influxdb/v1/administration/config#write-concurrency) configuration._
As writes occur in InfluxDB, each subscription writer sends the written data to the
specified subscription endpoints.
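As a sketch, assuming the setting lives in the `[subscriber]` section of `influxdb.conf`:

```toml
[subscriber]
  # Number of writer goroutines sending data to subscription endpoints.
  write-concurrency = 40
```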

### Data directory
Directory path where InfluxDB stores time series data (TSM files).
To customize this path, use the [`[data].dir`](/influxdb/v1/administration/config/#dir-1)
configuration option.
### WAL directory
Directory path where InfluxDB stores Write Ahead Log (WAL) files.
To customize this path, use the [`[data].wal-dir`](/influxdb/v1/administration/config/#wal-dir)
configuration option.
### Metastore directory
Directory path of the InfluxDB metastore, which stores information about users,
databases, retention policies, shards, and continuous queries.
To customize this path, use the [`[meta].dir`](/influxdb/v1/administration/config/#dir)
configuration option.
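Together, the three directory settings look like this in `influxdb.conf` (shown with their conventional Linux defaults):

```toml
[meta]
  dir = "/var/lib/influxdb/meta"

[data]
  dir = "/var/lib/influxdb/data"
  wal-dir = "/var/lib/influxdb/wal"
```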
## InfluxDB configuration files

The Cache exposes a few controls for snapshotting behavior.
The two most important controls are the memory limits.
There is a lower bound, [`cache-snapshot-memory-size`](/influxdb/v1/administration/config#cache-snapshot-memory-size), which when exceeded will trigger a snapshot to TSM files and remove the corresponding WAL segments.
There is also an upper bound, [`cache-max-memory-size`](/influxdb/v1/administration/config#cache-max-memory-size), which when exceeded will cause the Cache to reject new writes.
These configurations are useful to prevent out of memory situations and to apply back pressure to clients writing data faster than the instance can persist it.
The checks for memory thresholds occur on every write.
The other snapshot controls are time based.
The idle threshold, [`cache-snapshot-write-cold-duration`](/influxdb/v1/administration/config#cache-snapshot-write-cold-duration), forces the Cache to snapshot to TSM files if it hasn't received a write within the specified interval.
The in-memory Cache is recreated on restart by re-reading the WAL files on disk.
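A sketch of these controls in the `[data]` section of the configuration file (values are illustrative; check the configuration reference for your version's defaults):

```toml
[data]
  # Lower bound: exceeding this size triggers a snapshot to TSM files
  # and removes the corresponding WAL segments.
  cache-snapshot-memory-size = "25m"
  # Upper bound: exceeding this size causes the Cache to reject new writes.
  cache-max-memory-size = "1g"
  # Idle threshold: snapshot the Cache if it receives no writes within this interval.
  cache-snapshot-write-cold-duration = "10m"
```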


@ -215,7 +215,7 @@ data that reside in an RP other than the `DEFAULT` RP.
Between checks, `orders` may have data that are older than two hours.
The rate at which InfluxDB checks to enforce an RP is a configurable setting;
see [Database Configuration](/influxdb/v1/administration/config#check-interval).
Using a combination of RPs and CQs, we've successfully set up our database to
automatically keep the high precision raw data for a limited time, create lower


@ -62,17 +62,22 @@ Creates a new database.
#### Syntax
```sql
CREATE DATABASE <database_name> [WITH [DURATION <duration>] [REPLICATION <n>] [SHARD DURATION <duration>] [PAST LIMIT <duration>] [FUTURE LIMIT <duration>] [NAME <retention-policy-name>]]
```
#### Description of syntax
`CREATE DATABASE` requires a database [name](/influxdb/v1/troubleshooting/frequently-asked-questions/#what-words-and-characters-should-i-avoid-when-writing-data-to-influxdb).
The `WITH`, `DURATION`, `REPLICATION`, `SHARD DURATION`, `PAST LIMIT`,
`FUTURE LIMIT`, and `NAME` clauses are optional and create a single
[retention policy](/influxdb/v1/concepts/glossary/#retention-policy-rp)
associated with the created database.
If you do not specify one of the clauses after `WITH`, the relevant behavior
defaults to the `autogen` retention policy settings.
The created retention policy automatically serves as the database's default retention policy.
For more information about those clauses, see
[Retention Policy Management](/influxdb/v1/query_language/manage-database/#retention-policy-management).
A successful `CREATE DATABASE` query returns an empty result.
If you attempt to create a database that already exists, InfluxDB does nothing and does not return an error.
@ -87,7 +92,7 @@ If you attempt to create a database that already exists, InfluxDB does nothing a
```
The query creates a database called `NOAA_water_database`.
[By default](/influxdb/v1/administration/config/#retention-autocreate), InfluxDB also creates the `autogen` retention policy and associates it with the `NOAA_water_database`.
##### Create a database with a specific retention policy
@ -122,21 +127,25 @@ The `DROP SERIES` query deletes all points from a [series](/influxdb/v1/concepts
and it drops the series from the index.
The query takes the following form, where you must specify either the `FROM` clause or the `WHERE` clause:
```sql
DROP SERIES FROM <measurement_name[,measurement_name]> WHERE <tag_key>='<tag_value>'
```
Drop all series from a single measurement:
```sql
> DROP SERIES FROM "h2o_feet"
```
Drop series with a specific tag pair from a single measurement:
```sql
> DROP SERIES FROM "h2o_feet" WHERE "location" = 'santa_monica'
```
Drop all points in the series that have a specific tag pair from all measurements in the database:
```sql
> DROP SERIES WHERE "location" = 'santa_monica'
```
@ -152,27 +161,31 @@ Unlike
You must include either the `FROM` clause, the `WHERE` clause, or both:
```sql
DELETE FROM <measurement_name> WHERE [<tag_key>='<tag_value>'] | [<time interval>]
```
Delete all data associated with the measurement `h2o_feet`:
```sql
> DELETE FROM "h2o_feet"
```
Delete all data associated with the measurement `h2o_quality` and where the tag `randtag` equals `3`:
```sql
> DELETE FROM "h2o_quality" WHERE "randtag" = '3'
```
Delete all data in the database that occur before January 01, 2020:
```sql
> DELETE WHERE time < '2020-01-01'
```
Delete all data associated with the measurement `h2o_feet` in retention policy `one_day`:
```sql
> DELETE FROM "one_day"."h2o_feet"
```
@ -181,12 +194,16 @@ A successful `DELETE` query returns an empty result.
Things to note about `DELETE`:
* `DELETE` supports
[regular expressions](/enterprise_influxdb/v1/query_language/explore-data/#regular-expressions)
in the `FROM` clause when specifying measurement names and in the `WHERE` clause
when specifying tag values. It *does not* support regular expressions for the
retention policy in the `FROM` clause.
If deleting a series in a retention policy, `DELETE` requires that you define
*only one* retention policy in the `FROM` clause.
* `DELETE` does not support [fields](/influxdb/v1/concepts/glossary/#field) in
the `WHERE` clause.
* If you need to delete points in the future, you must specify that time period
as `DELETE SERIES` runs for `time < now()` by default.
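For example, to delete future points, bound the time range explicitly. A sketch (the measurement name and upper bound are illustrative):

```sql
> DELETE FROM "h2o_feet" WHERE time > '2025-01-01' AND time < '2100-01-01'
```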
### Delete measurements with DROP MEASUREMENT
@ -240,8 +257,9 @@ You may disable its auto-creation in the [configuration file](/influxdb/v1/admin
### Create retention policies with CREATE RETENTION POLICY
#### Syntax
```sql
CREATE RETENTION POLICY <retention_policy_name> ON <database_name> DURATION <duration> REPLICATION <n> [SHARD DURATION <duration>] [PAST LIMIT <duration>] [FUTURE LIMIT <duration>] [DEFAULT]
```
#### Description of syntax
@ -289,6 +307,28 @@ See
[Shard group duration management](/influxdb/v1/concepts/schema_and_data_layout/#shard-group-duration-management)
for recommended configurations.
##### `PAST LIMIT`
The `PAST LIMIT` clause defines a time boundary before and relative to _now_
in which points written to the retention policy are accepted. If a point has a
timestamp before the specified boundary, the point is rejected and the write
request returns a partial write error.
For example, if a write request tries to write data to a retention policy with a
`PAST LIMIT 6h` and there are points in the request with timestamps older than
6 hours, those points are rejected.
##### `FUTURE LIMIT`
The `FUTURE LIMIT` clause defines a time boundary after and relative to _now_
in which points written to the retention policy are accepted. If a point has a
timestamp after the specified boundary, the point is rejected and the write
request returns a partial write error.
For example, if a write request tries to write data to a retention policy with a
`FUTURE LIMIT 6h` and there are points in the request with future timestamps
greater than 6 hours from now, those points are rejected.
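For example, a retention policy that accepts only points within six hours of now on either side (the database and policy names are illustrative):

```sql
CREATE RETENTION POLICY "recent" ON "mydb" DURATION 1d REPLICATION 1 PAST LIMIT 6h FUTURE LIMIT 6h
```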
##### `DEFAULT`
Sets the new retention policy as the default retention policy for the database.


@ -8,35 +8,30 @@ menu:
parent: InfluxQL
aliases:
- /influxdb/v2/query_language/spec/
---
## Introduction
Find Influx Query Language (InfluxQL) definitions and details, including:
- [Notation](/influxdb/v1/query_language/spec/#notation)
- [Query representation](/influxdb/v1/query_language/spec/#query-representation)
- [Identifiers](/influxdb/v1/query_language/spec/#identifiers)
- [Keywords](/influxdb/v1/query_language/spec/#keywords)
- [Literals](/influxdb/v1/query_language/spec/#literals)
- [Queries](/influxdb/v1/query_language/spec/#queries)
- [Statements](/influxdb/v1/query_language/spec/#statements)
- [Clauses](/influxdb/v1/query_language/spec/#clauses)
- [Expressions](/influxdb/v1/query_language/spec/#expressions)
- [Other](/influxdb/v1/query_language/spec/#other)
- [Query engine internals](/influxdb/v1/query_language/spec/#query-engine-internals)
To learn more about InfluxQL, browse the following topics:
- [Explore your data with InfluxQL](/influxdb/v1/query_language/explore-data/)
- [Explore your schema with InfluxQL](/influxdb/v1/query_language/explore-schema/)
- [Database management](/influxdb/v1/query_language/manage-database/)
- [Authentication and authorization](/influxdb/v1/administration/authentication_and_authorization/).
InfluxQL is a SQL-like query language for interacting with InfluxDB and providing features specific to storing and analyzing time series data.
@ -123,15 +118,15 @@ ALL ALTER ANY AS ASC BEGIN
BY CREATE CONTINUOUS DATABASE DATABASES DEFAULT
DELETE DESC DESTINATIONS DIAGNOSTICS DISTINCT DROP
DURATION END EVERY EXPLAIN FIELD FOR
FROM FUTURE GRANT GRANTS GROUP GROUPS
IN INF INSERT INTO KEY KEYS
KILL LIMIT SHOW MEASUREMENT MEASUREMENTS NAME
OFFSET ON ORDER PASSWORD PAST POLICY
POLICIES PRIVILEGES QUERIES QUERY READ REPLICATION
RESAMPLE RETENTION REVOKE SELECT SERIES SET
SHARD SHARDS SLIMIT SOFFSET STATS SUBSCRIPTION
SUBSCRIPTIONS TAG TO USER USERS VALUES
WHERE WITH WRITE
```
If you use an InfluxQL keyword as an
@ -383,12 +378,14 @@ create_database_stmt = "CREATE DATABASE" db_name
[ retention_policy_duration ]
[ retention_policy_replication ]
[ retention_policy_shard_group_duration ]
[ retention_past_limit ]
[ retention_future_limit ]
[ retention_policy_name ]
] .
```
> [!Warning]
> Replication factors do not serve a purpose with single node instances.
#### Examples
@ -396,11 +393,17 @@ create_database_stmt = "CREATE DATABASE" db_name
-- Create a database called foo
CREATE DATABASE "foo"
-- Create a database called bar with a new DEFAULT retention policy and specify
-- the duration, replication, shard group duration, and name of that retention policy
CREATE DATABASE "bar" WITH DURATION 1d REPLICATION 1 SHARD DURATION 30m NAME "myrp"
-- Create a database called mydb with a new DEFAULT retention policy and specify
-- the name of that retention policy
CREATE DATABASE "mydb" WITH NAME "myrp"
-- Create a database called bar with a new retention policy named "myrp", and
-- specify the duration, past and future limits, and name of that retention policy
CREATE DATABASE "bar" WITH DURATION 1d PAST LIMIT 6h FUTURE LIMIT 6h NAME "myrp"
```
### CREATE RETENTION POLICY
@ -410,11 +413,13 @@ create_retention_policy_stmt = "CREATE RETENTION POLICY" policy_name on_clause
retention_policy_duration
retention_policy_replication
[ retention_policy_shard_group_duration ]
[ retention_past_limit ]
[ retention_future_limit ]
[ "DEFAULT" ] .
```
> [!Warning]
> Replication factors do not serve a purpose with single node instances.
#### Examples
@ -427,6 +432,9 @@ CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 60m REPLICATION 2 DEFA
-- Create a retention policy and specify the shard group duration.
CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 60m REPLICATION 2 SHARD DURATION 30m
-- Create a retention policy and specify past and future limits.
CREATE RETENTION POLICY "10m.events" ON "somedb" DURATION 12h PAST LIMIT 6h FUTURE LIMIT 6h
```
### CREATE SUBSCRIPTION
@ -1069,17 +1077,17 @@ show_stats_stmt = "SHOW STATS [ FOR '<component>' | 'indexes' ]"
#### `SHOW STATS`
- The `SHOW STATS` command does not list index memory usage -- use the [`SHOW STATS FOR 'indexes'`](#show-stats-for-indexes) command.
- Statistics returned by `SHOW STATS` are stored in memory and reset to zero when the node is restarted, but `SHOW STATS` is triggered every 10 seconds to populate the `_internal` database.
#### `SHOW STATS FOR <component>`
- For the specified component (\<component\>), the command returns available statistics.
- For the `runtime` component, the command returns an overview of memory usage by the InfluxDB system, using the [Go runtime](https://golang.org/pkg/runtime/) package.
#### `SHOW STATS FOR 'indexes'`
- Returns an estimate of memory use of all indexes. Index memory use is not reported with `SHOW STATS` because it is a potentially expensive operation.
#### Example
@ -1346,9 +1354,9 @@ var_ref = measurement .
Use comments with InfluxQL statements to describe your queries.
- A single line comment begins with two hyphens (`--`) and ends where InfluxDB detects a line break.
This comment type cannot span several lines.
- A multi-line comment begins with `/*` and ends with `*/`. This comment type can span several lines.
Multi-line comments do not support nested multi-line comments.
## Query Engine Internals
@ -1452,42 +1460,42 @@ iterator.
There are many helper iterators that let us build queries:
- Merge Iterator - This iterator combines one or more iterators into a single
new iterator of the same type. This iterator guarantees that all points
within a window will be output before starting the next window but does not
provide ordering guarantees within the window. This allows for fast access
for aggregate queries which do not need stronger sorting guarantees.
- Sorted Merge Iterator - This iterator also combines one or more iterators
into a new iterator of the same type. However, this iterator guarantees
time ordering of every point. This makes it slower than the `MergeIterator`
but this ordering guarantee is required for non-aggregate queries which
return the raw data points.
- Limit Iterator - This iterator limits the number of points per name/tag
group. This is the implementation of the `LIMIT` & `OFFSET` syntax.
- Fill Iterator - This iterator injects extra points if they are missing from
the input iterator. It can provide `null` points, points with the previous
value, or points with a specific value.
- Buffered Iterator - This iterator provides the ability to "unread" a point
back onto a buffer so it can be read again next time. This is used extensively
to provide lookahead for windowing.
- Reduce Iterator - This iterator calls a reduction function for each point in
a window. When the window is complete then all points for that window are
output. This is used for simple aggregate functions such as `COUNT()`.
- Reduce Slice Iterator - This iterator collects all points for a window first
and then passes them all to a reduction function at once. The results are
returned from the iterator. This is used for aggregate functions such as
`DERIVATIVE()`.
- Transform Iterator - This iterator calls a transform function for each point
from an input iterator. This is used for executing binary expressions.
- Dedupe Iterator - This iterator only outputs unique points. It is resource
intensive so it is only used for small queries such as meta query statements.
### Call iterators
@ -1501,4 +1509,4 @@ iterators can be created using `NewCallIterator()`.
Some iterators are more complex or need to be implemented at a higher level.
For example, the `DERIVATIVE()` needs to retrieve all points for a window first
before performing the calculation. This iterator is created by the engine itself
and is never requested to be created by the lower levels.


@ -20,8 +20,8 @@ will be included in the InfluxDB release notes.
InfluxDB support for the Prometheus remote read and write API adds the following
HTTP endpoints to InfluxDB:
- `/api/v1/prom/read`
- `/api/v1/prom/write`
Additionally, there is a [`/metrics` endpoint](/influxdb/v1/administration/server_monitoring/#influxdb-metrics-http-endpoint) configured to produce default Go metrics in Prometheus metrics format.
@ -40,8 +40,8 @@ CREATE DATABASE "prometheus"
To enable the use of the Prometheus remote read and write APIs with InfluxDB, add URL
values to the following settings in the [Prometheus configuration file](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file):
- [`remote_write`](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#%3Cremote_write%3E)
- [`remote_read`](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#%3Cremote_read%3E)
The URLs must be resolvable from your running Prometheus server and use the port
on which InfluxDB is running (`8086` by default).
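A minimal sketch of those settings, assuming InfluxDB is listening on `localhost:8086` and the target database is named `prometheus`:

```yaml
remote_write:
  - url: "http://localhost:8086/api/v1/prom/write?db=prometheus"

remote_read:
  - url: "http://localhost:8086/api/v1/prom/read?db=prometheus"
```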
@ -84,12 +84,12 @@ remote_read:
As Prometheus data is brought into InfluxDB, the following transformations are
made to match the InfluxDB data structure:
- The Prometheus metric name becomes the InfluxDB [measurement](/influxdb/v1/concepts/key_concepts/#measurement) name.
- The Prometheus sample (value) becomes an InfluxDB field using the `value` field key. It is always a float.
- Prometheus labels become InfluxDB tags.
- All `# HELP` and `# TYPE` lines are ignored.
- [v1.8.6 and later] Prometheus remote write endpoint drops unsupported Prometheus values (`NaN`,`-Inf`, and `+Inf`) rather than reject the entire batch.
  - If [write trace logging is enabled (`[http] write-tracing = true`)](/influxdb/v1/administration/config/#write-tracing), then summaries of dropped values are logged.
  - If a batch of values contains values that are subsequently dropped, HTTP status code `204` is returned.
### Example: Parse Prometheus to InfluxDB


@ -554,7 +554,7 @@ A successful [`CREATE DATABASE` query](/influxdb/v1/query_language/manage-databa
| u=\<username> | Optional if you haven't [enabled authentication](/influxdb/v1/administration/authentication_and_authorization/#set-up-authentication). Required if you've enabled authentication.* | Sets the username for authentication if you've enabled authentication. The user must have read access to the database. Use with the query string parameter `p`. |
\* InfluxDB does not truncate the number of rows returned for requests without the `chunked` parameter.
That behavior is configurable; see the [`max-row-limit`](/influxdb/v1/administration/config/#max-row-limit) configuration option for more information.
\** The InfluxDB API also supports basic authentication.
Use basic authentication if you've [enabled authentication](/influxdb/v1/administration/authentication_and_authorization/#set-up-authentication)
@ -1077,7 +1077,7 @@ Errors are returned in JSON.
| 400 Bad Request | Unacceptable request. Can occur with an InfluxDB line protocol syntax error or if a user attempts to write values to a field that previously accepted a different value type. The returned JSON offers further information. |
| 401 Unauthorized | Unacceptable request. Can occur with invalid authentication credentials. |
| 404 Not Found | Unacceptable request. Can occur if a user attempts to write to a database that does not exist. The returned JSON offers further information. |
| 413 Request Entity Too Large | Unacceptable request. It will occur if the payload of the POST request is bigger than the maximum size allowed. See [`max-body-size`](/influxdb/v1/administration/config/#max-body-size) parameter for more details.
| 500 Internal Server Error | The system is overloaded or significantly impaired. Can occur if a user attempts to write to a retention policy that does not exist. The returned JSON offers further information. |
#### Examples


@ -12,9 +12,10 @@ alt_links:
Influx Inspect is an InfluxDB disk utility that can be used to:
- View detailed information about disk shards.
- Export data from a shard to [InfluxDB line protocol](/influxdb/v1/concepts/glossary/#influxdb-line-protocol)
that can be inserted back into the database.
- Convert TSM index shards to TSI index shards.
## `influx_inspect` utility
@ -52,7 +53,9 @@ Builds TSI (Time Series Index) disk-based shard index files and associated serie
The index is written to a temporary location until complete and then moved to a permanent location.
If an error occurs, then this operation will fall back to the original in-memory index.
> [!Note]
> #### For offline conversion only
>
> When TSI is enabled, new shards use the TSI indexes.
> Existing shards continue as TSM-based shards until
> converted offline.
@ -62,7 +65,9 @@ If an error occurs, then this operation will fall back to the original in-memory
```
influx_inspect buildtsi -datadir <data_dir> -waldir <wal_dir> [ options ]
```
> [!Note]
> Use the `buildtsi` command with the user account that you are going to run the database as,
> or ensure that the permissions match after running the command.
#### Options
@ -73,9 +78,8 @@ Optional arguments are in brackets.
The size of the batches written to the index. Default value is `10000`.
> [!Warning]
> Setting this value can have adverse effects on performance and heap size.
##### `[ -compact-series-file ]`
@ -123,7 +127,7 @@ Flag to enable output in verbose mode.
##### `-waldir <wal_dir>`
The directory for the [WAL (Write Ahead Log)](/influxdb/v1/concepts/file-system-layout/#wal-directory) files.
Default value is `$HOME/.influxdb/wal`.
See the [file system layout](/influxdb/v1/concepts/file-system-layout/#file-system-layout)
@ -181,10 +185,9 @@ The filename where schema data should be written. Default is `schema.json`.
Use `deletetsm -measurement` to delete a measurement in a raw TSM file (from specified shards).
Use `deletetsm -sanitize` to remove all tag and field keys containing non-printable Unicode characters in a raw TSM file (from specified shards).
> [!Warning]
> Use the `deletetsm` command only when your InfluxDB instance is
> offline (`influxd` service is not running).
#### Syntax
@ -286,19 +289,19 @@ Filter data by tag value regular expression.
##### Specifying paths to the `_series` and `index` directories
```bash
influx_inspect dumptsi -series-file /path/to/db/_series /path/to/index
```
##### Specifying paths to the `_series` directory and an `index` file
```bash
influx_inspect dumptsi -series-file /path/to/db/_series /path/to/index/file0
```
##### Specifying paths to the `_series` directory and multiple `index` files
```bash
influx_inspect dumptsi -series-file /path/to/db/_series /path/to/index/file0 /path/to/index/file1 ...
```
### `dumptsm`
@ -360,8 +363,8 @@ If a user writes points with timestamps set by the client, then multiple points
### `export`
Exports all TSM files or a single TSM file in InfluxDB line protocol data format.
The output file can be imported using the
[influx](/influxdb/v1/tools/shell/#import-data-from-a-file-with-import) command.
#### Syntax
@ -413,9 +416,12 @@ YYYY-MM-DDTHH:MM:SS-08:00
YYYY-MM-DDTHH:MM:SS+07:00
```
> [!Note]
> With offsets, avoid replacing the + or - sign with a Z. It may cause an error
> or print Z (ISO 8601 behavior) instead of the time zone offset.
##### [ `-lponly` ]
Output data in line protocol format only.
Does not output data definition language (DDL) statements (such as `CREATE DATABASE`)
or DML context metadata (such as `# CONTEXT-DATABASE`).
@ -443,6 +449,11 @@ Default value is `$HOME/.influxdb/wal`.
See the [file system layout](/influxdb/v1/concepts/file-system-layout/#file-system-layout)
for InfluxDB on your system.
##### [ `-tsmfile <tsm_file>` ]
Path to a single TSM file to export. Requires both `-database` and
`-retention` to be specified.
#### Examples
##### Export all databases and compress the output
@ -457,6 +468,15 @@ influx_inspect export -compress
influx_inspect export -database DATABASE_NAME -retention RETENTION_POLICY
```
##### Export data from a single TSM file
```bash
influx_inspect export \
-database DATABASE_NAME \
-retention RETENTION_POLICY \
-tsmfile TSM_FILE_NAME
```
##### Output file
```bash
@ -650,11 +670,11 @@ influx_inspect report-disk -detailed ~/.influxdb/data/
The report does the following:
- Calculates the total exact series cardinality in the database.
- Segments that cardinality by measurement, and emits those cardinality values.
- Emits total exact cardinality for each shard in the database.
- Segments for each shard the exact cardinality for each measurement in the shard.
- Optionally limits the results in each shard to the "top n".
The `reporttsi` command is primarily useful when there has been a change in cardinality
and it's not clear which measurement is responsible for this change, and further, _when_
@ -769,7 +789,8 @@ Enables very verbose logging. Displays progress for every series key and time ra
Enables very very verbose logging. Displays progress for every series key and time range in the tombstone files. Timestamps are displayed in [RFC3339 format](https://tools.ietf.org/html/rfc3339) with nanosecond precision.
> [!Note]
> Higher verbosity levels override lower levels.
## Caveats
View File
@ -12,9 +12,8 @@ menu:
This page documents errors, their descriptions, and, where applicable,
common resolutions.
> [!Warning]
> **Disclaimer:** This document does not contain an exhaustive list of all possible InfluxDB errors.
## `error: database name required`
@ -47,7 +46,7 @@ By default `max-series-per-database` is set to one million.
Changing the setting to `0` allows an unlimited number of series per database.
**Resources:**
[Database Configuration](/influxdb/v1/administration/config/#max-series-per-database)
## `error parsing query: found < >, expected identifier at line < >, char < >`
@ -326,7 +325,7 @@ The maximum valid timestamp is `9223372036854775806` or `2262-04-11T23:47:16.854
The `cache maximum memory size exceeded` error occurs when the cached
memory size increases beyond the
[`cache-max-memory-size` setting](/influxdb/v1/administration/config/#cache-max-memory-size)
in the configuration file.
By default, `cache-max-memory-size` is set to `1g`.
@ -398,11 +397,15 @@ This error occurs when the Docker container cannot read files on the host machin
#### Make host machine files readable to Docker
1. Create a directory, and then copy files to import into InfluxDB to this directory.
2. When you launch the Docker container, mount the new directory on the InfluxDB container by running the following command:
```bash
docker run -v /dir/path/on/host:/dir/path/in/container
```
3. Verify the Docker container can read host machine files by running the following command:
```bash
influx -import -path=/path/in/container
```
View File
@ -164,7 +164,7 @@ an RP every 30 minutes.
You may need to wait for the next RP check for InfluxDB to drop data that are
outside the RP's new `DURATION` setting.
The 30 minute interval is
[configurable](/influxdb/v1/administration/config/#check-interval).
Second, altering both the `DURATION` and `SHARD DURATION` of an RP can result in
unexpected data retention.
@ -623,9 +623,9 @@ Avoid using the same name for a tag and field key. If you inadvertently add the
#### Example
1. [Launch `influx`](/influxdb/v1/tools/shell/#launch-influx).
2. Write the following points to create both a field and tag key with the same name `leaves`:
```bash
# create the `leaves` tag key
@ -635,7 +635,7 @@ Avoid using the same name for a tag and field key. If you inadvertently add the
INSERT grape leaves=5
```
3. If you view both keys, you'll notice that neither key includes `_1`:
```bash
# show the `leaves` tag key
@ -655,7 +655,7 @@ Avoid using the same name for a tag and field key. If you inadvertently add the
leaves float
```
4. If you query the `grape` measurement, you'll see the `leaves` tag key has an appended `_1`:
```bash
# query the `grape` measurement
@ -668,7 +668,7 @@ Avoid using the same name for a tag and field key. If you inadvertently add the
1574128238044155000 5.00
```
5. To query a duplicate key name, you **must drop** `_1` **and include** `::tag` or `::field` after the key:
```bash
# query duplicate keys using the correct syntax
@ -693,9 +693,9 @@ the allotted memory.
#### Remove a duplicate key
1. [Launch `influx`](/influxdb/v1/tools/shell/#launch-influx).
2. Use the following queries to remove a duplicate key.
```sql
@ -1093,39 +1093,39 @@ time az hostname val_1 val_2
To store both points:
- Introduce an arbitrary new tag to enforce uniqueness.
Old point: `cpu_load,hostname=server02,az=us_west,uniq=1 val_1=24.5,val_2=7 1234567890000000`
New point: `cpu_load,hostname=server02,az=us_west,uniq=2 val_1=5.24 1234567890000000`
After writing the new point to InfluxDB:
```sql
> SELECT * FROM "cpu_load" WHERE time = 1234567890000000
name: cpu_load
--------------
time az hostname uniq val_1 val_2
1970-01-15T06:56:07.89Z us_west server02 1 24.5 7
1970-01-15T06:56:07.89Z us_west server02 2 5.24
```
- Increment the timestamp by a nanosecond.
Old point: `cpu_load,hostname=server02,az=us_west val_1=24.5,val_2=7 1234567890000000`
New point: `cpu_load,hostname=server02,az=us_west val_1=5.24 1234567890000001`
After writing the new point to InfluxDB:
```sql
> SELECT * FROM "cpu_load" WHERE time >= 1234567890000000 and time <= 1234567890000001
name: cpu_load
--------------
time az hostname val_1 val_2
1970-01-15T06:56:07.89Z us_west server02 24.5 7
1970-01-15T06:56:07.890000001Z us_west server02 5.24
```
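Scripted, the one-nanosecond bump is just integer arithmetic on the line protocol timestamp; a minimal sketch using the points above:

```bash
# Emit the old point, then the new point with its timestamp
# incremented by one nanosecond so it does not overwrite the first.
ts=1234567890000000
echo "cpu_load,hostname=server02,az=us_west val_1=24.5,val_2=7 ${ts}"
echo "cpu_load,hostname=server02,az=us_west val_1=5.24 $((ts + 1))"
```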
## What newline character does the InfluxDB API require?
@ -1207,27 +1207,31 @@ To keep regular expressions and quoting simple, avoid using the following charac
## When should I single quote and when should I double quote when writing data?
- Avoid single quoting and double quoting identifiers when writing data via
line protocol; see the examples below for how writing identifiers with quotes
can complicate queries. Identifiers are database names, retention policy
names, user names, measurement names, tag keys, and field keys.
**Not recommended approaches (complicate queries):**
Write with a double-quoted measurement: `INSERT "bikes" bikes_available=3`
Applicable query: `SELECT * FROM "\"bikes\""`
Write with a single-quoted measurement: `INSERT 'bikes' bikes_available=3`
Applicable query: `SELECT * FROM "\'bikes\'"`
**Recommended approach (simpler queries):**
Write with an unquoted measurement: `INSERT bikes bikes_available=3`
Applicable query: `SELECT * FROM "bikes"`
- Double quote field values that are strings--for example:
Write: `INSERT bikes happiness="level 2"`
Applicable query: `SELECT * FROM "bikes" WHERE "happiness"='level 2'`
- Special characters should be escaped with a backslash and not placed in quotes--for example:
Write: `INSERT wacky va\"ue=4`
Applicable query: `SELECT "va\"ue" FROM "wacky"`
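The rules above can be sanity-checked without a server; a sketch that simply prints the recommended write and its matching query (names taken from the examples):

```bash
# Recommended write: unquoted measurement, double-quoted string field value
write='INSERT bikes happiness="level 2"'
# Matching query: double quotes for identifiers, single quotes for string literals
query='SELECT * FROM "bikes" WHERE "happiness"='\''level 2'\'
printf '%s\n%s\n' "$write" "$query"
```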
For more information, see [Line protocol](/influxdb/v1/write_protocols/).
@ -1255,6 +1259,6 @@ The default shard group duration is one week and if your data cover several hund
Having an extremely high number of shards is inefficient for InfluxDB.
Increase the shard group duration for your data's retention policy with the [`ALTER RETENTION POLICY` query](/influxdb/v1/query_language/manage-database/#modify-retention-policies-with-alter-retention-policy).
Second, temporarily lowering the [`cache-snapshot-write-cold-duration` configuration setting](/influxdb/v1/administration/config/#cache-snapshot-write-cold-duration).
If you're writing a lot of historical data, the default setting (`10m`) can cause the system to hold all of your data in cache for every shard.
Temporarily lowering the `cache-snapshot-write-cold-duration` setting to `10s` while you write the historical data makes the process more efficient.
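For example, raising the shard group duration on a long-duration retention policy might look like this (the database and retention policy names are placeholders):

```sql
ALTER RETENTION POLICY "autogen" ON "mydb" SHARD DURATION 12w
```

The `cache-snapshot-write-cold-duration` setting lives in the `[data]` section of the configuration file and can be restored to its default after the historical write finishes.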
View File
@ -541,6 +541,9 @@ The number of Flux query requests served.
#### fluxQueryReqDurationNs
The duration (wall-time), in nanoseconds, spent executing Flux query requests.
#### fluxQueryRespBytes
The sum of all bytes returned in Flux query responses.
#### pingReq
The number of times InfluxDB HTTP server served the `/ping` HTTP endpoint.
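These counters, including the new `fluxQueryRespBytes`, are visible at runtime through the `/debug/vars` HTTP endpoint; a sketch assuming a default local install:

```bash
# Fetch runtime statistics and filter for the Flux query counters
curl -s http://localhost:8086/debug/vars | grep fluxQuery
```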
View File
@ -100,7 +100,7 @@ influxdb:
latest: v2.7
latest_patches:
v2: 2.7.12
v1: 1.12.1
latest_cli:
v2: 2.7.5
ai_sample_questions:
@ -183,9 +183,9 @@ enterprise_influxdb:
menu_category: self-managed
list_order: 5
versions: [v1]
latest: v1.12
latest_patches:
v1: 1.12.1
ai_sample_questions:
- How can I configure my InfluxDB v1 Enterprise server?
- How do I replicate data between InfluxDB v1 Enterprise and OSS?