fix(v1): DAR 451: Fix import command examples and cleanup for Enterprise

Fixes https://github.com/influxdata/DAR/issues/451
Fixes other examples that don't agree with their descriptions.
Cleanup in backup, influx, and influxd-inspect docs for Enterprise.
pull/5692/head
Jason Stirnaman 2024-11-27 17:20:52 -06:00
parent 33ebe1c848
commit 7991297460
4 changed files with 180 additions and 98 deletions


menu:
parent: Administration
---
- [Overview](#overview)
- [Backup and restore utilities](#backup-and-restore-utilities)
- [Exporting and importing data](#exporting-and-importing-data)
Use the InfluxDB Enterprise `backup`, `restore`, `export`, and `import` utilities
to prevent unexpected data loss and preserve the ability to restore data if it
is ever lost.
## Overview
When deploying InfluxDB Enterprise in production environments, you should have a strategy and procedures for backing up and restoring your InfluxDB Enterprise clusters to be prepared for unexpected data loss.
You can use these tools in your backup and restore procedures to:
- Provide disaster recovery after unexpected events
- Migrate data to new environments or servers
- Restore clusters to a consistent state
- Export and import data for debugging
Depending on the volume of data to be protected and your application requirements, InfluxDB Enterprise offers two methods, described below, for managing backups and restoring data:
- [Backup and restore utilities](#backup-and-restore-utilities) — For most applications
- [Exporting and importing data](#exporting-and-importing-data) — For large datasets
> [!Note]
> #### Back up and restore between InfluxDB Enterprise and OSS
>
> Use the `backup` and `restore` utilities in
> [InfluxDB Enterprise](#backup-and-restore-utilities) and
> [InfluxDB OSS (version 1.5 and later)](/influxdb/v1/administration/backup-and-restore/) to:
>
> - Restore InfluxDB Enterprise backup files to InfluxDB OSS instances.
> - Back up InfluxDB OSS data that can be restored in InfluxDB Enterprise clusters.
## Backup and restore utilities
Use InfluxDB Enterprise backup and restore utilities to:
- Back up and restore multiple databases at a time.
- Back up specific time ranges.
- Create backup files compatible with InfluxDB OSS.
InfluxDB Enterprise supports backing up and restoring data in a cluster,
a single database and retention policy, and single shards.
Most InfluxDB Enterprise applications can use the backup and restore utilities.
Use the `backup` and `restore` utilities to back up and restore between `influxd`
instances with the same versions or with only minor version differences.
For example, you can back up from {{< latest-patch version="1.10" >}} and restore on {{< latest-patch >}}.
- [Backup utility](#backup-utility)
- [Examples](#examples)
- [Restore utility](#restore-utility)
- [Exporting and importing data](#exporting-and-importing-data)
- [Exporting data](#exporting-data)
- [Importing data](#importing-data)
- [Example: export and import for disaster recovery](#example-export-and-import-for-disaster-recovery)
### Backup utility
A backup creates a copy of the [metastore](/enterprise_influxdb/v1/concepts/glossary/#metastore) and [shard](/enterprise_influxdb/v1/concepts/glossary/#shard) data at that point in time and stores the copy in the specified directory.
To back up **only the cluster metastore**, use the `-strategy only-meta` backup option.
For more information, see how to [perform a metadata only backup](#perform-a-metadata-only-backup).
All backups include a manifest, a JSON file describing what was collected during the backup.
The filenames reflect the UTC timestamp of when the backup was created, for example:
See the `influxd-ctl` documentation for a complete list of the global `influxd-ctl` flags.
#### Back up a database and all retention policies
The following example stores incremental backups of the database
and all retention policies in the `./myfirstdb-allrp-backup` directory:
```bash
influxd-ctl backup -db myfirstdb ./myfirstdb-allrp-backup
```
#### Back up a database with a specific retention policy
The following example stores incremental backups in separate directories for the
specified database and retention policy combinations.
```bash
influxd-ctl backup -db myfirstdb -rp oneday ./myfirstdb-oneday-backup
influxd-ctl backup -db myfirstdb -rp autogen ./myfirstdb-autogen-backup
```
The output contains the status and backup file paths, for example:
```sh
backing up db=myfirstdb rp=oneday shard=8 to <USER_HOME>/myfirstdb-oneday-backup/myfirstdb.oneday.00008.00
backing up db=myfirstdb rp=autogen shard=10 to <USER_HOME>/myfirstdb-autogen-backup/myfirstdb.autogen.00010.00
```
#### Back up data from a specific time range
To back up data in a specific time range, use the `-start` and `-end` options:
```bash
influxd-ctl backup -db myfirstdb -start 2022-01-01T12:00:00Z -end 2022-01-31T11:59:00Z ./myfirstdb-jandata
```
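When scripting time-range backups, you can compute the `-start` and `-end` values instead of hard-coding them. The following sketch assumes GNU `date` (the `-d` option) and a hypothetical `myfirstdb` database; it only prints the command it would run:

```shell
# Compute an RFC3339 window covering the previous 24 hours (GNU date).
END=$(date -u +%Y-%m-%dT%H:%M:%SZ)
START=$(date -u -d '24 hours ago' +%Y-%m-%dT%H:%M:%SZ)

# Print (rather than run) the backup command for review.
echo influxd-ctl backup -db myfirstdb -start "$START" -end "$END" ./myfirstdb-daily-backup
```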
#### Perform an incremental backup
The following example shows how to run an incremental backup stored in the current directory.
If a backup already exists in the directory, `influxd-ctl` performs an incremental backup.
If no backup is found in the directory, then `influxd-ctl` creates a full backup of all data in InfluxDB.
```bash
# Syntax
influxd-ctl backup <path-to-backup-directory>

# Example: back up into the current directory
influxd-ctl backup .
```
#### Perform a full backup
The following example shows how to run a full backup stored in a specific directory.
The directory must already exist.
```bash
influxd-ctl backup -full <path-to-backup-directory>
```
#### Perform an incremental backup on a single database
Use the `-bind` option to specify a remote [meta node](/enterprise_influxdb/v1/concepts/glossary/#meta-node) to connect to.
The following example shows how to connect to a remote meta server and back up
a specific database into a given directory on the local system.
The directory must already exist.
```bash
# Syntax
influxd-ctl -bind <meta-node-host>:8091 backup -db <db-name> <path-to-backup-directory>

# Example: hypothetical meta node host "meta01"
influxd-ctl -bind meta01:8091 backup -db telegraf ./telegrafbackup
```
#### Perform a metadata only backup
The following example shows how to create and store a metadata-only backup
in a specific directory.
The directory must already exist.
```bash
influxd-ctl backup -strategy only-meta backup_dir
```

The output reports the transfer, for example:

```
Backed up to backup_dir in 51.388233ms, transferred 481 bytes
```
### Restore utility
> [!Note]
> #### Disable anti-entropy (AE) before restoring a backup
>
> Before restoring a backup, stop the anti-entropy (AE) service (if enabled) on **each data node in the cluster, one at a time**.
>
> 1. Stop the `influxd` service.
> 2. Set `[anti-entropy].enabled` to `false` in the influx configuration file (by default, influx.conf).
See the syntax for [restoring from a full backup](#restore-from-a-full-backup).
## Exporting and importing data
For most InfluxDB Enterprise applications, the [backup and restore utilities](#backup-and-restore-utilities) provide the tools you need for your backup and restore strategy. However, in some cases, the standard backup and restore utilities might not adequately handle the volumes of data in your application.
As an alternative to the standard backup and restore utilities, use the InfluxDB `influx_inspect export` and `influx -import` commands to create backup and restore procedures for your disaster recovery and backup strategy. These commands can be executed manually or included in shell scripts that run the export and import operations at scheduled intervals (example below).
- [Exporting data](#exporting-data)
- [Importing data](#importing-data)
- [Example: export and import for disaster recovery](#example-export-and-import-for-disaster-recovery)
### Exporting data
Use the [`influx_inspect export` command](/enterprise_influxdb/v1/tools/influx_inspect#export) to export data in line protocol format from your InfluxDB Enterprise cluster.
Options include the following:
- `-database`: Export all or specific databases
- `-start` and `-end`: Filter with starting and ending timestamps
- `-compress`: Use GNU zip (gzip) compression for smaller files and faster exports
The following example shows how to export data filtered to one day and compressed
for optimal speed and file size:
```bash
influx_inspect export \
  -database DATABASE_NAME \
-compress \
-start 2019-05-19T00:00:00.000Z \
-end 2019-05-19T23:59:59.999Z
```
The exported file contains the following:
```sh
# DDL
CREATE DATABASE <DATABASE_NAME> WITH NAME <RETENTION_POLICY>
# DML
# CONTEXT-DATABASE:<DATABASE_NAME>
# CONTEXT-RETENTION-POLICY:<RETENTION_POLICY>
<LINE_PROTOCOL_DATA>
```
- `DDL`: An InfluxQL `CREATE` statement to create the target database when [importing the data](#importing-data)
- `DML`: Context metadata that specifies the target database and retention policy
for [importing the data](#importing-data)
- The line protocol data
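Because the `DML` context comments determine where the data is imported, a script can read them before running the import. A sketch over an illustrative sample file (the database and measurement names are made up):

```shell
# Create a small sample export file (illustrative content).
cat > sample_export.txt <<'EOF'
# DDL
CREATE DATABASE telegraf WITH NAME autogen
# DML
# CONTEXT-DATABASE:telegraf
# CONTEXT-RETENTION-POLICY:autogen
cpu,host=server01 usage_idle=98.2 1439856000000000000
EOF

# Extract the import target from the DML context metadata.
DB=$(sed -n 's/^# CONTEXT-DATABASE:\(.*\)$/\1/p' sample_export.txt)
RP=$(sed -n 's/^# CONTEXT-RETENTION-POLICY:\(.*\)$/\1/p' sample_export.txt)
echo "import target: $DB.$RP"
```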
For details on optional settings and usage, see [`influx_inspect export` command](/enterprise_influxdb/v1/tools/influx_inspect#export).
### Importing data
After exporting the data in line protocol format, you can import the data using the [`influx -import` CLI command](/enterprise_influxdb/v1/tools/influx-cli/use-influx/#import-data-from-a-file).
In the following example, the compressed data file (in GNU zip format) is imported into the database
specified in the file's `DML` metadata.
```bash
influx -import -path=PATH_TO_EXPORT_FILE -compressed
```
For details on using the `influx -import` command, see [Import data from a file with `-import`](/enterprise_influxdb/v1/tools/influx-cli/use-influx/#import-data-from-a-file).
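Since `-compressed` expects a gzip file, verifying the archive before importing can save a failed run. A sketch with an illustrative file name and contents:

```shell
# Create and gzip a small line-protocol file (contents illustrative).
printf '# DML\n# CONTEXT-DATABASE:mydb\ncpu,host=a value=1 1439856000000000000\n' > points.txt
gzip -f points.txt   # produces points.txt.gz

# Test archive integrity before passing it to `influx -import -compressed`.
gzip -t points.txt.gz && echo "gzip OK"
```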
### Example: export and import for disaster recovery
For an example of using the export and import approach for disaster recovery, see the InfluxDays 2019 presentation ["Architecting for Disaster Recovery"](https://www.youtube.com/watch?v=LyQDhSdnm4A). In this presentation, Capital One discusses the following:
- Exporting data every 15 minutes from an active InfluxDB Enterprise cluster to an AWS S3 bucket.
- Replicating the export file in the S3 bucket using the AWS S3 copy command.
- Importing data every 15 minutes from the AWS S3 bucket to an InfluxDB Enterprise cluster available for disaster recovery.
- Advantages of the export-import approach over the standard backup and restore utilities for large volumes of data.
- Managing users and scheduled exports and imports with a custom administration tool.
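A recurring export like the one described above is typically driven by a small script on a schedule (for example, cron). The following dry-run sketch only prints the commands it would execute; the database name, bucket, and paths are hypothetical, GNU `date` is assumed, and the `aws` CLI is assumed for the S3 copy:

```shell
# Build a timestamped export file name (GNU date).
NOW=$(date -u +%Y%m%dT%H%M%SZ)
EXPORT_FILE="export-${NOW}.lp.gz"

# Dry run: print the export and S3 copy steps instead of executing them.
echo influx_inspect export -database mydb -compress -out "/tmp/${EXPORT_FILE}"
echo aws s3 cp "/tmp/${EXPORT_FILE}" "s3://my-dr-bucket/${EXPORT_FILE}"
```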


v2: /influxdb/v2/reference/cli/influx/
---
The `influx` command line interface (CLI) provides an interactive shell for the HTTP API associated with `influxd`.
It includes commands for writing and querying data, and managing many aspects of InfluxDB, including databases, organizations, users, and tasks.
## Usage
```bash
influx [flags]
```
## Flags {.no-shorthand}
| Flag | Description |


---
title: Use influx - InfluxDB command line interface
description: InfluxDB's command line interface (`influx`) is an interactive shell for the HTTP API.
menu:
  enterprise_influxdb_v1:
    name: Use influx CLI
    weight: 10
    parent: influx
aliases:
  - /enterprise_influxdb/v1/tools/influx-cli/use-influx/
  - /enterprise_influxdb/v1/tools/shell
  - /enterprise_influxdb/v1/tools/use-influx/
related:
  - /enterprise_influxdb/v1/administration/backup-and-restore/
---
The `influx` command line interface (CLI) provides an interactive shell for the HTTP API associated with `influxd`.
Use `influx` to write data (manually or from a file), query data interactively, view query output in different formats, and manage resources in InfluxDB.
* [Launch `influx`](#launch-influx)
* [`influx` Arguments](#influx-arguments)
* [`influx` Commands](#influx-commands)
## Launch `influx`
The `influx` CLI is included when you [install InfluxDB Enterprise](/enterprise_influxdb/v1/introduction/installation/).
If you [install](https://influxdata.com/downloads/) InfluxDB via a package manager, the CLI is installed at `/usr/bin/influx` (`/usr/local/bin/influx` on macOS).
To access the CLI, first launch the `influxd` database process and then launch `influx` in your terminal.
```bash
influx
```
If successfully connected to an InfluxDB node, the output is the following:
```bash
Connected to http://localhost:8086 version {{< latest-patch >}}
InfluxDB shell version: {{< latest-patch >}}
>
```
_The versions of InfluxDB and the CLI should be identical. If not, parsing issues can occur with queries._
In the prompt, you can enter InfluxQL queries as well as CLI-specific commands.
Enter `help` to get a list of available commands.
Use `Ctrl+C` to cancel if you want to cancel a long-running InfluxQL query.
## Environment Variables
List of host names that should **not** go through any proxy. If set to an asterisk (`*`), all hosts bypass the proxy. For example:

```bash
NO_PROXY=123.45.67.89,123.45.67.90
```
## `influx` arguments
Arguments specify connection, write, import, and output options for the CLI session.
`influx` provides the following arguments:
`-h`, `-help`
List `influx` arguments
`-compressed`
Set to true if the import file is compressed.
The host to which `influx` connects.
By default, InfluxDB runs on localhost.
`-import`
Import new data or [exported data](/enterprise_influxdb/v1/administration/backup-and-restore/#exporting-data) from a file.
See [-import](#import-data-from-a-file).
`-password 'password'`
The password `influx` uses to connect to the server.
Alternatively, set the password for the CLI with the `INFLUX_PASSWORD` environment variable.
`-path`
The path to the file to import.
Use with [-import](#import-data-from-a-file).
`-port 'port #'`
The port to which `influx` connects.
Alternatively, set the username for the CLI with the `INFLUX_USERNAME` environment variable.
`-version`
Display the InfluxDB version and exit.
The following sections provide detailed examples for some arguments, including `-execute`, `-format`, and `-import`.
- [Execute an InfluxQL command and quit with `-execute`](#execute-an-influxql-command-and-quit-with--execute)
- [Specify the format of the server responses with `-format`](#specify-the-format-of-the-server-responses-with--format)
- [Import data from a file](#import-data-from-a-file)
### Execute an InfluxQL command and quit with `-execute`
Execute queries that don't require a database specification:
```bash
influx -format=json -pretty
```
### Import data from a file
The import file has two sections:
* **DDL (Data Definition Language)**: Contains the [InfluxQL commands](/enterprise_influxdb/v1/query_language/manage-database/) for creating the relevant [database](/enterprise_influxdb/v1/concepts/glossary/) and managing the [retention policy](/enterprise_influxdb/v1/concepts/glossary/#retention-policy-rp).
If your database and retention policy already exist, your file can skip this section.
* **DML (Data Manipulation Language)**: Context metadata that specifies the database and (if desired) retention policy for the import, followed by the data in [line protocol](/enterprise_influxdb/v1/concepts/glossary/#influxdb-line-protocol).
Example:
```
treasures,captain_id=crunch value=109 1439858880
```
Command:
```
influx -import -path=datarrr.txt -precision=s
```
Results:
Things to note about `-import`:
- To throttle the import, use `-pps` to set the number of points per second to ingest. By default, `pps` is zero and `influx` does not throttle importing.
- To import a file compressed with `gzip` (GNU zip), include the `-compressed` flag.
- Include timestamps in the data file.
If points don't include a timestamp, InfluxDB assigns the same timestamp to those points, which can result in unintended [duplicate points or overwrites](/enterprise_influxdb/v1/troubleshooting/frequently-asked-questions/#how-does-influxdb-handle-duplicate-points).
- If your data file contains more than 5,000 points, consider splitting it into smaller files to write data to InfluxDB in batches.
We recommend writing points in batches of 5,000 to 10,000 for optimal performance.
Writing smaller batches increases the number of HTTP requests, which can negatively impact performance.
By default, the HTTP request times out after five seconds. Although InfluxDB continues attempting to write the points after a timeout, you won't receive confirmation of a successful write.
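The batching advice above can be applied with standard tools before importing. This sketch generates 12,000 illustrative points (second-precision timestamps) and splits them into 5,000-line chunks that could each be imported separately:

```shell
# Generate 12,000 illustrative line-protocol points.
seq 1 12000 | awk '{ printf "cpu,host=server%02d value=%d %d\n", $1 % 10, $1, 1439856000 + $1 }' > big.txt

# Split into files of at most 5,000 lines: big_chunk_aa, big_chunk_ab, big_chunk_ac.
split -l 5000 big.txt big_chunk_

# Each chunk is a valid line-protocol file for a separate import.
ls big_chunk_* | wc -l
```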
> **Note:** To export data from InfluxDB version 0.8.9, see [Exporting from 0.8.9](https://github.com/influxdb/influxdb/blob/1.8/importer/README.md).
For more information, see [exporting and importing data](/enterprise_influxdb/v1/administration/backup-and-restore/#exporting-and-importing-data).
## `influx` commands


The name of the database.
The path to the `data` directory.
Default value is `$HOME/.influxdb/data`.
See the [file system layout](/enterprise_influxdb/v1/concepts/file-system-layout/) for InfluxDB on your system.
##### `[ -max-cache-size ]`
The maximum size of the cache before it starts rejecting writes.
Flag to enable output in verbose mode.
The directory for the WAL (Write Ahead Log) files.
Default value is `$HOME/.influxdb/wal`.
See the [file system layout](/enterprise_influxdb/v1/concepts/file-system-layout/) for InfluxDB on your system.
#### Examples
##### Converting all shards on a node
```
$ influx_inspect buildtsi -datadir /var/lib/influxdb/data -waldir /var/lib/influxdb/wal
```
##### Converting all shards for a database
```
$ influx_inspect buildtsi -database mydb -datadir /var/lib/influxdb/data -waldir /var/lib/influxdb/wal
```
##### Converting a specific shard
```
$ influx_inspect buildtsi -database stress -shard 1 -datadir /var/lib/influxdb/data -waldir /var/lib/influxdb/wal
```
### `check-schema`
Default value is `""`.
##### `-datadir <data_dir>`
The path to the `data` directory.
Default value is `$HOME/.influxdb/data`.
See the [file system layout](/enterprise_influxdb/v1/concepts/file-system-layout/) for InfluxDB on your system.
##### [ `-end <timestamp>` ]
YYYY-MM-DDTHH:MM:SS+07:00
##### [ `-lponly` ]
Output data in line protocol format only.
Does not output data definition language (DDL) statements (such as `CREATE DATABASE`) or DML context metadata (such as `# CONTEXT-DATABASE`).
##### [ `-out <export_dir>` ]
The location for the export file.
Default value is `$HOME/.influxdb/export`.
##### [ `-retention <rp_name> ` ]
The timestamp string must be in [RFC3339 format](https://tools.ietf.org/html/rfc3339).
##### [ `-waldir <wal_dir>` ]
Path to the [WAL](/enterprise_influxdb/v1/concepts/glossary/#wal-write-ahead-log) directory.
Default value is `$HOME/.influxdb/wal`.
See the [file system layout](/enterprise_influxdb/v1/concepts/file-system-layout/) for InfluxDB on your system.
#### Examples
```bash
influx_inspect export -compress
```
##### Export data from a specific database and retention policy
```bash
influx_inspect export -database DATABASE_NAME -retention RETENTION_POLICY
```
##### Output file
```bash
# DDL
CREATE DATABASE <DATABASE_NAME>
CREATE RETENTION POLICY <RETENTION_POLICY> ON <DATABASE_NAME> DURATION inf REPLICATION 1
# DML
# CONTEXT-DATABASE:<DATABASE_NAME>
# CONTEXT-RETENTION-POLICY:<RETENTION_POLICY>
randset value=97.9296104805 1439856000000000000
randset value=25.3849066842 1439856100000000000
```
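To sanity-check an export like the one above before importing it, you can count the data points while skipping the DDL statements and comment metadata. A sketch over an illustrative copy of the file:

```shell
# Recreate a sample export file like the one shown above (illustrative).
cat > randset_export.txt <<'EOF'
# DDL
CREATE DATABASE mydb WITH NAME autogen
# DML
# CONTEXT-DATABASE:mydb
# CONTEXT-RETENTION-POLICY:autogen
randset value=97.9296104805 1439856000000000000
randset value=25.3849066842 1439856100000000000
EOF

# Count non-empty lines that are neither comments nor DDL statements.
POINTS=$(awk '!/^#/ && !/^CREATE/ && NF' randset_export.txt | wc -l)
echo "points: $POINTS"
```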