commit ddf17d2339

@@ -30,24 +30,25 @@ influx write [command]
| [dryrun](/influxdb/v2.0/reference/cli/influx/write/dryrun) | Write to stdout instead of InfluxDB |

## Flags

| Flag | | Description | Input type | {{< cli/mapped >}} |
|:---- |:--- |:----------- |:----------:|:------------------ |
| `-c` | `--active-config` | CLI configuration to use for command | string | |
| `-b` | `--bucket` | Bucket name | string | `INFLUX_BUCKET_NAME` |
| | `--bucket-id` | Bucket ID | string | `INFLUX_BUCKET_ID` |
| | `--configs-path` | Path to `influx` CLI configurations (default `~/.influxdbv2/configs`) | string | `INFLUX_CONFIGS_PATH` |
| | `--debug` | Output errors to stderr | | |
| | `--encoding` | Character encoding of input (default `UTF-8`) | string | |
| `-f` | `--file` | File to import | string | |
| | `--format` | Input format (`lp` or `csv`, default `lp`) | string | |
| | `--header` | Prepend header line to CSV input data | string | |
| `-h` | `--help` | Help for the `dryrun` command | | |
| | `--host` | HTTP address of InfluxDB (default `http://localhost:8086`) | string | `INFLUX_HOST` |
| `-o` | `--org` | Organization name | string | `INFLUX_ORG` |
| | `--org-id` | Organization ID | string | `INFLUX_ORG_ID` |
| `-p` | `--precision` | Precision of the timestamps (default `ns`) | string | `INFLUX_PRECISION` |
| | `--skipHeader` | Skip first n rows of input data | integer | |
| | `--skipRowOnError` | Output CSV errors to stderr, but continue processing | | |
| | `--skip-verify` | Skip TLS certificate verification | | |
| `-t` | `--token` | Authentication token | string | `INFLUX_TOKEN` |
| `-u` | `--url` | URL to import data from | string | |

| Flag | | Description | Input type | {{< cli/mapped >}} |
|:-----|:-------------------|:----------------------------------------------------------------------|:----------:|:----------------------|
| `-c` | `--active-config` | CLI configuration to use for command | string | |
| `-b` | `--bucket` | Bucket name | string | `INFLUX_BUCKET_NAME` |
| | `--bucket-id` | Bucket ID | string | `INFLUX_BUCKET_ID` |
| | `--configs-path` | Path to `influx` CLI configurations (default `~/.influxdbv2/configs`) | string | `INFLUX_CONFIGS_PATH` |
| | `--debug` | Output errors to stderr | | |
| | `--encoding` | Character encoding of input (default `UTF-8`) | string | |
| | `--error-file` | Path to a file used for recording rejected row errors | string | |
| `-f` | `--file` | File to import | string | |
| | `--format` | Input format (`lp` or `csv`, default `lp`) | string | |
| | `--header` | Prepend header line to CSV input data | string | |
| `-h` | `--help` | Help for the `dryrun` command | | |
| | `--host` | HTTP address of InfluxDB (default `http://localhost:9999`) | string | `INFLUX_HOST` |
| `-o` | `--org` | Organization name | string | `INFLUX_ORG` |
| | `--org-id` | Organization ID | string | `INFLUX_ORG_ID` |
| `-p` | `--precision` | Precision of the timestamps (default `ns`) | string | `INFLUX_PRECISION` |
| | `--skipHeader` | Skip first *n* rows of input data | integer | |
| | `--skipRowOnError` | Output CSV errors to stderr, but continue processing | | |
| | `--skip-verify` | Skip TLS certificate verification | | |
| `-t` | `--token` | Authentication token | string | `INFLUX_TOKEN` |
| `-u` | `--url` | URL to import data from | string | |
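
For a rough sense of how these flags combine, here is a sketch of a CSV import that skips malformed rows instead of aborting. The bucket, organization, and file path are placeholders, not values from this page:

```
# Placeholder names: import a CSV file into a bucket, skipping
# rows that fail to parse instead of stopping the import.
influx write \
  -b example-bucket \
  -o example-org \
  --format csv \
  --skipRowOnError \
  -f path/to/data.csv
```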

@@ -27,7 +27,7 @@ annotations.

{{% warn %}}
The Flux [`csv.from` function](/influxdb/v2.0/reference/flux/stdlib/csv/from/) only supports
**annotated CSV**, not **extended annotated CSV**.
[annotated CSV](/influxdb/v2.0/reference/syntax/annotated-csv/), not extended annotated CSV.
{{% /warn %}}

To write data to InfluxDB, line protocol must include the following:

@@ -45,6 +45,7 @@ Extended annotated CSV extends and adds the following annotations:
- [datatype](#datatype)
- [constant](#constant)
- [timezone](#timezone)
- [concat](#concat)

### datatype
Use the `#datatype` annotation to specify the [line protocol element](/influxdb/v2.0/reference/syntax/line-protocol/#elements-of-line-protocol)

@@ -312,6 +313,13 @@ Use the `#timezone` annotation to update timestamps to a specific timezone.
By default, timestamps are parsed as UTC.
Use the `±HHmm` format to specify the timezone offset relative to UTC.

### strict mode
Use the `:strict` keyword to fail on loss of precision when parsing `long` or `unsignedLong` data types.
Turn on strict mode by using a column data type that ends with `strict`, such as `long:strict`.
In strict mode, parsing a `long` or `unsignedLong` value from a string that contains fraction digits causes the whole CSV row to fail.
When not in strict mode, the value is truncated and a warning is printed, such as `line x: column y: '1.2' truncated to '1' to fit into long data type`.
For more information on strict parsing, see the [package documentation](https://github.com/influxdata/influxdb/tree/master/pkg/csv2lp).
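
As a minimal sketch (the measurement, column names, values, and the exact layout of the `#datatype` annotation are assumptions for illustration; the annotation itself is described in the [datatype](#datatype) section above), a `long:strict` column rejects a row whose value has fraction digits, while a plain `long` column would truncate it with the warning shown above:

```
# Illustrative only: with long:strict, the row containing 1.2 fails;
# with plain long, it would be truncated to 1 and produce a warning.
cat <<'CSV' | influx write -b example-bucket --format csv
#datatype measurement,long:strict
m,count
example,1.2
CSV
```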

##### Timezone examples
| Timezone | Offset |
|:-------- | ------: |

@@ -325,6 +333,28 @@ Use the `±HHmm` format to specify the timezone offset relative to UTC.
#timezone -0600
```

### concat

The `#concat` annotation adds a new column whose value is concatenated from existing columns, using a bash-like string interpolation literal with variables that reference existing column labels.

For example:

```
#concat,string,fullName,${firstName} ${lastName}
```

This is especially useful when constructing a timestamp from multiple columns.
For example, the following annotation combines the given CSV columns into a timestamp:

```
#concat,dateTime:2006-01-02,${Year}-${Month}-${Day}

Year,Month,Day,Hour,Minute,Second,Tag,Value
2020,05,22,00,00,00,test,0
2020,05,22,00,05,00,test,1
2020,05,22,00,10,00,test,2
```
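
To actually import a file like the one above, something along these lines should work (the bucket name and path are placeholders); `--format csv` tells `influx write` to treat the input as CSV rather than line protocol:

```
# Placeholder names: write the annotated CSV shown above to a bucket.
influx write -b example-bucket --format csv -f path/to/example.csv
```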

## Define custom column separator
If columns are delimited using a character other than a comma, use the `sep`
keyword to define a custom separator **in the first line of your CSV file**.
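
For instance, a rough sketch of a semicolon-delimited file might start like this (the `sep=;` form, the file path, and the sample rows are assumptions for illustration, not an excerpt from this page):

```
# Assumed syntax: a "sep=;" first line switches the column separator to a
# semicolon for the rest of the file; path and rows are placeholders.
cat > path/to/example.csv <<'CSV'
sep=;
m;count;value
example;1;2.5
CSV
```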

@@ -226,6 +226,14 @@ influx write -b example-bucket \
Skipped rows are ignored and are not written to InfluxDB.
{{% /warn %}}

Use the `--error-file` flag to record errors to a file.
The error file identifies all rows that cannot be imported and includes error messages for debugging.
For example:

```
error : line 3: column 'a': '1.1' cannot fit into long data type
cpu,1.1
```
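
A sketch of how this might look on the command line (bucket, input file, and error-file paths are placeholders); `--skipRowOnError` keeps the import going while rejected rows and their error messages are collected in the file passed to `--error-file`:

```
# Placeholder paths: rejected rows are recorded in rejected.csv while
# the remaining rows continue to be written to the bucket.
influx write -b example-bucket --format csv -f path/to/data.csv \
  --skipRowOnError \
  --error-file path/to/rejected.csv
```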

## Advanced examples

- [Define constants](#define-constants)