Merge pull request #1050 from influxdata/write/csv

Write CSV data
pull/1083/head
Scott Anderson 2020-05-28 16:59:36 -06:00 committed by GitHub
commit c395f64292
6 changed files with 919 additions and 27 deletions

View File

@ -9,6 +9,9 @@ menu:
parent: influx
weight: 101
v2.0/tags: [write]
related:
- /v2.0/write-data/
- /v2.0/write-data/csv/
---
The `influx write` command writes data to InfluxDB via stdin or from a specified file.
@ -31,11 +34,16 @@ influx write [command]
|:---- |:----------- |:----------:|:------------------ |
| `-b`, `--bucket` | Bucket name | string | `INFLUX_BUCKET_NAME` |
| `--bucket-id` | Bucket ID | string | `INFLUX_BUCKET_ID` |
| `--debug` | Output errors to stderr | | |
| `--encoding` | Character encoding of input (default `UTF-8`) | string | |
| `-f`, `--file` | File to import | string | |
| `--format` | Input format (`lp` or `csv`, default `lp`) | string | |
| `--header` | Prepend header line to CSV input data | string | |
| `-h`, `--help` | Help for the `write` command | | |
| `-o`, `--org` | Organization name | string | `INFLUX_ORG` |
| `--org-id` | Organization ID | string | `INFLUX_ORG_ID` |
| `-p`, `--precision` | Precision of the timestamps (default `ns`) | string | `INFLUX_PRECISION` |
| `--skipHeader` | Skip first n rows of input data | integer | |
| `--skipRowOnError` | Output CSV errors to stderr, but continue processing | | |
{{% cli/influx-global-flags %}}
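For example, a write command that combines several of these flags might look like the following sketch (bucket, organization, and file path are placeholders):
```sh
influx write \
  -b example-bucket \
  -o example-org \
  -p s \
  --format csv \
  -f path/to/data.csv
```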

View File

@ -28,11 +28,16 @@ influx write dryrun [flags]
|:---- |:----------- |:----------:|:------------------ |
| `-b`, `--bucket` | Bucket name | string | `INFLUX_BUCKET_NAME` |
| `--bucket-id` | Bucket ID | string | `INFLUX_BUCKET_ID` |
| `--debug` | Output errors to stderr | | |
| `--encoding` | Character encoding of input (default `UTF-8`) | string | |
| `-f`, `--file` | File to import | string | |
| `--format` | Input format (`lp` or `csv`, default `lp`) | string | |
| `--header` | Prepend header line to CSV input data | string | |
| `-h`, `--help` | Help for the `dryrun` command | | |
| `-o`, `--org` | Organization name | string | `INFLUX_ORG` |
| `--org-id` | Organization ID | string | `INFLUX_ORG_ID` |
| `-p`, `--precision` | Precision of the timestamps (default `ns`) | string | `INFLUX_PRECISION` |
| `--skipHeader` | Skip first n rows of input data | integer | |
| `--skipRowOnError` | Output CSV errors to stderr, but continue processing | | |
{{% cli/influx-global-flags %}}

View File

@ -90,6 +90,8 @@ Related entries: [implicit block](#implicit-block), [explicit block](#explicit-b
A data type with two possible values: true or false.
By convention, you can express `true` as the integer `1` and false as the integer `0` (zero).
In [annotated CSV](/v2.0/reference/syntax/annotated-csv/), columns that contain
boolean values are annotated with the `boolean` datatype.
### bucket
@ -337,8 +339,10 @@ Related entries: [block](#block)
### float
A real number written with a decimal point dividing the integer and fractional parts (`1.0`, `3.14`, `-20.1`).
InfluxDB supports 64-bit float values.
In [annotated CSV](/v2.0/reference/syntax/annotated-csv/), columns that contain
float values are annotated with the `double` datatype.
### flush interval
@ -466,9 +470,14 @@ Related entries: [aggregator plugin](#aggregator-plugin), [collection interval](
An entity comprising data on a server (or virtual server in cloud computing).
<!-- An instance in an InfluxDB Enterprise cluster may scale across multiple servers or nodes in a network. -->
### integer
A whole number that is positive, negative, or zero (`0`, `-5`, `143`).
InfluxDB supports 64-bit integers (minimum: `-9223372036854775808`, maximum: `9223372036854775807`).
In [annotated CSV](/v2.0/reference/syntax/annotated-csv/), columns that contain
integers are annotated with the `long` datatype.
Related entries: [unsigned integer](#unsigned-integer)
## J
@ -945,6 +954,8 @@ A stream includes a series of tables over a sequence of time intervals.
### string
A data type used to represent text.
In [annotated CSV](/v2.0/reference/syntax/annotated-csv/), columns that contain
string values are annotated with the `string` datatype.
## T
@ -1094,6 +1105,14 @@ InfluxDB supports the following unix timestamp precisions:
Related entries: [timestamp](#timestamp), [RFC3339 timestamp](#rfc3339-timestamp)
### unsigned integer
A whole number that is positive or zero (`0`, `143`). Also known as a "uinteger."
InfluxDB supports 64-bit unsigned integers (minimum: `0`, maximum: `18446744073709551615`).
In [annotated CSV](/v2.0/reference/syntax/annotated-csv/), columns that contain
unsigned integers are annotated with the `unsignedLong` datatype.
Related entries: [integer](#integer)
### user
InfluxDB users are granted permission to access InfluxDB.

View File

@ -1,6 +1,5 @@
---
title: Annotated CSV
description: >
InfluxDB and Flux return query results in annotated CSV format.
You can also read annotated CSV directly from Flux with the `csv.from()` function
@ -9,10 +8,12 @@ weight: 103
menu:
v2_0_ref:
parent: Syntax
name: Annotated CSV
v2.0/tags: [csv, syntax]
aliases:
- /v2.0/reference/annotated-csv/
related:
- /v2.0/reference/flux/stdlib/csv/from/
- /v2.0/reference/syntax/annotated-csv/extended/
---
InfluxDB and Flux return query results in annotated CSV format.

View File

@ -0,0 +1,366 @@
---
title: Extended annotated CSV
description: >
Extended annotated CSV provides additional annotations and options that specify
how CSV data should be converted to [line protocol](/v2.0/reference/syntax/line-protocol/)
and written to InfluxDB.
menu:
v2_0_ref:
name: Extended annotated CSV
parent: Annotated CSV
weight: 201
v2.0/tags: [csv, syntax, write]
related:
- /v2.0/write-data/csv/
- /v2.0/reference/cli/influx/write/
- /v2.0/reference/syntax/line-protocol/
- /v2.0/reference/syntax/annotated-csv/
---
**Extended annotated CSV** provides additional annotations and options that specify
how CSV data should be converted to [line protocol](/v2.0/reference/syntax/line-protocol/)
and written to InfluxDB.
InfluxDB uses the [`csv2lp` library](https://github.com/influxdata/influxdb/tree/master/pkg/csv2lp)
to convert CSV into line protocol.
Extended annotated CSV supports all [Annotated CSV](/v2.0/reference/syntax/annotated-csv/)
annotations.
{{% warn %}}
The Flux [`csv.from` function](/v2.0/reference/flux/stdlib/csv/from/) only supports
**annotated CSV**, not **extended annotated CSV**.
{{% /warn %}}
To write data to InfluxDB, line protocol must include the following:
- [measurement](/v2.0/reference/syntax/line-protocol/#measurement)
- [field set](/v2.0/reference/syntax/line-protocol/#field-set)
- [timestamp](/v2.0/reference/syntax/line-protocol/#timestamp) _(Optional but recommended)_
- [tag set](/v2.0/reference/syntax/line-protocol/#tag-set) _(Optional)_
Extended CSV annotations identify the element of line protocol a column represents.
## CSV Annotations
Extended annotated CSV extends and adds the following annotations:
- [datatype](#datatype)
- [constant](#constant)
- [timezone](#timezone)
### datatype
Use the `#datatype` annotation to specify the [line protocol element](/v2.0/reference/syntax/line-protocol/#elements-of-line-protocol)
a column represents.
To explicitly define a column as a **field** of a specific data type, use the field
type in the annotation (for example: `string`, `double`, `long`, etc.).
| Data type | Resulting line protocol |
|:---------- |:----------------------- |
| [measurement](#measurement) | Column is the **measurement** |
| [tag](#tag) | Column is a **tag** |
| [dateTime](#datetime) | Column is the **timestamp** |
| [field](#field) | Column is a **field** |
| [ignored](#ignored) | Column is ignored |
| [string](#string) | Column is a **string field** |
| [double](#double) | Column is a **float field** |
| [long](#long) | Column is an **integer field** |
| [unsignedLong](#unsignedlong) | Column is an **unsigned integer field** |
| [boolean](#boolean) | Column is a **boolean field** |
#### measurement
Indicates the column is the **measurement**.
#### tag
Indicates the column is a **tag**.
The **column label** is the **tag key**.
The **column value** is the **tag value**.
#### dateTime
Indicates the column is the **timestamp**.
`time` is an alias for `dateTime`.
If the [timestamp format](#supported-timestamp-formats) includes a time zone,
the parsed timestamp includes the time zone offset.
By default, all timestamps are UTC.
You can also use the [`#timezone` annotation](#timezone) to adjust timestamps to
a specific time zone.
{{% note %}}
There can only be **one** `dateTime` column.
{{% /note %}}
The `influx write` command converts timestamps to [Unix timestamps](/v2.0/reference/glossary/#unix-timestamp).
Append the timestamp format to the `dateTime` datatype with a colon (`:`).
```csv
#datatype dateTime:RFC3339
#datatype dateTime:RFC3339Nano
#datatype dateTime:number
#datatype dateTime:2006-01-02
```
##### Supported timestamp formats
| Timestamp format | Description | Example |
|:---------------- |:----------- |:------- |
| **RFC3339** | RFC3339 timestamp | `2020-01-01T00:00:00Z` |
| **RFC3339Nano** | RFC3339 timestamp | `2020-01-01T00:00:00.000000000Z` |
| **number** | Unix timestamp | `1577836800000000000` |
{{% note %}}
If using the `number` timestamp format and timestamps are **not in nanoseconds**,
use the [`influx write --precision` flag](/v2.0/reference/cli/influx/write/#flags)
to specify the [timestamp precision](/v2.0/reference/glossary/#precision).
{{% /note %}}
##### Custom timestamp formats
To specify a custom timestamp format, use timestamp formats as described in the
[Go time package](https://golang.org/pkg/time).
For example: `2020-01-02`.
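For example, the Go reference layout `2006-01-02` matches date-only values (measurement and column names below are illustrative):
```csv
#datatype measurement,long,dateTime:2006-01-02
m,count,time
example,1,2020-01-15
```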
#### field
Indicates the column is a **field**.
The **column label** is the **field key**.
The **column value** is the **field value**.
{{% note %}}
With the `field` datatype, field values are copied **as-is** to line protocol.
For information about line protocol values and how they are written to InfluxDB,
see [Line protocol data types and formats](/v2.0/reference/syntax/line-protocol/#data-types-and-format).
We generally recommend specifying the [field type](#field-types) in annotations.
{{% /note %}}
#### ignored
The column is ignored and not written to InfluxDB.
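For example, a hypothetical annotation row can keep the measurement, a field, and the timestamp while dropping an internal identifier column:
```csv
#datatype measurement,double,ignored,dateTime:RFC3339
m,temp,internal_id,time
example,72.1,abc123,2020-01-01T00:00:00Z
```
The `internal_id` column is dropped, producing `example temp=72.1 1577836800000000000`.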
#### Field types
The column is a **field** of a specified type.
The **column label** is the **field key**.
The **column value** is the **field value**.
- [string](#string)
- [double](#double)
- [long](#long)
- [unsignedLong](#unsignedlong)
- [boolean](#boolean)
##### string
Column is a **[string](/v2.0/reference/glossary/#string) field**.
##### double
Column is a **[float](/v2.0/reference/glossary/#float) field**.
By default, InfluxDB expects float values that use a period (`.`) to separate the
fraction from the whole number.
If column values use a different fraction separator or include separators such as
commas (`,`) that visually separate large numbers into groups, specify the following **float separators**:
- **fraction separator**: Separates the fraction from the whole number.
- **ignored separator**: Visually separates the whole number into groups but ignores
the separator when parsing the float value.
Use the following syntax to specify **float separators**:
```sh
# Syntax
<fraction-separator><ignored-separator>
# Example
.,
# With the float separators above
# 1,200,000.15 => 1200000.15
```
Append **float separators** to the `double` datatype annotation with a colon (`:`).
For example:
```
#datatype "double:.,"
```
{{% note %}}
If your **float separators** include a comma (`,`), wrap the column annotation in double
quotes (`""`) to prevent the comma from being parsed as a column separator or delimiter.
You can also [define a custom column separator](#define-custom-column-separator).
{{% /note %}}
##### long
Column is an **[integer](/v2.0/reference/glossary/#integer) field**.
If column values contain separators such as periods (`.`) or commas (`,`), specify
the following **integer separators**:
- **fraction separator**: Separates the fraction from the whole number.
_**Integer values are truncated at the fraction separator when converted to line protocol.**_
- **ignored separator**: Visually separates the whole number into groups but ignores
the separator when parsing the integer value.
Use the following syntax to specify **integer separators**:
```sh
# Syntax
<fraction-separator><ignored-separator>
# Example
.,
# With the integer separators above
# 1,200,000.00 => 1200000i
```
Append **integer separators** to the `long` datatype annotation with a colon (`:`).
For example:
```
#datatype "long:.,"
```
{{% note %}}
If your **integer separators** include a comma (`,`), wrap the column annotation in double
quotes (`""`) to prevent the comma from being parsed as a column separator or delimiter.
You can also [define a custom column separator](#define-custom-column-separator).
{{% /note %}}
##### unsignedLong
Column is an **[unsigned integer (uinteger)](/v2.0/reference/glossary/#unsigned-integer) field**.
If column values contain separators such as periods (`.`) or commas (`,`), specify
the following **uinteger separators**:
- **fraction separator**: Separates the fraction from the whole number.
_**Uinteger values are truncated at the fraction separator when converted to line protocol.**_
- **ignored separator**: Visually separates the whole number into groups but ignores
the separator when parsing the uinteger value.
Use the following syntax to specify **uinteger separators**:
```sh
# Syntax
<fraction-separator><ignored-separator>
# Example
.,
# With the uinteger separators above
# 1,200,000.00 => 1200000u
```
Append **uinteger separators** to the `unsignedLong` datatype annotation with a colon (`:`).
For example:
```
#datatype "usignedLong:.,"
```
{{% note %}}
If your **uinteger separators** include a comma (`,`), wrap the column annotation in double
quotes (`""`) to prevent the comma from being parsed as a column separator or delimiter.
You can also [define a custom column separator](#define-custom-column-separator).
{{% /note %}}
##### boolean
Column is a **[boolean](/v2.0/reference/glossary/#boolean) field**.
If column values are not [supported boolean values](/v2.0/reference/syntax/line-protocol/#boolean),
specify the **boolean format** with the following syntax:
```sh
# Syntax
<true-values>:<false-values>
# Example
y,Y,1:n,N,0
# With the boolean format above
# y => true, Y => true, 1 => true
# n => false, N => false, 0 => false
```
Append the **boolean format** to the `boolean` datatype annotation with a colon (`:`).
For example:
```
#datatype "boolean:y,Y:n,N"
```
{{% note %}}
If your **boolean format** contains commas (`,`), wrap the column annotation in double
quotes (`""`) to prevent the comma from being parsed as a column separator or delimiter.
You can also [define a custom column separator](#define-custom-column-separator).
{{% /note %}}
### constant
Use the `#constant` annotation to define a constant column label and value for each row.
The `#constant` annotation provides a way to supply
[line protocol elements](/v2.0/reference/syntax/line-protocol/#elements-of-line-protocol)
that don't exist in the CSV data.
Use the following syntax to define constants:
```
#constant <datatype>,<column-label>,<column-value>
```
To provide multiple constants, include each `#constant` annotation on a separate line.
```
#constant measurement,m
#constant tag,dataSource,csv
```
{{% note %}}
For constants with `measurement` and `dateTime` datatypes, the second value in
the constant definition is the **column-value**.
{{% /note %}}
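For example, combining the constants above with a small data set (field name and values are illustrative):
```csv
#constant measurement,m
#constant tag,dataSource,csv
#datatype double,dateTime:RFC3339
temp,time
21.5,2020-01-01T00:00:00Z
```
This produces the line protocol `m,dataSource=csv temp=21.5 1577836800000000000`.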
### timezone
Use the `#timezone` annotation to update timestamps to a specific timezone.
By default, timestamps are parsed as UTC.
Use the `±HHmm` format to specify the timezone offset relative to UTC.
##### Timezone examples
| Timezone | Offset |
|:-------- | ------: |
| US Mountain Daylight Time | `-0600` |
| Central European Summer Time | `+0200` |
| Australia Eastern Standard Time | `+1000` |
| Apia Daylight Time | `+1400` |
##### Timezone annotation example
```
#timezone -0600
```
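For example, assuming a date-only timestamp format, the following sketch parses dates as midnight in the `-0600` offset rather than midnight UTC:
```csv
#timezone -0600
#datatype measurement,double,dateTime:2006-01-02
m,temp,time
example,72.1,2020-01-01
```
The resulting timestamp is `1577858400000000000` (2020-01-01T06:00:00Z) instead of `1577836800000000000`.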
## Define custom column separator
If columns are delimited using a character other than a comma, use the `sep`
keyword to define a custom separator **in the first line of your CSV file**.
```
sep=;
```
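For example, a semicolon-delimited file might look like the following sketch (assuming the custom separator also applies to annotation rows):
```csv
sep=;
#datatype measurement;tag;double;dateTime:RFC3339
m;host;used_percent;time
mem;host1;64.23;2020-01-01T00:00:00Z
```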
## Annotation shorthand
Extended annotated CSV supports **annotation shorthand**.
Include the column label, datatype, and _(optional)_ default value in each column
header row using the following syntax:
```
<column-label>|<column-datatype>|<column-default-value>
```
##### Example annotation shorthand
```
m|measurement,location|tag|Hong Kong,temp|double,pm|long|0,time|dateTime:RFC3339
weather,San Francisco,51.9,38,2020-01-01T00:00:00Z
weather,New York,18.2,,2020-01-01T00:00:00Z
weather,,53.6,171,2020-01-01T00:00:00Z
```
##### The shorthand explained
- The `m` column represents the **measurement** and has no default value.
- The `location` column is a **tag** with the default value, `Hong Kong`.
- The `temp` column is a **field** with **float** (`double`) values and no default value.
- The `pm` column is a **field** with **integer** (`long`) values and a default of `0`.
- The `time` column represents the **timestamp**, uses the **RFC3339** timestamp format,
and has no default value.
##### Resulting line protocol
```
weather,location=San\ Francisco temp=51.9,pm=38i 1577836800000000000
weather,location=New\ York temp=18.2,pm=0i 1577836800000000000
weather,location=Hong\ Kong temp=53.6,pm=171i 1577836800000000000
```

View File

@ -0,0 +1,493 @@
---
title: Write CSV data to InfluxDB
description: >
Use the [`influx write` command](/v2.0/reference/cli/influx/write/) to write CSV data
to InfluxDB. Include annotations with the CSV data to determine how the data translates
into [line protocol](/v2.0/reference/syntax/line-protocol/).
menu:
v2_0:
name: Write CSV data
parent: Write data
weight: 104
related:
- /v2.0/reference/syntax/line-protocol/
- /v2.0/reference/syntax/annotated-csv/
- /v2.0/reference/cli/influx/write/
---
Use the [`influx write` command](/v2.0/reference/cli/influx/write/) to write CSV data
to InfluxDB. Include [Extended annotated CSV](/v2.0/reference/syntax/annotated-csv/extended/)
annotations to specify how the data translates into [line protocol](/v2.0/reference/syntax/line-protocol/).
Include annotations in the CSV file or inject them using the `--header` flag of
the `influx write` command.
##### On this page
- [CSV Annotations](#csv-annotations)
- [Write raw query results back to InfluxDB](#write-raw-query-results-back-to-influxdb)
- [Inject annotation headers](#inject-annotation-headers)
- [Skip annotation headers](#skip-annotation-headers)
- [Process input as CSV](#process-input-as-csv)
- [Specify CSV character encoding](#specify-csv-character-encoding)
- [Skip rows with errors](#skip-rows-with-errors)
- [Advanced examples](#advanced-examples)
##### Example write command
```sh
influx write -b example-bucket -f path/to/example.csv
```
##### example.csv
```
#datatype measurement,tag,double,dateTime:RFC3339
m,host,used_percent,time
mem,host1,64.23,2020-01-01T00:00:00Z
mem,host2,72.01,2020-01-01T00:00:00Z
mem,host1,62.61,2020-01-01T00:00:10Z
mem,host2,72.98,2020-01-01T00:00:10Z
mem,host1,63.40,2020-01-01T00:00:20Z
mem,host2,73.77,2020-01-01T00:00:20Z
```
##### Resulting line protocol
```
mem,host=host1 used_percent=64.23 1577836800000000000
mem,host=host2 used_percent=72.01 1577836800000000000
mem,host=host1 used_percent=62.61 1577836810000000000
mem,host=host2 used_percent=72.98 1577836810000000000
mem,host=host1 used_percent=63.40 1577836820000000000
mem,host=host2 used_percent=73.77 1577836820000000000
```
{{% note %}}
To test the CSV to line protocol conversion process, use the `influx write dryrun`
command to print the resulting line protocol to stdout rather than write to InfluxDB.
{{% /note %}}
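For example, the following prints the converted line protocol to stdout (bucket name and file path are placeholders):
```sh
influx write dryrun -b example-bucket -f path/to/example.csv
```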
## CSV Annotations
Use **CSV annotations** to specify which element of line protocol each CSV column
represents and how to format the data. CSV annotations are rows at the beginning
of a CSV file that describe column properties.
The `influx write` command supports [Extended annotated CSV](/v2.0/reference/syntax/annotated-csv/extended)
which provides options for specifying how CSV data should be converted into line
protocol and how data is formatted.
To write data to InfluxDB, data must include the following:
- [measurement](/v2.0/reference/syntax/line-protocol/#measurement)
- [field set](/v2.0/reference/syntax/line-protocol/#field-set)
- [timestamp](/v2.0/reference/syntax/line-protocol/#timestamp) _(Optional but recommended)_
- [tag set](/v2.0/reference/syntax/line-protocol/#tag-set) _(Optional)_
Use CSV annotations to specify which of these elements each column represents.
## Write raw query results back to InfluxDB
Flux returns query results in [Annotated CSV](/v2.0/reference/syntax/annotated-csv/).
These results include all annotations necessary to write the data back to InfluxDB.
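For example, assuming you've saved raw annotated CSV query results to a file, write them back with `--format csv` (file path is illustrative):
```sh
influx write -b example-bucket \
  -f path/to/query-results.csv \
  --format csv
```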
## Inject annotation headers
If the CSV data you want to write to InfluxDB does not contain the annotations
required to properly convert the data to line protocol, use the `--header` flag
to inject annotation rows into the CSV data.
```sh
influx write -b example-bucket \
-f path/to/example.csv \
--header "#constant measurement,birds" \
--header "#datatype dataTime:2006-01-02,long,tag"
```
{{< flex >}}
{{% flex-content %}}
##### example.csv
```
date,sighted,loc
2020-01-01,12,Boise
2020-06-01,78,Boise
2020-01-01,54,Seattle
2020-06-01,112,Seattle
2020-01-01,9,Detroit
2020-06-01,135,Detroit
```
{{% /flex-content %}}
{{% flex-content %}}
##### Resulting line protocol
```
birds,loc=Boise sighted=12i 1577836800000000000
birds,loc=Boise sighted=78i 1590969600000000000
birds,loc=Seattle sighted=54i 1577836800000000000
birds,loc=Seattle sighted=112i 1590969600000000000
birds,loc=Detroit sighted=9i 1577836800000000000
birds,loc=Detroit sighted=135i 1590969600000000000
```
{{% /flex-content %}}
{{< /flex >}}
#### Use files to inject headers
The `influx write` command supports importing multiple files in a single command.
Include annotations and header rows in their own file and import them with the write command.
Files are read in the order in which they're provided.
```sh
influx write -b example-bucket \
-f path/to/headers.csv \
-f path/to/example.csv
```
{{< flex >}}
{{% flex-content %}}
##### headers.csv
```
#constant measurement,birds
#datatype dateTime:2006-01-02,long,tag
```
{{% /flex-content %}}
{{% flex-content %}}
##### example.csv
```
date,sighted,loc
2020-01-01,12,Boise
2020-06-01,78,Boise
2020-01-01,54,Seattle
2020-06-01,112,Seattle
2020-01-01,9,Detroit
2020-06-01,135,Detroit
```
{{% /flex-content %}}
{{< /flex >}}
##### Resulting line protocol
```
birds,loc=Boise sighted=12i 1577836800000000000
birds,loc=Boise sighted=78i 1590969600000000000
birds,loc=Seattle sighted=54i 1577836800000000000
birds,loc=Seattle sighted=112i 1590969600000000000
birds,loc=Detroit sighted=9i 1577836800000000000
birds,loc=Detroit sighted=135i 1590969600000000000
```
## Skip annotation headers
Some CSV data may include header rows that conflict with or lack the annotations
necessary to write CSV data to InfluxDB.
Use the `--skipHeader` flag to specify the **number of rows to skip** at the
beginning of the CSV data.
```sh
influx write -b example-bucket \
-f path/to/example.csv \
--skipHeader=2
```
You can then [inject new header rows](#inject-annotation-headers) to rename columns
and provide the necessary annotations.
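For example, a command that skips an existing header row and injects a replacement (annotation values are illustrative):
```sh
influx write -b example-bucket \
  -f path/to/example.csv \
  --skipHeader=1 \
  --header "#datatype measurement,tag,double,dateTime:RFC3339" \
  --header "m,host,used_percent,time"
```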
## Process input as CSV
The `influx write` command automatically processes files with the `.csv` extension as CSV files.
If your CSV file uses a different extension, use the `--format` flag to explicitly
declare the format of the input file.
```sh
influx write -b example-bucket \
-f path/to/example.txt \
--format csv
```
{{% note %}}
The `influx write` command assumes all input files are line protocol unless they
include the `.csv` extension or you explicitly declare the `csv` format.
{{% /note %}}
## Specify CSV character encoding
The `influx write` command assumes CSV files contain UTF-8 encoded characters.
If your CSV data uses a different character encoding, specify the encoding
with the `--encoding` flag.
```sh
influx write -b example-bucket \
-f path/to/example.csv \
--encoding "UTF-16"
```
## Skip rows with errors
If a row in your CSV data is missing an
[element required to write to InfluxDB](/v2.0/reference/syntax/line-protocol/#elements-of-line-protocol)
or is incorrectly formatted, the `influx write` command returns an error when
processing that row and cancels the write request.
To skip rows with errors, use the `--skipRowOnError` flag.
```sh
influx write -b example-bucket \
-f path/to/example.csv \
--skipRowOnError
```
{{% warn %}}
Skipped rows are ignored and are not written to InfluxDB.
{{% /warn %}}
## Advanced examples
- [Define constants](#define-constants)
- [Annotation shorthand](#annotation-shorthand)
- [Use alternate numeric formats](#use-alternate-numeric-formats)
- [Use alternate boolean format](#use-alternate-boolean-format)
- [Use different timestamp formats](#use-different-timestamp-formats)
---
### Define constants
Use the Extended annotated CSV [`#constant` annotation](/v2.0/reference/syntax/annotated-csv/extended/#constant)
to add a column and value to each row in the CSV data.
{{< flex >}}
{{% flex-content %}}
##### CSV with constants
```
#constant measurement,example
#constant tag,source,csv
#datatype long,dateTime:RFC3339
count,time
1,2020-01-01T00:00:00Z
4,2020-01-02T00:00:00Z
9,2020-01-03T00:00:00Z
18,2020-01-04T00:00:00Z
```
{{% /flex-content %}}
{{% flex-content %}}
##### Resulting line protocol
```
example,source=csv count=1i 1577836800000000000
example,source=csv count=4i 1577923200000000000
example,source=csv count=9i 1578009600000000000
example,source=csv count=18i 1578096000000000000
```
{{% /flex-content %}}
{{< /flex >}}
---
### Annotation shorthand
Extended annotated CSV supports [annotation shorthand](/v2.0/reference/syntax/annotated-csv/extended/#annotation-shorthand),
which lets you define the **column label**, **datatype**, and **default value** in the column header.
{{< flex >}}
{{% flex-content %}}
##### CSV with annotation shorthand
```
m|measurement,count|long|0,time|dateTime:RFC3339
example,1,2020-01-01T00:00:00Z
example,4,2020-01-02T00:00:00Z
example,,2020-01-03T00:00:00Z
example,18,2020-01-04T00:00:00Z
```
{{% /flex-content %}}
{{% flex-content %}}
##### Resulting line protocol
```
example count=1i 1577836800000000000
example count=4i 1577923200000000000
example count=0i 1578009600000000000
example count=18i 1578096000000000000
```
{{% /flex-content %}}
{{< /flex >}}
#### Replace column header with annotation shorthand
It's possible to replace the column header row in a CSV file with annotation
shorthand without modifying the CSV file.
This lets you define column data types and default values while writing to InfluxDB.
To replace an existing column header row with annotation shorthand:
1. Use the `--skipHeader` flag to ignore the existing column header row.
2. Use the `--header` flag to inject a new column header row that uses annotation shorthand.
{{% note %}}
`--skipHeader` is the same as `--skipHeader=1`.
{{% /note %}}
```sh
influx write -b example-bucket \
-f example.csv \
--skipHeader \
--header="m|measurement,count|long|0,time|dateTime:RFC3339"
```
{{< flex >}}
{{% flex-content %}}
##### Unmodified example.csv
```
m,count,time
example,1,2020-01-01T00:00:00Z
example,4,2020-01-02T00:00:00Z
example,,2020-01-03T00:00:00Z
example,18,2020-01-04T00:00:00Z
```
{{% /flex-content %}}
{{% flex-content %}}
##### Resulting line protocol
```
example count=1i 1577836800000000000
example count=4i 1577923200000000000
example count=0i 1578009600000000000
example count=18i 1578096000000000000
```
{{% /flex-content %}}
{{< /flex >}}
---
### Use alternate numeric formats
If your CSV data contains numeric values that use a fraction separator other than
the default period (`.`) or that include group separators,
[define your numeric format](/v2.0/reference/syntax/annotated-csv/extended/#double)
in the `double`, `long`, and `unsignedLong` datatype annotations.
{{% note %}}
If your **numeric format separators** include a comma (`,`), wrap the column annotation in double
quotes (`""`) to prevent the comma from being parsed as a column separator or delimiter.
You can also [define a custom column separator](/v2.0/reference/syntax/annotated-csv/extended/#define-custom-column-separator).
{{% /note %}}
{{< tabs-wrapper >}}
{{% tabs %}}
[Floats](#)
[Integers](#)
[Uintegers](#)
{{% /tabs %}}
{{% tab-content %}}
{{< flex >}}
{{% flex-content %}}
##### CSV with non-default float values
```
#datatype measurement,"double:.,",dateTime:RFC3339
m,lbs,time
example,"1,280.7",2020-01-01T00:00:00Z
example,"1,352.5",2020-01-02T00:00:00Z
example,"1,862.8",2020-01-03T00:00:00Z
example,"2,014.9",2020-01-04T00:00:00Z
```
{{% /flex-content %}}
{{% flex-content %}}
##### Resulting line protocol
```
example lbs=1280.7 1577836800000000000
example lbs=1352.5 1577923200000000000
example lbs=1862.8 1578009600000000000
example lbs=2014.9 1578096000000000000
```
{{% /flex-content %}}
{{< /flex >}}
{{% /tab-content %}}
{{% tab-content %}}
{{< flex >}}
{{% flex-content %}}
##### CSV with non-default integer values
```
#datatype measurement,"long:.,",dateTime:RFC3339
m,lbs,time
example,"1,280.0",2020-01-01T00:00:00Z
example,"1,352.0",2020-01-02T00:00:00Z
example,"1,862.0",2020-01-03T00:00:00Z
example,"2,014.9",2020-01-04T00:00:00Z
```
{{% /flex-content %}}
{{% flex-content %}}
##### Resulting line protocol
```
example lbs=1280i 1577836800000000000
example lbs=1352i 1577923200000000000
example lbs=1862i 1578009600000000000
example lbs=2014i 1578096000000000000
```
{{% /flex-content %}}
{{< /flex >}}
{{% /tab-content %}}
{{% tab-content %}}
{{< flex >}}
{{% flex-content %}}
##### CSV with non-default uinteger values
```
#datatype measurement,"unsignedLong:.,",dateTime:RFC3339
m,lbs,time
example,"1,280.0",2020-01-01T00:00:00Z
example,"1,352.0",2020-01-02T00:00:00Z
example,"1,862.0",2020-01-03T00:00:00Z
example,"2,014.9",2020-01-04T00:00:00Z
```
{{% /flex-content %}}
{{% flex-content %}}
##### Resulting line protocol
```
example lbs=1280u 1577836800000000000
example lbs=1352u 1577923200000000000
example lbs=1862u 1578009600000000000
example lbs=2014u 1578096000000000000
```
{{% /flex-content %}}
{{< /flex >}}
{{% /tab-content %}}
{{< /tabs-wrapper >}}
---
### Use alternate boolean format
Line protocol supports only [specific boolean values](/v2.0/reference/syntax/line-protocol/#boolean).
If your CSV data contains boolean values that line protocol does not support,
[define your boolean format](/v2.0/reference/syntax/annotated-csv/extended/#boolean)
in the `boolean` datatype annotation.
{{< flex >}}
{{% flex-content %}}
##### CSV with non-default boolean values
```
sep=;
#datatype measurement;"boolean:y,Y,1:n,N,0";dateTime:RFC3339
m;verified;time
example;y;2020-01-01T00:00:00Z
example;n;2020-01-02T00:00:00Z
example;1;2020-01-03T00:00:00Z
example;N;2020-01-04T00:00:00Z
```
{{% /flex-content %}}
{{% flex-content %}}
##### Resulting line protocol
```
example verified=true 1577836800000000000
example verified=false 1577923200000000000
example verified=true 1578009600000000000
example verified=false 1578096000000000000
```
{{% /flex-content %}}
{{< /flex >}}
---
### Use different timestamp formats
The `influx write` command automatically detects **RFC3339** and **number** formatted
timestamps when converting CSV to line protocol.
If using a different timestamp format, [define your timestamp format](/v2.0/reference/syntax/annotated-csv/extended/#datetime)
in the `dateTime` datatype annotation.
{{< flex >}}
{{% flex-content %}}
##### CSV with non-default timestamps
```
#datatype measurement,dateTime:2006-01-02,field
m,time,lbs
example,2020-01-01,1280.7
example,2020-01-02,1352.5
example,2020-01-03,1862.8
example,2020-01-04,2014.9
```
{{% /flex-content %}}
{{% flex-content %}}
##### Resulting line protocol
```
example lbs=1280.7 1577836800000000000
example lbs=1352.5 1577923200000000000
example lbs=1862.8 1578009600000000000
example lbs=2014.9 1578096000000000000
```
{{% /flex-content %}}
{{< /flex >}}