added description and modified intro of duplicate data doc
parent b707d15e6f
commit 3392be6050

@@ -2,19 +2,19 @@
 title: Handle duplicate data points
 seotitle: Handle duplicate data points when writing to InfluxDB
 description: >
-  placeholder
+  InfluxDB identifies unique data points by their measurement, tag set, and timestamp.
+  This article discusses methods for preserving data from two points with a common
+  measurement, tag set, and timestamp but a different field set.
 weight: 202
 menu:
   v2_0:
     name: Handle duplicate points
     parent: write-best-practices
+v2.0/tags: [best practices, write]
 ---
 
-<!-- Intro here -->
+InfluxDB identifies unique data points by their measurement, tag set, and timestamp
+(each a part of [Line protocol](/v2.0/reference/line-protocol) used to write data to InfluxDB).
 
-## Identifying unique data points
-Data points are written to InfluxDB using [Line protocol](/v2.0/reference/line-protocol).
-InfluxDB identifies unique data points by their measurement name, tag set, and timestamp.
-
 ```txt
 web,host=host2,region=us_west firstByte=15.0 1559260800000000000
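The hunk above makes the intro state the uniqueness rule directly. In line protocol terms, the collision this article goes on to handle looks like the sketch below: the second point repeats the first point's measurement, tag set, and timestamp with a different field set, so InfluxDB treats the two as the same point and merges their field sets. The field values are reconstructed from the query output later in this diff; treat the exact numbers as an assumption.

```txt
web,host=host2,region=us_west firstByte=24.0,dnsLookup=7.0 1559260800000000000
web,host=host2,region=us_west firstByte=15.0 1559260800000000000
```

After both writes, the stored point carries `firstByte=15` and `dnsLookup=7`: the duplicate overwrites the field it shares and leaves the other field intact, as the next hunk's prose describes.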

@@ -40,12 +40,6 @@ web,host=host2,region=us_west firstByte=15.0 1559260800000000000
 After you submit the new point, InfluxDB overwrites `firstByte` with the new field
 value and leaves the field `dnsLookup` alone:
 
-{{% note %}}
-The output of examples queries in this article has been modified to clearly show
-the different approaches to handling duplicate data.
-The
-{{% /note %}}
-
 ```sh
 from(bucket: "example-bucket")
   |> range(start: 2019-05-31T00:00:00Z, stop: 2019-05-31T12:00:00Z)
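The hunk cuts the example query off after `range()`. For readers following along, here is a minimal runnable sketch of a Flux query in this shape; the `filter()` line is an assumption added for completeness, not part of this commit:

```sh
from(bucket: "example-bucket")
  |> range(start: 2019-05-31T00:00:00Z, stop: 2019-05-31T12:00:00Z)
  // Assumed continuation: narrow results to the measurement used in this article.
  |> filter(fn: (r) => r._measurement == "web")
```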

@@ -120,3 +114,8 @@ Table: keys: [_measurement, host, region]
 2019-05-31T00:00:00.000000000Z web host2 us_west 24 7
 2019-05-31T00:00:00.000000001Z web host2 us_west 15
 ```
+
+{{% note %}}
+The output of example queries in this article has been modified to clearly show
+the different approaches and results for handling duplicate data.
+{{% /note %}}
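The two rows in the example output above sit one nanosecond apart, which appears to be the result of giving the otherwise-duplicate point a unique timestamp so both field sets survive. Reconstructed as line protocol from the table values (a sketch, not part of this commit):

```txt
web,host=host2,region=us_west firstByte=24.0,dnsLookup=7.0 1559260800000000000
web,host=host2,region=us_west firstByte=15.0 1559260800000000001
```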