Merge pull request #1893 from influxdata/various-fixes

Various fixes
pull/1899/head
Scott Anderson 2020-11-25 21:04:19 -07:00 committed by GitHub
commit c966076e2f
13 changed files with 194 additions and 555 deletions

View File

@ -2,7 +2,7 @@
title: influx write
description: >
The `influx write` command writes data to InfluxDB via stdin or from a specified file.
Write data using line protocol or annotated CSV.
Write data using line protocol, annotated CSV, or extended annotated CSV.
menu:
influxdb_cloud_ref:
name: influx write
@ -11,7 +11,10 @@ weight: 101
influxdb/cloud/tags: [write]
related:
- /influxdb/cloud/write-data/
- /influxdb/cloud/write-data/csv/
- /influxdb/cloud/write-data/developer-tools/csv/
- /influxdb/cloud/reference/syntax/line-protocol/
- /influxdb/cloud/reference/syntax/annotated-csv/
- /influxdb/cloud/reference/syntax/annotated-csv/extended/
---
{{< duplicate-oss >}}

View File

@ -9,6 +9,12 @@ menu:
parent: influx write
weight: 101
influxdb/cloud/tags: [write]
related:
- /influxdb/cloud/write-data/
- /influxdb/cloud/write-data/developer-tools/csv/
- /influxdb/cloud/reference/syntax/line-protocol/
- /influxdb/cloud/reference/syntax/annotated-csv/
- /influxdb/cloud/reference/syntax/annotated-csv/extended/
---
{{< duplicate-oss >}}

View File

@ -11,7 +11,7 @@ menu:
weight: 201
influxdb/cloud/tags: [csv, syntax, write]
related:
- /influxdb/cloud/write-data/csv/
- /influxdb/cloud/write-data/developer-tools/csv/
- /influxdb/cloud/reference/cli/influx/write/
- /influxdb/cloud/reference/syntax/line-protocol/
- /influxdb/cloud/reference/syntax/annotated-csv/

View File

@ -14,12 +14,4 @@ menu:
influxdb/cloud/tags: [client libraries]
---
InfluxDB client libraries are language-specific packages that integrate with the InfluxDB v2 API.
The following **InfluxDB v2** client libraries are available:
{{% note %}}
These client libraries are in active development and may not be feature-complete.
This list will continue to grow as more client libraries are released.
{{% /note %}}
{{< children type="list" >}}
{{< duplicate-oss >}}

View File

@ -14,193 +14,4 @@ aliases:
- /influxdb/cloud/reference/api/client-libraries/go/
---
Use the [InfluxDB Go client library](https://github.com/influxdata/influxdb-client-go) to integrate InfluxDB into Go scripts and applications.
This guide presumes some familiarity with Go and InfluxDB.
If just getting started, see [Get started with InfluxDB](/influxdb/cloud/get-started/).
## Before you begin
1. [Install Go 1.13 or later](https://golang.org/doc/install).
2. Download the client package in your `$GOPATH` and build the package.
```sh
# Download the InfluxDB Go client package
go get github.com/influxdata/influxdb-client-go
# Build the package
go build
```
3. Ensure that InfluxDB is running and you can connect to it.
For information about what URL to use to connect to InfluxDB OSS or InfluxDB Cloud, see [InfluxDB URLs](/influxdb/cloud/reference/urls/).
## Boilerplate for the InfluxDB Go Client Library
Use the Go library to write data to and query data from InfluxDB.
1. In your Go program, import the necessary packages and specify the entry point of your executable program.
```go
package main
import (
"context"
"fmt"
"time"
influxdb2 "github.com/influxdata/influxdb-client-go"
)
```
2. Define variables for your InfluxDB [bucket](/influxdb/cloud/organizations/buckets/), [organization](/influxdb/cloud/organizations/), and [token](/influxdb/cloud/security/tokens/).
```go
bucket := "example-bucket"
org := "example-org"
token := "example-token"
// Store the URL of your InfluxDB instance
url := "https://cloud2.influxdata.com"
```
3. Create the InfluxDB Go client and pass in the `url` and `token` parameters.
```go
client := influxdb2.NewClient(url, token)
```
4. Create a **write client** with the `WriteApiBlocking` method and pass in the `org` and `bucket` parameters.
```go
writeApi := client.WriteApiBlocking(org, bucket)
```
5. To query data, create an InfluxDB **query client** and pass in your InfluxDB `org`.
```go
queryApi := client.QueryApi(org)
```
## Write data to InfluxDB with Go
Use the Go library to write data to InfluxDB.
1. Create a [point](/influxdb/cloud/reference/glossary/#point) and write it to InfluxDB using the `WritePoint` method of the API writer struct.
2. Close the client to flush all pending writes and finish.
```go
p := influxdb2.NewPoint("stat",
map[string]string{"unit": "temperature"},
map[string]interface{}{"avg": 24.5, "max": 45},
time.Now())
writeApi.WritePoint(context.Background(), p)
client.Close()
```
### Complete example write script
```go
func main() {
bucket := "example-bucket"
org := "example-org"
token := "example-token"
// Store the URL of your InfluxDB instance
url := "https://cloud2.influxdata.com"
// Create a new client using the InfluxDB server URL and authentication token
client := influxdb2.NewClient(url, token)
// Use the blocking write client to write to the desired bucket
writeApi := client.WriteApiBlocking(org, bucket)
// Create point using full params constructor
p := influxdb2.NewPoint("stat",
map[string]string{"unit": "temperature"},
map[string]interface{}{"avg": 24.5, "max": 45},
time.Now())
// Write point immediately
writeApi.WritePoint(context.Background(), p)
// Ensure background processes finish
client.Close()
}
```
## Query data from InfluxDB with Go
Use the Go library to query data from InfluxDB.
1. Create a Flux query and supply your `bucket` parameter.
```js
from(bucket:"<bucket>")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "stat")
```
The query client sends the Flux query to InfluxDB and returns the results as a FluxRecord object with a table structure.
**The query client includes the following methods:**
- `Query`: Sends the Flux query to InfluxDB.
- `Next`: Iterates over the query response.
- `TableChanged`: Identifies when the group key changes.
- `Record`: Returns the last parsed FluxRecord and gives access to value and row properties.
- `Value`: Returns the actual field value.
```go
result, err := queryApi.Query(context.Background(), `from(bucket:"<bucket>")|> range(start: -1h) |> filter(fn: (r) => r._measurement == "stat")`)
if err == nil {
for result.Next() {
if result.TableChanged() {
fmt.Printf("table: %s\n", result.TableMetadata().String())
}
fmt.Printf("value: %v\n", result.Record().Value())
}
if result.Err() != nil {
fmt.Printf("query parsing error: %s\n", result.Err().Error())
}
} else {
panic(err)
}
```
**The FluxRecord object includes the following methods for accessing your data** (see the example after this list):
- `Table()`: Returns the index of the table the record belongs to.
- `Start()`: Returns the inclusive lower time bound of all records in the current table.
- `Stop()`: Returns the exclusive upper time bound of all records in the current table.
- `Time()`: Returns the time of the record.
- `Value() `: Returns the actual field value.
- `Field()`: Returns the field name.
- `Measurement()`: Returns the measurement name of the record.
- `Values()`: Returns a map of column values.
- `ValueByKey(<your_tags>)`: Returns a value from the record for given column key.
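For example, a minimal sketch that extends the iteration loop above to print several of these record properties (using the `unit` tag from the write example earlier on this page):
```go
for result.Next() {
	record := result.Record()
	// Print selected properties of the current record
	fmt.Printf("%v %s: %s=%v (unit=%v)\n",
		record.Time(),
		record.Measurement(),
		record.Field(),
		record.Value(),
		record.ValueByKey("unit"))
}
```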
### Complete example query script
```go
func main() {
// Create client
client := influxdb2.NewClient(url, token)
// Get query client
queryApi := client.QueryApi(org)
// Get QueryTableResult
result, err := queryApi.Query(context.Background(), `from(bucket:"my-bucket")|> range(start: -1h) |> filter(fn: (r) => r._measurement == "stat")`)
if err == nil {
// Iterate over query response
for result.Next() {
// Notice when group key has changed
if result.TableChanged() {
fmt.Printf("table: %s\n", result.TableMetadata().String())
}
// Access data
fmt.Printf("value: %v\n", result.Record().Value())
}
// Check for an error
if result.Err() != nil {
fmt.Printf("query parsing error: %s\n", result.Err().Error())
}
} else {
panic(err)
}
// Ensure background processes finish
client.Close()
}
```
For more information, see the [Go client README on GitHub](https://github.com/influxdata/influxdb-client-go).
{{< duplicate-oss >}}

View File

@ -14,159 +14,4 @@ aliases:
- /influxdb/cloud/reference/api/client-libraries/js/
---
Use the [InfluxDB JavaScript client library](https://github.com/influxdata/influxdb-client-js) to integrate InfluxDB into JavaScript scripts and applications. This client supports both client-side (browser) and server-side (NodeJS) environments.
This guide presumes some familiarity with JavaScript, browser environments, and InfluxDB.
If just getting started, see [Get started with InfluxDB](/influxdb/cloud/get-started/).
## Before you begin
1. Install [NodeJS](https://nodejs.org/en/download/package-manager/).
2. Ensure that InfluxDB is running and you can connect to it.
For information about what URL to use to connect to InfluxDB OSS or InfluxDB Cloud, see [InfluxDB URLs](/influxdb/cloud/reference/urls/).
## Easiest way to get started
1. Clone the [examples directory](https://github.com/influxdata/influxdb-client-js/tree/master/examples) in the [influxdb-client-js](https://github.com/influxdata/influxdb-client-js) repo.
2. Navigate to the `examples` directory:
```sh
cd examples
```
3. Install `yarn` or `npm` dependencies as needed:
```sh
yarn install
npm install
```
4. Update your `./env` and `index.html` with the name of your InfluxDB [bucket](/influxdb/cloud/organizations/buckets/), [organization](/influxdb/cloud/organizations/), [token](/influxdb/cloud/security/tokens/), and `proxy`, which relies on a proxy to forward requests to the target InfluxDB instance.
5. Run the following command to run the application at http://localhost:3001/examples/index.html:
```sh
npm run browser
```
## Boilerplate for the InfluxDB JavaScript client library
Use the JavaScript library to write data to and query data from InfluxDB.
1. To write a data point to InfluxDB using the JavaScript library, import the latest InfluxDB JavaScript library in your script.
```js
import {InfluxDB, Point} from 'https://unpkg.com/@influxdata/influxdb-client/dist/index.browser.mjs'
```
2. Define constants for your InfluxDB [bucket](/influxdb/cloud/organizations/buckets/), [organization](/influxdb/cloud/organizations/), [token](/influxdb/cloud/security/tokens/), and `proxy` which relies on a proxy to forward requests to the target InfluxDB instance.
```js
const proxy = '/influx'
const token = 'example-token'
const org = 'example-org'
const bucket = 'example-bucket'
```
3. Instantiate the InfluxDB JavaScript client and pass in the `proxy` and `token` parameters.
```js
const influxDB = new InfluxDB({proxy, token})
```
## Write data to InfluxDB with JavaScript
Use the JavaScript library to write data to InfluxDB.
1. Use the `getWriteApi` method of the InfluxDB client to create a **write client**. Provide your InfluxDB `org` and `bucket`.
```js
const writeApi = influxDB.getWriteApi(org, bucket)
```
The `useDefaultTags` method instructs the write API to use default tags when writing points. Create a [point](/influxdb/cloud/reference/glossary/#point) and write it to InfluxDB using the `writePoint` method. The `tag` and `floatField` methods add key-value pairs for the tags and fields, respectively. Close the client to flush all pending writes and finish.
```js
writeApi.useDefaultTags({location: 'browser'})
const point1 = new Point('temperature')
.tag('example', 'index.html')
.floatField('value', 24)
writeApi.writePoint(point1)
console.log(`${point1}`)
writeApi.close()
```
### Complete example write script
```js
const influxDB = new InfluxDB({proxy, token})
const writeApi = influxDB.getWriteApi(org, bucket)
// setup default tags for all writes through this API
writeApi.useDefaultTags({location: 'browser'})
const point1 = new Point('temperature')
.tag('example', 'index.html')
.floatField('value', 24)
writeApi.writePoint(point1)
console.log(` ${point1}`)
// flush pending writes and close writeApi
writeApi
.close()
.then(() => {
console.log('WRITE FINISHED')
})
```
## Query data from InfluxDB with JavaScript
Use the JavaScript library to query data from InfluxDB.
1. Use the `getQueryApi` method of the `InfluxDB` client to create a new **query client**. Provide your InfluxDB `org`.
```js
const queryApi = influxDB.getQueryApi(org)
```
2. Create a Flux query (including your `bucket` parameter).
```js
const fluxQuery =
  `from(bucket:"<my-bucket>")
  |> range(start: 0)
  |> filter(fn: (r) => r._measurement == "temperature")`
```
The **query client** sends the Flux query to InfluxDB and returns line table metadata and rows.
3. Use the `next` method to iterate over the rows.
```js
queryApi.queryRows(fluxQuery, {
next(row: string[], tableMeta: FluxTableMetaData) {
const o = tableMeta.toObject(row)
// console.log(JSON.stringify(o, null, 2))
console.log(
`${o._time} ${o._measurement} in '${o.location}' (${o.example}): ${o._field}=${o._value}`
)
}
})
```
### Complete example query script
```js
// performs the query and receives line table metadata and rows
// https://v2.docs.influxdata.com/v2.0/reference/syntax/annotated-csv/
queryApi.queryRows(fluxQuery, {
next(row: string[], tableMeta: FluxTableMetaData) {
const o = tableMeta.toObject(row)
// console.log(JSON.stringify(o, null, 2))
console.log(
`${o._time} ${o._measurement} in '${o.location}' (${o.example}): ${o._field}=${o._value}`
)
},
error(error: Error) {
console.error(error)
console.log('\nFinished ERROR')
},
complete() {
console.log('\nFinished SUCCESS')
},
})
```
For more information, see the [JavaScript client README on GitHub](https://github.com/influxdata/influxdb-client-js).
{{< duplicate-oss >}}

View File

@ -14,158 +14,4 @@ aliases:
weight: 201
---
Use the [InfluxDB Python client library](https://github.com/influxdata/influxdb-client-python) to integrate InfluxDB into Python scripts and applications.
This guide presumes some familiarity with Python and InfluxDB.
If just getting started, see [Get started with InfluxDB](/influxdb/cloud/get-started/).
## Before you begin
1. Install the InfluxDB Python library:
```sh
pip install influxdb-client
```
2. Visit the URL of your InfluxDB Cloud UI.
## Write data to InfluxDB with Python
We are going to write some data in [line protocol](/influxdb/cloud/reference/syntax/line-protocol/) using the Python library.
1. In your Python program, import the InfluxDB client library and use it to write data to InfluxDB.
```python
import influxdb_client
from influxdb_client.client.write_api import SYNCHRONOUS
```
2. Define a few variables with the name of your [bucket](/influxdb/cloud/organizations/buckets/), [organization](/influxdb/cloud/organizations/), and [token](/influxdb/cloud/security/tokens/).
```python
bucket = "<my-bucket>"
org = "<my-org>"
token = "<my-token>"
# Store the URL of your InfluxDB instance
url="https://cloud2.influxdata.com"
```
3. Instantiate the client. The `InfluxDBClient` object takes three named parameters: `url`, `org`, and `token`. Pass in the named parameters.
```python
client = influxdb_client.InfluxDBClient(
url=url,
token=token,
org=org
)
```
The `InfluxDBClient` object has a `write_api` method used for configuration.
4. Instantiate a **write client** using the `client` object and the `write_api` method. Use the `write_api` method to configure the writer object.
```python
write_api = client.write_api(write_options=SYNCHRONOUS)
```
5. Create a [point](/influxdb/cloud/reference/glossary/#point) object and write it to InfluxDB using the `write` method of the API writer object. The write method requires three parameters: `bucket`, `org`, and `record`.
```python
p = influxdb_client.Point("my_measurement").tag("location", "Prague").field("temperature", 25.3)
write_api.write(bucket=bucket, org=org, record=p)
```
### Complete example write script
```python
import influxdb_client
from influxdb_client.client.write_api import SYNCHRONOUS
bucket = "<my-bucket>"
org = "<my-org>"
token = "<my-token>"
# Store the URL of your InfluxDB instance
url="https://cloud2.influxdata.com"
client = influxdb_client.InfluxDBClient(
url=url,
token=token,
org=org
)
write_api = client.write_api(write_options=SYNCHRONOUS)
p = influxdb_client.Point("my_measurement").tag("location", "Prague").field("temperature", 25.3)
write_api.write(bucket=bucket, org=org, record=p)
```
## Query data from InfluxDB with Python
1. Instantiate the **query client**.
```python
query_api = client.query_api()
```
2. Create a Flux query.
```python
query = 'from(bucket:"my-bucket")\
|> range(start: -10m)\
|> filter(fn:(r) => r._measurement == "my_measurement")\
|> filter(fn: (r) => r.location == "Prague")\
|> filter(fn:(r) => r._field == "temperature" )'
```
The query client sends the Flux query to InfluxDB and returns a Flux object with a table structure.
3. Pass the `query()` method two named parameters: `org` and `query`.
```python
result = client.query_api().query(org=org, query=query)
```
4. Iterate through the tables and records in the Flux object.
- Use the `get_value()` method to return values.
- Use the `get_field()` method to return fields.
```python
results = []
for table in result:
    for record in table.records:
        results.append((record.get_field(), record.get_value()))
print(results)
# Output: [(temperature, 25.3)]
```
**The Flux object provides the following methods for accessing your data** (see the example after this list):
- `get_measurement()`: Returns the measurement name of the record.
- `get_field()`: Returns the field name.
- `get_value()`: Returns the actual field value.
- `values()`: Returns a map of column values.
- `values.get("<your tag>")`: Returns a value from the record for given column.
- `get_time()`: Returns the time of the record.
- `get_start()`: Returns the inclusive lower time bound of all records in the current table.
- `get_stop()`: Returns the exclusive upper time bound of all records in the current table.
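As a minimal sketch, the iteration from the previous step could print several of these properties for each record (assuming the record includes the `location` tag used in the write example):
```python
for table in result:
    for record in table.records:
        # Print selected properties of the current record
        print(
            record.get_time(),
            record.get_measurement(),
            record.get_field(),
            record.get_value(),
            record.values.get("location"),
        )
```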
### Complete example query script
```python
query_api = client.query_api()
query = 'from(bucket:"my-bucket")\
|> range(start: -10m)\
|> filter(fn:(r) => r._measurement == "my_measurement")\
|> filter(fn: (r) => r.location == "Prague")\
|> filter(fn:(r) => r._field == "temperature" )'
result = client.query_api().query(org=org, query=query)
results = []
for table in result:
    for record in table.records:
        results.append((record.get_field(), record.get_value()))
print(results)
# Output: [(temperature, 25.3)]
```
For more information, see the [Python client README on GitHub](https://github.com/influxdata/influxdb-client-python).
{{< duplicate-oss >}}

View File

@ -8,8 +8,6 @@ menu:
influxdb_cloud:
name: Write CSV data
parent: Developer tools
aliases:
- /influxdb/cloud/write-data/csv/
weight: 204
related:
- /influxdb/cloud/reference/syntax/line-protocol/

View File

@ -2,7 +2,7 @@
title: influx write
description: >
The `influx write` command writes data to InfluxDB via stdin or from a specified file.
Write data using line protocol or annotated CSV.
Write data using line protocol, annotated CSV, or extended annotated CSV.
menu:
influxdb_2_0_ref:
name: influx write
@ -11,12 +11,17 @@ weight: 101
influxdb/v2.0/tags: [write]
related:
- /influxdb/v2.0/write-data/
- /influxdb/v2.0/write-data/csv/
- /influxdb/v2.0/write-data/developer-tools/csv/
- /influxdb/v2.0/reference/syntax/line-protocol/
- /influxdb/v2.0/reference/syntax/annotated-csv/
- /influxdb/v2.0/reference/syntax/annotated-csv/extended/
---
The `influx write` command writes data to InfluxDB via stdin or from a specified file.
Write data using [line protocol](/influxdb/v2.0/reference/syntax/line-protocol) or
[annotated CSV](/influxdb/v2.0/reference/syntax/annotated-csv).
Write data using [line protocol](/influxdb/v2.0/reference/syntax/line-protocol),
[annotated CSV](/influxdb/v2.0/reference/syntax/annotated-csv), or
[extended annotated CSV](/influxdb/v2.0/reference/syntax/annotated-csv/extended/).
If you write CSV data, CSV annotations determine how the data translates into line protocol.
## Usage
```
@ -54,3 +59,138 @@ influx write [command]
| | `--skip-verify` | Skip TLS certificate verification | | |
| `-t` | `--token` | Authentication token | string | `INFLUX_TOKEN` |
| `-u` | `--url` | URL to import data from | string | |
## Examples
- [Write line protocol](#line-protocol)
- [via stdin](#write-line-protocol-via-stdin)
- [from a file](#write-line-protocol-from-a-file)
- [from multiple files](#write-line-protocol-from-multiple-files)
- [from a URL](#write-line-protocol-from-a-url)
- [from multiple URLs](#write-line-protocol-from-multiple-urls)
- [from multiple sources](#write-line-protocol-from-multiple-sources)
- [Write CSV data](#csv)
- [via stdin](#write-annotated-csv-data-via-stdin)
- [from a file](#write-annotated-csv-data-from-a-file)
- [from multiple files](#write-annotated-csv-data-from-multiple-files)
- [from a URL](#write-annotated-csv-data-from-a-url)
- [from multiple URLs](#write-annotated-csv-data-from-multiple-urls)
- [from multiple sources](#write-annotated-csv-data-from-multiple-sources)
- [and prepend annotation headers](#prepend-csv-data-with-annotation-headers)
### Line protocol
##### Write line protocol via stdin
```sh
influx write --bucket example-bucket "
m,host=host1 field1=1.2
m,host=host2 field1=2.4
m,host=host1 field2=5i
m,host=host2 field2=3i
"
```
##### Write line protocol from a file
```sh
influx write \
--bucket example-bucket \
--file path/to/line-protocol.txt
```
##### Write line protocol from multiple files
```sh
influx write \
--bucket example-bucket \
--file path/to/line-protocol-1.txt \
--file path/to/line-protocol-2.txt
```
##### Write line protocol from a URL
```sh
influx write \
--bucket example-bucket \
--url https://example.com/line-protocol.txt
```
##### Write line protocol from multiple URLs
```sh
influx write \
--bucket example-bucket \
--url https://example.com/line-protocol-1.txt \
--url https://example.com/line-protocol-2.txt
```
##### Write line protocol from multiple sources
```sh
influx write \
--bucket example-bucket \
--file path/to/line-protocol-1.txt \
--url https://example.com/line-protocol-2.txt
```
---
### CSV
##### Write annotated CSV data via stdin
```sh
influx write \
--bucket example-bucket \
--format csv \
"#datatype measurement,tag,tag,field,field,ignored,time
m,cpu,host,time_steal,usage_user,nothing,time
cpu,cpu1,host1,0,2.7,a,1482669077000000000
cpu,cpu1,host2,0,2.2,b,1482669087000000000
"
```
##### Write annotated CSV data from a file
```sh
influx write \
--bucket example-bucket \
--file path/to/data.csv
```
##### Write annotated CSV data from multiple files
```sh
influx write \
--bucket example-bucket \
--file path/to/data-1.csv \
--file path/to/data-2.csv
```
##### Write annotated CSV data from a URL
```sh
influx write \
--bucket example-bucket \
--url https://example.com/data.csv
```
##### Write annotated CSV data from multiple URLs
```sh
influx write \
--bucket example-bucket \
--url https://example.com/data-1.csv \
--url https://example.com/data-2.csv
```
##### Write annotated CSV data from multiple sources
```sh
influx write \
--bucket example-bucket \
--file path/to/data-1.csv \
--url https://example.com/data-2.csv
```
##### Prepend CSV data with annotation headers
```sh
influx write \
--bucket example-bucket \
--header "#constant measurement,birds" \
--header "#datatype dataTime:2006-01-02,long,tag" \
--file path/to/data.csv
```

View File

@ -9,13 +9,20 @@ menu:
parent: influx write
weight: 101
influxdb/v2.0/tags: [write]
related:
- /influxdb/v2.0/write-data/
- /influxdb/v2.0/write-data/developer-tools/csv/
- /influxdb/v2.0/reference/syntax/line-protocol/
- /influxdb/v2.0/reference/syntax/annotated-csv/
- /influxdb/v2.0/reference/syntax/annotated-csv/extended/
---
The `influx write dryrun` command prints write output to stdout instead of writing
to InfluxDB. Use this command to test writing data.
Supports [line protocol](/influxdb/v2.0/reference/syntax/line-protocol) and
[annotated CSV](/influxdb/v2.0/reference/syntax/annotated-csv).
Supports [line protocol](/influxdb/v2.0/reference/syntax/line-protocol),
[annotated CSV](/influxdb/v2.0/reference/syntax/annotated-csv), and
[extended annotated CSV](/influxdb/v2.0/reference/syntax/annotated-csv/extended).
Output is always **line protocol**.
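For example, a minimal sketch (assuming `influx write dryrun` accepts the same `--bucket` and `--file` flags as `influx write`, with placeholder names) that previews how a CSV file would translate into line protocol without writing it:
```sh
influx write dryrun \
  --bucket example-bucket \
  --file path/to/data.csv
```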
## Usage

View File

@ -11,7 +11,7 @@ menu:
weight: 201
influxdb/v2.0/tags: [csv, syntax, write]
related:
- /influxdb/v2.0/write-data/csv/
- /influxdb/v2.0/write-data/developer-tools/csv/
- /influxdb/v2.0/reference/cli/influx/write/
- /influxdb/v2.0/reference/syntax/line-protocol/
- /influxdb/v2.0/reference/syntax/annotated-csv/

View File

@ -144,7 +144,7 @@ write_api.write(bucket=bucket, org=org, record=p)
- `get_measurement()`: Returns the measurement name of the record.
- `get_field()`: Returns the field name.
- `get_values()`: Returns the actual field value.
- `get_value()`: Returns the actual field value.
- `values()`: Returns a map of column values.
- `values.get("<your tag>")`: Returns a value from the record for given column.
- `get_time()`: Returns the time of the record.

View File

@ -59,7 +59,7 @@ input:
id: kinesis_consumer
description: |
The Amazon Kinesis Consumer input plugin reads from a Kinesis data stream and creates
metrics using one of the supported [input data formats](https://docs.influxdata.com/telegraf/latest/data_formats/input).
metrics using one of the supported [input data formats](/telegraf/latest/data_formats/input).
introduced: 1.10.0
tags: [linux, macos, windows, cloud, messaging]
@ -408,7 +408,7 @@ input:
- name: Exec
id: exec
description: |
The Exec input plugin parses supported [Telegraf input data formats](https://docs.influxdata.com/telegraf/latest/data_formats/input/)
The Exec input plugin parses supported [Telegraf input data formats](/telegraf/latest/data_formats/input/)
(line protocol, JSON, Graphite, Value, Nagios, Collectd, and Dropwizard) into metrics.
Each Telegraf metric includes the measurement name, tags, fields, and timestamp.
introduced: 0.1.5
@ -418,7 +418,7 @@ input:
id: execd
description: |
The Execd input plugin runs an external program as a daemon. Programs must output metrics in an accepted
[Telegraf input data format](https://docs.influxdata.com/telegraf/latest/data_formats/input/)
[Telegraf input data format](/telegraf/latest/data_formats/input/)
on its standard output. Configure `signal` to send a signal to the daemon running on each collection interval.
The program output on standard error is mirrored to the Telegraf log.
introduced: 1.14.0
@ -450,7 +450,7 @@ input:
then use the [Tail input plugin](#tail).
> To parse metrics from multiple files that are formatted in one of the supported
> [input data formats](https://docs.influxdata.com/telegraf/latest/data_formats/input),
> [input data formats](/telegraf/latest/data_formats/input),
> use the [Multifile input plugin](#multifile).
introduced: 1.8.0
tags: [linux, macos, windows, systems]
@ -557,7 +557,7 @@ input:
id: http
description: |
The HTTP input plugin collects metrics from one or more HTTP (or HTTPS) endpoints.
The endpoint should have metrics formatted in one of the [supported input data formats](https://docs.influxdata.com/telegraf/latest/data_formats/input/).
The endpoint should have metrics formatted in one of the [supported input data formats](/telegraf/latest/data_formats/input/).
Each data format has its own unique set of configuration options which can be added to the input configuration.
introduced: 1.6.0
tags: [linux, macos, windows, servers, web]
@ -638,13 +638,14 @@ input:
To collect data on an InfluxDB 2.x instance running on localhost, the configuration for the
Prometheus input plugin would be:
<div class="keep-url">
<div class="keep-url"></div>
```toml
[[inputs.prometheus]]
## An array of urls to scrape metrics from.
urls = ["http://localhost:8086/metrics"]
```
</div>
introduced: 1.8.0
tags: [linux, macos, windows, data-stores]
@ -652,7 +653,7 @@ input:
id: influxdb_listener
description: |
The InfluxDB Listener input plugin listens for requests sent
according to the [InfluxDB HTTP API](https://docs.influxdata.com/influxdb/latest/guides/writing_data/).
according to the [InfluxDB HTTP API](/influxdb/v1.8/guides/write_data/).
The intent of the plugin is to allow Telegraf to serve as a proxy, or router,
for the HTTP `/write` endpoint of the InfluxDB HTTP API.
@ -661,8 +662,8 @@ input:
>
> This plugin is compatible with **InfluxDB 1.x** only.
The `/write` endpoint supports the `precision` query parameter and can be set
to one of `ns`, `u`, `ms`, `s`, `m`, `h`. All other parameters are ignored and
The `/write` endpoint supports the `precision` query parameter and can be
set to `ns`, `u`, `ms`, `s`, `m`, `h`. Other parameters are ignored and
defer to the output plugins configuration.
When chaining Telegraf instances using this plugin, `CREATE DATABASE` requests
@ -673,26 +674,16 @@ input:
tags: [linux, macos, windows, data-stores]
- name: InfluxDB v2 Listener
id: influxdb_listener
id: influxdb_v2_listener
description: |
The InfluxDB Listener input plugin listens for requests sent
according to the [InfluxDB HTTP API](https://docs.influxdata.com/influxdb/latest/guides/writing_data/).
The InfluxDB v2 Listener input plugin listens for requests sent
according to the [InfluxDB HTTP API](/influxdb/latest/reference/api/).
The intent of the plugin is to allow Telegraf to serve as a proxy, or router,
for the HTTP `/write` endpoint of the InfluxDB HTTP API.
for the HTTP `/api/v2/write` endpoint of the InfluxDB HTTP API.
> This plugin was previously known as `http_listener`.
> To send general metrics via HTTP, use the [HTTP Listener v2 input plugin](#http_listener_v2) instead.
>
> This plugin is compatible with **InfluxDB 2.x** only.
The `/write` endpoint supports the `precision` query parameter and can be set
to one of `ns`, `u`, `ms`, `s`, `m`, `h`. All other parameters are ignored and
The `/api/v2/write` endpoint supports the `precision` query parameter and
can be set to `ns`, `u`, `ms`, or `s`. Other parameters are ignored and
defer to the output plugins configuration.
When chaining Telegraf instances using this plugin, `CREATE DATABASE` requests
receive a `200 OK` response with message body `{"results":[]}` but they are not
relayed. The output configuration of the Telegraf instance which ultimately
submits data to InfluxDB determines the destination database.
introduced: 1.16.0
tags: [linux, macos, windows, data-stores]
@ -990,7 +981,7 @@ input:
id: mqtt_consumer
description: |
The MQTT Consumer input plugin reads from specified MQTT topics and adds messages to InfluxDB.
Messages are in the [Telegraf input data formats](https://docs.influxdata.com/telegraf/latest/data_formats/input/).
Messages are in the [Telegraf input data formats](/telegraf/latest/data_formats/input/).
introduced: 0.10.3
tags: [linux, macos, windows, messaging, IoT]
@ -1002,7 +993,7 @@ input:
This is often useful creating custom metrics from the `/sys` or `/proc` filesystems.
> To parse metrics from a single file formatted in one of the supported
> [input data formats](https://docs.influxdata.com/telegraf/latest/data_formats/input),
> [input data formats](/telegraf/latest/data_formats/input),
> use the [file input plugin](#file).
introduced: 1.10.0
tags: [linux, macos, windows]
@ -1018,7 +1009,7 @@ input:
id: nats_consumer
description: |
The NATS Consumer input plugin reads from specified NATS subjects and adds messages to InfluxDB.
Messages are expected in the [Telegraf input data formats](https://docs.influxdata.com/telegraf/latest/data_formats/input/).
Messages are expected in the [Telegraf input data formats](/telegraf/latest/data_formats/input/).
A Queue Group is used when subscribing to subjects so multiple instances of Telegraf
can read from a NATS cluster in parallel.
introduced: 0.10.3
@ -1484,7 +1475,7 @@ input:
description: |
The Socket Listener input plugin listens for messages from streaming (TCP, UNIX)
or datagram (UDP, unixgram) protocols. Messages are expected in the
[Telegraf Input Data Formats](https://docs.influxdata.com/telegraf/latest/data_formats/input/).
[Telegraf Input Data Formats](/telegraf/latest/data_formats/input/).
introduced: 1.3.0
tags: [linux, macos, windows, networking]
@ -1904,7 +1895,7 @@ output:
id: cloud_pubsub
description: |
The Google PubSub output plugin publishes metrics to a [Google Cloud PubSub](https://cloud.google.com/pubsub)
topic as one of the supported [output data formats](https://docs.influxdata.com/telegraf/latest/data_formats/output).
topic as one of the supported [output data formats](/telegraf/latest/data_formats/output).
introduced: 1.10.0
tags: [linux, macos, windows, messaging, cloud]
@ -2009,7 +2000,7 @@ output:
id: mqtt
description: |
The MQTT Producer output plugin writes to the MQTT server using
[supported output data formats](https://docs.influxdata.com/telegraf/latest/data_formats/output/).
[supported output data formats](/telegraf/latest/data_formats/output/).
introduced: 0.2.0
tags: [linux, macos, windows, messaging, IoT]