Release Telegraf v1.25.0 and v1.25.1 (#4742)
* copy v1.24 to v1.25 * update 1_24 to 1_25 * update 1.24 to 1.25 * update plugins.md links * update change log * Add new plugins * products.yml: update with latest versions and patches for v1.25.1 --- Co-authored-by: Scott Anderson <sanderson@users.noreply.github.com>
---
title: Telegraf 1.25 documentation
description: >
  Documentation for Telegraf, the plugin-driven server agent of the InfluxData
  time series platform, used to collect and report metrics. Telegraf supports four categories of plugins: input, output, aggregator, and processor.
menu:
  telegraf_1_25:
    name: Telegraf v1.25
weight: 1
related:
  - /resources/videos/intro-to-telegraf/
  - /telegraf/v1.25/install/
  - /telegraf/v1.25/get_started/
---

Telegraf, a server-based agent, collects and sends metrics and events from databases, systems, and IoT sensors.
Written in Go, Telegraf compiles into a single binary with no external dependencies and requires very little memory.

For an introduction to Telegraf and an overview of how it works, watch the following video:

{{< youtube vGJeo3FaMds >}}

{{< influxdbu title="Telegraf Basics" summary="Learn how to get started with Telegraf with this **free** course that covers common use cases, proper configuration, and best practices for deployment. Also, discover how to write your own custom Telegraf plugins." action="Take the course" link="https://university.influxdata.com/courses/telegraf-basics-tutorial/" >}}

{{< influxdbu "telegraf-102" >}}
---
title: Telegraf commands and flags
description: The `telegraf` command starts and runs all the processes necessary for Telegraf to function.
menu:
  telegraf_1_25_ref:
    name: Commands
    weight: 25
---

The `telegraf` command starts and runs all the processes necessary for Telegraf to function.

## Usage

```sh
telegraf [commands]
telegraf [flags]
```

## Commands

| Command   | Description                                    |
| :-------- | :--------------------------------------------- |
| `config`  | Print out full sample configuration to stdout. |
| `version` | Print version to stdout.                       |

## Flags {id="telegraf-command-flags"}

| Flag                             | Description                                                                                                                        |
| :------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------- |
| `--aggregator-filter <filter>`   | Filter aggregators to enable. Separator is `:`.                                                                                     |
| `--config <file>`                | Configuration file to load.                                                                                                         |
| `--config-directory <directory>` | Directory containing additional `*.conf` files.                                                                                     |
| `--deprecation-list`             | Print all deprecated plugins or plugin options.                                                                                     |
| `--watch-config`                 | Restart Telegraf on local configuration changes. Use either fs notifications (`inotify`) or polling (`poll`). Disabled by default.  |
| `--plugin-directory <directory>` | Directory containing `*.so` files to search recursively for plugins. Found plugins are loaded, tagged, and identified.              |
| `--debug`                        | Enable debug logging.                                                                                                               |
| `--input-filter <filter>`        | Filter input plugins to enable. Separator is `:`.                                                                                   |
| `--input-list`                   | Print available input plugins.                                                                                                      |
| `--output-filter <filter>`       | Filter output plugins to enable. Separator is `:`.                                                                                  |
| `--output-list`                  | Print available output plugins.                                                                                                     |
| `--pidfile <file>`               | File to write PID to.                                                                                                               |
| `--pprof-addr <address>`         | pprof address to listen on. Disabled by default.                                                                                    |
| `--processor-filter <filter>`    | Filter processor plugins to enable. Separator is `:`.                                                                               |
| `--quiet`                        | Run in quiet mode.                                                                                                                  |
| `--section-filter <filter>`      | Filter configuration sections to output (`agent`, `global_tags`, `outputs`, `processors`, `aggregators`, and `inputs`). Separator is `:`. |
| `--sample-config`                | Print full sample configuration.                                                                                                    |
| `--once`                         | Gather metrics once, write them, and exit.                                                                                          |
| `--test`                         | Gather metrics once and print them.                                                                                                 |
| `--test-wait`                    | Number of seconds to wait for service inputs to complete in test or once mode.                                                      |
| `--usage <plugin>`               | Print plugin usage (example: `telegraf --usage mysql`).                                                                             |
| `--version`                      | Print Telegraf version.                                                                                                             |

## Examples

### Generate a Telegraf configuration file

```sh
telegraf config > telegraf.conf
```

### Generate a configuration file with only CPU input and InfluxDB output plugins defined

```sh
telegraf --input-filter cpu --output-filter influxdb config
```

### Run a single Telegraf configuration and output metrics to stdout

```sh
telegraf --config telegraf.conf --test
```

### Run Telegraf with all plugins defined in the configuration file

```sh
telegraf --config telegraf.conf
```

### Run Telegraf, enabling the CPU and memory input plugins and the InfluxDB output plugin

```sh
telegraf --config telegraf.conf --input-filter cpu:mem --output-filter influxdb
```

### Run Telegraf with pprof

```sh
telegraf --config telegraf.conf --pprof-addr localhost:6060
```
---
title: Configuration options
description: Overview of the Telegraf configuration file, enabling plugins, and setting environment variables.
aliases:
  - /telegraf/v1.25/administration/configuration/
menu:
  telegraf_1_25_ref:
    name: Configuration options
    weight: 40
---

The Telegraf configuration file (`telegraf.conf`) lists all available Telegraf plugins. See the current version here: [telegraf.conf](https://github.com/influxdata/telegraf/blob/master/etc/telegraf.conf).

> To quickly get started with Telegraf, see [Get started](/telegraf/v1.25/get_started/).

## Generate a configuration file

Telegraf can auto-generate a default configuration file:

```sh
telegraf config > telegraf.conf
```

To generate a configuration file with specific inputs and outputs, use the
`--input-filter` and `--output-filter` flags:

```sh
telegraf --input-filter cpu:mem:net:swap --output-filter influxdb:kafka config
```

## Configuration file locations

Use the `--config` flag to specify the configuration file location:

- Filename and path, for example: `--config /etc/default/telegraf`
- Remote URL endpoint, for example: `--config "http://remote-URL-endpoint"`

Use the `--config-directory` flag to include files ending with `.conf` in the specified directory in the Telegraf configuration.

On most systems, the default locations are `/etc/telegraf/telegraf.conf` for
the main configuration file and `/etc/telegraf/telegraf.d` for the directory of
configuration files.

## Set environment variables

Add environment variables anywhere in the configuration file by prepending them with `$`.
For strings, variables must be in quotes (for example, `"$STR_VAR"`).
For numbers and Booleans, variables must be unquoted (for example, `$INT_VAR`, `$BOOL_VAR`).

You can also set environment variables using the Linux `export` command: `export password=mypassword`

> **Note:** We recommend using environment variables for sensitive information.

### Example: Telegraf environment variables

In the Telegraf environment variables file (`/etc/default/telegraf`):

```sh
USER="alice"
INFLUX_URL="http://localhost:8086"
INFLUX_SKIP_DATABASE_CREATION="true"
INFLUX_PASSWORD="monkey123"
```

In the Telegraf configuration file (`/etc/telegraf/telegraf.conf`):

```toml
[global_tags]
  user = "${USER}"

[[inputs.mem]]

[[outputs.influxdb]]
  urls = ["${INFLUX_URL}"]
  skip_database_creation = ${INFLUX_SKIP_DATABASE_CREATION}
  password = "${INFLUX_PASSWORD}"
```

The environment variables above add the following configuration settings to Telegraf:

```toml
[global_tags]
  user = "alice"

[[inputs.mem]]

[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  skip_database_creation = true
  password = "monkey123"
```

## Global tags

Global tags can be specified in the `[global_tags]` section of the configuration file
in `key="value"` format. All metrics gathered on this host are tagged
with the tags specified here.
## Agent configuration

Telegraf has a few options you can configure under the `[agent]` section of the
configuration file.

* **interval**: Default data collection interval for all inputs.
* **round_interval**: Rounds the collection interval to `interval`.
  For example, if `interval` is set to `10s`, inputs always collect on :00, :10, :20, and so on.
* **metric_batch_size**: Telegraf sends metrics to outputs in batches of at
  most `metric_batch_size` metrics.
* **metric_buffer_limit**: Telegraf caches up to `metric_buffer_limit` metrics
  for each output and flushes this buffer on a successful write.
  This should be a multiple of `metric_batch_size` and no less
  than twice `metric_batch_size`.
* **collection_jitter**: Jitters the collection time by a random amount.
  Each plugin sleeps for a random time within the jitter before collecting.
  Use this to avoid many plugins querying resources such as sysfs at the
  same time, which can have a measurable effect on the system.
* **flush_interval**: Default data flushing interval for all outputs.
  Don't set this below `interval`.
  The maximum flush time is `flush_interval` + `flush_jitter`.
* **flush_jitter**: Jitters the flush interval by a random amount.
  This is primarily to avoid
  large write spikes for users running a large number of Telegraf instances.
  For example, a `flush_jitter` of `5s` and a `flush_interval` of `10s` means flushes happen every 10-15s.
* **precision**: Collected metrics are rounded to the precision specified as an
  `interval` (integer + unit, for example: `1ns`, `1us`, `1ms`, or `1s`). Precision is NOT
  used for service inputs, such as `logparser` and `statsd`.
* **debug**: Run Telegraf in debug mode.
* **quiet**: Run Telegraf in quiet mode (error messages only).
* **logtarget**: Controls the destination for logs. Can be set to `"file"`, `"stderr"`, or, on Windows, `"eventlog"`. When set to `"file"`, the output file is determined by the `logfile` setting.
* **logfile**: If `logtarget` is set to `"file"`, specify the logfile name. If set to an empty string, logs are written to stderr.
* **logfile_rotation_interval**: Rotates the logfile after the specified time interval. When
  set to `0`, no time-based rotation is performed.
* **logfile_rotation_max_size**: Rotates the logfile when it becomes larger than the specified
  size. When set to `0`, no size-based rotation is performed.
* **logfile_rotation_max_archives**: Maximum number of rotated archives to keep; any
  older logs are deleted. If set to `-1`, no archives are removed.
* **log_with_timezone**: Set a timezone to use when logging, or `"local"` for local time. Example: `"America/Chicago"`.
  [See this page for options and formats.](https://socketloop.com/tutorials/golang-display-list-of-timezones-with-gmt)
* **hostname**: Override the default hostname. If empty, `os.Hostname()` is used.
* **omit_hostname**: If true, do not set the `host` tag in the Telegraf agent.
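As a sketch, several of these agent options might be combined like this (all values are illustrative, not recommendations):

```toml
[agent]
  ## Collect inputs every 10s, aligned to the clock by round_interval.
  interval = "10s"
  round_interval = true
  ## Send up to 1000 metrics per write; buffer up to 10000 per output.
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  ## Spread flushes by up to 5s to avoid write spikes.
  flush_interval = "10s"
  flush_jitter = "5s"
  ## Log to a file, rotating daily and keeping 5 archives.
  logtarget = "file"
  logfile = "/var/log/telegraf/telegraf.log"
  logfile_rotation_interval = "24h"
  logfile_rotation_max_archives = 5
```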
## Input configuration

The following configuration parameters are available for all inputs:

* **alias**: Name an instance of a plugin.
* **interval**: How often to gather this metric. Normal plugins use a single
  global interval, but if one particular input should run less or more often,
  you can configure that here. Increase `interval` to stay under data-ingest rate limits.
* **precision**: Overrides the `precision` setting of the agent. Collected
  metrics are rounded to the precision specified as an `interval`. When this value is
  set on a service input (for example, `statsd`), multiple events occurring at the same
  timestamp may be merged by the output database.
* **collection_jitter**: Overrides the `collection_jitter` setting of the agent.
  Collection jitter is used to jitter the collection by a random `interval`.
* **name_override**: Override the base name of the measurement.
  (Default is the name of the input.)
* **name_prefix**: Specifies a prefix to attach to the measurement name.
* **name_suffix**: Specifies a suffix to attach to the measurement name.
* **tags**: A map of tags to apply to a specific input's measurements.
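A minimal sketch of how these per-input options combine (the plugin choices and values here are illustrative):

```toml
[[inputs.cpu]]
  ## Collect CPU metrics more often than the global interval.
  interval = "5s"
  alias = "cpu-fast"

[[inputs.mem]]
  ## Gather memory metrics once a minute and prefix the measurement name.
  interval = "60s"
  name_prefix = "host_"
  ## Tags tables must come at the end of the plugin definition.
  [inputs.mem.tags]
    team = "ops"
```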
## Output configuration

The following configuration parameters are available for all outputs:

* **alias**: Name an instance of a plugin.
* **flush_interval**: Maximum time between flushes. Use this setting to
  override the agent `flush_interval` on a per-plugin basis.
* **flush_jitter**: Amount of time to jitter the flush interval. Use this
  setting to override the agent `flush_jitter` on a per-plugin basis.
* **metric_batch_size**: Maximum number of metrics to send at once. Use
  this setting to override the agent `metric_batch_size` on a per-plugin basis.
* **metric_buffer_limit**: Maximum number of unsent metrics to buffer.
  Use this setting to override the agent `metric_buffer_limit` on a per-plugin basis.
* **name_override**: Override the base name of the measurement.
  (Default is the name of the output.)
* **name_prefix**: Specifies a prefix to attach to the measurement name.
* **name_suffix**: Specifies a suffix to attach to the measurement name.
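For example, a single output can override the agent's batching and flush defaults like this (values are illustrative):

```toml
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  ## Flush this output more aggressively than the agent default.
  flush_interval = "5s"
  ## Smaller batches and buffer for this output only.
  metric_batch_size = 500
  metric_buffer_limit = 5000
```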
## Aggregator configuration

The following configuration parameters are available for all aggregators:

* **alias**: Name an instance of a plugin.
* **period**: The period on which to flush and clear each aggregator. All metrics
  sent with timestamps outside of this period are ignored by the
  aggregator.
* **delay**: The delay before each aggregator is flushed. This controls
  how long aggregators wait to receive metrics from input plugins,
  in the case that aggregators are flushing and inputs are gathering on the
  same interval.
* **grace**: The duration for which metrics are still aggregated by the plugin
  even though they're outside of the aggregation period. This setting is needed
  when the agent is expected to receive late metrics that should
  be rolled into the next aggregation period.
* **drop_original**: If true, the original metric is dropped by the
  aggregator and not sent to the output plugins.
* **name_override**: Override the base name of the measurement.
  (Default is the name of the input.)
* **name_prefix**: Specifies a prefix to attach to the measurement name.
* **name_suffix**: Specifies a suffix to attach to the measurement name.
* **tags**: A map of tags to apply to a specific input's measurements.
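As a sketch, the windowing options above might be combined on a single aggregator like this (the `minmax` aggregator and the durations shown are illustrative):

```toml
[[aggregators.minmax]]
  ## Aggregate over 30s windows, waiting an extra 10s before flushing
  ## and accepting metrics that arrive up to 5s late.
  period = "30s"
  delay = "10s"
  grace = "5s"
  ## Emit only the aggregates, not the raw metrics.
  drop_original = true
```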
For a demonstration of how to configure SNMP, MQTT, and PostgreSQL plugins to get data into Telegraf, see the following video:

{{< youtube 6XJdZ_kdx14 >}}
## Processor configuration

The following configuration parameters are available for all processors:

* **alias**: Name an instance of a plugin.
* **order**: The order in which processors are executed. If this
  is not specified, processor execution order is random.

The [metric filtering](#metric-filtering) parameters can be used to limit what metrics are
handled by the processor. Excluded metrics are passed downstream to the next
processor.
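For example, `order` can pin two processors into a deterministic pipeline (the `rename` and `strings` processors here are illustrative):

```toml
[[processors.rename]]
  ## Run this processor first.
  order = 1
  [[processors.rename.replace]]
    tag = "level"
    dest = "LogLevel"

[[processors.strings]]
  ## Run this processor after rename.
  order = 2
  [[processors.strings.lowercase]]
    tag = "host"
```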
## Metric filtering

Filters can be configured per input, output, processor, or aggregator.
See below for examples.

* **namepass**:
  An array of glob pattern strings. Only points whose measurement name matches
  a pattern in this list are emitted.
* **namedrop**:
  The inverse of `namepass`. If a match is found, the point is discarded. This
  is tested on points after they have passed the `namepass` test.
* **fieldpass**:
  An array of glob pattern strings. Only fields whose field key matches a
  pattern in this list are emitted.
* **fielddrop**:
  The inverse of `fieldpass`. Fields with a field key matching one of the
  patterns are discarded from the point.
* **tagpass**:
  A table mapping tag keys to arrays of glob pattern strings. Only points
  that contain a tag key in the table and a tag value matching one of its
  patterns are emitted.
* **tagdrop**:
  The inverse of `tagpass`. If a match is found, the point is discarded. This
  is tested on points after they have passed the `tagpass` test.
* **taginclude**:
  An array of glob pattern strings. Only tags with a tag key matching one of
  the patterns are emitted. In contrast to `tagpass`, which passes an entire
  point based on its tag, `taginclude` removes all non-matching tags from the
  point. This filter can be used on both inputs and outputs, but it is
  _recommended_ to be used on inputs, as it is more efficient to filter out tags
  at the ingestion point.
* **tagexclude**:
  The inverse of `taginclude`. Tags with a tag key matching one of the patterns
  are discarded from the point.

> **Note:** Due to the way TOML is parsed, `tagpass` and `tagdrop` parameters
> must be defined at the _end_ of the plugin definition, otherwise subsequent
> plugin configuration options will be interpreted as part of the tagpass/tagdrop
> tables.

To learn more about metric filtering, watch the following video:

{{< youtube R3DnObs_OKA >}}
## Examples

#### Input configuration examples

This is a full working config that will output CPU data to an InfluxDB instance
at `192.168.59.103:8086`, tagging measurements with `dc="denver-1"`. It will output
measurements at a 10s interval and will collect per-cpu data, dropping any
fields which begin with `time_`.

```toml
[global_tags]
  dc = "denver-1"

[agent]
  interval = "10s"

# OUTPUTS
[[outputs.influxdb]]
  urls = ["http://192.168.59.103:8086"] # required.
  database = "telegraf" # required.
  precision = "1s"

# INPUTS
[[inputs.cpu]]
  percpu = true
  totalcpu = false
  # filter all fields beginning with 'time_'
  fielddrop = ["time_*"]
```

#### Input config: `tagpass` and `tagdrop`

> **Note:** `tagpass` and `tagdrop` parameters must be defined at the _end_ of
> the plugin definition, otherwise subsequent plugin configuration options will be
> interpreted as part of the tagpass/tagdrop map.

```toml
[[inputs.cpu]]
  percpu = true
  totalcpu = false
  fielddrop = ["cpu_time"]
  # Don't collect CPU data for cpu6 & cpu7
  [inputs.cpu.tagdrop]
    cpu = [ "cpu6", "cpu7" ]

[[inputs.disk]]
  [inputs.disk.tagpass]
    # tagpass conditions are OR, not AND.
    # If the (filesystem is ext4 or xfs) OR (the path is /opt or /home)
    # then the metric passes
    fstype = [ "ext4", "xfs" ]
    # Globs can also be used on the tag values
    path = [ "/opt", "/home*" ]
```

#### Input config: `fieldpass` and `fielddrop`

```toml
# Drop all metrics for guest & steal CPU usage
[[inputs.cpu]]
  percpu = false
  totalcpu = true
  fielddrop = ["usage_guest", "usage_steal"]

# Only store inode-related metrics for disks
[[inputs.disk]]
  fieldpass = ["inodes*"]
```

#### Input config: `namepass` and `namedrop`

```toml
# Drop all metrics about containers for the kubelet
[[inputs.prometheus]]
  urls = ["http://kube-node-1:4194/metrics"]
  namedrop = ["container_*"]

# Only store rest client-related metrics for the kubelet
[[inputs.prometheus]]
  urls = ["http://kube-node-1:4194/metrics"]
  namepass = ["rest_client_*"]
```

#### Input config: `taginclude` and `tagexclude`

```toml
# Only include the "cpu" tag in the measurements for the cpu plugin.
[[inputs.cpu]]
  percpu = true
  totalcpu = true
  taginclude = ["cpu"]

# Exclude the `fstype` tag from the measurements for the disk plugin.
[[inputs.disk]]
  tagexclude = ["fstype"]
```

#### Input config: `prefix`, `suffix`, and `override`

This plugin will emit measurements with the name `cpu_total`.

```toml
[[inputs.cpu]]
  name_suffix = "_total"
  percpu = false
  totalcpu = true
```

This will emit measurements with the name `foobar`.

```toml
[[inputs.cpu]]
  name_override = "foobar"
  percpu = false
  totalcpu = true
```

#### Input config: tags

This plugin will emit measurements with two additional tags: `tag1=foo` and
`tag2=bar`.

> **Note:** Order matters; the `[inputs.cpu.tags]` table must be at the _end_ of the
> plugin definition.

```toml
[[inputs.cpu]]
  percpu = false
  totalcpu = true
  [inputs.cpu.tags]
    tag1 = "foo"
    tag2 = "bar"
```

#### Multiple inputs of the same type

Additional inputs (or outputs) of the same type can be specified by defining these instances in the configuration file. To avoid measurement collisions, use the `name_override`, `name_prefix`, or `name_suffix` configuration options:

```toml
[[inputs.cpu]]
  percpu = false
  totalcpu = true

[[inputs.cpu]]
  percpu = true
  totalcpu = false
  name_override = "percpu_usage"
  fielddrop = ["cpu_time*"]
```

#### Output configuration examples

```toml
[[outputs.influxdb]]
  urls = [ "http://localhost:8086" ]
  database = "telegraf"
  precision = "1s"
  # Drop all measurements that start with "aerospike"
  namedrop = ["aerospike*"]

[[outputs.influxdb]]
  urls = [ "http://localhost:8086" ]
  database = "telegraf-aerospike-data"
  precision = "1s"
  # Only accept aerospike data:
  namepass = ["aerospike*"]

[[outputs.influxdb]]
  urls = [ "http://localhost:8086" ]
  database = "telegraf-cpu0-data"
  precision = "1s"
  # Only store measurements where the tag "cpu" matches the value "cpu0"
  [outputs.influxdb.tagpass]
    cpu = ["cpu0"]
```

#### Aggregator configuration examples

This will collect and emit the min/max of the system `load1` metric every
30s, dropping the originals.

```toml
[[inputs.system]]
  fieldpass = ["load1"] # collects the system load1 metric.

[[aggregators.minmax]]
  period = "30s" # send & clear the aggregate every 30s.
  drop_original = true # drop the original metrics.

[[outputs.file]]
  files = ["stdout"]
```

This will collect and emit the min/max of the swap metrics every
30s, dropping the originals. The aggregator will not be applied
to the system load metrics due to the `namepass` parameter.

```toml
[[inputs.swap]]

[[inputs.system]]
  fieldpass = ["load1"] # collects the system load1 metric.

[[aggregators.minmax]]
  period = "30s" # send & clear the aggregate every 30s.
  drop_original = true # drop the original metrics.
  namepass = ["swap"] # only "pass" swap metrics through the aggregator.

[[outputs.file]]
  files = ["stdout"]
```

To learn more about configuring the Telegraf agent, watch the following video:

{{< youtube txUcAxMDBlQ >}}
---
title: Configure plugins
description:
menu:
  telegraf_1_25:
    name: Configure plugins
    weight: 50
---

Telegraf is a server-based agent for collecting and sending metrics and events from databases, systems, and IoT sensors.

{{< children hlevel="h2" >}}
---
title: Transform data with aggregator and processor plugins
description: |
  Aggregator and processor plugins aggregate and process metrics.
menu:
  telegraf_1_25:
    name: Aggregator and processor plugins
    weight: 50
    parent: Configure plugins
---

In addition to input plugins and output plugins, Telegraf includes aggregator and processor plugins, which are used to aggregate and process metrics as they pass through Telegraf.

{{< diagram >}}
graph TD
  Process[Process<br/> - transform<br/> - decorate<br/> - filter]
  Aggregate[Aggregate<br/> - transform<br/> - decorate<br/> - filter]

  CPU --> Process
  Memory --> Process
  MySQL --> Process
  SNMP --> Process
  Docker --> Process
  Process --> Aggregate
  Aggregate --> InfluxDB
  Aggregate --> File
  Aggregate --> Kafka

style Process text-align:left
style Aggregate text-align:left
{{< /diagram >}}

**Processor plugins** process metrics as they pass through and immediately emit
results based on the values they process. For example, this could be printing
all metrics or adding a tag to all metrics that pass through. For a list of processor plugins and links to their detailed configuration options, see [processor plugins](/telegraf/v1.25/plugins/#processor-plugins).

**Aggregator plugins**, on the other hand, are a bit more complicated. Aggregators
are typically used to emit new _aggregate_ metrics, such as a running mean,
minimum, maximum, quantile, or standard deviation. For this reason, all _aggregator_
plugins are configured with a `period`. The `period` is the size of the window
of metrics that each _aggregate_ represents. In other words, the emitted
_aggregate_ metric is the aggregated value of the past `period` seconds.
Since many users only care about their aggregates and not every single metric
gathered, there is also a `drop_original` option, which tells Telegraf to only
emit the aggregates and not the original metrics. For a list of aggregator plugins and links to their detailed configuration options, see [aggregator plugins](/telegraf/v1.25/plugins/#aggregator-plugins).
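For example, a minimal sketch of a pipeline that emits only 30-second aggregates (using the `minmax` aggregator for illustration):

```toml
[[inputs.cpu]]

[[aggregators.minmax]]
  ## Each emitted min/max represents a 30s window of metrics.
  period = "30s"
  ## Emit only the aggregates; drop the raw metrics.
  drop_original = true

[[outputs.file]]
  files = ["stdout"]
```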
{{% note %}}
#### Behavior of processors and aggregators when used together
When using both aggregator and processor plugins in Telegraf v1.17 and later, processor plugins
process data and then pass it to aggregator plugins.
After aggregator plugins aggregate the data, they pass it back to processor plugins.
This can have unintended consequences, such as executing mathematical operations twice.
_See [influxdata/telegraf#7993](https://github.com/influxdata/telegraf/issues/7993)._

If using custom processor scripts, they must be idempotent (repeatable, without side effects).
For custom processors that are not idempotent, use [namepass or namedrop](/telegraf/v1.25/administration/configuration/#input-config-namepass-and-namedrop) to avoid issues when aggregated data is processed a second time.
{{% /note %}}
---
title: Integrate with external plugins
description: |
  External plugins are external programs that are built outside of Telegraf and can run through an `execd` plugin.
menu:
  telegraf_1_25:
    name: External plugins
    weight: 50
    parent: Configure plugins
---

[External plugins](https://github.com/influxdata/telegraf/blob/master/EXTERNAL_PLUGINS.md) are external programs built outside
of Telegraf that can run through an `execd` plugin. These external plugins allow for
more flexibility compared to internal Telegraf plugins. Benefits of using external plugins include:

- Access to libraries not written in Go
- Use of licensed software (not available to the open source community)
- Inclusion of large dependencies that would otherwise bloat Telegraf
- Using your external plugin immediately without waiting for the Telegraf team to publish it
- Easy conversion of plugins between internal and external using the [shim](https://github.com/influxdata/telegraf/blob/master/plugins/common/shim/README.md)

{{< children hlevel="h2" >}}
@ -0,0 +1,60 @@
---
title: Use the `execd` shim
description:
menu:
  telegraf_1_25:
    name: Use the `execd` shim
    weight: 50
    parent: External plugins
---

The shim makes it easy to extract an internal input, processor, or output plugin from the main Telegraf repo into a stand-alone repo. This allows anyone to build and run it as a separate app using one of the `execd` plugins:

- [inputs.execd](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/execd)
- [processors.execd](https://github.com/influxdata/telegraf/blob/master/plugins/processors/execd)
- [outputs.execd](https://github.com/influxdata/telegraf/blob/master/plugins/outputs/execd)

## Extract a plugin using the shim wrapper

1. Move the project to an external repo. We recommend preserving the path
   structure: for example, if your plugin was located at
   `plugins/inputs/cpu` in the Telegraf repo, move it to `plugins/inputs/cpu`
   in the new repo.
2. Copy [main.go](https://github.com/influxdata/telegraf/blob/master/plugins/common/shim/example/cmd/main.go) into your project under the `cmd` folder.
   This serves as the entry point to the plugin when run as a stand-alone program.

{{% note %}}
The shim isn't designed to run multiple plugins at the same time, so include only one plugin per repo.
{{% /note %}}

3. Edit the `main.go` file to import your plugin. For example, `_ "github.com/me/my-plugin-telegraf/plugins/inputs/cpu"`. See an example of where to edit `main.go` [here](https://github.com/influxdata/telegraf/blob/7de9c5ff279e10edf7fe3fdd596f3b33902c912b/plugins/common/shim/example/cmd/main.go#L9).
4. Add a [plugin.conf](https://github.com/influxdata/telegraf/blob/master/plugins/common/shim/example/cmd/plugin.conf) for configuration
   specific to your plugin.

{{% note %}}
This config file must be separate from the rest of the Telegraf configuration, and must not be in a shared directory with other Telegraf configs.
{{% /note %}}

## Test and run your plugin

1. Build `cmd/main.go`, substituting your plugin name: `go build -o plugin-name cmd/main.go`
2. If you're building a processor or output, first feed valid metrics in on `STDIN`. Skip this step if you're building an input.
3. Test the binary by running it (for example, `./plugin-name -config plugin.conf`).
   Metrics will be written to `STDOUT`. You might need to press enter or wait for your poll duration to elapse to see data.
4. Press `Ctrl-C` to end your test.
5. Configure Telegraf to call your new plugin binary. For an input, this would
   look something like:

```toml
[[inputs.execd]]
  command = ["/path/to/plugin-name", "-config", "/path/to/plugin.conf"]
  signal = "none"
```

Refer to the `execd` plugin documentation for more information.
## Publish your plugin

Publish your plugin to GitHub and open a pull request
back to the Telegraf repo letting us know about the availability of your
[external plugin](https://github.com/influxdata/telegraf/blob/master/EXTERNAL_PLUGINS.md).
@ -0,0 +1,31 @@
---
title: Write an external plugin
description:
menu:
  telegraf_1_25:
    name: Write an external plugin
    weight: 50
    parent: External plugins
---
Set up your plugin to use it with `execd`.

{{% note %}}
For listed [external plugins](https://github.com/influxdata/telegraf/blob/master/EXTERNAL_PLUGINS.md), the author of the external plugin is responsible for its maintenance
and feature development.
{{% /note %}}

1. Write your Telegraf plugin. Follow InfluxData's best practices:
   - [Input plugins](https://github.com/influxdata/telegraf/blob/master/docs/INPUTS.md)
   - [Processor plugins](https://github.com/influxdata/telegraf/blob/master/docs/PROCESSORS.md)
   - [Aggregator plugins](https://github.com/influxdata/telegraf/blob/master/docs/AGGREGATORS.md)
   - [Output plugins](https://github.com/influxdata/telegraf/blob/master/docs/OUTPUTS.md)
2. If your plugin is written in Go, follow the steps for the [`execd` Go shim](/{{< latest "telegraf" >}}/configure_plugins/external_plugins/shim).
3. Add usage and development instructions in the homepage of your repository for running your plugin with its respective `execd` plugin. Refer to [openvpn](https://github.com/danielnelson/telegraf-execd-openvpn#usage) and [awsalarms](https://github.com/vipinvkmenon/awsalarms#installation) for examples.
   Include the following steps:
   - How to download the release package for your platform, or how to clone the repo and build the binary for your external plugin
   - Commands to build your binary
   - Location to edit your `telegraf.conf`
   - Configuration to run your external plugin with [inputs.execd](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/execd),
     [processors.execd](https://github.com/influxdata/telegraf/blob/master/plugins/processors/execd), or [outputs.execd](https://github.com/influxdata/telegraf/blob/master/plugins/outputs/execd)
4. Submit your plugin by opening a PR to add your external plugin to the [EXTERNAL_PLUGINS.md](https://github.com/influxdata/telegraf/blob/master/EXTERNAL_PLUGINS.md) list. Include the plugin name, a link to the plugin repository, and a short description of the plugin.
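As a sketch of the configuration described in step 3, running a stand-alone input binary through `inputs.execd` takes only two settings--the binary and config paths below are placeholders for your own:

```toml
[[inputs.execd]]
  ## Path to your external plugin binary and its config file (placeholders)
  command = ["/path/to/my-plugin", "-config", "/path/to/plugin.conf"]
  ## How Telegraf signals the plugin to collect (for example, "none" or "STDIN")
  signal = "none"
```

The `processors.execd` and `outputs.execd` variants are configured the same way, pointing `command` at the respective binary.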
@ -0,0 +1,17 @@
---
title: Collect data with input plugins
description: |
  Collect data from a variety of sources with Telegraf input plugins.
menu:
  telegraf_1_25:
    name: Input plugins
    weight: 10
    parent: Configure plugins
---

Telegraf input plugins are used with the InfluxData time series platform to collect metrics from the system, services, or third-party APIs. All metrics are gathered from the inputs you enable and configure in the [Telegraf configuration file](/telegraf/v1.25/configuration/).

For a complete list of input plugins and links to their detailed configuration options, see [input plugins](/{{< latest "telegraf" >}}/plugins/inputs/).

In addition to plugin-specific data formats, Telegraf supports a set of [common data formats](/{{< latest "telegraf" >}}/data_formats/input/) available when configuring many of the Telegraf input plugins.
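For example, enabling the built-in `cpu` input plugin takes a short TOML section in `telegraf.conf`--the options shown mirror the plugin's common defaults:

```toml
[[inputs.cpu]]
  ## Report per-CPU metrics as well as system totals
  percpu = true
  totalcpu = true
  ## Keep raw CPU time counters out of the output
  collect_cpu_time = false
```

Each enabled input contributes its metrics at every collection interval.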
@ -0,0 +1,106 @@
---
title: Using the HTTP input plugin with Citi Bike data
description: Collect live metrics on Citi Bike stations in New York City with the HTTP input plugin.
menu:
  telegraf_1_25:
    name: Using the HTTP plugin
    weight: 30
    parent: Input plugins
---

This example walks through using the Telegraf HTTP input plugin to collect live metrics on Citi Bike stations in New York City. Live station data is available in JSON format directly from [Citi Bike](https://ride.citibikenyc.com/system-data).

For the following example to work, configure the [`influxdb_v2` output plugin](/telegraf/v1.25/plugins/#output-influxdb_v2). This plugin is what allows Telegraf to write the metrics to InfluxDB.

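A minimal `influxdb_v2` output section looks like the following sketch--the URL, token, organization, and bucket values are placeholders to replace with your own:

```toml
[[outputs.influxdb_v2]]
  ## InfluxDB URL (placeholder)
  urls = ["http://localhost:8086"]
  ## API token, organization, and bucket (placeholders)
  token = "$INFLUX_TOKEN"
  organization = "example-org"
  bucket = "example-bucket"
```
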
## Configure the HTTP Input plugin in your Telegraf configuration file

To retrieve data from the Citi Bike URL endpoint, enable the `inputs.http` input plugin in your Telegraf configuration file.

Specify the following options:

### `urls`
One or more URLs to read metrics from. For this example, use `https://gbfs.citibikenyc.com/gbfs/en/station_status.json`.

### `data_format`
The format of the data in the HTTP endpoints that Telegraf will ingest. For this example, use JSON.

## Add parser information to your Telegraf configuration

Specify the following JSON-specific options. In this example, we use the objects subtable to gather
data from [JSON objects](https://www.w3schools.com/js/js_json_objects.asp).

### JSON

#### `path`
To parse a JSON object, set the `path` option with a [GJSON](https://github.com/tidwall/gjson) path. The result of the query should contain a JSON object or an array of objects. The [GJSON playground](https://gjson.dev/) is a very helpful tool for checking your query.

#### `tags`
List of one or more JSON keys that should be added as tags. For this example, we'll use the tag key `station_id`.

#### `timestamp_key`
Key from the JSON file that creates the timestamp metric. In this case, we want to use the time that station data was last reported, stored in the `last_reported` key. If you don't specify a key, the time that Telegraf reads the data becomes the timestamp.

#### `timestamp_format`
The format used to interpret the designated `timestamp_key`. The `last_reported` time in this example is reported in unix format.

#### Example configuration

```toml
[[inputs.http]]
  # URL for NYC's Citi Bike station data in JSON format
  urls = ["https://gbfs.citibikenyc.com/gbfs/en/station_status.json"]

  # Overwrite measurement name from default `http` to `citibike`
  name_override = "citibike"

  # Exclude url and host items from tags
  tagexclude = ["url", "host"]

  # Data from HTTP in JSON format
  data_format = "json_v2"

  # Add a subtable to use the `json_v2` parser
  [[inputs.http.json_v2]]

    # Add an object subtable to parse a JSON object
    [[inputs.http.json_v2.object]]

      # Parse data in `data.stations` path only
      path = "data.stations"

      # Set station metadata as tags
      tags = ["station_id"]

      # Latest station information reported at `last_reported`
      timestamp_key = "last_reported"

      # Time is reported in unix timestamp format
      timestamp_format = "unix"
```

## Start Telegraf and verify data appears

[Start the Telegraf service](/telegraf/v1.25/get_started/#start-telegraf).

To test that the data is being sent to InfluxDB, run the following (replacing `telegraf.conf` with the path to your configuration file):

```
telegraf -config ~/telegraf.conf -test
```

This command should return line protocol that looks similar to the following:

```
citibike,station_id=4703 eightd_has_available_keys=false,is_installed=1,is_renting=1,is_returning=1,legacy_id="4703",num_bikes_available=6,num_bikes_disabled=2,num_docks_available=26,num_docks_disabled=0,num_ebikes_available=0,station_status="active" 1641505084000000000
citibike,station_id=4704 eightd_has_available_keys=false,is_installed=1,is_renting=1,is_returning=1,legacy_id="4704",num_bikes_available=10,num_bikes_disabled=2,num_docks_available=36,num_docks_disabled=0,num_ebikes_available=0,station_status="active" 1641505084000000000
citibike,station_id=4711 eightd_has_available_keys=false,is_installed=1,is_renting=1,is_returning=1,legacy_id="4711",num_bikes_available=9,num_bikes_disabled=0,num_docks_available=36,num_docks_disabled=0,num_ebikes_available=1,station_status="active" 1641505084000000000
```

Now, you can explore and query the Citi Bike data in InfluxDB. The example below is a Flux query and visualization showing the number of available bikes over the past 15 minutes.

![Citi Bike visualization](/img/telegraf/new-citibike-query.png)
@ -0,0 +1,16 @@
---
title: Write data with output plugins
description: |
  Output plugins define where Telegraf will deliver the collected metrics.
menu:
  telegraf_1_25:
    name: Output plugins
    weight: 20
    parent: Configure plugins
---
Output plugins define where Telegraf will deliver the collected metrics. Send metrics to InfluxDB or to a variety of other datastores, services, and message queues, including Graphite, OpenTSDB, Datadog, Librato, Kafka, MQTT, and NSQ.

For a complete list of output plugins and links to their detailed configuration options, see [output plugins](/{{< latest "telegraf" >}}/plugins/outputs/).

In addition to plugin-specific data formats, Telegraf supports a set of [common data formats](/{{< latest "telegraf" >}}/data_formats/output/) available when configuring many of the Telegraf output plugins.
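As a minimal illustration (not tied to any one datastore), the `file` output plugin can deliver collected metrics to stdout or a local file, with one of the common data formats selected as the serializer--the file path below is a placeholder:

```toml
[[outputs.file]]
  ## Write metrics to stdout and to a local file (placeholder path)
  files = ["stdout", "/tmp/metrics.out"]
  ## Serialize metrics as InfluxDB line protocol
  data_format = "influx"
```

Multiple output sections can be enabled at once; Telegraf delivers every collected metric to each of them.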
@ -0,0 +1,54 @@
---
title: Troubleshoot Telegraf
description: Resolve common issues with Telegraf.
menu:
  telegraf_1_25:
    name: Troubleshoot
    parent: Configure plugins
    weight: 79
aliases:
  - /telegraf/v1.25/administration/troubleshooting/
---

## Validate your Telegraf configuration with `--test`

Run a single Telegraf collection, outputting metrics to stdout:
`telegraf --config telegraf.conf --test`

## Use the `--once` option to single-shot execute

Once tested, run `telegraf --config telegraf.conf --once` to perform a single-shot execution of all configured plugins. This sends output to partner systems specified in the `telegraf.conf` rather than writing to `stdout`.

## Add `outputs.file` to write to a file or STDOUT

The following step might be helpful if:
- You're encountering issues in your output and trying to determine if it's an issue with your configuration or connection.
- `--test` outputs metrics to stdout as expected and your input, parsers, processors, and aggregators are configured correctly. Note that if it's a listener plugin, `--test` wouldn't output any metrics right away.

Add the `file` output plugin with the metrics reporting to STDOUT or to a file.
```toml
[[outputs.file]]
  files = ["stdout"]
```

## Set `debug = true` in your settings

When you set `debug = true` in the global settings, Telegraf runs with debug log messages.
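Enabling it is a one-line change in the `[agent]` table of your configuration:

```toml
[agent]
  ## Emit debug-level log messages
  debug = true
```
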
```
2021-06-28T19:18:00Z I! Starting Telegraf 1.19.0
2021-06-28T19:18:00Z I! Loaded inputs: cpu disk diskio mem net processes swap system
2021-06-28T19:18:00Z I! Loaded aggregators:
2021-06-28T19:18:00Z I! Loaded processors:
2021-06-28T19:18:00Z I! Loaded outputs: influxdb_v2
2021-06-28T19:18:00Z I! Tags enabled: host=MBP15-INFLUX.local
2021-06-28T19:18:00Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"MBP15-INFLUX.local", Flush Interval:30s
2021-06-28T19:18:00Z D! [agent] Initializing plugins
2021-06-28T19:18:00Z D! [agent] Connecting outputs
2021-06-28T19:18:00Z D! [agent] Attempting connection to [outputs.influxdb_v2]
2021-06-28T19:18:00Z D! [agent] Successfully connected to outputs.influxdb_v2
2021-06-28T19:18:00Z D! [agent] Starting service inputs
```
@ -0,0 +1,26 @@
---
title: Contribute to Telegraf
description:
menu:
  telegraf_1_25_ref:
    name: Contribute to Telegraf
    weight: 80
---

To contribute to the Telegraf project, complete the following steps:

1. [Sign the InfluxData Contributor License Agreement (CLA)](#sign-influxdata-contributor-license-agreement-cla).
2. [Review contribution guidelines](#review-contribution-guidelines).
3. [Review the Telegraf open source license](#review-open-source-license).

## Sign InfluxData Contributor License Agreement (CLA)

Before contributing to the InfluxDB OSS project, you must complete and sign the [InfluxData Contributor License Agreement (CLA)](https://www.influxdata.com/legal/cla/), available on the InfluxData website.

## Review contribution guidelines

To learn how you can contribute to the Telegraf project, see our [Contributing guidelines](https://github.com/influxdata/telegraf/blob/master/CONTRIBUTING.md) in the GitHub repository.

## Review open source license

See information about our [open source MIT license for Telegraf](https://github.com/influxdata/telegraf/blob/master/LICENSE) in GitHub.
@ -0,0 +1,16 @@
---
title: Telegraf data formats
description: Telegraf supports input data formats and output data formats for converting input and output data.
menu:
  telegraf_1_25_ref:
    name: Data formats
    weight: 50
---

This section covers the input data formats and output data formats used in the Telegraf plugin-driven server agent component of the InfluxData time series platform.

{{< children hlevel="h2" >}}

<!-- add table: https://github.com/influxdata/docs-v2/issues/2360 -->
@ -0,0 +1,37 @@
---
title: Telegraf input data formats
description: Telegraf supports parsing input data formats into Telegraf metrics.
menu:
  telegraf_1_25_ref:
    name: Input data formats
    weight: 1
    parent: Data formats
---

Telegraf contains many general purpose plugins that support parsing input data
using a configurable parser into [metrics][]. This allows, for example, the
`kafka_consumer` input plugin to process messages in either InfluxDB Line
Protocol or in JSON format. Telegraf supports the following input data formats:

{{< children >}}

Any input plugin containing the `data_format` option can use it to select the
desired parser:

```toml
[[inputs.exec]]
  ## Commands array
  commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]

  ## Measurement name suffix (for separating different commands)
  name_suffix = "_mycollector"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "json_v2"
```

[metrics]: /telegraf/v1.25/concepts/metrics/
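For instance, the `kafka_consumer` plugin mentioned above selects its parser the same way--a sketch with placeholder broker and topic values:

```toml
[[inputs.kafka_consumer]]
  ## Kafka brokers and topics to consume from (placeholders)
  brokers = ["localhost:9092"]
  topics = ["telegraf"]
  ## Parse each message as InfluxDB line protocol
  data_format = "influx"
```
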
@ -0,0 +1,49 @@
---
title: Collectd input data format
description: Use the `collectd` input data format to parse the collectd network binary protocol to create tags for host, instance, type, and type instance.
menu:
  telegraf_1_25_ref:
    name: collectd
    weight: 10
    parent: Input data formats
---

The collectd input data format parses the collectd network binary protocol to create tags for host, instance, type, and type instance. All collectd values are added as float64 fields.

For more information, see [binary protocol](https://collectd.org/wiki/index.php/Binary_protocol) in the collectd Wiki.

You can control the cryptographic settings with parser options.
Create an authentication file and set `collectd_auth_file` to the path of the file, then set the desired security level in `collectd_security_level`.

For more information, including client setup, see
[Cryptographic setup](https://collectd.org/wiki/index.php/Networking_introduction#Cryptographic_setup) in the collectd Wiki.

You can also change the path to the typesdb or add additional typesdb using
`collectd_typesdb`.

## Configuration

```toml
[[inputs.file]]
  files = ["example"]

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "collectd"

  ## Authentication file for cryptographic security levels
  collectd_auth_file = "/etc/collectd/auth_file"
  ## One of none (default), sign, or encrypt
  collectd_security_level = "encrypt"
  ## Path to TypesDB specifications
  collectd_typesdb = ["/usr/share/collectd/types.db"]

  ## Multi-value plugins can be handled two ways.
  ## "split" will parse and store the multi-value plugin data into separate measurements
  ## "join" will parse and store the multi-value plugin as a single multi-value measurement.
  ## "split" is the default behavior for backward compatibility with previous versions of influxdb.
  collectd_parse_multivalue = "split"
```
@ -0,0 +1,158 @@
---
title: CSV input data format
description: Use the `csv` input data format to parse a document containing comma-separated values into Telegraf metrics.
menu:
  telegraf_1_25_ref:
    name: CSV
    weight: 20
    parent: Input data formats
---

The CSV input data format parses documents containing comma-separated values into Telegraf metrics.

## Configuration

```toml
[[inputs.file]]
  files = ["example"]

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "csv"

  ## Indicates how many rows to treat as a header. By default, the parser assumes
  ## there is no header and will parse the first row as data. If set to anything more
  ## than 1, column names will be concatenated with the name listed in the next header row.
  ## If `csv_column_names` is specified, the column names in the header will be overridden.
  csv_header_row_count = 0

  ## For assigning custom names to columns
  ## If this is specified, all columns should have a name
  ## Unnamed columns will be ignored by the parser.
  ## If `csv_header_row_count` is set to 0, this config must be used
  csv_column_names = []

  ## For assigning explicit data types to columns.
  ## Supported types: "int", "float", "bool", "string".
  ## Specify types in order by column (e.g. `["string", "int", "float"]`)
  ## If this is not specified, type conversion will be done on the types above.
  csv_column_types = []

  ## Indicates the number of rows to skip before looking for metadata and header information.
  csv_skip_rows = 0

  ## Indicates the number of rows to parse as metadata before looking for header information.
  ## By default, the parser assumes there are no metadata rows to parse.
  ## If set, the parser would use the provided separators in csv_metadata_separators to look for metadata.
  ## Please note that by default, the (key, value) pairs will be added as tags.
  ## If fields are required, use the converter processor.
  csv_metadata_rows = 0

  ## A list of metadata separators. If csv_metadata_rows is set,
  ## csv_metadata_separators must contain at least one separator.
  ## Please note that separators are case sensitive and the sequence of the separators is respected.
  csv_metadata_separators = [":", "="]

  ## A set of metadata trim characters.
  ## If csv_metadata_trim_set is not set, no trimming is performed.
  ## Please note that the trim cutset is case sensitive.
  csv_metadata_trim_set = ""

  ## Indicates the number of columns to skip before looking for data to parse.
  ## These columns will be skipped in the header as well.
  csv_skip_columns = 0

  ## The separator between csv fields
  ## By default, the parser assumes a comma (",")
  csv_delimiter = ","

  ## The character reserved for marking a row as a comment row
  ## Commented rows are skipped and not parsed
  csv_comment = ""

  ## If set to true, the parser will remove leading whitespace from fields
  ## By default, this is false
  csv_trim_space = false

  ## Columns listed here will be added as tags. Any other columns
  ## will be added as fields.
  csv_tag_columns = []

  ## The column to extract the name of the metric from. Will not be
  ## included as a field in the metric.
  csv_measurement_column = ""

  ## The column to extract time information for the metric
  ## `csv_timestamp_format` must be specified if this is used.
  ## Will not be included as a field in the metric.
  csv_timestamp_column = ""

  ## The format of time data extracted from `csv_timestamp_column`
  ## this must be specified if `csv_timestamp_column` is specified
  csv_timestamp_format = ""

  ## The timezone of time data extracted from `csv_timestamp_column`
  ## in case there is no timezone information.
  ## It follows the IANA Time Zone database.
  csv_timezone = ""

  ## Indicates values to skip, such as an empty string value "".
  ## The field will be skipped entirely where it matches any values inserted here.
  csv_skip_values = []

  ## If set to true, the parser will skip csv lines that cannot be parsed.
  ## By default, this is false
  csv_skip_errors = false

  ## Reset the parser on given conditions.
  ## This option can be used to reset the parser's state, e.g. when always reading a
  ## full CSV structure including header etc. Available modes are
  ##    "none"   -- do not reset the parser (default)
  ##    "always" -- reset the parser with each call (ignored in line-wise parsing)
  ## Helpful when e.g. reading whole files in each gather-cycle.
  # csv_reset_mode = "none"
```
### csv_timestamp_column, csv_timestamp_format

By default, the current time will be used for all created metrics. To set the
time from the parsed data instead, use the `csv_timestamp_column` and
`csv_timestamp_format` options together to set the time to a value in the parsed
document.

The `csv_timestamp_column` option specifies the column name containing the
time value, and `csv_timestamp_format` must be set to a Go "reference time",
which is defined to be the specific time: `Mon Jan 2 15:04:05 MST 2006`.

Consult the Go [time](https://pkg.go.dev/time#Parse) package for details and additional examples
on how to set the time format.

## Metrics

One metric is created for each row with the columns added as fields. The type
of the field is automatically determined based on the contents of the value.

## Examples

Config:
```
[[inputs.file]]
  files = ["example"]
  data_format = "csv"
  csv_header_row_count = 1
  csv_timestamp_column = "time"
  csv_timestamp_format = "2006-01-02T15:04:05Z07:00"
```

Input:
```
measurement,cpu,time_user,time_system,time_idle,time
cpu,cpu0,42,42,42,2018-09-13T13:03:28Z
```

Output:
```
cpu cpu=cpu0,time_user=42,time_system=42,time_idle=42 1536869008000000000
```
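Building on the example above, a hypothetical variation that stores the `cpu` column as a tag instead of a field only needs `csv_tag_columns`:

```toml
[[inputs.file]]
  files = ["example"]
  data_format = "csv"
  csv_header_row_count = 1
  ## Promote the `cpu` column from a field to a tag
  csv_tag_columns = ["cpu"]
  csv_timestamp_column = "time"
  csv_timestamp_format = "2006-01-02T15:04:05Z07:00"
```

With this configuration, `cpu0` would appear in the tag set rather than the field set of each metric.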
@ -0,0 +1,320 @@
---
title: Dropwizard input data format
description: Use the `dropwizard` input data format to parse Dropwizard JSON representations into Telegraf metrics.
menu:
  telegraf_1_25_ref:
    name: Dropwizard
    weight: 30
    parent: Input data formats
aliases:
  - /telegraf/v1.25/data_formats/template-patterns/
---

The `dropwizard` data format can parse a [Dropwizard JSON representation](http://metrics.dropwizard.io/3.1.0/manual/json/) of a single metrics registry. By default, tags are parsed from metric names as if they were actual InfluxDB Line Protocol keys (`measurement<,tag_set>`), which can be overridden using custom [template patterns](#templates). All field value types are supported: `string`, `number`, and `boolean`.

## Configuration

```toml
[[inputs.file]]
  files = ["example"]

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "dropwizard"

  ## Used by the templating engine to join matched values when cardinality is > 1
  separator = "_"

  ## Each template line requires a template pattern. It can have an optional
  ## filter before the template and separated by spaces. It can also have optional extra
  ## tags following the template. Multiple tags should be separated by commas and no spaces
  ## similar to the line protocol format. There can be only one default template.
  ## Templates support the following formats:
  ## 1. filter + template
  ## 2. filter + template + extra tag(s)
  ## 3. filter + template with field key
  ## 4. default template
  ## By providing an empty template array, templating is disabled and measurements are parsed as influxdb line protocol keys (measurement<,tag_set>)
  templates = []

  ## You may use an appropriate [gjson path](https://github.com/tidwall/gjson#path-syntax)
  ## to locate the metric registry within the JSON document
  # dropwizard_metric_registry_path = "metrics"

  ## You may use an appropriate [gjson path](https://github.com/tidwall/gjson#path-syntax)
  ## to locate the default time of the measurements within the JSON document
  # dropwizard_time_path = "time"
  # dropwizard_time_format = "2006-01-02T15:04:05Z07:00"

  ## You may use an appropriate [gjson path](https://github.com/tidwall/gjson#path-syntax)
  ## to locate the tags map within the JSON document
  # dropwizard_tags_path = "tags"

  ## You may even use tag paths per tag
  # [inputs.file.dropwizard_tag_paths]
  #   tag1 = "tags.tag1"
  #   tag2 = "tags.tag2"
```

## Examples

A typical JSON representation of a dropwizard metric registry:

```json
{
  "version": "3.0.0",
  "counters" : {
    "measurement,tag1=green" : {
      "count" : 1
    }
  },
  "meters" : {
    "measurement" : {
      "count" : 1,
      "m15_rate" : 1.0,
      "m1_rate" : 1.0,
      "m5_rate" : 1.0,
      "mean_rate" : 1.0,
      "units" : "events/second"
    }
  },
  "gauges" : {
    "measurement" : {
      "value" : 1
    }
  },
  "histograms" : {
    "measurement" : {
      "count" : 1,
      "max" : 1.0,
      "mean" : 1.0,
      "min" : 1.0,
      "p50" : 1.0,
      "p75" : 1.0,
      "p95" : 1.0,
      "p98" : 1.0,
      "p99" : 1.0,
      "p999" : 1.0,
      "stddev" : 1.0
    }
  },
  "timers" : {
    "measurement" : {
      "count" : 1,
      "max" : 1.0,
      "mean" : 1.0,
      "min" : 1.0,
      "p50" : 1.0,
      "p75" : 1.0,
      "p95" : 1.0,
      "p98" : 1.0,
      "p99" : 1.0,
      "p999" : 1.0,
      "stddev" : 1.0,
      "m15_rate" : 1.0,
      "m1_rate" : 1.0,
      "m5_rate" : 1.0,
      "mean_rate" : 1.0,
      "duration_units" : "seconds",
      "rate_units" : "calls/second"
    }
  }
}
```

This would get translated into 5 different measurements:

```
measurement,metric_type=counter,tag1=green count=1
measurement,metric_type=meter count=1,m15_rate=1.0,m1_rate=1.0,m5_rate=1.0,mean_rate=1.0
measurement,metric_type=gauge value=1
measurement,metric_type=histogram count=1,max=1.0,mean=1.0,min=1.0,p50=1.0,p75=1.0,p95=1.0,p98=1.0,p99=1.0,p999=1.0
measurement,metric_type=timer count=1,max=1.0,mean=1.0,min=1.0,p50=1.0,p75=1.0,p95=1.0,p98=1.0,p99=1.0,p999=1.0,stddev=1.0,m15_rate=1.0,m1_rate=1.0,m5_rate=1.0,mean_rate=1.0
```

You may also parse a dropwizard registry from any JSON document which contains a dropwizard registry in some inner field.
|
||||
Eg. to parse the following JSON document:
|
||||
|
||||
```json
|
||||
{
|
||||
"time" : "2017-02-22T14:33:03.662+02:00",
|
||||
"tags" : {
|
||||
"tag1" : "green",
|
||||
"tag2" : "yellow"
|
||||
},
|
||||
"metrics" : {
|
||||
"counters" : {
|
||||
"measurement" : {
|
||||
"count" : 1
|
||||
}
|
||||
},
|
||||
"meters" : {},
|
||||
"gauges" : {},
|
||||
"histograms" : {},
|
||||
"timers" : {}
|
||||
}
|
||||
}
|
||||
```
|
||||
and translate it into:
|
||||
|
||||
```
|
||||
measurement,metric_type=counter,tag1=green,tag2=yellow count=1 1487766783662000000
|
||||
```
|
||||
|
||||
you simply need to use the following additional configuration properties:
|
||||
|
||||
```toml
|
||||
dropwizard_metric_registry_path = "metrics"
|
||||
dropwizard_time_path = "time"
|
||||
dropwizard_time_format = "2006-01-02T15:04:05Z07:00"
|
||||
dropwizard_tags_path = "tags"
|
||||
## tag paths per tag are supported too, eg.
|
||||
#[inputs.yourinput.dropwizard_tag_paths]
|
||||
# tag1 = "tags.tag1"
|
||||
# tag2 = "tags.tag2"
|
||||
```
|

## Templates <!--This content is duplicated in /telegraf/v1.25/data_formats/input/graphite/-->

Template patterns are a mini language that describes how a dot-delimited
string should be mapped to and from [metrics](/telegraf/v1.25/concepts/metrics/).

A template has the following format:

```
"host.mytag.mytag.measurement.measurement.field*"
```

You can set the following keywords:

- `measurement`: Specifies that this section of the graphite bucket corresponds
  to the measurement name. This can be specified multiple times.
- `field`: Specifies that this section of the graphite bucket corresponds
  to the field name. This can be specified multiple times.
- `measurement*`: Specifies that all remaining elements of the graphite bucket
  correspond to the measurement name.
- `field*`: Specifies that all remaining elements of the graphite bucket
  correspond to the field name.

{{% note %}}
`field*` can't be used in conjunction with `measurement*`.
{{% /note %}}

Any part of the template that isn't a keyword is treated as a tag key, which can also be used multiple times.

### Examples

#### Measurement and tag templates

The most basic template specifies a single transformation to apply to all
incoming metrics.

##### Template

```toml
templates = [
    "region.region.measurement*"
]
```

##### Resulting transformation

```
us.west.cpu.load 100
=> cpu.load,region=us.west value=100
```

You can also specify multiple templates using [filters](#filter-templates).

```toml
templates = [
    "*.*.* region.region.measurement",        # <- all 3-part measurements will match this one.
    "*.*.*.* region.region.host.measurement", # <- all 4-part measurements will match this one.
]
```

#### Field templates

The `field` keyword tells Telegraf to give the metric that field name.

##### Template

```toml
separator = "_"
templates = [
    "measurement.measurement.field.field.region"
]
```

##### Resulting transformation

```
cpu.usage.idle.percent.eu-east 100
=> cpu_usage,region=eu-east idle_percent=100
```

You can also derive the field key from all remaining elements of the graphite
bucket by specifying `field*`.

##### Template

```toml
separator = "_"
templates = [
    "measurement.measurement.region.field*"
]
```

##### Resulting transformation

```
cpu.usage.eu-east.idle.percentage 100
=> cpu_usage,region=eu-east idle_percentage=100
```

#### Filter templates

You can also filter templates based on the name of the bucket
using a wildcard.

##### Template

```toml
templates = [
    "cpu.* measurement.measurement.region",
    "mem.* measurement.measurement.host"
]
```

##### Resulting transformation

```
cpu.load.eu-east 100
=> cpu_load,region=eu-east value=100

mem.cached.localhost 256
=> mem_cached,host=localhost value=256
```

#### Adding tags

You can add additional tags to a metric that don't exist on the received metric by specifying them after the pattern. Tags have the same format as the line protocol.
Separate multiple tags with commas.

##### Template

```toml
templates = [
    "measurement.measurement.field.region datacenter=1a"
]
```

##### Resulting transformation

```
cpu.usage.idle.eu-east 100
=> cpu_usage,region=eu-east,datacenter=1a idle=100
```

@ -0,0 +1,195 @@
---
title: Graphite input data format
description: Use the Graphite data format to translate Graphite dot buckets directly into Telegraf measurement names, with a single value field, and without any tags.
menu:
  telegraf_1_25_ref:
    name: Graphite
    weight: 40
    parent: Input data formats
aliases:
  - /telegraf/v1.25/data_formats/template-patterns/
---

The Graphite data format translates Graphite *dot* buckets directly into
Telegraf measurement names, with a single value field, and without any tags.
By default, the separator is left as `.`, but this can be changed using the
`separator` argument. For more advanced options, Telegraf supports specifying
[templates](#templates) to translate graphite buckets into Telegraf metrics.

## Configuration

```toml
[[inputs.exec]]
  ## Commands array
  commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]

  ## Measurement name suffix (for separating different commands)
  name_suffix = "_mycollector"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "graphite"

  ## This string will be used to join the matched values.
  separator = "_"

  ## Each template line requires a template pattern. It can have an optional
  ## filter before the template, separated by spaces. It can also have optional extra
  ## tags following the template. Multiple tags should be separated by commas and no spaces,
  ## similar to the line protocol format. There can be only one default template.
  ## Templates support the following formats:
  ## 1. filter + template
  ## 2. filter + template + extra tag(s)
  ## 3. filter + template with field key
  ## 4. default template
  templates = [
    "*.app env.service.resource.measurement",
    "stats.* .host.measurement* region=eu-east,agent=sensu",
    "stats2.* .host.measurement.field",
    "measurement*"
  ]
```

## Templates

Template patterns are a mini language that describes how a dot-delimited
string should be mapped to and from [metrics](/telegraf/v1.25/concepts/metrics/).

A template has the following format:

```
"host.mytag.mytag.measurement.measurement.field*"
```

You can set the following keywords:

- `measurement`: Specifies that this section of the graphite bucket corresponds
  to the measurement name. This can be specified multiple times.
- `field`: Specifies that this section of the graphite bucket corresponds
  to the field name. This can be specified multiple times.
- `measurement*`: Specifies that all remaining elements of the graphite bucket
  correspond to the measurement name.
- `field*`: Specifies that all remaining elements of the graphite bucket
  correspond to the field name.

{{% note %}}
`field*` can't be used in conjunction with `measurement*`.
{{% /note %}}

Any part of the template that isn't a keyword is treated as a tag key, which can also be used multiple times.

### Examples

#### Measurement and tag templates

The most basic template specifies a single transformation to apply to all
incoming metrics.

##### Template <!--This content is duplicated in /telegraf/v1.25/data_formats/input/graphite/-->

```toml
templates = [
    "region.region.measurement*"
]
```

##### Resulting transformation

```
us.west.cpu.load 100
=> cpu.load,region=us.west value=100
```

You can also specify multiple templates using [filters](#filter-templates).

```toml
templates = [
    "*.*.* region.region.measurement",        # <- all 3-part measurements will match this one.
    "*.*.*.* region.region.host.measurement", # <- all 4-part measurements will match this one.
]
```

#### Field templates

The `field` keyword tells Telegraf to give the metric that field name.

##### Template

```toml
separator = "_"
templates = [
    "measurement.measurement.field.field.region"
]
```

##### Resulting transformation

```
cpu.usage.idle.percent.eu-east 100
=> cpu_usage,region=eu-east idle_percent=100
```

You can also derive the field key from all remaining elements of the graphite
bucket by specifying `field*`.

##### Template

```toml
separator = "_"
templates = [
    "measurement.measurement.region.field*"
]
```

##### Resulting transformation

```
cpu.usage.eu-east.idle.percentage 100
=> cpu_usage,region=eu-east idle_percentage=100
```

#### Filter templates

You can also filter templates based on the name of the bucket
using a wildcard.

##### Template

```toml
templates = [
    "cpu.* measurement.measurement.region",
    "mem.* measurement.measurement.host"
]
```

##### Resulting transformation

```
cpu.load.eu-east 100
=> cpu_load,region=eu-east value=100

mem.cached.localhost 256
=> mem_cached,host=localhost value=256
```

#### Adding tags

You can add additional tags to a metric that don't exist on the received metric by specifying them after the pattern. Tags have the same format as the line protocol.
Separate multiple tags with commas.

##### Template

```toml
templates = [
    "measurement.measurement.field.region datacenter=1a"
]
```

##### Resulting transformation

```
cpu.usage.idle.eu-east 100
=> cpu_usage,region=eu-east,datacenter=1a idle=100
```

@ -0,0 +1,227 @@
---
title: Grok input data format
description: Use the grok data format to parse line-delimited data using a regular expression-like language.
menu:
  telegraf_1_25_ref:
    name: Grok
    weight: 40
    parent: Input data formats
---

The grok data format parses line-delimited data using a regular expression-like
language.

If you need to become familiar with grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
in the Logstash documentation. The grok parser uses a slightly modified version of Logstash grok
patterns, using the format:

```
%{<capture_syntax>[:<semantic_name>][:<modifier>]}
```

The `capture_syntax` defines the grok pattern used to parse the input
line, and the `semantic_name` names the field or tag. The extension
`modifier` controls the data type that the parsed item is converted to or
other special handling.

By default, all named captures are converted into string fields.
Timestamp modifiers can be used to convert captures to the timestamp of the
parsed metric. If no timestamp is parsed, the metric is created using the
current time.

You must capture at least one field per line.

- Available modifiers:
  - string (default if nothing is specified)
  - int
  - float
  - duration (for example, 5.23ms gets converted to int nanoseconds)
  - tag (converts the field into a tag)
  - drop (drops the field completely)
  - measurement (use the matched text as the measurement name)
- Timestamp modifiers:
  - ts (auto-learns the timestamp format)
  - ts-ansic ("Mon Jan _2 15:04:05 2006")
  - ts-unix ("Mon Jan _2 15:04:05 MST 2006")
  - ts-ruby ("Mon Jan 02 15:04:05 -0700 2006")
  - ts-rfc822 ("02 Jan 06 15:04 MST")
  - ts-rfc822z ("02 Jan 06 15:04 -0700")
  - ts-rfc850 ("Monday, 02-Jan-06 15:04:05 MST")
  - ts-rfc1123 ("Mon, 02 Jan 2006 15:04:05 MST")
  - ts-rfc1123z ("Mon, 02 Jan 2006 15:04:05 -0700")
  - ts-rfc3339 ("2006-01-02T15:04:05Z07:00")
  - ts-rfc3339nano ("2006-01-02T15:04:05.999999999Z07:00")
  - ts-httpd ("02/Jan/2006:15:04:05 -0700")
  - ts-epoch (seconds since unix epoch, may contain decimal)
  - ts-epochnano (nanoseconds since unix epoch)
  - ts-syslog ("Jan 02 15:04:05", parsed time is set to the current year)
  - ts-"CUSTOM"

CUSTOM time layouts must be within quotes and be the representation of the
"reference time", which is `Mon Jan 2 15:04:05 -0700 MST 2006`.
To match a comma decimal point, you can use a period in the pattern string. For example, `%{TIMESTAMP:timestamp:ts-"2006-01-02 15:04:05.000"}` can be used to match `"2018-01-02 15:04:05,000"`.
See https://golang.org/pkg/time/#Parse for more details.
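As a sketch of the non-timestamp modifiers (the log format, file path, and capture names below are hypothetical, not from the Telegraf docs), a single pattern can promote captures to tags and convert others to typed fields:

```toml
# Hypothetical log line: "eu-east GET 200 12.3ms"
[[inputs.file]]
  files = ["/var/log/myapp.log"]
  data_format = "grok"
  ## region and method become tags; status becomes an int field;
  ## elapsed becomes a float field with the trailing "ms" left unmatched as literal text.
  grok_patterns = ['%{WORD:region:tag} %{WORD:method:tag} %{NUMBER:status:int} %{NUMBER:elapsed:float}ms']
```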

Telegraf has many of its own [built-in patterns](https://github.com/influxdata/telegraf/blob/master/plugins/parsers/grok/influx_patterns.go),
as well as support for most of
[Logstash's built-in patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/main/patterns/ecs-v1/grok-patterns).
_Golang regular expressions do not support lookahead or lookbehind.
Logstash patterns that depend on these are not supported._

If you need help building patterns to match your logs, the
[Grok Debugger application](https://grokdebug.herokuapp.com) might be helpful.

## Configuration

```toml
[[inputs.file]]
  ## Files to parse each interval.
  ## These accept standard unix glob matching rules, but with the addition of
  ## ** as a "super asterisk". For example:
  ##   /var/log/**.log     -> recursively find all .log files in /var/log
  ##   /var/log/*/*.log    -> find all .log files with a parent dir in /var/log
  ##   /var/log/apache.log -> only tail the apache log file
  files = ["/var/log/apache/access.log"]

  ## The data format to be read from the files
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "grok"

  ## This is a list of patterns to check the given log file(s) for.
  ## Note that adding patterns here increases processing time. The most
  ## efficient configuration is to have one pattern.
  ## Other common built-in patterns are:
  ##   %{COMMON_LOG_FORMAT}   (plain apache & nginx access logs)
  ##   %{COMBINED_LOG_FORMAT} (access logs + referrer & agent)
  grok_patterns = ["%{COMBINED_LOG_FORMAT}"]

  ## Full path(s) to custom pattern files.
  grok_custom_pattern_files = []

  ## Custom patterns can also be defined here. Put one pattern per line.
  grok_custom_patterns = '''
  '''

  ## Timezone allows you to provide an override for timestamps that
  ## don't already include an offset
  ## e.g. 04/06/2016 12:41:45 data one two 5.43µs
  ##
  ## Default: "" which renders UTC
  ## Options are as follows:
  ##   1. Local            -- interpret based on machine localtime
  ##   2. "Canada/Eastern" -- Unix TZ values like those found in https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
  ##   3. UTC              -- or blank/unspecified, will return timestamp in UTC
  grok_timezone = "Canada/Eastern"
```

### Timestamp examples

This example input and config parses a file using a custom timestamp conversion:

```
2017-02-21 13:10:34 value=42
```

```toml
[[inputs.file]]
  grok_patterns = ['%{TIMESTAMP_ISO8601:timestamp:ts-"2006-01-02 15:04:05"} value=%{NUMBER:value:int}']
```

This example input and config parses a file using a timestamp in unix time:

```
1466004605 value=42
1466004605.123456789 value=42
```

```toml
[[inputs.file]]
  grok_patterns = ['%{NUMBER:timestamp:ts-epoch} value=%{NUMBER:value:int}']
```

This example parses a file using a built-in conversion and a custom pattern:

```
Wed Apr 12 13:10:34 PST 2017 value=42
```

```toml
[[inputs.file]]
  grok_patterns = ["%{TS_UNIX:timestamp:ts-unix} value=%{NUMBER:value:int}"]
  grok_custom_patterns = '''
TS_UNIX %{DAY} %{MONTH} %{MONTHDAY} %{HOUR}:%{MINUTE}:%{SECOND} %{TZ} %{YEAR}
'''
```

For cases where the timestamp itself is without offset, the `timezone` config option is available
to denote an offset. By default (with `timezone` omitted, blank, or set to `"UTC"`), the times
are processed as if in the UTC timezone. If specified as `timezone = "Local"`, the timestamp
is processed based on the current machine timezone configuration. Lastly, if using a
timezone from the list of Unix [timezones](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones),
grok offsets the timestamp accordingly.

### TOML escaping

When saving patterns to the configuration file, keep in mind the different TOML
[string](https://github.com/toml-lang/toml#string) types and the escaping
rules for each. These escaping rules must be applied in addition to the
escaping required by the grok syntax. Using the multi-line literal
syntax with `'''` may be useful.

The following config examples will parse this input file:

```
|42|\uD83D\uDC2F|'telegraf'|
```

Since `|` is a special character in the grok language, we must escape it to
get a literal `|`. With a basic TOML string, special characters such as
backslash must be escaped, requiring us to escape the backslash a second time.

```toml
[[inputs.file]]
  grok_patterns = ["\\|%{NUMBER:value:int}\\|%{UNICODE_ESCAPE:escape}\\|'%{WORD:name}'\\|"]
  grok_custom_patterns = "UNICODE_ESCAPE (?:\\\\u[0-9A-F]{4})+"
```

We cannot use a literal TOML string for the pattern, because we cannot match a
`'` within it. However, it works well for the custom pattern.

```toml
[[inputs.file]]
  grok_patterns = ["\\|%{NUMBER:value:int}\\|%{UNICODE_ESCAPE:escape}\\|'%{WORD:name}'\\|"]
  grok_custom_patterns = 'UNICODE_ESCAPE (?:\\u[0-9A-F]{4})+'
```

A multi-line literal string allows us to encode the pattern:

```toml
[[inputs.file]]
  grok_patterns = ['''
    \|%{NUMBER:value:int}\|%{UNICODE_ESCAPE:escape}\|'%{WORD:name}'\|
  ''']
  grok_custom_patterns = 'UNICODE_ESCAPE (?:\\u[0-9A-F]{4})+'
```

### Tips for creating patterns

Writing complex patterns can be difficult. Here is some advice for writing a
new pattern or testing a pattern developed [online](https://grokdebug.herokuapp.com).

Create a file output that writes to stdout, and disable other outputs while
testing. This will allow you to see the captured metrics. Keep in mind that
the file output will only print once per `flush_interval`.

```toml
[[outputs.file]]
  files = ["stdout"]
```

- Start with a file containing only a single line of your input.
- Remove all but the first token or piece of the line.
- Add the section of your pattern to match this piece to your configuration file.
- Verify that the metric is parsed successfully by running Telegraf.
- If successful, add the next token, update the pattern, and retest.
- Continue one token at a time until the entire line is successfully parsed.

@ -0,0 +1,28 @@
---
title: InfluxDB Line Protocol input data format
description: Use the InfluxDB Line Protocol input data format to parse InfluxDB metrics directly into Telegraf metrics.
menu:
  telegraf_1_25_ref:
    name: InfluxDB Line Protocol input
    weight: 60
    parent: Input data formats
---

There are no additional configuration options for InfluxDB [line protocol][]. The
InfluxDB metrics are parsed directly into Telegraf metrics.

[line protocol]: /{{< latest "influxdb" "v1" >}}/write_protocols/line/

## Configuration

```toml
[[inputs.file]]
  files = ["example"]

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"
```
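For illustration (the metric below is a hypothetical example, not from the original docs), a line protocol input such as:

```
cpu,host=server01,region=us-west usage_idle=98.2 1668455520000000000
```

is parsed into an identical Telegraf metric: measurement `cpu`, tags `host` and `region`, field `usage_idle`, and the given nanosecond timestamp.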

@ -0,0 +1,228 @@
---
title: JSON input data format
description: Use the JSON input data format to parse [JSON][json] objects, or an array of objects, into Telegraf metric fields.
menu:
  telegraf_1_25_ref:
    name: JSON input
    weight: 70
    parent: Input data formats
---

{{% note %}}
The following information applies to the legacy JSON input data format. For most cases, we recommend using the [JSON v2 input data format](/{{< latest "telegraf" >}}/data_formats/input/json_v2/) instead.
{{% /note %}}

The JSON input data format parses a [JSON][json] object or an array of objects
into Telegraf metric fields.

**NOTE:** All JSON numbers are converted to float fields. JSON strings are
ignored unless specified in the `tag_keys` or `json_string_fields` options.

## Configuration

```toml
[[inputs.file]]
  files = ["example"]

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "json"

  ## Query is a GJSON path that specifies a specific chunk of JSON to be
  ## parsed, if not specified the whole document will be parsed.
  ##
  ## GJSON query paths are described here:
  ##   https://github.com/tidwall/gjson#path-syntax
  json_query = ""

  ## Tag keys is an array of keys that should be added as tags.
  tag_keys = [
    "my_tag_1",
    "my_tag_2"
  ]

  ## String fields is an array of keys that should be added as string fields.
  json_string_fields = []

  ## Name key is the key to use as the measurement name.
  json_name_key = ""

  ## Time key is the key containing the time that should be used to create the
  ## metric.
  json_time_key = ""

  ## Time format is the time layout that should be used to interpret the
  ## json_time_key. The time must be `unix`, `unix_ms`, or a time in the
  ## "reference time".
  ##   ex: json_time_format = "Mon Jan 2 15:04:05 -0700 MST 2006"
  ##       json_time_format = "2006-01-02T15:04:05Z07:00"
  ##       json_time_format = "unix"
  ##       json_time_format = "unix_ms"
  json_time_format = ""
```

### `json_query`

The `json_query` is a [GJSON][gjson] path that can be used to limit the
portion of the overall JSON document that should be parsed. The result of the
query should contain a JSON object or an array of objects.

Consult the GJSON [path syntax][gjson syntax] for details and examples.

### `json_time_key` and `json_time_format`

By default, the current time is used for all created metrics. To set the
time using the JSON document, use the `json_time_key` and
`json_time_format` options together to set the time to a value in the parsed
document.

The `json_time_key` option specifies the key containing the time value, and
`json_time_format` must be set to `unix`, `unix_ms`, or the Go "reference
time", which is defined to be the specific time: `Mon Jan 2 15:04:05 MST 2006`.

Consult the Go [time][time parse] package for details and additional examples
on how to set the time format.

## Examples

### Basic parsing

Config:

```toml
[[inputs.file]]
  files = ["example"]
  name_override = "myjsonmetric"
  data_format = "json"
```

Input:

```json
{
  "a": 5,
  "b": {
    "c": 6
  },
  "ignored": "I'm a string"
}
```

Output:

```
myjsonmetric a=5,b_c=6
```
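The flattening rule behind this output can be sketched in a few lines of Python (a simplified illustration, not Telegraf's actual parser): nested keys are joined with `_`, numbers become floats, and bare strings are dropped.

```python
def flatten(obj, prefix=""):
    """Flatten a nested JSON object into metric fields, as in the example above."""
    fields = {}
    for key, value in obj.items():
        name = prefix + key
        if isinstance(value, dict):
            # Nested objects are flattened with `_` joining the keys, e.g. b.c -> b_c.
            fields.update(flatten(value, name + "_"))
        elif isinstance(value, bool):
            continue  # simplification: booleans are not modeled in this sketch
        elif isinstance(value, (int, float)):
            fields[name] = float(value)  # all JSON numbers become float fields
        # strings are ignored unless listed in json_string_fields (not modeled here)
    return fields

print(flatten({"a": 5, "b": {"c": 6}, "ignored": "I'm a string"}))
```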

### Name, tags, and string fields

Config:

```toml
[[inputs.file]]
  files = ["example"]
  json_name_key = "name"
  tag_keys = ["my_tag_1"]
  json_string_fields = ["my_field"]
  data_format = "json"
```

Input:

```json
{
  "a": 5,
  "b": {
    "c": 6,
    "my_field": "description"
  },
  "my_tag_1": "foo",
  "name": "my_json"
}
```

Output:

```
my_json,my_tag_1=foo a=5,b_c=6,my_field="description"
```

### Arrays

If the JSON data is an array, then each object within the array is parsed with
the configured settings.

Config:

```toml
[[inputs.file]]
  files = ["example"]
  data_format = "json"
  json_time_key = "b_time"
  json_time_format = "02 Jan 06 15:04 MST"
```

Input:

```json
[
  {
    "a": 5,
    "b": {
      "c": 6,
      "time": "04 Jan 06 15:04 MST"
    }
  },
  {
    "a": 7,
    "b": {
      "c": 8,
      "time": "11 Jan 07 15:04 MST"
    }
  }
]
```

Output:

```
file a=5,b_c=6 1136387040000000000
file a=7,b_c=8 1168527840000000000
```

### Query

The `json_query` option can be used to parse a subset of the document.

Config:

```toml
[[inputs.file]]
  files = ["example"]
  data_format = "json"
  tag_keys = ["first"]
  json_string_fields = ["last"]
  json_query = "obj.friends"
```

Input:

```json
{
  "obj": {
    "name": {"first": "Tom", "last": "Anderson"},
    "age": 37,
    "children": ["Sara", "Alex", "Jack"],
    "fav.movie": "Deer Hunter",
    "friends": [
      {"first": "Dale", "last": "Murphy", "age": 44},
      {"first": "Roger", "last": "Craig", "age": 68},
      {"first": "Jane", "last": "Murphy", "age": 47}
    ]
  }
}
```

Output:

```
file,first=Dale last="Murphy",age=44
file,first=Roger last="Craig",age=68
file,first=Jane last="Murphy",age=47
```

[gjson]: https://github.com/tidwall/gjson
[gjson syntax]: https://github.com/tidwall/gjson#path-syntax
[json]: https://www.json.org/
[time parse]: https://golang.org/pkg/time/#Parse

@ -0,0 +1,174 @@
---
|
||||
title: JSON v2 input data format
|
||||
description: Use the JSON v2 input data format to parse [JSON][json] objects, or an array of objects, into Telegraf metric fields.
|
||||
menu:
|
||||
telegraf_1_25_ref:
|
||||
|
||||
name: JSON v2 input
|
||||
weight: 70
|
||||
parent: Input data formats
|
||||
---
|
||||
|
||||
The JSON v2 input data format parses a [JSON][json] object or an array of objects into Telegraf metric fields.
|
||||
This parser takes valid JSON input and turns it into metrics.
|
||||
|
||||
The query syntax supported is [GJSON Path Syntax](https://github.com/tidwall/gjson/blob/v1.7.5/SYNTAX.md),
|
||||
Use to [this playground](https://gjson.dev/) to test out your GJSON path.
|
||||
|
||||
You can find multiple examples [here](https://github.com/influxdata/telegraf/tree/master/plugins/parsers/json_v2/testdata) in the Telegraf repository.
|
||||
|
||||
<!--
|
||||
is this still true?
|
||||
{{% note %}}
|
||||
All JSON numbers are converted to float fields. JSON String are
|
||||
ignored unless specified in the `tag_key` or `json_string_fields` options.
|
||||
{{% /note %}}
|
||||
-->
|
||||
|
||||
## Configuration
|
||||
|
||||
Configure this parser by describing the metric you want by defining the fields and tags from the input.
|
||||
The configuration is divided into config sub-tables called `field`, `tag`, and `object`.
|
||||
In the example below you can see all the possible configuration keys you can define for each config table.
|
||||
In the sections that follow these configuration keys are defined in more detail.
|
||||
|
||||
```toml
|
||||
[[inputs.file]]
|
||||
urls = []
|
||||
data_format = "json_v2"
|
||||
|
||||
[[inputs.file.json_v2]]
|
||||
measurement_name = "" # A string that will become the new measurement name
|
||||
measurement_name_path = "" # A string with valid GJSON path syntax, will override measurement_name
|
||||
timestamp_path = "" # A string with valid GJSON path syntax to a valid timestamp (single value)
|
||||
timestamp_format = "" # A string with a valid timestamp format (see below for possible values)
|
||||
timestamp_timezone = "" # A string with with a valid timezone (see below for possible values)
|
||||
|
||||
[[inputs.file.json_v2.field]]
|
||||
path = "" # A string with valid GJSON path syntax
|
||||
rename = "new name" # A string with a new name for the tag key
|
||||
type = "int" # A string specifying the type (int,uint,float,string,bool)
|
||||
|
||||
[[inputs.file.json_v2.tag]]
|
||||
path = "" # A string with valid GJSON path syntax
|
||||
rename = "new name" # A string with a new name for the tag key
|
||||
|
||||
[[inputs.file.json_v2.object]]
|
||||
path = "" # A string with valid GJSON path syntax
|
||||
timestamp_key = "" # A JSON key (for a nested key, prepend the parent keys with underscores) to a valid timestamp
|
||||
timestamp_format = "" # A string with a valid timestamp format (see below for possible values)
|
||||
timestamp_timezone = "" # A string with with a valid timezone (see below for possible values)
|
||||
disable_prepend_keys = false (or true, just not both)
|
||||
included_keys = [] # List of JSON keys (for a nested key, prepend the parent keys with underscores) that should be only included in result
|
||||
excluded_keys = [] # List of JSON keys (for a nested key, prepend the parent keys with underscores) that shouldn't be included in result
|
||||
tags = [] # List of JSON keys (for a nested key, prepend the parent keys with underscores) to be a tag instead of a field
|
||||
[inputs.file.json_v2.object.renames] # A map of JSON keys (for a nested key, prepend the parent keys with underscores) with a new name for the tag key
|
||||
key = "new name"
|
||||
[inputs.file.json_v2.object.fields] # A map of JSON keys (for a nested key, prepend the parent keys with underscores) with a type (int,uint,float,string,bool)
|
||||
key = "int"
|
||||
```
|
||||
|
||||
### Root configuration options
|
||||
|
||||
* **measurement_name (OPTIONAL)**: Will set the measurement name to the provided string.
|
||||
* **measurement_name_path (OPTIONAL)**: You can define a query with [GJSON Path Syntax](https://github.com/tidwall/gjson/blob/v1.7.5/SYNTAX.md) to set a measurement name from the JSON input.
|
||||
The query must return a single data value or it will use the default measurement name.
|
||||
This takes precedence over `measurement_name`.
|
||||
* **timestamp_path (OPTIONAL)**: You can define a query with [GJSON Path Syntax](https://github.com/tidwall/gjson/blob/v1.7.5/SYNTAX.md) to set a timestamp from the JSON input.
|
||||
The query must return a single data value or it will default to the current time.
|
||||
* **timestamp_format (OPTIONAL, but REQUIRED when timestamp_path is defined)**: Must be set to `unix`, `unix_ms`, `unix_us`, `unix_ns`, or
|
||||
the Go "reference time" which is defined to be the specific time:
|
||||
`Mon Jan 2 15:04:05 MST 2006`
|
||||
* **timestamp_timezone (OPTIONAL, but REQUIRES timestamp_path)**: This option should be set to a
|
||||
[Unix TZ value](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones),
|
||||
such as `America/New_York`, to `Local` to utilize the system timezone, or to `UTC`.
|
||||
Defaults to `UTC`.
|
||||
|
||||
## Arrays and Objects
|
||||
|
||||
The following describes the high-level approach when parsing arrays and objects:
|
||||
|
||||
- **Array**: Every element in an array is treated as a *separate* metric
|
||||
- **Object**: Every key-value pair in an object is treated as a *single* metric
|
||||
|
||||
When handling nested arrays and objects, the rules above continue to apply as the parser creates metrics.
|
||||
When an object has multiple arrays as values,
|
||||
the arrays will become separate metrics containing only non-array values from the object.
|
||||
Below you can see an example of this behavior,
|
||||
with an input JSON containing an array of book objects that has a nested array of characters.
|
||||
|
||||
**Example JSON:**
|
||||
|
||||
```json
|
||||
{
|
||||
"book": {
|
||||
"title": "The Lord Of The Rings",
|
||||
"chapters": [
|
||||
"A Long-expected Party",
|
||||
"The Shadow of the Past"
|
||||
],
|
||||
"author": "Tolkien",
|
||||
"characters": [
|
||||
{
|
||||
"name": "Bilbo",
|
||||
"species": "hobbit"
|
||||
},
|
||||
{
|
||||
"name": "Frodo",
|
||||
"species": "hobbit"
|
||||
}
|
||||
],
|
||||
"random": [
|
||||
1,
|
||||
2
|
||||
]
|
||||
}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
**Example configuration:**
|
||||
|
||||
```toml
|
||||
[[inputs.file]]
|
||||
files = ["./testdata/multiple_arrays_in_object/input.json"]
|
||||
data_format = "json_v2"
|
||||
[[inputs.file.json_v2]]
|
||||
[[inputs.file.json_v2.object]]
|
||||
path = "book"
|
||||
tags = ["title"]
|
||||
disable_prepend_keys = true
|
||||
```
|
||||
|
||||
**Expected metrics:**
|
||||
|
||||
```
|
||||
file,title=The\ Lord\ Of\ The\ Rings author="Tolkien",chapters="A Long-expected Party"
|
||||
file,title=The\ Lord\ Of\ The\ Rings author="Tolkien",chapters="The Shadow of the Past"
|
||||
file,title=The\ Lord\ Of\ The\ Rings author="Tolkien",name="Bilbo",species="hobbit"
|
||||
file,title=The\ Lord\ Of\ The\ Rings author="Tolkien",name="Frodo",species="hobbit"
|
||||
file,title=The\ Lord\ Of\ The\ Rings author="Tolkien",random=1
|
||||
file,title=The\ Lord\ Of\ The\ Rings author="Tolkien",random=2
|
||||
|
||||
```
|
||||
|
||||
You can find more complicated examples under the folder `testdata`.
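The fan-out rule above can be sketched in Python. This is an illustrative model of the expansion behavior only, not the parser's actual Go implementation:

```python
def expand(obj, base=None):
    """Illustrative model of the json_v2 rule: scalar key-values of an
    object merge into one metric, while every array element fans out
    into a separate metric that inherits the object's scalar values."""
    base = dict(base or {})
    base.update({k: v for k, v in obj.items()
                 if not isinstance(v, (list, dict))})
    metrics = []
    for key, value in obj.items():
        if isinstance(value, list):
            for element in value:
                if isinstance(element, dict):
                    # Nested objects recurse, carrying the parent's scalars
                    metrics.extend(expand(element, base))
                else:
                    metrics.append({**base, key: element})
    return metrics or [base]

book = {"title": "The Lord Of The Rings", "author": "Tolkien",
        "chapters": ["A Long-expected Party", "The Shadow of the Past"]}
for m in expand(book):
    print(m)  # two metrics, one per chapter, each with title and author
```

Running this against the full book object above (with `chapters`, `characters`, and `random`) yields six metrics, matching the expected output.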
|
||||
|
||||
## Types
|
||||
|
||||
For each field, you can optionally define the type used in the resulting metric.
|
||||
The following rules are in place for this configuration:
|
||||
|
||||
* If a type is explicitly defined, the parser will enforce this type and convert the data to the defined type if possible.
|
||||
If the type can't be converted then the parser will fail.
|
||||
* If a type isn't defined, the parser uses the default type defined in the JSON (int, float, string).
|
||||
|
||||
The type values you can set:
|
||||
|
||||
* `int`: bools, floats, or strings (with valid numbers) can be converted to an int.
* `uint`: bools, floats, or strings (with valid numbers) can be converted to a uint.
* `string`: any data can be formatted as a string.
* `float`: strings (with valid numbers) or integers can be converted to a float.
* `bool`: the strings "true" or "false" (regardless of capitalization) or the integers `0` or `1` can be converted to a bool.
|
||||
|
||||
[json]: https://www.json.org/
|
|
---
|
||||
title: Logfmt input data format
|
||||
description: Use the `logfmt` input data format to parse logfmt data into Telegraf metrics.
|
||||
menu:
|
||||
telegraf_1_25_ref:
|
||||
|
||||
name: logfmt
|
||||
weight: 80
|
||||
parent: Input data formats
|
||||
---
|
||||
|
||||
The `logfmt` data format parses [logfmt] data into Telegraf metrics.
|
||||
|
||||
[logfmt]: https://brandur.org/logfmt
|
||||
|
||||
## Configuration
|
||||
|
||||
```toml
|
||||
[[inputs.file]]
|
||||
files = ["example"]
|
||||
|
||||
## Data format to consume.
|
||||
## Each data format has its own unique set of configuration options, read
|
||||
## more about them here:
|
||||
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
|
||||
data_format = "logfmt"
|
||||
|
||||
## Set the name of the created metric, if unset the name of the plugin will
|
||||
## be used.
|
||||
metric_name = "logfmt"
|
||||
```
|
||||
|
||||
## Metrics
|
||||
|
||||
Each key-value pair in the line is added to a new metric as a field. The type
|
||||
of the field is automatically determined based on the contents of the value.
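A minimal sketch of this parse-and-infer step (illustrative only; the real parser is written in Go and follows the logfmt grammar exactly):

```python
import re

def parse_logfmt(line):
    """Minimal logfmt parse with simple type inference: integer-looking
    values become ints, decimal-looking values become floats,
    "true"/"false" become bools, and everything else stays a string."""
    fields = {}
    for key, quoted, bare in re.findall(r'(\S+?)=(?:"([^"]*)"|(\S*))', line):
        raw = quoted if quoted else bare
        if re.fullmatch(r'-?\d+', raw):
            fields[key] = int(raw)
        elif re.fullmatch(r'-?\d+\.\d+', raw):
            fields[key] = float(raw)
        elif raw in ("true", "false"):
            fields[key] = raw == "true"
        else:
            fields[key] = raw
    return fields

print(parse_logfmt('method=GET status=200 service=8ms'))
```

Note how `status=200` becomes an integer field while `service=8ms` stays a string, matching the example below.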
|
||||
|
||||
## Examples
|
||||
|
||||
```
|
||||
- method=GET host=example.org ts=2018-07-24T19:43:40.275Z connect=4ms service=8ms status=200 bytes=1653
|
||||
+ logfmt method="GET",host="example.org",ts="2018-07-24T19:43:40.275Z",connect="4ms",service="8ms",status=200i,bytes=1653i
|
||||
```
|
|
---
title: Nagios input data format
description: Use the Nagios input data format to parse the output of Nagios plugins into Telegraf metrics.
menu:
  telegraf_1_25_ref:
    name: Nagios
    weight: 90
    parent: Input data formats
---

The Nagios input data format parses the output of
[Nagios plugins](https://www.nagios.org/downloads/nagios-plugins/) into
Telegraf metrics.

## Configuration

```toml
[[inputs.exec]]
## Commands array
commands = ["/usr/lib/nagios/plugins/check_load -w 5,6,7 -c 7,8,9"]

## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "nagios"
```
---
|
||||
title: Prometheus Remote Write input data format
|
||||
description: |
|
||||
Use the Prometheus Remote Write input data format to write samples directly into Telegraf metrics.
|
||||
menu:
|
||||
telegraf_1_25_ref:
|
||||
|
||||
name: Prometheus Remote Write
|
||||
weight: 40
|
||||
parent: Input data formats
|
||||
---
|
||||
|
||||
Use the Prometheus Remote Write plugin to convert [Prometheus Remote Write](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write) samples directly into Telegraf metrics.
|
||||
|
||||
{{% note %}}
|
||||
If you are using InfluxDB 1.x and the [Prometheus Remote Write endpoint](https://github.com/influxdata/telegraf/blob/master/plugins/parsers/prometheusremotewrite/README.md)
to write metrics, you can migrate to InfluxDB 2.0 and use this parser.
|
||||
For the metrics to completely align with the 1.x endpoint, add a Starlark processor as described [here](https://github.com/influxdata/telegraf/blob/master/plugins/processors/starlark/README.md).
|
||||
|
||||
{{% /note %}}
|
||||
|
||||
### Configuration
|
||||
|
||||
Use the [`inputs.http_listener_v2`](/telegraf/v1.25/plugins/#input-http_listener_v2) plugin and set `data_format = "prometheusremotewrite"`.
|
||||
|
||||
```toml
|
||||
[[inputs.http_listener_v2]]
|
||||
## Address and port to host HTTP listener on
|
||||
service_address = ":1234"
|
||||
## Path to listen to.
|
||||
path = "/receive"
|
||||
## Data format to consume.
|
||||
data_format = "prometheusremotewrite"
|
||||
```
|
||||
|
||||
### Example
|
||||
|
||||
**Example Input**
|
||||
```
|
||||
prompb.WriteRequest{
|
||||
Timeseries: []*prompb.TimeSeries{
|
||||
{
|
||||
Labels: []*prompb.Label{
|
||||
{Name: "__name__", Value: "go_gc_duration_seconds"},
|
||||
{Name: "instance", Value: "localhost:9090"},
|
||||
{Name: "job", Value: "prometheus"},
|
||||
{Name: "quantile", Value: "0.99"},
|
||||
},
|
||||
Samples: []prompb.Sample{
|
||||
{Value: 4.63, Timestamp: time.Date(2020, 4, 1, 0, 0, 0, 0, time.UTC).UnixNano()},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
```
|
||||
|
||||
**Example Output**
|
||||
```
|
||||
prometheus_remote_write,instance=localhost:9090,job=prometheus,quantile=0.99 go_gc_duration_seconds=4.63 1614889298859000000
|
||||
```
|
||||
|
||||
[here]: https://github.com/influxdata/telegraf/tree/master/plugins/parsers/prometheusremotewrite#for-alignment-with-the-influxdb-v1x-prometheus-remote-write-spec
|
|
---
|
||||
title: Value input data format
|
||||
description: Use the `value` input data format to parse single values into Telegraf metrics.
|
||||
menu:
|
||||
telegraf_1_25_ref:
|
||||
|
||||
name: Value
|
||||
weight: 100
|
||||
parent: Input data formats
|
||||
---
|
||||
|
||||
|
||||
The "value" input data format translates single values into Telegraf metrics. This
|
||||
is done by assigning a measurement name and setting a single field ("value")
|
||||
as the parsed metric.
|
||||
|
||||
## Configuration
|
||||
|
||||
You **must** tell Telegraf what type of metric to collect by using the
|
||||
`data_type` configuration option. Available data type options are:
|
||||
|
||||
1. integer
|
||||
2. float or long
|
||||
3. string
|
||||
4. boolean
|
||||
|
||||
> **Note:** It is also recommended that you set `name_override` to a measurement
|
||||
name that makes sense for your metric; otherwise, it will just be set to the
|
||||
name of the plugin.
|
||||
|
||||
```toml
|
||||
[[inputs.exec]]
|
||||
## Commands array
|
||||
commands = ["cat /proc/sys/kernel/random/entropy_avail"]
|
||||
|
||||
## override the default metric name of "exec"
|
||||
name_override = "entropy_available"
|
||||
|
||||
## Data format to consume.
|
||||
## Each data format has its own unique set of configuration options, read
|
||||
## more about them here:
|
||||
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
|
||||
data_format = "value"
|
||||
data_type = "integer" # required
|
||||
```
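As a sketch, the parse step amounts to one cast on the whole payload and a single `value` field. The helper below is hypothetical, for illustration only:

```python
def parse_value(raw, data_type):
    """Sketch of the value format: the entire input is coerced to a single
    field named "value" using the configured data_type."""
    cast = {
        "integer": int,
        "float": float,
        "long": float,
        "string": str,
        "boolean": lambda s: s.strip().lower() == "true",
    }
    return {"value": cast[data_type](raw.strip())}

# The entropy example above: a reading like "3874\n" with data_type = "integer"
print(parse_value("3874\n", "integer"))  # {'value': 3874}
```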
|
|
---
title: Wavefront input data format
description: Use the Wavefront input data format to parse Wavefront data into Telegraf metrics.
menu:
  telegraf_1_25_ref:
    name: Wavefront
    weight: 110
    parent: Input data formats
---

The Wavefront input data format parses Wavefront data into Telegraf metrics.
For more information on the Wavefront native data format, see
[Wavefront Data Format](https://docs.wavefront.com/wavefront_data_format.html) in the Wavefront documentation.

## Configuration

There are no additional configuration options for the Wavefront data format.

```toml
[[inputs.file]]
files = ["example"]

## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "wavefront"
```
---
|
||||
title: XML input data format
|
||||
description: Use the XML input data format to parse XML data into Telegraf metrics.
|
||||
menu:
|
||||
telegraf_1_25_ref:
|
||||
|
||||
name: XML
|
||||
weight: 110
|
||||
parent: Input data formats
|
||||
---
|
||||
|
||||
The XML input data format parses XML data into Telegraf metrics.
|
||||
|
||||
|
||||
## Configuration
|
||||
|
||||
```toml
|
||||
[[inputs.file]]
|
||||
files = ["example.xml"]
|
||||
|
||||
## Data format to consume.
|
||||
## Each data format has its own unique set of configuration options, read
|
||||
## more about them here:
|
||||
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
|
||||
data_format = "xml"
|
||||
|
||||
## Multiple parsing sections are allowed
|
||||
[[inputs.file.xml]]
|
||||
## Optional: XPath-query to select a subset of nodes from the XML document.
|
||||
#metric_selection = "/Bus/child::Sensor"
|
||||
|
||||
## Optional: XPath-query to set the metric (measurement) name.
|
||||
#metric_name = "string('example')"
|
||||
|
||||
## Optional: Query to extract metric timestamp.
|
||||
## If not specified the time of execution is used.
|
||||
#timestamp = "/Gateway/Timestamp"
|
||||
## Optional: Format of the timestamp determined by the query above.
|
||||
## This can be any of "unix", "unix_ms", "unix_us", "unix_ns" or a valid Golang
|
||||
## time format. If not specified, a "unix" timestamp (in seconds) is expected.
|
||||
#timestamp_format = "2006-01-02T15:04:05Z"
|
||||
|
||||
## Tag definitions using the given XPath queries.
|
||||
[inputs.file.xml.tags]
|
||||
name = "substring-after(Sensor/@name, ' ')"
|
||||
device = "string('the ultimate sensor')"
|
||||
|
||||
## Integer field definitions using XPath queries.
|
||||
[inputs.file.xml.fields_int]
|
||||
consumers = "Variable/@consumers"
|
||||
|
||||
## Non-integer field definitions using XPath queries.
|
||||
## The field type is defined using XPath expressions such as number(), boolean() or string(). If no conversion is performed the field will be of type string.
|
||||
[inputs.file.xml.fields]
|
||||
temperature = "number(Variable/@temperature)"
|
||||
power = "number(Variable/@power)"
|
||||
frequency = "number(Variable/@frequency)"
|
||||
ok = "Mode != 'ok'"
|
||||
```
|
|
---
|
||||
title: Telegraf output data formats
|
||||
description: Telegraf serializes metrics into output data formats.
|
||||
menu:
|
||||
telegraf_1_25_ref:
|
||||
|
||||
name: Output data formats
|
||||
weight: 1
|
||||
parent: Data formats
|
||||
---
|
||||
|
||||
In addition to output-specific data formats, Telegraf supports the following set
|
||||
of common data formats that may be selected when configuring many of the Telegraf
|
||||
output plugins.
|
||||
|
||||
{{< children >}}
|
||||
|
||||
You can identify plugins that support output data formats by the presence of a
|
||||
`data_format` configuration option, for example, in the File (`file`) output plugin:
|
||||
|
||||
```toml
|
||||
[[outputs.file]]
|
||||
## Files to write to, "stdout" is a specially handled file.
|
||||
files = ["stdout"]
|
||||
|
||||
## Data format to output.
|
||||
## Each data format has its own unique set of configuration options, read
|
||||
## more about them here:
|
||||
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
|
||||
data_format = "influx"
|
||||
```
|
|
---
|
||||
title: Carbon2 output data format
|
||||
description: Use the Carbon2 output data format (serializer) to convert Telegraf metrics into the Carbon2 format.
|
||||
menu:
|
||||
telegraf_1_25_ref:
|
||||
|
||||
name: Carbon2
|
||||
weight: 10
|
||||
parent: Output data formats
|
||||
---
|
||||
|
||||
The `carbon2` output data format (serializer) translates the Telegraf metric format to the [Carbon2 format](http://metrics20.org/implementations/).
|
||||
|
||||
### Configuration
|
||||
|
||||
```toml
|
||||
[[outputs.file]]
|
||||
## Files to write to, "stdout" is a specially handled file.
|
||||
files = ["stdout", "/tmp/metrics.out"]
|
||||
|
||||
## Data format to output.
|
||||
## Each data format has its own unique set of configuration options, read
|
||||
## more about them here:
|
||||
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
|
||||
data_format = "carbon2"
|
||||
```
|
||||
|
||||
Standard form:
|
||||
|
||||
```
|
||||
metric=name field=field_1 host=foo 30 1234567890
|
||||
metric=name field=field_2 host=foo 4 1234567890
|
||||
metric=name field=field_N host=foo 59 1234567890
|
||||
```
|
||||
|
||||
### Metrics
|
||||
|
||||
The serializer converts the metrics by creating `intrinsic_tags` using the combination of metric name and fields. So, if one Telegraf metric has 4 fields, the `carbon2` output will be 4 separate metrics. There will be a `metric` tag that represents the name of the metric and a `field` tag to represent the field.
|
||||
|
||||
### Example
|
||||
|
||||
If we take the following InfluxDB Line Protocol:
|
||||
|
||||
```
|
||||
weather,location=us-midwest,season=summer temperature=82,wind=100 1234567890
|
||||
```
|
||||
|
||||
After serializing in Carbon2, the result would be:
|
||||
|
||||
```
|
||||
metric=weather field=temperature location=us-midwest season=summer 82 1234567890
|
||||
metric=weather field=wind location=us-midwest season=summer 100 1234567890
|
||||
```
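The field fan-out rule can be sketched as follows. This is an illustrative model, not the serializer's actual code; tags are sorted here just for deterministic output:

```python
def to_carbon2(name, tags, fields, timestamp):
    """Sketch of the carbon2 fan-out: one line per field, with metric
    and field as intrinsic tags, followed by the metric's own tags."""
    tag_part = " ".join(f"{k}={v}" for k, v in sorted(tags.items()))
    lines = []
    for field, value in fields.items():
        lines.append(f"metric={name} field={field} {tag_part} {value} {timestamp}")
    return lines

for line in to_carbon2("weather",
                       {"location": "us-midwest", "season": "summer"},
                       {"temperature": 82, "wind": 100},
                       1234567890):
    print(line)
```

The two printed lines match the serialized example above: one Telegraf metric with two fields becomes two Carbon2 metrics.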
|
||||
|
||||
### Fields and tags with spaces
|
||||
|
||||
When a field key, tag key, or tag value contains spaces, the spaces are replaced with `_`.
|
||||
|
||||
### Tags with empty values
|
||||
|
||||
When a tag's value is empty, it will be replaced with `null`.
|
|
---
|
||||
title: Graphite output data format
|
||||
description: Use the Graphite output data format to serialize data from Telegraf metrics.
|
||||
menu:
|
||||
telegraf_1_25_ref:
|
||||
|
||||
name: Graphite output
|
||||
weight: 20
|
||||
parent: Output data formats
|
||||
---
|
||||
|
||||
The Graphite data format is serialized from Telegraf metrics using either the
|
||||
template pattern or tag support method. You can select between the two
|
||||
methods using the [`graphite_tag_support`](#graphite_tag_support) option. When set, the tag support method is used,
|
||||
otherwise, the [template pattern](#templates) option is used.
|
||||
|
||||
## Configuration
|
||||
|
||||
```toml
|
||||
[[outputs.file]]
|
||||
## Files to write to, "stdout" is a specially handled file.
|
||||
files = ["stdout", "/tmp/metrics.out"]
|
||||
|
||||
## Data format to output.
|
||||
## Each data format has its own unique set of configuration options, read
|
||||
## more about them here:
|
||||
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
|
||||
data_format = "graphite"
|
||||
|
||||
## Prefix added to each graphite bucket
|
||||
prefix = "telegraf"
|
||||
## Graphite template pattern
|
||||
template = "host.tags.measurement.field"
|
||||
|
||||
## Support Graphite tags, recommended to enable when using Graphite 1.1 or later.
|
||||
# graphite_tag_support = false
|
||||
```
|
||||
|
||||
### graphite_tag_support
|
||||
|
||||
When the `graphite_tag_support` option is enabled, the template pattern is not
|
||||
used. Instead, tags are encoded using
|
||||
[Graphite tag support](http://graphite.readthedocs.io/en/latest/tags.html),
|
||||
added in Graphite 1.1. The `metric_path` is a combination of the optional
|
||||
`prefix` option, measurement name, and field name.
|
||||
|
||||
The tag `name` is reserved by Graphite; any conflicting tags will be encoded as `_name`.
|
||||
|
||||
**Example conversion**:
|
||||
```
|
||||
cpu,cpu=cpu-total,dc=us-east-1,host=tars usage_idle=98.09,usage_user=0.89 1455320660004257758
|
||||
=>
|
||||
cpu.usage_user;cpu=cpu-total;dc=us-east-1;host=tars 0.89 1455320690
|
||||
cpu.usage_idle;cpu=cpu-total;dc=us-east-1;host=tars 98.09 1455320690
|
||||
```
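The conversion above can be sketched as follows (an illustrative model, not the serializer's actual code; tags are sorted here for deterministic output):

```python
def graphite_tagged(measurement, field, tags, value, ts, prefix=""):
    """Sketch of the Graphite tag-support output: metric_path is the
    optional prefix, measurement, and field joined with dots; tags follow
    as ;key=value pairs, with the reserved key "name" rewritten to "_name"."""
    path = ".".join(p for p in (prefix, measurement, field) if p)
    pairs = "".join(f";{'_name' if k == 'name' else k}={v}"
                    for k, v in sorted(tags.items()))
    return f"{path}{pairs} {value} {ts}"

print(graphite_tagged("cpu", "usage_user",
                      {"cpu": "cpu-total", "dc": "us-east-1", "host": "tars"},
                      0.89, 1455320690))
```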
|
||||
|
||||
### templates
|
||||
|
||||
For more information on templates and template patterns, see [Template patterns](/telegraf/v1.25/data_formats/template-patterns/).
|
|
---
|
||||
title: InfluxDB Line Protocol output data format
|
||||
description: The `influx` data format outputs metrics into the InfluxDB Line Protocol format.
|
||||
menu:
|
||||
telegraf_1_25_ref:
|
||||
|
||||
name: InfluxDB Line Protocol
|
||||
weight: 30
|
||||
parent: Output data formats
|
||||
---
|
||||
|
||||
The `influx` output data format outputs metrics into [InfluxDB Line Protocol][line protocol]. InfluxData recommends this data format unless another format is required for interoperability.
|
||||
|
||||
## Configuration
|
||||
|
||||
```toml
|
||||
[[outputs.file]]
|
||||
## Files to write to, "stdout" is a specially handled file.
|
||||
files = ["stdout", "/tmp/metrics.out"]
|
||||
|
||||
## Data format to output.
|
||||
## Each data format has its own unique set of configuration options, read
|
||||
## more about them here:
|
||||
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
|
||||
data_format = "influx"
|
||||
|
||||
## Maximum line length in bytes. Useful only for debugging.
|
||||
influx_max_line_bytes = 0
|
||||
|
||||
## When true, fields will be output in ascending lexical order. Enabling
|
||||
## this option will result in decreased performance and is only recommended
|
||||
## when you need predictable ordering while debugging.
|
||||
influx_sort_fields = false
|
||||
|
||||
## When true, Telegraf will output unsigned integers as unsigned values,
|
||||
## i.e.: `42u`. You will need a version of InfluxDB supporting unsigned
|
||||
## integer values. Enabling this option will result in field type errors if
|
||||
## existing data has been written.
|
||||
influx_uint_support = false
|
||||
```
|
||||
|
||||
[line protocol]: /{{< latest "influxdb" "v1" >}}/write_protocols/line_protocol_tutorial/
|
|
---
|
||||
title: JSON output data format
|
||||
description: Telegraf's `json` output data format converts metrics into JSON documents.
|
||||
menu:
|
||||
telegraf_1_25_ref:
|
||||
|
||||
name: JSON
|
||||
weight: 40
|
||||
parent: Output data formats
|
||||
---
|
||||
|
||||
The `json` output data format serializes Telegraf metrics into JSON documents.
|
||||
|
||||
## Configuration
|
||||
|
||||
```toml
|
||||
[[outputs.file]]
|
||||
## Files to write to, "stdout" is a specially handled file.
|
||||
files = ["stdout", "/tmp/metrics.out"]
|
||||
|
||||
## Data format to output.
|
||||
## Each data format has its own unique set of configuration options, read
|
||||
## more about them here:
|
||||
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
|
||||
data_format = "json"
|
||||
|
||||
## The resolution to use for the metric timestamp. Must be a duration string
|
||||
## such as "1ns", "1us", "1ms", "10ms", "1s". Durations are truncated to
|
||||
## the power of 10 less than the specified units.
|
||||
json_timestamp_units = "1s"
|
||||
```
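The truncation rule means, for example, that a value of `"15ms"` behaves like `"10ms"`. A sketch of that rounding in nanoseconds (illustrative, not Telegraf's code):

```python
def truncate_units(duration_ns):
    """Round a duration (in nanoseconds) down to the nearest power of 10,
    mirroring how json_timestamp_units is truncated."""
    power = 1
    while power * 10 <= duration_ns:
        power *= 10
    return power

print(truncate_units(15_000_000))  # 15ms -> 10000000 ns, i.e. "10ms"
```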
|
||||
|
||||
## Examples
|
||||
|
||||
### Standard format
|
||||
|
||||
```json
|
||||
{
|
||||
"fields": {
|
||||
"field_1": 30,
|
||||
"field_2": 4,
|
||||
"field_N": 59,
|
||||
"n_images": 660
|
||||
},
|
||||
"name": "docker",
|
||||
"tags": {
|
||||
"host": "raynor"
|
||||
},
|
||||
"timestamp": 1458229140
|
||||
}
|
||||
```
|
||||
|
||||
### Batch format
|
||||
|
||||
When an output plugin needs to emit multiple metrics at one time, it may use the
|
||||
batch format. The use of batch format is determined by the plugin -- reference
|
||||
the documentation for the specific plugin.
|
||||
|
||||
```json
|
||||
{
|
||||
"metrics": [
|
||||
{
|
||||
"fields": {
|
||||
"field_1": 30,
|
||||
"field_2": 4,
|
||||
"field_N": 59,
|
||||
"n_images": 660
|
||||
},
|
||||
"name": "docker",
|
||||
"tags": {
|
||||
"host": "raynor"
|
||||
},
|
||||
"timestamp": 1458229140
|
||||
},
|
||||
{
|
||||
"fields": {
|
||||
"field_1": 30,
|
||||
"field_2": 4,
|
||||
"field_N": 59,
|
||||
"n_images": 660
|
||||
},
|
||||
"name": "docker",
|
||||
"tags": {
|
||||
"host": "raynor"
|
||||
},
|
||||
"timestamp": 1458229140
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
|
---
|
||||
title: MessagePack output data format
|
||||
description: Use the MessagePack output data format (serializer) to convert Telegraf metrics into MessagePack format.
|
||||
menu:
|
||||
telegraf_1_25_ref:
|
||||
|
||||
name: MessagePack
|
||||
weight: 10
|
||||
parent: Output data formats
|
||||
---
|
||||
|
||||
The `msgpack` output data format (serializer) translates the Telegraf metric format to [MessagePack](https://msgpack.org/). MessagePack is an efficient binary serialization format that lets you exchange data among multiple languages, like JSON.
|
||||
|
||||
### Configuration
|
||||
|
||||
```toml
|
||||
[[outputs.file]]
|
||||
## Files to write to, "stdout" is a specially handled file.
|
||||
files = ["stdout", "/tmp/metrics.out"]
|
||||
|
||||
## Data format to output.
|
||||
## Each data format has its own unique set of configuration options, read
|
||||
## more about them here:
|
||||
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
|
||||
data_format = "msgpack"
|
||||
```
|
||||
|
||||
|
||||
### Example output
|
||||
|
||||
Output of this format is MessagePack binary representation of metrics with a structure identical to the below JSON:
|
||||
|
||||
```
|
||||
{
|
||||
"name":"cpu",
|
||||
"time": <TIMESTAMP>, // https://github.com/msgpack/msgpack/blob/master/spec.md#timestamp-extension-type
|
||||
"tags":{
|
||||
"tag_1":"host01",
|
||||
...
|
||||
},
|
||||
"fields":{
|
||||
"field_1":30,
|
||||
"field_2":true,
|
||||
"field_3":"field_value",
|
||||
"field_4":30.1
|
||||
...
|
||||
}
|
||||
}
|
||||
```
|
|
---
|
||||
title: ServiceNow Metrics output data format
|
||||
description: Use the ServiceNow Metrics output data format (serializer) to output metrics in the ServiceNow Operational Intelligence format.
|
||||
menu:
|
||||
telegraf_1_25_ref:
|
||||
|
||||
name: ServiceNow Metrics
|
||||
weight: 50
|
||||
parent: Output data formats
|
||||
---
|
||||
|
||||
The ServiceNow Metrics output data format (serializer) outputs metrics in the [ServiceNow Operational Intelligence format](https://docs.servicenow.com/bundle/kingston-it-operations-management/page/product/event-management/reference/mid-POST-metrics.html).
|
||||
|
||||
It can be used to write to a file using the File output plugin, or to send metrics to a MID Server with the REST endpoint enabled, using the standard Telegraf HTTP output plugin.
|
||||
If you're using the HTTP output plugin, this serializer knows how to batch the metrics so you don't end up with an HTTP POST per metric.
|
||||
|
||||
An example event looks like:
|
||||
|
||||
```javascript
|
||||
[{
|
||||
"metric_type": "Disk C: % Free Space",
|
||||
"resource": "C:\\",
|
||||
"node": "lnux100",
|
||||
"value": 50,
|
||||
"timestamp": 1473183012000,
|
||||
"ci2metric_id": {
|
||||
"node": "lnux100"
|
||||
},
|
||||
"source": "Telegraf"
|
||||
}]
|
||||
```
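A sketch of how one Telegraf metric maps onto that shape, batched as a single JSON array so one HTTP POST can carry many records. The field names and the choice of the `host` tag as the node are assumptions for illustration:

```python
import json

def to_nowmetric(name, tags, fields, ts_ms, node_tag="host"):
    """Sketch of the nowmetric shape: one record per field, all records
    batched into a single JSON array. The node is taken from a tag
    (assumed here to be "host")."""
    node = tags.get(node_tag, "")
    return [{"metric_type": f"{name}.{field}",
             "node": node,
             "value": value,
             "timestamp": ts_ms,
             "ci2metric_id": {"node": node},
             "source": "Telegraf"}
            for field, value in fields.items()]

payload = json.dumps(to_nowmetric("disk", {"host": "lnux100"},
                                  {"free_percent": 50}, 1473183012000))
print(payload)
```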
|
||||
|
||||
## Using with the HTTP output plugin
|
||||
|
||||
To send this data to a ServiceNow MID Server with the Web Server extension activated, use the HTTP output plugin. You need to add some custom headers to manage the MID Web Server authorization. Here's a sample config for an HTTP output:
|
||||
|
||||
```toml
|
||||
[[outputs.http]]
|
||||
## URL is the address to send metrics to
|
||||
url = "http://<mid server fqdn or ip address>:9082/api/mid/sa/metrics"
|
||||
|
||||
## Timeout for HTTP message
|
||||
# timeout = "5s"
|
||||
|
||||
## HTTP method, one of: "POST" or "PUT"
|
||||
method = "POST"
|
||||
|
||||
## HTTP Basic Auth credentials
|
||||
username = 'evt.integration'
|
||||
password = 'P@$$w0rd!'
|
||||
|
||||
## Optional TLS Config
|
||||
# tls_ca = "/etc/telegraf/ca.pem"
|
||||
# tls_cert = "/etc/telegraf/cert.pem"
|
||||
# tls_key = "/etc/telegraf/key.pem"
|
||||
## Use TLS but skip chain & host verification
|
||||
# insecure_skip_verify = false
|
||||
|
||||
## Data format to output.
|
||||
## Each data format has its own unique set of configuration options, read
|
||||
## more about them here:
|
||||
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
|
||||
data_format = "nowmetric"
|
||||
|
||||
## Additional HTTP headers
|
||||
[outputs.http.headers]
|
||||
# # Should be set manually to "application/json" for json data_format
|
||||
Content-Type = "application/json"
|
||||
Accept = "application/json"
|
||||
```
|
||||
|
||||
Starting with the London release, you also need to explicitly create an event rule to allow binding of metric events to host CIs.
|
||||
|
||||
https://docs.servicenow.com/bundle/london-it-operations-management/page/product/event-management/task/event-rule-bind-metrics-to-host.html
|
||||
|
||||
## Using with the File output plugin
|
||||
|
||||
You can use the File output plugin to output the payload in a file.
|
||||
In this case, add the following section to your Telegraf configuration file.
|
||||
|
||||
```toml
|
||||
[[outputs.file]]
|
||||
## Files to write to, "stdout" is a specially handled file.
|
||||
files = ["C:/Telegraf/metrics.out"]
|
||||
|
||||
## Data format to output.
|
||||
## Each data format has its own unique set of configuration options, read
|
||||
## more about them here:
|
||||
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
|
||||
data_format = "nowmetric"
|
||||
```
|
|
---
|
||||
title: SplunkMetric output data format
|
||||
description: The SplunkMetric serializer formats and outputs data in a format that can be consumed by a Splunk metrics index.
|
||||
menu:
|
||||
telegraf_1_25_ref:
|
||||
|
||||
name: SplunkMetric
|
||||
weight: 60
|
||||
parent: Output data formats
|
||||
---
|
||||
|
||||
The SplunkMetric serializer formats and outputs the metric data in a format that can be consumed by a Splunk metrics index.
|
||||
It can be used to write to a file using the file output, or for sending metrics to a HEC using the standard Telegraf HTTP output.
|
||||
|
||||
If you're using the HTTP output, this serializer knows how to batch the metrics so you don't end up with an HTTP POST per metric.
|
||||
|
||||
The data is output in a format that conforms to the Splunk HEC JSON format as described in
|
||||
[Send metrics in JSON format](http://dev.splunk.com/view/event-collector/SP-CAAAFDN).
|
||||
|
||||
An example event looks like:
|
||||
```javascript
|
||||
{
|
||||
"time": 1529708430,
|
||||
"event": "metric",
|
||||
"host": "patas-mbp",
|
||||
"fields": {
|
||||
"_value": 0.6,
|
||||
"cpu": "cpu0",
|
||||
"dc": "mobile",
|
||||
"metric_name": "cpu.usage_user",
|
||||
"user": "ronnocol"
|
||||
}
|
||||
}
|
||||
```
|
||||
In the above snippet, the following keys are dimensions:
|
||||
* cpu
|
||||
* dc
|
||||
* user
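A sketch of the field fan-out behind that event shape: one HEC event per field, with the field's value in `_value`, the dotted metric name in `metric_name`, and the metric's tags carried as dimensions. This is an illustrative model only; the real serializer also treats the `host` tag specially (hoisted to the event's top level, as in the example above), which this sketch skips:

```python
def to_splunkmetric(name, tags, fields, ts):
    """Sketch of the splunkmetric shape: one HEC event per field, with
    the value in _value, the dotted name in metric_name, and every tag
    kept as a dimension."""
    events = []
    for field, value in fields.items():
        events.append({
            "time": ts,
            "event": "metric",
            "fields": {"_value": value,
                       "metric_name": f"{name}.{field}",
                       **tags},
        })
    return events

print(to_splunkmetric("cpu", {"cpu": "cpu0", "dc": "mobile"},
                      {"usage_user": 0.6}, 1529708430))
```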
|
||||
|
||||
## Using with the HTTP output
|
||||
|
||||
To send this data to a Splunk HEC, use the HTTP output plugin. You need to add some custom headers
to manage the HEC authorization. Here's a sample config for an HTTP output:
|
||||
|
||||
```toml
|
||||
[[outputs.http]]
|
||||
## URL is the address to send metrics to
|
||||
url = "https://localhost:8088/services/collector"
|
||||
|
||||
## Timeout for HTTP message
|
||||
# timeout = "5s"
|
||||
|
||||
## HTTP method, one of: "POST" or "PUT"
|
||||
# method = "POST"
|
||||
|
||||
## HTTP Basic Auth credentials
|
||||
# username = "username"
|
||||
# password = "pa$$word"
|
||||
|
||||
## Optional TLS Config
|
||||
# tls_ca = "/etc/telegraf/ca.pem"
|
||||
# tls_cert = "/etc/telegraf/cert.pem"
|
||||
# tls_key = "/etc/telegraf/key.pem"
|
||||
## Use TLS but skip chain & host verification
|
||||
# insecure_skip_verify = false
|
||||
|
||||
## Data format to output.
|
||||
## Each data format has its own unique set of configuration options, read
|
||||
## more about them here:
|
||||
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
|
||||
data_format = "splunkmetric"
|
||||
## Provides time, index, source overrides for the HEC
|
||||
splunkmetric_hec_routing = true
|
||||
|
||||
## Additional HTTP headers
|
||||
[outputs.http.headers]
|
||||
# Should be set manually to "application/json" for json data_format
|
||||
Content-Type = "application/json"
|
||||
Authorization = "Splunk xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
|
||||
X-Splunk-Request-Channel = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
|
||||
```
|
||||
|
||||
## Overrides
|
||||
You can override the default values for the HEC token you are using by adding additional tags to the config file.
|
||||
|
||||
The following aspects of the token can be overridden with tags:
|
||||
* index
|
||||
* source
|
||||
|
||||
You can either use `[global_tags]` or a more advanced configuration as documented [here](https://github.com/influxdata/telegraf/blob/master/docs/CONFIGURATION.md).
|
||||
|
||||
For example, the following configuration overrides the index only for the cpu metric:
|
||||
```toml
|
||||
[[inputs.cpu]]
|
||||
percpu = false
|
||||
totalcpu = true
|
||||
[inputs.cpu.tags]
|
||||
index = "cpu_metrics"
|
||||
```
|
||||
|
||||
## Using with the File output
|
||||
|
||||
You can use the file output plugin when running Telegraf on a machine with a Splunk forwarder.
|
||||
|
||||
A sample event when `hec_routing` is false (or unset) looks like:
|
||||
```javascript
|
||||
{
|
||||
"_value": 0.6,
|
||||
"cpu": "cpu0",
|
||||
"dc": "mobile",
|
||||
"metric_name": "cpu.usage_user",
|
||||
"user": "ronnocol",
|
||||
"time": 1529708430
|
||||
}
|
||||
```
|
||||
Data formatted in this manner can be ingested with a simple `props.conf` file that
|
||||
looks like this:
|
||||
|
||||
```ini
|
||||
[telegraf]
|
||||
category = Metrics
|
||||
description = Telegraf Metrics
|
||||
pulldown_type = 1
|
||||
DATETIME_CONFIG =
|
||||
NO_BINARY_CHECK = true
|
||||
SHOULD_LINEMERGE = true
|
||||
disabled = false
|
||||
INDEXED_EXTRACTIONS = json
|
||||
KV_MODE = none
|
||||
TIMESTAMP_FIELDS = time
|
||||
TIME_FORMAT = %s.%3N
|
||||
```
|
||||
|
||||
An example configuration of a file-based output is:
|
||||
|
||||
```toml
|
||||
# Send telegraf metrics to file(s)
|
||||
[[outputs.file]]
|
||||
## Files to write to, "stdout" is a specially handled file.
|
||||
files = ["/tmp/metrics.out"]
|
||||
|
||||
## Data format to output.
|
||||
## Each data format has its own unique set of configuration options, read
|
||||
## more about them here:
|
||||
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
|
||||
data_format = "splunkmetric"
|
||||
hec_routing = false
|
||||
```
|
|
|
|||
---
|
||||
title: Get started
|
||||
description: Configure and start Telegraf
|
||||
menu:
|
||||
telegraf_1_25:
|
||||
name: Get started
|
||||
weight: 30
|
||||
---
|
||||
|
||||
After you've [downloaded and installed Telegraf](/telegraf/v1.25/install/), you're ready to begin collecting and sending data. To collect and send data, do the following:
|
||||
|
||||
1. [Configure Telegraf](#configure-telegraf)
|
||||
2. [Start Telegraf](#start-telegraf)
|
||||
3. Use [plugins available in Telegraf](/telegraf/v1.25/plugins/) to gather, transform, and output data.
|
||||
|
||||
## Configure Telegraf
|
||||
|
||||
Define which plugins Telegraf will use in the configuration file. Each configuration file needs at least one enabled [input plugin](/telegraf/v1.25/plugins/inputs/) (where the metrics come from) and at least one enabled [output plugin](/telegraf/v1.25/plugins/outputs/) (where the metrics go).
|
||||
|
||||
The following example generates a sample configuration file with all available plugins, then uses `filter` flags to enable specific plugins.
|
||||
|
||||
{{% note %}}
|
||||
For details on `filter` and other flags, see [Telegraf commands and flags](/telegraf/v1.25/commands/).
|
||||
{{% /note %}}
|
||||
|
||||
1. Run the following command to create a configuration file:
|
||||
```bash
|
||||
telegraf --sample-config > telegraf.conf
|
||||
```
|
||||
2. Locate the configuration file. The location varies depending on your system:
|
||||
* macOS [Homebrew](http://brew.sh/): `/usr/local/etc/telegraf.conf`
|
||||
* Linux Debian and RPM packages: `/etc/telegraf/telegraf.conf`
|
||||
* Standalone binary: see the next section for how to create a configuration file
|
||||
|
||||
> **Note:** You can also specify a remote URL endpoint to pull a configuration file from. See [Configuration file locations](/telegraf/v1.25/configuration/#configuration-file-locations).
|
||||
|
||||
3. Edit the configuration file using `vim` or a text editor. Because this example uses the [InfluxDB v2 output plugin](https://github.com/influxdata/telegraf/blob/release-1.21/plugins/outputs/influxdb_v2/README.md), you need to add the InfluxDB URL, authentication token, organization, and bucket details to this section of the configuration file.
|
||||
|
||||
> **Note:** For more configuration file options, see [Configuration options](/telegraf/v1.25/configuration/).
|
||||
|
||||
4. For this example, specify two inputs (`cpu` and `mem`) with the `--input-filter` flag.
|
||||
Specify InfluxDB as the output with the `--output-filter` flag.
|
||||
|
||||
```bash
|
||||
telegraf --sample-config --input-filter cpu:mem --output-filter influxdb_v2 > telegraf.conf
|
||||
```
|
||||
|
||||
The resulting configuration collects CPU and memory data and sends it to InfluxDB v2.
|
||||
|
||||
For an overview of how to configure a plugin, watch the following video:
|
||||
|
||||
{{< youtube a0js7wiQEJ4 >}}
|
||||
|
||||
|
||||
## Set environment variables
|
||||
|
||||
Add environment variables anywhere in the configuration file by prepending them with `$`.
|
||||
For strings, variables must be in quotes (for example, `"$STR_VAR"`).
|
||||
For numbers and Booleans, variables must be unquoted (for example, `$INT_VAR`, `$BOOL_VAR`).
|
||||
|
||||
You can also set environment variables using the Linux `export` command: `export password=mypassword`
|
||||
|
||||
> **Note:** We recommend using environment variables for sensitive information.
|
||||
|
||||
### Example: Telegraf environment variables
|
||||
|
||||
In the Telegraf environment variables file (`/etc/default/telegraf`):
|
||||
|
||||
```sh
|
||||
USER="alice"
|
||||
INFLUX_URL="http://localhost:8086"
|
||||
INFLUX_SKIP_DATABASE_CREATION="true"
|
||||
INFLUX_PASSWORD="monkey123"
|
||||
```
|
||||
|
||||
In the Telegraf configuration file (`/etc/telegraf.conf`):
|
||||
|
||||
```sh
|
||||
[global_tags]
|
||||
user = "${USER}"
|
||||
|
||||
[[inputs.mem]]
|
||||
|
||||
[[outputs.influxdb]]
|
||||
urls = ["${INFLUX_URL}"]
|
||||
skip_database_creation = ${INFLUX_SKIP_DATABASE_CREATION}
|
||||
password = "${INFLUX_PASSWORD}"
|
||||
```
|
||||
|
||||
The environment variables above add the following configuration settings to Telegraf:
|
||||
|
||||
```sh
|
||||
[global_tags]
|
||||
user = "alice"
|
||||
|
||||
[[outputs.influxdb]]
|
||||
urls = ["http://localhost:8086"]
|
||||
skip_database_creation = true
|
||||
password = "monkey123"
|
||||
|
||||
```
|
||||
|
||||
## Start Telegraf
|
||||
|
||||
Next, you need to start the Telegraf service and direct it to your configuration file:
|
||||
|
||||
### macOS [Homebrew](http://brew.sh/)
|
||||
```bash
|
||||
telegraf --config telegraf.conf
|
||||
```
|
||||
|
||||
### Linux (sysvinit and upstart installations)
|
||||
```bash
|
||||
sudo service telegraf start
|
||||
```
|
||||
|
||||
### Linux (systemd installations)
|
||||
```bash
|
||||
systemctl start telegraf
|
||||
```
|
|
|
|||
---
|
||||
title: Telegraf glossary
|
||||
description: This section includes definitions of important terms related to Telegraf.
|
||||
menu:
|
||||
telegraf_1_25_ref:
|
||||
|
||||
name: Glossary
|
||||
weight: 79
|
||||
---
|
||||
|
||||
## agent
|
||||
|
||||
An agent is the core part of Telegraf that gathers metrics from the declared input plugins and sends metrics to the declared output plugins, based on the plugins enabled by the given configuration.
|
||||
|
||||
Related entries: [input plugin](/telegraf/v1.25/glossary/#input-plugin), [output plugin](/telegraf/v1.25/glossary/#output-plugin)
|
||||
|
||||
## aggregator plugin
|
||||
|
||||
Aggregator plugins receive raw metrics from input plugins and create aggregate metrics from them.
|
||||
The aggregate metrics are then passed to the configured output plugins.
|
||||
|
||||
Related entries: [input plugin](/telegraf/v1.25/glossary/#input-plugin), [output plugin](/telegraf/v1.25/glossary/#output-plugin), [processor plugin](/telegraf/v1.25/glossary/#processor-plugin)
|
||||
|
||||
## batch size
|
||||
|
||||
The Telegraf agent sends metrics to output plugins in batches, not individually.
|
||||
The batch size controls the size of each write batch that Telegraf sends to the output plugins.
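The effect of a batch size can be sketched as follows; this is a simplified illustration, not Telegraf's actual implementation:

```python
def batches(metrics, batch_size):
    # Split a list of metrics into write batches of at most batch_size each
    for i in range(0, len(metrics), batch_size):
        yield metrics[i:i + batch_size]
```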
|
||||
|
||||
Related entries: [output plugin](/telegraf/v1.25/glossary/#output-plugin)
|
||||
|
||||
## collection interval
|
||||
|
||||
The default global interval for collecting data from each input plugin.
|
||||
The collection interval can be overridden by each individual input plugin's configuration.
|
||||
|
||||
Related entries: [input plugin](/telegraf/v1.25/glossary/#input-plugin)
|
||||
|
||||
## collection jitter
|
||||
|
||||
Collection jitter is used to prevent every input plugin from collecting metrics simultaneously, which can have a measurable effect on the system.
|
||||
Each collection interval, every input plugin will sleep for a random time between zero and the collection jitter before collecting the metrics.
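The pre-collection delay can be sketched like this (an illustrative sketch, not Telegraf code):

```python
import random

def pre_collection_sleep(collection_jitter):
    # Each interval, sleep a random duration in [0, collection_jitter]
    # before collecting, so plugins don't all collect at the same instant.
    return random.uniform(0, collection_jitter)
```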
|
||||
|
||||
Related entries: [collection interval](/telegraf/v1.25/glossary/#collection-interval), [input plugin](/telegraf/v1.25/glossary/#input-plugin)
|
||||
|
||||
## external plugin
|
||||
|
||||
External plugins are programs built outside of Telegraf that run through the `execd` plugin. They provide the flexibility to add functionality that doesn't exist in internal Telegraf plugins.
|
||||
## flush interval
|
||||
|
||||
The global interval for flushing data from each output plugin to its destination.
|
||||
This value should not be set lower than the collection interval.
|
||||
|
||||
Related entries: [collection interval](/telegraf/v1.25/glossary/#collection-interval), [flush jitter](/telegraf/v1.25/glossary/#flush-jitter), [output plugin](/telegraf/v1.25/glossary/#output-plugin)
|
||||
|
||||
## flush jitter
|
||||
|
||||
Flush jitter is used to prevent every output plugin from sending writes simultaneously, which can overwhelm some data sinks.
|
||||
Each flush interval, every output plugin will sleep for a random time between zero and the flush jitter before emitting metrics.
|
||||
This helps smooth out write spikes when running a large number of Telegraf instances.
|
||||
|
||||
Related entries: [flush interval](/telegraf/v1.25/glossary/#flush-interval), [output plugin](/telegraf/v1.25/glossary/#output-plugin)
|
||||
|
||||
## input plugin
|
||||
|
||||
Input plugins actively gather metrics and deliver them to the core agent, where aggregator, processor, and output plugins can operate on the metrics.
|
||||
In order to activate an input plugin, it needs to be enabled and configured in Telegraf's configuration file.
|
||||
|
||||
Related entries: [aggregator plugin](/telegraf/v1.25/glossary/#aggregator-plugin), [collection interval](/telegraf/v1.25/glossary/#collection-interval), [output plugin](/telegraf/v1.25/glossary/#output-plugin), [processor plugin](/telegraf/v1.25/glossary/#processor-plugin)
|
||||
|
||||
## metric buffer
|
||||
|
||||
The metric buffer caches individual metrics when writes are failing for an output plugin.
|
||||
Telegraf will attempt to flush the buffer upon a successful write to the output.
|
||||
The oldest metrics are dropped first when this buffer fills.
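A drop-oldest buffer of this kind can be sketched with a bounded deque; this is an illustration of the behavior, not Telegraf's implementation:

```python
from collections import deque

class MetricBuffer:
    def __init__(self, limit):
        # maxlen silently discards the oldest entry when the buffer is full
        self._buf = deque(maxlen=limit)

    def add(self, metric):
        self._buf.append(metric)

    def flush(self):
        # On a successful write, drain everything that is buffered
        metrics = list(self._buf)
        self._buf.clear()
        return metrics
```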
|
||||
|
||||
Related entries: [output plugin](/telegraf/v1.25/glossary/#output-plugin)
|
||||
|
||||
## output plugin
|
||||
|
||||
Output plugins deliver metrics to their configured destination. In order to activate an output plugin, it needs to be enabled and configured in Telegraf's configuration file.
|
||||
|
||||
Related entries: [aggregator plugin](/telegraf/v1.25/glossary/#aggregator-plugin), [flush interval](/telegraf/v1.25/glossary/#flush-interval), [input plugin](/telegraf/v1.25/glossary/#input-plugin), [processor plugin](/telegraf/v1.25/glossary/#processor-plugin)
|
||||
|
||||
## precision
|
||||
|
||||
The precision configuration setting determines how much timestamp precision is retained in the points received from input plugins. All incoming timestamps are truncated to the given precision.
|
||||
Telegraf then pads the truncated timestamps with zeros to create a nanosecond timestamp; output plugins will emit timestamps in nanoseconds.
|
||||
Valid precisions are `ns`, `us` or `µs`, `ms`, and `s`.
|
||||
|
||||
For example, if the precision is set to `ms`, the nanosecond epoch timestamp `1480000000123456789` would be truncated to `1480000000123` in millisecond precision and then padded with zeroes to make a new, less precise nanosecond timestamp of `1480000000123000000`.
|
||||
Output plugins do not alter the timestamp further. The precision setting is ignored for service input plugins.
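The truncate-then-pad arithmetic from the example can be checked directly (a sketch of the calculation, not Telegraf code):

```python
# Nanoseconds per unit for each valid precision
PRECISION_NS = {"ns": 1, "us": 1_000, "ms": 1_000_000, "s": 1_000_000_000}

def apply_precision(ts_ns, precision):
    # Truncate to the configured precision, then pad back to nanoseconds
    unit = PRECISION_NS[precision]
    return (ts_ns // unit) * unit

result = apply_precision(1480000000123456789, "ms")  # 1480000000123000000
```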
|
||||
|
||||
Related entries: [aggregator plugin](/telegraf/v1.25/glossary/#aggregator-plugin), [input plugin](/telegraf/v1.25/glossary/#input-plugin), [output plugin](/telegraf/v1.25/glossary/#output-plugin), [processor plugin](/telegraf/v1.25/glossary/#processor-plugin), [service input plugin](/telegraf/v1.25/glossary/#service-input-plugin)
|
||||
|
||||
## processor plugin
|
||||
|
||||
Processor plugins transform, decorate, and/or filter metrics collected by input plugins, passing the transformed metrics to the output plugins.
|
||||
|
||||
Related entries: [aggregator plugin](/telegraf/v1.25/glossary/#aggregator-plugin), [input plugin](/telegraf/v1.25/glossary/#input-plugin), [output plugin](/telegraf/v1.25/glossary/#output-plugin)
|
||||
|
||||
## service input plugin
|
||||
|
||||
Service input plugins are input plugins that run in a passive collection mode while the Telegraf agent is running.
|
||||
They listen on a socket for known protocol inputs, or apply their own logic to ingested metrics before delivering them to the Telegraf agent.
|
||||
|
||||
Related entries: [aggregator plugin](/telegraf/v1.25/glossary/#aggregator-plugin), [input plugin](/telegraf/v1.25/glossary/#input-plugin), [output plugin](/telegraf/v1.25/glossary/#output-plugin), [processor plugin](/telegraf/v1.25/glossary/#processor-plugin)
|
|
|
|||
---
|
||||
title: Install Telegraf
|
||||
description: Install Telegraf on your operating system.
|
||||
menu:
|
||||
telegraf_1_25:
|
||||
name: Install
|
||||
weight: 20
|
||||
aliases:
|
||||
- /telegraf/v1.25/introduction/installation/
|
||||
- /telegraf/v1.25/install/
|
||||
---
|
||||
|
||||
This page provides directions for installing, starting, and configuring Telegraf. To install Telegraf, do the following:
|
||||
|
||||
- [Download Telegraf](#download)
|
||||
- [Review requirements](#requirements)
|
||||
- [Complete the installation](#installation)
|
||||
- [Custom compile Telegraf](#custom-compile)
|
||||
|
||||
## Download
|
||||
|
||||
Download the latest Telegraf release at the [InfluxData download page](https://portal.influxdata.com/downloads).
|
||||
|
||||
## Requirements
|
||||
|
||||
Installation of the Telegraf package may require `root` or administrator privileges in order to complete successfully.
|
||||
|
||||
### Networking
|
||||
|
||||
Telegraf offers multiple service [input plugins](/telegraf/v1.25/plugins/inputs/) that may
|
||||
require custom ports.
|
||||
Modify port mappings through the configuration file (`telegraf.conf`).
|
||||
|
||||
For Linux distributions, this file is located at `/etc/telegraf` for default installations.
|
||||
|
||||
For Windows distributions, the configuration file is located in the directory where you unzipped the Telegraf ZIP archive.
|
||||
The default location is `C:\InfluxData\telegraf`.
|
||||
|
||||
### NTP
|
||||
|
||||
Telegraf uses a host's local time in UTC to assign timestamps to data.
|
||||
Use the Network Time Protocol (NTP) to synchronize time between hosts. If hosts' clocks
|
||||
aren't synchronized with NTP, the timestamps on the data might be inaccurate.
|
||||
|
||||
## Installation
|
||||
|
||||
{{< tabs-wrapper >}}
|
||||
{{% tabs style="even-wrap" %}}
|
||||
[Ubuntu & Debian](#)
|
||||
[RedHat & CentOS](#)
|
||||
[SLES & openSUSE](#)
|
||||
[FreeBSD/PC-BSD](#)
|
||||
[macOS](#)
|
||||
[Windows](#)
|
||||
{{% /tabs %}}
|
||||
<!---------- BEGIN Ubuntu & Debian ---------->
|
||||
{{% tab-content %}}
|
||||
Debian and Ubuntu users can install the latest stable version of Telegraf using the `apt-get` package manager.
|
||||
|
||||
### Ubuntu & Debian
|
||||
|
||||
Install Telegraf from the InfluxData repository with the following commands:
|
||||
|
||||
{{< code-tabs-wrapper >}}
|
||||
{{% code-tabs %}}
|
||||
[wget](#)
|
||||
[curl](#)
|
||||
{{% /code-tabs %}}
|
||||
|
||||
{{% code-tab-content %}}
|
||||
```bash
|
||||
# influxdata-archive_compat.key GPG Fingerprint: 9D539D90D3328DC7D6C8D3B9D8FF8E1F7DF8B07E
|
||||
wget -q https://repos.influxdata.com/influxdata-archive_compat.key
|
||||
echo '393e8779c89ac8d958f81f942f9ad7fb82a25e133faddaf92e15b16e6ac9ce4c influxdata-archive_compat.key' | sha256sum -c && cat influxdata-archive_compat.key | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg > /dev/null
|
||||
echo 'deb [signed-by=/etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg] https://repos.influxdata.com/debian stable main' | sudo tee /etc/apt/sources.list.d/influxdata.list
|
||||
sudo apt-get update && sudo apt-get install telegraf
|
||||
```
|
||||
{{% /code-tab-content %}}
|
||||
|
||||
{{% code-tab-content %}}
|
||||
```bash
|
||||
# influxdata-archive_compat.key GPG Fingerprint: 9D539D90D3328DC7D6C8D3B9D8FF8E1F7DF8B07E
|
||||
curl -s https://repos.influxdata.com/influxdata-archive_compat.key > influxdata-archive_compat.key
|
||||
echo '393e8779c89ac8d958f81f942f9ad7fb82a25e133faddaf92e15b16e6ac9ce4c influxdata-archive_compat.key' | sha256sum -c && cat influxdata-archive_compat.key | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg > /dev/null
|
||||
echo 'deb [signed-by=/etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg] https://repos.influxdata.com/debian stable main' | sudo tee /etc/apt/sources.list.d/influxdata.list
|
||||
sudo apt-get update && sudo apt-get install telegraf
|
||||
```
|
||||
{{% /code-tab-content %}}
|
||||
{{< /code-tabs-wrapper >}}
|
||||
|
||||
**Install from a `.deb` file**:
|
||||
|
||||
To manually install the Debian package from a `.deb` file:
|
||||
|
||||
1. Download the latest Telegraf `.deb` release
|
||||
from the Telegraf section of the [downloads page](https://influxdata.com/downloads/).
|
||||
2. Run the following command (making sure to supply the correct version number for the downloaded file):
|
||||
|
||||
```sh
|
||||
sudo dpkg -i telegraf_{{< latest-patch >}}-1_amd64.deb
|
||||
```
|
||||
|
||||
{{% telegraf/verify %}}
|
||||
|
||||
## Configuration
|
||||
|
||||
### Create a configuration file with default input and output plugins
|
||||
|
||||
Every plugin will be in the file, but most will be commented out.
|
||||
|
||||
```
|
||||
telegraf config > telegraf.conf
|
||||
```
|
||||
|
||||
### Create a configuration file with specific inputs and outputs
|
||||
```
|
||||
telegraf --input-filter <pluginname>[:<pluginname>] --output-filter <outputname>[:<outputname>] config > telegraf.conf
|
||||
```
|
||||
|
||||
For more advanced configuration details, see the
|
||||
[configuration documentation](/telegraf/v1.25/administration/configuration/).
|
||||
{{% /tab-content %}}
|
||||
<!---------- BEGIN RedHat & CentOS ---------->
|
||||
{{% tab-content %}}
|
||||
For instructions on how to manually install the RPM package from a file, please see the [downloads page](https://influxdata.com/downloads/).
|
||||
|
||||
**RedHat and CentOS:** Install the latest stable version of Telegraf using the `yum` package manager:
|
||||
|
||||
```bash
|
||||
cat <<EOF | sudo tee /etc/yum.repos.d/influxdb.repo
|
||||
[influxdb]
|
||||
name = InfluxData Repository - Stable
|
||||
baseurl = https://repos.influxdata.com/stable/\$basearch/main
|
||||
enabled = 1
|
||||
gpgcheck = 1
|
||||
gpgkey = https://repos.influxdata.com/influxdata-archive_compat.key
|
||||
EOF
|
||||
```
|
||||
|
||||
Install telegraf once the repository is added to the `yum` configuration:
|
||||
|
||||
```bash
|
||||
sudo yum install telegraf
|
||||
```
|
||||
|
||||
{{% telegraf/verify %}}
|
||||
|
||||
## Configuration
|
||||
|
||||
### Create a configuration file with default input and output plugins
|
||||
|
||||
Every plugin will be in the file, but most will be commented out.
|
||||
|
||||
```
|
||||
telegraf config > telegraf.conf
|
||||
```
|
||||
|
||||
### Create a configuration file with specific inputs and outputs
|
||||
```
|
||||
telegraf --input-filter <pluginname>[:<pluginname>] --output-filter <outputname>[:<outputname>] config > telegraf.conf
|
||||
```
|
||||
|
||||
For more advanced configuration details, see the
|
||||
[configuration documentation](/telegraf/v1.25/administration/configuration/).
|
||||
{{% /tab-content %}}
|
||||
<!---------- BEGIN SLES & openSUSE ---------->
|
||||
{{% tab-content %}}
|
||||
There are RPM packages provided by openSUSE Build Service for SUSE Linux users:
|
||||
|
||||
```bash
|
||||
# add go repository
|
||||
zypper ar -f obs://devel:languages:go/ go
|
||||
# install latest telegraf
|
||||
zypper in telegraf
|
||||
```
|
||||
|
||||
{{% telegraf/verify %}}
|
||||
|
||||
## Configuration
|
||||
|
||||
### Create a configuration file with default input and output plugins
|
||||
|
||||
Every plugin will be in the file, but most will be commented out.
|
||||
|
||||
```
|
||||
telegraf config > telegraf.conf
|
||||
```
|
||||
|
||||
### Create a configuration file with specific inputs and outputs
|
||||
```
|
||||
telegraf --input-filter <pluginname>[:<pluginname>] --output-filter <outputname>[:<outputname>] config > telegraf.conf
|
||||
```
|
||||
|
||||
For more advanced configuration details, see the
|
||||
[configuration documentation](/telegraf/v1.25/administration/configuration/).
|
||||
{{% /tab-content %}}
|
||||
<!---------- BEGIN FreeBSD/PC-BSD ---------->
|
||||
{{% tab-content %}}
|
||||
Telegraf is part of the FreeBSD package system.
|
||||
It can be installed by running:
|
||||
|
||||
```bash
|
||||
sudo pkg install telegraf
|
||||
```
|
||||
|
||||
The configuration file is located at `/usr/local/etc/telegraf.conf` with examples in `/usr/local/etc/telegraf.conf.sample`.
|
||||
|
||||
{{% telegraf/verify %}}
|
||||
|
||||
## Configuration
|
||||
|
||||
### Create a configuration file with default input and output plugins
|
||||
|
||||
Every plugin will be in the file, but most will be commented out.
|
||||
|
||||
```
|
||||
telegraf config > telegraf.conf
|
||||
```
|
||||
|
||||
### Create a configuration file with specific inputs and outputs
|
||||
```
|
||||
telegraf --input-filter <pluginname>[:<pluginname>] --output-filter <outputname>[:<outputname>] config > telegraf.conf
|
||||
```
|
||||
|
||||
For more advanced configuration details, see the
|
||||
[configuration documentation](/telegraf/v1.25/administration/configuration/).
|
||||
{{% /tab-content %}}
|
||||
<!---------- BEGIN macOS ---------->
|
||||
{{% tab-content %}}
|
||||
Users of macOS 10.8 and higher can install Telegraf using the [Homebrew](http://brew.sh/) package manager.
|
||||
Once `brew` is installed, you can install Telegraf by running:
|
||||
|
||||
```bash
|
||||
brew update
|
||||
brew install telegraf
|
||||
```
|
||||
|
||||
To have launchd start telegraf at next login:
|
||||
```
|
||||
ln -sfv /usr/local/opt/telegraf/*.plist ~/Library/LaunchAgents
|
||||
```
|
||||
To load telegraf now:
|
||||
```
|
||||
launchctl load ~/Library/LaunchAgents/homebrew.mxcl.telegraf.plist
|
||||
```
|
||||
|
||||
Or, if you don't want/need launchctl, you can just run:
|
||||
```
|
||||
telegraf -config /usr/local/etc/telegraf.conf
|
||||
```
|
||||
|
||||
{{% telegraf/verify %}}
|
||||
|
||||
## Configuration
|
||||
|
||||
### Create a configuration file with default input and output plugins
|
||||
|
||||
Every plugin will be in the file, but most will be commented out.
|
||||
|
||||
```
|
||||
telegraf config > telegraf.conf
|
||||
```
|
||||
|
||||
### Create a configuration file with specific inputs and outputs
|
||||
```
|
||||
telegraf --input-filter <pluginname>[:<pluginname>] --output-filter <outputname>[:<outputname>] config > telegraf.conf
|
||||
```
|
||||
|
||||
For more advanced configuration details, see the
|
||||
[configuration documentation](/telegraf/v1.25/configuration/).
|
||||
{{% /tab-content %}}
|
||||
<!---------- BEGIN Windows ---------->
|
||||
{{% tab-content %}}
|
||||
|
||||
#### Download and run Telegraf as a Windows service
|
||||
|
||||
{{% note %}}
|
||||
Installing a Windows service requires administrative permissions.
|
||||
To run PowerShell as an administrator,
|
||||
see [Launch PowerShell as administrator](https://docs.microsoft.com/en-us/powershell/scripting/windows-powershell/starting-windows-powershell?view=powershell-7#with-administrative-privileges-run-as-administrator).
|
||||
{{% /note %}}
|
||||
|
||||
In PowerShell _as an administrator_, do the following:
|
||||
|
||||
1. Use the following commands to download the Telegraf Windows binary
|
||||
and extract its contents to `C:\Program Files\InfluxData\telegraf\`:
|
||||
|
||||
```powershell
|
||||
> wget https://dl.influxdata.com/telegraf/releases/telegraf-{{% latest-patch %}}_windows_amd64.zip -UseBasicParsing -OutFile telegraf-{{% latest-patch %}}_windows_amd64.zip
|
||||
> Expand-Archive .\telegraf-{{% latest-patch %}}_windows_amd64.zip -DestinationPath 'C:\Program Files\InfluxData\telegraf\'
|
||||
```
|
||||
|
||||
2. Move the `telegraf.exe` and `telegraf.conf` files from
|
||||
`C:\Program Files\InfluxData\telegraf\telegraf-{{% latest-patch %}}`
|
||||
up a level to `C:\Program Files\InfluxData\telegraf`:
|
||||
|
||||
```powershell
|
||||
> cd "C:\Program Files\InfluxData\telegraf"
|
||||
> mv .\telegraf-{{% latest-patch %}}\telegraf.* .
|
||||
```
|
||||
|
||||
Or create a [Windows symbolic link (Symlink)](https://blogs.windows.com/windowsdeveloper/2016/12/02/symlinks-windows-10/)
|
||||
to point to this directory.
|
||||
|
||||
> The instructions below assume that either the `telegraf.exe` and `telegraf.conf` files are stored in `C:\Program Files\InfluxData\telegraf`, or you've created a Symlink to point to this directory.
|
||||
|
||||
3. Install Telegraf as a service:
|
||||
|
||||
```powershell
|
||||
> .\telegraf.exe --service install --config "C:\Program Files\InfluxData\telegraf\telegraf.conf"
|
||||
```
|
||||
|
||||
Make sure to provide the absolute path of the `telegraf.conf` configuration file,
|
||||
otherwise the Windows service may fail to start.
|
||||
|
||||
4. To test that the installation works, run:
|
||||
|
||||
```powershell
|
||||
> C:\"Program Files"\InfluxData\telegraf\telegraf.exe --config C:\"Program Files"\InfluxData\telegraf\telegraf.conf --test
|
||||
```
|
||||
|
||||
5. To start collecting data, run:
|
||||
|
||||
```powershell
|
||||
telegraf.exe --service start
|
||||
```
|
||||
|
||||
<!--
|
||||
#### (Optional) Specify multiple configuration files
|
||||
|
||||
If you have multiple Telegraf configuration files, you can specify a `--config-directory` for the service to use:
|
||||
|
||||
1. Create a directory for configuration snippets at `C:\Program Files\Telegraf\telegraf.d`.
|
||||
2. Include the `--config-directory` option when registering the service:
|
||||
```powershell
|
||||
> C:\"Program Files"\Telegraf\telegraf.exe --service install --config C:\"Program Files"\Telegraf\telegraf.conf --config-directory C:\"Program Files"\Telegraf\telegraf.d
|
||||
```
|
||||
-->
|
||||
|
||||
### Logging and troubleshooting
|
||||
|
||||
When Telegraf runs as a Windows service, Telegraf logs messages to Windows event logs.
|
||||
If the Telegraf service fails to start, view error logs by selecting **Event Viewer**→**Windows Logs**→**Application**.
|
||||
|
||||
### Windows service commands
|
||||
|
||||
The following commands are available:
|
||||
|
||||
| Command | Effect |
|
||||
|------------------------------------|-------------------------------|
|
||||
| `telegraf.exe --service install` | Install telegraf as a service |
|
||||
| `telegraf.exe --service uninstall` | Remove the telegraf service |
|
||||
| `telegraf.exe --service start` | Start the telegraf service |
|
||||
| `telegraf.exe --service stop` | Stop the telegraf service |
|
||||
|
||||
{{% /tab-content %}}
|
||||
{{< /tabs-wrapper >}}
|
||||
|
||||
## Custom-compile Telegraf
|
||||
|
||||
Use the Telegraf custom builder tool to compile Telegraf with only the plugins you need and reduce the Telegraf binary size.
|
||||
|
||||
### Requirements
|
||||
|
||||
- Ensure you've installed [Go](https://go.dev/) version 1.18.0 or later.
|
||||
- Create your Telegraf configuration file with the plugins you want to use. For details, see [Configuration options](/telegraf/v1.25/configuration/).
|
||||
|
||||
### Build and run the custom builder
|
||||
|
||||
1. Clone the Telegraf repository:
|
||||
```sh
|
||||
git clone https://github.com/influxdata/telegraf.git
|
||||
```
|
||||
2. Change directories into the top-level of the Telegraf repository:
|
||||
```
|
||||
cd telegraf
|
||||
```
|
||||
3. Build the Telegraf custom builder tool by entering the following command:
|
||||
```sh
|
||||
make build_tools
|
||||
```
|
||||
4. Run the `custom_builder` utility with at least one `--config` or `--config-dir` flag to specify Telegraf configuration files to build from. `--config` accepts local file paths and URLs. `--config-dir` accepts local directory paths. You can include multiple `--config` and `--config-dir` flags. The custom builder builds a `telegraf` binary with only the plugins included in the specified configuration files or directories:
|
||||
- **Single Telegraf configuration**:
|
||||
```sh
|
||||
./tools/custom_builder/custom_builder --config /etc/telegraf.conf
|
||||
```
|
||||
- **Single Telegraf configuration and Telegraf configuration directory**:
|
||||
```sh
|
||||
./tools/custom_builder/custom_builder \
|
||||
--config /etc/telegraf.conf \
|
||||
--config-dir /etc/telegraf/telegraf.d
|
||||
```
|
||||
- **Remote Telegraf configuration**:
|
||||
```sh
|
||||
./tools/custom_builder/custom_builder --config http://url-to-remote-telegraf/telegraf.conf
|
||||
```
|
||||
|
||||
5. View your customized Telegraf binary within the top level of your Telegraf repository.
|
||||
|
||||
### Update your custom binary
|
||||
|
||||
To add or remove plugins from your customized Telegraf build, edit your configuration file and rerun the command from step 4 above.
@ -0,0 +1,29 @@
---
title: Telegraf metrics
description: Telegraf metrics are internal representations used to model data during processing and are based on InfluxDB's data model. Each metric includes the measurement name, tags, fields, and timestamp.
menu:
  telegraf_1_25:
    name: Metrics
    weight: 10
    parent: Concepts
draft: true
---

Telegraf metrics are the internal representation used to model data during
processing. These metrics are closely based on InfluxDB's data model and contain
four main components:

- **Measurement name**: Description and namespace for the metric.
- **Tags**: Key/value string pairs usually used to identify the metric.
- **Fields**: Key/value pairs that are typed and usually contain the metric data.
- **Timestamp**: Date and time associated with the fields.

This metric type exists only in memory and must be converted to a concrete
representation in order to be transmitted or viewed. Telegraf provides
[output data formats][output data formats] (also known as *serializers*) for these
conversions. Telegraf's default serializer converts to
[InfluxDB Line Protocol][line protocol], which provides high performance and a
one-to-one direct mapping from Telegraf metrics.
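For example, a single Telegraf metric serialized to line protocol carries all four components on one line (the values below are illustrative):

```
# measurement,tags fields timestamp
cpu,host=server01,region=us-west usage_idle=87.5,usage_user=10.2 1672531200000000000
```

Here `cpu` is the measurement name, `host` and `region` are tags, `usage_idle` and `usage_user` are fields, and the trailing value is the nanosecond-precision timestamp.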

[output data formats]: /telegraf/v1.25/data_formats/output/
[line protocol]: /telegraf/v1.25/data_formats/output/influx/
@ -0,0 +1,91 @@
---
title: Plugin directory
description: >
  Telegraf is a plugin-driven agent that collects, processes, aggregates, and writes metrics.
  It supports four categories of plugins including input, output, aggregator, and processor.
  View and search all available Telegraf plugins.
menu:
  telegraf_1_25_ref:
    weight: 6
aliases:
  - /telegraf/v1.25/plugins/processors/
  - /telegraf/v1.25/plugins/plugins-list/
  - /telegraf/v1.25/plugins/aggregators/
  - /telegraf/v1.25/plugins/outputs/
  - /telegraf/v1.25/plugins/inputs/
  - /telegraf/v1.24/plugins/processors/
  - /telegraf/v1.24/plugins/plugins-list/
  - /telegraf/v1.24/plugins/aggregators/
  - /telegraf/v1.24/plugins/outputs/
  - /telegraf/v1.24/plugins/inputs/
  - /telegraf/v1.23/plugins/plugins-list/
  - /telegraf/v1.23/plugins/aggregators/
  - /telegraf/v1.23/plugins/inputs/
  - /telegraf/v1.23/plugins/outputs/
  - /telegraf/v1.23/plugins/processors/
  - /telegraf/v1.22/plugins/plugins-list/
  - /telegraf/v1.22/plugins/aggregators/
  - /telegraf/v1.22/plugins/inputs/
  - /telegraf/v1.22/plugins/outputs/
  - /telegraf/v1.22/plugins/processors/
  - /telegraf/v1.21/plugins/plugins-list/
  - /telegraf/v1.21/plugins/aggregators/
  - /telegraf/v1.21/plugins/inputs/
  - /telegraf/v1.21/plugins/outputs/
  - /telegraf/v1.21/plugins/processors/
  - /telegraf/v1.20/plugins/plugins-list/
  - /telegraf/v1.20/plugins/aggregators/
  - /telegraf/v1.20/plugins/inputs/
  - /telegraf/v1.20/plugins/outputs/
  - /telegraf/v1.20/plugins/processors/
  - /telegraf/v1.19/plugins/plugins-list/
  - /telegraf/v1.19/plugins/aggregators/
  - /telegraf/v1.19/plugins/inputs/
  - /telegraf/v1.19/plugins/outputs/
  - /telegraf/v1.19/plugins/processors/
  - /telegraf/v1.18/plugins/plugins-list/
  - /telegraf/v1.18/plugins/aggregators/
  - /telegraf/v1.18/plugins/inputs/
  - /telegraf/v1.18/plugins/outputs/
  - /telegraf/v1.18/plugins/processors/
  - /telegraf/v1.17/plugins/plugins-list/
  - /telegraf/v1.17/plugins/aggregators/
  - /telegraf/v1.17/plugins/inputs/
  - /telegraf/v1.17/plugins/outputs/
  - /telegraf/v1.17/plugins/processors/
---

Telegraf is a plugin-driven agent that collects, processes, aggregates, and writes metrics.
It supports four categories of plugins (input, output, aggregator, and processor) as well as external plugins.

{{< list-filters >}}

**Jump to:**

- [Input plugins](#input-plugins)
- [Output plugins](#output-plugins)
- [Aggregator plugins](#aggregator-plugins)
- [Processor plugins](#processor-plugins)

## Input plugins
Telegraf input plugins are used with the InfluxData time series platform to collect
metrics from the system, services, or third-party APIs.

{{< telegraf/plugins type="input" >}}

## Output plugins
Telegraf output plugins write metrics to various destinations.

{{< telegraf/plugins type="output" >}}

## Aggregator plugins
Telegraf aggregator plugins create aggregate metrics (for example, mean, min, max, and quantiles).

{{< telegraf/plugins type="aggregator" >}}

## Processor plugins
Telegraf processor plugins transform, decorate, and filter metrics.

{{< telegraf/plugins type="processor" >}}
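The four plugin categories map directly to sections of the Telegraf configuration file. The following is a minimal sketch; the specific plugins chosen here (`cpu`, `rename`, `minmax`, `file`) are illustrative, not required:

```toml
# Input: collect CPU metrics from the local system
[[inputs.cpu]]
  percpu = true

# Processor: transform and decorate metrics (rename a tag)
[[processors.rename]]
  [[processors.rename.replace]]
    tag = "host"
    dest = "agent_host"

# Aggregator: emit min/max aggregates over each 30s period
[[aggregators.minmax]]
  period = "30s"

# Output: write metrics to stdout using the file output
[[outputs.file]]
  files = ["stdout"]
```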
@ -49,10 +49,11 @@ telegraf:
    name: Telegraf
    namespace: telegraf
    list_order: 5
    versions: [v1.9, v1.10, v1.11, v1.12, v1.13, v1.14, v1.15, v1.16, v1.17, v1.18, v1.19, v1.20, v1.21, v1.22, v1.23, v1.24, v1.25]
    latest: v1.25
    latest_patches:
      "1.25": 1
      "1.24": 4
      "1.23": 4
      "1.22": 2
      "1.21": 4
|
|
|
@ -182,6 +182,13 @@ input:
|
|||
introduced: 1.24.0
|
||||
tags: [linux, macos, windows, aws]
|
||||
|
||||
- name: Azure Monitor
|
||||
id: azure_monitor
|
||||
description: |
|
||||
The Azure Monitor plugin gathers metrics from Azure Monitor API.
|
||||
introduced: 1.25.0
|
||||
tags: [linux, macos, windows, systems, cloud]
|
||||
|
||||
- name: Azure Storage Queue
|
||||
id: azure_storage_queue
|
||||
description: |
|
||||
|
@ -614,6 +621,14 @@ input:
|
|||
introduced: 1.10.0
|
||||
tags: [linux, macos, windows, cloud, messaging]
|
||||
|
||||
- name: Google Cloud Storage
|
||||
id: google_cloud_storage
|
||||
description: |
|
||||
The Google Cloud Storage input plugin collects metrics by iterating files
|
||||
located on a cloud storage bucket.
|
||||
introduced: 1.25.0
|
||||
tags: [linux, macos, windows, storage, cloud]
|
||||
|
||||
- name: Graylog
|
||||
id: graylog
|
||||
description: |
|
||||
|
@ -820,6 +835,13 @@ input:
|
|||
introduced: 1.16.0
|
||||
tags: [linux, macos, windows, data-stores]
|
||||
|
||||
- name: Intel DLB
|
||||
id: intel_dlb
|
||||
description: |
|
||||
The Intel DLB input plugin reads metrics from DPDK using the telemetry v2 interface.
|
||||
introduced: 1.25.0
|
||||
tags: [linux, systems]
|
||||
|
||||
- name: Intel PMU
|
||||
id: intel_pmu
|
||||
description: |
|
||||
|
@ -1038,6 +1060,14 @@ input:
|
|||
introduced: 0.1.5
|
||||
tags: [linux, macos, windows, systems, data-stores]
|
||||
|
||||
- name: Libvirt
|
||||
id: libvirt
|
||||
description: |
|
||||
The Libvirt plugin collects statistics from virtualized guests using
|
||||
virtualization libvirt API.
|
||||
introduced: 1.25.0
|
||||
tags: [linux, systems]
|
||||
|
||||
- name: Linux CPU
|
||||
id: linux_cpu
|
||||
description: |
|
||||
|
@ -1256,6 +1286,14 @@ input:
|
|||
introduced: 0.1.1
|
||||
tags: [linux, macos, networking]
|
||||
|
||||
- name: Netflow
|
||||
id: netflow
|
||||
description: |
|
||||
The Netflow input plugin gathers metrics from Netflow v5, Netflow v9 and
|
||||
IPFIX collectors.
|
||||
introduced: 1.25.0
|
||||
tags: [linux, networking]
|
||||
|
||||
- name: Netstat
|
||||
id: netstat
|
||||
description: |
|
||||
|
@ -1412,6 +1450,13 @@ input:
|
|||
introduced: 1.16.0
|
||||
tags: [linux, macos, windows, iot]
|
||||
|
||||
- name: OPC UA Listener
|
||||
id: opcua_listener
|
||||
description: |
|
||||
The OPC UA plugin gathers metrics from subscriptions to OPC UA devices.
|
||||
introduced: 1.25.0
|
||||
tags: [linux, macos, windows, iot]
|
||||
|
||||
- name: OpenLDAP
|
||||
id: openldap
|
||||
description: |
|
||||
|