ported telegraf 1.10

pull/1345/head^2
Scott Anderson 2020-07-30 16:35:06 -06:00
parent ccbc6a508d
commit cf6aa1d912
45 changed files with 6109 additions and 0 deletions

View File

@ -0,0 +1,24 @@
---
title: Telegraf 1.10 documentation
description: Documentation for Telegraf, the plugin-driven server agent of the InfluxData time series platform, used to collect and report metrics. Telegraf supports four categories of plugins -- input, output, aggregator, and processor.
menu:
  telegraf:
    name: v1.10
    identifier: telegraf_1_10
    weight: 11
---
Telegraf is a plugin-driven server agent for collecting & reporting metrics,
and is the first piece of the [TICK stack](https://influxdata.com/time-series-platform/).
Telegraf has plugins to source a variety of metrics directly from the system it's running on, pull metrics from third-party APIs, or even listen for metrics via statsd and Kafka consumer services.
It also has output plugins to send metrics to a variety of other datastores, services, and message queues, including InfluxDB, Graphite, OpenTSDB, Datadog, Librato, Kafka, MQTT, NSQ, and many others.
## Key features
Here are some of the features that Telegraf currently supports that make it a great choice for metrics collection.
* Written entirely in Go.
It compiles into a single binary with no external dependencies.
* Minimal memory footprint.
* Plugin system allows new inputs and outputs to be easily added.
* A wide range of plugins already exists for well-known services and APIs.

View File

@ -0,0 +1,26 @@
---
title: About the Telegraf project
menu:
  telegraf_1_10:
    name: About the project
    weight: 10
---
## [Telegraf release notes](/telegraf/v1.10/about_the_project/release-notes-changelog/)
## [Contributing to Telegraf](https://github.com/influxdata/telegraf/blob/master/CONTRIBUTING.md)
## [Contributor License Agreement (CLA)](https://influxdata.com/community/cla/)
## [License](https://github.com/influxdata/telegraf/blob/master/LICENSE)
## <a name="third_party">Third party software</a>
InfluxData products contain third party software, which means the copyrighted, patented, or otherwise legally protected
software of third parties that is incorporated in InfluxData products.
Third party suppliers make no representation or warranty with respect to such third party software or any portion thereof.
Third party suppliers assume no liability for any claim that might arise with respect to such third party software, nor for a
customer's use of or inability to use the third party software.
The [list of third party software components, including references to associated licenses and other materials](https://github.com/influxdata/telegraf/blob/release-1.10/docs/LICENSE_OF_DEPENDENCIES.md), is maintained on a version-by-version basis.

View File

@ -0,0 +1,10 @@
---
title: InfluxData Contributor License Agreement (CLA)
menu:
  telegraf_1_10:
    name: Contributor License Agreement (CLA)
    parent: About the project
    weight: 30
    url: https://influxdata.com/community/cla/
---

View File

@ -0,0 +1,10 @@
---
title: Contributing to Telegraf
menu:
  telegraf_1_10:
    name: Contributing
    parent: About the project
    weight: 20
    url: https://github.com/influxdata/telegraf/blob/master/CONTRIBUTING.md
---

View File

@ -0,0 +1,10 @@
---
title: License
menu:
  telegraf_1_10:
    name: License
    parent: About the project
    weight: 40
    url: https://github.com/influxdata/telegraf/blob/master/LICENSE
---

File diff suppressed because it is too large

View File

@ -0,0 +1,21 @@
---
title: Administering Telegraf
menu:
  telegraf_1_10:
    name: Administration
    weight: 60
---
## [Configuring Telegraf](/telegraf/v1.10/administration/configuration/)
[Configuring Telegraf](/telegraf/v1.10/administration/configuration/) discusses the Telegraf configuration file, enabling plugins, and setting environment variables.
## [Running Telegraf as a Windows service](/telegraf/v1.10/administration/windows_service/)
[Running Telegraf as a Windows service](/telegraf/v1.10/administration/windows_service/) describes how to use Telegraf as a Windows service.
## [Troubleshooting Telegraf](/telegraf/v1.10/administration/troubleshooting/)
[Troubleshooting Telegraf](/telegraf/v1.10/administration/troubleshooting/) shows you how to capture Telegraf output, submit sample metrics, and see how Telegraf formats and emits points to its output plugins.

View File

@ -0,0 +1,383 @@
---
title: Configuring Telegraf
menu:
  telegraf_1_10:
    name: Configuring
    weight: 20
    parent: Administration
---
The Telegraf configuration file (`telegraf.conf`) lists all of the available plugins. The current version is available here:
[telegraf.conf](https://github.com/influxdata/telegraf/blob/master/etc/telegraf.conf)
## Generating a configuration file
A default Telegraf configuration file can be auto-generated by Telegraf:
```
telegraf config > telegraf.conf
```
To generate a configuration file with specific inputs and outputs, you can use the
`--input-filter` and `--output-filter` flags:
```
telegraf --input-filter cpu:mem:net:swap --output-filter influxdb:kafka config
```
## Environment variables
Environment variables can be used anywhere in the configuration file by prefixing them with `$`. For strings, the variables must be within quotes (for example, `"$STR_VAR"`); for numbers and Booleans, they should be unquoted (for example, `$INT_VAR`, `$BOOL_VAR`).
Environment variables can be set using the Linux `export` command
(for example, `export password=mypassword`). Using environment variables for sensitive
information is considered a best practice.
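For example, as a minimal sketch (the variable names here are hypothetical), after running `export INFLUXDB_PASSWORD=mypassword` and `export CPU_PERCPU=true`, the values can be referenced in the configuration file:
```toml
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  password = "$INFLUXDB_PASSWORD"  # string value, so the variable is quoted

[[inputs.cpu]]
  percpu = $CPU_PERCPU             # Boolean value, so the variable is unquoted
```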
## Configuration file locations
The location of the configuration file can be set via the `--config` command
line flag.
When the `--config-directory` command line flag is used, files ending with
`.conf` in the specified directory will also be included in the Telegraf
configuration.
On most systems, the default locations are `/etc/telegraf/telegraf.conf` for
the main configuration file and `/etc/telegraf/telegraf.d` for the directory of
configuration files.
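For example, to start Telegraf with an explicit main configuration file and an additional configuration directory:
```
telegraf --config /etc/telegraf/telegraf.conf --config-directory /etc/telegraf/telegraf.d
```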
## Global tags
Global tags can be specified in the `[global_tags]` section of the config file
in `key="value"` format. All metrics being gathered on this host will be tagged
with the tags specified here.
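For example, to tag all metrics gathered on this host with a datacenter and a user tag (the values here are placeholders):
```toml
[global_tags]
  dc = "us-east-1"  # all metrics are tagged with dc=us-east-1
  user = "$USER"    # environment variables work here too
```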
## Agent configuration
Telegraf has a few options you can configure under the `[agent]` section of the
configuration file; a sample `[agent]` section follows the list below.
* **interval**: Default data collection interval for all inputs
* **round_interval**: Rounds collection interval to `interval`.
For example, if `interval` is set to 10s then always collect on :00, :10, :20, etc.
* **metric_batch_size**: Telegraf will send metrics to output in batch of at
most `metric_batch_size` metrics.
* **metric_buffer_limit**: Telegraf will cache `metric_buffer_limit` metrics
for each output, and will flush this buffer on a successful write.
This should be a multiple of `metric_batch_size` and must not be less
than 2 times `metric_batch_size`.
* **collection_jitter**: Collection jitter is used to jitter
the collection by a random amount.
Each plugin will sleep for a random time within jitter before collecting.
This can be used to avoid many plugins querying things like sysfs at the
same time, which can have a measurable effect on the system.
* **flush_interval**: Default data flushing interval for all outputs.
You should not set this below `interval`.
The maximum effective `flush_interval` is `flush_interval` + `flush_jitter`.
* **flush_jitter**: Jitter the flush interval by a random amount.
This is primarily to avoid
large write spikes for users running a large number of Telegraf instances.
For example, a `flush_jitter` of 5s and `flush_interval` of 10s means flushes will happen every 10-15s.
* **precision**: By default, precision is set to match the collection interval,
with a maximum of `1s`. Precision is NOT
used for service inputs, such as `logparser` and `statsd`. Valid values are
`ns`, `us` (or `µs`), `ms`, and `s`.
* **logfile**: Specify the log file name. The empty string means to log to `stderr`.
* **debug**: Run Telegraf in debug mode.
* **quiet**: Run Telegraf in quiet mode (error messages only).
* **hostname**: Override default hostname, if empty use `os.Hostname()`.
* **omit_hostname**: If true, do not set the `host` tag in the Telegraf agent.
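A sample `[agent]` section combining these options might look like the following sketch; the values shown are illustrative, not mandatory:
```toml
[agent]
  interval = "10s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "5s"
  precision = ""
  logfile = ""          # empty string logs to stderr
  debug = false
  quiet = false
  hostname = ""         # empty string uses os.Hostname()
  omit_hostname = false
```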
## Input configuration
The following config parameters are available for all inputs:
* **interval**: How often to gather this metric. Normal plugins use a single
global interval, but if one particular input should be run less or more often,
you can configure that here.
* **name_override**: Override the base name of the measurement.
(Default is the name of the input).
* **name_prefix**: Specifies a prefix to attach to the measurement name.
* **name_suffix**: Specifies a suffix to attach to the measurement name.
* **tags**: A map of tags to apply to a specific input's measurements.
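For example, the following sketch (the tag value is a placeholder) applies several of these options to a single input:
```toml
[[inputs.mem]]
  interval = "30s"          # override the global collection interval
  name_prefix = "system_"   # emit measurements as "system_mem"
  [inputs.mem.tags]
    datacenter = "denver-1" # tag applied only to this input's measurements
```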
## Output configuration
There are no generic configuration options available for all outputs.
## Aggregator configuration
The following config parameters are available for all aggregators:
* **period**: The period on which to flush & clear each aggregator. All metrics
that are sent with timestamps outside of this period will be ignored by the
aggregator.
* **delay**: The delay before each aggregator is flushed. This controls
how long aggregators wait to receive metrics from input plugins, in the case
that aggregators are flushing and inputs are gathering on the
same interval.
* **drop_original**: If true, the original metric will be dropped by the
aggregator and will not get sent to the output plugins.
* **name_override**: Override the base name of the measurement.
(Default is the name of the input).
* **name_prefix**: Specifies a prefix to attach to the measurement name.
* **name_suffix**: Specifies a suffix to attach to the measurement name.
* **tags**: A map of tags to apply to a specific input's measurements.
## Processor configuration
The following config parameters are available for all processors:
* **order**: This is the order in which processors are executed. If this
is not specified, then processor execution order will be random.
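As a minimal sketch, the `order` option can pin one processor to run before another; the plugin choices here are just examples:
```toml
# Run the strings processor before the printer processor.
[[processors.strings]]
  order = 1
  # Lowercase the values of the "method" tag.
  [[processors.strings.lowercase]]
    tag = "method"

[[processors.printer]]
  order = 2
```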
#### Measurement filtering
Filters can be configured per input, output, processor, or aggregator;
see below for examples.
* **namepass**:
An array of glob pattern strings. Only points whose measurement name matches
a pattern in this list are emitted.
* **namedrop**:
The inverse of `namepass`. If a match is found the point is discarded. This
is tested on points after they have passed the `namepass` test.
* **fieldpass**:
An array of glob pattern strings. Only fields whose field key matches a
pattern in this list are emitted. Not available for outputs.
* **fielddrop**:
The inverse of `fieldpass`. Fields with a field key matching one of the
patterns will be discarded from the point. Not available for outputs.
* **tagpass**:
A table mapping tag keys to arrays of glob pattern strings. Only points
that contain a tag key in the table and a tag value matching one of its
patterns are emitted.
* **tagdrop**:
The inverse of `tagpass`. If a match is found the point is discarded. This
is tested on points after they have passed the `tagpass` test.
* **taginclude**:
An array of glob pattern strings. Only tags with a tag key matching one of
the patterns are emitted. In contrast to `tagpass`, which will pass an entire
point based on its tag, `taginclude` removes all non-matching tags from the
point. This filter can be used on both inputs & outputs, but it is
_recommended_ to be used on inputs, as it is more efficient to filter out tags
at the ingestion point.
* **tagexclude**:
The inverse of `taginclude`. Tags with a tag key matching one of the patterns
will be discarded from the point.
**NOTE** Due to the way TOML is parsed, `tagpass` and `tagdrop` parameters
must be defined at the _end_ of the plugin definition, otherwise subsequent
plugin config options will be interpreted as part of the tagpass/tagdrop
tables.
#### Input configuration examples
This is a full working config that will output CPU data to an InfluxDB instance
at `192.168.59.103:8086`, tagging measurements with `dc="denver-1"`. It will output
measurements at a 10s interval and will collect per-cpu data, dropping any
fields which begin with `time_`.
```toml
[global_tags]
  dc = "denver-1"

[agent]
  interval = "10s"

# OUTPUTS
[[outputs.influxdb]]
  urls = ["http://192.168.59.103:8086"] # required.
  database = "telegraf" # required.
  precision = "s"

# INPUTS
[[inputs.cpu]]
  percpu = true
  totalcpu = false
  # filter all fields beginning with 'time_'
  fielddrop = ["time_*"]
```
#### Input config: `tagpass` and `tagdrop`
**NOTE** `tagpass` and `tagdrop` parameters must be defined at the _end_ of
the plugin definition, otherwise subsequent plugin config options will be
interpreted as part of the tagpass/tagdrop map.
```toml
[[inputs.cpu]]
  percpu = true
  totalcpu = false
  fielddrop = ["cpu_time"]
  # Don't collect CPU data for cpu6 & cpu7
  [inputs.cpu.tagdrop]
    cpu = [ "cpu6", "cpu7" ]

[[inputs.disk]]
  [inputs.disk.tagpass]
    # tagpass conditions are OR, not AND.
    # If the (filesystem is ext4 or xfs) OR (the path is /opt or /home)
    # then the metric passes
    fstype = [ "ext4", "xfs" ]
    # Globs can also be used on the tag values
    path = [ "/opt", "/home*" ]
```
#### Input config: `fieldpass` and `fielddrop`
```toml
# Drop all metrics for guest & steal CPU usage
[[inputs.cpu]]
  percpu = false
  totalcpu = true
  fielddrop = ["usage_guest", "usage_steal"]

# Only store inode related metrics for disks
[[inputs.disk]]
  fieldpass = ["inodes*"]
```
#### Input config: `namepass` and `namedrop`
```toml
# Drop all metrics about containers for kubelet
[[inputs.prometheus]]
  urls = ["http://kube-node-1:4194/metrics"]
  namedrop = ["container_*"]

# Only store rest client related metrics for kubelet
[[inputs.prometheus]]
  urls = ["http://kube-node-1:4194/metrics"]
  namepass = ["rest_client_*"]
```
#### Input config: `taginclude` and `tagexclude`
```toml
# Only include the "cpu" tag in the measurements for the cpu plugin.
[[inputs.cpu]]
  percpu = true
  totalcpu = true
  taginclude = ["cpu"]

# Exclude the `fstype` tag from the measurements for the disk plugin.
[[inputs.disk]]
  tagexclude = ["fstype"]
```
#### Input config: `prefix`, `suffix`, and `override`
This plugin will emit measurements with the name `cpu_total`.
```toml
[[inputs.cpu]]
  name_suffix = "_total"
  percpu = false
  totalcpu = true
```
This will emit measurements with the name `foobar`.
```toml
[[inputs.cpu]]
  name_override = "foobar"
  percpu = false
  totalcpu = true
```
#### Input config: tags
This plugin will emit measurements with two additional tags: `tag1=foo` and
`tag2=bar`.
NOTE: Order matters; the `[inputs.cpu.tags]` table must be at the _end_ of the
plugin definition.
```toml
[[inputs.cpu]]
  percpu = false
  totalcpu = true
  [inputs.cpu.tags]
    tag1 = "foo"
    tag2 = "bar"
```
#### Multiple inputs of the same type
Additional inputs (or outputs) of the same type can be specified by defining these instances in the configuration file. To avoid measurement collisions, use the `name_override`, `name_prefix`, or `name_suffix` config options:
```toml
[[inputs.cpu]]
  percpu = false
  totalcpu = true

[[inputs.cpu]]
  percpu = true
  totalcpu = false
  name_override = "percpu_usage"
  fielddrop = ["cpu_time*"]
```
#### Output configuration examples
```toml
[[outputs.influxdb]]
  urls = [ "http://localhost:8086" ]
  database = "telegraf"
  precision = "s"
  # Drop all measurements that start with "aerospike"
  namedrop = ["aerospike*"]

[[outputs.influxdb]]
  urls = [ "http://localhost:8086" ]
  database = "telegraf-aerospike-data"
  precision = "s"
  # Only accept aerospike data:
  namepass = ["aerospike*"]

[[outputs.influxdb]]
  urls = [ "http://localhost:8086" ]
  database = "telegraf-cpu0-data"
  precision = "s"
  # Only store measurements where the tag "cpu" matches the value "cpu0"
  [outputs.influxdb.tagpass]
    cpu = ["cpu0"]
```
#### Aggregator configuration examples
This will collect and emit the min/max of the system load1 metric every
30s, dropping the originals.
```toml
[[inputs.system]]
  fieldpass = ["load1"] # collects system load1 metric.

[[aggregators.minmax]]
  period = "30s"        # send & clear the aggregate every 30s.
  drop_original = true  # drop the original metrics.

[[outputs.file]]
  files = ["stdout"]
```
This will collect and emit the min/max of the swap metrics every
30s, dropping the originals. The aggregator will not be applied
to the system load metrics due to the `namepass` parameter.
```toml
[[inputs.swap]]

[[inputs.system]]
  fieldpass = ["load1"] # collects system load1 metric.

[[aggregators.minmax]]
  period = "30s"        # send & clear the aggregate every 30s.
  drop_original = true  # drop the original metrics.
  namepass = ["swap"]   # only "pass" swap metrics through the aggregator.

[[outputs.file]]
  files = ["stdout"]
```

View File

@ -0,0 +1,18 @@
---
title: Recommended Telegraf plugins for Enterprise users
menu:
  telegraf_1_10:
    name: Recommended plugins for Enterprise users
    weight: 20
    parent: Administration
draft: true
---
The Telegraf configuration file (`telegraf.conf`) lists all of the available plugins. The current version is available here:
[telegraf.conf](https://github.com/influxdata/telegraf/blob/master/etc/telegraf.conf)
## Core Telegraf plugins for Enterprise users
## Optional Telegraf plugins for Enterprise users

View File

@ -0,0 +1,89 @@
---
title: Troubleshooting Telegraf
menu:
  telegraf_1_10:
    name: Troubleshooting
    weight: 30
    parent: Administration
---
This guide will show you how to capture Telegraf output, submit sample metrics, and see how Telegraf formats and emits points to its output plugins.
## Capture output
A quick way to view Telegraf output is by enabling a new UDP output plugin to run in parallel with the existing output plugins. Since each output plugin creates its own stream, the existing outputs will not be affected. Traffic is replicated to all active outputs.
> **NOTE:** This approach requires Telegraf to be restarted, which will cause a brief interruption to your metrics collection.
The minimal Telegraf configuration required to enable a UDP output is:
```
[[outputs.influxdb]]
  urls = ["udp://localhost:8089"]
```
This setup utilizes the UDP format of the [InfluxDB output plugin](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/influxdb) and emits points formatted in InfluxDB's [line protocol](/influxdb/latest/concepts/glossary/#line-protocol).
You will need to append this section to the Telegraf configuration file and restart Telegraf for the change to take effect.
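For example, on a Linux system with the stock systemd service unit (an assumption; use your platform's service manager):
```
sudo systemctl restart telegraf
```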
Now you are ready to start listening on the destination port (`8089` in this example) using a simple tool like `netcat`:
```
nc -lup 8089
```
`nc` will print the exact Telegraf output on stdout.
You can also direct the output to a file for further inspection:
```
nc -lup 8089 > telegraf_dump.txt
```
## Submit test inputs
Once you have Telegraf's output arriving to your `nc` socket, you can enable the [inputs.socket_listener](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/socket_listener) plugin to submit some sample metrics.
Append the TCP or UDP input section to Telegraf's config file and restart Telegraf for the change to take effect.
```
[[inputs.socket_listener]]
  service_address = "tcp://:8094"
  data_format = "influx"
```
Submit sample data to the Telegraf socket listener:
```
echo 'mymeasurement,my_tag_key=mytagvalue my_field="my field value"' | nc localhost 8094
```
The output from your `netcat` listener will look like the following:
```
mymeasurement,host=kubuntu,my_tag_key=mytagvalue my_field="my field value" 1478106104713745634
```
## Testing other plugins
The same approach can be used to test other plugins, like the [inputs.statsd](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/statsd) plugin.
Here is a basic configuration example of how to set up the Telegraf statsd input plugin:
```
[[inputs.statsd]]
  service_address = ":8125"
  metric_separator = "_"
  allowed_pending_messages = 10000
```
Sending a sample metric to the Telegraf statsd port:
```
echo "a.b.c:1|g" | nc -u localhost 8125
```
The output from `nc` will look like the following:
```
a_b_c,host=myserver,metric_type=gauge value=1 1478106500000000000
```

View File

@ -0,0 +1,48 @@
---
title: Running Telegraf as a Windows service
description: How to configure Telegraf as a Windows service.
menu:
  telegraf_1_10:
    name: Running as Windows service
    weight: 20
    parent: Administration
---
Telegraf natively supports running as a Windows service. Outlined below are
the general steps to set it up.
1. Obtain the Telegraf distribution for Windows.
2. Create the directory `C:\Program Files\Telegraf` (if you install in a different location, specify the `--config` parameter with the desired location).
3. Place the `telegraf.exe` and the `telegraf.conf` files into `C:\Program Files\Telegraf`.
4. To install the service into the Windows Service Manager, run the following in PowerShell as an administrator. If necessary, you can wrap any spaces in the file paths in double quotes `"<file path>"`:
```
> C:\"Program Files"\Telegraf\telegraf.exe --service install
```
5. Edit the configuration file to meet your requirements.
6. To verify that it works, run:
```
> C:\"Program Files"\Telegraf\telegraf.exe --config C:\"Program Files"\Telegraf\telegraf.conf --test
```
7. To start collecting data, run:
```
> net start telegraf
```
## Other supported operations
Telegraf can manage its own service through the `--service` flag:
| Command | Effect |
|------------------------------------|-------------------------------|
| `telegraf.exe --service install` | Install telegraf as a service |
| `telegraf.exe --service uninstall` | Remove the telegraf service |
| `telegraf.exe --service start` | Start the telegraf service |
| `telegraf.exe --service stop` | Stop the telegraf service |

View File

@ -0,0 +1,21 @@
---
title: Key Telegraf concepts
description: This section discusses key concepts about Telegraf, including information on supported input data formats, output data formats, aggregator and processor plugins, and includes a glossary of important terms.
menu:
  telegraf_1_10:
    name: Concepts
    weight: 30
---
This section discusses key concepts about Telegraf, the plugin-driven server agent component of the InfluxData time series platform. Topics covered include metrics, aggregator and processor plugins, and a glossary of important terms.
## [Telegraf metrics](/telegraf/v1.10/concepts/metrics/)
[Telegraf metrics](/telegraf/v1.10/concepts/metrics/) are internal representations used to model data during processing.
## [Telegraf aggregator and processor plugins](/telegraf/v1.10/concepts/aggregator_processor_plugins/)
[Telegraf aggregator and processor plugins](/telegraf/v1.10/concepts/aggregator_processor_plugins/) work between the input plugins and output plugins to aggregate and process metrics in Telegraf.
## [Glossary of terms (for Telegraf)](/telegraf/v1.10/concepts/glossary/)
This section includes definitions of important terms related to Telegraf.

View File

@ -0,0 +1,62 @@
---
title: Telegraf aggregator and processor plugins
description: Use Telegraf aggregator and processor plugins to aggregate and process data between the input plugins and output plugins.
menu:
  telegraf_1_10:
    name: Aggregator and processor plugins
    weight: 20
    parent: Concepts
---
Besides the input plugins and output plugins, Telegraf includes aggregator and processor plugins, which are used to aggregate and process metrics as they pass through Telegraf.
```
┌───────────┐
│ │
│ CPU │───┐
│ │ │
└───────────┘ │
┌───────────┐ │ ┌───────────┐
│ │ │ │ │
│ Memory │───┤ ┌──▶│ InfluxDB │
│ │ │ │ │ │
└───────────┘ │ ┌─────────────┐ ┌─────────────┐ │ └───────────┘
│ │ │ │Aggregate │ │
┌───────────┐ │ │Process │ │ - mean │ │ ┌───────────┐
│ │ │ │ - transform │ │ - quantiles │ │ │ │
│ MySQL │───┼──▶ │ - decorate │────▶│ - min/max │───┼──▶│ File │
│ │ │ │ - filter │ │ - count │ │ │ │
└───────────┘ │ │ │ │ │ │ └───────────┘
│ └─────────────┘ └─────────────┘ │
┌───────────┐ │ │ ┌───────────┐
│ │ │ │ │ │
│ SNMP │───┤ └──▶│ Kafka │
│ │ │ │ │
└───────────┘ │ └───────────┘
┌───────────┐ │
│ │ │
│ Docker │───┘
│ │
└───────────┘
```
**Processor plugins** process metrics as they pass through and immediately emit
results based on the values they process. For example, this could be printing
all metrics or adding a tag to all metrics that pass through.
**Aggregator plugins**, on the other hand, are a bit more complicated. Aggregators
are typically for emitting new _aggregate_ metrics, such as a running mean,
minimum, maximum, quantiles, or standard deviation. For this reason, all _aggregator_
plugins are configured with a `period`. The `period` is the size of the window
of metrics that each _aggregate_ represents. In other words, the emitted
_aggregate_ metric will be the aggregated value of the past `period` seconds.
Since many users will only care about their aggregates and not every single metric
gathered, there is also a `drop_original` argument, which tells Telegraf to only
emit the aggregates and not the original metrics.
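For example, a minimal `minmax` aggregator configuration (mirroring the examples in the configuration documentation) emits 30-second aggregates and drops the source metrics:
```toml
[[aggregators.minmax]]
  period = "30s"        # the window of metrics each aggregate represents
  drop_original = true  # emit only the aggregates
```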
**NOTE** Since aggregator plugins only aggregate metrics within their periods,
historical data is not supported. In other words, if your metric timestamp is more
than `now() - period` in the past, it will not be aggregated. If this is a feature
that you need, please comment on this [GitHub issue](https://github.com/influxdata/telegraf/issues/1992).

View File

@ -0,0 +1,103 @@
---
title: Telegraf glossary of terms
description: This section includes definitions of important terms related to Telegraf, the plugin-driven server agent component of the InfluxData time series platform.
menu:
  telegraf_1_10:
    name: Glossary of terms
    weight: 30
    parent: Concepts
---
## agent
An agent is the core part of Telegraf that gathers metrics from the declared input plugins and sends metrics to the declared output plugins, based on the plugins enabled by the given configuration.
Related entries: [input plugin](/telegraf/v1.10/concepts/glossary/#input-plugin), [output plugin](/telegraf/v1.10/concepts/glossary/#output-plugin)
## aggregator plugin
Aggregator plugins receive raw metrics from input plugins and create aggregate metrics from them.
The aggregate metrics are then passed to the configured output plugins.
Related entries: [input plugin](/telegraf/v1.10/concepts/glossary/#input-plugin), [output plugin](/telegraf/v1.10/concepts/glossary/#output-plugin), [processor plugin](/telegraf/v1.10/concepts/glossary/#processor-plugin)
## batch size
The Telegraf agent sends metrics to output plugins in batches, not individually.
The batch size controls the size of each write batch that Telegraf sends to the output plugins.
Related entries: [output plugin](/telegraf/v1.10/concepts/glossary/#output-plugin)
## collection interval
The default global interval for collecting data from each input plugin.
The collection interval can be overridden by each individual input plugin's configuration.
Related entries: [input plugin](/telegraf/v1.10/concepts/glossary/#input-plugin)
## collection jitter
Collection jitter is used to prevent every input plugin from collecting metrics simultaneously, which can have a measurable effect on the system.
Each collection interval, every input plugin will sleep for a random time between zero and the collection jitter before collecting the metrics.
Related entries: [collection interval](/telegraf/v1.10/concepts/glossary/#collection-interval), [input plugin](/telegraf/v1.10/concepts/glossary/#input-plugin)
## flush interval
The global interval for flushing data from each output plugin to its destination.
This value should not be set lower than the collection interval.
Related entries: [collection interval](/telegraf/v1.10/concepts/glossary/#collection-interval), [flush jitter](/telegraf/v1.10/concepts/glossary/#flush-jitter), [output plugin](/telegraf/v1.10/concepts/glossary/#output-plugin)
## flush jitter
Flush jitter is used to prevent every output plugin from sending writes simultaneously, which can overwhelm some data sinks.
Each flush interval, every output plugin will sleep for a random time between zero and the flush jitter before emitting metrics.
This helps smooth out write spikes when running a large number of Telegraf instances.
Related entries: [flush interval](/telegraf/v1.10/concepts/glossary/#flush-interval), [output plugin](/telegraf/v1.10/concepts/glossary/#output-plugin)
## input plugin
Input plugins actively gather metrics and deliver them to the core agent, where aggregator, processor, and output plugins can operate on the metrics.
In order to activate an input plugin, it needs to be enabled and configured in Telegraf's configuration file.
Related entries: [aggregator plugin](/telegraf/v1.10/concepts/glossary/#aggregator-plugin), [collection interval](/telegraf/v1.10/concepts/glossary/#collection-interval), [output plugin](/telegraf/v1.10/concepts/glossary/#output-plugin), [processor plugin](/telegraf/v1.10/concepts/glossary/#processor-plugin)
## metric buffer
The metric buffer caches individual metrics when writes are failing for an output plugin.
Telegraf will attempt to flush the buffer upon a successful write to the output.
The oldest metrics are dropped first when this buffer fills.
Related entries: [output plugin](/telegraf/v1.10/concepts/glossary/#output-plugin)
## output plugin
Output plugins deliver metrics to their configured destination. In order to activate an output plugin, it needs to be enabled and configured in Telegraf's configuration file.
Related entries: [aggregator plugin](/telegraf/v1.10/concepts/glossary/#aggregator-plugin), [flush interval](/telegraf/v1.10/concepts/glossary/#flush-interval), [input plugin](/telegraf/v1.10/concepts/glossary/#input-plugin), [processor plugin](/telegraf/v1.10/concepts/glossary/#processor-plugin)
## precision
The precision configuration setting determines how much timestamp precision is retained in the points received from input plugins. All incoming timestamps are truncated to the given precision.
Telegraf then pads the truncated timestamps with zeros to create a nanosecond timestamp; output plugins will emit timestamps in nanoseconds.
Valid precisions are `ns`, `us` or `µs`, `ms`, and `s`.
For example, if the precision is set to `ms`, the nanosecond epoch timestamp `1480000000123456789` would be truncated to `1480000000123` in millisecond precision and then padded with zeroes to make a new, less precise nanosecond timestamp of `1480000000123000000`.
Output plugins do not alter the timestamp further. The precision setting is ignored for service input plugins.
Related entries: [aggregator plugin](/telegraf/v1.10/concepts/glossary/#aggregator-plugin), [input plugin](/telegraf/v1.10/concepts/glossary/#input-plugin), [output plugin](/telegraf/v1.10/concepts/glossary/#output-plugin), [processor plugin](/telegraf/v1.10/concepts/glossary/#processor-plugin), [service input plugin](/telegraf/v1.10/concepts/glossary/#service-input-plugin)
## processor plugin
Processor plugins transform, decorate, and/or filter metrics collected by input plugins, passing the transformed metrics to the output plugins.
Related entries: [aggregator plugin](/telegraf/v1.10/concepts/glossary/#aggregator-plugin), [input plugin](/telegraf/v1.10/concepts/glossary/#input-plugin), [output plugin](/telegraf/v1.10/concepts/glossary/#output-plugin)
## service input plugin
Service input plugins are input plugins that run in a passive collection mode while the Telegraf agent is running.
They listen on a socket for known protocol inputs, or apply their own logic to ingested metrics before delivering them to the Telegraf agent.
Related entries: [aggregator plugin](/telegraf/v1.10/concepts/glossary/#aggregator-plugin), [input plugin](/telegraf/v1.10/concepts/glossary/#input-plugin), [output plugin](/telegraf/v1.10/concepts/glossary/#output-plugin), [processor plugin](/telegraf/v1.10/concepts/glossary/#processor-plugin)

View File

@ -0,0 +1,28 @@
---
title: Telegraf metrics
description: Telegraf metrics are internal representations used to model data during processing and are based on InfluxDB's data model. Each metric component includes the measurement name, tags, fields, and timestamp.
menu:
  telegraf_1_10:
    name: Metrics
    weight: 10
    parent: Concepts
---
Telegraf metrics are the internal representation used to model data during
processing. These metrics are closely based on InfluxDB's data model and contain
four main components:
- **Measurement name**: Description and namespace for the metric.
- **Tags**: Key/Value string pairs, usually used to identify the
metric.
- **Fields**: Key/Value pairs that are typed and usually contain the
metric data.
- **Timestamp**: Date and time associated with the fields.
This metric type exists only in memory and must be converted to a concrete
representation in order to be transmitted or viewed. Telegraf provides [output data formats][output data formats] (also known as *serializers*) for these conversions. Telegraf's default serializer converts to [InfluxDB Line
Protocol][line protocol], which provides a high-performance, direct one-to-one
mapping from Telegraf metrics.
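For example, a hypothetical CPU metric with one tag, one field, and a nanosecond timestamp serializes to a single line of line protocol:
```
cpu,host=server01 usage_idle=90.5 1480000000000000000
```
Here `cpu` is the measurement name, `host=server01` the tag, `usage_idle=90.5` the field, and the trailing integer the timestamp.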
[output data formats]: /telegraf/v1.10/data_formats/output/
[line protocol]: /telegraf/v1.10/data_formats/output/influx/

View File

@ -0,0 +1,21 @@
---
title: Telegraf data formats
description: Telegraf supports input data formats and output data formats for converting input and output data.
menu:
  telegraf_1_10:
    name: Data formats
    weight: 50
---
This section covers the input data formats and output data formats used in the Telegraf plugin-driven server agent component of the InfluxData time series platform.
## [Telegraf input data formats](/telegraf/v1.10/data_formats/input/)
[Telegraf input data formats](/telegraf/v1.10/data_formats/input/) covers parsing input data formats into Telegraf metrics, including InfluxDB Line Protocol, JSON, Graphite, Value, Nagios, collectd, and Dropwizard.
## [Telegraf output data formats](/telegraf/v1.10/data_formats/output/)
[Telegraf output data formats](/telegraf/v1.10/data_formats/output/) covers serializing metrics into output data formats, including InfluxDB Line Protocol, JSON, and Graphite.
## [Telegraf template patterns](/telegraf/v1.10/data_formats/template-patterns/)
[Telegraf template patterns](/telegraf/v1.10/data_formats/template-patterns/) are used to define templates for use with parsing and serializing data formats in Telegraf.

View File

@ -0,0 +1,46 @@
---
title: Telegraf input data formats
description: Telegraf supports parsing input data formats into Telegraf metrics for InfluxDB Line Protocol, CollectD, CSV, Dropwizard, Graphite, Grok, JSON, Logfmt, Nagios, Value, and Wavefront.
menu:
  telegraf_1_10:
    name: Input data formats
    weight: 1
    parent: Data formats
---
Telegraf contains many general purpose plugins that support parsing input data
using a configurable parser into [metrics][]. This allows, for example, the
`kafka_consumer` input plugin to process messages in either InfluxDB Line
Protocol or in JSON format. Telegraf supports the following input data formats:
- [InfluxDB Line Protocol](/telegraf/v1.10/data_formats/input/influx/)
- [collectd](/telegraf/v1.10/data_formats/input/collectd/)
- [CSV](/telegraf/v1.10/data_formats/input/csv/)
- [Dropwizard](/telegraf/v1.10/data_formats/input/dropwizard/)
- [Graphite](/telegraf/v1.10/data_formats/input/graphite/)
- [Grok](/telegraf/v1.10/data_formats/input/grok/)
- [JSON](/telegraf/v1.10/data_formats/input/json/)
- [logfmt](/telegraf/v1.10/data_formats/input/logfmt/)
- [Nagios](/telegraf/v1.10/data_formats/input/nagios/)
- [Value](/telegraf/v1.10/data_formats/input/value/), e.g., `45` or `"booyah"`
- [Wavefront](/telegraf/v1.10/data_formats/input/wavefront/)
Any input plugin containing the `data_format` option can use it to select the
desired parser:
```toml
[[inputs.exec]]
  ## Commands array
  commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]

  ## measurement name suffix (for separating different commands)
  name_suffix = "_mycollector"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "json"
```
[metrics]: /telegraf/v1.10/concepts/metrics/

View File

@ -0,0 +1,48 @@
---
title: Collectd input data format
description: Use the collectd input data format to parse the collectd network binary protocol to create tags for host, instance, type, and type instance.
menu:
  telegraf_1_10:
    name: collectd
    weight: 10
    parent: Input data formats
---
The collectd input data format parses the collectd network binary protocol to create tags for host, instance, type, and type instance. All collectd values are added as float64 fields.
For more information, see [binary protocol](https://collectd.org/wiki/index.php/Binary_protocol) in the collectd Wiki.
You can control the cryptographic settings with parser options.
Create an authentication file and set `collectd_auth_file` to the path of the file, then set the desired security level in `collectd_security_level`.
For more information, including client setup, see
[Cryptographic setup](https://collectd.org/wiki/index.php/Networking_introduction#Cryptographic_setup) in the collectd Wiki.
You can also change the path to the typesdb or add additional typesdb files using
`collectd_typesdb`.
## Configuration
```toml
[[inputs.file]]
  files = ["example"]

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "collectd"

  ## Authentication file for cryptographic security levels
  collectd_auth_file = "/etc/collectd/auth_file"
  ## One of none (default), sign, or encrypt
  collectd_security_level = "encrypt"
  ## Path to TypesDB specifications
  collectd_typesdb = ["/usr/share/collectd/types.db"]

  ## Multi-value plugins can be handled two ways.
  ## "split" will parse and store the multi-value plugin data into separate measurements
  ## "join" will parse and store the multi-value plugin as a single multi-value measurement.
  ## "split" is the default behavior for backward compatibility with previous versions of InfluxDB.
  collectd_parse_multivalue = "split"
```

View File

@ -0,0 +1,111 @@
---
title: CSV input data format
description: Use the "csv" input data format to parse a document containing comma-separated values into Telegraf metrics.
menu:
  telegraf_1_10:
    name: CSV
    weight: 20
    parent: Input data formats
---
The CSV input data format parses documents containing comma-separated values into Telegraf metrics.
## Configuration
```toml
[[inputs.file]]
  files = ["example"]

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "csv"

  ## Indicates how many rows to treat as a header. By default, the parser assumes
  ## there is no header and will parse the first row as data. If set to anything more
  ## than 1, column names will be concatenated with the name listed in the next header row.
  ## If `csv_column_names` is specified, the column names in header will be overridden.
  csv_header_row_count = 0

  ## For assigning custom names to columns
  ## If this is specified, all columns should have a name
  ## Unnamed columns will be ignored by the parser.
  ## If `csv_header_row_count` is set to 0, this config must be used
  csv_column_names = []

  ## Indicates the number of rows to skip before looking for header information.
  csv_skip_rows = 0

  ## Indicates the number of columns to skip before looking for data to parse.
  ## These columns will be skipped in the header as well.
  csv_skip_columns = 0

  ## The separator between csv fields
  ## By default, the parser assumes a comma (",")
  csv_delimiter = ","

  ## The character reserved for marking a row as a comment row
  ## Commented rows are skipped and not parsed
  csv_comment = ""

  ## If set to true, the parser will remove leading whitespace from fields
  ## By default, this is false
  csv_trim_space = false

  ## Columns listed here will be added as tags. Any other columns
  ## will be added as fields.
  csv_tag_columns = []

  ## The column to extract the name of the metric from
  csv_measurement_column = ""

  ## The column to extract time information for the metric
  ## `csv_timestamp_format` must be specified if this is used
  csv_timestamp_column = ""

  ## The format of time data extracted from `csv_timestamp_column`
  ## this must be specified if `csv_timestamp_column` is specified
  csv_timestamp_format = ""
```
### csv_timestamp_column, csv_timestamp_format
By default, the current time is used for all created metrics. To set the
time from the parsed CSV document, use the `csv_timestamp_column` and
`csv_timestamp_format` options together to set the time to a value in the parsed
document.
The `csv_timestamp_column` option specifies the column name containing the
time value and `csv_timestamp_format` must be set to a Go "reference time"
which is defined to be the specific time: `Mon Jan 2 15:04:05 MST 2006`.
Consult the Go [time](https://golang.org/pkg/time/#Parse) package for details and additional examples
on how to set the time format.
## Metrics
One metric is created for each row with the columns added as fields. The type
of the field is automatically determined based on the contents of the value.
## Examples
Config:
```toml
[[inputs.file]]
  files = ["example"]
  data_format = "csv"
  csv_header_row_count = 1
  csv_timestamp_column = "time"
  csv_timestamp_format = "2006-01-02T15:04:05Z07:00"
```
Input:
```
measurement,cpu,time_user,time_system,time_idle,time
cpu,cpu0,42,42,42,2018-09-13T13:03:28Z
```
Output:
```
cpu cpu=cpu0,time_user=42,time_system=42,time_idle=42 1536869008000000000
```

View File

@ -0,0 +1,179 @@
---
title: Dropwizard input data format
description: Use the "dropwizard" input data format to parse Dropwizard JSON representations into Telegraf metrics.
menu:
  telegraf_1_10:
    name: Dropwizard
    weight: 30
    parent: Input data formats
---
The `dropwizard` data format can parse a [Dropwizard JSON representation][dropwizard] of a single metrics registry. By default, tags are parsed from metric names as if they were actual InfluxDB Line Protocol keys (`measurement<,tag_set>`), which can be overridden using custom [template patterns][templates]. All field value types are supported, including `string`, `number`, and `boolean`.
[templates]: /telegraf/v1.10/data_formats/template-patterns/
[dropwizard]: http://metrics.dropwizard.io/3.1.0/manual/json/
## Configuration
```toml
[[inputs.file]]
  files = ["example"]

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "dropwizard"

  ## Used by the templating engine to join matched values when cardinality is > 1
  separator = "_"

  ## Each template line requires a template pattern. It can have an optional
  ## filter before the template and separated by spaces. It can also have optional extra
  ## tags following the template. Multiple tags should be separated by commas and no spaces
  ## similar to the line protocol format. There can be only one default template.
  ## Templates support below format:
  ## 1. filter + template
  ## 2. filter + template + extra tag(s)
  ## 3. filter + template with field key
  ## 4. default template
  ## By providing an empty template array, templating is disabled and measurements are parsed as influxdb line protocol keys (measurement<,tag_set>)
  templates = []

  ## You may use an appropriate [gjson path](https://github.com/tidwall/gjson#path-syntax)
  ## to locate the metric registry within the JSON document
  # dropwizard_metric_registry_path = "metrics"

  ## You may use an appropriate [gjson path](https://github.com/tidwall/gjson#path-syntax)
  ## to locate the default time of the measurements within the JSON document
  # dropwizard_time_path = "time"
  # dropwizard_time_format = "2006-01-02T15:04:05Z07:00"

  ## You may use an appropriate [gjson path](https://github.com/tidwall/gjson#path-syntax)
  ## to locate the tags map within the JSON document
  # dropwizard_tags_path = "tags"

  ## You may even use tag paths per tag
  # [inputs.file.dropwizard_tag_paths]
  #   tag1 = "tags.tag1"
  #   tag2 = "tags.tag2"
```
## Examples
A typical JSON of a dropwizard metric registry:
```json
{
  "version": "3.0.0",
  "counters": {
    "measurement,tag1=green": {
      "count": 1
    }
  },
  "meters": {
    "measurement": {
      "count": 1,
      "m15_rate": 1.0,
      "m1_rate": 1.0,
      "m5_rate": 1.0,
      "mean_rate": 1.0,
      "units": "events/second"
    }
  },
  "gauges": {
    "measurement": {
      "value": 1
    }
  },
  "histograms": {
    "measurement": {
      "count": 1,
      "max": 1.0,
      "mean": 1.0,
      "min": 1.0,
      "p50": 1.0,
      "p75": 1.0,
      "p95": 1.0,
      "p98": 1.0,
      "p99": 1.0,
      "p999": 1.0,
      "stddev": 1.0
    }
  },
  "timers": {
    "measurement": {
      "count": 1,
      "max": 1.0,
      "mean": 1.0,
      "min": 1.0,
      "p50": 1.0,
      "p75": 1.0,
      "p95": 1.0,
      "p98": 1.0,
      "p99": 1.0,
      "p999": 1.0,
      "stddev": 1.0,
      "m15_rate": 1.0,
      "m1_rate": 1.0,
      "m5_rate": 1.0,
      "mean_rate": 1.0,
      "duration_units": "seconds",
      "rate_units": "calls/second"
    }
  }
}
```
This JSON would be translated into the following five measurements:
```
measurement,metric_type=counter,tag1=green count=1
measurement,metric_type=meter count=1,m15_rate=1.0,m1_rate=1.0,m5_rate=1.0,mean_rate=1.0
measurement,metric_type=gauge value=1
measurement,metric_type=histogram count=1,max=1.0,mean=1.0,min=1.0,p50=1.0,p75=1.0,p95=1.0,p98=1.0,p99=1.0,p999=1.0
measurement,metric_type=timer count=1,max=1.0,mean=1.0,min=1.0,p50=1.0,p75=1.0,p95=1.0,p98=1.0,p99=1.0,p999=1.0,stddev=1.0,m15_rate=1.0,m1_rate=1.0,m5_rate=1.0,mean_rate=1.0
```
You may also parse a Dropwizard registry from any JSON document that contains a Dropwizard registry in some inner field.
For example, to parse the following JSON document:
```json
{
  "time": "2017-02-22T14:33:03.662+02:00",
  "tags": {
    "tag1": "green",
    "tag2": "yellow"
  },
  "metrics": {
    "counters": {
      "measurement": {
        "count": 1
      }
    },
    "meters": {},
    "gauges": {},
    "histograms": {},
    "timers": {}
  }
}
```
and translate it into:
```
measurement,metric_type=counter,tag1=green,tag2=yellow count=1 1487766783662000000
```
you simply need to use the following additional configuration properties:
```toml
dropwizard_metric_registry_path = "metrics"
dropwizard_time_path = "time"
dropwizard_time_format = "2006-01-02T15:04:05Z07:00"
dropwizard_tags_path = "tags"
## tag paths per tag are supported too, eg.
#[inputs.yourinput.dropwizard_tag_paths]
# tag1 = "tags.tag1"
# tag2 = "tags.tag2"
```

View File

@ -0,0 +1,55 @@
---
title: Graphite input data format
description: Use the Graphite data format to translate Graphite dot buckets directly into Telegraf measurement names, with a single value field, and without any tags.
menu:
  telegraf_1_10:
    name: Graphite
    weight: 40
    parent: Input data formats
---
The Graphite data format translates Graphite *dot* buckets directly into
Telegraf measurement names, with a single value field, and without any tags.
By default, the separator is left as `.`, but this can be changed using the
`separator` argument. For more advanced options, Telegraf supports specifying
[templates](#templates) to translate graphite buckets into Telegraf metrics.
## Configuration
```toml
[[inputs.exec]]
  ## Commands array
  commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]

  ## measurement name suffix (for separating different commands)
  name_suffix = "_mycollector"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "graphite"

  ## This string will be used to join the matched values.
  separator = "_"

  ## Each template line requires a template pattern. It can have an optional
  ## filter before the template and separated by spaces. It can also have optional extra
  ## tags following the template. Multiple tags should be separated by commas and no spaces
  ## similar to the line protocol format. There can be only one default template.
  ## Templates support below format:
  ## 1. filter + template
  ## 2. filter + template + extra tag(s)
  ## 3. filter + template with field key
  ## 4. default template
  templates = [
    "*.app env.service.resource.measurement",
    "stats.* .host.measurement* region=eu-east,agent=sensu",
    "stats2.* .host.measurement.field",
    "measurement*"
  ]
```
### templates
For information on creating templates, see [Template patterns](/telegraf/v1.10/data_formats/template-patterns/).

View File

@ -0,0 +1,226 @@
---
title: Grok input data format
description: Use the grok data format to parse line-delimited data using a regular expression-like language.
menu:
  telegraf_1_10:
    name: Grok
    weight: 40
    parent: Input data formats
---
The grok data format parses line-delimited data using a regular expression-like
language.
If you need to become familiar with grok patterns, see [Grok Basics](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#_grok_basics)
in the Logstash documentation. The grok parser uses a slightly modified version of Logstash grok
patterns, using the format:
```
%{<capture_syntax>[:<semantic_name>][:<modifier>]}
```
The `capture_syntax` defines the grok pattern that is used to parse the input
line and the `semantic_name` is used to name the field or tag. The extension
`modifier` controls the data type that the parsed item is converted to or
other special handling.
By default, all named captures are converted into string fields.
Timestamp modifiers can be used to convert captures to the timestamp of the
parsed metric. If no timestamp is parsed the metric will be created using the
current time.
You must capture at least one field per line.
- Available modifiers:
  - string (default if nothing is specified)
  - int
  - float
  - duration (ie, 5.23ms gets converted to int nanoseconds)
  - tag (converts the field into a tag)
  - drop (drops the field completely)
  - measurement (use the matched text as the measurement name)
- Timestamp modifiers:
  - ts (This will auto-learn the timestamp format)
  - ts-ansic ("Mon Jan _2 15:04:05 2006")
  - ts-unix ("Mon Jan _2 15:04:05 MST 2006")
  - ts-ruby ("Mon Jan 02 15:04:05 -0700 2006")
  - ts-rfc822 ("02 Jan 06 15:04 MST")
  - ts-rfc822z ("02 Jan 06 15:04 -0700")
  - ts-rfc850 ("Monday, 02-Jan-06 15:04:05 MST")
  - ts-rfc1123 ("Mon, 02 Jan 2006 15:04:05 MST")
  - ts-rfc1123z ("Mon, 02 Jan 2006 15:04:05 -0700")
  - ts-rfc3339 ("2006-01-02T15:04:05Z07:00")
  - ts-rfc3339nano ("2006-01-02T15:04:05.999999999Z07:00")
  - ts-httpd ("02/Jan/2006:15:04:05 -0700")
  - ts-epoch (seconds since unix epoch, may contain decimal)
  - ts-epochnano (nanoseconds since unix epoch)
  - ts-syslog ("Jan 02 15:04:05", parsed time is set to the current year)
  - ts-"CUSTOM"
CUSTOM time layouts must be within quotes and be the representation of the
"reference time", which is `Mon Jan 2 15:04:05 -0700 MST 2006`.
To match a comma decimal point you can use a period in the pattern string. For example, `%{TIMESTAMP:timestamp:ts-"2006-01-02 15:04:05.000"}` can be used to match `"2018-01-02 15:04:05,000"`.
See https://golang.org/pkg/time/#Parse for more details.
Telegraf has many of its own [built-in patterns](https://github.com/influxdata/telegraf/blob/master/plugins/parsers/grok/influx_patterns.go),
as well as support for most of
[logstash's builtin patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns).
_Golang regular expressions do not support lookahead or lookbehind, so
Logstash patterns that depend on these are not supported._
If you need help building patterns to match your logs, the
[Grok Debugger application](https://grokdebug.herokuapp.com) might be helpful.
## Configuration
```toml
[[inputs.file]]
  ## Files to parse each interval.
  ## These accept standard unix glob matching rules, but with the addition of
  ## ** as a "super asterisk". ie:
  ##   /var/log/**.log     -> recursively find all .log files in /var/log
  ##   /var/log/*/*.log    -> find all .log files with a parent dir in /var/log
  ##   /var/log/apache.log -> only tail the apache log file
  files = ["/var/log/apache/access.log"]

  ## The dataformat to be read from files
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "grok"

  ## This is a list of patterns to check the given log file(s) for.
  ## Note that adding patterns here increases processing time. The most
  ## efficient configuration is to have one pattern.
  ## Other common built-in patterns are:
  ##   %{COMMON_LOG_FORMAT}   (plain apache & nginx access logs)
  ##   %{COMBINED_LOG_FORMAT} (access logs + referrer & agent)
  grok_patterns = ["%{COMBINED_LOG_FORMAT}"]

  ## Full path(s) to custom pattern files.
  grok_custom_pattern_files = []

  ## Custom patterns can also be defined here. Put one pattern per line.
  grok_custom_patterns = '''
  '''

  ## Timezone allows you to provide an override for timestamps that
  ## don't already include an offset
  ## e.g. 04/06/2016 12:41:45 data one two 5.43µs
  ##
  ## Default: "" which renders UTC
  ## Options are as follows:
  ##   1. Local            -- interpret based on machine localtime
  ##   2. "Canada/Eastern" -- Unix TZ values like those found in https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
  ##   3. UTC              -- or blank/unspecified, will return timestamp in UTC
  grok_timezone = "Canada/Eastern"
```
### Timestamp examples
This example input and config parses a file using a custom timestamp conversion:
```
2017-02-21 13:10:34 value=42
```
```toml
[[inputs.file]]
  grok_patterns = ['%{TIMESTAMP_ISO8601:timestamp:ts-"2006-01-02 15:04:05"} value=%{NUMBER:value:int}']
```
This example input and config parses a file using a timestamp in unix time:
```
1466004605 value=42
1466004605.123456789 value=42
```
```toml
[[inputs.file]]
  grok_patterns = ['%{NUMBER:timestamp:ts-epoch} value=%{NUMBER:value:int}']
```
This example parses a file using a built-in conversion and a custom pattern:
```
Wed Apr 12 13:10:34 PST 2017 value=42
```
```toml
[[inputs.file]]
  grok_patterns = ["%{TS_UNIX:timestamp:ts-unix} value=%{NUMBER:value:int}"]
  grok_custom_patterns = '''
TS_UNIX %{DAY} %{MONTH} %{MONTHDAY} %{HOUR}:%{MINUTE}:%{SECOND} %{TZ} %{YEAR}
'''
```
For cases where the timestamp itself is without offset, the `timezone` config var is available
to denote an offset. By default (with `timezone` omitted, blank, or set to `"UTC"`), the times
are processed as if in the UTC timezone. If specified as `timezone = "Local"`, the timestamp
will be processed based on the current machine timezone configuration. Lastly, if using a
timezone from the list of Unix [timezones](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones),
grok will offset the timestamp accordingly.
### TOML escaping
When saving patterns to the configuration file, keep in mind the different TOML
[string](https://github.com/toml-lang/toml#string) types and the escaping
rules for each. These escaping rules must be applied in addition to the
escaping required by the grok syntax. Using the Multi-line line literal
syntax with `'''` may be useful.
The following config examples will parse this input file:
```
|42|\uD83D\uDC2F|'telegraf'|
```
Since `|` is a special character in the grok language, we must escape it to
get a literal `|`. With a basic TOML string, special characters such as
backslash must be escaped, requiring us to escape the backslash a second time.
```toml
[[inputs.file]]
  grok_patterns = ["\\|%{NUMBER:value:int}\\|%{UNICODE_ESCAPE:escape}\\|'%{WORD:name}'\\|"]
  grok_custom_patterns = "UNICODE_ESCAPE (?:\\\\u[0-9A-F]{4})+"
```
We cannot use a literal TOML string for the pattern, because we cannot match a
`'` within it. However, it works well for the custom pattern.
```toml
[[inputs.file]]
  grok_patterns = ["\\|%{NUMBER:value:int}\\|%{UNICODE_ESCAPE:escape}\\|'%{WORD:name}'\\|"]
  grok_custom_patterns = 'UNICODE_ESCAPE (?:\\u[0-9A-F]{4})+'
```
A multi-line literal string allows us to encode the pattern:
```toml
[[inputs.file]]
  grok_patterns = ['''
\|%{NUMBER:value:int}\|%{UNICODE_ESCAPE:escape}\|'%{WORD:name}'\|
''']
  grok_custom_patterns = 'UNICODE_ESCAPE (?:\\u[0-9A-F]{4})+'
```
### Tips for creating patterns
Writing complex patterns can be difficult; here is some advice for writing a
new pattern or testing a pattern developed [online](https://grokdebug.herokuapp.com).
Create a file output that writes to stdout, and disable other outputs while
testing. This will allow you to see the captured metrics. Keep in mind that
the file output will only print once per `flush_interval`.
```toml
[[outputs.file]]
  files = ["stdout"]
```
- Start with a file containing only a single line of your input.
- Remove all but the first token or piece of the line.
- Add the section of your pattern to match this piece to your configuration file.
- Verify that the metric is parsed successfully by running Telegraf.
- If successful, add the next token, update the pattern and retest.
- Continue one token at a time until the entire line is successfully parsed.
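For example, here is a minimal sketch of the first two passes over a hypothetical log line `2019-03-01 12:00:00 GET /index.html 200` (the file path, log line, and pattern are illustrative, not from the original examples):
```toml
[[inputs.file]]
files = ["/tmp/sample.log"]
data_format = "grok"
## Pass 1: match only the leading timestamp and verify it parses.
# grok_patterns = ['%{TIMESTAMP_ISO8601:timestamp:ts-"2006-01-02 15:04:05"}']
## Pass 2: add the next token (the HTTP method) and retest.
grok_patterns = ['%{TIMESTAMP_ISO8601:timestamp:ts-"2006-01-02 15:04:05"} %{WORD:method:tag}']
```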

View File

@ -0,0 +1,27 @@
---
title: InfluxDB Line Protocol input data format
description: Use the InfluxDB Line Protocol input data format to parse InfluxDB metrics directly into Telegraf metrics.
menu:
telegraf_1_10:
name: InfluxDB Line Protocol input
weight: 60
parent: Input data formats
---
There are no additional configuration options for InfluxDB [line protocol][]. The
InfluxDB metrics are parsed directly into Telegraf metrics.
[line protocol]: /influxdb/latest/write_protocols/line/
### Configuration
```toml
[[inputs.file]]
files = ["example"]
## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "influx"
```
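As a quick illustration (the sample line below is hypothetical), a file containing:
```
cpu,host=server01 usage_idle=98.7,usage_user=0.8 1455320660004257758
```
produces an identical Telegraf metric with measurement `cpu`, tag `host`, and the two fields; no further parser configuration is needed.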

View File

@ -0,0 +1,224 @@
---
title: JSON input data format
description: Use the JSON input data format to parse [JSON][json] objects, or an array of objects, into Telegraf metric fields.
menu:
telegraf_1_10:
name: JSON input
weight: 70
parent: Input data formats
---
The JSON input data format parses a [JSON][json] object or an array of objects
into Telegraf metric fields.
**NOTE:** All JSON numbers are converted to float fields. JSON strings are
ignored unless specified in the `tag_keys` or `json_string_fields` options.
## Configuration
```toml
[[inputs.file]]
files = ["example"]
## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "json"
## Query is a GJSON path that specifies a specific chunk of JSON to be
## parsed, if not specified the whole document will be parsed.
##
## GJSON query paths are described here:
## https://github.com/tidwall/gjson#path-syntax
json_query = ""
## Tag keys is an array of keys that should be added as tags.
tag_keys = [
"my_tag_1",
"my_tag_2"
]
## String fields is an array of keys that should be added as string fields.
json_string_fields = []
## Name key is the key to use as the measurement name.
json_name_key = ""
## Time key is the key containing the time that should be used to create the
## metric.
json_time_key = ""
## Time format is the time layout that should be used to interpret the
## json_time_key. The time must be `unix`, `unix_ms` or a time in the
## "reference time".
## ex: json_time_format = "Mon Jan 2 15:04:05 -0700 MST 2006"
## json_time_format = "2006-01-02T15:04:05Z07:00"
## json_time_format = "unix"
## json_time_format = "unix_ms"
json_time_format = ""
```
### `json_query`
The `json_query` is a [GJSON][gjson] path that can be used to limit the
portion of the overall JSON document that should be parsed. The result of the
query should contain a JSON object or an array of objects.
Consult the GJSON [path syntax][gjson syntax] for details and examples.
### `json_time_key` and `json_time_format`
By default, the current time is used for all created metrics. To set the time
from the JSON document instead, use the `json_time_key` and
`json_time_format` options together to set the time to a value in the parsed
document.
The `json_time_key` option specifies the key containing the time value and
`json_time_format` must be set to `unix`, `unix_ms`, or the Go "reference
time" which is defined to be the specific time: `Mon Jan 2 15:04:05 MST 2006`.
Consult the Go [time][time parse] package for details and additional examples
on how to set the time format.
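For instance, a minimal sketch using Unix epoch seconds (the key names are illustrative):
```toml
[[inputs.file]]
files = ["example"]
data_format = "json"
json_time_key = "ts"
json_time_format = "unix"
```
Given the input `{"value": 42, "ts": 1466004605}`, the metric time is taken from `ts` instead of the current time, and `value` becomes a float field.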
## Examples
### Basic parsing
Config:
```toml
[[inputs.file]]
files = ["example"]
name_override = "myjsonmetric"
data_format = "json"
```
Input:
```json
{
  "a": 5,
  "b": {
    "c": 6
  },
  "ignored": "I'm a string"
}
```
Output:
```
myjsonmetric a=5,b_c=6
```
### Name, tags, and string fields
Config:
```toml
[[inputs.file]]
files = ["example"]
json_name_key = "name"
tag_keys = ["my_tag_1"]
json_string_fields = ["my_field"]
data_format = "json"
```
Input:
```json
{
  "a": 5,
  "b": {
    "c": 6,
    "my_field": "description"
  },
  "my_tag_1": "foo",
  "name": "my_json"
}
```
Output:
```
my_json,my_tag_1=foo a=5,b_c=6,my_field="description"
```
### Arrays
If the JSON data is an array, then each object within the array is parsed with
the configured settings.
Config:
```toml
[[inputs.file]]
files = ["example"]
data_format = "json"
json_time_key = "b_time"
json_time_format = "02 Jan 06 15:04 MST"
```
Input:
```json
[
  {
    "a": 5,
    "b": {
      "c": 6,
      "time": "04 Jan 06 15:04 MST"
    }
  },
  {
    "a": 7,
    "b": {
      "c": 8,
      "time": "11 Jan 07 15:04 MST"
    }
  }
]
```
Output:
```
file a=5,b_c=6 1136387040000000000
file a=7,b_c=8 1168527840000000000
```
### Query
The `json_query` option can be used to parse a subset of the document.
Config:
```toml
[[inputs.file]]
files = ["example"]
data_format = "json"
tag_keys = ["first"]
json_string_fields = ["last"]
json_query = "obj.friends"
```
Input:
```json
{
  "obj": {
    "name": {"first": "Tom", "last": "Anderson"},
    "age": 37,
    "children": ["Sara", "Alex", "Jack"],
    "fav.movie": "Deer Hunter",
    "friends": [
      {"first": "Dale", "last": "Murphy", "age": 44},
      {"first": "Roger", "last": "Craig", "age": 68},
      {"first": "Jane", "last": "Murphy", "age": 47}
    ]
  }
}
```
Output:
```
file,first=Dale last="Murphy",age=44
file,first=Roger last="Craig",age=68
file,first=Jane last="Murphy",age=47
```
[gjson]: https://github.com/tidwall/gjson
[gjson syntax]: https://github.com/tidwall/gjson#path-syntax
[json]: https://www.json.org/
[time parse]: https://golang.org/pkg/time/#Parse

View File

@ -0,0 +1,42 @@
---
title: Logfmt input data format
description: Use the "logfmt" input data format to parse "logfmt" data into Telegraf metrics.
menu:
telegraf_1_10:
name: logfmt
weight: 80
parent: Input data formats
---
The `logfmt` data format parses [logfmt] data into Telegraf metrics.
[logfmt]: https://brandur.org/logfmt
## Configuration
```toml
[[inputs.file]]
files = ["example"]
## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "logfmt"
## Set the name of the created metric, if unset the name of the plugin will
## be used.
metric_name = "logfmt"
```
## Metrics
Each key/value pair in the line is added to a new metric as a field. The type
of the field is automatically determined based on the contents of the value.
## Examples
```
- method=GET host=example.org ts=2018-07-24T19:43:40.275Z connect=4ms service=8ms status=200 bytes=1653
+ logfmt method="GET",host="example.org",ts="2018-07-24T19:43:40.275Z",connect="4ms",service="8ms",status=200i,bytes=1653i
```

View File

@ -0,0 +1,29 @@
---
title: Nagios input data format
description: Use the Nagios input data format to parse the output of Nagios plugins into Telegraf metrics.
menu:
telegraf_1_10:
name: Nagios
weight: 90
parent: Input data formats
---
The Nagios input data format parses the output of
[Nagios plugins](https://www.nagios.org/downloads/nagios-plugins/) into
Telegraf metrics.
## Configuration
```toml
[[inputs.exec]]
## Commands array
commands = ["/usr/lib/nagios/plugins/check_load -w 5,6,7 -c 7,8,9"]
## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "nagios"
```

View File

@ -0,0 +1,44 @@
---
title: Value input data format
description: Use the "value" input data format to parse single values into Telegraf metrics.
menu:
telegraf_1_10:
name: Value
weight: 100
parent: Input data formats
---
The "value" input data format translates single values into Telegraf metrics. This
is done by assigning a measurement name and setting a single field ("value")
as the parsed metric.
## Configuration
You **must** tell Telegraf what type of metric to collect by using the
`data_type` configuration option. Available data type options are:
1. integer
2. float or long
3. string
4. boolean
> **Note:** It is also recommended that you set `name_override` to a measurement
name that makes sense for your metric; otherwise, it will just be set to the
name of the plugin.
```toml
[[inputs.exec]]
## Commands array
commands = ["cat /proc/sys/kernel/random/entropy_avail"]
## override the default metric name of "exec"
name_override = "entropy_available"
## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "value"
data_type = "integer" # required
```
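With this configuration, assuming the command prints `1024`, the resulting metric would look something like the following in line protocol (the timestamp is added at collection time):
```
entropy_available value=1024i
```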

View File

@ -0,0 +1,28 @@
---
title: Wavefront input data format
description: Use the Wavefront input data format to parse Wavefront data into Telegraf metrics.
menu:
telegraf_1_10:
name: Wavefront
weight: 110
parent: Input data formats
---
The Wavefront input data format parses Wavefront data into Telegraf metrics.
For more information on the Wavefront native data format, see
[Wavefront Data Format](https://docs.wavefront.com/wavefront_data_format.html) in the Wavefront documentation.
## Configuration
There are no additional configuration options for Wavefront Data Format line-protocol.
```toml
[[inputs.file]]
files = ["example"]
## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "wavefront"
```

View File

@ -0,0 +1,35 @@
---
title: Telegraf output data formats
description: Telegraf serializes metrics into output data formats for InfluxDB Line Protocol, JSON, Graphite, and Splunk metrics.
menu:
telegraf_1_10:
name: Output data formats
weight: 1
parent: Data formats
---
In addition to output-specific data formats, Telegraf supports the following set
of common data formats that may be selected when configuring many of the Telegraf
output plugins.
* [Carbon2](/telegraf/v1.10/data_formats/output/carbon2)
* [Graphite](/telegraf/v1.10/data_formats/output/graphite)
* [InfluxDB Line Protocol](/telegraf/v1.10/data_formats/output/influx)
* [JSON](/telegraf/v1.10/data_formats/output/json)
* [ServiceNow Metrics](/telegraf/v1.10/data_formats/output/nowmetric)
* [SplunkMetric](/telegraf/v1.10/data_formats/output/splunkmetric)
You can identify the plugins that support these data formats by the presence of a
`data_format` configuration option, for example, in the File (`file`) output plugin:
```toml
[[outputs.file]]
## Files to write to, "stdout" is a specially handled file.
files = ["stdout"]
## Data format to output.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "influx"
```

View File

@ -0,0 +1,60 @@
---
title: Carbon2 output data format
description: Use the Carbon2 output data format (serializer) to convert Telegraf metrics into the Carbon2 format.
menu:
telegraf_1_10:
name: Carbon2
weight: 10
parent: Output data formats
---
The `carbon2` output data format (serializer) translates the Telegraf metric format to the [Carbon2 format](http://metrics20.org/implementations/).
### Configuration
```toml
[[outputs.file]]
## Files to write to, "stdout" is a specially handled file.
files = ["stdout", "/tmp/metrics.out"]
## Data format to output.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "carbon2"
```
Standard form:
```
metric=name field=field_1 host=foo 30 1234567890
metric=name field=field_2 host=foo 4 1234567890
metric=name field=field_N host=foo 59 1234567890
```
### Metrics
The serializer converts the metrics by creating `intrinsic_tags` using the combination of metric name and fields. So, if one Telegraf metric has 4 fields, the `carbon2` output will be 4 separate metrics. There will be a `metric` tag that represents the name of the metric and a `field` tag to represent the field.
### Example
If we take the following InfluxDB Line Protocol:
```
weather,location=us-midwest,season=summer temperature=82,wind=100 1234567890
```
After serializing in Carbon2, the result would be:
```
metric=weather field=temperature location=us-midwest season=summer 82 1234567890
metric=weather field=wind location=us-midwest season=summer 100 1234567890
```
### Fields and tags with spaces
When a field key or tag key/value contains spaces, the spaces are replaced with `_`.
### Tags with empty values
When a tag's value is empty, it will be replaced with `null`.
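As a sketch of both rules, a metric named `weather` with a `temperature` field, a `location` tag of `us midwest`, and an empty `season` tag would be serialized along these lines (illustrative, based on the behavior described above):
```
metric=weather field=temperature location=us_midwest season=null 82 1234567890
```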

View File

@ -0,0 +1,58 @@
---
title: Graphite output data format
description: Use the "Graphite" output data format to serialize data from Telegraf metrics.
menu:
telegraf_1_10:
name: Graphite output
weight: 20
parent: Output data formats
---
The Graphite data format is serialized from Telegraf metrics using either the
template pattern or tag support method. You can select between the two
methods using the [`graphite_tag_support`](#graphite-tag-support) option. When set, the tag support method is used;
otherwise, the [template pattern](#templates) is used.
## Configuration
```toml
[[outputs.file]]
## Files to write to, "stdout" is a specially handled file.
files = ["stdout", "/tmp/metrics.out"]
## Data format to output.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "graphite"
## Prefix added to each graphite bucket
prefix = "telegraf"
## Graphite template pattern
template = "host.tags.measurement.field"
## Support Graphite tags, recommended to enable when using Graphite 1.1 or later.
# graphite_tag_support = false
```
### graphite_tag_support
When the `graphite_tag_support` option is enabled, the template pattern is not
used. Instead, tags are encoded using
[Graphite tag support](http://graphite.readthedocs.io/en/latest/tags.html),
added in Graphite 1.1. The `metric_path` is a combination of the optional
`prefix` option, measurement name, and field name.
The tag `name` is reserved by Graphite; any conflicting tags will be encoded as `_name`.
**Example conversion**:
```
cpu,cpu=cpu-total,dc=us-east-1,host=tars usage_idle=98.09,usage_user=0.89 1455320660004257758
=>
cpu.usage_user;cpu=cpu-total;dc=us-east-1;host=tars 0.89 1455320690
cpu.usage_idle;cpu=cpu-total;dc=us-east-1;host=tars 98.09 1455320690
```
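For comparison, with `graphite_tag_support` disabled, the same metric is encoded using the configured `prefix` and template (`host.tags.measurement.field`); a sketch, assuming the remaining tags are inserted in alphabetical order by key:
```
cpu,cpu=cpu-total,dc=us-east-1,host=tars usage_idle=98.09,usage_user=0.89 1455320660004257758
=>
telegraf.tars.cpu-total.us-east-1.cpu.usage_user 0.89 1455320690
telegraf.tars.cpu-total.us-east-1.cpu.usage_idle 98.09 1455320690
```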
### templates
For more information on templates and template patterns, see [Template patterns](/telegraf/v1.10/data_formats/template-patterns/).

View File

@ -0,0 +1,41 @@
---
title: InfluxDB Line Protocol output data format
description: The "influx" data format outputs metrics into the InfluxDB Line Protocol format.
menu:
telegraf_1_10:
name: InfluxDB Line Protocol
weight: 30
parent: Output data formats
---
The `influx` output data format outputs metrics into [InfluxDB Line Protocol][line protocol]. InfluxData recommends this data format unless another format is required for interoperability.
## Configuration
```toml
[[outputs.file]]
## Files to write to, "stdout" is a specially handled file.
files = ["stdout", "/tmp/metrics.out"]
## Data format to output.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "influx"
## Maximum line length in bytes. Useful only for debugging.
influx_max_line_bytes = 0
## When true, fields will be output in ascending lexical order. Enabling
## this option will result in decreased performance and is only recommended
## when you need predictable ordering while debugging.
influx_sort_fields = false
## When true, Telegraf will output unsigned integers as unsigned values,
## i.e.: `42u`. You will need a version of InfluxDB supporting unsigned
## integer values. Enabling this option will result in field type errors if
## existing data has been written.
influx_uint_support = false
```
[line protocol]: /influxdb/latest/write_protocols/line_protocol_tutorial/

View File

@ -0,0 +1,89 @@
---
title: JSON output data format
description: Telegraf's "json" output data format converts metrics into JSON documents.
menu:
telegraf_1_10:
name: JSON
weight: 40
parent: Output data formats
---
The `json` output data format serializes Telegraf metrics into JSON documents.
## Configuration
```toml
[[outputs.file]]
## Files to write to, "stdout" is a specially handled file.
files = ["stdout", "/tmp/metrics.out"]
## Data format to output.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "json"
## The resolution to use for the metric timestamp. Must be a duration string
## such as "1ns", "1us", "1ms", "10ms", "1s". Durations are truncated to
## the power of 10 less than the specified units.
json_timestamp_units = "1s"
```
## Examples
### Standard format
```json
{
  "fields": {
    "field_1": 30,
    "field_2": 4,
    "field_N": 59,
    "n_images": 660
  },
  "name": "docker",
  "tags": {
    "host": "raynor"
  },
  "timestamp": 1458229140
}
```
### Batch format
When an output plugin needs to emit multiple metrics at one time, it may use the
batch format. The use of batch format is determined by the plugin -- reference
the documentation for the specific plugin.
```json
{
  "metrics": [
    {
      "fields": {
        "field_1": 30,
        "field_2": 4,
        "field_N": 59,
        "n_images": 660
      },
      "name": "docker",
      "tags": {
        "host": "raynor"
      },
      "timestamp": 1458229140
    },
    {
      "fields": {
        "field_1": 30,
        "field_2": 4,
        "field_N": 59,
        "n_images": 660
      },
      "name": "docker",
      "tags": {
        "host": "raynor"
      },
      "timestamp": 1458229140
    }
  ]
}
```

View File

@ -0,0 +1,90 @@
---
title: ServiceNow Metrics output data format
description: Use the ServiceNow Metrics output data format (serializer) to output metrics in the ServiceNow Operational Intelligence format.
menu:
telegraf_1_10:
name: ServiceNow Metrics
weight: 50
parent: Output data formats
---
The ServiceNow Metrics output data format (serializer) outputs metrics in the [ServiceNow Operational Intelligence format](https://docs.servicenow.com/bundle/london-it-operations-management/page/product/event-management/reference/mid-POST-metrics.html).
It can be used to write to a file using the File output plugin, or to send metrics to a MID Server with the REST endpoint enabled, using the standard Telegraf HTTP output plugin.
If you're using the HTTP output plugin, this serializer knows how to batch the metrics so you don't end up with an HTTP POST per metric.
An example event looks like:
```javascript
[{
  "metric_type": "Disk C: % Free Space",
  "resource": "C:\\",
  "node": "lnux100",
  "value": 50,
  "timestamp": 1473183012000,
  "ci2metric_id": {
    "node": "lnux100"
  },
  "source": "Telegraf"
}]
```
## Using with the HTTP output plugin
To send this data to a ServiceNow MID Server with the Web Server extension activated, use the HTTP output plugin. You need to add some custom headers to manage the MID Web Server authorization. Here's a sample config for an HTTP output:
```toml
[[outputs.http]]
## URL is the address to send metrics to
url = "http://<mid server fqdn or ip address>:9082/api/mid/sa/metrics"
## Timeout for HTTP message
# timeout = "5s"
## HTTP method, one of: "POST" or "PUT"
method = "POST"
## HTTP Basic Auth credentials
username = 'evt.integration'
password = 'P@$$w0rd!'
## Optional TLS Config
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
## Data format to output.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "nowmetric"
## Additional HTTP headers
[outputs.http.headers]
# # Should be set manually to "application/json" for json data_format
Content-Type = "application/json"
Accept = "application/json"
```
Starting with the London release, you also need to explicitly create an event rule to allow binding of metric events to host CIs. For details, see the [ServiceNow documentation](https://docs.servicenow.com/bundle/london-it-operations-management/page/product/event-management/task/event-rule-bind-metrics-to-host.html).
## Using with the File output plugin
You can use the File output plugin to output the payload in a file.
In this case, just add the following section to your Telegraf configuration file.
```toml
[[outputs.file]]
## Files to write to, "stdout" is a specially handled file.
files = ["C:/Telegraf/metrics.out"]
## Data format to output.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "nowmetric"
```

View File

@ -0,0 +1,147 @@
---
title: SplunkMetric output data format
description: The SplunkMetric serializer formats and outputs data in a format that can be consumed by a Splunk metrics index.
menu:
telegraf_1_10:
name: SplunkMetric
weight: 60
parent: Output data formats
---
The SplunkMetric serializer formats and outputs the metric data in a format that can be consumed by a Splunk metrics index.
It can be used to write to a file using the file output, or for sending metrics to a HEC using the standard Telegraf HTTP output.
If you're using the HTTP output, this serializer knows how to batch the metrics so you don't end up with an HTTP POST per metric.
The data is output in a format that conforms to the specified Splunk HEC JSON format as found here:
[Send metrics in JSON format](http://dev.splunk.com/view/event-collector/SP-CAAAFDN).
An example event looks like:
```javascript
{
  "time": 1529708430,
  "event": "metric",
  "host": "patas-mbp",
  "fields": {
    "_value": 0.6,
    "cpu": "cpu0",
    "dc": "mobile",
    "metric_name": "cpu.usage_user",
    "user": "ronnocol"
  }
}
```
In the above snippet, the following keys are dimensions:
* cpu
* dc
* user
## Using with the HTTP output
To send this data to a Splunk HEC, use the HTTP output plugin. You need to add some custom headers
to manage the HEC authorization. Here's a sample config for an HTTP output:
```toml
[[outputs.http]]
## URL is the address to send metrics to
url = "https://localhost:8088/services/collector"
## Timeout for HTTP message
# timeout = "5s"
## HTTP method, one of: "POST" or "PUT"
# method = "POST"
## HTTP Basic Auth credentials
# username = "username"
# password = "pa$$word"
## Optional TLS Config
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
## Data format to output.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "splunkmetric"
## Provides time, index, source overrides for the HEC
splunkmetric_hec_routing = true
## Additional HTTP headers
[outputs.http.headers]
# Should be set manually to "application/json" for json data_format
Content-Type = "application/json"
Authorization = "Splunk xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
X-Splunk-Request-Channel = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
```
## Overrides
You can override the default values for the HEC token you are using by adding additional tags to the config file.
The following aspects of the token can be overridden with tags:
* index
* source
You can either use `[global_tags]` or use a more advanced configuration as documented [here](https://github.com/influxdata/telegraf/blob/master/docs/CONFIGURATION.md).
For example, the following overrides the index just for the cpu metric:
```toml
[[inputs.cpu]]
percpu = false
totalcpu = true
[inputs.cpu.tags]
index = "cpu_metrics"
```
## Using with the File output
You can use the File output plugin when running Telegraf on a machine with a Splunk forwarder.
A sample event when `splunkmetric_hec_routing` is false (or unset) looks like:
```javascript
{
  "_value": 0.6,
  "cpu": "cpu0",
  "dc": "mobile",
  "metric_name": "cpu.usage_user",
  "user": "ronnocol",
  "time": 1529708430
}
```
Data formatted in this manner can be ingested with a simple `props.conf` file that
looks like this:
```ini
[telegraf]
category = Metrics
description = Telegraf Metrics
pulldown_type = 1
DATETIME_CONFIG =
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
disabled = false
INDEXED_EXTRACTIONS = json
KV_MODE = none
TIMESTAMP_FIELDS = time
TIME_FORMAT = %s.%3N
```
An example configuration of a file based output is:
```toml
# Send telegraf metrics to file(s)
[[outputs.file]]
## Files to write to, "stdout" is a specially handled file.
files = ["/tmp/metrics.out"]
## Data format to output.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "splunkmetric"
splunkmetric_hec_routing = false
```

View File

@ -0,0 +1,145 @@
---
title: Telegraf template patterns
description: Use template patterns to describe how dot-delimited strings should map to and from Telegraf metrics.
menu:
telegraf_1_10:
name: Template patterns
weight: 30
parent: Data formats
---
Template patterns are a mini-language that describes how a dot-delimited
string should be mapped to and from [metrics][].
A template has the form:
```
"host.mytag.mytag.measurement.measurement.field*"
```
Where the following keywords can be set:
1. `measurement`: specifies that this section of the graphite bucket corresponds
to the measurement name. This can be specified multiple times.
2. `field`: specifies that this section of the graphite bucket corresponds
to the field name. This can be specified multiple times.
3. `measurement*`: specifies that all remaining elements of the graphite bucket
correspond to the measurement name.
4. `field*`: specifies that all remaining elements of the graphite bucket
correspond to the field name.
Any part of the template that is not a keyword is treated as a tag key. This
can also be specified multiple times.
**NOTE:** `field*` cannot be used in conjunction with `measurement*`.
## Examples
### Measurement and tag templates
The most basic template is to specify a single transformation to apply to all
incoming metrics. So the following template:
```toml
templates = [
"region.region.measurement*"
]
```
would result in the following Graphite -> Telegraf transformation.
```
us.west.cpu.load 100
=> cpu.load,region=us.west value=100
```
Multiple templates can also be specified, but these should be differentiated
using _filters_ (see below for more details):
```toml
templates = [
"*.*.* region.region.measurement", # <- all 3-part measurements will match this one.
"*.*.*.* region.region.host.measurement", # <- all 4-part measurements will match this one.
]
```
### Field templates
The `field` keyword tells Telegraf to give the metric that field name.
So the following template:
```toml
separator = "_"
templates = [
"measurement.measurement.field.field.region"
]
```
would result in the following Graphite -> Telegraf transformation.
```
cpu.usage.idle.percent.eu-east 100
=> cpu_usage,region=eu-east idle_percent=100
```
The field key can also be derived from all remaining elements of the graphite
bucket by specifying `field*`:
```toml
separator = "_"
templates = [
"measurement.measurement.region.field*"
]
```
which would result in the following Graphite -> Telegraf transformation.
```
cpu.usage.eu-east.idle.percentage 100
=> cpu_usage,region=eu-east idle_percentage=100
```
### Filter templates
Users can also filter the template(s) to use based on the name of the bucket,
using glob matching, like so:
```toml
templates = [
"cpu.* measurement.measurement.region",
"mem.* measurement.measurement.host"
]
```
which would result in the following transformation:
```
cpu.load.eu-east 100
=> cpu_load,region=eu-east value=100
mem.cached.localhost 256
=> mem_cached,host=localhost value=256
```
### Adding tags
Additional tags that don't exist on the received metric can be added by specifying them after the pattern.
Tags have the same format as the line protocol.
Multiple tags are separated by commas.
```toml
templates = [
"measurement.measurement.field.region datacenter=1a"
]
```
would result in the following Graphite -> Telegraf transformation.
```
cpu.usage.idle.eu-east 100
=> cpu_usage,region=eu-east,datacenter=1a idle=100
```
[metrics]: /telegraf/v1.10/concepts/metrics/

View File

@ -0,0 +1,22 @@
---
title: Introducing Telegraf
menu:
telegraf_1_10:
name: Introduction
weight: 20
---
The introductory documentation includes all the information you need to get up and running with Telegraf.
## [Downloading Telegraf](/telegraf/v1.10/introduction/downloading/)
Go to the [InfluxData downloads page](https://portal.influxdata.com/downloads) to get the latest release of Telegraf.
## [Installing Telegraf](/telegraf/v1.10/introduction/installation/)
[Installing Telegraf](/telegraf/v1.10/introduction/installation/) includes directions for installing, starting, and configuring Telegraf.
## [Getting started with Telegraf](/telegraf/v1.10/introduction/getting-started/)
[Getting started with Telegraf](/telegraf/v1.10/introduction/getting-started/) walks you through the download, installation, and configuration processes, and it shows how to use Telegraf to get data into InfluxDB.

View File

@ -0,0 +1,12 @@
---
title: Downloading Telegraf
menu:
telegraf_1_10:
name: Downloading
weight: 10
parent: Introduction
---
Download the latest Telegraf release at the [InfluxData download page](https://portal.influxdata.com/downloads).

View File

@ -0,0 +1,129 @@
---
title: Getting started with Telegraf
description: Downloading, installing, configuring and getting started with Telegraf, the plug-in driven server agent of the InfluxData time series platform.
aliases:
- /telegraf/v1.10/introduction/getting_started/
menu:
telegraf_1_10:
name: Getting started
weight: 30
parent: Introduction
---
## Getting started with Telegraf
Telegraf is an agent written in Go for collecting metrics and writing them into InfluxDB or other possible outputs.
This guide will get you up and running with Telegraf.
It walks you through the download, installation, and configuration processes, and it shows how to use Telegraf to get data into InfluxDB.
## Download and install Telegraf
Follow the instructions in the Telegraf section on the [Downloads page](https://influxdata.com/downloads/).
> **Note:** Telegraf will start automatically using the default configuration when installed from a deb package.
## Configuring Telegraf
### Configuration file location by installation type
* macOS [Homebrew](http://brew.sh/): `/usr/local/etc/telegraf.conf`
* Linux debian and RPM packages: `/etc/telegraf/telegraf.conf`
* Standalone Binary: see the next section for how to create a configuration file
### Creating and editing the configuration file
Before starting the Telegraf server you need to edit and/or create an initial configuration that specifies your desired [inputs](/telegraf/v1.10/plugins/inputs/) (where the metrics come from) and [outputs](/telegraf/v1.10/plugins/outputs/) (where the metrics go). There are [several ways](/telegraf/v1.10/administration/configuration/) to create and edit the configuration file.
Here, we'll generate a configuration file and simultaneously specify the desired inputs with the `-input-filter` flag and the desired output with the `-output-filter` flag.
In the example below, we create a configuration file called `telegraf.conf` with two inputs:
one that reads metrics about the system's cpu usage (`cpu`) and one that reads metrics about the system's memory usage (`mem`). We specify InfluxDB as the desired output.
```bash
telegraf -sample-config -input-filter cpu:mem -output-filter influxdb > telegraf.conf
```
## Start the Telegraf service
Start the Telegraf service and direct it to the relevant configuration file:
### macOS [Homebrew](http://brew.sh/)
```bash
telegraf --config telegraf.conf
```
### Linux (sysvinit and upstart installations)
```bash
sudo service telegraf start
```
### Linux (systemd installations)
```bash
systemctl start telegraf
```
## Results
Once Telegraf is up and running, it will start collecting data and writing them to the desired output.
Returning to our sample configuration, we show what the `cpu` and `mem` data look like in InfluxDB below.
Note that we used the default input and output configuration settings to get these data.
* List all [measurements](/influxdb/v1.4/concepts/glossary/#measurement) in the `telegraf` [database](/influxdb/v1.4/concepts/glossary/#database):
```
> SHOW MEASUREMENTS
name: measurements
------------------
name
cpu
mem
```
* List all [field keys](/influxdb/v1.4/concepts/glossary/#field-key) by measurement:
```
> SHOW FIELD KEYS
name: cpu
---------
fieldKey fieldType
usage_guest float
usage_guest_nice float
usage_idle float
usage_iowait float
usage_irq float
usage_nice float
usage_softirq float
usage_steal float
usage_system float
usage_user float
name: mem
---------
fieldKey fieldType
active integer
available integer
available_percent float
buffered integer
cached integer
free integer
inactive integer
total integer
used integer
used_percent float
```
* Select a sample of the data in the [field](/influxdb/v1.4/concepts/glossary/#field) `usage_idle` in the measurement `cpu`:
```
> SELECT usage_idle FROM cpu WHERE cpu = 'cpu-total' LIMIT 5
name: cpu
---------
time usage_idle
2016-01-16T00:03:00Z 97.56189047261816
2016-01-16T00:03:10Z 97.76305923519121
2016-01-16T00:03:20Z 97.32533433320835
2016-01-16T00:03:30Z 95.68857785553611
2016-01-16T00:03:40Z 98.63715928982245
```
Notice that the timestamps occur at rounded ten-second intervals (that is, `:00`, `:10`, `:20`, and so on); this interval is a configurable setting.
That's it! You now have the foundation for using Telegraf to collect metrics and write them to your output of choice.

View File

@ -0,0 +1,235 @@
---
title: Installing Telegraf
menu:
telegraf_1_10:
name: Installing
weight: 20
parent: Introduction
---
This page provides directions for installing, starting, and configuring Telegraf.
## Requirements
Installation of the Telegraf package may require `root` or administrator privileges in order to complete successfully.
### Networking
Telegraf offers multiple service [input plugins](/telegraf/v1.10/plugins/inputs/) that may
require custom ports.
All port mappings can be modified through the configuration file,
which is located at `/etc/telegraf/telegraf.conf` for default installations.
### NTP
Telegraf uses a host's local time in UTC to assign timestamps to data.
Use the Network Time Protocol (NTP) to synchronize time between hosts; if hosts' clocks
aren't synchronized with NTP, the timestamps on the data can be inaccurate.
## Installation
{{< tabs-wrapper >}}
{{% tabs %}}
[Ubuntu & Debian](#)
[RedHat & CentOS](#)
[SLES & openSUSE](#)
[FreeBSD/PC-BSD](#)
[macOS](#)
[Windows](#)
{{% /tabs %}}
{{% tab-content %}}
For instructions on how to install the Debian package from a file, please see the [downloads page](https://influxdata.com/downloads/).
Debian and Ubuntu users can install the latest stable version of Telegraf using the `apt-get` package manager.
**Ubuntu:** Add the InfluxData repository with the following commands:
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[wget](#)
[curl](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```bash
wget -qO- https://repos.influxdata.com/influxdb.key | sudo apt-key add -
source /etc/lsb-release
echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```bash
curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -
source /etc/lsb-release
echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
**Debian:** Add the InfluxData repository with the following commands:
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[wget](#)
[curl](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```bash
# Before adding Influx repository, run this so that apt will be able to read the repository.
sudo apt-get update && sudo apt-get install apt-transport-https
# Add the InfluxData key
wget -qO- https://repos.influxdata.com/influxdb.key | sudo apt-key add -
source /etc/os-release
test $VERSION_ID = "7" && echo "deb https://repos.influxdata.com/debian wheezy stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
test $VERSION_ID = "8" && echo "deb https://repos.influxdata.com/debian jessie stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
test $VERSION_ID = "9" && echo "deb https://repos.influxdata.com/debian stretch stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```bash
# Before adding Influx repository, run this so that apt will be able to read the repository.
sudo apt-get update && sudo apt-get install apt-transport-https
# Add the InfluxData key
curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -
source /etc/os-release
test $VERSION_ID = "7" && echo "deb https://repos.influxdata.com/debian wheezy stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
test $VERSION_ID = "8" && echo "deb https://repos.influxdata.com/debian jessie stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
test $VERSION_ID = "9" && echo "deb https://repos.influxdata.com/debian stretch stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
Then, install and start the Telegraf service:
```bash
sudo apt-get update && sudo apt-get install telegraf
sudo service telegraf start
```
Or if your operating system is using systemd (Ubuntu 15.04+, Debian 8+):
```
sudo apt-get update && sudo apt-get install telegraf
sudo systemctl start telegraf
```
{{% /tab-content %}}
{{% tab-content %}}
For instructions on how to install the RPM package from a file, please see the [downloads page](https://influxdata.com/downloads/).
**RedHat and CentOS:** Install the latest stable version of Telegraf using the `yum` package manager:
```bash
cat <<EOF | sudo tee /etc/yum.repos.d/influxdb.repo
[influxdb]
name = InfluxDB Repository - RHEL \$releasever
baseurl = https://repos.influxdata.com/rhel/\$releasever/\$basearch/stable
enabled = 1
gpgcheck = 1
gpgkey = https://repos.influxdata.com/influxdb.key
EOF
```
Once the repository is added to the `yum` configuration,
install and start the Telegraf service by running:
```bash
sudo yum install telegraf
sudo service telegraf start
```
Or if your operating system is using systemd (CentOS 7+, RHEL 7+):
```sh
sudo yum install telegraf
sudo systemctl start telegraf
```
{{% /tab-content %}}
{{% tab-content %}}
There are RPM packages provided by openSUSE Build Service for SUSE Linux users:
```bash
# add go repository
zypper ar -f obs://devel:languages:go/ go
# install latest telegraf
zypper in telegraf
```
{{% /tab-content %}}
{{% tab-content %}}
Telegraf is part of the FreeBSD package system.
It can be installed by running:
```bash
sudo pkg install telegraf
```
The configuration file is located at `/usr/local/etc/telegraf.conf` with examples in `/usr/local/etc/telegraf.conf.sample`.
{{% /tab-content %}}
{{% tab-content %}}
Users of macOS 10.8 and higher can install Telegraf using the [Homebrew](http://brew.sh/) package manager.
Once `brew` is installed, you can install Telegraf by running:
```bash
brew update
brew install telegraf
```
To have launchd start telegraf at next login:
```sh
ln -sfv /usr/local/opt/telegraf/*.plist ~/Library/LaunchAgents
```
To load telegraf now:
```sh
launchctl load ~/Library/LaunchAgents/homebrew.mxcl.telegraf.plist
```
Or, if you don't want/need launchctl, you can just run:
```sh
telegraf -config /usr/local/etc/telegraf.conf
```
{{% /tab-content %}}
{{% tab-content %}}
Install Telegraf as a [Windows service](https://github.com/influxdata/telegraf/blob/master/docs/WINDOWS_SERVICE.md) (Windows support is still experimental):
```sh
telegraf.exe -service install -config <path_to_config>
```
{{% /tab-content %}}
{{< /tabs-wrapper >}}
## Configuration
### Create a configuration file with default input and output plugins
Every plugin will be in the file, but most will be commented out.
```
telegraf config > telegraf.conf
```
### Create a configuration file with specific inputs and outputs
```
telegraf --input-filter <pluginname>[:<pluginname>] --output-filter <outputname>[:<outputname>] config > telegraf.conf
```
For more advanced configuration details, see the
[configuration documentation](/telegraf/v1.10/administration/configuration/).

View File

@ -0,0 +1,27 @@
---
title: Telegraf plugins
description: Telegraf plugins are agents used in the InfluxData time series platform for collecting, processing, aggregating, and writing metrics from time series data on the InfluxDB time series database and other popular databases and applications.
menu:
telegraf_1_10:
name: Plugins
weight: 40
---
Telegraf is an agent, written in the Go programming language, for collecting, processing, aggregating, and writing metrics. Telegraf is plugin-driven and supports four categories of plugin types, including input, output, aggregator, and processor.
## [Telegraf input plugins](/telegraf/v1.10/plugins/inputs/)
The [Telegraf input plugins](/telegraf/v1.10/plugins/inputs/) collect metrics from the system, services, or third party APIs.
## [Telegraf output plugins](/telegraf/v1.10/plugins/outputs/)
The [Telegraf output plugins](/telegraf/v1.10/plugins/outputs/) write metrics to various destinations.
## [Telegraf aggregator plugins](/telegraf/v1.10/plugins/aggregators/)
The [Telegraf aggregator plugins](/telegraf/v1.10/plugins/aggregators/) create aggregate metrics (for example, mean, min, max, quantiles, etc.)
## [Telegraf processor plugins](/telegraf/v1.10/plugins/processors/)
The [Telegraf processor plugins](/telegraf/v1.10/plugins/processors/) transform, decorate, and filter metrics.

View File

@ -0,0 +1,50 @@
---
title: Telegraf aggregator plugins
description: Use the Telegraf aggregator plugins with the InfluxData time series platfrom to create aggregate metrics (for example, mean, min, max, quantiles, etc.) collected by the input plugins. Aggregator plugins support basic statistics, histograms, and min/max values.
menu:
telegraf_1_10:
name: Aggregator
weight: 30
parent: Plugins
---
Aggregators emit new aggregate metrics based on the metrics collected by the input plugins.
## Supported Telegraf aggregator plugins
### BasicStats
Plugin ID: `basicstats`
The [BasicStats aggregator plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/aggregators/basicstats/README.md) gives `count`, `max`, `min`, `mean`, `s2` (variance), and `stdev` for a set of values, emitting the aggregate every `period` seconds.
### Histogram
Plugin ID: `histogram`
The [Histogram aggregator plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/aggregators/histogram/README.md) creates histograms containing the counts of field values within a range.
Values added to a bucket are also added to the larger buckets in the distribution. This creates a [cumulative histogram](https://upload.wikimedia.org/wikipedia/commons/5/53/Cumulative_vs_normal_histogram.svg).
Like other Telegraf aggregator plugins, the metric is emitted every `period` seconds. Bucket counts, however, are not reset between periods and will be non-strictly increasing while Telegraf is running.
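A minimal configuration sketch (the bucket boundaries and measurement name are illustrative):
```toml
[[aggregators.histogram]]
## The period in which to flush the aggregator.
period = "30s"
## If true, the original metric is dropped and not passed to the outputs.
drop_original = false
[[aggregators.histogram.config]]
## Right borders of the buckets, in ascending order.
buckets = [0.0, 25.0, 50.0, 75.0, 100.0]
## Measurement to build the histogram for.
measurement_name = "cpu"
```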
### MinMax
Plugin ID: `minmax`
The [MinMax aggregator plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/aggregators/minmax/README.md) aggregates the `min` and `max` values of each field it sees, emitting the aggregate every `period` seconds.
### ValueCounter
Plugin ID: `valuecounter`
The [ValueCounter aggregator plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/aggregators/valuecounter/README.md) counts the occurrence of values in fields and emits the counter once every `period` seconds.
A use case for the ValueCounter aggregator plugin is when you are processing an HTTP access log with the [Logparser input plugin](/telegraf/v1.10/plugins/inputs/#logparser) and want to count the HTTP status codes.
The fields to be counted must be configured with the `fields` configuration directive. When no fields are provided, the plugin will not count any fields.
The results are emitted in fields, formatted as `originalfieldname_fieldvalue = count`.
ValueCounter only works on fields of the type `int`, `bool`, or `string`. Float fields are dropped to prevent the creation of too many fields.
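A configuration sketch counting the values of a `status` field (the field name is illustrative):
```toml
[[aggregators.valuecounter]]
## The period on which to flush and clear the aggregator.
period = "30s"
## The fields for which the values will be counted.
fields = ["status"]
```
With this configuration, two metrics carrying `status=200` and one carrying `status=404` within a period would emit counter fields along the lines of `status_200=2i,status_404=1i`.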

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,216 @@
---
title: Telegraf output plugins
descriptions: Use Telegraf output plugins to transform, decorate, and filter metrics. Supported output plugins include Datadog, Elasticsearch, Graphite, InfluxDB, Kafka, MQTT, Prometheus Client, Riemann, and Wavefront.
menu:
telegraf_1_10:
name: Output
weight: 20
parent: Plugins
---
Telegraf allows users to specify multiple output sinks in the configuration file.
## Supported Telegraf output plugins
### Amazon CloudWatch
Plugin ID: `cloudwatch`
The [Amazon CloudWatch output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/cloudwatch/README.md) sends metrics to Amazon CloudWatch.
### Amazon Kinesis
Plugin ID: `kinesis`
The [Amazon Kinesis output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/kinesis/README.md) is an experimental plugin that is still in the early stages of development. It batches all of the points into one `PUT` request to Kinesis, which reduces the number of API requests considerably.
### Amon
Plugin ID: `amon`
The [Amon output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/amon/README.md) writes metrics to an [Amon server](https://github.com/amonapp/amon) and requires an `apikey` and an `amoninstance` URL. For details on the Amon Agent, see [Monitoring Agent](https://docs.amon.cx/agent/).
If the point value being sent cannot be converted to a float64 value, the metric is skipped.
Metrics are grouped by converting any `_` characters to `.` in the Point Name.
### AMQP
Plugin ID: `amqp`
The [AMQP output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/amqp/README.md) writes to an AMQP 0-9-1 exchange; a prominent implementation of the Advanced Message Queuing Protocol (AMQP) is [RabbitMQ](https://www.rabbitmq.com/).
Metrics are written to a topic exchange using `tag`, defined in the configuration file as `RoutingTag`, as a routing key.
### Apache Kafka
Plugin ID: `kafka`
The [Apache Kafka output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/kafka/README.md) writes to a [Kafka Broker](http://kafka.apache.org/07/quickstart.html) acting as a Kafka producer.
### CrateDB
Plugin ID: `cratedb`
The [CrateDB output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/cratedb/README.md) writes to [CrateDB](https://crate.io/), a real-time SQL database for machine data and IoT, using its [PostgreSQL protocol](https://crate.io/docs/crate/reference/protocols/postgres.html).
### Datadog
Plugin ID: `datadog`
The [Datadog output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/datadog/README.md) writes to the [Datadog Metrics API](http://docs.datadoghq.com/api/#metrics) and requires an `apikey` which can be obtained [here](https://app.datadoghq.com/account/settings#api) for the account.
### Discard
Plugin ID: `discard`
The [Discard output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/discard/README.md) simply drops all metrics that are sent to it. It is only meant to be used for testing purposes.
### Elasticsearch
Plugin ID: `elasticsearch`
The [Elasticsearch output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/elasticsearch/README.md) writes to Elasticsearch via HTTP using [Elastic](http://olivere.github.io/elastic/). Currently it only supports the Elasticsearch 5.x series.
### File
Plugin ID: `file`
The [File output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/file/README.md) writes Telegraf metrics to files.
### Google Cloud PubSub
Plugin ID: `cloud_pubsub`
The [Google Cloud PubSub output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/cloud_pubsub/README.md) publishes metrics to a [Google Cloud PubSub](https://cloud.google.com/pubsub) topic
in one of the supported [output data formats](/telegraf/v1.10/data_formats/output/).
### Graphite
Plugin ID: `graphite`
The [Graphite output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/graphite/README.md) writes to [Graphite](http://graphite.readthedocs.org/en/latest/index.html) via raw TCP.
### Graylog
Plugin ID: `graylog`
The [Graylog output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/graylog/README.md) writes to a Graylog instance using the `gelf` format.
### HTTP
Plugin ID: `http`
The [HTTP output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/http/README.md) sends metrics in an HTTP message encoded using one of the output data formats. For `data_formats` that support batching, metrics are sent in batch format.
### InfluxDB v1.x
Plugin ID: `influxdb`
The [InfluxDB v1.x output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/influxdb/README.md) writes to InfluxDB using HTTP or UDP.
### InfluxDB v2
Plugin ID: `influxdb_v2`
The [InfluxDB v2 output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/influxdb_v2/README.md) writes metrics to the [InfluxDB 2.0](https://github.com/influxdata/platform) HTTP service.
### Instrumental
Plugin ID: `instrumental`
The [Instrumental output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/instrumental/README.md) writes to the [Instrumental Collector API](https://instrumentalapp.com/docs/tcp-collector) and requires a Project-specific API token.
Instrumental accepts stats in a format very close to Graphite, with the only difference being that the type of stat (gauge, increment) is the first token, separated from the metric itself by whitespace. The increment type is only used if the metric comes in as a counter through `[[inputs.statsd]]`.
### Librato
Plugin ID: `librato`
The [Librato output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/librato/README.md) writes to the [Librato Metrics API](http://dev.librato.com/v1/metrics#metrics) and requires an `api_user` and `api_token` which can be obtained [here](https://metrics.librato.com/account/api_tokens) for the account.
### Microsoft Azure Application Insights
Plugin ID: `application_insights`
The [Microsoft Azure Application Insights output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/application_insights/README.md) writes Telegraf metrics to [Application Insights (Microsoft Azure)](https://azure.microsoft.com/en-us/services/application-insights/).
### Microsoft Azure Monitor
Plugin ID: `azure_monitor`
>**Note:** The Azure Monitor custom metrics service is currently in preview and not available in a subset of Azure regions.
The [Microsoft Azure Monitor output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/azure_monitor/README.md) sends custom metrics to [Microsoft Azure Monitor](https://azure.microsoft.com/en-us/services/monitor/). Azure Monitor has a metric resolution of one minute. To handle this in Telegraf, the Azure Monitor output plugin automatically aggregates metrics into one minute buckets, which are then sent to Azure Monitor on every flush interval.
For a Microsoft blog posting on using Telegraf with Microsoft Azure Monitor, see [Collect custom metrics for a Linux VM with the InfluxData Telegraf Agent](https://docs.microsoft.com/en-us/azure/monitoring-and-diagnostics/metrics-store-custom-linux-telegraf).
The metrics from each input plugin will be written to a separate Azure Monitor namespace, prefixed with `Telegraf/` by default. The field name for each metric is written as the Azure Monitor metric name. All field values are written as a summarized set that includes `min`, `max`, `sum`, and `count`. Tags are written as a dimension on each Azure Monitor metric.
### MQTT Producer
Plugin ID: `mqtt`
The [MQTT Producer output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/mqtt/README.md) writes to the MQTT server using [supported output data formats](/telegraf/v1.10/data_formats/output/).
### NATS Output
Plugin ID: `nats`
The [NATS Output output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/nats/README.md) writes to a (list of) specified NATS instance(s).
### NSQ
Plugin ID: `nsq`
The [NSQ output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/nsq/README.md) writes to a specified NSQD instance, usually local to the producer. It requires a server name and a topic name.
### OpenTSDB
Plugin ID: `opentsdb`
The [OpenTSDB output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/opentsdb/README.md) writes to an OpenTSDB instance using either the telnet or HTTP mode.
Using the HTTP API is the recommended way of writing metrics since OpenTSDB 2.0. To use HTTP mode, set `useHttp` to true in the config. You can also control how many metrics are sent in each HTTP request by setting `batchSize` in the config. See http://opentsdb.net/docs/build/html/api_http/put.html for details.
### Prometheus Client
Plugin ID: `prometheus_client`
The [Prometheus Client output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/prometheus_client/README.md) starts a [Prometheus](https://prometheus.io/) client and exposes all metrics on `/metrics` (default) to be polled by a Prometheus server.
### Riemann
Plugin ID: `riemann`
The [Riemann output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/riemann/README.md) writes to [Riemann](http://riemann.io/) using TCP or UDP.
### Socket Writer
Plugin ID: `socket_writer`
The [Socket Writer output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/socket_writer/README.md) writes to a UDP, TCP, or UNIX socket. It can output data in any of the [supported output formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md).
### Stackdriver
Plugin ID: `stackdriver`
The [Stackdriver output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/stackdriver/README.md) writes to the [Google Cloud Stackdriver API](https://cloud.google.com/monitoring/api/v3/)
and requires [Google Cloud authentication](https://cloud.google.com/docs/authentication/getting-started) with Google Cloud using either a service account or user credentials. For details on pricing, see the [Stackdriver documentation](https://cloud.google.com/stackdriver/pricing).
The `project` option is required and specifies where Stackdriver metrics are delivered.
Metrics are grouped by the `namespace` variable and metric key, for example `custom.googleapis.com/telegraf/system/load5`.
### Wavefront
Plugin ID: `wavefront`
The [Wavefront output plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/outputs/wavefront/README.md) writes to a Wavefront proxy, in Wavefront data format over TCP.
## Deprecated Telegraf output plugins
### Riemann Legacy
Plugin ID: `riemann_legacy`
The [Riemann Legacy output plugin](https://github.com/influxdata/telegraf/tree/release-1.10/plugins/outputs/riemann_legacy) will be deprecated in a future release; see https://github.com/influxdata/telegraf/issues/1878 for more details and discussion.

View File

@ -0,0 +1,103 @@
---
title: Telegraf processor plugins
description: Use Telegraf processor plugins in the InfluxData time series platform to process metrics and emit results based on the values processed.
menu:
telegraf_1_10:
name: Processor
identifier: processors
weight: 40
parent: Plugins
---
Processor plugins process metrics as they pass through and immediately emit results based on the values they process.
## Supported Telegraf processor plugins
### Converter
Plugin ID: `converter`
The [Converter processor plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/processors/converter/README.md) is used to change the type of tag or field values. In addition to changing field types, it can convert between fields and tags. Values that cannot be converted are dropped.
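A sketch that parses a field as an integer and promotes another field to a tag; the field names are illustrative.

```toml
[[processors.converter]]
  [processors.converter.fields]
    ## Parse these field values as integers
    integer = ["request_count"]
    ## Convert these fields into tags
    tag = ["http_method"]
```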
### Enum
Plugin ID: `enum`
The [Enum processor plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/processors/enum/README.md) allows the configuration of value mappings for metric fields. The main use case is to rewrite status codes such as `red`, `amber`, and `green` with numeric values such as `0`, `1`, and `2`. The plugin supports string and boolean field values. Multiple fields can be configured, each with separate value mappings. Default mapping values can be configured for all values not contained in `value_mappings`. The processor supports explicit configuration of a destination field; by default, the source field is overwritten.
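A sketch of the status-code use case described above; the `mapping` table and its keys follow the current plugin README and may differ slightly between early releases, so consult the 1.10 README.

```toml
[[processors.enum]]
  [[processors.enum.mapping]]
    ## Field whose values are mapped
    field = "status"
    ## Optional destination field; omit it to overwrite the source field
    dest = "status_code"
    [processors.enum.mapping.value_mappings]
      red = 0
      amber = 1
      green = 2
```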
### Override
Plugin ID: `override`
The [Override processor plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/processors/override/README.md) allows overriding all modifications that are supported by input plugins and aggregator plugins:
* `name_override`
* `name_prefix`
* `name_suffix`
* tags
All metrics passing through this processor will be modified accordingly. Select the metrics to modify using the standard measurement filtering options.
Values of `name_override`, `name_prefix`, `name_suffix`, and already present tags with conflicting keys will be overwritten. Absent tags will be created.
Use cases of this plugin include ensuring that certain tags or naming conventions are adhered to irrespective of input plugin configurations, e.g., ones set by `taginclude`. A configuration sketch follows.
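A sketch using the modifications listed above; the prefix and tag values are illustrative.

```toml
[[processors.override]]
  ## Prepend this prefix to every measurement name
  name_prefix = "web_"
  ## Tags applied to every metric passing through the processor
  [processors.override.tags]
    region = "us-east-1"
```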
### Parser
Plugin ID: `parser`
The [Parser processor plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/processors/parser/README.md) parses defined fields containing the specified data format and creates new metrics based on the contents of the field.
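A sketch that parses a JSON-encoded field; the field name is illustrative.

```toml
[[processors.parser]]
  ## Fields whose contents are parsed
  parse_fields = ["message"]
  ## Merge parsed fields into the original metric rather than emitting a new one
  merge = "override"
  ## Data format the field contents are parsed as
  data_format = "json"
```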
### Printer
Plugin ID: `printer`
The [Printer processor plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/processors/printer/README.md) simply prints every metric passing through it.
### Regex
Plugin ID: `regex`
The [Regex processor plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/processors/regex/README.md) transforms tag and field values using a regular expression (regex) pattern. If the `result_key` parameter is present, the plugin can produce new tags and fields from existing ones.
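A sketch that rewrites an HTTP response code tag such as `404` to `4xx`; the tag key is illustrative.

```toml
[[processors.regex]]
  [[processors.regex.tags]]
    key = "resp_code"
    pattern = "^(\\d)\\d\\d$"
    replacement = "${1}xx"
```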
### Rename
Plugin ID: `rename`
The [Rename processor plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/processors/rename/README.md) renames InfluxDB measurements, fields, and tags.
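A sketch of measurement and tag renames; the `replace` sub-table shown follows the current plugin README, so check the 1.10 README in case the option names differ.

```toml
[[processors.rename]]
  ## Rename a measurement
  [[processors.rename.replace]]
    measurement = "network_interface_throughput"
    dest = "throughput"
  ## Rename a tag
  [[processors.rename.replace]]
    tag = "hostname"
    dest = "host"
```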
### Strings
Plugin ID: `strings`
The [Strings processor plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/processors/strings/README.md) applies a set of Go string functions to InfluxDB measurement, tag, and field values. Values can be modified in place or stored in another key.
Implemented functions are:
* `lowercase`
* `uppercase`
* `trim`
* `trim_left`
* `trim_right`
* `trim_prefix`
* `trim_suffix`
Note that in this implementation the functions are processed in the order they appear above. In each section, specify the `measurement`, `tag`, or `field` to process and, optionally, a `dest` key if you want the result stored in a new tag or field. Many transformations can be applied with a single `strings` processor, as the sketch below shows.
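A sketch applying two of the functions above; the tag, field, and prefix values are illustrative.

```toml
[[processors.strings]]
  ## Lowercase the "method" tag in place
  [[processors.strings.lowercase]]
    tag = "method"
  ## Trim a prefix from a field and store the result in a new field
  [[processors.strings.trim_prefix]]
    field = "message"
    prefix = "ERROR: "
    dest = "message_text"
```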
### TopK
Plugin ID: `topk`
The [TopK processor plugin](https://github.com/influxdata/telegraf/blob/release-1.10/plugins/processors/topk/README.md) is a filter designed to return the top series over a period of time. The length of the computation window can be tuned so that spikes are smoothed out.
This processor goes through the following steps when processing a batch of metrics:
1. Groups metrics in buckets using their tags and name as key.
2. Aggregates each of the selected fields for each bucket by the selected aggregation function (sum, mean, etc.).
3. Orders the buckets by one of the generated aggregations and returns all metrics in the top `K` buckets, then repeats the ordering and selection with each remaining aggregation until it runs out of fields.
The plugin makes sure not to duplicate metrics.
Note that, depending on the number of metrics in each computed bucket, more than `K` metrics may be returned. A configuration sketch follows.
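A sketch of a TopK configuration; the grouping tag and field names are illustrative, and the plugin README documents the remaining options.

```toml
[[processors.topk]]
  ## Aggregation window, in seconds
  period = 10
  ## Number of top buckets to return
  k = 10
  ## Tags used to group metrics into buckets
  group_by = ["interface"]
  ## Fields to aggregate and rank on
  fields = ["bytes_recv"]
  ## Aggregation function applied to each field (for example, mean)
  aggregation = "mean"
```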