Add config options back to 1.22 documentation (#4026)

* Add config option back; fix broken links

* fix latest links

Co-authored-by: lwandzura <51929958+lwandzura@users.noreply.github.com>
pull/4036/head
noramullen1 2022-05-17 17:54:32 -07:00 committed by GitHub
parent 082cdf8f9e
commit a968a9aae8
2 changed files with 525 additions and 1 deletions


@@ -0,0 +1,474 @@
---
title: Configuration options
description: Overview of the Telegraf configuration file, enabling plugins, and setting environment variables.
menu:
telegraf_1_22_ref:
name: Configuration options
weight: 20
---
The Telegraf configuration file (`telegraf.conf`) lists all available Telegraf plugins. See the current version here: [telegraf.conf](https://github.com/influxdata/telegraf/blob/master/etc/telegraf.conf).
> To quickly get started with Telegraf, see [Get started](/telegraf/v1.22/get_started/).
## Generate a configuration file
A default Telegraf configuration file can be auto-generated by Telegraf:
```sh
telegraf config > telegraf.conf
```
To generate a configuration file with specific inputs and outputs, you can use the
`--input-filter` and `--output-filter` flags:
```sh
telegraf --input-filter cpu:mem:net:swap --output-filter influxdb:kafka config
```
## Configuration file locations
Use the `--config` flag to specify the configuration file location:
- Filename and path, for example: `--config /etc/default/telegraf`
- Remote URL endpoint, for example: `--config "http://remote-URL-endpoint"`
Use the `--config-directory` flag to include files ending with `.conf` in the specified directory in the Telegraf
configuration.
On most systems, the default locations are `/etc/telegraf/telegraf.conf` for
the main configuration file and `/etc/telegraf/telegraf.d` for the directory of
configuration files.
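For example, a sketch that loads a main configuration file plus a directory of `.conf` files (the paths are illustrative; adjust them to your installation):
```sh
# Load the main configuration file and every .conf file in telegraf.d
telegraf --config /etc/telegraf/telegraf.conf \
  --config-directory /etc/telegraf/telegraf.d
```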
## Set environment variables
Add environment variables anywhere in the configuration file by prepending them with `$`.
For strings, variables must be in quotes (for example, `"$STR_VAR"`).
For numbers and Booleans, variables must be unquoted (for example, `$INT_VAR`, `$BOOL_VAR`).
You can also set environment variables using the Linux `export` command: `export password=mypassword`
> **Note:** We recommend using environment variables for sensitive information.
### Example: Telegraf environment variables
In the Telegraf environment variables file (`/etc/default/telegraf`):
```sh
USER="alice"
INFLUX_URL="http://localhost:8086"
INFLUX_SKIP_DATABASE_CREATION="true"
INFLUX_PASSWORD="monkey123"
```
In the Telegraf configuration file (`/etc/telegraf/telegraf.conf`):
```sh
[global_tags]
user = "${USER}"
[[inputs.mem]]
[[outputs.influxdb]]
urls = ["${INFLUX_URL}"]
skip_database_creation = ${INFLUX_SKIP_DATABASE_CREATION}
password = "${INFLUX_PASSWORD}"
```
The environment variables above add the following configuration settings to Telegraf:
```sh
[global_tags]
user = "alice"
[[outputs.influxdb]]
urls = ["http://localhost:8086"]
skip_database_creation = true
password = "monkey123"
```
## Global tags
Global tags can be specified in the `[global_tags]` section of the config file
in `key="value"` format. All metrics being gathered on this host will be tagged
with the tags specified here.
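For example, a minimal `[global_tags]` section (the tag keys and values are illustrative):
```toml
[global_tags]
  dc = "us-east-1"   # every metric gathered on this host is tagged dc=us-east-1
  rack = "1a"
```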
## Agent configuration
Telegraf has a few options you can configure under the `[agent]` section of the
configuration file; a sample `[agent]` section follows the list below.
* **interval**: Default data collection interval for all inputs
* **round_interval**: Rounds collection interval to `interval`.
For example, if `interval` is set to 10s then always collect on :00, :10, :20, etc.
* **metric_batch_size**: Telegraf will send metrics to output in batch of at
most `metric_batch_size` metrics.
* **metric_buffer_limit**: Telegraf will cache `metric_buffer_limit` metrics
for each output, and will flush this buffer on a successful write.
This should be a multiple of `metric_batch_size` and should not be less
than 2 times `metric_batch_size`.
* **collection_jitter**: Collection jitter is used to jitter
the collection by a random amount.
Each plugin will sleep for a random time within jitter before collecting.
This can be used to avoid many plugins querying things like sysfs at the
same time, which can have a measurable effect on the system.
* **flush_interval**: Default data flushing interval for all outputs.
You should not set this below `interval`.
The maximum `flush_interval` is `flush_interval` + `flush_jitter`.
* **flush_jitter**: Jitter the flush interval by a random amount.
This is primarily to avoid
large write spikes for users running a large number of Telegraf instances.
For example, a `flush_jitter` of 5s and `flush_interval` of 10s means flushes will happen every 10-15s.
* **precision**: Collected metrics are rounded to the precision specified as an
`interval` (integer + unit, for example, `1ns`, `1us`, `1ms`, and `1s`). Precision is NOT
used for service inputs, such as `logparser` and `statsd`.
* **debug**: Run Telegraf in debug mode.
* **quiet**: Run Telegraf in quiet mode (error messages only).
* **logtarget**: Control the destination for logs. Can be one of "file",
"stderr" or, on Windows, "eventlog". When set to "file", the output file is
determined by the "logfile" setting.
* **logfile**: Name of the file to write logs to when using the "file" logtarget. If set
to the empty string, logs are written to stderr.
* **logfile_rotation_interval**: Rotates the logfile after the specified time interval. When
set to 0, no time-based rotation is performed.
* **logfile_rotation_max_size**: Rotates the logfile when it becomes larger than the specified
size. When set to 0, no size-based rotation is performed.
* **logfile_rotation_max_archives**: Maximum number of rotated archives to keep; any
older logs are deleted. If set to -1, no archives are removed.
* **log_with_timezone**: Set a timezone to use when logging or type 'local' for local time. Example: 'America/Chicago'.
[See this page for options/formats.](https://socketloop.com/tutorials/golang-display-list-of-timezones-with-gmt)
* **hostname**: Override the default hostname; if empty, `os.Hostname()` is used.
* **omit_hostname**: If true, do not set the `host` tag in the Telegraf agent.
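The following sketch shows a sample `[agent]` section combining several of the options above (the values are illustrative, not recommendations):
```toml
[agent]
  interval = "10s"             # collect from all inputs every 10s
  round_interval = true        # collect on :00, :10, :20, ...
  metric_batch_size = 1000     # write metrics in batches of at most 1000
  metric_buffer_limit = 10000  # buffer up to 10000 unsent metrics per output
  collection_jitter = "0s"
  flush_interval = "10s"       # flush outputs every 10s
  flush_jitter = "5s"          # ...plus a random 0-5s jitter
  precision = "1s"
  hostname = ""                # empty: use os.Hostname()
  omit_hostname = false
```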
## Input configuration
The following config parameters are available for all inputs (a brief example follows the list):
* **alias**: Name an instance of a plugin.
* **interval**: How often to gather this metric. Normal plugins use a single
global interval, but you can configure an individual input to run less or more often here.
Increasing `interval` reduces the rate of data written, which can help you stay under data-in rate limits.
* **precision**: Overrides the `precision` setting of the agent. Collected
metrics are rounded to the precision specified as an `interval`. When this value is
set on a service input (for example, `statsd`), multiple events occurring at the same
timestamp may be merged by the output database.
* **collection_jitter**: Overrides the `collection_jitter` setting of the agent.
Collection jitter is used to jitter the collection by a random `interval`.
* **name_override**: Override the base name of the measurement.
(Default is the name of the input).
* **name_prefix**: Specifies a prefix to attach to the measurement name.
* **name_suffix**: Specifies a suffix to attach to the measurement name.
* **tags**: A map of tags to apply to a specific input's measurements.
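For example, a sketch of per-input overrides using the `cpu` input (the alias, interval, and tag values are illustrative):
```toml
[[inputs.cpu]]
  alias = "cpu_slow"     # name this instance of the plugin
  interval = "60s"       # gather this input less often than the agent default
  name_prefix = "dc1_"   # emit measurements as dc1_cpu
  # The tags table must come last in the plugin definition.
  [inputs.cpu.tags]
    team = "infra"       # applied only to this input's measurements
```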
## Output configuration
* **alias**: Name an instance of a plugin.
* **flush_interval**: Maximum time between flushes. Use this setting to
override the agent `flush_interval` on a per plugin basis.
* **flush_jitter**: Amount of time to jitter the flush interval. Use this
setting to override the agent `flush_jitter` on a per plugin basis.
* **metric_batch_size**: Maximum number of metrics to send at once. Use
this setting to override the agent `metric_batch_size` on a per plugin basis.
* **metric_buffer_limit**: Maximum number of unsent metrics to buffer.
Use this setting to override the agent `metric_buffer_limit` on a per plugin basis.
* **name_override**: Override the base name of the measurement.
(Default is the name of the output).
* **name_prefix**: Specifies a prefix to attach to the measurement name.
* **name_suffix**: Specifies a suffix to attach to the measurement name.
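For example, a sketch of per-output overrides on an InfluxDB output (the URL and values are placeholders):
```toml
[[outputs.influxdb]]
  alias = "influxdb-primary"       # name this instance of the plugin
  urls = ["http://localhost:8086"]
  flush_interval = "30s"           # override the agent flush_interval for this output only
  metric_batch_size = 500          # override the agent metric_batch_size for this output only
```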
## Aggregator configuration
The following config parameters are available for all aggregators:
* **alias**: Name an instance of a plugin.
* **period**: The period on which to flush & clear each aggregator. All metrics
that are sent with timestamps outside of this period will be ignored by the
aggregator.
* **delay**: The delay before each aggregator is flushed. This controls
how long aggregators wait to receive metrics from input plugins, in the case that
aggregators are flushing and inputs are gathering on the same interval.
* **grace**: The duration for which metrics are still aggregated by the plugin
even though they're outside of the aggregation period. This setting is needed
when the agent is expected to receive late metrics, so they can be
rolled into the next aggregation period.
* **drop_original**: If true, the original metric will be dropped by the
aggregator and will not get sent to the output plugins.
* **name_override**: Override the base name of the measurement.
(Default is the name of the input).
* **name_prefix**: Specifies a prefix to attach to the measurement name.
* **name_suffix**: Specifies a suffix to attach to the measurement name.
* **tags**: A map of tags to apply to a specific input's measurements.
## Processor configuration
The following config parameters are available for all processors (a brief example appears at the end of this section):
* **alias**: Name an instance of a plugin.
* **order**: The order in which processors are executed. If not specified,
processor execution order is random.
The [metric filtering](#metric-filtering) parameters can be used to limit what metrics are
handled by the processor. Excluded metrics are passed downstream to the next
processor.
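For example, a minimal sketch using the `strings` processor plugin (the tag name and filter pattern are illustrative):
```toml
[[processors.strings]]
  order = 1               # run this processor before higher-numbered processors
  namepass = ["http_*"]   # only handle metrics whose name starts with "http_"
  # Lowercase the value of the "method" tag on matching metrics.
  [[processors.strings.lowercase]]
    tag = "method"
```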
## Metric filtering
Filters can be configured per input, output, processor, or aggregator;
see below for examples.
* **namepass**:
An array of glob pattern strings. Only points whose measurement name matches
a pattern in this list are emitted.
* **namedrop**:
The inverse of `namepass`. If a match is found the point is discarded. This
is tested on points after they have passed the `namepass` test.
* **fieldpass**:
An array of glob pattern strings. Only fields whose field key matches a
pattern in this list are emitted.
* **fielddrop**:
The inverse of `fieldpass`. Fields with a field key matching one of the
patterns will be discarded from the point.
* **tagpass**:
A table mapping tag keys to arrays of glob pattern strings. Only points
that contain a tag key in the table and a tag value matching one of its
patterns are emitted.
* **tagdrop**:
The inverse of `tagpass`. If a match is found the point is discarded. This
is tested on points after they have passed the `tagpass` test.
* **taginclude**:
An array of glob pattern strings. Only tags with a tag key matching one of
the patterns are emitted. In contrast to `tagpass`, which will pass an entire
point based on its tag, `taginclude` removes all non-matching tags from the
point. This filter can be used on both inputs & outputs, but it is
_recommended_ to be used on inputs, as it is more efficient to filter out tags
at the ingestion point.
* **tagexclude**:
The inverse of `taginclude`. Tags with a tag key matching one of the patterns
will be discarded from the point.
**NOTE** Due to the way TOML is parsed, `tagpass` and `tagdrop` parameters
must be defined at the _end_ of the plugin definition, otherwise subsequent
plugin config options will be interpreted as part of the tagpass/tagdrop
tables.
## Examples
#### Input configuration examples
This is a full working config that will output CPU data to an InfluxDB instance
at `192.168.59.103:8086`, tagging measurements with `dc="denver-1"`. It will output
measurements at a 10s interval and will collect per-cpu data, dropping any
fields which begin with `time_`.
```toml
[global_tags]
dc = "denver-1"
[agent]
interval = "10s"
# OUTPUTS
[[outputs.influxdb]]
urls = ["http://192.168.59.103:8086"] # required.
database = "telegraf" # required.
precision = "1s"
# INPUTS
[[inputs.cpu]]
percpu = true
totalcpu = false
# filter all fields beginning with 'time_'
fielddrop = ["time_*"]
```
#### Input Config: `tagpass` and `tagdrop`
**NOTE** `tagpass` and `tagdrop` parameters must be defined at the _end_ of
the plugin definition, otherwise subsequent plugin config options will be
interpreted as part of the tagpass/tagdrop map.
```toml
[[inputs.cpu]]
percpu = true
totalcpu = false
fielddrop = ["cpu_time"]
# Don't collect CPU data for cpu6 & cpu7
[inputs.cpu.tagdrop]
cpu = [ "cpu6", "cpu7" ]
[[inputs.disk]]
[inputs.disk.tagpass]
# tagpass conditions are OR, not AND.
# If the (filesystem is ext4 or xfs) OR (the path is /opt or /home)
# then the metric passes
fstype = [ "ext4", "xfs" ]
# Globs can also be used on the tag values
path = [ "/opt", "/home*" ]
```
#### Input Config: `fieldpass` and `fielddrop`
```toml
# Drop all metrics for guest & steal CPU usage
[[inputs.cpu]]
percpu = false
totalcpu = true
fielddrop = ["usage_guest", "usage_steal"]
# Only store inode related metrics for disks
[[inputs.disk]]
fieldpass = ["inodes*"]
```
#### Input Config: `namepass` and `namedrop`
```toml
# Drop all metrics about containers for kubelet
[[inputs.prometheus]]
urls = ["http://kube-node-1:4194/metrics"]
namedrop = ["container_*"]
# Only store rest client related metrics for kubelet
[[inputs.prometheus]]
urls = ["http://kube-node-1:4194/metrics"]
namepass = ["rest_client_*"]
```
#### Input Config: `taginclude` and `tagexclude`
```toml
# Only include the "cpu" tag in the measurements for the cpu plugin.
[[inputs.cpu]]
percpu = true
totalcpu = true
taginclude = ["cpu"]
# Exclude the `fstype` tag from the measurements for the disk plugin.
[[inputs.disk]]
tagexclude = ["fstype"]
```
#### Input config: `prefix`, `suffix`, and `override`
This plugin will emit measurements with the name `cpu_total`.
```toml
[[inputs.cpu]]
name_suffix = "_total"
percpu = false
totalcpu = true
```
This will emit measurements with the name `foobar`.
```toml
[[inputs.cpu]]
name_override = "foobar"
percpu = false
totalcpu = true
```
#### Input config: tags
This plugin will emit measurements with two additional tags: `tag1=foo` and
`tag2=bar`.
NOTE: Order matters, the `[inputs.cpu.tags]` table must be at the _end_ of the
plugin definition.
```toml
[[inputs.cpu]]
percpu = false
totalcpu = true
[inputs.cpu.tags]
tag1 = "foo"
tag2 = "bar"
```
#### Multiple inputs of the same type
Additional inputs (or outputs) of the same type can be specified by defining these instances in the configuration file. To avoid measurement collisions, use the `name_override`, `name_prefix`, or `name_suffix` config options:
```toml
[[inputs.cpu]]
percpu = false
totalcpu = true
[[inputs.cpu]]
percpu = true
totalcpu = false
name_override = "percpu_usage"
fielddrop = ["cpu_time*"]
```
#### Output configuration examples
```toml
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf"
precision = "1s"
# Drop all measurements that start with "aerospike"
namedrop = ["aerospike*"]
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf-aerospike-data"
precision = "1s"
# Only accept aerospike data:
namepass = ["aerospike*"]
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf-cpu0-data"
precision = "1s"
# Only store measurements where the tag "cpu" matches the value "cpu0"
[outputs.influxdb.tagpass]
cpu = ["cpu0"]
```
#### Aggregator configuration examples
This will collect and emit the min/max of the system load1 metric every
30s, dropping the originals.
```toml
[[inputs.system]]
fieldpass = ["load1"] # collects system load1 metric.
[[aggregators.minmax]]
period = "30s" # send & clear the aggregate every 30s.
drop_original = true # drop the original metrics.
[[outputs.file]]
files = ["stdout"]
```
This will collect and emit the min/max of the swap metrics every
30s, dropping the originals. The aggregator will not be applied
to the system load metrics due to the `namepass` parameter.
```toml
[[inputs.swap]]
[[inputs.system]]
fieldpass = ["load1"] # collects system load1 metric.
[[aggregators.minmax]]
period = "30s" # send & clear the aggregate every 30s.
drop_original = true # drop the original metrics.
namepass = ["swap"] # only "pass" swap metrics through the aggregator.
[[outputs.file]]
files = ["stdout"]
```
To learn more about configuring the Telegraf agent, watch the following video:
{{< youtube txUcAxMDBlQ >}}


@@ -32,10 +32,12 @@ For details on `filter` and other flags, see [Telegraf commands and flags](/tele
* Linux debian and RPM packages: `/etc/telegraf/telegraf.conf`
* Standalone Binary: see the next section for how to create a configuration file
> **Note:** You can also specify a remote URL endpoint to pull a configuration file from. See [Configuration file locations](/telegraf/v1.15/administration/configuration/#configuration-file-locations).
> **Note:** You can also specify a remote URL endpoint to pull a configuration file from. See [Configuration file locations](/telegraf/v1.22/configuration/#configuration-file-locations).
3. Edit the configuration file using `vim` or a text editor. Because this example uses [InfluxDB V2 output plugin](https://github.com/influxdata/telegraf/blob/release-1.21/plugins/outputs/influxdb_v2/README.md), we need to add the InfluxDB URL, authentication token, organization, and bucket details to this section of the configuration file.
> **Note:** For more configuration file options, see [Configuration options](/telegraf/v1.22/configuration/).
4. For this example, specify two inputs (`cpu` and `mem`) with the `--input-filter` flag.
Specify InfluxDB as the output with the `--output-filter` flag.
@@ -45,6 +47,54 @@ telegraf --sample-config --input-filter cpu:mem --output-filter influxdb_v2 > te
The resulting configuration collects CPU and memory data and sends it to InfluxDB V2.
## Set environment variables
Add environment variables anywhere in the configuration file by prepending them with `$`.
For strings, variables must be in quotes (for example, `"$STR_VAR"`).
For numbers and Booleans, variables must be unquoted (for example, `$INT_VAR`, `$BOOL_VAR`).
You can also set environment variables using the Linux `export` command: `export password=mypassword`
> **Note:** We recommend using environment variables for sensitive information.
### Example: Telegraf environment variables
In the Telegraf environment variables file (`/etc/default/telegraf`):
```sh
USER="alice"
INFLUX_URL="http://localhost:8086"
INFLUX_SKIP_DATABASE_CREATION="true"
INFLUX_PASSWORD="monkey123"
```
In the Telegraf configuration file (`/etc/telegraf/telegraf.conf`):
```sh
[global_tags]
user = "${USER}"
[[inputs.mem]]
[[outputs.influxdb]]
urls = ["${INFLUX_URL}"]
skip_database_creation = ${INFLUX_SKIP_DATABASE_CREATION}
password = "${INFLUX_PASSWORD}"
```
The environment variables above add the following configuration settings to Telegraf:
```sh
[global_tags]
user = "alice"
[[outputs.influxdb]]
urls = ["http://localhost:8086"]
skip_database_creation = true
password = "monkey123"
```
## Start Telegraf
Next, you need to start the Telegraf service and direct it to your configuration file:
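For example, a minimal sketch that points Telegraf at the configuration file created above (the path is a placeholder; on systems with a service manager, start the `telegraf` service instead):
```sh
telegraf --config /etc/telegraf/telegraf.conf
```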