docs(telegraf): add missing serializer documentation (#6654)

* docs(telegraf): add missing serializer documentation

Add documentation for missing output data formats (serializers):
- binary: Binary protocol serialization with configurable entries
- cloudevents: CloudEvents JSON format (v0.3 and v1.0)
- csv: Comma-separated values with configurable columns
- prometheus: Prometheus text exposition format
- prometheusremotewrite: Prometheus protobuf for remote write
- wavefront: Wavefront data format

Also fixes:
- Rename messagepack.md to msgpack.md to match Telegraf source

This completes the serializer documentation coverage.

* Apply suggestion from @jstirnaman

* Apply suggestion from @jstirnaman

* docs(telegraf): clarify prometheusremotewrite example shows logical representation (#6661)

* Initial plan

* docs(telegraf): clarify prometheusremotewrite example shows logical representation

Co-authored-by: jstirnaman <212227+jstirnaman@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: jstirnaman <212227+jstirnaman@users.noreply.github.com>

* Update content/telegraf/v1/data_formats/output/binary.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* docs: improve prometheus serializer warnings and clarify configuration options (#6660)

* hotfix(influxdb3): fix duplicate menu entry for Enterprise security page

Change menu key from influxdb3_core to influxdb3_enterprise.

* chore(influxdb3) Security: style and cleanup: intro, requirements, callouts

* fix(influxdb3): restore clean install.md from jdstrand branch

Remove duplicate content and fix malformed code blocks introduced
during rebase conflict resolution.

* docs(telegraf): add design plan for batch format documentation

Defines documentation changes to clarify:
- Output plugin vs serializer relationship
- use_batch_format option location and purpose
- Histogram/summary handling with prometheus_client
- Choosing the right output approach

* docs(telegraf): clarify output plugins, serializers, and batch format

- Add "How output plugins use serializers" section explaining the
  relationship between output plugins and data formats
- Add "Choosing an output approach" guidance by destination and metric type
- Create prometheus serializer doc with histogram/summary guidance
- Add "Use this plugin for..." sections to prometheus_client
- Add Data formats subsection to configuration.md
- Expand output plugins index with serializer relationship

Addresses confusion about use_batch_format being an output plugin option
rather than a serializer option, and provides clear guidance on when to
use prometheus_client vs the prometheus serializer.

---------

Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>

---------

Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Jason Stirnaman 2025-12-23 11:09:07 -05:00 committed by GitHub
parent 2ab200d78b
commit 6090142bb3
11 changed files with 655 additions and 24 deletions


@@ -512,6 +512,26 @@ The following config parameters are available for all inputs:
- **name_prefix**: Specifies a prefix to attach to the measurement name.
- **name_suffix**: Specifies a suffix to attach to the measurement name.
### Data formats
Some output plugins support the `data_format` option, which specifies a serializer
to convert metrics before writing.
Common serializers include `json`, `influx`, `prometheus`, and `csv`.
Output plugins that support serializers may also offer `use_batch_format`, which
controls whether the serializer receives metrics individually or as a batch.
Batch mode enables more efficient encoding for formats like JSON arrays.
```toml
[[outputs.file]]
files = ["stdout"]
data_format = "json"
use_batch_format = true
```
For available serializers and configuration options, see
[output data formats](/telegraf/v1/data_formats/output/).
## Aggregator configuration
The following config parameters are available for all aggregators:


@@ -1,18 +1,41 @@
---
title: Write data with output plugins
description: |
Output plugins define where Telegraf delivers the collected metrics.
menu:
telegraf_v1:
name: Output plugins
weight: 20
parent: Configure plugins
related:
- /telegraf/v1/output-plugins/
- /telegraf/v1/data_formats/output/
---
Output plugins define where Telegraf delivers collected metrics.
Send metrics to InfluxDB or to a variety of other datastores, services, and
message queues, including Graphite, OpenTSDB, Datadog, Kafka, MQTT, and NSQ.
For a complete list of output plugins and links to their detailed configuration
options, see [output plugins](/telegraf/v1/output-plugins/).
## Output plugins and data formats
Output plugins control *where* metrics go.
Many output plugins also support *data formats* (serializers) that control how
metrics are formatted before writing.
Configure a serializer using the `data_format` option in your output plugin:
```toml
[[outputs.http]]
url = "http://example.com/metrics"
data_format = "json"
```
Some output plugins (like `influxdb_v2` or `prometheus_client`) use a fixed
format and don't support `data_format`.
Others (like `file`, `http`, `kafka`) support multiple serializers.
For available serializers and their options, see
[output data formats](/telegraf/v1/data_formats/output/).


@@ -7,25 +7,73 @@ menu:
name: Output data formats
weight: 1
parent: Data formats
related:
- /telegraf/v1/configure_plugins/output_plugins/
- /telegraf/v1/configuration/
---
Telegraf uses **serializers** to convert metrics into output data formats.
Many [output plugins](/telegraf/v1/configure_plugins/output_plugins/) support the `data_format` option, which lets you choose
how metrics are formatted before writing.
- [How output plugins use serializers](#how-output-plugins-use-serializers)
- [Choosing an output approach](#choosing-an-output-approach)
- [Available serializers](#available-serializers)
## How output plugins use serializers
When you configure `data_format` in an output plugin, Telegraf uses a **serializer**
to convert metrics into that format before writing.
The output plugin controls *where* data goes; the serializer controls *how* it's formatted.
Some output plugins support `use_batch_format`, which changes how the serializer
processes metrics.
When enabled, the serializer receives all metrics in a batch together rather than
one at a time, enabling more efficient encoding and formats that represent multiple
metrics as a unit (like JSON arrays).
```toml
[[outputs.file]]
  ## Files to write to, "stdout" is a specially handled file.
  files = ["stdout"]

  ## Output plugin option: process metrics as a batch
  use_batch_format = true

  ## Serializer selection: format metrics as JSON
  data_format = "json"
```
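As a rough illustration of the difference (a sketch, not Telegraf's serializer code), batch mode lets a serializer wrap the whole batch in a single document, while individual mode emits one document per metric:

```python
import json

metrics = [
    {"name": "cpu", "tags": {"host": "a"}, "fields": {"usage": 1.2}, "timestamp": 1640000000},
    {"name": "mem", "tags": {"host": "a"}, "fields": {"used": 42}, "timestamp": 1640000000},
]

# Without use_batch_format: the serializer is called once per metric,
# producing one JSON document per metric.
individual = [json.dumps(m) for m in metrics]

# With use_batch_format = true: the serializer receives the batch and
# can emit a single document wrapping all metrics (roughly the shape
# Telegraf's json serializer produces in batch mode).
batch = json.dumps({"metrics": metrics})
```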
Output plugins that support `use_batch_format`:
`file`, `http`, `amqp`, `kafka`, `nats`, `mqtt`, `exec`, `execd`, `remotefile`
## Choosing an output approach
### By destination
| Destination | Recommended Approach |
|-------------|---------------------|
| **Prometheus scraping** | [`prometheus_client`](/telegraf/v1/output-plugins/prometheus_client/) output plugin (exposes `/metrics` endpoint) |
| **InfluxDB** | [`influxdb`](/telegraf/v1/output-plugins/influxdb/) or [`influxdb_v2`](/telegraf/v1/output-plugins/influxdb_v2/) output plugin (native protocol) |
| **Remote HTTP endpoints** | [`http`](/telegraf/v1/output-plugins/http/) output + serializer |
| **Files** | [`file`](/telegraf/v1/output-plugins/file/) output + serializer |
| **Message queues** | [`kafka`](/telegraf/v1/output-plugins/kafka/), [`nats`](/telegraf/v1/output-plugins/nats/), [`amqp`](/telegraf/v1/output-plugins/amqp/) + serializer |
### By metric type
Some metric types require state across collection intervals:
- **Histograms** accumulate observations into buckets
- **Summaries** track quantiles over a sliding window
Serializers process each batch independently and cannot maintain this state.
When a histogram or summary spans multiple batches, the serializer may produce
incomplete or incorrect output.
For these metric types, use a dedicated output plugin that maintains state. For example:
- **Prometheus metrics**: Use [`prometheus_client`](/telegraf/v1/output-plugins/prometheus_client/)
instead of the prometheus serializer
## Available serializers
{{< children >}}


@@ -0,0 +1,80 @@
---
title: Binary output data format
list_title: Binary
description: Use the `binary` output data format (serializer) to serialize Telegraf metrics into binary protocols using user-specified configurations.
menu:
telegraf_v1_ref:
name: Binary
weight: 10
parent: Output data formats
identifier: output-data-format-binary
---
Use the `binary` output data format (serializer) to serialize Telegraf metrics into binary protocols using user-specified configurations.
## Configuration
```toml
[[outputs.socket_writer]]
address = "tcp://127.0.0.1:54000"
metric_batch_size = 1
## Data format to output.
data_format = "binary"
## Specify the endianness of the data.
## Available values are "little" (little-endian), "big" (big-endian) and "host",
## where "host" means the same endianness as the machine running Telegraf.
# endianness = "host"
## Definition of the message format and the serialized data.
## Please note that you need to define all elements of the data in the
## correct order as the binary format is position-dependent.
##
## Entry properties:
## read_from -- Source of the data: "field", "tag", "time" or "name".
## Defaults to "field" if omitted.
## name -- Name of the field or tag. Can be omitted for "time" and "name".
## data_format -- Target data-type: "int8/16/32/64", "uint8/16/32/64",
## "float32/64", "string".
## For time: "unix" (default), "unix_ms", "unix_us", "unix_ns".
## string_length -- Length of the string in bytes (for "string" type only).
## string_terminator -- Terminator for strings: "null", "0x00", etc.
## Defaults to "null" for strings.
entries = [
{ read_from = "name", data_format = "string", string_length = 32 },
{ read_from = "tag", name = "host", data_format = "string", string_length = 64 },
{ read_from = "field", name = "value", data_format = "float64" },
{ read_from = "time", data_format = "unix_ns" },
]
```
### Configuration options
| Option | Type | Description |
|--------|------|-------------|
| `endianness` | string | Byte order: `"little"`, `"big"`, or `"host"` (default) |
| `entries` | array | Ordered list of data elements to serialize |
### Entry properties
Each entry in the `entries` array defines how to serialize a piece of metric data:
| Property | Type | Description |
|----------|------|-------------|
| `read_from` | string | Data source: `"field"`, `"tag"`, `"time"`, or `"name"` |
| `name` | string | Field or tag name (required for `"field"` and `"tag"`) |
| `data_format` | string | Target type: `"int8/16/32/64"`, `"uint8/16/32/64"`, `"float32/64"`, `"string"` |
| `string_length` | integer | Fixed string length in bytes (for `"string"` type) |
| `string_terminator` | string | String terminator: `"null"`, `"0x00"`, etc. |
## Type conversion
If the original field type differs from the target type, the serializer converts the value.
A warning is logged if the conversion may cause loss of precision.
## String handling
For string fields:
- If the string is longer than `string_length`, it is truncated so that the string and its terminator together fit within `string_length` bytes
- If the string is shorter than `string_length`, it is padded with the terminator character so that the string and its terminator together occupy `string_length` bytes
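The fixed-width layout produced by the example configuration above can be sketched with Python's standard `struct` module (an illustration of the byte layout, not Telegraf's implementation; native byte order stands in for `endianness = "host"`):

```python
import struct

def pad(s: str, length: int) -> bytes:
    # Truncate so the string plus its null terminator fit within
    # string_length bytes, then pad with the terminator.
    raw = s.encode()[: length - 1] + b"\x00"
    return raw.ljust(length, b"\x00")

# Mirrors the entries list above: 32-byte name, 64-byte host tag,
# float64 field value, uint64 unix_ns timestamp.
payload = (
    pad("cpu", 32)
    + pad("server01", 64)
    + struct.pack("=d", 98.5)
    + struct.pack("=Q", 1640000000000000000)
)
assert len(payload) == 32 + 64 + 8 + 8  # position-dependent, fixed size
```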


@@ -0,0 +1,68 @@
---
title: CloudEvents output data format
list_title: CloudEvents
description: Use the `cloudevents` output data format (serializer) to format Telegraf metrics as CloudEvents in JSON format.
menu:
telegraf_v1_ref:
name: CloudEvents
weight: 10
parent: Output data formats
identifier: output-data-format-cloudevents
---
Use the `cloudevents` output data format (serializer) to format Telegraf metrics as [CloudEvents](https://cloudevents.io) in [JSON format](https://github.com/cloudevents/spec/blob/v1.0/json-format.md).
Versions v1.0 and v0.3 of the CloudEvents specification are supported, with v1.0 as the default.
## Configuration
```toml
[[outputs.file]]
files = ["stdout", "/tmp/metrics.out"]
## Data format to output.
data_format = "cloudevents"
## Specification version to use for events.
## Supported versions: "0.3" and "1.0" (default).
# cloudevents_version = "1.0"
## Event source specifier.
## Overwrites the source header field with the given value.
# cloudevents_source = "telegraf"
## Tag to use as event source specifier.
## Overwrites the source header field with the value of the specified tag.
## Takes precedence over 'cloudevents_source' if both are set.
## Falls back to 'cloudevents_source' if the tag doesn't exist for a metric.
# cloudevents_source_tag = ""
## Event-type specifier to overwrite the default value.
## Default for single metric: 'com.influxdata.telegraf.metric'
## Default for batch: 'com.influxdata.telegraf.metrics' (plural)
# cloudevents_event_type = ""
## Set time header of the event.
## Supported values:
## none -- do not set event time
## earliest -- use timestamp of the earliest metric
## latest -- use timestamp of the latest metric
# cloudevents_time = "earliest"
```
### Configuration options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `cloudevents_version` | string | `"1.0"` | CloudEvents specification version (`"0.3"` or `"1.0"`) |
| `cloudevents_source` | string | `"telegraf"` | Event source identifier |
| `cloudevents_source_tag` | string | `""` | Tag to use as source (overrides `cloudevents_source`) |
| `cloudevents_event_type` | string | auto | Event type (auto-detected based on single/batch) |
| `cloudevents_time` | string | `"earliest"` | Event timestamp: `"none"`, `"earliest"`, or `"latest"` |
## Event types
By default, the serializer sets the event type based on the content:
- Single metric: `com.influxdata.telegraf.metric`
- Batch of metrics: `com.influxdata.telegraf.metrics`
Use `cloudevents_event_type` to override this behavior.
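As an illustration of the resulting envelope (a sketch following the CloudEvents v1.0 JSON format, not Telegraf's exact output; the metric payload shape is an assumption):

```python
import json
import uuid
from datetime import datetime, timezone

metric = {"name": "cpu", "tags": {"host": "server01"},
          "fields": {"usage_idle": 98.5}, "timestamp": 1640000000}

event = {
    "specversion": "1.0",                       # cloudevents_version
    "id": str(uuid.uuid4()),                    # unique per event
    "source": "telegraf",                       # cloudevents_source
    "type": "com.influxdata.telegraf.metric",   # single-metric default
    # cloudevents_time = "earliest": timestamp of the (only) metric
    "time": datetime.fromtimestamp(metric["timestamp"], timezone.utc).isoformat(),
    "data": metric,
}
print(json.dumps(event))
```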


@@ -0,0 +1,104 @@
---
title: CSV output data format
list_title: CSV
description: Use the `csv` output data format (serializer) to convert Telegraf metrics into CSV lines.
menu:
telegraf_v1_ref:
name: CSV
weight: 10
parent: Output data formats
identifier: output-data-format-csv
---
Use the `csv` output data format (serializer) to convert Telegraf metrics into CSV (Comma-Separated Values) lines.
## Configuration
```toml
[[outputs.file]]
files = ["stdout", "/tmp/metrics.out"]
## Data format to output.
data_format = "csv"
## Timestamp format.
## Default is Unix epoch time. Use Go time layout for custom formats.
## See: https://golang.org/pkg/time/#Time.Format
# csv_timestamp_format = "unix"
## Field separator character.
# csv_separator = ","
## Output the CSV header in the first line.
## Enable when writing to a new file.
## Disable when appending or using stateless outputs to prevent
## headers appearing between data lines.
# csv_header = false
## Prefix tag and field columns with "tag_" and "field_" respectively.
# csv_column_prefix = false
## Specify column order.
## Use "tag." prefix for tags, "field." prefix for fields,
## "name" for measurement name, and "timestamp" for the timestamp.
## Only specified columns are included; others are dropped.
## Default order: timestamp, name, tags (alphabetical), fields (alphabetical)
# csv_columns = ["timestamp", "name", "tag.host", "field.value"]
```
### Configuration options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `csv_timestamp_format` | string | `"unix"` | Timestamp format (Unix epoch or Go time layout) |
| `csv_separator` | string | `","` | Field separator character |
| `csv_header` | boolean | `false` | Output CSV header row |
| `csv_column_prefix` | boolean | `false` | Prefix columns with `tag_` or `field_` |
| `csv_columns` | array | `[]` | Explicit column order (empty = all columns) |
## Examples
### Basic CSV output
```toml
[[outputs.file]]
files = ["/tmp/metrics.csv"]
data_format = "csv"
csv_header = true
```
**Input metric:**
```
cpu,host=server01 usage_idle=98.5,usage_user=1.2 1640000000000000000
```
**Output:**
```csv
timestamp,name,host,usage_idle,usage_user
1640000000,cpu,server01,98.5,1.2
```
### Custom column order
```toml
[[outputs.file]]
files = ["/tmp/metrics.csv"]
data_format = "csv"
csv_header = true
csv_columns = ["timestamp", "tag.host", "field.usage_idle"]
```
**Output:**
```csv
timestamp,host,usage_idle
1640000000,server01,98.5
```
### Custom timestamp format
```toml
[[outputs.file]]
files = ["/tmp/metrics.csv"]
data_format = "csv"
csv_timestamp_format = "2006-01-02T15:04:05Z07:00"
```
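The Go reference layout `2006-01-02T15:04:05Z07:00` corresponds to RFC 3339. For example, rendering a Unix-seconds timestamp with an equivalent Python sketch:

```python
from datetime import datetime, timezone

# Render a Unix-seconds timestamp in RFC 3339 (UTC), the same shape the
# Go layout "2006-01-02T15:04:05Z07:00" produces for UTC times.
ts = 1640000000
formatted = datetime.fromtimestamp(ts, timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
print(formatted)  # 2021-12-20T11:33:20Z
```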


@@ -7,9 +7,12 @@ menu:
name: MessagePack
weight: 10
parent: Output data formats
identifier: output-data-format-msgpack
aliases:
- /telegraf/v1/data_formats/output/messagepack/
---
The `msgpack` output data format (serializer) translates the Telegraf metric format to the [MessagePack](https://msgpack.org/) format. MessagePack is an efficient binary serialization format that, like JSON, lets you exchange data among multiple languages.
### Configuration
@@ -28,9 +31,9 @@ The `msgpack` output data format (serializer) translates the Telegraf metric for
### Example output
Output of this format is the MessagePack binary representation of metrics with a structure identical to the following JSON:
```json
{
"name":"cpu",
"time": <TIMESTAMP>, // https://github.com/msgpack/msgpack/blob/master/spec.md#timestamp-extension-type


@@ -0,0 +1,126 @@
---
title: Prometheus output data format
list_title: Prometheus
description: >
Use the `prometheus` output data format (serializer) to convert Telegraf
metrics into Prometheus text exposition format.
menu:
telegraf_v1_ref:
name: Prometheus
weight: 10
parent: Output data formats
identifier: output-data-format-prometheus
related:
- /telegraf/v1/output-plugins/prometheus_client/
- /telegraf/v1/data_formats/output/
- /telegraf/v1/input-plugins/prometheus/
---
Use the `prometheus` output data format (serializer) to convert Telegraf metrics
into the [Prometheus text exposition format](https://prometheus.io/docs/instrumenting/exposition_formats/).
When used with the `prometheus` input plugin, set `metric_version = 2` in the
input to properly round-trip metrics.
## Configuration
```toml
[[outputs.file]]
files = ["stdout"]
data_format = "prometheus"
## Optional: Enable batch serialization for improved efficiency.
## This is an output plugin option that affects how the serializer
## receives metrics.
# use_batch_format = false
## Serializer options (prometheus-specific)
# prometheus_export_timestamp = false
# prometheus_sort_metrics = false
# prometheus_string_as_label = false
# prometheus_compact_encoding = false
```
### Serializer options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `prometheus_export_timestamp` | boolean | `false` | Include timestamp on each sample |
| `prometheus_sort_metrics` | boolean | `false` | Sort metric families and samples (useful for debugging) |
| `prometheus_string_as_label` | boolean | `false` | Convert string fields to labels |
| `prometheus_compact_encoding` | boolean | `false` | Omit HELP metadata to reduce payload size |
### Metric type mappings
Use `prometheus_metric_types` to explicitly set metric types, overriding
Telegraf's automatic type detection.
Supports glob patterns.
```toml
[[outputs.file]]
files = ["stdout"]
data_format = "prometheus"
[outputs.file.prometheus_metric_types]
counter = ["*_total", "*_count"]
gauge = ["*_current", "*_ratio"]
```
## Metric naming
Prometheus metric names are created by joining the measurement name with the
field key.
**Special case:** When the measurement name is `prometheus`, it is not included
in the final metric name.
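A minimal sketch of this naming rule (the underscore sanitization of invalid characters is an assumption based on Prometheus naming requirements, not taken from the Telegraf source):

```python
import re

def prom_name(measurement: str, field: str) -> str:
    # Join measurement and field; drop the measurement when it is "prometheus".
    name = field if measurement == "prometheus" else f"{measurement}_{field}"
    # Replace characters that are invalid in Prometheus metric names.
    return re.sub(r"[^a-zA-Z0-9_:]", "_", name)

assert prom_name("cpu", "usage_idle") == "cpu_usage_idle"
assert prom_name("prometheus", "http_requests_total") == "http_requests_total"
```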
## Labels
Prometheus labels are created from Telegraf tags.
String fields are ignored by default and do not produce Prometheus metrics.
Set `prometheus_string_as_label = true` to convert string fields to labels.
Set `log_level = "trace"` to see serialization issues.
## Histograms and summaries
Histogram and summary metrics require special consideration.
These metric types accumulate state across observations:
- **Histograms** count observations in configurable buckets
- **Summaries** calculate quantiles over a sliding time window
### Use prometheus_client for histograms and summaries
Serializers process metrics in batches and have no memory of previous batches.
When histogram or summary data arrives across multiple batches, the serializer
cannot combine them correctly.
For example, a histogram with 10 buckets might arrive as:
- Batch 1: buckets 1-5
- Batch 2: buckets 6-10
The serializer outputs each batch independently, producing two incomplete
histograms instead of one complete histogram.
The [`prometheus_client` output plugin](/telegraf/v1/output-plugins/prometheus_client/)
maintains metric state in memory and produces correct output regardless of
batch boundaries.
```toml
# Recommended for histogram/summary metrics
[[outputs.prometheus_client]]
listen = ":9273"
```
### Use the serializer for counters and gauges
For counters and gauges, the prometheus serializer works well.
Enable `use_batch_format = true` in your output plugin for more efficient output.
```toml
[[outputs.file]]
files = ["stdout"]
data_format = "prometheus"
use_batch_format = true
```


@@ -0,0 +1,69 @@
---
title: Prometheus Remote Write output data format
list_title: Prometheus Remote Write
description: Use the `prometheusremotewrite` output data format (serializer) to convert Telegraf metrics into Prometheus protobuf format for remote write endpoints.
menu:
telegraf_v1_ref:
name: Prometheus Remote Write
weight: 10
parent: Output data formats
identifier: output-data-format-prometheusremotewrite
---
Use the `prometheusremotewrite` output data format (serializer) to convert Telegraf metrics into the Prometheus protobuf exposition format for [remote write](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write) endpoints.
> [!Warning]
> When generating histogram and summary types, output may not be correct if the metric spans multiple batches.
> Use outputs that support batch format to mitigate this issue.
> For histogram and summary types, the `prometheus_client` output is recommended.
## Configuration
```toml
[[outputs.http]]
## URL of the remote write endpoint.
url = "https://cortex/api/prom/push"
## Optional TLS configuration.
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Data format to output.
data_format = "prometheusremotewrite"
## Required headers for Prometheus remote write.
[outputs.http.headers]
Content-Type = "application/x-protobuf"
Content-Encoding = "snappy"
X-Prometheus-Remote-Write-Version = "0.1.0"
```
## Metrics
A Prometheus metric is created for each integer, float, boolean, or unsigned field:
- Boolean values convert to `1.0` (true) or `0.0` (false)
- String fields are ignored (set `log_level = "trace"` to see serialization issues)
### Metric naming
Prometheus metric names are created by joining the measurement name with the field key.
**Special case:** When the measurement name is `prometheus`, it is not included in the final metric name.
### Labels
Prometheus labels are created from Telegraf tags.
## Example
**Input metric:**
```
cpu,host=server01 usage_idle=98.5,usage_user=1.2 1640000000000000000
```
**Resulting Prometheus metrics (logical representation - actual output is binary protobuf format):**
```
cpu_usage_idle{host="server01"} 98.5
cpu_usage_user{host="server01"} 1.2
```


@@ -0,0 +1,65 @@
---
title: Wavefront output data format
list_title: Wavefront
description: Use the `wavefront` output data format (serializer) to convert Telegraf metrics into Wavefront data format.
menu:
telegraf_v1_ref:
name: Wavefront
weight: 10
parent: Output data formats
identifier: output-data-format-wavefront
---
Use the `wavefront` output data format (serializer) to convert Telegraf metrics into the [Wavefront Data Format](https://docs.wavefront.com/wavefront_data_format.html).
## Configuration
```toml
[[outputs.file]]
files = ["stdout"]
## Data format to output.
data_format = "wavefront"
## Use strict rules to sanitize metric and tag names.
## When enabled, forward slash (/) and comma (,) are accepted.
# wavefront_use_strict = false
## Point tags to use as the source name for Wavefront.
## If none found, "host" is used.
# wavefront_source_override = ["hostname", "address", "agent_host", "node_host"]
## Disable prefix path conversion.
## Default behavior (enabled): prod.prefix.name.metric.name
## Disabled behavior: prod.prefix_name.metric_name
# wavefront_disable_prefix_conversion = false
```
### Configuration options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `wavefront_use_strict` | boolean | `false` | Use strict sanitization rules |
| `wavefront_source_override` | array | `[]` | Tags to use as source name |
| `wavefront_disable_prefix_conversion` | boolean | `false` | Disable path-style prefix conversion |
## Metrics
A Wavefront metric equals a single field value of a Telegraf measurement.
The metric name format is: `<measurement_name>.<field_name>`
Only boolean and numeric fields are serialized. Other types generate an error.
## Example
**Input metric:**
```
cpu,cpu=cpu0,host=testHost user=12,idle=88,system=0 1234567890
```
**Output (Wavefront format):**
```
"cpu.user" 12.000000 1234567890 source="testHost" "cpu"="cpu0"
"cpu.idle" 88.000000 1234567890 source="testHost" "cpu"="cpu0"
"cpu.system" 0.000000 1234567890 source="testHost" "cpu"="cpu0"
```
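A minimal Python sketch that reproduces the line format shown above (for illustration only; quoting and precision follow the example output):

```python
def wavefront_line(metric, field, value, ts, source, tags):
    # One Wavefront line per field: "<metric>.<field>" value timestamp
    # source="..." followed by quoted point tags.
    tag_str = " ".join(f'"{k}"="{v}"' for k, v in tags.items())
    return f'"{metric}.{field}" {value:f} {ts} source="{source}" {tag_str}'

line = wavefront_line("cpu", "user", 12, 1234567890, "testHost", {"cpu": "cpu0"})
print(line)  # "cpu.user" 12.000000 1234567890 source="testHost" "cpu"="cpu0"
```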


@@ -25,6 +25,33 @@ by a Prometheus server.
[prometheus]: https://prometheus.io
## Use this plugin for Prometheus scraping
When Prometheus scrapes your Telegraf instance, use this plugin.
It exposes a `/metrics` endpoint that Prometheus can poll directly.
For other Prometheus output scenarios, see the comparison table:
| Use Case | Recommended Approach |
|----------|---------------------|
| Prometheus scrapes Telegraf | `prometheus_client` output plugin |
| Counters and gauges to file/HTTP | [Prometheus serializer](/telegraf/v1/data_formats/output/prometheus/) + `file` or `http` output |
| Histograms and summaries | `prometheus_client` output plugin |
| Remote write to Prometheus-compatible endpoint | `http` output + `prometheusremotewrite` serializer |
## Use this plugin for histograms and summaries
Histogram and summary metrics accumulate observations over time.
The [prometheus serializer](/telegraf/v1/data_formats/output/prometheus/) processes
each batch independently and cannot maintain this state.
When metric data spans multiple batches, the serializer produces incomplete output.
This plugin keeps metrics in memory until they expire or are scraped, ensuring
complete and correct histogram buckets and summary quantiles.
For counters and gauges, you can use either this plugin or the prometheus
serializer with an output plugin like `file` or `http`.
## Global configuration options <!-- @/docs/includes/plugin_config.md -->
Plugins support additional global and plugin configuration settings for tasks
@@ -107,7 +134,5 @@ to use them.
## Metrics
Prometheus metrics are produced in the same manner as the
[prometheus serializer](/telegraf/v1/data_formats/output/prometheus/).