fix(telegraf): Correct links for Telegraf v1.36.4 (#6570)

* Updating plugins

* Fix typos

* Update content/telegraf/v1/input-plugins/dpdk/_index.md

Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>

* Update content/telegraf/v1/input-plugins/dpdk/_index.md

Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>

* Update content/telegraf/v1/input-plugins/haproxy/_index.md

Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>

* Update content/telegraf/v1/input-plugins/http_listener_v2/_index.md

Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>

* Update content/telegraf/v1/input-plugins/intel_pmu/_index.md

Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>

* Update content/telegraf/v1/input-plugins/ldap/_index.md

Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>

---------

Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>
Sven Rebhan 2025-11-25 23:55:02 +01:00 committed by GitHub
parent e0d2a6941c
commit d4eff43cf0
GPG Key ID: B5690EEEBB952194
46 changed files with 145 additions and 89 deletions

@ -82,7 +82,7 @@ For implementation details see the underlying [golang library](https://github.co
### exact R7 and R8
These algorithms compute quantiles as described in [Hyndman & Fan
(1996)](http://www.maths.usyd.edu.au/u/UG/SM/STAT3022/r/current/Misc/Sample%20Quantiles%20in%20Statistical%20Packages.pdf). The R7 variant is used in Excel and NumPy. The R8
variant is recommended by Hyndman & Fan due to its independence of the
underlying sample distribution.
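As a rough illustration of the R7 estimator described above (a sketch, not the Go library the aggregator actually uses; R8 differs only in the index formula):

```python
# Sketch of the R7 quantile estimator from Hyndman & Fan (1996): linear
# interpolation at index h = (n - 1) * q, the default in Excel and NumPy.
def quantile_r7(sorted_data, q):
    n = len(sorted_data)
    h = (n - 1) * q            # fractional position in the sorted sample
    lo = int(h)
    hi = min(lo + 1, n - 1)
    return sorted_data[lo] + (h - lo) * (sorted_data[hi] - sorted_data[lo])

print(quantile_r7([1.0, 2.0, 3.0, 4.0], 0.25))  # 1.75, matching numpy.quantile's default
```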

@ -19,7 +19,7 @@ This plugin gathers statistics including memory and GPU usage, temperatures
etc from [AMD ROCm platform](https://rocm.docs.amd.com/) GPUs.
> [!IMPORTANT]
> The [`rocm-smi` binary](https://github.com/RadeonOpenCompute/rocm_smi_lib/tree/master/python_smi_tools) is required and needs to be installed on the
> system.
**Introduced in:** Telegraf v1.20.0

@ -16,7 +16,7 @@ related:
# Apache Input Plugin
This plugin collects performance information from [Apache HTTP Servers](https://httpd.apache.org)
using the [`mod_status` module](https://httpd.apache.org/docs/current/mod/mod_status.html). Typically, this module is
configured to expose a page at the `/server-status?auto` endpoint of the server.
The [ExtendedStatus option](https://httpd.apache.org/docs/current/mod/core.html#extendedstatus) must be enabled in order to collect

@ -126,14 +126,14 @@ See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details
- fields:
- all rows from [system.metrics](https://clickhouse.tech/docs/en/operations/system-tables/metrics/)
- clickhouse_asynchronous_metrics (see [system.asynchronous_metrics](https://clickhouse.tech/docs/en/operations/system-tables/asynchronous_metrics/)
for details)
- tags:
- source (ClickHouse server hostname)
- cluster (Name of the cluster [optional])
- shard_num (Shard number in the cluster [optional])
- fields:
- all rows from [system.asynchronous_metrics](https://clickhouse.tech/docs/en/operations/system-tables/asynchronous_metrics/)
- clickhouse_tables
- tags:
@ -155,7 +155,7 @@ See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details
- fields:
- root_nodes (count of node where path=/)
- clickhouse_replication_queue (see [system.replication_queue](https://clickhouse.com/docs/en/operations/system-tables/replication_queue/) for details)
- tags:
- source (ClickHouse server hostname)
- cluster (Name of the cluster [optional])
@ -163,14 +163,14 @@ See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details
- fields:
- too_many_tries_replicas (count of replicas which have `num_tries > 1`)
- clickhouse_detached_parts (see [system.detached_parts](https://clickhouse.tech/docs/en/operations/system-tables/detached_parts/) for details)
- tags:
- source (ClickHouse server hostname)
- cluster (Name of the cluster [optional])
- shard_num (Shard number in the cluster [optional])
- fields:
- detached_parts (total detached parts for all tables and databases
from [system.detached_parts](https://clickhouse.tech/docs/en/operations/system-tables/detached_parts/))
- clickhouse_dictionaries (see [system.dictionaries](https://clickhouse.tech/docs/en/operations/system-tables/dictionaries/) for details)
- tags:
@ -222,7 +222,7 @@ See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details
- longest_running - float gauge which show maximum value for `elapsed`
field of running processes
- clickhouse_text_log (see [system.text_log](https://clickhouse.tech/docs/en/operations/system-tables/text_log/) for details)
- tags:
- source (ClickHouse server hostname)
- cluster (Name of the cluster [optional])

@ -96,8 +96,8 @@ docker run --privileged -v /:/hostfs:ro -v /run/udev:/run/udev:ro -e HOST_PROC=/
- io_await (float64, gauge, milliseconds)
- io_svctm (float64, gauge, milliseconds)
On Linux these values correspond to the values in [`/proc/diskstats`](https://www.kernel.org/doc/Documentation/ABI/testing/procfs-diskstats) and
[`/sys/block/<dev>/stat`](https://www.kernel.org/doc/Documentation/block/stat.txt).

@ -108,7 +108,7 @@ See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details
### Environment Configuration
When using the `"ENV"` endpoint, the connection is configured using the [cli
Docker environment variables](https://godoc.org/github.com/moby/moby/client#NewEnvClient).

@ -135,7 +135,8 @@ This configuration allows getting metrics for all devices reported via
Since this configuration will query `/ethdev/link_status` it's recommended to
increase timeout to `socket_access_timeout = "10s"`.
The plugin's collection interval should be adjusted accordingly (e.g. `interval = "30s"`).
### Example: Excluding NIC link status from being collected
@ -243,7 +244,20 @@ measurements.
The DPDK socket accepts `command,params` requests and returns metric data in
JSON format. All metrics from DPDK socket become flattened using Telegraf's
JSON Flattener and exposed as fields. If the DPDK response contains no
information (is empty or null), the response is discarded.
> **NOTE:** Since DPDK allows registering custom metrics in its telemetry
> framework, the JSON response from DPDK may contain various sets of metrics.
> While metrics from `/ethdev/stats` should be mostly stable, the `/ethdev/xstats`
> may contain driver-specific metrics (depending on DPDK application
> configuration). The application-specific commands like `/l3fwd-power/stats`
> can return their own specific set of metrics.
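A minimal sketch of the flattening step described above (the sample response and the `_` separator are illustrative assumptions, not Telegraf's exact implementation):

```python
# Flatten a nested DPDK telemetry JSON response into flat field names by
# joining nested keys with "_"; leaf values become field values.
def flatten(obj, prefix=""):
    if not isinstance(obj, dict):
        return {prefix: obj}
    fields = {}
    for key, value in obj.items():
        name = f"{prefix}_{key}" if prefix else key
        fields.update(flatten(value, name))
    return fields

resp = {"/ethdev/stats": {"rx_good_packets": 98, "tx_good_packets": 97}}
print(flatten(resp))
```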
## Example Output
The output consists of the plugin name (`dpdk`) and a set of tags that identify
the querying hierarchy:
```text

@ -94,7 +94,7 @@ bandwidth. Will create `fritzbox_hosts` metrics.
## Metrics
By default field names are directly derived from the corresponding [interface
specification](https://avm.de/service/schnittstellen/).
- `fritzbox_device`
- tags

@ -67,11 +67,11 @@ See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details
### HAProxy Configuration
The following information may be useful when getting started, but please consult
the HAProxy documentation for complete and up-to-date instructions.
The [`stats enable`](https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-stats%20enable) option can be used to add unauthenticated access over
HTTP using the default settings. To enable the unix socket begin by reading
about the [`stats socket`](https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#3.1-stats%20socket) option.
@ -113,7 +113,7 @@ The following renames are made:
## Metrics
For more details about collected metrics reference the [HAProxy CSV format
documentation](https://cbonte.github.io/haproxy-dconv/1.8/management.html#9.1).
- haproxy
- tags:

@ -20,9 +20,9 @@ This plugin listens for metrics sent via HTTP in any of the supported
> [!NOTE]
> If you would like Telegraf to act as a proxy/relay for InfluxDB v1 or
> InfluxDB v2, use the
> [influxdb_listener](/telegraf/v1/plugins/#input-influxdb_listener) or
> [influxdb_v2_listener](/telegraf/v1/plugins/#input-influxdb_v2_listener) plugin instead.
**Introduced in:** Telegraf v1.9.0
**Tags:** server

@ -22,7 +22,7 @@ proxy/router for the `/write` endpoint of the InfluxDB HTTP API.
> [!NOTE]
> This plugin was previously known as `http_listener`. If you wish to
> send general metrics via HTTP it is recommended to use the
> [`http_listener_v2`](/telegraf/v1/plugins/#input-http_listener_v2) instead.
The `/write` endpoint supports the `precision` query parameter and can be set
to one of `ns`, `u`, `ms`, `s`, `m`, `h`. All other parameters are ignored and
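The precision scaling described above can be sketched as follows (a simplified illustration; Telegraf's actual parser is written in Go):

```python
# Scale an incoming line-protocol timestamp to nanoseconds according to the
# `precision` query parameter (`ns`, `u`, `ms`, `s`, `m`, `h`).
FACTORS = {
    "ns": 1,
    "u": 1_000,
    "ms": 1_000_000,
    "s": 1_000_000_000,
    "m": 60 * 1_000_000_000,
    "h": 3600 * 1_000_000_000,
}

def to_nanoseconds(timestamp: int, precision: str = "ns") -> int:
    return timestamp * FACTORS[precision]

print(to_nanoseconds(1, "ms"))  # 1000000
```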

@ -44,7 +44,7 @@ Linux kernel's perf interface.
Event definition JSON files for specific architectures can be found at the
[Github repository](https://github.com/intel/perfmon). Download the event definitions appropriate for your
system, for example using the [`event_download.py` PMU tool](https://github.com/andikleen/pmu-tools), and keep them
in a safe place on your system.
@ -117,7 +117,7 @@ See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details
Perf modifiers adjust event-specific perf attributes to fulfill particular
requirements. Details about the perf attribute structure can be found in the
[perf_event_open](https://man7.org/linux/man-pages/man2/perf_event_open.2.html)
syscall manual.
General schema of configuration's `events` list element:

@ -17,7 +17,7 @@ related:
This plugin gathers metrics from the
[Intelligent Platform Management Interface](https://www.intel.com/content/dam/www/public/us/en/documents/specification-updates/ipmi-intelligent-platform-mgt-interface-spec-2nd-gen-v2-0-spec-update.pdf) using the
[`ipmitool`](https://github.com/ipmitool/ipmitool) command line utility.
> [!IMPORTANT]
> The `ipmitool` requires access to the IPMI device. Please check the

@ -102,6 +102,11 @@ Optionally, specify TLS options for communicating with proxies:
Please see the Jolokia agent documentation.
## Metrics
The metrics depend on the definition(s) in the `inputs.jolokia2_proxy.metric`
section(s).
## Example Output
```text

@ -170,7 +170,7 @@ manner.
## kapacitor_cluster
The `kapacitor_cluster` measurement reflects the ability of [Kapacitor nodes to
communicate](https://docs.influxdata.com/enterprise_kapacitor/v1.5/administration/configuration/#cluster-communications) with one another. Specifically, these metrics track the
gossip communication between the Kapacitor nodes.

@ -55,7 +55,7 @@ Please check the documentation of the underlying kernel interfaces in the
`/proc interfaces` section of the [random man page](https://man7.org/linux/man-pages/man4/random.4.html).
Kernel Samepage Merging is generally documented in the
[kernel documentation](https://www.kernel.org/doc/html/latest/mm/ksm.html) and the available metrics exposed via sysfs
are documented in the [admin guide](https://www.kernel.org/doc/html/latest/admin-guide/mm/ksm.html#ksm-daemon-sysfs-interface).
Pressure Stall Information is exposed through `/proc/pressure` and is documented

@ -17,14 +17,14 @@ related:
This service plugin listens for messages on the [KNX home-automation bus](https://www.knx.org)
by connecting via a KNX-IP interface. Information about supported KNX
datapoint-types can be found at the underlying [`knx-go` project](https://github.com/vapourismo/knx-go).
**Introduced in:** Telegraf v1.19.0
**Tags:** iot
**OS support:** all
## Service Input <!-- @/docs/includes/service_input.md -->

@ -106,7 +106,7 @@ See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details
If using [RBAC authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/), you will need to create a cluster role to
list "persistentvolumes" and "nodes". You will then need to make an [aggregated
ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles) that will eventually be bound to a user or group.

@ -15,8 +15,8 @@ related:
# LDAP Input Plugin
This plugin gathers metrics from an LDAP server's monitoring (`cn=Monitor`)
backend. Currently this plugin supports [OpenLDAP](https://www.openldap.org/) and [389ds](https://www.port389.org/)
servers.
**Introduced in:** Telegraf v1.29.0

@ -59,7 +59,7 @@ See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details
### Server Setup
Enable [RCON](http://wiki.vg/RCON) on the Minecraft server and add the following to your
[`server.properties`](https://minecraft.gamepedia.com/Server.properties) file:
```conf
enable-rcon=true

@ -19,7 +19,7 @@ This plugin gathers metrics from [OpenLDAP](https://www.openldap.org/)'s `cn=Mon
To use this plugin you must enable the [slapd monitoring](https://www.openldap.org/devel/admin/monitoringslapd.html) backend.
> [!NOTE]
> It is recommended to use the newer [`ldap` input plugin](/telegraf/v1/plugins/#input-ldap) instead.
**Introduced in:** Telegraf v1.4.0
**Tags:** server, network

@ -20,7 +20,7 @@ This plugin gathers metrics from the [Phusion Passenger](https://www.phusionpass
> [!WARNING]
> Depending on your environment, this plugin can create a high number of series
> which can cause high load on your database. Please use
> [measurement filtering](/telegraf/v1/configuration/#metric-filtering) to manage your series cardinality!
The plugin uses the `passenger-status` command line tool.

@ -174,7 +174,7 @@ Without systemd:
setcap cap_net_raw=eip /usr/bin/telegraf
```
Reference [`man 7 capabilities`](http://man7.org/linux/man-pages/man7/capabilities.7.html) for more information about
setting capabilities.

@ -118,8 +118,8 @@ to use them.
```
The system can be easily extended using homemade metrics collection tools or
using the postgresql extensions [pg_stat_statements](http://www.postgresql.org/docs/current/static/pgstatstatements.html),
[pg_proctab](https://github.com/markwkm/pg_proctab) or [powa](http://dalibo.github.io/powa/).

@ -261,7 +261,7 @@ option in both to ensure metrics are round-tripped without modification.
URLs listed in the `kubernetes_services` parameter will be expanded by looking
up all A records assigned to the hostname as described in [Kubernetes DNS
service discovery](https://kubernetes.io/docs/concepts/services-networking/service/#dns).
This method can be used to locate all [Kubernetes headless services](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services).
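The lookup-based expansion described above can be sketched as follows (a simplified illustration; the URL template, port, and hostname are assumptions, and Telegraf's actual implementation is in Go):

```python
import socket

# Resolve every A record behind a (headless) service hostname and build one
# scrape URL per address; unresolvable names yield no targets.
def expand_service(template, hostname, port):
    try:
        _, _, addresses = socket.gethostbyname_ex(hostname)
    except socket.gaierror:
        return []
    return [template.format(addr=addr, port=port) for addr in addresses]

# ".invalid" is a reserved TLD that never resolves, so this returns [].
print(expand_service("http://{addr}:{port}/metrics", "prometheus.invalid.", 9090))
```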

@ -18,11 +18,11 @@ related:
This plugin collects [Self-Monitoring, Analysis and Reporting Technology](https://en.wikipedia.org/wiki/Self-Monitoring,_Analysis_and_Reporting_Technology)
information for storage devices using the
[`smartmontools`](https://www.smartmontools.org) package. This plugin also supports NVMe devices by
using the [`nvme-cli`](https://github.com/linux-nvme/nvme-cli) package.
> [!NOTE]
> This plugin requires the [`smartmontools`](https://www.smartmontools.org) and, for NVMe devices,
> the [`nvme-cli`](https://github.com/linux-nvme/nvme-cli) packages to be installed on your system. The
> `smartctl` and `nvme` commands must be executable by Telegraf.
**Introduced in:** Telegraf v1.5.0

@ -18,7 +18,7 @@ related:
This plugin collects [Self-Monitoring, Analysis and Reporting Technology](https://en.wikipedia.org/wiki/Self-Monitoring,_Analysis_and_Reporting_Technology)
information for storage devices using the
[`smartmontools`](https://www.smartmontools.org) package. Contrary to the
[smart plugin](/telegraf/v1/plugins/#input-smart), this plugin does not use the [`nvme-cli`](https://github.com/linux-nvme/nvme-cli)
package to collect additional information about NVMe devices.
> [!NOTE]

@ -18,7 +18,7 @@ related:
This plugin reads metrics from performing [SQL](https://www.iso.org/standard/76583.html) queries against a SQL
server. Different server types are supported and their settings might differ
(especially the connection parameters). Please check the list of
[supported SQL drivers](/docs/SQL_DRIVERS_INPUT.md) for the `driver` name and the
data-source-name (`dsn`) options.
**Introduced in:** Telegraf v1.19.0

@ -532,7 +532,7 @@ ensure to check additional setup section in this documentation.
cntr_type column value is 537003264 are
already returned with a percentage format
between 0 and 100. For other counters,
please check [sys.dm_os_performance_counters](https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-os-performance-counters-transact-sql?view=azuresqldb-current)
documentation.
- *AzureSQLPoolSchedulers*: This captures `sys.dm_os_schedulers` snapshots.

@ -140,8 +140,8 @@ See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details
### Message transport
The `framing` option only applies to streams. It governs the way we expect to
receive messages within the stream. Namely, with the [`"octet counting"`](https://tools.ietf.org/html/rfc5425#section-4.3)
technique (default) or with the [`"non-transparent"`](https://tools.ietf.org/html/rfc6587#section-3.4.2) framing.
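The two framing techniques can be sketched as follows (an illustration of the RFC mechanisms, not Telegraf's parser):

```python
# Octet counting (RFC 5425): the message is prefixed with its byte length.
def frame_octet_counting(msg: bytes) -> bytes:
    return str(len(msg)).encode() + b" " + msg

# Non-transparent framing (RFC 6587): the message is terminated by a trailer,
# LF by default.
def frame_non_transparent(msg: bytes, trailer: bytes = b"\n") -> bytes:
    return msg + trailer

print(frame_octet_counting(b"<13>1 - host app - - - hi"))
```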
The `trailer` option only applies when `framing` option is
`"non-transparent"`. It must have one of the following values: `"LF"` (default),
@ -212,7 +212,7 @@ echo "<13>1 2018-10-01T12:00:00.0Z example.org root - - - test" | nc -u 127.0.0.
The `source` tag stores the remote IP address of the syslog sender.
To resolve these IPs to DNS names, use the
[`reverse_dns` processor](/telegraf/v1/plugins/#processor-reverse_dns).
You can send debugging messages directly to the input plugin using netcat:

@ -84,7 +84,7 @@ restart-counts, PID, etc. See the metrics section
### Load
Enumeration of [unit_load_state_table](https://github.com/systemd/systemd/blob/c87700a1335f489be31cd3549927da68b5638819/src/basic/unit-def.c#L87)
| Value | Meaning | Description |
| ----- | ------- | ----------- |
@ -100,7 +100,7 @@ Enumeration of [unit_load_state_table]()
### Active
Enumeration of [unit_active_state_table](https://github.com/systemd/systemd/blob/c87700a1335f489be31cd3549927da68b5638819/src/basic/unit-def.c#L99)
| Value | Meaning | Description |
| ----- | ------- | ----------- |
@ -115,7 +115,7 @@ Enumeration of [unit_active_state_table]()
### Sub
Enumeration of sub states, see the various [unittype_state_tables](https://github.com/systemd/systemd/blob/c87700a1335f489be31cd3549927da68b5638819/src/basic/unit-def.c#L163); duplicates
were removed, tables are hex aligned to keep some space for future values
| Value | Meaning | Description |

@ -16,7 +16,7 @@ related:
# Wireguard Input Plugin
This plugin collects statistics on a local [Wireguard](https://www.wireguard.com/) server
using the [`wgctrl` library](https://github.com/WireGuard/wgctrl-go). The plugin reports gauge metrics for
Wireguard interface devices and their peers.
**Introduced in:** Telegraf v1.14.0

@ -16,7 +16,7 @@ related:
# Apache Zookeeper Input Plugin
This plugin collects variables from [Zookeeper](https://zookeeper.apache.org) instances using the
[`mntr` command](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_zkCommands).
> [!NOTE]
> If the Prometheus Metric provider is enabled in Zookeeper use the

@ -97,7 +97,7 @@ The plugin will group the metrics by the metric name, and will send each group
of metrics to an Azure Data Explorer table. If the table doesn't exist the
plugin will create the table; if the table exists, the plugin will try to
merge the Telegraf metric schema to the existing table. For more information
about the merge process check the [`.create-merge` documentation](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/create-merge-table-command).
The table name will match the `name` property of the metric; this means that the
name of the metric should comply with the Azure Data Explorer table naming
@ -112,7 +112,7 @@ table. The name of the table must be supplied via `table_name` in the config
file. If the table doesn't exist the plugin will create the table; if the table
exists then the plugin will try to merge the Telegraf metric schema to the
existing table. For more information about the merge process check the
[`.create-merge` documentation](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/create-merge-table-command).
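The table-per-metric grouping described above can be sketched as follows (a simplified illustration; the metric shape is an assumption, and the plugin itself is written in Go):

```python
from collections import defaultdict

# Bucket a batch of metrics by metric name; each bucket targets the
# Azure Data Explorer table of the same name.
def group_by_table(metrics):
    tables = defaultdict(list)
    for metric in metrics:
        tables[metric["name"]].append(metric)
    return dict(tables)

batch = [{"name": "cpu", "fields": {"usage": 1.0}},
         {"name": "mem", "fields": {"used": 2.0}},
         {"name": "cpu", "fields": {"usage": 3.0}}]
print(sorted(group_by_table(batch)))  # ['cpu', 'mem']
```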
## Tables Schema
@ -158,7 +158,7 @@ These methods are:
1. AAD Application Tokens (Service Principals with secrets or certificates).
For guidance on how to create and register an App in Azure Active Directory
check [this article](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app#register-an-application), and for more information on the Service
Principals check [this article](https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals).
2. AAD User Tokens
@ -215,7 +215,7 @@ below**:
platform. Requires that code is running in Azure, e.g. on a VM. All
configuration is handled by Azure. See [Azure Managed Service Identity](https://docs.microsoft.com/en-us/azure/active-directory/msi-overview)
for more details. Only available when using the [Azure Resource
Manager](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview).

@ -92,9 +92,9 @@ See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details
## Setup
1. [Register the `microsoft.insights` resource provider in your Azure
subscription](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-supported-services).
1. If using Managed Service Identities to authenticate an Azure VM, [enable
system-assigned managed identity](https://docs.microsoft.com/en-us/azure/active-directory/managed-service-identity/qs-configure-portal-windows-vm).
1. Use a region that supports Azure Monitor Custom Metrics. For regions with
Custom Metrics support, an endpoint will be available with the format
`https://<region>.monitoring.azure.com`.
@ -166,7 +166,7 @@ configurations:
platform. Requires that code is running in Azure, e.g. on a VM. All
configuration is handled by Azure. See [Azure Managed Service Identity](https://docs.microsoft.com/en-us/azure/active-directory/msi-overview)
for more details. Only available when using the [Azure Resource
Manager](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview).
@ -188,7 +188,7 @@ dimension limit.
To convert only a subset of string-typed fields as dimensions, enable
`strings_as_dimensions` and use the [`fieldinclude` or `fieldexclude`
modifiers](/telegraf/v1/configuration/#modifiers) to limit the string-typed fields that are sent to
the plugin.

Avoid hyphens in BigQuery table names; the underlying SDK cannot handle
streaming inserts to tables with hyphens.
In cases of metrics with hyphens please use the [Rename Processor
Plugin](../../processors/rename/README.md).
By default, hyphens in metric names are replaced with underscores (`_`).
This can be altered using the `replace_hyphen_to`
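A sketch of that default hyphen handling (`replace_hyphen_to` is the option named above; the function name is illustrative):

```python
# Hyphens in a metric name are replaced ("_" by default, configurable via
# `replace_hyphen_to`) before the name is used as a BigQuery table name.
def bigquery_table_name(metric_name: str, replace_hyphen_to: str = "_") -> str:
    return metric_name.replace("-", replace_hyphen_to)

print(bigquery_table_name("disk-io"))  # disk_io
```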

@ -21,7 +21,7 @@ OneAgent for automatic authentication or it may be run standalone on a host
without OneAgent by specifying a URL and API Token.
More information on the plugin can be found in the
[Dynatrace documentation](https://docs.dynatrace.com/docs/shortlink/telegraf).
> [!NOTE]
> All metrics are reported as gauges, unless they are specified to be delta
@ -50,7 +50,7 @@ higher.
## Getting Started
Setting up Telegraf is explained in the [Telegraf
Documentation](https://docs.influxdata.com/telegraf/latest/introduction/getting-started/).
The Dynatrace exporter may be enabled by adding an `[[outputs.dynatrace]]`
section to your `telegraf.conf` config file. All configurations are optional,
but if a `url` other than the OneAgent metric ingestion endpoint is specified
@ -67,7 +67,7 @@ configuration. The Dynatrace Telegraf output plugin will send all metrics to the
OneAgent which will use its secure and load balanced connection to send the
metrics to your Dynatrace SaaS or Managed environment. Depending on your
environment, you might have to enable metrics ingestion on the OneAgent first as
described in the [Dynatrace documentation](https://docs.dynatrace.com/docs/shortlink/telegraf).
Note: The name and identifier of the host running Telegraf will be added as a
dimension to every metric. If this is undesirable, then the output plugin may be
@ -85,7 +85,7 @@ to configure the environment API endpoint to send the metrics to and an API
token for security.
You will also need to configure an API token for secure access. Find out how to
create a token in the [Dynatrace documentation](https://docs.dynatrace.com/docs/shortlink/api-authentication) or simply navigate to
**Settings > Integration > Dynatrace API** in your Dynatrace environment and
create a new token with the 'Ingest metrics' (`metrics.ingest`) scope enabled.
It is recommended to limit the token scope to only

@ -37,7 +37,7 @@ The timestamp of the metric collected will be used to decide the index
destination.
For more information about this usage on Elasticsearch, check [the
docs](https://www.elastic.co/guide/en/elasticsearch/guide/master/time-based.html#index-per-timeframe).
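The index-per-timeframe routing can be sketched as follows (the daily index pattern here is an illustrative assumption, not the plugin's default):

```python
from datetime import datetime, timezone

# The metric's own timestamp selects the destination index.
def index_for(unix_seconds: float, pattern: str = "telegraf-%Y.%m.%d") -> str:
    return datetime.fromtimestamp(unix_seconds, tz=timezone.utc).strftime(pattern)

print(index_for(0))  # telegraf-1970.01.01
```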

@ -101,7 +101,7 @@ The plugin will group the metrics by the metric name and will send each group
of metrics to an Eventhouse KQL DB table. If the table doesn't exist the
plugin will create the table; if the table exists, the plugin will try to
merge the Telegraf metric schema to the existing table. For more information
about the merge process check the [`.create-merge` documentation](https://learn.microsoft.com/kusto/management/create-merge-tables-command?view=microsoft-fabric).
The table name will match the metric name, i.e. the name of the metric must
comply with the Eventhouse KQL DB table naming constraints in case you plan
@ -116,7 +116,7 @@ table. The name of the table must be supplied via `table_name` parameter in the
`connection_string`. If the table doesn't exist the plugin will create the
table; if the table exists, the plugin will try to merge the Telegraf metric
schema to the existing table. For more information about the merge process check
the [`.create-merge` documentation](https://learn.microsoft.com/kusto/management/create-merge-tables-command?view=microsoft-fabric).
#### Tables Schema

View File

@@ -171,7 +171,7 @@ add "create_index" and "write" permission to your specific index pattern.
This plugin can manage indexes per time-frame, as commonly done in other tools
with OpenSearch. The timestamp of the metric collected will be used to decide
the index destination. For more information about this usage on OpenSearch,
-check [the docs](https://opensearch.org/docs/latest/opensearch/index-templates/).
+check [the docs](https://opensearch.org/docs/latest/).
[1]: https://opensearch.org/docs/latest/

View File

@@ -115,7 +115,11 @@ For metrics, two input schemata exist. Line protocol with measurement name
`prometheus` is assumed to have a schema matching Prometheus input
plugin when `metric_version = 2`. Line
protocol with other measurement names is assumed to have schema matching
-Prometheus input plugin
+Prometheus input plugin when
+`metric_version = 1`. If both schema assumptions fail, then the line protocol
+data is interpreted as:
+- Metric type = gauge (or counter, if indicated by the input plugin)
+- Metric name = `[measurement]_[field key]`
- Metric value = line protocol field value, cast to float
- Metric labels = line protocol tags
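Taken together, the fallback rules above amount to a simple per-field flattening. A rough Python sketch of that mapping (illustrative only — the function name and output shape are assumptions, not Telegraf's actual code):

```python
def fallback_convert(measurement, tags, fields, is_counter=False):
    """Sketch of the documented fallback: one Prometheus sample per
    line-protocol field, with tags carried over as labels."""
    samples = []
    for key, value in fields.items():
        samples.append({
            "type": "counter" if is_counter else "gauge",
            "name": f"{measurement}_{key}",  # [measurement]_[field key]
            "value": float(value),           # field value cast to float
            "labels": dict(tags),            # line-protocol tags become labels
        })
    return samples
```

Under these rules, a line such as `disk,host=a used=42i` would produce a gauge named `disk_used` with label `host="a"`.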

View File

@@ -109,6 +109,6 @@ to use them.
## Metrics
Prometheus metrics are produced in the same manner as the [prometheus
-serializer]().
+serializer](/telegraf/v1/plugins/#serializer-prometheus).
[prometheus serializer]: /plugins/serializers/prometheus/README.md#Metrics

View File

@@ -200,7 +200,7 @@ MySQL default quoting differs from standard ANSI/ISO SQL quoting. You must use
MySQL's ANSI\_QUOTES mode with this plugin. You can enable this mode by using
the setting `init_sql = "SET sql_mode='ANSI_QUOTES';"` or through a command-line
option when running MySQL. See MySQL's docs for [details on
-ANSI\_QUOTES]() and [how to set the SQL mode](https://dev.mysql.com/doc/refman/8.0/en/sql-mode.html#sql-mode-setting).
+ANSI\_QUOTES](https://dev.mysql.com/doc/refman/8.0/en/sql-mode.html#sqlmode_ansi_quotes) and [how to set the SQL mode](https://dev.mysql.com/doc/refman/8.0/en/sql-mode.html#sql-mode-setting).
You can use a DSN of the format "username:password@tcp(host:port)/dbname". See
the [driver docs](https://github.com/go-sql-driver/mysql) for details.

View File

@@ -64,7 +64,7 @@ See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details
On Windows, only the `Local` and `UTC` zones are available by default. To use
other timezones, set the `ZONEINFO` environment variable to the location of
-[`zoneinfo.zip`]():
+[`zoneinfo.zip`](https://github.com/golang/go/raw/50bd1c4d4eb4fac8ddeb5f063c099daccfb71b26/lib/time/zoneinfo.zip):
```text
set ZONEINFO=C:\zoneinfo.zip

View File

@@ -191,7 +191,20 @@ of the underlying golang implementation.
## Processing paths from tail plugin
This plugin can be used together with the tail input
-plugin
+plugin to make modifications to the `path` tag
+injected for every file.
+Scenario:
+* A log file `/var/log/myjobs/mysql_backup.log`, containing logs for a job
+execution. Whenever the job ends, a line is written to the log file following
+this format: `2020-04-05 11:45:21 total time execution: 70 seconds`
+* We want to generate a measurement that captures the duration of the script as
+a field and includes the `path` as a tag
+* We are interested in the filename without its extensions, since it might be
+enough information for plotting our execution times in a dashboard
+* Just in case, we don't want to override the original path (if for some
+reason we end up having duplicates we might want this information)
+For this purpose, we will use the `tail` input plugin, the `grok` parser plugin
+and the `filepath` processor.
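The tag rewrite this scenario asks for — keep the base filename, drop every extension — can be sketched in plain Python (a stand-in for the behavior, not the `filepath` processor's actual option names):

```python
import os.path

def filename_without_extensions(path):
    # Keep only the base filename and strip every extension,
    # e.g. /var/log/myjobs/mysql_backup.log -> mysql_backup
    base = os.path.basename(path)
    return base.split(".", 1)[0]
```

Applied to the scenario's log file, this yields `mysql_backup`, which is then usable as a short tag value alongside the untouched original `path`.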

View File

@@ -143,17 +143,19 @@ following libraries are available for loading:
for an example. For more details about the functions, please refer to the
[library documentation](https://pkg.go.dev/go.starlark.net/lib/json).
- log: `load("logging.star", "log")` provides the functions `log.debug()`,
-`log.info()`, `log.warn()`, `log.error()`. See logging.star` for an example.
-- math: `load('math.star', 'math')` provides basic mathematical constants and functions.
-See math.star for an example. For more details, please refer to the
-[library documentation](https://pkg.go.dev/go.starlark.net/lib/math).
-- time: `load('time.star', 'time')` provides time-related constants and functions.
-See
-time_date.star,
-time_duration.star and
-time_timestamp.star for examples. For
-more details about the functions, please refer to the
-[library documentation](https://pkg.go.dev/go.starlark.net/lib/time).
+`log.info()`, `log.warn()`, `log.error()`. See
+logging.star for an example.
+- math: `load("math.star", "math")` provides the function
+[documented in the library](https://pkg.go.dev/go.starlark.net/lib/math). See
+math.star for an example.
+- time: `load("time.star", "time")` provides the functions `time.from_timestamp()`,
+`time.is_valid_timezone()`, `time.now()`, `time.parse_duration()`,
+`time.parse_time()`, `time.time()`. See
+time_date.star,
+time_duration.star and
+time_timestamp.star for examples. For
+more details about the functions, please refer to the
+[library documentation](https://pkg.go.dev/go.starlark.net/lib/time).
If you would like to see support for something else here, please open an issue.
@@ -225,6 +227,23 @@ Telegraf freezes the global scope, which prevents it from being modified, except
for a special shared global dictionary named `state`, this can be used by the
`apply` function. See an example of this in compare with previous
metric
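The shared `state` dictionary is what makes stateful processing like the compare-with-previous-metric example possible. A minimal self-contained sketch of the pattern (the `Metric` class below is a test stand-in, not Telegraf's API; field names are illustrative):

```python
class Metric:
    # Stand-in for a Telegraf metric: just a bag of fields.
    def __init__(self, fields):
        self.fields = fields

state = {}  # the one global that Telegraf leaves mutable

def apply(metric):
    # Emit the delta against the previously seen value, if any.
    last = state.get("last_value")
    if last is not None:
        metric.fields["delta"] = metric.fields["value"] - last
    state["last_value"] = metric.fields["value"]
    return metric
```

The first metric passes through untouched; every subsequent one gains a `delta` field computed from the value remembered in `state`.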
+Other than the `state` variable, attempting to modify the global scope will fail
+with an error.
+**How to manage errors that occur in the apply function?**
+In case you need to call some code that may return an error, you can delegate
+the call to the built-in function `catch` which takes as argument a `Callable`
+and returns the error that occurred if any, `None` otherwise.
+So for example:
+```python
+load("json.star", "json")
+def apply(metric):
+    error = catch(lambda: failing(metric))
+    if error != None:
+        # Some code to execute in case of an error
+        metric.fields["error"] = error
@@ -278,11 +297,12 @@ or return the value as a floating-point number.
### Examples
- drop fields containing string values
-- drop fields with unexpected types](testdata/iops.star)
+- drop fields with unexpected types
- obtain IOPS for aggregation and computing max IOPS)
- process JSON in a metric field - see
-[library documentation](https://pkg.go.dev/go.starlark.net/lib/time) for function documentation
+[library documentation](https://pkg.go.dev/go.starlark.net/lib/json) for function documentation
- use math function to compute a field value - see
-[library documentation](https://pkg.go.dev/go.starlark.net/lib/time) for function documentation
+[library documentation](https://pkg.go.dev/go.starlark.net/lib/math) for function documentation
- transform numerical values
- pivot a key's value to be the key for another field
- compute the ratio of two integer fields