diff --git a/content/telegraf/v1/aggregator-plugins/_index.md b/content/telegraf/v1/aggregator-plugins/_index.md
new file mode 100644
index 000000000..53474fdf0
--- /dev/null
+++ b/content/telegraf/v1/aggregator-plugins/_index.md
@@ -0,0 +1,15 @@
+---
+title: "Telegraf Aggregator Plugins"
+description: "Telegraf aggregator plugins aggregate data across multiple metrics."
+menu:
+  telegraf_v1_ref:
+    name: Aggregator plugins
+    identifier: aggregator_plugins_reference
+    weight: 10
+tags: [aggregator-plugins]
+---
+
+Telegraf aggregator plugins aggregate data across multiple metrics using, for
+example, statistical functions such as min, max, or mean.
+
+{{<children>}}
diff --git a/content/telegraf/v1/aggregator-plugins/basicstats/_index.md b/content/telegraf/v1/aggregator-plugins/basicstats/_index.md
new file mode 100644
index 000000000..97a70afe4
--- /dev/null
+++ b/content/telegraf/v1/aggregator-plugins/basicstats/_index.md
@@ -0,0 +1,81 @@
+---
+description: "Telegraf plugin for aggregating metrics using BasicStats"
+menu:
+  telegraf_v1_ref:
+    parent: aggregator_plugins_reference
+    name: BasicStats
+    identifier: aggregator-basicstats
+tags: [BasicStats, "aggregator-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# BasicStats Aggregator Plugin
+
+The BasicStats aggregator plugin gives count, diff, max, min, mean,
+non_negative_diff, sum, s2 (variance), and stdev for a set of values, emitting
+the aggregate every `period` seconds.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Keep the aggregate basicstats of each metric passing through.
+[[aggregators.basicstats]]
+  ## The period on which to flush & clear the aggregator.
+  # period = "30s"
+
+  ## If true, the original metric will be dropped by the
+  ## aggregator and will not get sent to the output plugins.
+  # drop_original = false
+
+  ## Configures which basic stats to push as fields
+  # stats = ["count","diff","rate","min","max","mean","non_negative_diff","non_negative_rate","percent_change","stdev","s2","sum","interval","last"]
+```
+
+- stats
+  - If not specified, then `count`, `min`, `max`, `mean`, `stdev`, and `s2` are
+    aggregated and pushed as fields. Other fields are not aggregated by default
+    to maintain backwards compatibility.
+  - If set to an empty array, no stats are aggregated.
+
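+For example, to push only the minimum, maximum, and mean of every numeric
+field (a sketch; the chosen stat names are illustrative, taken from the list
+above):
+
+```toml
+[[aggregators.basicstats]]
+  ## Flush the aggregate every 30 seconds
+  period = "30s"
+  ## Only compute and emit these stats
+  stats = ["min", "max", "mean"]
+```
+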
+## Measurements & Fields
+
+- measurement1
+  - field1_count
+  - field1_diff (difference)
+  - field1_rate (rate per second)
+  - field1_max
+  - field1_min
+  - field1_mean
+  - field1_non_negative_diff (non-negative difference)
+  - field1_non_negative_rate (non-negative rate per second)
+  - field1_percent_change
+  - field1_sum
+  - field1_s2 (variance)
+  - field1_stdev (standard deviation)
+  - field1_interval (interval in nanoseconds)
+  - field1_last (last aggregated value)
+
+## Tags
+
+No tags are applied by this aggregator.
+
+## Example Output
+
+```text
+system,host=tars load1=1 1475583980000000000
+system,host=tars load1=1 1475583990000000000
+system,host=tars load1_count=2,load1_diff=0,load1_rate=0,load1_max=1,load1_min=1,load1_mean=1,load1_sum=2,load1_s2=0,load1_stdev=0,load1_interval=10000000000i,load1_last=1 1475584010000000000
+system,host=tars load1=1 1475584020000000000
+system,host=tars load1=3 1475584030000000000
+system,host=tars load1_count=2,load1_diff=2,load1_rate=0.2,load1_max=3,load1_min=1,load1_mean=2,load1_sum=4,load1_s2=2,load1_stdev=1.414214,load1_interval=10000000000i,load1_last=3 1475584040000000000
+```
diff --git a/content/telegraf/v1/aggregator-plugins/derivative/_index.md b/content/telegraf/v1/aggregator-plugins/derivative/_index.md
new file mode 100644
index 000000000..6018731fd
--- /dev/null
+++ b/content/telegraf/v1/aggregator-plugins/derivative/_index.md
@@ -0,0 +1,140 @@
+---
+description: "Telegraf plugin for aggregating metrics using Derivative"
+menu:
+  telegraf_v1_ref:
+    parent: aggregator_plugins_reference
+    name: Derivative
+    identifier: aggregator-derivative
+tags: [Derivative, "aggregator-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Derivative Aggregator Plugin
+
+The Derivative Aggregator Plugin estimates the derivative for all fields of the
+aggregated metrics.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Calculates a derivative for every field.
+[[aggregators.derivative]]
+  ## The period in which to flush the aggregator.
+  # period = "30s"
+
+  ## Suffix to append for the resulting derivative field.
+  # suffix = "_rate"
+
+  ## Field to use for the quotient when computing the derivative.
+  ## When using a field as the derivation parameter the name of that field will
+  ## be used for the resulting derivative, e.g. *fieldname_by_parameter*.
+  ## By default the timestamps of the metrics are used and the suffix is omitted.
+  # variable = ""
+
+  ## Maximum number of roll-overs in case only one measurement is found during a period.
+  # max_roll_over = 10
+```
+
+This aggregator estimates a derivative for each field of a metric that is
+contained in both the first and last metric of the aggregation interval.
+Without further configuration the derivative is calculated with respect to
+the time difference between these two measurements in seconds.
+The following formula is applied for every field:
+
+```text
+derivative = (value_last - value_first) / (time_last - time_first)
+```
+
+The resulting derivative will be named `<fieldname>_rate` if no `suffix` is
+configured.
+
+To calculate a derivative for every field, use:
+
+```toml
+[[aggregators.derivative]]
+  ## Specific Derivative Aggregator Arguments:
+
+  ## Configure a custom derivation variable. Timestamp is used if none is given.
+  # variable = ""
+
+  ## Suffix to add to the field name for the derivative name.
+  # suffix = "_rate"
+
+  ## Roll-Over last measurement to first measurement of next period
+  # max_roll_over = 10
+
+  ## General Aggregator Arguments:
+
+  ## calculate derivative every 30 seconds
+  period = "30s"
+```
+
+## Time Derivatives
+
+In its default configuration the aggregator determines the first and last
+measurement of the period. From these measurements the time difference in
+seconds is calculated. This time difference is then used to divide the
+difference of each field using the following formula:
+
+```text
+derivative = (value_last - value_first) / (time_last - time_first)
+```
+
+For each field the derivative is emitted with a naming pattern
+`<fieldname>_rate`.
+
+## Custom Derivation Variable
+
+The plugin supports using a field of the aggregated measurements as the
+derivation variable in the denominator. This variable is assumed to be a
+monotonically increasing value. In this case the following formula is used:
+
+```text
+derivative = (value_last - value_first) / (variable_last - variable_first)
+```
+
+**Make sure the specified variable is not filtered and exists in the metrics
+passed to this aggregator!**
+
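+For example, assuming the input metrics carry a monotonically increasing
+`packets_recv` field (as in the example output below), a sketch of such a
+configuration could look like this; the field and suffix names are
+illustrative:
+
+```toml
+[[aggregators.derivative]]
+  ## Use the packets_recv field instead of time as the denominator
+  variable = "packets_recv"
+  ## Adjust the suffix so the result is named e.g. bytes_recv_by_packets_recv
+  suffix = "_by_packets_recv"
+  period = "30s"
+```
+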
+When using a custom derivation variable, you should change the `suffix` of the
+derivative name.
+| 16        |  4.0  |                     |             |                     |             |
+| 18        |  2.0  |                     |             |                     |             |
+| 20        |  0.0  |                     |             |                     |             |
+||| -1.0 | -1.0 | | |
+
+The difference stems from the change of the value between periods, e.g. from
+6.0 to 8.0 between the first and second period. Those changes are omitted
+with `max_roll_over = 0` but are respected with `max_roll_over = 1`. That
+there are no further differences in the calculated derivatives is due to the
+example data, which has constant derivatives during the first and last
+period, even when including the gap between the periods. Using
+`max_roll_over` with a value greater than 0 may be important if you need to
+detect changes between periods, e.g. when you have very few measurements in a
+period or quasi-constant metrics with only occasional changes.
+
+### Tags
+
+No tags are applied by this aggregator.
+Existing tags are passed through the aggregator untouched.
+
+## Example Output
+
+```text
+net bytes_recv=15409i,packets_recv=164i,bytes_sent=16649i,packets_sent=120i 1508843640000000000
+net bytes_recv=73987i,packets_recv=364i,bytes_sent=87328i,packets_sent=452i 1508843660000000000
+net bytes_recv_by_packets_recv=292.89 1508843660000000000
+net packets_sent_rate=16.6,bytes_sent_rate=3533.95 1508843660000000000
+net bytes_sent_by_packet=212.89 1508843660000000000
+```
diff --git a/content/telegraf/v1/aggregator-plugins/final/_index.md b/content/telegraf/v1/aggregator-plugins/final/_index.md
new file mode 100644
index 000000000..e48497887
--- /dev/null
+++ b/content/telegraf/v1/aggregator-plugins/final/_index.md
@@ -0,0 +1,94 @@
+---
+description: "Telegraf plugin for aggregating metrics using Final"
+menu:
+  telegraf_v1_ref:
+    parent: aggregator_plugins_reference
+    name: Final
+    identifier: aggregator-final
+tags: [Final, "aggregator-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Final Aggregator Plugin
+
+The final aggregator emits the last metric of a contiguous series.  A
+contiguous series is defined as a series which receives updates within the
+time period specified by `series_timeout`. The contiguous series may be longer than the
+time interval defined by `period`.
+
+This is useful for getting the final value for data sources that produce
+discrete time series such as procstat, cgroup, kubernetes etc.
+
+When a series has not been updated within the time defined in
+`series_timeout`, the last metric is emitted with `_final` appended to each field name.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Report the final metric of a series
+[[aggregators.final]]
+  ## The period on which to flush & clear the aggregator.
+  # period = "30s"
+
+  ## If true, the original metric will be dropped by the
+  ## aggregator and will not get sent to the output plugins.
+  # drop_original = false
+
+  ## If false, _final is added to every field name
+  # keep_original_field_names = false
+
+  ## The time that a series is not updated until considering it final. Ignored
+  ## when output_strategy is "periodic".
+  # series_timeout = "5m"
+
+  ## Output strategy, supported values:
+  ##   timeout  -- output a metric if no new input arrived for `series_timeout`
+  ##   periodic -- output the last received metric every `period`
+  # output_strategy = "timeout"
+```
+
+### Output strategy
+
+By default (`output_strategy = "timeout"`) the plugin only emits a metric for
+the period if the last received one is older than the `series_timeout`. This
+does not guarantee a regular output of a `final` metric, e.g. if the
+series timeout is a multiple of the gathering interval for an input. In this
+case metrics sporadically arrive in the timeout phase of the period and
+emitting the `final` metric is suppressed.
+
+In contrast, `output_strategy = "periodic"` always outputs a `final` metric at
+the end of the period, irrespective of when the last metric arrived; the
+`series_timeout` setting is ignored.
+
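+For example, to always emit a `final` metric at the end of each period,
+regardless of when the last update arrived (a sketch):
+
+```toml
+[[aggregators.final]]
+  ## Emit the last received metric of every series each period
+  period = "30s"
+  output_strategy = "periodic"
+```
+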
+## Metrics
+
+Measurement and tags are unchanged, fields are emitted with the suffix
+`_final`.
+
+## Example Output
+
+```text
+counter,host=bar i_final=3,j_final=6 1554281635115090133
+counter,host=foo i_final=3,j_final=6 1554281635112992012
+```
+
+Original input:
+
+```text
+counter,host=bar i=1,j=4 1554281633101153300
+counter,host=foo i=1,j=4 1554281633099323601
+counter,host=bar i=2,j=5 1554281634107980073
+counter,host=foo i=2,j=5 1554281634105931116
+counter,host=bar i=3,j=6 1554281635115090133
+counter,host=foo i=3,j=6 1554281635112992012
+```
diff --git a/content/telegraf/v1/aggregator-plugins/histogram/_index.md b/content/telegraf/v1/aggregator-plugins/histogram/_index.md
new file mode 100644
index 000000000..31af96045
--- /dev/null
+++ b/content/telegraf/v1/aggregator-plugins/histogram/_index.md
@@ -0,0 +1,158 @@
+---
+description: "Telegraf plugin for aggregating metrics using Histogram"
+menu:
+  telegraf_v1_ref:
+    parent: aggregator_plugins_reference
+    name: Histogram
+    identifier: aggregator-histogram
+tags: [Histogram, "aggregator-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Histogram Aggregator Plugin
+
+The histogram aggregator plugin creates histograms containing the counts of
+field values within a range.
+
+If `cumulative` is set to true, values added to a bucket are also added to the
+larger buckets in the distribution. This creates a [cumulative histogram](https://en.wikipedia.org/wiki/Histogram#/media/File:Cumulative_vs_normal_histogram.svg).
+Otherwise, values are added to only one bucket, which creates an ordinary
+histogram.
+
+Like other Telegraf aggregators, the metric is emitted every `period` seconds.
+By default bucket counts are not reset between periods and will be non-strictly
+increasing while Telegraf is running. This behavior can be changed by setting
+the `reset` parameter to true.
+
+## Design
+
+Each metric is passed to the aggregator, which searches its configured
+histogram buckets for the fields specified in the config. If a value falls
+within a bucket, that bucket's counter is incremented by one; values larger
+than all configured bucket borders are counted in the `+Inf` bucket. Every
+`period` seconds this data is forwarded to the outputs.
+
+The bucket-counting algorithm is based on the implementation in the
+Prometheus [client](https://github.com/prometheus/client_golang/blob/master/prometheus/histogram.go).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for aggregate histogram metrics
+[[aggregators.histogram]]
+  ## The period in which to flush the aggregator.
+  # period = "30s"
+
+  ## If true, the original metric will be dropped by the
+  ## aggregator and will not get sent to the output plugins.
+  # drop_original = false
+
+  ## If true, the histogram will be reset on flush instead
+  ## of accumulating the results.
+  reset = false
+
+  ## Whether bucket values should be accumulated. If set to false, "gt" tag will be added.
+  ## Defaults to true.
+  cumulative = true
+
+  ## Expiration interval for each histogram. The histogram will be expired if
+  ## there are no changes in any buckets for this time interval. 0 == no expiration.
+  # expiration_interval = "0m"
+
+  ## If true, aggregated histogram are pushed to output only if it was updated since
+  ## previous push. Defaults to false.
+  # push_only_on_update = false
+
+  ## Example config that aggregates all fields of the metric.
+  # [[aggregators.histogram.config]]
+  #   ## Right borders of buckets (with +Inf implicitly added).
+  #   buckets = [0.0, 15.6, 34.5, 49.1, 71.5, 80.5, 94.5, 100.0]
+  #   ## The name of metric.
+  #   measurement_name = "cpu"
+
+  ## Example config that aggregates only specific fields of the metric.
+  # [[aggregators.histogram.config]]
+  #   ## Right borders of buckets (with +Inf implicitly added).
+  #   buckets = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0]
+  #   ## The name of metric.
+  #   measurement_name = "diskio"
+  #   ## The concrete fields of metric
+  #   fields = ["io_time", "read_time", "write_time"]
+```
+
+The user is responsible for defining the bounds of the histogram buckets as
+well as the measurement name and fields to aggregate.
+
+Each histogram config section must contain a `buckets` and `measurement_name`
+option.  Optionally, if `fields` is set only the fields listed will be
+aggregated.  If `fields` is not set all fields are aggregated.
+
+The `buckets` option contains a list of floats which specify the bucket
+boundaries.  Each float value defines the inclusive upper (right) bound of the
+bucket.  The `+Inf` bucket is added automatically and does not need to be
+defined.  (For left boundaries, these specified bucket borders and `-Inf` will
+be used).
+
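+For instance, the bucket layout used in the example output below could be
+configured as follows (a sketch; the borders and field names are
+illustrative):
+
+```toml
+[[aggregators.histogram]]
+  period = "30s"
+  cumulative = true
+
+  [[aggregators.histogram.config]]
+    ## Right borders; the +Inf bucket is added automatically
+    buckets = [0.0, 10.0, 50.0, 100.0]
+    measurement_name = "cpu"
+    fields = ["usage_idle"]
+```
+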
+## Measurements & Fields
+
+The postfix `bucket` will be added to each field key.
+
+- measurement1
+  - field1_bucket
+  - field2_bucket
+
+### Tags
+
+- `cumulative = true` (default):
+  - `le`: Right bucket border. It means that the metric value is less than or
+    equal to the value of this tag. If a metric value is sorted into a bucket,
+    it is also sorted into all larger buckets. As a result, the value of
+    `<field>_bucket` is rising with rising `le` value. When `le` is `+Inf`,
+    the bucket value is the count of all metrics, because all metric values are
+    less than or equal to positive infinity.
+- `cumulative = false`:
+  - `gt`: Left bucket border. It means that the metric value is greater than
+    (and not equal to) the value of this tag.
+  - `le`: Right bucket border. It means that the metric value is less than or
+    equal to the value of this tag.
+  - As both `gt` and `le` are present, each metric is sorted in only exactly
+    one bucket.
+
+## Example Output
+
+Let's assume we have the buckets [0, 10, 50, 100] and the following field
+values for `usage_idle`: [50, 7, 99, 12]
+
+With `cumulative = true`:
+
+```text
+cpu,cpu=cpu1,host=localhost,le=0.0 usage_idle_bucket=0i 1486998330000000000  # none
+cpu,cpu=cpu1,host=localhost,le=10.0 usage_idle_bucket=1i 1486998330000000000  # 7
+cpu,cpu=cpu1,host=localhost,le=50.0 usage_idle_bucket=2i 1486998330000000000  # 7, 12
+cpu,cpu=cpu1,host=localhost,le=100.0 usage_idle_bucket=4i 1486998330000000000  # 7, 12, 50, 99
+cpu,cpu=cpu1,host=localhost,le=+Inf usage_idle_bucket=4i 1486998330000000000  # 7, 12, 50, 99
+```
+
+With `cumulative = false`:
+
+```text
+cpu,cpu=cpu1,host=localhost,gt=-Inf,le=0.0 usage_idle_bucket=0i 1486998330000000000  # none
+cpu,cpu=cpu1,host=localhost,gt=0.0,le=10.0 usage_idle_bucket=1i 1486998330000000000  # 7
+cpu,cpu=cpu1,host=localhost,gt=10.0,le=50.0 usage_idle_bucket=1i 1486998330000000000  # 12
+cpu,cpu=cpu1,host=localhost,gt=50.0,le=100.0 usage_idle_bucket=2i 1486998330000000000  # 50, 99
+cpu,cpu=cpu1,host=localhost,gt=100.0,le=+Inf usage_idle_bucket=0i 1486998330000000000  # none
+```
diff --git a/content/telegraf/v1/aggregator-plugins/merge/_index.md b/content/telegraf/v1/aggregator-plugins/merge/_index.md
new file mode 100644
index 000000000..1c11c358d
--- /dev/null
+++ b/content/telegraf/v1/aggregator-plugins/merge/_index.md
@@ -0,0 +1,53 @@
+---
+description: "Telegraf plugin for aggregating metrics using Merge"
+menu:
+  telegraf_v1_ref:
+    parent: aggregator_plugins_reference
+    name: Merge
+    identifier: aggregator-merge
+tags: [Merge, "aggregator-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Merge Aggregator Plugin
+
+Merge metrics together into a single metric with multiple fields, the most
+memory- and network-transfer-efficient form.
+
+Use this plugin when fields are split over multiple metrics, with the same
+measurement, tag set and timestamp.  By merging into a single metric they can
+be handled more efficiently by the output.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Merge metrics into multifield metrics by series key
+[[aggregators.merge]]
+  ## Precision to round the metric timestamp to
+  ## This is useful for cases where metrics to merge arrive within a small
+  ## interval and thus vary in timestamp. The timestamp of the resulting metric
+  ## is also rounded.
+  # round_timestamp_to = "1ns"
+
+  ## If true, the original metric will be dropped by the
+  ## aggregator and will not get sent to the output plugins.
+  drop_original = true
+```
+
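+For example, to merge fields gathered at almost the same instant but with
+sub-millisecond timestamp jitter (a sketch; the rounding value is
+illustrative):
+
+```toml
+[[aggregators.merge]]
+  ## Treat timestamps within the same millisecond as identical
+  round_timestamp_to = "1ms"
+  drop_original = true
+```
+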
+## Example
+
+```diff
+- cpu,host=localhost usage_time=42 1567562620000000000
+- cpu,host=localhost idle_time=42 1567562620000000000
++ cpu,host=localhost idle_time=42,usage_time=42 1567562620000000000
+```
diff --git a/content/telegraf/v1/aggregator-plugins/minmax/_index.md b/content/telegraf/v1/aggregator-plugins/minmax/_index.md
new file mode 100644
index 000000000..44425805f
--- /dev/null
+++ b/content/telegraf/v1/aggregator-plugins/minmax/_index.md
@@ -0,0 +1,63 @@
+---
+description: "Telegraf plugin for aggregating metrics using MinMax"
+menu:
+  telegraf_v1_ref:
+    parent: aggregator_plugins_reference
+    name: MinMax
+    identifier: aggregator-minmax
+tags: [MinMax, "aggregator-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# MinMax Aggregator Plugin
+
+The minmax aggregator plugin aggregates the min and max values of each field it
+sees, emitting the aggregate every `period` seconds.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Keep the aggregate min/max of each metric passing through.
+[[aggregators.minmax]]
+  ## General Aggregator Arguments:
+  ## The period on which to flush & clear the aggregator.
+  # period = "30s"
+
+  ## If true, the original metric will be dropped by the
+  ## aggregator and will not get sent to the output plugins.
+  # drop_original = false
+```
+
+## Measurements & Fields
+
+- measurement1
+  - field1_max
+  - field1_min
+
+## Tags
+
+No tags are applied by this aggregator.
+
+## Example Output
+
+```text
+system,host=tars load1=1.72 1475583980000000000
+system,host=tars load1=1.6 1475583990000000000
+system,host=tars load1=1.66 1475584000000000000
+system,host=tars load1=1.63 1475584010000000000
+system,host=tars load1_max=1.72,load1_min=1.6 1475584010000000000
+system,host=tars load1=1.46 1475584020000000000
+system,host=tars load1=1.39 1475584030000000000
+system,host=tars load1=1.41 1475584040000000000
+system,host=tars load1_max=1.46,load1_min=1.39 1475584040000000000
+```
diff --git a/content/telegraf/v1/aggregator-plugins/quantile/_index.md b/content/telegraf/v1/aggregator-plugins/quantile/_index.md
new file mode 100644
index 000000000..984c8fc7a
--- /dev/null
+++ b/content/telegraf/v1/aggregator-plugins/quantile/_index.md
@@ -0,0 +1,157 @@
+---
+description: "Telegraf plugin for aggregating metrics using Quantile"
+menu:
+  telegraf_v1_ref:
+    parent: aggregator_plugins_reference
+    name: Quantile
+    identifier: aggregator-quantile
+tags: [Quantile, "aggregator-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Quantile Aggregator Plugin
+
+The quantile aggregator plugin aggregates specified quantiles for each numeric
+field per metric it sees and emits the quantiles every `period`.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Keep the aggregate quantiles of each metric passing through.
+[[aggregators.quantile]]
+  ## General Aggregator Arguments:
+  ## The period on which to flush & clear the aggregator.
+  # period = "30s"
+
+  ## If true, the original metric will be dropped by the
+  ## aggregator and will not get sent to the output plugins.
+  # drop_original = false
+
+  ## Quantiles to output in the range [0,1]
+  # quantiles = [0.25, 0.5, 0.75]
+
+  ## Type of aggregation algorithm
+  ## Supported are:
+  ##  "t-digest" -- approximation using centroids, can cope with large number of samples
+  ##  "exact R7" -- exact computation also used by Excel or NumPy (Hyndman & Fan 1996 R7)
+  ##  "exact R8" -- exact computation (Hyndman & Fan 1996 R8)
+  ## NOTE: Do not use "exact" algorithms with large number of samples
+  ##       to not impair performance or memory consumption!
+  # algorithm = "t-digest"
+
+  ## Compression for approximation (t-digest). The value needs to be
+  ## greater or equal to 1.0. Smaller values will result in more
+  ## performance but less accuracy.
+  # compression = 100.0
+```
+
+## Algorithm types
+
+### t-digest
+
+Proposed by [Dunning & Ertl (2019)](https://arxiv.org/abs/1902.04023) this type uses a
+special data-structure to cluster data. These clusters are later used
+to approximate the requested quantiles. The bounds of the approximation
+can be controlled by the `compression` setting where smaller values
+result in higher performance but less accuracy.
+
+Due to its incremental nature, this algorithm can handle large
+numbers of samples efficiently.  It is recommended for applications
+where exact quantile calculation isn't required.
+
+For implementation details see the underlying [golang library](https://github.com/caio/go-tdigest).
+
+### exact R7 and R8
+
+These algorithms compute quantiles as described in [Hyndman & Fan
+(1996)](http://www.maths.usyd.edu.au/u/UG/SM/STAT3022/r/current/Misc/Sample%20Quantiles%20in%20Statistical%20Packages.pdf).
+The R7 variant is used in Excel and NumPy. The R8 variant is recommended by
+Hyndman & Fan due to its independence of the underlying sample distribution.
+
+These algorithms save all data for the aggregation `period`.  They require a lot
+of memory when used with a large number of series or a large number of
+samples. They are slower than the `t-digest` algorithm and are recommended only
+to be used with a small number of samples and series.
+
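+For example, to compute the median and two tail quantiles exactly (a sketch;
+only advisable for a small number of series and samples per period):
+
+```toml
+[[aggregators.quantile]]
+  period = "30s"
+  algorithm = "exact R7"
+  quantiles = [0.5, 0.95, 0.99]
+```
+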
+## Benchmark (linux/amd64)
+
+The benchmark was performed by adding 100 metrics with six numeric
+(and two non-numeric) fields to the aggregator and then deriving the
+aggregation result.
+
+| algorithm  | # quantiles   | avg. runtime  |
+| :------------ | -------------:| -------------:|
+| t-digest      |            3  |  376372 ns/op |
+| exact R7      |            3  | 9782946 ns/op |
+| exact R8      |            3  | 9158205 ns/op |
+| t-digest      |          100  |  899204 ns/op |
+| exact R7      |          100  | 7868816 ns/op |
+| exact R8      |          100  | 8099612 ns/op |
+
+## Measurements
+
+Measurement names are passed through this aggregator.
+
+### Fields
+
+For all numeric fields (int32/64, uint32/64 and float32/64) new *quantile*
+fields are aggregated in the form `<fieldname>_<quantile*100>`. Other field
+types (e.g. boolean, string) are ignored and dropped from the output.
+
+For example passing in the following metric as *input*:
+
+- somemetric
+  - average_response_ms (float64)
+  - minimum_response_ms (float64)
+  - maximum_response_ms (float64)
+  - status (string)
+  - ok (boolean)
+
+and the default setting for `quantiles` you get the following *output*
+
+- somemetric
+  - average_response_ms_025 (float64)
+  - average_response_ms_050 (float64)
+  - average_response_ms_075 (float64)
+  - minimum_response_ms_025 (float64)
+  - minimum_response_ms_050 (float64)
+  - minimum_response_ms_075 (float64)
+  - maximum_response_ms_025 (float64)
+  - maximum_response_ms_050 (float64)
+  - maximum_response_ms_075 (float64)
+
+The `status` and `ok` fields are dropped because they are not numeric.  Note
+that the number of resulting fields scales with the number of `quantiles`
+specified.
+
+### Tags
+
+Tags are passed through to the output by this aggregator.
+
+### Example Output
+
+```text
+cpu,cpu=cpu-total,host=Hugin usage_user=10.814851731872487,usage_system=2.1679541490155687,usage_irq=1.046598554697342,usage_steal=0,usage_guest_nice=0,usage_idle=85.79616247197244,usage_nice=0,usage_iowait=0,usage_softirq=0.1744330924495688,usage_guest=0 1608288360000000000
+cpu,cpu=cpu-total,host=Hugin usage_guest=0,usage_system=2.1601016518428664,usage_iowait=0.02541296060990694,usage_irq=1.0165184243964942,usage_softirq=0.1778907242693666,usage_steal=0,usage_guest_nice=0,usage_user=9.275730622616953,usage_idle=87.34434561626493,usage_nice=0 1608288370000000000
+cpu,cpu=cpu-total,host=Hugin usage_idle=85.78199052131747,usage_nice=0,usage_irq=1.0476428036915637,usage_guest=0,usage_guest_nice=0,usage_system=1.995510102269591,usage_iowait=0,usage_softirq=0.1995510102269662,usage_steal=0,usage_user=10.975305562484735 1608288380000000000
+cpu,cpu=cpu-total,host=Hugin usage_guest_nice_075=0,usage_user_050=10.814851731872487,usage_guest_075=0,usage_steal_025=0,usage_irq_025=1.031558489546918,usage_irq_075=1.0471206791944527,usage_iowait_025=0,usage_guest_050=0,usage_guest_nice_050=0,usage_nice_075=0,usage_iowait_050=0,usage_system_050=2.1601016518428664,usage_irq_050=1.046598554697342,usage_guest_nice_025=0,usage_idle_050=85.79616247197244,usage_softirq_075=0.1887208672481664,usage_steal_075=0,usage_system_025=2.0778058770562287,usage_system_075=2.1640279004292173,usage_softirq_050=0.1778907242693666,usage_nice_050=0,usage_iowait_075=0.01270648030495347,usage_user_075=10.895078647178611,usage_nice_025=0,usage_steal_050=0,usage_user_025=10.04529117724472,usage_idle_025=85.78907649664495,usage_idle_075=86.57025404411868,usage_softirq_025=0.1761619083594677,usage_guest_025=0 1608288390000000000
+```
+
+## References
+
+- Dunning & Ertl: "Computing Extremely Accurate Quantiles Using t-Digests", arXiv:1902.04023 (2019) [pdf](https://arxiv.org/abs/1902.04023)
+- Hyndman & Fan: "Sample Quantiles in Statistical Packages", The American Statistician, vol. 50, pp. 361-365 (1996) [pdf](http://www.maths.usyd.edu.au/u/UG/SM/STAT3022/r/current/Misc/Sample%20Quantiles%20in%20Statistical%20Packages.pdf)
+
diff --git a/content/telegraf/v1/aggregator-plugins/starlark/_index.md b/content/telegraf/v1/aggregator-plugins/starlark/_index.md
new file mode 100644
index 000000000..453acdd95
--- /dev/null
+++ b/content/telegraf/v1/aggregator-plugins/starlark/_index.md
@@ -0,0 +1,139 @@
+---
+description: "Telegraf plugin for aggregating metrics using Starlark"
+menu:
+  telegraf_v1_ref:
+    parent: aggregator_plugins_reference
+    name: Starlark
+    identifier: aggregator-starlark
+tags: [Starlark, "aggregator-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Starlark Aggregator Plugin
+
+The `starlark` aggregator allows you to implement a custom aggregator plugin
+with a Starlark script. The script must implement the three methods defined in
+the Aggregator plugin interface: `add`, `push`, and `reset`.
+
+The Starlark Aggregator plugin calls the Starlark function `add` to add the
+metrics to the aggregator, then calls the Starlark function `push` to push the
+resulting metrics into the accumulator and finally calls the Starlark function
+`reset` to reset the entire state of the plugin.
+
+The Starlark functions can use the global variable `state` to temporarily keep
+the metrics to aggregate.
+
+The Starlark language is a dialect of Python and will be familiar to those with
+Python experience. However, there are major differences: existing Python code
+is unlikely to work unmodified, and the execution environment is sandboxed, so
+I/O operations such as reading from files or sockets are not possible.
+
+The **[Starlark specification](https://github.com/google/starlark-go/blob/d1966c6b9fcd/doc/spec.md)** has details about the syntax and available
+functions.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Aggregate metrics using a Starlark script
+[[aggregators.starlark]]
+  ## The Starlark source can be set as a string in this configuration file, or
+  ## by referencing a file containing the script.  Only one source or script
+  ## should be set at once.
+  ##
+  ## Source of the Starlark script.
+  source = '''
+state = {}
+
+def add(metric):
+  state["last"] = metric
+
+def push():
+  return state.get("last")
+
+def reset():
+  state.clear()
+'''
+
+  ## File containing a Starlark script.
+  # script = "/usr/local/bin/myscript.star"
+
+  ## The constants of the Starlark script.
+  # [aggregators.starlark.constants]
+  #   max_size = 10
+  #   threshold = 0.75
+  #   default_name = "Julia"
+  #   debug_mode = true
+```
+
+## Usage
+
+The Starlark code should contain a function called `add` that takes a metric as
+argument.  The function will be called with each metric to add, and doesn't
+return anything.
+
+```python
+def add(metric):
+  state["last"] = metric
+```
+
+The Starlark code should also contain a function called `push` that doesn't take
+any argument.  The function will be called to compute the aggregation, and
+returns the metrics to push to the accumulator.
+
+```python
+def push():
+  return state.get("last")
+```
+
+The Starlark code should also contain a function called `reset` that doesn't
+take any argument.  The function will be called to reset the plugin, and doesn't
+return anything.
+
+```python
+def reset():
+  state.clear()
+```
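Putting the three functions together: a minimal aggregator that counts the metrics seen in each period could be sketched as follows (a sketch only; in Telegraf, `push` would normally build and return `Metric` objects, but here it returns the raw count so the sketch also runs as plain Python):

```python
# Shared state, visible to all three functions
state = {"count": 0}

def add(metric):
    # Called once for each metric added during the period
    state["count"] = state["count"] + 1

def push():
    # Called at the end of each period to emit the aggregate
    return state["count"]

def reset():
    # Called after push to clear the state for the next period
    state["count"] = 0
```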
+
+For a list of available types and functions that can be used in the code, see
+the [Starlark specification](https://github.com/google/starlark-go/blob/d1966c6b9fcd/doc/spec.md).
+
+## Python Differences
+
+Refer to the Python Differences section of the Starlark processor plugin
+documentation.
+
+## Libraries available
+
+Refer to the Libraries Available section of the Starlark processor plugin
+documentation.
+
+## Common Questions
+
+Refer to the Common Questions section of the Starlark processor plugin
+documentation.
+
+## Examples
+
+- minmax - A minmax aggregator implemented with a Starlark script.
+- merge - A merge aggregator implemented with a Starlark script.
+
+All examples are in the testdata folder.
+
+Open a Pull Request to add any other useful Starlark examples.
+
diff --git a/content/telegraf/v1/aggregator-plugins/valuecounter/_index.md b/content/telegraf/v1/aggregator-plugins/valuecounter/_index.md
new file mode 100644
index 000000000..adb0c9047
--- /dev/null
+++ b/content/telegraf/v1/aggregator-plugins/valuecounter/_index.md
@@ -0,0 +1,97 @@
+---
+description: "Telegraf plugin for aggregating metrics using ValueCounter"
+menu:
+  telegraf_v1_ref:
+    parent: aggregator_plugins_reference
+    name: ValueCounter
+    identifier: aggregator-valuecounter
+tags: [ValueCounter, "aggregator-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# ValueCounter Aggregator Plugin
+
+The ValueCounter aggregator plugin counts the occurrence of values in fields
+and emits the counter once every `period` seconds.
+
+A typical use case for the valuecounter plugin is processing an HTTP access log
+(with the logparser input) and counting the HTTP status codes.
+
+The fields to count must be configured with the `fields` configuration
+directive. If `fields` is not provided, the plugin does not count any fields.
+The results are emitted as fields in the format:
+`originalfieldname_fieldvalue = count`.
+
+Counting fields with a high number of potential values can produce a
+significant number of new fields and high memory usage, so take care to only
+count fields with a limited set of values.
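The naming scheme can be illustrated with a short Python sketch (illustrative only, not the plugin's implementation; the `response` field mirrors the HTTP access log example below):

```python
from collections import Counter

def count_values(metrics, fields):
    # Build "originalfieldname_fieldvalue" counters across one period
    counts = Counter()
    for metric in metrics:
        for field in fields:
            if field in metric:
                counts["%s_%s" % (field, metric[field])] += 1
    return dict(counts)

# Three HTTP responses aggregated over one period:
metrics = [{"response": "200"}, {"response": "401"}, {"response": "200"}]
print(count_values(metrics, ["response"]))
# → {'response_200': 2, 'response_401': 1}
```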
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Count the occurrence of values in fields.
+[[aggregators.valuecounter]]
+  ## General Aggregator Arguments:
+  ## The period on which to flush & clear the aggregator.
+  # period = "30s"
+
+  ## If true, the original metric will be dropped by the
+  ## aggregator and will not get sent to the output plugins.
+  # drop_original = false
+
+  ## The fields for which the values will be counted
+  fields = ["status"]
+```
+
+### Measurements & Fields
+
+- measurement1
+  - field_value1
+  - field_value2
+
+### Tags
+
+No tags are applied by this aggregator.
+
+## Example Output
+
+Example for parsing an HTTP access log.
+
+telegraf.conf:
+
+```toml
+[[inputs.logparser]]
+  files = ["/tmp/tst.log"]
+  [inputs.logparser.grok]
+    patterns = ['%{DATA:url:tag} %{NUMBER:response:string}']
+    measurement = "access"
+
+[[aggregators.valuecounter]]
+  namepass = ["access"]
+  fields = ["response"]
+```
+
+/tmp/tst.log:
+
+```text
+/some/path 200
+/some/path 401
+/some/path 200
+```
+
+Sample output:
+
+```text
+access,url=/some/path,path=/tmp/tst.log,host=localhost.localdomain response="200" 1511948755991487011
+access,url=/some/path,path=/tmp/tst.log,host=localhost.localdomain response="401" 1511948755991522282
+access,url=/some/path,path=/tmp/tst.log,host=localhost.localdomain response="200" 1511948755991531697
+access,path=/tmp/tst.log,host=localhost.localdomain,url=/some/path response_200=2i,response_401=1i 1511948761000000000
+```
diff --git a/content/telegraf/v1/input-plugins/_index.md b/content/telegraf/v1/input-plugins/_index.md
new file mode 100644
index 000000000..966ecdfe1
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/_index.md
@@ -0,0 +1,14 @@
+---
+title: "Telegraf Input Plugins"
+description: "Telegraf input plugins collect metrics from the system, services, and third-party APIs."
+menu:
+  telegraf_v1_ref:
+    name: Input plugins
+    identifier: input_plugins_reference
+    weight: 10
+tags: [input-plugins]
+---
+
+Telegraf input plugins collect metrics from the system, services, and third-party APIs.
+
+{{< children >}}
diff --git a/content/telegraf/v1/input-plugins/activemq/_index.md b/content/telegraf/v1/input-plugins/activemq/_index.md
new file mode 100644
index 000000000..97206d443
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/activemq/_index.md
@@ -0,0 +1,106 @@
+---
+description: "Telegraf plugin for collecting metrics from ActiveMQ"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: ActiveMQ
+    identifier: input-activemq
+tags: [ActiveMQ, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# ActiveMQ Input Plugin
+
+This plugin gathers queue, topic, and subscriber metrics using the ActiveMQ
+Console API.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Gather ActiveMQ metrics
+[[inputs.activemq]]
+  ## ActiveMQ WebConsole URL
+  url = "http://127.0.0.1:8161"
+
+  ## Credentials for basic HTTP authentication
+  # username = "admin"
+  # password = "admin"
+
+  ## Required ActiveMQ webadmin root path
+  # webadmin = "admin"
+
+  ## Maximum time to receive response.
+  # response_timeout = "5s"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+## Metrics
+
+Every effort was made to preserve the names based on the XML response from the
+ActiveMQ Console API.
+
+- activemq_queues
+  - tags:
+    - name
+    - source
+    - port
+  - fields:
+    - size
+    - consumer_count
+    - enqueue_count
+    - dequeue_count
+- activemq_topics
+  - tags:
+    - name
+    - source
+    - port
+  - fields:
+    - size
+    - consumer_count
+    - enqueue_count
+    - dequeue_count
+- activemq_subscribers
+  - tags:
+    - client_id
+    - subscription_name
+    - connection_id
+    - destination_name
+    - selector
+    - active
+    - source
+    - port
+  - fields:
+    - pending_queue_size
+    - dispatched_queue_size
+    - dispatched_counter
+    - enqueue_counter
+    - dequeue_counter
+
+## Example Output
+
+```text
+activemq_queues,name=sandra,host=88284b2fe51b,source=localhost,port=8161 consumer_count=0i,enqueue_count=0i,dequeue_count=0i,size=0i 1492610703000000000
+activemq_queues,name=Test,host=88284b2fe51b,source=localhost,port=8161 dequeue_count=0i,size=0i,consumer_count=0i,enqueue_count=0i 1492610703000000000
+activemq_topics,name=ActiveMQ.Advisory.MasterBroker\ ,host=88284b2fe51b,source=localhost,port=8161 size=0i,consumer_count=0i,enqueue_count=1i,dequeue_count=0i 1492610703000000000
+activemq_topics,host=88284b2fe51b,name=AAA\,source=localhost,port=8161  size=0i,consumer_count=1i,enqueue_count=0i,dequeue_count=0i 1492610703000000000
+activemq_topics,name=ActiveMQ.Advisory.Topic\,source=localhost,port=8161 ,host=88284b2fe51b enqueue_count=1i,dequeue_count=0i,size=0i,consumer_count=0i 1492610703000000000
+activemq_topics,name=ActiveMQ.Advisory.Queue\,source=localhost,port=8161 ,host=88284b2fe51b size=0i,consumer_count=0i,enqueue_count=2i,dequeue_count=0i 1492610703000000000
+activemq_topics,name=AAAA\ ,host=88284b2fe51b,source=localhost,port=8161 consumer_count=0i,enqueue_count=0i,dequeue_count=0i,size=0i 1492610703000000000
+activemq_subscribers,connection_id=NOTSET,destination_name=AAA,,source=localhost,port=8161,selector=AA,active=no,host=88284b2fe51b,client_id=AAA,subscription_name=AAA pending_queue_size=0i,dispatched_queue_size=0i,dispatched_counter=0i,enqueue_counter=0i,dequeue_counter=0i 1492610703000000000
+```
diff --git a/content/telegraf/v1/input-plugins/aerospike/_index.md b/content/telegraf/v1/input-plugins/aerospike/_index.md
new file mode 100644
index 000000000..321ff29aa
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/aerospike/_index.md
@@ -0,0 +1,169 @@
+---
+description: "Telegraf plugin for collecting metrics from Aerospike"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Aerospike
+    identifier: input-aerospike
+tags: [Aerospike, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Aerospike Input Plugin
+
+**DEPRECATED: As of version 1.30, the Aerospike plugin is deprecated in favor
+of the prometheus plugin.**
+
+This plugin queries Aerospike server(s) and gets node statistics and stats for
+all the configured namespaces.
+
+For what the measurements mean, please consult the [Aerospike Metrics Reference
+Docs](http://www.aerospike.com/docs/reference/metrics).
+
+To make querying less complicated, all `-` characters in metric names are
+replaced with `_`, as Aerospike emits metrics in both forms.
+
+The plugin attempts to cast all metrics to integers, then booleans, then
+strings.
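The name normalization and the casting order can be sketched in Python (a sketch of the documented behavior, not the plugin's actual Go code):

```python
def normalize_name(name):
    # Aerospike emits metric names in both "-" and "_" forms; keep only "_"
    return name.replace("-", "_")

def parse_value(raw):
    # Try integer first, then boolean, then fall back to string
    try:
        return int(raw)
    except ValueError:
        pass
    if raw.lower() in ("true", "false"):
        return raw.lower() == "true"
    return raw

print(normalize_name("client-connections"))  # → client_connections
print(parse_value("6"))                      # → 6
print(parse_value("false"))                  # → False
print(parse_value("device"))                 # → device
```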
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read stats from aerospike server(s)
+[[inputs.aerospike]]
+  ## Aerospike servers to connect to (with port)
+  ## This plugin will query all namespaces the aerospike
+  ## server has configured and get stats for them.
+  servers = ["localhost:3000"]
+
+  # username = "telegraf"
+  # password = "pa$$word"
+
+  ## Optional TLS Config
+  # enable_tls = false
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  # tls_name = "tlsname"
+  ## If false, skip chain & host verification
+  # insecure_skip_verify = true
+
+  # Feature Options
+  # Add namespace variable to limit the namespaces executed on
+  # Leave blank to do all
+  # disable_query_namespaces = true # default false
+  # namespaces = ["namespace1", "namespace2"]
+
+  # Enable set level telemetry
+  # query_sets = true # default: false
+  # Add namespace set combinations to limit sets executed on
+  # Leave blank to do all sets
+  # sets = ["namespace1/set1", "namespace1/set2", "namespace3"]
+
+  # Histograms
+  # enable_ttl_histogram = true # default: false
+  # enable_object_size_linear_histogram = true # default: false
+
+  # by default, aerospike produces a 100 bucket histogram
+  # this is not great for most graphing tools, this will allow
+  # the ability to squash this to a smaller number of buckets
+  # To have a balanced histogram, the number of buckets chosen
+  # should divide evenly into 100.
+  # num_histogram_buckets = 100 # default: 10
+```
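The bucket squashing mentioned in the sample configuration can be sketched as summing consecutive groups of the 100 source buckets (a sketch; `num_histogram_buckets` should divide 100 evenly so every output bucket covers the same number of source buckets):

```python
def squash_buckets(buckets, target):
    # Merge consecutive groups of buckets by summing their counts
    group = len(buckets) // target
    return [sum(buckets[i * group:(i + 1) * group]) for i in range(target)]

# 100 buckets of one count each, squashed to the default 10 buckets:
print(squash_buckets([1] * 100, 10))
# → [10, 10, 10, 10, 10, 10, 10, 10, 10, 10]
```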
+
+## Metrics
+
+The aerospike metrics are under a few measurement names:
+
+***aerospike_node***: These are the aerospike **node** measurements, which are
+available from the aerospike `statistics` command.
+
+```text
+  telnet localhost 3003
+  statistics
+  ...
+```
+
+***aerospike_namespace***: These are aerospike namespace measurements, which
+are available from the aerospike `namespace/<namespace_name>` command.
+
+```text
+  telnet localhost 3003
+  namespaces
+  <namespace_1>;<namespace_2>;etc.
+  namespace/<namespace_name>
+  ...
+```
+
+***aerospike_set***: These are aerospike set measurements, which
+are available from the aerospike `sets/<namespace_name>/<set_name>` command.
+
+```text
+  telnet localhost 3003
+  sets
+  sets/<namespace_name>
+  sets/<namespace_name>/<set_name>
+  ...
+```
+
+***aerospike_histogram_ttl***: These are aerospike TTL histogram measurements,
+which are available from the aerospike
+`histogram:namespace=<namespace_name>;[set=<set_name>;]type=ttl` command.
+
+```text
+  telnet localhost 3003
+  histogram:namespace=<namespace_name>;type=ttl
+  histogram:namespace=<namespace_name>;[set=<set_name>;]type=ttl
+  ...
+```
+
+***aerospike_histogram_object_size_linear***: These are aerospike object size
+linear histogram measurements, which are available from the aerospike
+`histogram:namespace=<namespace_name>;[set=<set_name>;]type=object_size_linear`
+command.
+
+```text
+  telnet localhost 3003
+  histogram:namespace=<namespace_name>;type=object_size_linear
+  histogram:namespace=<namespace_name>;[set=<set_name>;]type=object_size_linear
+  ...
+```
+
+### Tags
+
+All measurements have tags:
+
+- aerospike_host
+- node_name
+
+Namespace metrics have tags:
+
+- namespace_name
+
+Set metrics have tags:
+
+- namespace_name
+- set_name
+
+Histogram metrics have tags:
+
+- namespace_name
+- set_name (optional)
+- type
+
+## Example Output
+
+```text
+aerospike_node,aerospike_host=localhost:3000,node_name="BB9020011AC4202" batch_error=0i,batch_index_complete=0i,batch_index_created_buffers=0i,batch_index_destroyed_buffers=0i,batch_index_error=0i,batch_index_huge_buffers=0i,batch_index_initiate=0i,batch_index_queue="0:0,0:0,0:0,0:0",batch_index_timeout=0i,batch_index_unused_buffers=0i,batch_initiate=0i,batch_queue=0i,batch_timeout=0i,client_connections=6i,cluster_integrity=true,cluster_key="8AF422E05281249E",cluster_size=1i,delete_queue=0i,demarshal_error=0i,early_tsvc_batch_sub_error=0i,early_tsvc_client_error=0i,early_tsvc_udf_sub_error=0i,fabric_connections=16i,fabric_msgs_rcvd=0i,fabric_msgs_sent=0i,heartbeat_connections=0i,heartbeat_received_foreign=0i,heartbeat_received_self=0i,info_complete=47i,info_queue=0i,migrate_allowed=true,migrate_partitions_remaining=0i,migrate_progress_recv=0i,migrate_progress_send=0i,objects=0i,paxos_principal="BB9020011AC4202",proxy_in_progress=0i,proxy_retry=0i,query_long_running=0i,query_short_running=0i,reaped_fds=0i,record_refs=0i,rw_in_progress=0i,scans_active=0i,sindex_gc_activity_dur=0i,sindex_gc_garbage_cleaned=0i,sindex_gc_garbage_found=0i,sindex_gc_inactivity_dur=0i,sindex_gc_list_creation_time=0i,sindex_gc_list_deletion_time=0i,sindex_gc_locktimedout=0i,sindex_gc_objects_validated=0i,sindex_ucgarbage_found=0i,sub_objects=0i,system_free_mem_pct=92i,system_swapping=false,tsvc_queue=0i,uptime=1457i 1468923222000000000
+aerospike_namespace,aerospike_host=localhost:3000,namespace=test,node_name="BB9020011AC4202" allow_nonxdr_writes=true,allow_xdr_writes=true,available_bin_names=32768i,batch_sub_proxy_complete=0i,batch_sub_proxy_error=0i,batch_sub_proxy_timeout=0i,batch_sub_read_error=0i,batch_sub_read_not_found=0i,batch_sub_read_success=0i,batch_sub_read_timeout=0i,batch_sub_tsvc_error=0i,batch_sub_tsvc_timeout=0i,client_delete_error=0i,client_delete_not_found=0i,client_delete_success=0i,client_delete_timeout=0i,client_lang_delete_success=0i,client_lang_error=0i,client_lang_read_success=0i,client_lang_write_success=0i,client_proxy_complete=0i,client_proxy_error=0i,client_proxy_timeout=0i,client_read_error=0i,client_read_not_found=0i,client_read_success=0i,client_read_timeout=0i,client_tsvc_error=0i,client_tsvc_timeout=0i,client_udf_complete=0i,client_udf_error=0i,client_udf_timeout=0i,client_write_error=0i,client_write_success=0i,client_write_timeout=0i,cold_start_evict_ttl=4294967295i,conflict_resolution_policy="generation",current_time=206619222i,data_in_index=false,default_ttl=432000i,device_available_pct=99i,device_free_pct=100i,device_total_bytes=4294967296i,device_used_bytes=0i,disallow_null_setname=false,enable_benchmarks_batch_sub=false,enable_benchmarks_read=false,enable_benchmarks_storage=false,enable_benchmarks_udf=false,enable_benchmarks_udf_sub=false,enable_benchmarks_write=false,enable_hist_proxy=false,enable_xdr=false,evict_hist_buckets=10000i,evict_tenths_pct=5i,evict_ttl=0i,evicted_objects=0i,expired_objects=0i,fail_generation=0i,fail_key_busy=0i,fail_record_too_big=0i,fail_xdr_forbidden=0i,geo2dsphere_within.earth_radius_meters=6371000i,geo2dsphere_within.level_mod=1i,geo2dsphere_within.max_cells=12i,geo2dsphere_within.max_level=30i,geo2dsphere_within.min_level=1i,geo2dsphere_within.strict=true,geo_region_query_cells=0i,geo_region_query_falsepos=0i,geo_region_query_points=0i,geo_region_query_reqs=0i,high_water_disk_pct=50i,high_water_memory_pct=60i,hwm_breached=false,ldt_enabled=false,ldt_gc_rate=0i,ldt_page_size=8192i,master_objects=0i,master_sub_objects=0i,max_ttl=315360000i,max_void_time=0i,memory_free_pct=100i,memory_size=1073741824i,memory_used_bytes=0i,memory_used_data_bytes=0i,memory_used_index_bytes=0i,memory_used_sindex_bytes=0i,migrate_order=5i,migrate_record_receives=0i,migrate_record_retransmits=0i,migrate_records_skipped=0i,migrate_records_transmitted=0i,migrate_rx_instances=0i,migrate_rx_partitions_active=0i,migrate_rx_partitions_initial=0i,migrate_rx_partitions_remaining=0i,migrate_sleep=1i,migrate_tx_instances=0i,migrate_tx_partitions_active=0i,migrate_tx_partitions_imbalance=0i,migrate_tx_partitions_initial=0i,migrate_tx_partitions_remaining=0i,non_expirable_objects=0i,ns_forward_xdr_writes=false,nsup_cycle_duration=0i,nsup_cycle_sleep_pct=0i,objects=0i,prole_objects=0i,prole_sub_objects=0i,query_agg=0i,query_agg_abort=0i,query_agg_avg_rec_count=0i,query_agg_error=0i,query_agg_success=0i,query_fail=0i,query_long_queue_full=0i,query_long_reqs=0i,query_lookup_abort=0i,query_lookup_avg_rec_count=0i,query_lookup_error=0i,query_lookup_success=0i,query_lookups=0i,query_reqs=0i,query_short_queue_full=0i,query_short_reqs=0i,query_udf_bg_failure=0i,query_udf_bg_success=0i,read_consistency_level_override="off",repl_factor=1i,scan_aggr_abort=0i,scan_aggr_complete=0i,scan_aggr_error=0i,scan_basic_abort=0i,scan_basic_complete=0i,scan_basic_error=0i,scan_udf_bg_abort=0i,scan_udf_bg_complete=0i,scan_udf_bg_error=0i,set_deleted_objects=0i,sets_enable_xdr=true,sindex.data_max_memory="ULONG_MAX",sindex.num_partitions=32i,single_bin=false,stop_writes=false,stop_writes_pct=90i,storage_engine="device",storage_engine.cold_start_empty=false,storage_engine.data_in_memory=true,storage_engine.defrag_lwm_pct=50i,storage_engine.defrag_queue_min=0i,storage_engine.defrag_sleep=1000i,storage_engine.defrag_startup_minimum=10i,storage_engine.disable_odirect=false,storage_engine.enable_osync=false,storage_engine.file="/opt/aerospike/data/test.dat",storage_engine.filesize=4294967296i,storage_engine.flush_max_ms=1000i,storage_engine.fsync_max_sec=0i,storage_engine.max_write_cache=67108864i,storage_engine.min_avail_pct=5i,storage_engine.post_write_queue=0i,storage_engine.scheduler_mode="null",storage_engine.write_block_size=1048576i,storage_engine.write_threads=1i,sub_objects=0i,udf_sub_lang_delete_success=0i,udf_sub_lang_error=0i,udf_sub_lang_read_success=0i,udf_sub_lang_write_success=0i,udf_sub_tsvc_error=0i,udf_sub_tsvc_timeout=0i,udf_sub_udf_complete=0i,udf_sub_udf_error=0i,udf_sub_udf_timeout=0i,write_commit_level_override="off",xdr_write_error=0i,xdr_write_success=0i,xdr_write_timeout=0i,{test}_query_hist_track_back=300i,{test}_query_hist_track_slice=10i,{test}_query_hist_track_thresholds="1,8,64",{test}_read_hist_track_back=300i,{test}_read_hist_track_slice=10i,{test}_read_hist_track_thresholds="1,8,64",{test}_udf_hist_track_back=300i,{test}_udf_hist_track_slice=10i,{test}_udf_hist_track_thresholds="1,8,64",{test}_write_hist_track_back=300i,{test}_write_hist_track_slice=10i,{test}_write_hist_track_thresholds="1,8,64" 1468923222000000000
+aerospike_set,aerospike_host=localhost:3000,node_name=BB99458B42826B0,set=test/test disable_eviction=false,memory_data_bytes=0i,objects=0i,set_enable_xdr="use-default",stop_writes_count=0i,tombstones=0i,truncate_lut=0i 1598033805000000000
+aerospike_histogram_ttl,aerospike_host=localhost:3000,namespace=test,node_name=BB98EE5B42826B0,set=test 0=0i,1=0i,10=0i,11=0i,12=0i,13=0i,14=0i,15=0i,16=0i,17=0i,18=0i,19=0i,2=0i,20=0i,21=0i,22=0i,23=0i,24=0i,25=0i,26=0i,27=0i,28=0i,29=0i,3=0i,30=0i,31=0i,32=0i,33=0i,34=0i,35=0i,36=0i,37=0i,38=0i,39=0i,4=0i,40=0i,41=0i,42=0i,43=0i,44=0i,45=0i,46=0i,47=0i,48=0i,49=0i,5=0i,50=0i,51=0i,52=0i,53=0i,54=0i,55=0i,56=0i,57=0i,58=0i,59=0i,6=0i,60=0i,61=0i,62=0i,63=0i,64=0i,65=0i,66=0i,67=0i,68=0i,69=0i,7=0i,70=0i,71=0i,72=0i,73=0i,74=0i,75=0i,76=0i,77=0i,78=0i,79=0i,8=0i,80=0i,81=0i,82=0i,83=0i,84=0i,85=0i,86=0i,87=0i,88=0i,89=0i,9=0i,90=0i,91=0i,92=0i,93=0i,94=0i,95=0i,96=0i,97=0i,98=0i,99=0i 1598034191000000000
+```
diff --git a/content/telegraf/v1/input-plugins/aliyuncms/_index.md b/content/telegraf/v1/input-plugins/aliyuncms/_index.md
new file mode 100644
index 000000000..f39f72dc8
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/aliyuncms/_index.md
@@ -0,0 +1,201 @@
+---
+description: "Telegraf plugin for collecting metrics from Alibaba (Aliyun) CloudMonitor Service Statistics"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Alibaba (Aliyun) CloudMonitor Service Statistics
+    identifier: input-aliyuncms
+tags: [Alibaba (Aliyun) CloudMonitor Service Statistics, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Alibaba (Aliyun) CloudMonitor Service Statistics Input Plugin
+
+Hereafter we use `Aliyun` instead of `Alibaba`, as this is the default naming
+across the web console and documentation.
+
+This plugin will pull metric statistics from Aliyun CMS.
+
+## Aliyun Authentication
+
+This plugin uses an [AccessKey](https://www.alibabacloud.com/help/doc-detail/53045.htm?spm=a2c63.p38356.b99.127.5cba21fdt5MJKr&parentId=28572) credential for authentication with the
+Aliyun OpenAPI endpoint. The plugin attempts to authenticate in the following
+order:
+
+1. Ram RoleARN credential, if `access_key_id`, `access_key_secret`, `role_arn`,
+   and `role_session_name` are specified
+2. AccessKey STS token credential, if `access_key_id`, `access_key_secret`, and
+   `access_key_sts_token` are specified
+3. AccessKey credential, if `access_key_id` and `access_key_secret` are
+   specified
+4. Ecs Ram Role credential, if `role_name` is specified
+5. RSA keypair credential, if `private_key` and `public_key_id` are specified
+6. Environment variables credential
+7. Instance metadata credential
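This fallback chain can be sketched as a sequence of checks on the configured options (a sketch of the documented order; the option names match the sample configuration below):

```python
def select_credential(cfg):
    # Return the first credential type whose required options are all set
    def has(*keys):
        return all(cfg.get(k) for k in keys)

    if has("access_key_id", "access_key_secret", "role_arn", "role_session_name"):
        return "ram_role_arn"
    if has("access_key_id", "access_key_secret", "access_key_sts_token"):
        return "access_key_sts_token"
    if has("access_key_id", "access_key_secret"):
        return "access_key"
    if has("role_name"):
        return "ecs_ram_role"
    if has("private_key", "public_key_id"):
        return "rsa_keypair"
    return "environment_or_instance_metadata"

print(select_credential({"access_key_id": "id", "access_key_secret": "secret"}))
# → access_key
```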
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Pull Metric Statistics from Aliyun CMS
+[[inputs.aliyuncms]]
+  ## Aliyun Credentials
+  ## Credentials are loaded in the following order
+  ## 1) Ram RoleArn credential
+  ## 2) AccessKey STS token credential
+  ## 3) AccessKey credential
+  ## 4) Ecs Ram Role credential
+  ## 5) RSA keypair credential
+  ## 6) Environment variables credential
+  ## 7) Instance metadata credential
+
+  # access_key_id = ""
+  # access_key_secret = ""
+  # access_key_sts_token = ""
+  # role_arn = ""
+  # role_session_name = ""
+  # private_key = ""
+  # public_key_id = ""
+  # role_name = ""
+
+  ## Specify ali cloud regions to be queried for metric and object discovery
+  ## If not set, all supported regions (see below) would be covered, it can
+  ## provide a significant load on API, so the recommendation here is to
+  ## limit the list as much as possible.
+  ## Allowed values: https://www.alibabacloud.com/help/zh/doc-detail/40654.htm
+  ## Default supported regions are:
+  ##   cn-qingdao,cn-beijing,cn-zhangjiakou,cn-huhehaote,cn-hangzhou,
+  ##   cn-shanghai, cn-shenzhen, cn-heyuan,cn-chengdu,cn-hongkong,
+  ##   ap-southeast-1,ap-southeast-2,ap-southeast-3,ap-southeast-5,
+  ##   ap-south-1,ap-northeast-1, us-west-1,us-east-1,eu-central-1,
+  ##   eu-west-1,me-east-1
+  ##
+  ## From discovery perspective it set the scope for object discovery,
+  ## the discovered info can be used to enrich the metrics with objects
+  ##  attributes/tags. Discovery is not supported for all projects.
+  ## Currently, discovery supported for the following projects:
+  ## - acs_ecs_dashboard
+  ## - acs_rds_dashboard
+  ## - acs_slb_dashboard
+  ## - acs_vpc_eip
+  regions = ["cn-hongkong"]
+
+  ## Requested AliyunCMS aggregation Period (required)
+  ## The period must be multiples of 60s and the minimum for AliyunCMS metrics
+  ## is 1 minute (60s). However not all metrics are made available to the
+  ## one minute period. Some are collected at 3 minute, 5 minute, or larger
+  ## intervals.
+  ## See: https://help.aliyun.com/document_detail/51936.html?spm=a2c4g.11186623.2.18.2bc1750eeOw1Pv
+  ## Note that if a period is configured that is smaller than the minimum for
+  ## a particular metric, that metric will not be returned by Aliyun's
+  ## OpenAPI and will not be collected by Telegraf.
+  period = "5m"
+
+  ## Collection Delay (required)
+  ## The delay must account for metrics availability via AliyunCMS API.
+  delay = "1m"
+
+  ## Recommended: use metric 'interval' that is a multiple of 'period'
+  ## to avoid gaps or overlap in pulled data
+  interval = "5m"
+
+  ## Metric Statistic Project (required)
+  project = "acs_slb_dashboard"
+
+  ## Maximum requests per second, default value is 200
+  ratelimit = 200
+
+  ## How often the discovery API call executed (default 1m)
+  #discovery_interval = "1m"
+
+  ## NOTE: Due to the way TOML is parsed, tables must be at the END of the
+  ## plugin definition, otherwise additional config options are read as part of
+  ## the table
+
+  ## Metrics to Pull
+  ## At least one metrics definition required
+  [[inputs.aliyuncms.metrics]]
+    ## Metrics names to be requested,
+    ## Description can be found here (per project):
+    ## https://help.aliyun.com/document_detail/28619.html?spm=a2c4g.11186623.6.690.1938ad41wg8QSq
+    names = ["InstanceActiveConnection", "InstanceNewConnection"]
+
+    ## Dimension filters for Metric (optional)
+    ## This allows to get additional metric dimension. If dimension is not
+    ## specified it can be returned or the data can be aggregated - it depends
+    ## on particular metric, you can find details here:
+    ##   https://help.aliyun.com/document_detail/28619.html?spm=a2c4g.11186623.6.690.1938ad41wg8QSq
+    ##
+    ## Note, that by default dimension filter includes the list of discovered
+    ## objects in scope (if discovery is enabled). Values specified here would
+    ## be added into the list of discovered objects. You can specify either
+    ## single dimension:
+    # dimensions = '{"instanceId": "p-example"}'
+
+    ## Or you can specify several dimensions at once:
+    # dimensions = '[{"instanceId": "p-example"},{"instanceId": "q-example"}]'
+
+    ## Tag Query Path
+    ## The following tags are added by default:
+    ##   * regionId (if discovery is enabled)
+    ##   * userId
+    ##   * instanceId
+    ## Enrichment tags can be added from discovery data (if supported).
+    ## The notation is
+    ##   <measurement_tag_name>:<JMES query path (https://jmespath.org/tutorial.html)>
+    ## To figure out which fields are available, consult the
+    ## Describe<ObjectType> API per project. For example, for SLB see:
+    ##   https://api.aliyun.com/#/?product=Slb&version=2014-05-15&api=DescribeLoadBalancers&params={}&tab=MOCK&lang=GO
+    # tag_query_path = [
+    #    "address:Address",
+    #    "name:LoadBalancerName",
+    #    "cluster_owner:Tags.Tag[?TagKey=='cs.cluster.name'].TagValue | [0]"
+    #    ]
+
+    ## Allow metrics without discovery data, if discovery is enabled.
+    ## If set to true, metrics without discovery data are emitted; otherwise
+    ## they are dropped. This can help when debugging dimension filters, or
+    ## when the discovery scope only partially covers the monitoring scope.
+    # allow_dps_without_discovery = false
+```
+
+### Requirements and Terminology
+
+The plugin configuration relies on [preset metric item references](https://www.alibabacloud.com/help/doc-detail/28619.htm?spm=a2c63.p38356.a3.2.389f233d0kPJn0):
+
+- `discovery_region` must be a valid Aliyun
+  [Region](https://www.alibabacloud.com/help/doc-detail/40654.htm) value
+- `period` must be a valid duration value
+- `project` must be a preset project value
+- `names` must be preset metric names
+- `dimensions` must be preset dimension values
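
The `tag_query_path` entries pair a measurement tag name with a JMES query
evaluated against the object returned by the discovery API. As a rough
illustration of what the `cluster_owner` query from the sample configuration
selects, plain Python stands in for the JMESPath engine here, and the
discovered-object shape is a hypothetical example:

```python
# Sketch of the extraction performed by a tag_query_path entry such as
#   "cluster_owner:Tags.Tag[?TagKey=='cs.cluster.name'].TagValue | [0]"
# Plain Python stands in for the JMESPath engine; the discovered-object
# shape below is a hypothetical example, not a real API response.
discovered = {
    "Address": "192.0.2.10",
    "LoadBalancerName": "lb-example",
    "Tags": {
        "Tag": [
            {"TagKey": "env", "TagValue": "prod"},
            {"TagKey": "cs.cluster.name", "TagValue": "cluster-a"},
        ]
    },
}

def cluster_owner(obj):
    # Tags.Tag[?TagKey=='cs.cluster.name'].TagValue | [0]
    values = [t["TagValue"] for t in obj["Tags"]["Tag"]
              if t["TagKey"] == "cs.cluster.name"]
    return values[0] if values else None

print(cluster_owner(discovered))  # cluster-a
```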
+
+## Metrics
+
+Each monitored Aliyun CMS project records a measurement with a field for each
+available metric statistic. Metric names are represented in [snake
+case](https://en.wikipedia.org/wiki/Snake_case).
+
+- aliyuncms_{project}
+  - {metric}_average     (metric Average value)
+  - {metric}_minimum     (metric Minimum value)
+  - {metric}_maximum     (metric Maximum value)
+  - {metric}_value       (metric Value value)
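
As a sketch of the naming scheme (assuming a plain CamelCase-to-snake_case
conversion; the helper below is illustrative, not the plugin's code):

```python
import re

def field_name(metric, statistic):
    # CamelCase metric name -> snake_case plus the statistic suffix,
    # mirroring fields such as "latency_average" in the example output.
    snake = re.sub(r"(?<!^)(?=[A-Z])", "_", metric).lower()
    return f"{snake}_{statistic}"

print(field_name("InstanceActiveConnection", "average"))
# instance_active_connection_average
```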
+
+## Example Output
+
+```text
+aliyuncms_acs_slb_dashboard,instanceId=p-example,regionId=cn-hangzhou,userId=1234567890 latency_average=0.004810798017284538,latency_maximum=0.1100282669067383,latency_minimum=0.0006084442138671875
+```
diff --git a/content/telegraf/v1/input-plugins/amd_rocm_smi/_index.md b/content/telegraf/v1/input-plugins/amd_rocm_smi/_index.md
new file mode 100644
index 000000000..dfbabb41b
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/amd_rocm_smi/_index.md
@@ -0,0 +1,113 @@
+---
+description: "Telegraf plugin for collecting metrics from AMD ROCm System Management Interface (SMI)"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: AMD ROCm System Management Interface (SMI)
+    identifier: input-amd_rocm_smi
+tags: [AMD ROCm System Management Interface (SMI), "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# AMD ROCm System Management Interface (SMI) Input Plugin
+
+This plugin queries the [`rocm-smi`](https://github.com/RadeonOpenCompute/rocm_smi_lib/tree/master/python_smi_tools)
+binary to pull GPU stats including memory and GPU usage, temperatures and
+others.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Startup error behavior options
+
+In addition to the plugin-specific and global configuration settings the plugin
+supports options for specifying the behavior when experiencing startup errors
+using the `startup_error_behavior` setting. Available values are:
+
+- `error`:  Telegraf will stop and exit in case of startup errors. This is the
+            default behavior.
+- `ignore`: Telegraf will ignore startup errors for this plugin, disable it,
+            and continue processing all other plugins.
+- `retry`:  not available for this plugin
+
+## Configuration
+
+```toml @sample.conf
+# Query statistics from AMD Graphics cards using rocm-smi binary
+[[inputs.amd_rocm_smi]]
+  ## Optional: path to rocm-smi binary, defaults to $PATH via exec.LookPath
+  # bin_path = "/opt/rocm/bin/rocm-smi"
+
+  ## Optional: timeout for GPU polling
+  # timeout = "5s"
+```
+
+## Metrics
+
+- measurement: `amd_rocm_smi`
+  - tags
+    - `name` (entry name assigned by rocm-smi executable)
+    - `gpu_id` (id of the GPU according to rocm-smi)
+    - `gpu_unique_id` (unique id of the GPU)
+
+  - fields
+    - `driver_version` (integer)
+    - `fan_speed` (integer)
+    - `memory_total` (integer, B)
+    - `memory_used` (integer, B)
+    - `memory_free` (integer, B)
+    - `temperature_sensor_edge` (float, Celsius)
+    - `temperature_sensor_junction` (float, Celsius)
+    - `temperature_sensor_memory` (float, Celsius)
+    - `utilization_gpu` (integer, percentage)
+    - `utilization_memory` (integer, percentage)
+    - `clocks_current_sm` (integer, MHz)
+    - `clocks_current_memory` (integer, MHz)
+    - `clocks_current_display` (integer, MHz)
+    - `clocks_current_fabric` (integer, MHz)
+    - `clocks_current_system` (integer, MHz)
+    - `power_draw` (float, Watt)
+    - `card_series` (string)
+    - `card_model` (string)
+    - `card_vendor` (string)
+
+## Troubleshooting
+
+Check the full output by running `rocm-smi` binary manually.
+
+Linux:
+
+```sh
+rocm-smi -o -l -m -M  -g -c -t -u -i -f -p -P -s -S -v --showreplaycount --showpids --showdriverversion --showmemvendor --showfwinfo --showproductname --showserial --showuniqueid --showbus --showpendingpages --showpagesinfo --showretiredpages --showunreservablepages --showmemuse --showvoltage --showtopo --showtopoweight --showtopohops --showtopotype --showtoponuma --showmeminfo all --json
+```
+
+Please include the output of this command if opening a GitHub issue, together
+with your ROCm version.
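
When debugging it can also help to inspect the JSON form the plugin consumes.
A minimal parsing sketch, where the sample document is hypothetical and real
`rocm-smi --json` key names vary across ROCm versions:

```python
import json

# Hypothetical, heavily trimmed stand-in for `rocm-smi --json` output;
# real key names vary across ROCm versions.
raw = '{"card0": {"Temperature (Sensor edge) (C)": "28.0", "GPU use (%)": "0"}}'

data = json.loads(raw)
for card, stats in data.items():
    # Values arrive as strings and must be converted before use.
    edge_temp = float(stats["Temperature (Sensor edge) (C)"])
    gpu_use = int(stats["GPU use (%)"])
    print(card, edge_temp, gpu_use)
```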
+
+## Example Output
+
+```text
+amd_rocm_smi,gpu_id=0x6861,gpu_unique_id=0x2150e7d042a1124,host=ali47xl,name=card0 clocks_current_memory=167i,clocks_current_sm=852i,driver_version=51114i,fan_speed=14i,memory_free=17145282560i,memory_total=17163091968i,memory_used=17809408i,power_draw=7,temperature_sensor_edge=28,temperature_sensor_junction=29,temperature_sensor_memory=92,utilization_gpu=0i 1630572551000000000
+amd_rocm_smi,gpu_id=0x6861,gpu_unique_id=0x2150e7d042a1124,host=ali47xl,name=card0 clocks_current_memory=167i,clocks_current_sm=852i,driver_version=51114i,fan_speed=14i,memory_free=17145282560i,memory_total=17163091968i,memory_used=17809408i,power_draw=7,temperature_sensor_edge=29,temperature_sensor_junction=30,temperature_sensor_memory=91,utilization_gpu=0i 1630572701000000000
+amd_rocm_smi,gpu_id=0x6861,gpu_unique_id=0x2150e7d042a1124,host=ali47xl,name=card0 clocks_current_memory=167i,clocks_current_sm=852i,driver_version=51114i,fan_speed=14i,memory_free=17145282560i,memory_total=17163091968i,memory_used=17809408i,power_draw=7,temperature_sensor_edge=29,temperature_sensor_junction=29,temperature_sensor_memory=92,utilization_gpu=0i 1630572749000000000
+```
+
+## Limitations and notices
+
+Please note that this plugin has been developed and tested on a limited number
+of versions and a small set of GPUs. Currently the latest ROCm version tested
+is 4.3.0. Depending on the device and driver versions, the amount of
+information provided by `rocm-smi` can vary, so some fields may start or stop
+appearing in the metrics after updates. The `rocm-smi` JSON output is not
+perfectly homogeneous and may change in the future, so parsing and
+unmarshalling can start failing after a ROCm update.
+
+Inspired by the current state of the art of the `nvidia-smi` plugin.
diff --git a/content/telegraf/v1/input-plugins/amqp_consumer/_index.md b/content/telegraf/v1/input-plugins/amqp_consumer/_index.md
new file mode 100644
index 000000000..8ce65fb81
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/amqp_consumer/_index.md
@@ -0,0 +1,193 @@
+---
+description: "Telegraf plugin for collecting metrics from AMQP Consumer"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: AMQP Consumer
+    identifier: input-amqp_consumer
+tags: [AMQP Consumer, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# AMQP Consumer Input Plugin
+
+This plugin provides a consumer for use with AMQP 0-9-1, a prominent
+implementation of this protocol being [RabbitMQ](https://www.rabbitmq.com/).
+
+Metrics are read from a topic exchange using the configured `queue` and
+`binding_key`.
+
+The message payload should be formatted in one of the supported
+Telegraf data formats.
+
+For an introduction to AMQP see:
+
+- [amqp - concepts](https://www.rabbitmq.com/tutorials/amqp-concepts.html)
+- [rabbitmq: getting started](https://www.rabbitmq.com/getstarted.html)
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Startup error behavior options <!-- @/docs/includes/startup_error_behavior.md -->
+
+In addition to the plugin-specific and global configuration settings the plugin
+supports options for specifying the behavior when experiencing startup errors
+using the `startup_error_behavior` setting. Available values are:
+
+- `error`:  Telegraf will stop and exit in case of startup errors. This is the
+            default behavior.
+- `ignore`: Telegraf will ignore startup errors for this plugin, disable it,
+            and continue processing all other plugins.
+- `retry`:  Telegraf will try to start up the plugin in every gather or write
+            cycle in case of startup errors. The plugin is disabled until
+            the startup succeeds.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `username` and
+`password` options.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# AMQP consumer plugin
+[[inputs.amqp_consumer]]
+  ## Brokers to consume from.  If multiple brokers are specified a random broker
+  ## will be selected anytime a connection is established.  This can be
+  ## helpful for load balancing when not using a dedicated load balancer.
+  brokers = ["amqp://localhost:5672/influxdb"]
+
+  ## Authentication credentials for the PLAIN auth_method.
+  # username = ""
+  # password = ""
+
+  ## Name of the exchange to declare.  If unset, no exchange will be declared.
+  exchange = "telegraf"
+
+  ## Exchange type; common types are "direct", "fanout", "topic", "header", "x-consistent-hash".
+  # exchange_type = "topic"
+
+  ## If true, exchange will be passively declared.
+  # exchange_passive = false
+
+  ## Exchange durability can be either "transient" or "durable".
+  # exchange_durability = "durable"
+
+  ## Additional exchange arguments.
+  # exchange_arguments = { }
+  # exchange_arguments = {"hash_property" = "timestamp"}
+
+  ## AMQP queue name.
+  queue = "telegraf"
+
+  ## AMQP queue durability can be "transient" or "durable".
+  queue_durability = "durable"
+
+  ## If true, queue will be passively declared.
+  # queue_passive = false
+
+  ## Additional arguments when consuming from Queue
+  # queue_consume_arguments = { }
+  # queue_consume_arguments = {"x-stream-offset" = "first"}
+
+  ## A binding between the exchange and queue using this binding key is
+  ## created.  If unset, no binding is created.
+  binding_key = "#"
+
+  ## Maximum number of messages server should give to the worker.
+  # prefetch_count = 50
+
+  ## Max undelivered messages
+  ## This plugin uses tracking metrics, which ensure messages are delivered to
+  ## outputs before acknowledging them to the original broker to ensure data
+  ## is not lost. This option sets the maximum messages to read from the
+  ## broker that have not been written by an output.
+  ##
+  ## This value needs to be picked with awareness of the agent's
+  ## metric_batch_size value as well. Setting max undelivered messages too high
+  ## can result in a constant stream of data batches to the output, while
+  ## setting it too low may never flush the broker's messages.
+  # max_undelivered_messages = 1000
+
+  ## Timeout for establishing the connection to a broker
+  # timeout = "30s"
+
+  ## Auth method. PLAIN and EXTERNAL are supported
+  ## Using EXTERNAL requires enabling the rabbitmq_auth_mechanism_ssl plugin as
+  ## described here: https://www.rabbitmq.com/plugins.html
+  # auth_method = "PLAIN"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## Content encoding for message payloads, can be set to
+  ## "gzip", "identity" or "auto"
+  ## - Use "gzip" to decode gzip
+  ## - Use "identity" to apply no encoding
+  ## - Use "auto" to determine the encoding using the ContentEncoding header
+  # content_encoding = "identity"
+
+  ## Maximum size of decoded message.
+  ## Acceptable units are B, KiB, KB, MiB, MB...
+  ## Without quotes and units, interpreted as size in bytes.
+  # max_decompression_size = "500MB"
+
+  ## Data format to consume.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  data_format = "influx"
+```
+
+## Message acknowledgement behavior
+
+This plugin tracks metrics to report the delivery state to the broker.
+
+Messages are **acknowledged** (ACK) in the broker if they were successfully
+parsed and delivered to all corresponding output sinks.
+
+Messages are **not acknowledged** (NACK) if parsing of the messages fails and no
+metrics were created. In this case requeueing is disabled so messages will not
+be sent out to any other queue. The message will then be discarded or sent to a
+dead-letter exchange depending on the server configuration. See
+[RabbitMQ documentation](https://www.rabbitmq.com/docs/confirms) for more details.
+
+Messages are **rejected** (REJECT) if the messages were parsed correctly but
+could not be delivered e.g. due to output-service outages. Requeueing is
+disabled in this case and messages will be discarded by the server. See
+[RabbitMQ documentation](https://www.rabbitmq.com/docs/confirms) for more details.
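
The delivery-state rules above can be condensed into a small decision function
(an illustrative sketch, not the plugin's actual implementation):

```python
def delivery_state(parsed_ok: bool, delivered_ok: bool) -> str:
    """Condensed form of the acknowledgement rules above (sketch only)."""
    if not parsed_ok:
        return "NACK"    # parsing failed; requeueing disabled
    if not delivered_ok:
        return "REJECT"  # parsed but not delivered to all outputs
    return "ACK"         # parsed and delivered to all outputs

print(delivery_state(True, True))  # ACK
```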
+
+## Metrics
+
+The format of metrics produced by this plugin depends on the content and
+data format of received messages.
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/apache/_index.md b/content/telegraf/v1/input-plugins/apache/_index.md
new file mode 100644
index 000000000..d6c799b12
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/apache/_index.md
@@ -0,0 +1,113 @@
+---
+description: "Telegraf plugin for collecting metrics from Apache"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Apache
+    identifier: input-apache
+tags: [Apache, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Apache Input Plugin
+
+The Apache plugin collects server performance information using the
+[`mod_status`](https://httpd.apache.org/docs/2.4/mod/mod_status.html) module of
+the [Apache HTTP Server](https://httpd.apache.org/).
+
+Typically, the `mod_status` module is configured to expose a page at the
+`/server-status?auto` location of the Apache server.  The
+[ExtendedStatus](https://httpd.apache.org/docs/2.4/mod/core.html#extendedstatus)
+option must be enabled in order to collect all available fields.  For
+information about how to configure your server reference the [module
+documentation](https://httpd.apache.org/docs/2.4/mod/mod_status.html#enable).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read Apache status information (mod_status)
+[[inputs.apache]]
+  ## An array of URLs to gather from, must be directed at the machine
+  ## readable version of the mod_status page including the auto query string.
+  ## Default is "http://localhost/server-status?auto".
+  urls = ["http://localhost/server-status?auto"]
+
+  ## Credentials for basic HTTP authentication.
+  # username = "myuser"
+  # password = "mypassword"
+
+  ## Maximum time to receive response.
+  # response_timeout = "5s"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+## Metrics
+
+- apache
+  - BusyWorkers (float)
+  - BytesPerReq (float)
+  - BytesPerSec (float)
+  - ConnsAsyncClosing (float)
+  - ConnsAsyncKeepAlive (float)
+  - ConnsAsyncWriting (float)
+  - ConnsTotal (float)
+  - CPUChildrenSystem (float)
+  - CPUChildrenUser (float)
+  - CPULoad (float)
+  - CPUSystem (float)
+  - CPUUser (float)
+  - IdleWorkers (float)
+  - Load1 (float)
+  - Load5 (float)
+  - Load15 (float)
+  - ParentServerConfigGeneration (float)
+  - ParentServerMPMGeneration (float)
+  - ReqPerSec (float)
+  - ServerUptimeSeconds (float)
+  - TotalAccesses (float)
+  - TotalkBytes (float)
+  - Uptime (float)
+
+The following fields are collected from the `Scoreboard`, and represent the
+number of requests in the given state:
+
+- apache
+  - scboard_closing (float)
+  - scboard_dnslookup (float)
+  - scboard_finishing (float)
+  - scboard_idle_cleanup (float)
+  - scboard_keepalive (float)
+  - scboard_logging (float)
+  - scboard_open (float)
+  - scboard_reading (float)
+  - scboard_sending (float)
+  - scboard_starting (float)
+  - scboard_waiting (float)
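
The machine-readable `?auto` page itself is plain `key: value` text plus a
`Scoreboard` string with one character per worker slot. A minimal sketch of how
such a page maps to the fields above, using an illustrative sample page and only
a subset of the scoreboard characters:

```python
# Illustrative sample of a machine-readable server-status (?auto) page.
page = """\
BusyWorkers: 1
IdleWorkers: 49
Scoreboard: _W__K
"""

# Scoreboard characters -> field suffixes (subset of the full alphabet).
SCOREBOARD = {"_": "waiting", "W": "sending", "K": "keepalive"}

fields = {}
for line in page.splitlines():
    key, _, value = line.partition(": ")
    if key == "Scoreboard":
        # Count how many worker slots are in each state.
        for ch in value:
            name = "scboard_" + SCOREBOARD[ch]
            fields[name] = fields.get(name, 0) + 1
    else:
        fields[key] = float(value)

print(fields)
```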
+
+## Tags
+
+- All measurements have the following tags:
+  - port
+  - server
+
+## Example Output
+
+```text
+apache,port=80,server=debian-stretch-apache BusyWorkers=1,BytesPerReq=0,BytesPerSec=0,CPUChildrenSystem=0,CPUChildrenUser=0,CPULoad=0.00995025,CPUSystem=0.01,CPUUser=0.01,ConnsAsyncClosing=0,ConnsAsyncKeepAlive=0,ConnsAsyncWriting=0,ConnsTotal=0,IdleWorkers=49,Load1=0.01,Load15=0,Load5=0,ParentServerConfigGeneration=3,ParentServerMPMGeneration=2,ReqPerSec=0.00497512,ServerUptimeSeconds=201,TotalAccesses=1,TotalkBytes=0,Uptime=201,scboard_closing=0,scboard_dnslookup=0,scboard_finishing=0,scboard_idle_cleanup=0,scboard_keepalive=0,scboard_logging=0,scboard_open=100,scboard_reading=0,scboard_sending=1,scboard_starting=0,scboard_waiting=49 1502489900000000000
+```
diff --git a/content/telegraf/v1/input-plugins/apcupsd/_index.md b/content/telegraf/v1/input-plugins/apcupsd/_index.md
new file mode 100644
index 000000000..55aab778d
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/apcupsd/_index.md
@@ -0,0 +1,77 @@
+---
+description: "Telegraf plugin for collecting metrics from APCUPSD"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: APCUPSD
+    identifier: input-apcupsd
+tags: [APCUPSD, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# APCUPSD Input Plugin
+
+This plugin reads data from an apcupsd daemon over its NIS network protocol.
+
+## Requirements
+
+apcupsd should be installed and its daemon should be running.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Monitor APC UPSes connected to apcupsd
+[[inputs.apcupsd]]
+  ## A list of running apcupsd servers to connect to.
+  ## If not provided, defaults to tcp://127.0.0.1:3551
+  servers = ["tcp://127.0.0.1:3551"]
+
+  ## Timeout for dialing server.
+  timeout = "5s"
+```
+
+## Metrics
+
+- apcupsd
+  - tags:
+    - serial
+    - ups_name
+    - status (string representing the set status_flags)
+    - model
+  - fields:
+    - status_flags ([status-bits](http://www.apcupsd.org/manual/manual.html#status-bits))
+    - input_voltage
+    - load_percent
+    - battery_charge_percent
+    - time_left_ns
+    - output_voltage
+    - internal_temp
+    - battery_voltage
+    - input_frequency
+    - time_on_battery_ns
+    - cumulative_time_on_battery_ns
+    - nominal_input_voltage
+    - nominal_battery_voltage
+    - nominal_power
+    - firmware
+    - battery_date
+    - last_transfer
+    - number_transfers
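
The `status_flags` field carries the raw status bitmask whose string form
appears in the `status` tag. A decoding sketch, with bit meanings taken from the
apcupsd manual's status-bits table (treat the exact mapping as illustrative and
verify against your apcupsd version):

```python
# Bit meanings taken from the apcupsd manual's status-bits table
# (illustrative; verify against your apcupsd version).
STATUS_BITS = {
    0x01: "calibration",
    0x02: "smart trim",
    0x04: "smart boost",
    0x08: "online",
    0x10: "on battery",
    0x20: "overloaded",
    0x40: "battery low",
    0x80: "replace battery",
}

def decode_status(flags: int) -> list:
    # Collect the names of all bits set in the mask.
    return [name for bit, name in STATUS_BITS.items() if flags & bit]

print(decode_status(8))  # ['online']
```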
+
+## Example Output
+
+```text
+apcupsd,serial=AS1231515,status=ONLINE,ups_name=name1 time_on_battery=0,load_percent=9.7,time_left_minutes=98,output_voltage=230.4,internal_temp=32.4,battery_voltage=27.4,input_frequency=50.2,input_voltage=230.4,battery_charge_percent=100,status_flags=8i 1490035922000000000
+```
+
+[status-bits]: http://www.apcupsd.org/manual/manual.html#status-bits
diff --git a/content/telegraf/v1/input-plugins/aurora/_index.md b/content/telegraf/v1/input-plugins/aurora/_index.md
new file mode 100644
index 000000000..941218a2f
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/aurora/_index.md
@@ -0,0 +1,89 @@
+---
+description: "Telegraf plugin for collecting metrics from Aurora"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Aurora
+    identifier: input-aurora
+tags: [Aurora, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Aurora Input Plugin
+
+The Aurora Input Plugin gathers metrics from [Apache
+Aurora](https://aurora.apache.org/) schedulers.
+
+For monitoring recommendations reference [Monitoring your Aurora
+cluster](https://aurora.apache.org/documentation/latest/operations/monitoring/)
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Gather metrics from Apache Aurora schedulers
+[[inputs.aurora]]
+  ## Schedulers are the base addresses of your Aurora Schedulers
+  schedulers = ["http://127.0.0.1:8081"]
+
+  ## Set of role types to collect metrics from.
+  ##
+  ## The scheduler roles are checked each interval by contacting the
+  ## scheduler nodes; zookeeper is not contacted.
+  # roles = ["leader", "follower"]
+
+  ## Timeout is the max time for total network operations.
+  # timeout = "5s"
+
+  ## Username and password are sent using HTTP Basic Auth.
+  # username = "username"
+  # password = "pa$$word"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+## Metrics
+
+- aurora
+  - tags:
+    - scheduler (URL of scheduler)
+    - role (leader or follower)
+  - fields:
+    - Numeric metrics are collected from the `/vars` endpoint; string fields
+      are not gathered.
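
The `/vars` endpoint returns one `name value` pair per line. A minimal sketch of
the selection rule, numeric values kept and strings skipped (the sample lines
are illustrative, not real scheduler output):

```python
# Sketch of how numeric vars become fields while string vars are skipped.
# The sample lines are illustrative, not real scheduler output.
sample = """\
jvm_uptime_secs 79947
framework_registered 1
build_git_revision c1dead0
"""

fields = {}
for line in sample.splitlines():
    name, _, value = line.partition(" ")
    try:
        fields[name] = float(value)  # keep numeric vars only
    except ValueError:
        pass                         # string vars are not gathered

print(fields)
```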
+
+## Troubleshooting
+
+Check the scheduler role; the leader will return a 200 status:
+
+```shell
+curl -v http://127.0.0.1:8081/leaderhealth
+```
+
+Get available metrics:
+
+```shell
+curl http://127.0.0.1:8081/vars
+```
+
+## Example Output
+
+The example output below has been trimmed.
+
+```text
+aurora,role=leader,scheduler=http://debian-stretch-aurora-coordinator-3.virt:8081 CronBatchWorker_batch_locked_events=0i,CronBatchWorker_batch_locked_events_per_sec=0,CronBatchWorker_batch_locked_nanos_per_event=0,CronBatchWorker_batch_locked_nanos_total=0i,CronBatchWorker_batch_locked_nanos_total_per_sec=0,CronBatchWorker_batch_unlocked_events=0i,CronBatchWorker_batch_unlocked_events_per_sec=0,CronBatchWorker_batch_unlocked_nanos_per_event=0,CronBatchWorker_batch_unlocked_nanos_total=0i,CronBatchWorker_batch_unlocked_nanos_total_per_sec=0,CronBatchWorker_batches_processed=0i,CronBatchWorker_items_processed=0i,CronBatchWorker_last_processed_batch_size=0i,CronBatchWorker_queue_size=0i,TaskEventBatchWorker_batch_locked_events=0i,TaskEventBatchWorker_batch_locked_events_per_sec=0,TaskEventBatchWorker_batch_locked_nanos_per_event=0,TaskEventBatchWorker_batch_locked_nanos_total=0i,TaskEventBatchWorker_batch_locked_nanos_total_per_sec=0,TaskEventBatchWorker_batch_unlocked_events=0i,TaskEventBatchWorker_batch_unlocked_events_per_sec=0,TaskEventBatchWorker_batch_unlocked_nanos_per_event=0,TaskEventBatchWorker_batch_unlocked_nanos_total=0i,TaskEventBatchWorker_batch_unlocked_nanos_total_per_sec=0,TaskEventBatchWorker_batches_processed=0i,TaskEventBatchWorker_items_processed=0i,TaskEventBatchWorker_last_processed_batch_size=0i,TaskEventBatchWorker_queue_size=0i,TaskGroupBatchWorker_batch_locked_events=0i,TaskGroupBatchWorker_batch_locked_events_per_sec=0,TaskGroupBatchWorker_batch_locked_nanos_per_event=0,TaskGroupBatchWorker_batch_locked_nanos_total=0i,TaskGroupBatchWorker_batch_locked_nanos_total_per_sec=0,TaskGroupBatchWorker_batch_unlocked_events=0i,TaskGroupBatchWorker_batch_unlocked_events_per_sec=0,TaskGroupBatchWorker_batch_unlocked_nanos_per_event=0,TaskGroupBatchWorker_batch_unlocked_nanos_total=0i,TaskGroupBatchWorker_batch_unlocked_nanos_total_per_sec=0,TaskGroupBatchWorker_batches_processed=0i,TaskGroupBatchWorker_items_processed=0i,TaskGroupBatchWorker_last_pro
cessed_batch_size=0i,TaskGroupBatchWorker_queue_size=0i,assigner_launch_failures=0i,async_executor_uncaught_exceptions=0i,async_tasks_completed=1i,cron_job_collisions=0i,cron_job_concurrent_runs=0i,cron_job_launch_failures=0i,cron_job_misfires=0i,cron_job_parse_failures=0i,cron_job_triggers=0i,cron_jobs_loaded=1i,empty_slots_dedicated_large=0i,empty_slots_dedicated_medium=0i,empty_slots_dedicated_revocable_large=0i,empty_slots_dedicated_revocable_medium=0i,empty_slots_dedicated_revocable_small=0i,empty_slots_dedicated_revocable_xlarge=0i,empty_slots_dedicated_small=0i,empty_slots_dedicated_xlarge=0i,empty_slots_large=0i,empty_slots_medium=0i,empty_slots_revocable_large=0i,empty_slots_revocable_medium=0i,empty_slots_revocable_small=0i,empty_slots_revocable_xlarge=0i,empty_slots_small=0i,empty_slots_xlarge=0i,event_bus_dead_events=0i,event_bus_exceptions=1i,framework_registered=1i,globally_banned_offers_size=0i,http_200_responses_events=55i,http_200_responses_events_per_sec=0,http_200_responses_nanos_per_event=0,http_200_responses_nanos_total=310416694i,http_200_responses_nanos_total_per_sec=0,job_update_delete_errors=0i,job_update_recovery_errors=0i,job_update_state_change_errors=0i,job_update_store_delete_all_events=1i,job_update_store_delete_all_events_per_sec=0,job_update_store_delete_all_nanos_per_event=0,job_update_store_delete_all_nanos_total=1227254i,job_update_store_delete_all_nanos_total_per_sec=0,job_update_store_fetch_details_query_events=74i,job_update_store_fetch_details_query_events_per_sec=0,job_update_store_fetch_details_query_nanos_per_event=0,job_update_store_fetch_details_query_nanos_total=24643149i,job_update_store_fetch_details_query_nanos_total_per_sec=0,job_update_store_prune_history_events=59i,job_update_store_prune_history_events_per_sec=0,job_update_store_prune_history_nanos_per_event=0,job_update_store_prune_history_nanos_total=262868218i,job_update_store_prune_history_nanos_total_per_sec=0,job_updates_pruned=0i,jvm_available_processors=2i,
jvm_class_loaded_count=6707i,jvm_class_total_loaded_count=6732i,jvm_class_unloaded_count=25i,jvm_gc_PS_MarkSweep_collection_count=2i,jvm_gc_PS_MarkSweep_collection_time_ms=223i,jvm_gc_PS_Scavenge_collection_count=27i,jvm_gc_PS_Scavenge_collection_time_ms=1691i,jvm_gc_collection_count=29i,jvm_gc_collection_time_ms=1914i,jvm_memory_free_mb=65i,jvm_memory_heap_mb_committed=157i,jvm_memory_heap_mb_max=446i,jvm_memory_heap_mb_used=91i,jvm_memory_max_mb=446i,jvm_memory_mb_total=157i,jvm_memory_non_heap_mb_committed=50i,jvm_memory_non_heap_mb_max=0i,jvm_memory_non_heap_mb_used=49i,jvm_threads_active=47i,jvm_threads_daemon=28i,jvm_threads_peak=48i,jvm_threads_started=62i,jvm_time_ms=1526530686927i,jvm_uptime_secs=79947i,log_entry_serialize_events=16i,log_entry_serialize_events_per_sec=0,log_entry_serialize_nanos_per_event=0,log_entry_serialize_nanos_total=4815321i,log_entry_serialize_nanos_total_per_sec=0,log_manager_append_events=16i,log_manager_append_events_per_sec=0,log_manager_append_nanos_per_event=0,log_manager_append_nanos_total=506453428i,log_manager_append_nanos_total_per_sec=0,log_manager_deflate_events=14i,log_manager_deflate_events_per_sec=0,log_manager_deflate_nanos_per_event=0,log_manager_deflate_nanos_total=21010565i,log_manager_deflate_nanos_total_per_sec=0 1526530687000000000
+```
diff --git a/content/telegraf/v1/input-plugins/azure_monitor/_index.md b/content/telegraf/v1/input-plugins/azure_monitor/_index.md
new file mode 100644
index 000000000..42570fa89
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/azure_monitor/_index.md
@@ -0,0 +1,194 @@
+---
+description: "Telegraf plugin for collecting metrics from Azure Monitor"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Azure Monitor
+    identifier: input-azure_monitor
+tags: [Azure Monitor, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Azure Monitor Input Plugin
+
+The `azure_monitor` plugin gathers metrics for each Azure
+resource using the Azure Monitor API. It uses the **Logz.io
+azure-monitor-metrics-receiver** package,
+an SDK wrapper around the Azure Monitor SDK.
+
+## Azure Credential
+
+This plugin uses `client_id`, `client_secret` and `tenant_id`
+to obtain an authentication (access) token, and `subscription_id`
+to access Azure resources.
+
+## Property Locations
+
+`subscription_id` can be found under **Overview**->**Essentials** in
+the Azure portal for your application/service.
+
+`client_id` and `client_secret` can be obtained by registering an
+application under Azure Active Directory.
+
+`tenant_id` can be found under **Azure Active Directory**->**Properties**.
+
+The resource target `resource_id` can be found under
+**Overview**->**Essentials**->**JSON View** (link) in the Azure
+portal for your application/service.
+
+`cloud_option` optionally selects the API endpoints to use when collecting
+metrics from an Azure sovereign cloud, e.g. `AzureChina`, `AzureGovernment`,
+or `AzurePublic`. The default value is `AzurePublic`.
+
+## More Information
+
+For a table of resource types and their metrics, see
+[Supported metrics with Azure Monitor](https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/metrics-supported).
+
+## Rate Limits
+
+The Azure Monitor API read limit is 12,000 requests per hour. Make sure the
+total number of metrics you request fits this budget for your collection
+interval; for example, at a 60-second interval Telegraf collects 60 times per
+hour, which leaves room for roughly 200 requests per collection.
+
+## Usage
+
+Use `resource_targets` to collect metrics from specific resources using
+resource id.
+
+Use `resource_group_targets` to collect metrics from resources under the
+resource group with resource type.
+
+Use `subscription_targets` to collect metrics from resources under the
+subscription with resource type.
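+
+As an illustration, a minimal configuration combining the three target kinds
+might look like the following sketch; the IDs, secret, resource group name,
+metric names, and resource type are placeholders, not values from a real
+subscription:
+
+```toml
+[[inputs.azure_monitor]]
+  subscription_id = "11111111-1111-1111-1111-111111111111"
+  client_id = "22222222-2222-2222-2222-222222222222"
+  client_secret = "my-client-secret"
+  tenant_id = "33333333-3333-3333-3333-333333333333"
+
+  # One specific storage account, addressed by resource ID
+  [[inputs.azure_monitor.resource_target]]
+    resource_id = "resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/mysa"
+    metrics = ["Transactions"]
+    aggregations = ["Total"]
+
+  # Every storage account in one resource group
+  [[inputs.azure_monitor.resource_group_target]]
+    resource_group = "my-rg"
+    [[inputs.azure_monitor.resource_group_target.resource]]
+      resource_type = "Microsoft.Storage/storageAccounts"
+      metrics = ["UsedCapacity"]
+      aggregations = ["Average"]
+
+  # Every storage account in the subscription
+  [[inputs.azure_monitor.subscription_target]]
+    resource_type = "Microsoft.Storage/storageAccounts"
+    metrics = ["Availability"]
+    aggregations = ["Average"]
+```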
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Gather Azure resources metrics from Azure Monitor API
+[[inputs.azure_monitor]]
+  # can be found under Overview->Essentials in the Azure portal for your application/service
+  subscription_id = "<<SUBSCRIPTION_ID>>"
+  # can be obtained by registering an application under Azure Active Directory
+  client_id = "<<CLIENT_ID>>"
+  # can be obtained by registering an application under Azure Active Directory.
+  # If not specified Default Azure Credentials chain will be attempted:
+  # - Environment credentials (AZURE_*)
+  # - Workload Identity in Kubernetes cluster
+  # - Managed Identity
+  # - Azure CLI auth
+  # - Developer Azure CLI auth
+  client_secret = "<<CLIENT_SECRET>>"
+  # can be found under Azure Active Directory->Properties
+  tenant_id = "<<TENANT_ID>>"
+  # Define the optional Azure cloud option e.g. AzureChina, AzureGovernment or AzurePublic. The default is AzurePublic.
+  # cloud_option = "AzurePublic"
+
+  # resource target #1 to collect metrics from
+  [[inputs.azure_monitor.resource_target]]
+    # can be found under Overview->Essentials->JSON View in the Azure portal for your application/service
+    # must start with 'resourceGroups/...' ('/subscriptions/xxxxxxxx-xxxx-xxxx-xxx-xxxxxxxxxxxx'
+    # must be removed from the beginning of Resource ID property value)
+    resource_id = "<<RESOURCE_ID>>"
+    # the metric names to collect
+    # leave the array empty to use all metrics available to this resource
+    metrics = [ "<<METRIC>>", "<<METRIC>>" ]
+    # metrics aggregation type value to collect
+    # can be 'Total', 'Count', 'Average', 'Minimum', 'Maximum'
+    # leave the array empty to collect all aggregation types values for each metric
+    aggregations = [ "<<AGGREGATION>>", "<<AGGREGATION>>" ]
+
+  # resource target #2 to collect metrics from
+  [[inputs.azure_monitor.resource_target]]
+    resource_id = "<<RESOURCE_ID>>"
+    metrics = [ "<<METRIC>>", "<<METRIC>>" ]
+    aggregations = [ "<<AGGREGATION>>", "<<AGGREGATION>>" ]
+
+  # resource group target #1 to collect metrics from resources under it with resource type
+  [[inputs.azure_monitor.resource_group_target]]
+    # the resource group name
+    resource_group = "<<RESOURCE_GROUP_NAME>>"
+
+    # defines the resources to collect metrics from
+    [[inputs.azure_monitor.resource_group_target.resource]]
+      # the resource type
+      resource_type = "<<RESOURCE_TYPE>>"
+      metrics = [ "<<METRIC>>", "<<METRIC>>" ]
+      aggregations = [ "<<AGGREGATION>>", "<<AGGREGATION>>" ]
+
+    # defines the resources to collect metrics from
+    [[inputs.azure_monitor.resource_group_target.resource]]
+      resource_type = "<<RESOURCE_TYPE>>"
+      metrics = [ "<<METRIC>>", "<<METRIC>>" ]
+      aggregations = [ "<<AGGREGATION>>", "<<AGGREGATION>>" ]
+
+  # resource group target #2 to collect metrics from resources under it with resource type
+  [[inputs.azure_monitor.resource_group_target]]
+    resource_group = "<<RESOURCE_GROUP_NAME>>"
+
+    [[inputs.azure_monitor.resource_group_target.resource]]
+      resource_type = "<<RESOURCE_TYPE>>"
+      metrics = [ "<<METRIC>>", "<<METRIC>>" ]
+      aggregations = [ "<<AGGREGATION>>", "<<AGGREGATION>>" ]
+
+  # subscription target #1 to collect metrics from resources under it with resource type
+  [[inputs.azure_monitor.subscription_target]]
+    resource_type = "<<RESOURCE_TYPE>>"
+    metrics = [ "<<METRIC>>", "<<METRIC>>" ]
+    aggregations = [ "<<AGGREGATION>>", "<<AGGREGATION>>" ]
+
+  # subscription target #2 to collect metrics from resources under it with resource type
+  [[inputs.azure_monitor.subscription_target]]
+    resource_type = "<<RESOURCE_TYPE>>"
+    metrics = [ "<<METRIC>>", "<<METRIC>>" ]
+    aggregations = [ "<<AGGREGATION>>", "<<AGGREGATION>>" ]
+```
+
+## Metrics
+
+* `azure_monitor_<<RESOURCE_NAMESPACE>>_<<METRIC_NAME>>`
+  * fields:
+    * total (float64)
+    * count (float64)
+    * average (float64)
+    * minimum (float64)
+    * maximum (float64)
+  * tags:
+    * namespace
+    * resource_group
+    * resource_name
+    * subscription_id
+    * resource_region
+    * unit
+
+## Example Output
+
+```text
+azure_monitor_microsoft_storage_storageaccounts_used_capacity,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=Bytes average=9065573,maximum=9065573,minimum=9065573,timeStamp="2021-11-08T09:52:00Z",total=9065573 1636368744000000000
+azure_monitor_microsoft_storage_storageaccounts_transactions,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=Count average=1,count=6,maximum=1,minimum=0,timeStamp="2021-11-08T09:52:00Z",total=6 1636368744000000000
+azure_monitor_microsoft_storage_storageaccounts_ingress,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=Bytes average=5822.333333333333,count=6,maximum=5833,minimum=0,timeStamp="2021-11-08T09:52:00Z",total=34934 1636368744000000000
+azure_monitor_microsoft_storage_storageaccounts_egress,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=Bytes average=840.1666666666666,count=6,maximum=841,minimum=0,timeStamp="2021-11-08T09:52:00Z",total=5041 1636368744000000000
+azure_monitor_microsoft_storage_storageaccounts_success_server_latency,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=MilliSeconds average=12.833333333333334,count=6,maximum=30,minimum=8,timeStamp="2021-11-08T09:52:00Z",total=77 1636368744000000000
+azure_monitor_microsoft_storage_storageaccounts_success_e2e_latency,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=MilliSeconds average=12.833333333333334,count=6,maximum=30,minimum=8,timeStamp="2021-11-08T09:52:00Z",total=77 1636368744000000000
+azure_monitor_microsoft_storage_storageaccounts_availability,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=Percent average=100,count=6,maximum=100,minimum=100,timeStamp="2021-11-08T09:52:00Z",total=600 1636368744000000000
+azure_monitor_microsoft_storage_storageaccounts_used_capacity,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=Bytes average=9065573,maximum=9065573,minimum=9065573,timeStamp="2021-11-08T09:52:00Z",total=9065573 1636368745000000000
+azure_monitor_microsoft_storage_storageaccounts_transactions,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=Count average=1,count=6,maximum=1,minimum=0,timeStamp="2021-11-08T09:52:00Z",total=6 1636368745000000000
+azure_monitor_microsoft_storage_storageaccounts_ingress,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=Bytes average=5822.333333333333,count=6,maximum=5833,minimum=0,timeStamp="2021-11-08T09:52:00Z",total=34934 1636368745000000000
+azure_monitor_microsoft_storage_storageaccounts_egress,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=Bytes average=840.1666666666666,count=6,maximum=841,minimum=0,timeStamp="2021-11-08T09:52:00Z",total=5041 1636368745000000000
+azure_monitor_microsoft_storage_storageaccounts_success_server_latency,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=MilliSeconds average=12.833333333333334,count=6,maximum=30,minimum=8,timeStamp="2021-11-08T09:52:00Z",total=77 1636368745000000000
+azure_monitor_microsoft_storage_storageaccounts_success_e2e_latency,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=MilliSeconds average=12.833333333333334,count=6,maximum=30,minimum=8,timeStamp="2021-11-08T09:52:00Z",total=77 1636368745000000000
+azure_monitor_microsoft_storage_storageaccounts_availability,host=Azure-MBP,namespace=Microsoft.Storage/storageAccounts,resource_group=azure-rg,resource_name=azuresa,resource_region=eastus,subscription_id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,unit=Percent average=100,count=6,maximum=100,minimum=100,timeStamp="2021-11-08T09:52:00Z",total=600 1636368745000000000
+```
diff --git a/content/telegraf/v1/input-plugins/azure_storage_queue/_index.md b/content/telegraf/v1/input-plugins/azure_storage_queue/_index.md
new file mode 100644
index 000000000..f1bb96687
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/azure_storage_queue/_index.md
@@ -0,0 +1,58 @@
+---
+description: "Telegraf plugin for collecting metrics from Azure Storage Queue"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Azure Storage Queue
+    identifier: input-azure_storage_queue
+tags: [Azure Storage Queue, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Azure Storage Queue Input Plugin
+
+This plugin gathers sizes of Azure Storage Queues.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Gather Azure Storage Queue metrics
+[[inputs.azure_storage_queue]]
+  ## Required Azure Storage Account name
+  account_name = "mystorageaccount"
+
+  ## Required Azure Storage Account access key
+  account_key = "storageaccountaccesskey"
+
+  ## Set to false to disable peeking age of oldest message (executes faster)
+  # peek_oldest_message_age = true
+```
+
+## Metrics
+
+- azure_storage_queues
+  - tags:
+    - queue
+    - account
+  - fields:
+    - size (integer, count)
+    - oldest_message_age_ns (integer, nanoseconds) Age of message at the head
+      of the queue. Requires `peek_oldest_message_age` to be configured
+      to `true`.
+
+## Example Output
+
+```text
+azure_storage_queues,queue=myqueue,account=mystorageaccount oldest_message_age=799714900i,size=7i 1565970503000000000
+azure_storage_queues,queue=myemptyqueue,account=mystorageaccount size=0i 1565970502000000000
+```
diff --git a/content/telegraf/v1/input-plugins/bcache/_index.md b/content/telegraf/v1/input-plugins/bcache/_index.md
new file mode 100644
index 000000000..73eb3e0e5
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/bcache/_index.md
@@ -0,0 +1,95 @@
+---
+description: "Telegraf plugin for collecting metrics from bcache"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: bcache
+    identifier: input-bcache
+tags: [bcache, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# bcache Input Plugin
+
+Get bcache stats from the `stats_total` directory and the `dirty_data` file.
+
+## Metrics
+
+Meta:
+
+- tags: `backing_dev=dev bcache_dev=dev`
+
+Measurement names:
+
+- dirty_data
+- bypassed
+- cache_bypass_hits
+- cache_bypass_misses
+- cache_hit_ratio
+- cache_hits
+- cache_miss_collisions
+- cache_misses
+- cache_readaheads
+
+## Description
+
+```text
+dirty_data
+  Amount of dirty data for this backing device in the cache. Continuously
+  updated unlike the cache set's version, but may be slightly off.
+
+bypassed
+  Amount of IO (both reads and writes) that has bypassed the cache
+
+cache_bypass_hits
+cache_bypass_misses
+  Hits and misses for IO that is intended to skip the cache are still counted,
+  but broken out here.
+
+cache_hits
+cache_misses
+cache_hit_ratio
+  Hits and misses are counted per individual IO as bcache sees them; a
+  partial hit is counted as a miss.
+
+cache_miss_collisions
+  Counts instances where data was going to be inserted into the cache from a
+  cache miss, but raced with a write and data was already present (usually 0
+  since the synchronization for cache misses was rewritten)
+
+cache_readaheads
+  Count of times readahead occurred.
+```
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics of bcache from stats_total and dirty_data
+# This plugin ONLY supports Linux
+[[inputs.bcache]]
+  ## Bcache sets path
+  ## If not specified, then default is:
+  bcachePath = "/sys/fs/bcache"
+
+  ## By default, Telegraf gathers stats for all bcache devices
+  ## Setting devices will restrict the stats to the specified
+  ## bcache devices.
+  bcacheDevs = ["bcache0"]
+```
+
+## Example Output
+
+```text
+bcache,backing_dev="md10",bcache_dev="bcache0" dirty_data=11639194i,bypassed=5167704440832i,cache_bypass_hits=146270986i,cache_bypass_misses=0i,cache_hit_ratio=90i,cache_hits=511941651i,cache_miss_collisions=157678i,cache_misses=50647396i,cache_readaheads=0i
+```
diff --git a/content/telegraf/v1/input-plugins/beanstalkd/_index.md b/content/telegraf/v1/input-plugins/beanstalkd/_index.md
new file mode 100644
index 000000000..a0103f310
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/beanstalkd/_index.md
@@ -0,0 +1,125 @@
+---
+description: "Telegraf plugin for collecting metrics from Beanstalkd"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Beanstalkd
+    identifier: input-beanstalkd
+tags: [Beanstalkd, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Beanstalkd Input Plugin
+
+The `beanstalkd` plugin collects server stats as well as tube stats (reported
+by the `stats` and `stats-tube` commands, respectively).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Collects Beanstalkd server and tubes stats
+[[inputs.beanstalkd]]
+  ## Server to collect data from
+  server = "localhost:11300"
+
+  ## List of tubes to gather stats about.
+  ## If no tubes are specified, stats are gathered for each tube on the server reported by the list-tubes command
+  tubes = ["notifications"]
+```
+
+## Metrics
+
+Please see the [Beanstalk Protocol
+doc](https://raw.githubusercontent.com/kr/beanstalkd/master/doc/protocol.txt)
+for a detailed explanation of the `stats` and `stats-tube` command output.
+
+`beanstalkd_overview` – statistical information about the system as a whole
+
+- fields
+  - cmd_delete
+  - cmd_pause_tube
+  - current_jobs_buried
+  - current_jobs_delayed
+  - current_jobs_ready
+  - current_jobs_reserved
+  - current_jobs_urgent
+  - current_using
+  - current_waiting
+  - current_watching
+  - pause
+  - pause_time_left
+  - total_jobs
+- tags
+  - name
+  - server (address taken from config)
+
+`beanstalkd_tube` – statistical information about the specified tube
+
+- fields
+  - binlog_current_index
+  - binlog_max_size
+  - binlog_oldest_index
+  - binlog_records_migrated
+  - binlog_records_written
+  - cmd_bury
+  - cmd_delete
+  - cmd_ignore
+  - cmd_kick
+  - cmd_list_tube_used
+  - cmd_list_tubes
+  - cmd_list_tubes_watched
+  - cmd_pause_tube
+  - cmd_peek
+  - cmd_peek_buried
+  - cmd_peek_delayed
+  - cmd_peek_ready
+  - cmd_put
+  - cmd_release
+  - cmd_reserve
+  - cmd_reserve_with_timeout
+  - cmd_stats
+  - cmd_stats_job
+  - cmd_stats_tube
+  - cmd_touch
+  - cmd_use
+  - cmd_watch
+  - current_connections
+  - current_jobs_buried
+  - current_jobs_delayed
+  - current_jobs_ready
+  - current_jobs_reserved
+  - current_jobs_urgent
+  - current_producers
+  - current_tubes
+  - current_waiting
+  - current_workers
+  - job_timeouts
+  - max_job_size
+  - pid
+  - rusage_stime
+  - rusage_utime
+  - total_connections
+  - total_jobs
+  - uptime
+- tags
+  - hostname
+  - id
+  - server (address taken from config)
+  - version
+
+## Example Output
+
+```text
+beanstalkd_overview,host=server.local,hostname=a2ab22ed12e0,id=232485800aa11b24,server=localhost:11300,version=1.10 cmd_stats_tube=29482i,current_jobs_delayed=0i,current_jobs_urgent=6i,cmd_kick=0i,cmd_stats=7378i,cmd_stats_job=0i,current_waiting=0i,max_job_size=65535i,pid=6i,cmd_bury=0i,cmd_reserve_with_timeout=0i,cmd_touch=0i,current_connections=1i,current_jobs_ready=6i,current_producers=0i,cmd_delete=0i,cmd_list_tubes=7369i,cmd_peek_ready=0i,cmd_put=6i,cmd_use=3i,cmd_watch=0i,current_jobs_reserved=0i,rusage_stime=6.07,cmd_list_tubes_watched=0i,cmd_pause_tube=0i,total_jobs=6i,binlog_records_migrated=0i,cmd_list_tube_used=0i,cmd_peek_delayed=0i,cmd_release=0i,current_jobs_buried=0i,job_timeouts=0i,binlog_current_index=0i,binlog_max_size=10485760i,total_connections=7378i,cmd_peek_buried=0i,cmd_reserve=0i,current_tubes=4i,binlog_records_written=0i,cmd_peek=0i,rusage_utime=1.13,uptime=7099i,binlog_oldest_index=0i,current_workers=0i,cmd_ignore=0i 1528801650000000000
+beanstalkd_tube,host=server.local,name=notifications,server=localhost:11300 pause_time_left=0i,current_jobs_buried=0i,current_jobs_delayed=0i,current_jobs_reserved=0i,current_using=0i,current_waiting=0i,pause=0i,total_jobs=3i,cmd_delete=0i,cmd_pause_tube=0i,current_jobs_ready=3i,current_jobs_urgent=3i,current_watching=0i 1528801650000000000
+```
diff --git a/content/telegraf/v1/input-plugins/beat/_index.md b/content/telegraf/v1/input-plugins/beat/_index.md
new file mode 100644
index 000000000..5e8ea3fc6
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/beat/_index.md
@@ -0,0 +1,166 @@
+---
+description: "Telegraf plugin for collecting metrics from Beat"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Beat
+    identifier: input-beat
+tags: [Beat, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Beat Input Plugin
+
+The Beat plugin collects metrics from the given Beat instances. It is
+known to work with Filebeat and Kafkabeat.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics exposed by Beat
+[[inputs.beat]]
+  ## A URL from which to read Beat-formatted JSON
+  ## Default is "http://127.0.0.1:5066".
+  url = "http://127.0.0.1:5066"
+
+  ## Enable collection of the listed stats
+  ## An empty list means collect all. Available options are currently
+  ## "beat", "libbeat", "system" and "filebeat".
+  # include = ["beat", "libbeat", "filebeat"]
+
+  ## HTTP method
+  # method = "GET"
+
+  ## Optional HTTP headers
+  # headers = {"X-Special-Header" = "Special-Value"}
+
+  ## Override HTTP "Host" header
+  # host_header = "logstash.example.com"
+
+  ## Timeout for HTTP requests
+  # timeout = "5s"
+
+  ## Optional HTTP Basic Auth credentials
+  # username = "username"
+  # password = "pa$$word"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+## Metrics
+
+- **beat**
+  - Fields:
+    - cpu_system_ticks
+    - cpu_system_time_ms
+    - cpu_total_ticks
+    - cpu_total_time_ms
+    - cpu_total_value
+    - cpu_user_ticks
+    - cpu_user_time_ms
+    - info_uptime_ms
+    - memstats_gc_next
+    - memstats_memory_alloc
+    - memstats_memory_total
+    - memstats_rss
+  - Tags:
+    - beat_beat
+    - beat_host
+    - beat_id
+    - beat_name
+    - beat_version
+
+- **beat_filebeat**
+  - Fields:
+    - events_active
+    - events_added
+    - events_done
+    - harvester_closed
+    - harvester_open_files
+    - harvester_running
+    - harvester_skipped
+    - harvester_started
+    - input_log_files_renamed
+    - input_log_files_truncated
+  - Tags:
+    - beat_beat
+    - beat_host
+    - beat_id
+    - beat_name
+    - beat_version
+
+- **beat_libbeat**
+  - Fields:
+    - config_module_running
+    - config_module_starts
+    - config_module_stops
+    - config_reloads
+    - output_events_acked
+    - output_events_active
+    - output_events_batches
+    - output_events_dropped
+    - output_events_duplicates
+    - output_events_failed
+    - output_events_total
+    - output_type
+    - output_read_bytes
+    - output_read_errors
+    - output_write_bytes
+    - output_write_errors
+    - outputs_kafka_bytes_read
+    - outputs_kafka_bytes_write
+    - pipeline_clients
+    - pipeline_events_active
+    - pipeline_events_dropped
+    - pipeline_events_failed
+    - pipeline_events_filtered
+    - pipeline_events_published
+    - pipeline_events_retry
+    - pipeline_events_total
+    - pipeline_queue_acked
+  - Tags:
+    - beat_beat
+    - beat_host
+    - beat_id
+    - beat_name
+    - beat_version
+
+- **beat_system**
+  - Field:
+    - cpu_cores
+    - load_1
+    - load_15
+    - load_5
+    - load_norm_1
+    - load_norm_15
+    - load_norm_5
+  - Tags:
+    - beat_beat
+    - beat_host
+    - beat_id
+    - beat_name
+    - beat_version
+
+## Example Output
+
+```text
+beat,beat_beat=filebeat,beat_host=node-6,beat_id=9c1c8697-acb4-4df0-987d-28197814f788,beat_name=node-6-test,beat_version=6.4.2,host=node-6 cpu_system_ticks=656750,cpu_system_time_ms=656750,cpu_total_ticks=5461190,cpu_total_time_ms=5461198,cpu_total_value=5461190,cpu_user_ticks=4804440,cpu_user_time_ms=4804448,info_uptime_ms=342634196,memstats_gc_next=20199584,memstats_memory_alloc=12547424,memstats_memory_total=486296424792,memstats_rss=72552448 1540316047000000000
+beat_libbeat,beat_beat=filebeat,beat_host=node-6,beat_id=9c1c8697-acb4-4df0-987d-28197814f788,beat_name=node-6-test,beat_version=6.4.2,host=node-6 config_module_running=0,config_module_starts=0,config_module_stops=0,config_reloads=0,output_events_acked=192404,output_events_active=0,output_events_batches=1607,output_events_dropped=0,output_events_duplicates=0,output_events_failed=0,output_events_total=192404,output_read_bytes=0,output_read_errors=0,output_write_bytes=0,output_write_errors=0,outputs_kafka_bytes_read=1118528,outputs_kafka_bytes_write=48002014,pipeline_clients=1,pipeline_events_active=0,pipeline_events_dropped=0,pipeline_events_failed=0,pipeline_events_filtered=11496,pipeline_events_published=192404,pipeline_events_retry=14,pipeline_events_total=203900,pipeline_queue_acked=192404 1540316047000000000
+beat_system,beat_beat=filebeat,beat_host=node-6,beat_id=9c1c8697-acb4-4df0-987d-28197814f788,beat_name=node-6-test,beat_version=6.4.2,host=node-6 cpu_cores=32,load_1=46.08,load_15=49.82,load_5=47.88,load_norm_1=1.44,load_norm_15=1.5569,load_norm_5=1.4963 1540316047000000000
+beat_filebeat,beat_beat=filebeat,beat_host=node-6,beat_id=9c1c8697-acb4-4df0-987d-28197814f788,beat_name=node-6-test,beat_version=6.4.2,host=node-6 events_active=0,events_added=3223,events_done=3223,harvester_closed=0,harvester_open_files=0,harvester_running=0,harvester_skipped=0,harvester_started=0,input_log_files_renamed=0,input_log_files_truncated=0 1540320286000000000
+```
diff --git a/content/telegraf/v1/input-plugins/bind/_index.md b/content/telegraf/v1/input-plugins/bind/_index.md
new file mode 100644
index 000000000..f042d4470
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/bind/_index.md
@@ -0,0 +1,158 @@
+---
+description: "Telegraf plugin for collecting metrics from BIND 9 Nameserver Statistics"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: BIND 9 Nameserver Statistics
+    identifier: input-bind
+tags: [BIND 9 Nameserver Statistics, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# BIND 9 Nameserver Statistics Input Plugin
+
+This plugin decodes the JSON or XML statistics provided by BIND 9 nameservers.
+
+## XML Statistics Channel
+
+Version 2 statistics (BIND 9.6 - 9.9) and version 3 statistics (BIND 9.9+) are
+supported. Note that for BIND 9.9 to support version 3 statistics, it must be
+built with the `--enable-newstats` compile flag, and it must be specifically
+requested via the correct URL. Version 3 statistics are the default (and only)
+XML format in BIND 9.10+.
+
+## JSON Statistics Channel
+
+JSON statistics schema version 1 (BIND 9.10+) is supported. As of writing, some
+distros still do not enable support for JSON statistics in their BIND packages.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Read BIND nameserver XML statistics
+[[inputs.bind]]
+  ## An array of BIND XML statistics URIs from which to gather stats.
+  ## Default is "http://localhost:8053/xml/v3".
+  # urls = ["http://localhost:8053/xml/v3"]
+  # gather_memory_contexts = false
+  # gather_views = false
+
+  ## Timeout for HTTP requests made to the BIND nameserver
+  # timeout = "4s"
+```
+
+- **urls** []string: List of BIND statistics channel URLs to collect from.
+  Do not include a trailing slash in the URL.
+  Default is `http://localhost:8053/xml/v3`.
+- **gather_memory_contexts** bool: Report per-context memory statistics.
+- **gather_views** bool: Report per-view query statistics.
+- **timeout** duration: Timeout for HTTP requests made to the BIND nameserver
+  (example: `"4s"`).
+
+The following table summarizes the URL formats which should be used,
+depending on your BIND version and configured statistics channel.
+
+| BIND Version | Statistics Format | Example URL                   |
+| ------------ | ----------------- | ----------------------------- |
+| 9.6 - 9.8    | XML v2            | `http://localhost:8053`         |
+| 9.9          | XML v2            | `http://localhost:8053/xml/v2`  |
+| 9.9+         | XML v3            | `http://localhost:8053/xml/v3`  |
+| 9.10+        | JSON v1           | `http://localhost:8053/json/v1` |
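+
+For example, to collect from a BIND 9.10+ server via the JSON v1 statistics
+channel instead of the default XML v3 endpoint, override the `urls` setting
+as in this sketch:
+
+```toml
+[[inputs.bind]]
+  urls = ["http://localhost:8053/json/v1"]
+```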
+
+### Configuration of BIND Daemon
+
+Add the following to your named.conf if running Telegraf on the same host
+as the BIND daemon:
+
+```text
+statistics-channels {
+    inet 127.0.0.1 port 8053;
+};
+```
+
+Alternatively, specify a wildcard address (e.g., 0.0.0.0) or specific
+IP address of an interface to configure the BIND daemon to listen on that
+address. Note that you should secure the statistics channel with an ACL if
+it is publicly reachable. Consult the BIND Administrator Reference Manual
+for more information.
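+
+For instance, a sketch of a `named.conf` fragment that listens on all
+interfaces but restricts the channel to a monitoring subnet; the ACL name and
+subnet below are illustrative, not prescribed values:
+
+```text
+acl monitoring { 127.0.0.1; 192.0.2.0/24; };
+
+statistics-channels {
+    inet 0.0.0.0 port 8053 allow { monitoring; };
+};
+```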
+
+## Metrics
+
+- bind_counter
+  - name=value (multiple)
+- bind_memory
+  - total_use
+  - in_use
+  - block_size
+  - context_size
+  - lost
+- bind_memory_context
+  - total
+  - in_use
+
+## Tags
+
+- All measurements
+  - url
+  - source
+  - port
+- bind_counter
+  - type
+  - view (optional)
+- bind_memory_context
+  - id
+  - name
+
+## Sample Queries
+
+These are some useful queries (to generate dashboards or other
+visualizations) to run against data from this plugin:
+
+```sql
+SELECT non_negative_derivative(mean(/^A$|^PTR$/), 5m) FROM bind_counter \
+WHERE "url" = 'localhost:8053' AND "type" = 'qtype' AND time > now() - 1h \
+GROUP BY time(5m), "type"
+```
+
+```text
+name: bind_counter
+tags: type=qtype
+time                non_negative_derivative_A non_negative_derivative_PTR
+----                ------------------------- ---------------------------
+1553862000000000000 254.99444444430992        1388.311111111194
+1553862300000000000 354                       2135.716666666791
+1553862600000000000 316.8666666666977         2130.133333333768
+1553862900000000000 309.05000000004657        2126.75
+1553863200000000000 315.64999999990687        2128.483333332464
+1553863500000000000 308.9166666667443         2132.350000000559
+1553863800000000000 302.64999999990687        2131.1833333335817
+1553864100000000000 310.85000000009313        2132.449999999255
+1553864400000000000 314.3666666666977         2136.216666666791
+1553864700000000000 303.2333333331626         2133.8166666673496
+1553865000000000000 304.93333333334886        2127.333333333023
+1553865300000000000 317.93333333334886        2130.3166666664183
+1553865600000000000 280.6666666667443         1807.9071428570896
+```
+
+## Example Output
+
+Here is an example of this plugin's output:
+
+```text
+bind_memory,host=LAP,port=8053,source=localhost,url=localhost:8053 block_size=12058624i,context_size=4575056i,in_use=4113717i,lost=0i,total_use=16663252i 1554276619000000000
+bind_counter,host=LAP,port=8053,source=localhost,type=opcode,url=localhost:8053 IQUERY=0i,NOTIFY=0i,QUERY=9i,STATUS=0i,UPDATE=0i 1554276619000000000
+bind_counter,host=LAP,port=8053,source=localhost,type=rcode,url=localhost:8053 17=0i,18=0i,19=0i,20=0i,21=0i,22=0i,BADCOOKIE=0i,BADVERS=0i,FORMERR=0i,NOERROR=7i,NOTAUTH=0i,NOTIMP=0i,NOTZONE=0i,NXDOMAIN=0i,NXRRSET=0i,REFUSED=0i,RESERVED11=0i,RESERVED12=0i,RESERVED13=0i,RESERVED14=0i,RESERVED15=0i,SERVFAIL=2i,YXDOMAIN=0i,YXRRSET=0i 1554276619000000000
+bind_counter,host=LAP,port=8053,source=localhost,type=qtype,url=localhost:8053 A=1i,ANY=1i,NS=1i,PTR=5i,SOA=1i 1554276619000000000
+bind_counter,host=LAP,port=8053,source=localhost,type=nsstat,url=localhost:8053 AuthQryRej=0i,CookieBadSize=0i,CookieBadTime=0i,CookieIn=9i,CookieMatch=0i,CookieNew=9i,CookieNoMatch=0i,DNS64=0i,ECSOpt=0i,ExpireOpt=0i,KeyTagOpt=0i,NSIDOpt=0i,OtherOpt=0i,QryAuthAns=7i,QryBADCOOKIE=0i,QryDropped=0i,QryDuplicate=0i,QryFORMERR=0i,QryFailure=0i,QryNXDOMAIN=0i,QryNXRedir=0i,QryNXRedirRLookup=0i,QryNoauthAns=0i,QryNxrrset=1i,QryRecursion=2i,QryReferral=0i,QrySERVFAIL=2i,QrySuccess=6i,QryTCP=1i,QryUDP=8i,RPZRewrites=0i,RateDropped=0i,RateSlipped=0i,RecQryRej=0i,RecursClients=0i,ReqBadEDNSVer=0i,ReqBadSIG=0i,ReqEdns0=9i,ReqSIG0=0i,ReqTCP=1i,ReqTSIG=0i,Requestv4=9i,Requestv6=0i,RespEDNS0=9i,RespSIG0=0i,RespTSIG=0i,Response=9i,TruncatedResp=0i,UpdateBadPrereq=0i,UpdateDone=0i,UpdateFail=0i,UpdateFwdFail=0i,UpdateRej=0i,UpdateReqFwd=0i,UpdateRespFwd=0i,XfrRej=0i,XfrReqDone=0i 1554276619000000000
+bind_counter,host=LAP,port=8053,source=localhost,type=zonestat,url=localhost:8053 AXFRReqv4=0i,AXFRReqv6=0i,IXFRReqv4=0i,IXFRReqv6=0i,NotifyInv4=0i,NotifyInv6=0i,NotifyOutv4=0i,NotifyOutv6=0i,NotifyRej=0i,SOAOutv4=0i,SOAOutv6=0i,XfrFail=0i,XfrSuccess=0i 1554276619000000000
+bind_counter,host=LAP,port=8053,source=localhost,type=sockstat,url=localhost:8053 FDWatchClose=0i,FDwatchConn=0i,FDwatchConnFail=0i,FDwatchRecvErr=0i,FDwatchSendErr=0i,FdwatchBindFail=0i,RawActive=1i,RawClose=0i,RawOpen=1i,RawOpenFail=0i,RawRecvErr=0i,TCP4Accept=6i,TCP4AcceptFail=0i,TCP4Active=9i,TCP4BindFail=0i,TCP4Close=5i,TCP4Conn=0i,TCP4ConnFail=0i,TCP4Open=8i,TCP4OpenFail=0i,TCP4RecvErr=0i,TCP4SendErr=0i,TCP6Accept=0i,TCP6AcceptFail=0i,TCP6Active=2i,TCP6BindFail=0i,TCP6Close=0i,TCP6Conn=0i,TCP6ConnFail=0i,TCP6Open=2i,TCP6OpenFail=0i,TCP6RecvErr=0i,TCP6SendErr=0i,UDP4Active=18i,UDP4BindFail=14i,UDP4Close=14i,UDP4Conn=0i,UDP4ConnFail=0i,UDP4Open=32i,UDP4OpenFail=0i,UDP4RecvErr=0i,UDP4SendErr=0i,UDP6Active=3i,UDP6BindFail=0i,UDP6Close=6i,UDP6Conn=0i,UDP6ConnFail=6i,UDP6Open=9i,UDP6OpenFail=0i,UDP6RecvErr=0i,UDP6SendErr=0i,UnixAccept=0i,UnixAcceptFail=0i,UnixActive=0i,UnixBindFail=0i,UnixClose=0i,UnixConn=0i,UnixConnFail=0i,UnixOpen=0i,UnixOpenFail=0i,UnixRecvErr=0i,UnixSendErr=0i 1554276619000000000
+```
diff --git a/content/telegraf/v1/input-plugins/bond/_index.md b/content/telegraf/v1/input-plugins/bond/_index.md
new file mode 100644
index 000000000..0f0901dc1
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/bond/_index.md
@@ -0,0 +1,132 @@
+---
+description: "Telegraf plugin for collecting metrics from Bond"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Bond
+    identifier: input-bond
+tags: [Bond, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Bond Input Plugin
+
+The Bond input plugin collects network bond interface status for both the
+bond interface itself and its slave interfaces.
+The plugin collects these metrics from `/proc/net/bonding/*` files.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Collect bond interface status, slaves statuses and failures count
+[[inputs.bond]]
+  ## Sets 'proc' directory path
+  ## If not specified, then default is /proc
+  # host_proc = "/proc"
+
+  ## Sets 'sys' directory path
+  ## If not specified, then default is /sys
+  # host_sys = "/sys"
+
+  ## By default, telegraf gathers stats for all bond interfaces
+  ## Setting interfaces will restrict the stats to the specified
+  ## bond interfaces.
+  # bond_interfaces = ["bond0"]
+
+  ## Tries to collect additional bond details from /sys/class/net/{bond}
+  ## currently only useful for LACP (mode 4) bonds
+  # collect_sys_details = false
+```
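+
+For reference, the plugin parses files of the following form (abbreviated
+here; the exact contents vary by kernel version and bonding mode):
+
+```text
+Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
+
+Bonding Mode: fault-tolerance (active-backup)
+Currently Active Slave: eth0
+MII Status: up
+
+Slave Interface: eth0
+MII Status: up
+Link Failure Count: 0
+
+Slave Interface: eth1
+MII Status: up
+Link Failure Count: 0
+```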
+
+## Metrics
+
+- bond
+  - active_slave (for active-backup mode)
+  - status
+
+- bond_slave
+  - failures
+  - status
+  - count
+  - actor_churned (for LACP bonds)
+  - partner_churned (for LACP bonds)
+  - total_churned (for LACP bonds)
+
+- bond_sys
+  - slave_count
+  - ad_port_count
+
+## Description
+
+- active_slave
+  - Currently active slave interface for active-backup mode.
+- status
+  - Status of the bond interface or the bond's slave interface (down = 0, up = 1).
+- failures
+  - Number of failures for the bond's slave interface.
+- count
+  - Number of slaves attached to the bond.
+- actor_churned
+  - Number of times the local end of the LACP bond flapped.
+- partner_churned
+  - Number of times the remote end of the LACP bond flapped.
+- total_churned
+  - Total count of all churn events.
+
+## Tags
+
+- bond
+  - bond
+
+- bond_slave
+  - bond
+  - interface
+
+- bond_sys
+  - bond
+  - mode
+
+## Example Output
+
+Configuration:
+
+```toml
+[[inputs.bond]]
+  ## Sets 'proc' directory path
+  ## If not specified, then default is /proc
+  host_proc = "/proc"
+
+  ## By default, telegraf gathers stats for all bond interfaces
+  ## Setting interfaces will restrict the stats to the specified
+  ## bond interfaces.
+  bond_interfaces = ["bond0", "bond1"]
+```
+
+Run:
+
+```bash
+telegraf --config telegraf.conf --input-filter bond --test
+```
+
+Output:
+
+```text
+bond,bond=bond1,host=local active_slave="eth0",status=1i 1509704525000000000
+bond_slave,bond=bond1,interface=eth0,host=local status=1i,failures=0i 1509704525000000000
+bond_slave,host=local,bond=bond1,interface=eth1 status=1i,failures=0i 1509704525000000000
+bond_slave,host=local,bond=bond1 count=2i 1509704525000000000
+bond,bond=bond0,host=isvetlov-mac.local status=1i 1509704525000000000
+bond_slave,bond=bond0,interface=eth1,host=local status=1i,failures=0i 1509704525000000000
+bond_slave,bond=bond0,interface=eth2,host=local status=1i,failures=0i 1509704525000000000
+bond_slave,bond=bond0,host=local count=2i 1509704525000000000
+```
diff --git a/content/telegraf/v1/input-plugins/burrow/_index.md b/content/telegraf/v1/input-plugins/burrow/_index.md
new file mode 100644
index 000000000..70dbe16e9
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/burrow/_index.md
@@ -0,0 +1,128 @@
+---
+description: "Telegraf plugin for collecting metrics from Burrow Kafka Consumer Lag Checking"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Burrow Kafka Consumer Lag Checking
+    identifier: input-burrow
+tags: [Burrow Kafka Consumer Lag Checking, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Burrow Kafka Consumer Lag Checking Input Plugin
+
+Collect Kafka topic, consumer and partition status via
+[Burrow](https://github.com/linkedin/Burrow) HTTP
+[API](https://github.com/linkedin/Burrow/wiki/HTTP-Endpoint).
+
+Supported Burrow version: `1.x`
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Collect Kafka topics and consumers status from Burrow HTTP API.
+[[inputs.burrow]]
+  ## Burrow API endpoints in format "schema://host:port".
+  ## Default is "http://localhost:8000".
+  servers = ["http://localhost:8000"]
+
+  ## Override Burrow API prefix.
+  ## Useful when Burrow is behind reverse-proxy.
+  # api_prefix = "/v3/kafka"
+
+  ## Maximum time to receive response.
+  # response_timeout = "5s"
+
+  ## Limit per-server concurrent connections.
+  ## Useful in case of large number of topics or consumer groups.
+  # concurrent_connections = 20
+
+  ## Filter clusters, default is no filtering.
+  ## Values can be specified as glob patterns.
+  # clusters_include = []
+  # clusters_exclude = []
+
+  ## Filter consumer groups, default is no filtering.
+  ## Values can be specified as glob patterns.
+  # groups_include = []
+  # groups_exclude = []
+
+  ## Filter topics, default is no filtering.
+  ## Values can be specified as glob patterns.
+  # topics_include = []
+  # topics_exclude = []
+
+  ## Credentials for basic HTTP authentication.
+  # username = ""
+  # password = ""
+
+  ## Optional SSL config
+  # ssl_ca = "/etc/telegraf/ca.pem"
+  # ssl_cert = "/etc/telegraf/cert.pem"
+  # ssl_key = "/etc/telegraf/key.pem"
+  # insecure_skip_verify = false
+```
+
+## Group/Partition Status mappings
+
+* `OK` = 1
+* `NOT_FOUND` = 2
+* `WARN` = 3
+* `ERR` = 4
+* `STOP` = 5
+* `STALL` = 6
+
+> An unknown value will be mapped to 0.
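+
+In code terms, the mapping behaves like the following Python sketch (an
+illustration only; the plugin itself is written in Go):
+
+```python
+# Burrow status strings mapped to the numeric status_code field.
+STATUS_CODES = {
+    "OK": 1,
+    "NOT_FOUND": 2,
+    "WARN": 3,
+    "ERR": 4,
+    "STOP": 5,
+    "STALL": 6,
+}
+
+def status_code(status: str) -> int:
+    """Return the numeric code for a status; unknown values map to 0."""
+    return STATUS_CODES.get(status, 0)
+```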
+
+## Metrics
+
+### Fields
+
+* `burrow_group` (one event per consumer group)
+  * status (string, see Group/Partition Status mappings)
+  * status_code (int, `1..6`, see Group/Partition Status mappings)
+  * partition_count (int, `number of partitions`)
+  * offset (int64, `total offset of all partitions`)
+  * total_lag (int64, `totallag`)
+  * lag (int64, `maxlag.current_lag || 0`)
+  * timestamp (int64, `end.timestamp`)
+
+* `burrow_partition` (one event per topic partition)
+  * status (string, see Group/Partition Status mappings)
+  * status_code (int, `1..6`, see Group/Partition Status mappings)
+  * lag (int64, `current_lag || 0`)
+  * offset (int64, `end.timestamp`)
+  * timestamp (int64, `end.timestamp`)
+
+* `burrow_topic` (one event per topic offset)
+  * offset (int64)
+
+### Tags
+
+* `burrow_group`
+  * cluster (string)
+  * group (string)
+
+* `burrow_partition`
+  * cluster (string)
+  * group (string)
+  * topic (string)
+  * partition (int)
+  * owner (string)
+
+* `burrow_topic`
+  * cluster (string)
+  * topic (string)
+  * partition (int)
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/ceph/_index.md b/content/telegraf/v1/input-plugins/ceph/_index.md
new file mode 100644
index 000000000..76eb42779
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/ceph/_index.md
@@ -0,0 +1,484 @@
+---
+description: "Telegraf plugin for collecting metrics from Ceph Storage"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Ceph Storage
+    identifier: input-ceph
+tags: [Ceph Storage, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Ceph Storage Input Plugin
+
+Collects performance metrics from the MON, OSD, MDS and RGW nodes in a Ceph
+storage cluster.
+
+Ceph introduced a Telegraf and InfluxDB plugin in the 13.x Mimic release.
+Its Telegraf module sends metrics to a Telegraf instance configured with a
+`socket_listener` input.
+[Learn more in the Ceph docs](https://docs.ceph.com/en/latest/mgr/telegraf/)
+
+## Admin Socket Stats
+
+This gatherer works by scanning the configured socket directory (`socket_dir`)
+for OSD, MON, MDS and RGW socket files. When it finds a MON socket, it runs:
+
+```shell
+ceph --admin-daemon $file perfcounters_dump
+```
+
+For OSDs it runs:
+
+```shell
+ceph --admin-daemon $file perf dump
+```
+
+The resulting JSON is parsed and grouped into collections, based on
+top-level key. Top-level keys are used as collection tags, and all
+sub-keys are flattened. For example:
+
+```json
+ {
+   "paxos": {
+     "refresh": 9363435,
+     "refresh_latency": {
+       "avgcount": 9363435,
+       "sum": 5378.794002000
+     }
+   }
+ }
+```
+
+This would be parsed into the following fields, all of which would be tagged
+with `collection=paxos`:
+
+- refresh = 9363435
+- refresh_latency.avgcount = 9363435
+- refresh_latency.sum = 5378.794002000
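+
+The grouping and flattening can be sketched in Python as follows (an
+illustration of the behavior, not the plugin's Go implementation):
+
+```python
+def flatten(obj, prefix=""):
+    """Flatten nested dicts into dotted field names, e.g. refresh_latency.sum."""
+    fields = {}
+    for key, value in obj.items():
+        name = f"{prefix}.{key}" if prefix else key
+        if isinstance(value, dict):
+            fields.update(flatten(value, name))
+        else:
+            fields[name] = value
+    return fields
+
+dump = {"paxos": {"refresh": 9363435,
+                  "refresh_latency": {"avgcount": 9363435, "sum": 5378.794002}}}
+
+# Each top-level key becomes the collection tag; its subtree becomes the fields.
+for collection, subtree in dump.items():
+    print(collection, flatten(subtree))
+```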
+
+## Cluster Stats
+
+This gatherer works by invoking ceph commands against the cluster, so it
+requires only the ceph client, a valid ceph configuration, and an access key to
+function (the `ceph_config` and `ceph_user` configuration variables work in
+conjunction to specify these prerequisites). It may be run on any server with
+access to the cluster. The currently supported commands are:
+
+- ceph status
+- ceph df
+- ceph osd pool stats
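+
+To preview the same data manually, you can run these commands yourself with
+JSON output (a sketch; adjust `--conf` and `--name` to your environment):
+
+```shell
+ceph --conf /etc/ceph/ceph.conf --name client.admin status --format json
+ceph --conf /etc/ceph/ceph.conf --name client.admin df --format json
+ceph --conf /etc/ceph/ceph.conf --name client.admin osd pool stats --format json
+```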
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Collects performance metrics from the MON, OSD, MDS and RGW nodes
+# in a Ceph storage cluster.
+[[inputs.ceph]]
+  ## This is the recommended interval to poll. Too frequent and you
+  ## will lose data points due to timeouts during rebalancing and recovery
+  interval = '1m'
+
+  ## All configuration values are optional, defaults are shown below
+
+  ## location of ceph binary
+  ceph_binary = "/usr/bin/ceph"
+
+  ## directory in which to look for socket files
+  socket_dir = "/var/run/ceph"
+
+  ## prefix of MON and OSD socket files, used to determine socket type
+  mon_prefix = "ceph-mon"
+  osd_prefix = "ceph-osd"
+  mds_prefix = "ceph-mds"
+  rgw_prefix = "ceph-client"
+
+  ## suffix used to identify socket files
+  socket_suffix = "asok"
+
+  ## Ceph user to authenticate as, ceph will search for the corresponding
+  ## keyring e.g. client.admin.keyring in /etc/ceph, or the explicit path
+  ## defined in the client section of ceph.conf for example:
+  ##
+  ##     [client.telegraf]
+  ##         keyring = /etc/ceph/client.telegraf.keyring
+  ##
+  ## Consult the ceph documentation for more detail on keyring generation.
+  ceph_user = "client.admin"
+
+  ## Ceph configuration to use to locate the cluster
+  ceph_config = "/etc/ceph/ceph.conf"
+
+  ## Whether to gather statistics via the admin socket
+  gather_admin_socket_stats = true
+
+  ## Whether to gather statistics via ceph commands, requires ceph_user
+  ## and ceph_config to be specified
+  gather_cluster_stats = false
+```
+
+## Metrics
+
+### Admin Socket
+
+All fields are collected under the **ceph** measurement and stored as
+float64s. For a full list of fields, see the sample perf dumps in `ceph_test.go`.
+
+All admin measurements will have the following tags:
+
+- type: either 'osd', 'mon', 'mds' or 'rgw' to indicate the queried node type
+- id: a unique string identifier, parsed from the socket file name for the node
+- collection: the top-level key under which these fields were reported.
+  Possible values are:
+  - for MON nodes:
+    - cluster
+    - leveldb
+    - mon
+    - paxos
+    - throttle-mon_client_bytes
+    - throttle-mon_daemon_bytes
+    - throttle-msgr_dispatch_throttler-mon
+  - for OSD nodes:
+    - WBThrottle
+    - filestore
+    - leveldb
+    - mutex-FileJournal::completions_lock
+    - mutex-FileJournal::finisher_lock
+    - mutex-FileJournal::write_lock
+    - mutex-FileJournal::writeq_lock
+    - mutex-JOS::ApplyManager::apply_lock
+    - mutex-JOS::ApplyManager::com_lock
+    - mutex-JOS::SubmitManager::lock
+    - mutex-WBThrottle::lock
+    - objecter
+    - osd
+    - recoverystate_perf
+    - throttle-filestore_bytes
+    - throttle-filestore_ops
+    - throttle-msgr_dispatch_throttler-client
+    - throttle-msgr_dispatch_throttler-cluster
+    - throttle-msgr_dispatch_throttler-hb_back_server
+    - throttle-msgr_dispatch_throttler-hb_front_serve
+    - throttle-msgr_dispatch_throttler-hbclient
+    - throttle-msgr_dispatch_throttler-ms_objecter
+    - throttle-objecter_bytes
+    - throttle-objecter_ops
+    - throttle-osd_client_bytes
+    - throttle-osd_client_messages
+  - for MDS nodes:
+    - AsyncMessenger::Worker-0
+    - AsyncMessenger::Worker-1
+    - AsyncMessenger::Worker-2
+    - finisher-PurgeQueue
+    - mds
+    - mds_cache
+    - mds_log
+    - mds_mem
+    - mds_server
+    - mds_sessions
+    - objecter
+    - purge_queue
+    - throttle-msgr_dispatch_throttler-mds
+    - throttle-objecter_bytes
+    - throttle-objecter_ops
+    - throttle-write_buf_throttle
+  - for RGW nodes:
+    - AsyncMessenger::Worker-0
+    - AsyncMessenger::Worker-1
+    - AsyncMessenger::Worker-2
+    - cct
+    - finisher-radosclient
+    - mempool
+    - objecter
+    - rgw
+    - simple-throttler
+    - throttle-msgr_dispatch_throttler-radosclient
+    - throttle-objecter_bytes
+    - throttle-objecter_ops
+    - throttle-rgw_async_rados_ops
+
+### Cluster
+
+- ceph_fsmap
+  - fields:
+    - up (float)
+    - in (float)
+    - max (float)
+    - up_standby (float)
+
+- ceph_health
+  - fields:
+    - status (string)
+    - status_code (int)
+    - overall_status (string, exists only in ceph <15)
+
+- ceph_monmap
+  - fields:
+    - num_mons (float)
+
+- ceph_osdmap
+  - fields:
+    - epoch (float)
+    - full (bool, exists only in ceph <15)
+    - nearfull (bool, exists only in ceph <15)
+    - num_in_osds (float)
+    - num_osds (float)
+    - num_remapped_pgs (float)
+    - num_up_osds (float)
+
+- ceph_pgmap
+  - fields:
+    - bytes_avail (float)
+    - bytes_total (float)
+    - bytes_used (float)
+    - data_bytes (float)
+    - degraded_objects (float)
+    - degraded_ratio (float)
+    - degraded_total (float)
+    - inactive_pgs_ratio (float)
+    - num_bytes_recovered (float)
+    - num_keys_recovered (float)
+    - num_objects (float)
+    - num_objects_recovered (float)
+    - num_pgs (float)
+    - num_pools (float)
+    - op_per_sec (float, exists only in ceph <10)
+    - read_bytes_sec (float)
+    - read_op_per_sec (float)
+    - recovering_bytes_per_sec (float)
+    - recovering_keys_per_sec (float)
+    - recovering_objects_per_sec (float)
+    - version (float)
+    - write_bytes_sec (float)
+    - write_op_per_sec (float)
+
+- ceph_pgmap_state
+  - tags:
+    - state
+  - fields:
+    - count (float)
+
+- ceph_usage
+  - fields:
+    - num_osd (float)
+    - num_per_pool_omap_osds (float)
+    - num_per_pool_osds (float)
+    - total_avail (float, exists only in ceph <0.84)
+    - total_avail_bytes (float)
+    - total_bytes (float)
+    - total_space (float, exists only in ceph <0.84)
+    - total_used (float, exists only in ceph <0.84)
+    - total_used_bytes (float)
+    - total_used_raw_bytes (float)
+    - total_used_raw_ratio (float)
+
+- ceph_deviceclass_usage
+  - tags:
+    - class
+  - fields:
+    - total_avail_bytes (float)
+    - total_bytes (float)
+    - total_used_bytes (float)
+    - total_used_raw_bytes (float)
+    - total_used_raw_ratio (float)
+
+- ceph_pool_usage
+  - tags:
+    - name
+  - fields:
+    - bytes_used (float)
+    - kb_used (float)
+    - max_avail (float)
+    - objects (float)
+    - percent_used (float)
+    - stored (float)
+
+- ceph_pool_stats
+  - tags:
+    - name
+  - fields:
+    - degraded_objects (float)
+    - degraded_ratio (float)
+    - degraded_total (float)
+    - num_bytes_recovered (float)
+    - num_keys_recovered (float)
+    - num_objects_recovered (float)
+    - op_per_sec (float, exists only in ceph <10)
+    - read_bytes_sec (float)
+    - read_op_per_sec (float)
+    - recovering_bytes_per_sec (float)
+    - recovering_keys_per_sec (float)
+    - recovering_objects_per_sec (float)
+    - write_bytes_sec (float)
+    - write_op_per_sec (float)
+
+## Example Output
+
+Below is an example of cluster stats:
+
+```text
+ceph_fsmap,host=ceph in=1,max=1,up=1,up_standby=2 1646782035000000000
+ceph_health,host=ceph status="HEALTH_OK",status_code=2 1646782035000000000
+ceph_monmap,host=ceph num_mons=3 1646782035000000000
+ceph_osdmap,host=ceph epoch=10560,num_in_osds=6,num_osds=6,num_remapped_pgs=0,num_up_osds=6 1646782035000000000
+ceph_pgmap,host=ceph bytes_avail=7863124942848,bytes_total=14882929901568,bytes_used=7019804958720,data_bytes=2411111520818,degraded_objects=0,degraded_ratio=0,degraded_total=0,inactive_pgs_ratio=0,num_bytes_recovered=0,num_keys_recovered=0,num_objects=973030,num_objects_recovered=0,num_pgs=233,num_pools=6,read_bytes_sec=7334,read_op_per_sec=2,recovering_bytes_per_sec=0,recovering_keys_per_sec=0,recovering_objects_per_sec=0,version=0,write_bytes_sec=13113085,write_op_per_sec=355 1646782035000000000
+ceph_pgmap_state,host=ceph,state=active+clean count=233 1646782035000000000
+ceph_usage,host=ceph num_osds=6,num_per_pool_omap_osds=6,num_per_pool_osds=6,total_avail_bytes=7863124942848,total_bytes=14882929901568,total_used_bytes=7019804958720,total_used_raw_bytes=7019804958720,total_used_raw_ratio=0.47166821360588074 1646782035000000000
+ceph_deviceclass_usage,class=hdd,host=ceph total_avail_bytes=6078650843136,total_bytes=12002349023232,total_used_bytes=5923698180096,total_used_raw_bytes=5923698180096,total_used_raw_ratio=0.49354490637779236 1646782035000000000
+ceph_deviceclass_usage,class=ssd,host=ceph total_avail_bytes=1784474099712,total_bytes=2880580878336,total_used_bytes=1096106778624,total_used_raw_bytes=1096106778624,total_used_raw_ratio=0.3805158734321594 1646782035000000000
+ceph_pool_usage,host=ceph,name=Foo bytes_used=2019483848658,kb_used=1972152196,max_avail=1826022621184,objects=161029,percent_used=0.26935243606567383,stored=672915064134 1646782035000000000
+ceph_pool_usage,host=ceph,name=Bar_metadata bytes_used=4370899787,kb_used=4268457,max_avail=546501918720,objects=89702,percent_used=0.002658897778019309,stored=1456936576 1646782035000000000
+ceph_pool_usage,host=ceph,name=Bar_data bytes_used=3893328740352,kb_used=3802078848,max_avail=1826022621184,objects=518396,percent_used=0.41544806957244873,stored=1292214337536 1646782035000000000
+ceph_pool_usage,host=ceph,name=device_health_metrics bytes_used=85289044,kb_used=83291,max_avail=3396406870016,objects=9,percent_used=0.000012555617104226258,stored=42644520 1646782035000000000
+ceph_pool_usage,host=ceph,name=Foo_Fast bytes_used=597511814461,kb_used=583507632,max_avail=546501918720,objects=67014,percent_used=0.2671019732952118,stored=199093853972 1646782035000000000
+ceph_pool_usage,host=ceph,name=Bar_data_fast bytes_used=490009280512,kb_used=478524688,max_avail=546501918720,objects=136880,percent_used=0.23010368645191193,stored=163047325696 1646782035000000000
+ceph_pool_stats,host=ceph,name=Foo degraded_objects=0,degraded_ratio=0,degraded_total=0,num_bytes_recovered=0,num_keys_recovered=0,num_objects_recovered=0,read_bytes_sec=0,read_op_per_sec=0,recovering_bytes_per_sec=0,recovering_keys_per_sec=0,recovering_objects_per_sec=0,write_bytes_sec=27720,write_op_per_sec=4 1646782036000000000
+ceph_pool_stats,host=ceph,name=Bar_metadata degraded_objects=0,degraded_ratio=0,degraded_total=0,num_bytes_recovered=0,num_keys_recovered=0,num_objects_recovered=0,read_bytes_sec=9638,read_op_per_sec=3,recovering_bytes_per_sec=0,recovering_keys_per_sec=0,recovering_objects_per_sec=0,write_bytes_sec=11802778,write_op_per_sec=60 1646782036000000000
+ceph_pool_stats,host=ceph,name=Bar_data degraded_objects=0,degraded_ratio=0,degraded_total=0,num_bytes_recovered=0,num_keys_recovered=0,num_objects_recovered=0,read_bytes_sec=0,read_op_per_sec=0,recovering_bytes_per_sec=0,recovering_keys_per_sec=0,recovering_objects_per_sec=0,write_bytes_sec=0,write_op_per_sec=104 1646782036000000000
+ceph_pool_stats,host=ceph,name=device_health_metrics degraded_objects=0,degraded_ratio=0,degraded_total=0,num_bytes_recovered=0,num_keys_recovered=0,num_objects_recovered=0,read_bytes_sec=0,read_op_per_sec=0,recovering_bytes_per_sec=0,recovering_keys_per_sec=0,recovering_objects_per_sec=0,write_bytes_sec=0,write_op_per_sec=0 1646782036000000000
+ceph_pool_stats,host=ceph,name=Foo_Fast degraded_objects=0,degraded_ratio=0,degraded_total=0,num_bytes_recovered=0,num_keys_recovered=0,num_objects_recovered=0,read_bytes_sec=0,read_op_per_sec=0,recovering_bytes_per_sec=0,recovering_keys_per_sec=0,recovering_objects_per_sec=0,write_bytes_sec=11173,write_op_per_sec=1 1646782036000000000
+ceph_pool_stats,host=ceph,name=Bar_data_fast degraded_objects=0,degraded_ratio=0,degraded_total=0,num_bytes_recovered=0,num_keys_recovered=0,num_objects_recovered=0,read_bytes_sec=0,read_op_per_sec=0,recovering_bytes_per_sec=0,recovering_keys_per_sec=0,recovering_objects_per_sec=0,write_bytes_sec=2155404,write_op_per_sec=262 1646782036000000000
+```
+
+Below is an example of admin socket stats:
+
+```text
+ceph,collection=cct,host=stefanmon1,id=stefanmon1,type=monitor total_workers=0,unhealthy_workers=0 1587117563000000000
+ceph,collection=mempool,host=stefanmon1,id=stefanmon1,type=monitor bloom_filter_bytes=0,bloom_filter_items=0,bluefs_bytes=0,bluefs_items=0,bluestore_alloc_bytes=0,bluestore_alloc_items=0,bluestore_cache_data_bytes=0,bluestore_cache_data_items=0,bluestore_cache_onode_bytes=0,bluestore_cache_onode_items=0,bluestore_cache_other_bytes=0,bluestore_cache_other_items=0,bluestore_fsck_bytes=0,bluestore_fsck_items=0,bluestore_txc_bytes=0,bluestore_txc_items=0,bluestore_writing_bytes=0,bluestore_writing_deferred_bytes=0,bluestore_writing_deferred_items=0,bluestore_writing_items=0,buffer_anon_bytes=719152,buffer_anon_items=192,buffer_meta_bytes=352,buffer_meta_items=4,mds_co_bytes=0,mds_co_items=0,osd_bytes=0,osd_items=0,osd_mapbl_bytes=0,osd_mapbl_items=0,osd_pglog_bytes=0,osd_pglog_items=0,osdmap_bytes=15872,osdmap_items=138,osdmap_mapping_bytes=63112,osdmap_mapping_items=7626,pgmap_bytes=38680,pgmap_items=477,unittest_1_bytes=0,unittest_1_items=0,unittest_2_bytes=0,unittest_2_items=0 1587117563000000000
+ceph,collection=throttle-mon_client_bytes,host=stefanmon1,id=stefanmon1,type=monitor get=1041157,get_or_fail_fail=0,get_or_fail_success=1041157,get_started=0,get_sum=64928901,max=104857600,put=1041157,put_sum=64928901,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117563000000000
+ceph,collection=throttle-msgr_dispatch_throttler-mon,host=stefanmon1,id=stefanmon1,type=monitor get=12695426,get_or_fail_fail=0,get_or_fail_success=12695426,get_started=0,get_sum=42542216884,max=104857600,put=12695426,put_sum=42542216884,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117563000000000
+ceph,collection=finisher-mon_finisher,host=stefanmon1,id=stefanmon1,type=monitor complete_latency.avgcount=0,complete_latency.avgtime=0,complete_latency.sum=0,queue_len=0 1587117563000000000
+ceph,collection=finisher-monstore,host=stefanmon1,id=stefanmon1,type=monitor complete_latency.avgcount=1609831,complete_latency.avgtime=0.015857621,complete_latency.sum=25528.09131035,queue_len=0 1587117563000000000
+ceph,collection=mon,host=stefanmon1,id=stefanmon1,type=monitor election_call=25,election_lose=0,election_win=22,num_elections=94,num_sessions=3,session_add=174679,session_rm=439316,session_trim=137 1587117563000000000
+ceph,collection=throttle-mon_daemon_bytes,host=stefanmon1,id=stefanmon1,type=monitor get=72697,get_or_fail_fail=0,get_or_fail_success=72697,get_started=0,get_sum=32261199,max=419430400,put=72697,put_sum=32261199,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117563000000000
+ceph,collection=rocksdb,host=stefanmon1,id=stefanmon1,type=monitor compact=1,compact_queue_len=0,compact_queue_merge=1,compact_range=19126,get=62449211,get_latency.avgcount=62449211,get_latency.avgtime=0.000022216,get_latency.sum=1387.371811726,rocksdb_write_delay_time.avgcount=0,rocksdb_write_delay_time.avgtime=0,rocksdb_write_delay_time.sum=0,rocksdb_write_memtable_time.avgcount=0,rocksdb_write_memtable_time.avgtime=0,rocksdb_write_memtable_time.sum=0,rocksdb_write_pre_and_post_time.avgcount=0,rocksdb_write_pre_and_post_time.avgtime=0,rocksdb_write_pre_and_post_time.sum=0,rocksdb_write_wal_time.avgcount=0,rocksdb_write_wal_time.avgtime=0,rocksdb_write_wal_time.sum=0,submit_latency.avgcount=0,submit_latency.avgtime=0,submit_latency.sum=0,submit_sync_latency.avgcount=3219961,submit_sync_latency.avgtime=0.007532173,submit_sync_latency.sum=24253.303584224,submit_transaction=0,submit_transaction_sync=3219961 1587117563000000000
+ceph,collection=AsyncMessenger::Worker-0,host=stefanmon1,id=stefanmon1,type=monitor msgr_active_connections=148317,msgr_created_connections=162806,msgr_recv_bytes=11557888328,msgr_recv_messages=5113369,msgr_running_fast_dispatch_time=0,msgr_running_recv_time=868.377161686,msgr_running_send_time=1626.525392721,msgr_running_total_time=4222.235694322,msgr_send_bytes=91516226816,msgr_send_messages=6973706 1587117563000000000
+ceph,collection=AsyncMessenger::Worker-2,host=stefanmon1,id=stefanmon1,type=monitor msgr_active_connections=146396,msgr_created_connections=159788,msgr_recv_bytes=2162802496,msgr_recv_messages=689168,msgr_running_fast_dispatch_time=0,msgr_running_recv_time=164.148550562,msgr_running_send_time=153.462890368,msgr_running_total_time=644.188791379,msgr_send_bytes=7422484152,msgr_send_messages=749381 1587117563000000000
+ceph,collection=cluster,host=stefanmon1,id=stefanmon1,type=monitor num_bytes=5055,num_mon=3,num_mon_quorum=3,num_object=245,num_object_degraded=0,num_object_misplaced=0,num_object_unfound=0,num_osd=9,num_osd_in=8,num_osd_up=8,num_pg=504,num_pg_active=504,num_pg_active_clean=504,num_pg_peering=0,num_pool=17,osd_bytes=858959904768,osd_bytes_avail=849889787904,osd_bytes_used=9070116864,osd_epoch=203 1587117563000000000
+ceph,collection=paxos,host=stefanmon1,id=stefanmon1,type=monitor accept_timeout=1,begin=1609847,begin_bytes.avgcount=1609847,begin_bytes.sum=41408662074,begin_keys.avgcount=1609847,begin_keys.sum=4829541,begin_latency.avgcount=1609847,begin_latency.avgtime=0.007213392,begin_latency.sum=11612.457661116,collect=0,collect_bytes.avgcount=0,collect_bytes.sum=0,collect_keys.avgcount=0,collect_keys.sum=0,collect_latency.avgcount=0,collect_latency.avgtime=0,collect_latency.sum=0,collect_timeout=1,collect_uncommitted=17,commit=1609831,commit_bytes.avgcount=1609831,commit_bytes.sum=41087428442,commit_keys.avgcount=1609831,commit_keys.sum=11637931,commit_latency.avgcount=1609831,commit_latency.avgtime=0.006236333,commit_latency.sum=10039.442388355,lease_ack_timeout=0,lease_timeout=0,new_pn=33,new_pn_latency.avgcount=33,new_pn_latency.avgtime=3.844272773,new_pn_latency.sum=126.86100151,refresh=1609856,refresh_latency.avgcount=1609856,refresh_latency.avgtime=0.005900486,refresh_latency.sum=9498.932866761,restart=109,share_state=2,share_state_bytes.avgcount=2,share_state_bytes.sum=39612,share_state_keys.avgcount=2,share_state_keys.sum=2,start_leader=22,start_peon=0,store_state=14,store_state_bytes.avgcount=14,store_state_bytes.sum=51908281,store_state_keys.avgcount=14,store_state_keys.sum=7016,store_state_latency.avgcount=14,store_state_latency.avgtime=11.668377665,store_state_latency.sum=163.357287311 1587117563000000000
+ceph,collection=throttle-msgr_dispatch_throttler-mon-mgrc,host=stefanmon1,id=stefanmon1,type=monitor get=13225,get_or_fail_fail=0,get_or_fail_success=13225,get_started=0,get_sum=158700,max=104857600,put=13225,put_sum=158700,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117563000000000
+ceph,collection=AsyncMessenger::Worker-1,host=stefanmon1,id=stefanmon1,type=monitor msgr_active_connections=147680,msgr_created_connections=162374,msgr_recv_bytes=29781706740,msgr_recv_messages=7170733,msgr_running_fast_dispatch_time=0,msgr_running_recv_time=1728.559151358,msgr_running_send_time=2086.681244508,msgr_running_total_time=6084.532916585,msgr_send_bytes=94062125718,msgr_send_messages=9161564 1587117563000000000
+ceph,collection=throttle-msgr_dispatch_throttler-cluster,host=stefanosd1,id=0,type=osd get=281745,get_or_fail_fail=0,get_or_fail_success=281745,get_started=0,get_sum=446024457,max=104857600,put=281745,put_sum=446024457,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=throttle-bluestore_throttle_bytes,host=stefanosd1,id=0,type=osd get=275707,get_or_fail_fail=0,get_or_fail_success=0,get_started=275707,get_sum=185073179842,max=67108864,put=268870,put_sum=185073179842,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=throttle-msgr_dispatch_throttler-hb_front_server,host=stefanosd1,id=0,type=osd get=2606982,get_or_fail_fail=0,get_or_fail_success=2606982,get_started=0,get_sum=5224391928,max=104857600,put=2606982,put_sum=5224391928,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=rocksdb,host=stefanosd1,id=0,type=osd compact=0,compact_queue_len=0,compact_queue_merge=0,compact_range=0,get=1570,get_latency.avgcount=1570,get_latency.avgtime=0.000051233,get_latency.sum=0.080436788,rocksdb_write_delay_time.avgcount=0,rocksdb_write_delay_time.avgtime=0,rocksdb_write_delay_time.sum=0,rocksdb_write_memtable_time.avgcount=0,rocksdb_write_memtable_time.avgtime=0,rocksdb_write_memtable_time.sum=0,rocksdb_write_pre_and_post_time.avgcount=0,rocksdb_write_pre_and_post_time.avgtime=0,rocksdb_write_pre_and_post_time.sum=0,rocksdb_write_wal_time.avgcount=0,rocksdb_write_wal_time.avgtime=0,rocksdb_write_wal_time.sum=0,submit_latency.avgcount=275707,submit_latency.avgtime=0.000174936,submit_latency.sum=48.231345334,submit_sync_latency.avgcount=268870,submit_sync_latency.avgtime=0.006097313,submit_sync_latency.sum=1639.384555624,submit_transaction=275707,submit_transaction_sync=268870 1587117698000000000
+ceph,collection=throttle-msgr_dispatch_throttler-hb_back_server,host=stefanosd1,id=0,type=osd get=2606982,get_or_fail_fail=0,get_or_fail_success=2606982,get_started=0,get_sum=5224391928,max=104857600,put=2606982,put_sum=5224391928,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=throttle-objecter_bytes,host=stefanosd1,id=0,type=osd get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=104857600,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=throttle-msgr_dispatch_throttler-hb_back_client,host=stefanosd1,id=0,type=osd get=2610285,get_or_fail_fail=0,get_or_fail_success=2610285,get_started=0,get_sum=5231011140,max=104857600,put=2610285,put_sum=5231011140,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=AsyncMessenger::Worker-1,host=stefanosd1,id=0,type=osd msgr_active_connections=2093,msgr_created_connections=29142,msgr_recv_bytes=7214238199,msgr_recv_messages=3928206,msgr_running_fast_dispatch_time=171.289615064,msgr_running_recv_time=278.531155966,msgr_running_send_time=489.482588813,msgr_running_total_time=1134.004853662,msgr_send_bytes=9814725232,msgr_send_messages=3814927 1587117698000000000
+ceph,collection=throttle-msgr_dispatch_throttler-client,host=stefanosd1,id=0,type=osd get=488206,get_or_fail_fail=0,get_or_fail_success=488206,get_started=0,get_sum=104085134,max=104857600,put=488206,put_sum=104085134,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=finisher-defered_finisher,host=stefanosd1,id=0,type=osd complete_latency.avgcount=0,complete_latency.avgtime=0,complete_latency.sum=0,queue_len=0 1587117698000000000
+ceph,collection=recoverystate_perf,host=stefanosd1,id=0,type=osd activating_latency.avgcount=87,activating_latency.avgtime=0.114348341,activating_latency.sum=9.948305683,active_latency.avgcount=25,active_latency.avgtime=1790.961574431,active_latency.sum=44774.039360795,backfilling_latency.avgcount=0,backfilling_latency.avgtime=0,backfilling_latency.sum=0,clean_latency.avgcount=25,clean_latency.avgtime=1790.830827794,clean_latency.sum=44770.770694867,down_latency.avgcount=0,down_latency.avgtime=0,down_latency.sum=0,getinfo_latency.avgcount=141,getinfo_latency.avgtime=0.446233476,getinfo_latency.sum=62.918920183,getlog_latency.avgcount=87,getlog_latency.avgtime=0.007708069,getlog_latency.sum=0.670602073,getmissing_latency.avgcount=87,getmissing_latency.avgtime=0.000077594,getmissing_latency.sum=0.006750701,incomplete_latency.avgcount=0,incomplete_latency.avgtime=0,incomplete_latency.sum=0,initial_latency.avgcount=166,initial_latency.avgtime=0.001313715,initial_latency.sum=0.218076764,notbackfilling_latency.avgcount=0,notbackfilling_latency.avgtime=0,notbackfilling_latency.sum=0,notrecovering_latency.avgcount=0,notrecovering_latency.avgtime=0,notrecovering_latency.sum=0,peering_latency.avgcount=141,peering_latency.avgtime=0.948324273,peering_latency.sum=133.713722563,primary_latency.avgcount=79,primary_latency.avgtime=567.706192991,primary_latency.sum=44848.78924634,recovered_latency.avgcount=87,recovered_latency.avgtime=0.000378284,recovered_latency.sum=0.032910791,recovering_latency.avgcount=2,recovering_latency.avgtime=0.338242008,recovering_latency.sum=0.676484017,replicaactive_latency.avgcount=23,replicaactive_latency.avgtime=1790.893991295,replicaactive_latency.sum=41190.561799786,repnotrecovering_latency.avgcount=25,repnotrecovering_latency.avgtime=1647.627024984,repnotrecovering_latency.sum=41190.675624616,reprecovering_latency.avgcount=2,reprecovering_latency.avgtime=0.311884638,reprecovering_latency.sum=0.623769276,repwaitbackfillreserved_latency.avgcount=0,repwaitbackfillreserved_latency.avgtime=0,repwaitbackfillreserved_latency.sum=0,repwaitrecoveryreserved_latency.avgcount=2,repwaitrecoveryreserved_latency.avgtime=0.000462873,repwaitrecoveryreserved_latency.sum=0.000925746,reset_latency.avgcount=372,reset_latency.avgtime=0.125056393,reset_latency.sum=46.520978537,start_latency.avgcount=372,start_latency.avgtime=0.000109397,start_latency.sum=0.040695881,started_latency.avgcount=206,started_latency.avgtime=418.299777245,started_latency.sum=86169.754112641,stray_latency.avgcount=231,stray_latency.avgtime=0.98203205,stray_latency.sum=226.849403565,waitactingchange_latency.avgcount=0,waitactingchange_latency.avgtime=0,waitactingchange_latency.sum=0,waitlocalbackfillreserved_latency.avgcount=0,waitlocalbackfillreserved_latency.avgtime=0,waitlocalbackfillreserved_latency.sum=0,waitlocalrecoveryreserved_latency.avgcount=2,waitlocalrecoveryreserved_latency.avgtime=0.002802377,waitlocalrecoveryreserved_latency.sum=0.005604755,waitremotebackfillreserved_latency.avgcount=0,waitremotebackfillreserved_latency.avgtime=0,waitremotebackfillreserved_latency.sum=0,waitremoterecoveryreserved_latency.avgcount=2,waitremoterecoveryreserved_latency.avgtime=0.012855439,waitremoterecoveryreserved_latency.sum=0.025710878,waitupthru_latency.avgcount=87,waitupthru_latency.avgtime=0.805727895,waitupthru_latency.sum=70.09832695 1587117698000000000
+ceph,collection=cct,host=stefanosd1,id=0,type=osd total_workers=6,unhealthy_workers=0 1587117698000000000
+ceph,collection=throttle-msgr_dispatch_throttler-hb_front_client,host=stefanosd1,id=0,type=osd get=2610285,get_or_fail_fail=0,get_or_fail_success=2610285,get_started=0,get_sum=5231011140,max=104857600,put=2610285,put_sum=5231011140,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=bluefs,host=stefanosd1,id=0,type=osd bytes_written_slow=0,bytes_written_sst=9018781,bytes_written_wal=831081573,db_total_bytes=4294967296,db_used_bytes=434110464,files_written_sst=3,files_written_wal=2,gift_bytes=0,log_bytes=134291456,log_compactions=1,logged_bytes=1101668352,max_bytes_db=1234173952,max_bytes_slow=0,max_bytes_wal=0,num_files=11,reclaim_bytes=0,slow_total_bytes=0,slow_used_bytes=0,wal_total_bytes=0,wal_used_bytes=0 1587117698000000000
+ceph,collection=mempool,host=stefanosd1,id=0,type=osd bloom_filter_bytes=0,bloom_filter_items=0,bluefs_bytes=10600,bluefs_items=458,bluestore_alloc_bytes=230288,bluestore_alloc_items=28786,bluestore_cache_data_bytes=622592,bluestore_cache_data_items=43,bluestore_cache_onode_bytes=249280,bluestore_cache_onode_items=380,bluestore_cache_other_bytes=192678,bluestore_cache_other_items=20199,bluestore_fsck_bytes=0,bluestore_fsck_items=0,bluestore_txc_bytes=8272,bluestore_txc_items=11,bluestore_writing_bytes=0,bluestore_writing_deferred_bytes=670130,bluestore_writing_deferred_items=176,bluestore_writing_items=0,buffer_anon_bytes=2412465,buffer_anon_items=297,buffer_meta_bytes=5896,buffer_meta_items=67,mds_co_bytes=0,mds_co_items=0,osd_bytes=2124800,osd_items=166,osd_mapbl_bytes=155152,osd_mapbl_items=10,osd_pglog_bytes=3214704,osd_pglog_items=6288,osdmap_bytes=710892,osdmap_items=4426,osdmap_mapping_bytes=0,osdmap_mapping_items=0,pgmap_bytes=0,pgmap_items=0,unittest_1_bytes=0,unittest_1_items=0,unittest_2_bytes=0,unittest_2_items=0 1587117698000000000
+ceph,collection=osd,host=stefanosd1,id=0,type=osd agent_evict=0,agent_flush=0,agent_skip=0,agent_wake=0,cached_crc=0,cached_crc_adjusted=0,copyfrom=0,heartbeat_to_peers=7,loadavg=11,map_message_epoch_dups=21,map_message_epochs=40,map_messages=31,messages_delayed_for_map=0,missed_crc=0,numpg=166,numpg_primary=62,numpg_removing=0,numpg_replica=104,numpg_stray=0,object_ctx_cache_hit=476529,object_ctx_cache_total=476536,op=476525,op_before_dequeue_op_lat.avgcount=755708,op_before_dequeue_op_lat.avgtime=0.000205759,op_before_dequeue_op_lat.sum=155.493843473,op_before_queue_op_lat.avgcount=755702,op_before_queue_op_lat.avgtime=0.000047877,op_before_queue_op_lat.sum=36.181069552,op_cache_hit=0,op_in_bytes=0,op_latency.avgcount=476525,op_latency.avgtime=0.000365956,op_latency.sum=174.387387878,op_out_bytes=10882,op_prepare_latency.avgcount=476527,op_prepare_latency.avgtime=0.000205307,op_prepare_latency.sum=97.834380034,op_process_latency.avgcount=476525,op_process_latency.avgtime=0.000139616,op_process_latency.sum=66.530847665,op_r=476521,op_r_latency.avgcount=476521,op_r_latency.avgtime=0.00036559,op_r_latency.sum=174.21148267,op_r_out_bytes=10882,op_r_prepare_latency.avgcount=476523,op_r_prepare_latency.avgtime=0.000205302,op_r_prepare_latency.sum=97.831473175,op_r_process_latency.avgcount=476521,op_r_process_latency.avgtime=0.000139396,op_r_process_latency.sum=66.425498624,op_rw=2,op_rw_in_bytes=0,op_rw_latency.avgcount=2,op_rw_latency.avgtime=0.048818975,op_rw_latency.sum=0.097637951,op_rw_out_bytes=0,op_rw_prepare_latency.avgcount=2,op_rw_prepare_latency.avgtime=0.000467887,op_rw_prepare_latency.sum=0.000935775,op_rw_process_latency.avgcount=2,op_rw_process_latency.avgtime=0.013741256,op_rw_process_latency.sum=0.027482512,op_w=2,op_w_in_bytes=0,op_w_latency.avgcount=2,op_w_latency.avgtime=0.039133628,op_w_latency.sum=0.078267257,op_w_prepare_latency.avgcount=2,op_w_prepare_latency.avgtime=0.000985542,op_w_prepare_latency.sum=0.001971084,op_w_process_latency.avgcount=2,op_w_process_latency.avgtime=0.038933264,op_w_process_latency.sum=0.077866529,op_wip=0,osd_map_bl_cache_hit=22,osd_map_bl_cache_miss=40,osd_map_cache_hit=4570,osd_map_cache_miss=15,osd_map_cache_miss_low=0,osd_map_cache_miss_low_avg.avgcount=0,osd_map_cache_miss_low_avg.sum=0,osd_pg_biginfo=2050,osd_pg_fastinfo=265780,osd_pg_info=274542,osd_tier_flush_lat.avgcount=0,osd_tier_flush_lat.avgtime=0,osd_tier_flush_lat.sum=0,osd_tier_promote_lat.avgcount=0,osd_tier_promote_lat.avgtime=0,osd_tier_promote_lat.sum=0,osd_tier_r_lat.avgcount=0,osd_tier_r_lat.avgtime=0,osd_tier_r_lat.sum=0,pull=0,push=2,push_out_bytes=10,recovery_bytes=10,recovery_ops=2,stat_bytes=107369988096,stat_bytes_avail=106271539200,stat_bytes_used=1098448896,subop=253554,subop_in_bytes=168644225,subop_latency.avgcount=253554,subop_latency.avgtime=0.0073036,subop_latency.sum=1851.857230388,subop_pull=0,subop_pull_latency.avgcount=0,subop_pull_latency.avgtime=0,subop_pull_latency.sum=0,subop_push=0,subop_push_in_bytes=0,subop_push_latency.avgcount=0,subop_push_latency.avgtime=0,subop_push_latency.sum=0,subop_w=253554,subop_w_in_bytes=168644225,subop_w_latency.avgcount=253554,subop_w_latency.avgtime=0.0073036,subop_w_latency.sum=1851.857230388,tier_clean=0,tier_delay=0,tier_dirty=0,tier_evict=0,tier_flush=0,tier_flush_fail=0,tier_promote=0,tier_proxy_read=0,tier_proxy_write=0,tier_try_flush=0,tier_try_flush_fail=0,tier_whiteout=0 1587117698000000000
+ceph,collection=throttle-msgr_dispatch_throttler-ms_objecter,host=stefanosd1,id=0,type=osd get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=104857600,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=AsyncMessenger::Worker-2,host=stefanosd1,id=0,type=osd msgr_active_connections=2055,msgr_created_connections=27411,msgr_recv_bytes=6431950009,msgr_recv_messages=3552443,msgr_running_fast_dispatch_time=162.271664213,msgr_running_recv_time=254.307853033,msgr_running_send_time=503.037285799,msgr_running_total_time=1130.21070681,msgr_send_bytes=10865436237,msgr_send_messages=3523374 1587117698000000000
+ceph,collection=bluestore,host=stefanosd1,id=0,type=osd bluestore_allocated=24641536,bluestore_blob_split=0,bluestore_blobs=88,bluestore_buffer_bytes=622592,bluestore_buffer_hit_bytes=160578,bluestore_buffer_miss_bytes=540236,bluestore_buffers=43,bluestore_compressed=0,bluestore_compressed_allocated=0,bluestore_compressed_original=0,bluestore_extent_compress=0,bluestore_extents=88,bluestore_fragmentation_micros=1,bluestore_gc_merged=0,bluestore_onode_hits=532102,bluestore_onode_misses=388,bluestore_onode_reshard=0,bluestore_onode_shard_hits=0,bluestore_onode_shard_misses=0,bluestore_onodes=380,bluestore_read_eio=0,bluestore_reads_with_retries=0,bluestore_stored=1987856,bluestore_txc=275707,bluestore_write_big=0,bluestore_write_big_blobs=0,bluestore_write_big_bytes=0,bluestore_write_small=60,bluestore_write_small_bytes=343843,bluestore_write_small_deferred=22,bluestore_write_small_new=38,bluestore_write_small_pre_read=22,bluestore_write_small_unused=0,commit_lat.avgcount=275707,commit_lat.avgtime=0.00699778,commit_lat.sum=1929.337103334,compress_lat.avgcount=0,compress_lat.avgtime=0,compress_lat.sum=0,compress_rejected_count=0,compress_success_count=0,csum_lat.avgcount=67,csum_lat.avgtime=0.000032601,csum_lat.sum=0.002184323,decompress_lat.avgcount=0,decompress_lat.avgtime=0,decompress_lat.sum=0,deferred_write_bytes=0,deferred_write_ops=0,kv_commit_lat.avgcount=268870,kv_commit_lat.avgtime=0.006365428,kv_commit_lat.sum=1711.472749866,kv_final_lat.avgcount=268867,kv_final_lat.avgtime=0.000043227,kv_final_lat.sum=11.622427109,kv_flush_lat.avgcount=268870,kv_flush_lat.avgtime=0.000000223,kv_flush_lat.sum=0.060141588,kv_sync_lat.avgcount=268870,kv_sync_lat.avgtime=0.006365652,kv_sync_lat.sum=1711.532891454,omap_lower_bound_lat.avgcount=2,omap_lower_bound_lat.avgtime=0.000006524,omap_lower_bound_lat.sum=0.000013048,omap_next_lat.avgcount=6704,omap_next_lat.avgtime=0.000004721,omap_next_lat.sum=0.031654097,omap_seek_to_first_lat.avgcount=323,omap_seek_to_first_lat.avgtime=0.00000522,omap_seek_to_first_lat.sum=0.00168614,omap_upper_bound_lat.avgcount=4,omap_upper_bound_lat.avgtime=0.000013086,omap_upper_bound_lat.sum=0.000052344,read_lat.avgcount=227,read_lat.avgtime=0.000699457,read_lat.sum=0.158776879,read_onode_meta_lat.avgcount=311,read_onode_meta_lat.avgtime=0.000072207,read_onode_meta_lat.sum=0.022456667,read_wait_aio_lat.avgcount=84,read_wait_aio_lat.avgtime=0.001556141,read_wait_aio_lat.sum=0.130715885,state_aio_wait_lat.avgcount=275707,state_aio_wait_lat.avgtime=0.000000345,state_aio_wait_lat.sum=0.095246457,state_deferred_aio_wait_lat.avgcount=0,state_deferred_aio_wait_lat.avgtime=0,state_deferred_aio_wait_lat.sum=0,state_deferred_cleanup_lat.avgcount=0,state_deferred_cleanup_lat.avgtime=0,state_deferred_cleanup_lat.sum=0,state_deferred_queued_lat.avgcount=0,state_deferred_queued_lat.avgtime=0,state_deferred_queued_lat.sum=0,state_done_lat.avgcount=275696,state_done_lat.avgtime=0.00000286,state_done_lat.sum=0.788700007,state_finishing_lat.avgcount=275696,state_finishing_lat.avgtime=0.000000302,state_finishing_lat.sum=0.083437168,state_io_done_lat.avgcount=275707,state_io_done_lat.avgtime=0.000001041,state_io_done_lat.sum=0.287025147,state_kv_commiting_lat.avgcount=275707,state_kv_commiting_lat.avgtime=0.006424459,state_kv_commiting_lat.sum=1771.268407864,state_kv_done_lat.avgcount=275707,state_kv_done_lat.avgtime=0.000001627,state_kv_done_lat.sum=0.448805853,state_kv_queued_lat.avgcount=275707,state_kv_queued_lat.avgtime=0.000488565,state_kv_queued_lat.sum=134.7009424,state_prepare_lat.avgcount=275707,state_prepare_lat.avgtime=0.000082464,state_prepare_lat.sum=22.736065534,submit_lat.avgcount=275707,submit_lat.avgtime=0.000120236,submit_lat.sum=33.149934412,throttle_lat.avgcount=275707,throttle_lat.avgtime=0.000001571,throttle_lat.sum=0.433185935,write_pad_bytes=151773,write_penalty_read_ops=0 1587117698000000000
+ceph,collection=finisher-objecter-finisher-0,host=stefanosd1,id=0,type=osd complete_latency.avgcount=0,complete_latency.avgtime=0,complete_latency.sum=0,queue_len=0 1587117698000000000
+ceph,collection=objecter,host=stefanosd1,id=0,type=osd command_active=0,command_resend=0,command_send=0,linger_active=0,linger_ping=0,linger_resend=0,linger_send=0,map_epoch=203,map_full=0,map_inc=19,omap_del=0,omap_rd=0,omap_wr=0,op=0,op_active=0,op_laggy=0,op_pg=0,op_r=0,op_reply=0,op_resend=0,op_rmw=0,op_send=0,op_send_bytes=0,op_w=0,osd_laggy=0,osd_session_close=0,osd_session_open=0,osd_sessions=0,osdop_append=0,osdop_call=0,osdop_clonerange=0,osdop_cmpxattr=0,osdop_create=0,osdop_delete=0,osdop_getxattr=0,osdop_mapext=0,osdop_notify=0,osdop_other=0,osdop_pgls=0,osdop_pgls_filter=0,osdop_read=0,osdop_resetxattrs=0,osdop_rmxattr=0,osdop_setxattr=0,osdop_sparse_read=0,osdop_src_cmpxattr=0,osdop_stat=0,osdop_truncate=0,osdop_watch=0,osdop_write=0,osdop_writefull=0,osdop_writesame=0,osdop_zero=0,poolop_active=0,poolop_resend=0,poolop_send=0,poolstat_active=0,poolstat_resend=0,poolstat_send=0,statfs_active=0,statfs_resend=0,statfs_send=0 1587117698000000000
+ceph,collection=finisher-commit_finisher,host=stefanosd1,id=0,type=osd complete_latency.avgcount=11,complete_latency.avgtime=0.003447516,complete_latency.sum=0.037922681,queue_len=0 1587117698000000000
+ceph,collection=throttle-objecter_ops,host=stefanosd1,id=0,type=osd get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=1024,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=AsyncMessenger::Worker-0,host=stefanosd1,id=0,type=osd msgr_active_connections=2128,msgr_created_connections=33685,msgr_recv_bytes=8679123051,msgr_recv_messages=4200356,msgr_running_fast_dispatch_time=151.889337454,msgr_running_recv_time=297.632294886,msgr_running_send_time=599.20020523,msgr_running_total_time=1321.361931202,msgr_send_bytes=11716202897,msgr_send_messages=4347418 1587117698000000000
+ceph,collection=throttle-osd_client_bytes,host=stefanosd1,id=0,type=osd get=476554,get_or_fail_fail=0,get_or_fail_success=476554,get_started=0,get_sum=103413728,max=524288000,put=476587,put_sum=103413728,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=throttle-bluestore_throttle_deferred_bytes,host=stefanosd1,id=0,type=osd get=11,get_or_fail_fail=0,get_or_fail_success=11,get_started=0,get_sum=7723117,max=201326592,put=0,put_sum=0,take=0,take_sum=0,val=7723117,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=throttle-msgr_dispatch_throttler-cluster,host=stefanosd1,id=1,type=osd get=860895,get_or_fail_fail=0,get_or_fail_success=860895,get_started=0,get_sum=596482256,max=104857600,put=860895,put_sum=596482256,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=throttle-objecter_ops,host=stefanosd1,id=1,type=osd get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=1024,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=throttle-objecter_bytes,host=stefanosd1,id=1,type=osd get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=104857600,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=finisher-defered_finisher,host=stefanosd1,id=1,type=osd complete_latency.avgcount=0,complete_latency.avgtime=0,complete_latency.sum=0,queue_len=0 1587117698000000000
+ceph,collection=osd,host=stefanosd1,id=1,type=osd agent_evict=0,agent_flush=0,agent_skip=0,agent_wake=0,cached_crc=0,cached_crc_adjusted=0,copyfrom=0,heartbeat_to_peers=7,loadavg=11,map_message_epoch_dups=29,map_message_epochs=50,map_messages=39,messages_delayed_for_map=0,missed_crc=0,numpg=188,numpg_primary=71,numpg_removing=0,numpg_replica=117,numpg_stray=0,object_ctx_cache_hit=1349777,object_ctx_cache_total=2934118,op=1319230,op_before_dequeue_op_lat.avgcount=3792053,op_before_dequeue_op_lat.avgtime=0.000405802,op_before_dequeue_op_lat.sum=1538.826381623,op_before_queue_op_lat.avgcount=3778690,op_before_queue_op_lat.avgtime=0.000033273,op_before_queue_op_lat.sum=125.731131596,op_cache_hit=0,op_in_bytes=0,op_latency.avgcount=1319230,op_latency.avgtime=0.002858138,op_latency.sum=3770.541581676,op_out_bytes=1789210,op_prepare_latency.avgcount=1336472,op_prepare_latency.avgtime=0.000279458,op_prepare_latency.sum=373.488913339,op_process_latency.avgcount=1319230,op_process_latency.avgtime=0.002666408,op_process_latency.sum=3517.606407526,op_r=1075394,op_r_latency.avgcount=1075394,op_r_latency.avgtime=0.000303779,op_r_latency.sum=326.682443032,op_r_out_bytes=1789210,op_r_prepare_latency.avgcount=1075394,op_r_prepare_latency.avgtime=0.000171228,op_r_prepare_latency.sum=184.138580631,op_r_process_latency.avgcount=1075394,op_r_process_latency.avgtime=0.00011609,op_r_process_latency.sum=124.842894319,op_rw=243832,op_rw_in_bytes=0,op_rw_latency.avgcount=243832,op_rw_latency.avgtime=0.014123636,op_rw_latency.sum=3443.79445124,op_rw_out_bytes=0,op_rw_prepare_latency.avgcount=261072,op_rw_prepare_latency.avgtime=0.000725265,op_rw_prepare_latency.sum=189.346543463,op_rw_process_latency.avgcount=243832,op_rw_process_latency.avgtime=0.013914089,op_rw_process_latency.sum=3392.700241086,op_w=4,op_w_in_bytes=0,op_w_latency.avgcount=4,op_w_latency.avgtime=0.016171851,op_w_latency.sum=0.064687404,op_w_prepare_latency.avgcount=6,op_w_prepare_latency.avgtime=0.00063154,op_w_prepare_latency.sum=0.003789245,op_w_process_latency.avgcount=4,op_w_process_latency.avgtime=0.01581803,op_w_process_latency.sum=0.063272121,op_wip=0,osd_map_bl_cache_hit=36,osd_map_bl_cache_miss=40,osd_map_cache_hit=5404,osd_map_cache_miss=14,osd_map_cache_miss_low=0,osd_map_cache_miss_low_avg.avgcount=0,osd_map_cache_miss_low_avg.sum=0,osd_pg_biginfo=2333,osd_pg_fastinfo=576157,osd_pg_info=591751,osd_tier_flush_lat.avgcount=0,osd_tier_flush_lat.avgtime=0,osd_tier_flush_lat.sum=0,osd_tier_promote_lat.avgcount=0,osd_tier_promote_lat.avgtime=0,osd_tier_promote_lat.sum=0,osd_tier_r_lat.avgcount=0,osd_tier_r_lat.avgtime=0,osd_tier_r_lat.sum=0,pull=0,push=22,push_out_bytes=0,recovery_bytes=0,recovery_ops=21,stat_bytes=107369988096,stat_bytes_avail=106271997952,stat_bytes_used=1097990144,subop=306946,subop_in_bytes=204236742,subop_latency.avgcount=306946,subop_latency.avgtime=0.006744881,subop_latency.sum=2070.314452989,subop_pull=0,subop_pull_latency.avgcount=0,subop_pull_latency.avgtime=0,subop_pull_latency.sum=0,subop_push=0,subop_push_in_bytes=0,subop_push_latency.avgcount=0,subop_push_latency.avgtime=0,subop_push_latency.sum=0,subop_w=306946,subop_w_in_bytes=204236742,subop_w_latency.avgcount=306946,subop_w_latency.avgtime=0.006744881,subop_w_latency.sum=2070.314452989,tier_clean=0,tier_delay=0,tier_dirty=8,tier_evict=0,tier_flush=0,tier_flush_fail=0,tier_promote=0,tier_proxy_read=0,tier_proxy_write=0,tier_try_flush=0,tier_try_flush_fail=0,tier_whiteout=0 1587117698000000000
+ceph,collection=objecter,host=stefanosd1,id=1,type=osd command_active=0,command_resend=0,command_send=0,linger_active=0,linger_ping=0,linger_resend=0,linger_send=0,map_epoch=203,map_full=0,map_inc=19,omap_del=0,omap_rd=0,omap_wr=0,op=0,op_active=0,op_laggy=0,op_pg=0,op_r=0,op_reply=0,op_resend=0,op_rmw=0,op_send=0,op_send_bytes=0,op_w=0,osd_laggy=0,osd_session_close=0,osd_session_open=0,osd_sessions=0,osdop_append=0,osdop_call=0,osdop_clonerange=0,osdop_cmpxattr=0,osdop_create=0,osdop_delete=0,osdop_getxattr=0,osdop_mapext=0,osdop_notify=0,osdop_other=0,osdop_pgls=0,osdop_pgls_filter=0,osdop_read=0,osdop_resetxattrs=0,osdop_rmxattr=0,osdop_setxattr=0,osdop_sparse_read=0,osdop_src_cmpxattr=0,osdop_stat=0,osdop_truncate=0,osdop_watch=0,osdop_write=0,osdop_writefull=0,osdop_writesame=0,osdop_zero=0,poolop_active=0,poolop_resend=0,poolop_send=0,poolstat_active=0,poolstat_resend=0,poolstat_send=0,statfs_active=0,statfs_resend=0,statfs_send=0 1587117698000000000
+ceph,collection=AsyncMessenger::Worker-0,host=stefanosd1,id=1,type=osd msgr_active_connections=1356,msgr_created_connections=12290,msgr_recv_bytes=8577187219,msgr_recv_messages=6387040,msgr_running_fast_dispatch_time=475.903632306,msgr_running_recv_time=425.937196699,msgr_running_send_time=783.676217521,msgr_running_total_time=1989.242459076,msgr_send_bytes=12583034449,msgr_send_messages=6074344 1587117698000000000
+ceph,collection=bluestore,host=stefanosd1,id=1,type=osd bluestore_allocated=24182784,bluestore_blob_split=0,bluestore_blobs=88,bluestore_buffer_bytes=614400,bluestore_buffer_hit_bytes=142047,bluestore_buffer_miss_bytes=541480,bluestore_buffers=41,bluestore_compressed=0,bluestore_compressed_allocated=0,bluestore_compressed_original=0,bluestore_extent_compress=0,bluestore_extents=88,bluestore_fragmentation_micros=1,bluestore_gc_merged=0,bluestore_onode_hits=1403948,bluestore_onode_misses=1584732,bluestore_onode_reshard=0,bluestore_onode_shard_hits=0,bluestore_onode_shard_misses=0,bluestore_onodes=459,bluestore_read_eio=0,bluestore_reads_with_retries=0,bluestore_stored=1985647,bluestore_txc=593150,bluestore_write_big=0,bluestore_write_big_blobs=0,bluestore_write_big_bytes=0,bluestore_write_small=58,bluestore_write_small_bytes=343091,bluestore_write_small_deferred=20,bluestore_write_small_new=38,bluestore_write_small_pre_read=20,bluestore_write_small_unused=0,commit_lat.avgcount=593150,commit_lat.avgtime=0.006514834,commit_lat.sum=3864.274280733,compress_lat.avgcount=0,compress_lat.avgtime=0,compress_lat.sum=0,compress_rejected_count=0,compress_success_count=0,csum_lat.avgcount=60,csum_lat.avgtime=0.000028258,csum_lat.sum=0.001695512,decompress_lat.avgcount=0,decompress_lat.avgtime=0,decompress_lat.sum=0,deferred_write_bytes=0,deferred_write_ops=0,kv_commit_lat.avgcount=578129,kv_commit_lat.avgtime=0.00570707,kv_commit_lat.sum=3299.423186928,kv_final_lat.avgcount=578124,kv_final_lat.avgtime=0.000042752,kv_final_lat.sum=24.716171934,kv_flush_lat.avgcount=578129,kv_flush_lat.avgtime=0.000000209,kv_flush_lat.sum=0.121169044,kv_sync_lat.avgcount=578129,kv_sync_lat.avgtime=0.00570728,kv_sync_lat.sum=3299.544355972,omap_lower_bound_lat.avgcount=22,omap_lower_bound_lat.avgtime=0.000005979,omap_lower_bound_lat.sum=0.000131539,omap_next_lat.avgcount=13248,omap_next_lat.avgtime=0.000004836,omap_next_lat.sum=0.064077797,omap_seek_to_first_lat.avgcount=525,omap_seek_to_first_lat.avgtime=0.000004906,omap_seek_to_first_lat.sum=0.002575786,omap_upper_bound_lat.avgcount=0,omap_upper_bound_lat.avgtime=0,omap_upper_bound_lat.sum=0,read_lat.avgcount=406,read_lat.avgtime=0.000383254,read_lat.sum=0.155601529,read_onode_meta_lat.avgcount=483,read_onode_meta_lat.avgtime=0.000008805,read_onode_meta_lat.sum=0.004252832,read_wait_aio_lat.avgcount=77,read_wait_aio_lat.avgtime=0.001907361,read_wait_aio_lat.sum=0.146866799,state_aio_wait_lat.avgcount=593150,state_aio_wait_lat.avgtime=0.000000388,state_aio_wait_lat.sum=0.230498048,state_deferred_aio_wait_lat.avgcount=0,state_deferred_aio_wait_lat.avgtime=0,state_deferred_aio_wait_lat.sum=0,state_deferred_cleanup_lat.avgcount=0,state_deferred_cleanup_lat.avgtime=0,state_deferred_cleanup_lat.sum=0,state_deferred_queued_lat.avgcount=0,state_deferred_queued_lat.avgtime=0,state_deferred_queued_lat.sum=0,state_done_lat.avgcount=593140,state_done_lat.avgtime=0.000003048,state_done_lat.sum=1.80789161,state_finishing_lat.avgcount=593140,state_finishing_lat.avgtime=0.000000325,state_finishing_lat.sum=0.192952339,state_io_done_lat.avgcount=593150,state_io_done_lat.avgtime=0.000001202,state_io_done_lat.sum=0.713333116,state_kv_commiting_lat.avgcount=593150,state_kv_commiting_lat.avgtime=0.005788541,state_kv_commiting_lat.sum=3433.473378536,state_kv_done_lat.avgcount=593150,state_kv_done_lat.avgtime=0.000001472,state_kv_done_lat.sum=0.873559611,state_kv_queued_lat.avgcount=593150,state_kv_queued_lat.avgtime=0.000634215,state_kv_queued_lat.sum=376.18491577,state_prepare_lat.avgcount=593150,state_prepare_lat.avgtime=0.000089694,state_prepare_lat.sum=53.202464675,submit_lat.avgcount=593150,submit_lat.avgtime=0.000127856,submit_lat.sum=75.83816759,throttle_lat.avgcount=593150,throttle_lat.avgtime=0.000001726,throttle_lat.sum=1.023832181,write_pad_bytes=144333,write_penalty_read_ops=0 1587117698000000000
+ceph,collection=throttle-osd_client_bytes,host=stefanosd1,id=1,type=osd get=2920772,get_or_fail_fail=0,get_or_fail_success=2920772,get_started=0,get_sum=739935873,max=524288000,put=4888498,put_sum=739935873,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=throttle-msgr_dispatch_throttler-hb_front_client,host=stefanosd1,id=1,type=osd get=2605442,get_or_fail_fail=0,get_or_fail_success=2605442,get_started=0,get_sum=5221305768,max=104857600,put=2605442,put_sum=5221305768,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=AsyncMessenger::Worker-2,host=stefanosd1,id=1,type=osd msgr_active_connections=1375,msgr_created_connections=12689,msgr_recv_bytes=6393440855,msgr_recv_messages=3260458,msgr_running_fast_dispatch_time=120.622437418,msgr_running_recv_time=225.24709441,msgr_running_send_time=499.150587343,msgr_running_total_time=1043.340296846,msgr_send_bytes=11134862571,msgr_send_messages=3450760 1587117698000000000
+ceph,collection=bluefs,host=stefanosd1,id=1,type=osd bytes_written_slow=0,bytes_written_sst=19824993,bytes_written_wal=1788507023,db_total_bytes=4294967296,db_used_bytes=522190848,files_written_sst=4,files_written_wal=2,gift_bytes=0,log_bytes=1056768,log_compactions=2,logged_bytes=1933271040,max_bytes_db=1483735040,max_bytes_slow=0,max_bytes_wal=0,num_files=12,reclaim_bytes=0,slow_total_bytes=0,slow_used_bytes=0,wal_total_bytes=0,wal_used_bytes=0 1587117698000000000
+ceph,collection=throttle-msgr_dispatch_throttler-hb_back_client,host=stefanosd1,id=1,type=osd get=2605442,get_or_fail_fail=0,get_or_fail_success=2605442,get_started=0,get_sum=5221305768,max=104857600,put=2605442,put_sum=5221305768,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=throttle-bluestore_throttle_deferred_bytes,host=stefanosd1,id=1,type=osd get=10,get_or_fail_fail=0,get_or_fail_success=10,get_started=0,get_sum=7052009,max=201326592,put=0,put_sum=0,take=0,take_sum=0,val=7052009,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=rocksdb,host=stefanosd1,id=1,type=osd compact=0,compact_queue_len=0,compact_queue_merge=0,compact_range=0,get=1586061,get_latency.avgcount=1586061,get_latency.avgtime=0.000083009,get_latency.sum=131.658296684,rocksdb_write_delay_time.avgcount=0,rocksdb_write_delay_time.avgtime=0,rocksdb_write_delay_time.sum=0,rocksdb_write_memtable_time.avgcount=0,rocksdb_write_memtable_time.avgtime=0,rocksdb_write_memtable_time.sum=0,rocksdb_write_pre_and_post_time.avgcount=0,rocksdb_write_pre_and_post_time.avgtime=0,rocksdb_write_pre_and_post_time.sum=0,rocksdb_write_wal_time.avgcount=0,rocksdb_write_wal_time.avgtime=0,rocksdb_write_wal_time.sum=0,submit_latency.avgcount=593150,submit_latency.avgtime=0.000172072,submit_latency.sum=102.064900673,submit_sync_latency.avgcount=578129,submit_sync_latency.avgtime=0.005447017,submit_sync_latency.sum=3149.078822012,submit_transaction=593150,submit_transaction_sync=578129 1587117698000000000
+ceph,collection=throttle-msgr_dispatch_throttler-hb_back_server,host=stefanosd1,id=1,type=osd get=2607669,get_or_fail_fail=0,get_or_fail_success=2607669,get_started=0,get_sum=5225768676,max=104857600,put=2607669,put_sum=5225768676,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=recoverystate_perf,host=stefanosd1,id=1,type=osd activating_latency.avgcount=104,activating_latency.avgtime=0.071646485,activating_latency.sum=7.451234493,active_latency.avgcount=33,active_latency.avgtime=1734.369034268,active_latency.sum=57234.178130859,backfilling_latency.avgcount=1,backfilling_latency.avgtime=2.598401698,backfilling_latency.sum=2.598401698,clean_latency.avgcount=33,clean_latency.avgtime=1734.213467342,clean_latency.sum=57229.044422292,down_latency.avgcount=0,down_latency.avgtime=0,down_latency.sum=0,getinfo_latency.avgcount=167,getinfo_latency.avgtime=0.373444627,getinfo_latency.sum=62.365252849,getlog_latency.avgcount=105,getlog_latency.avgtime=0.003575062,getlog_latency.sum=0.375381569,getmissing_latency.avgcount=104,getmissing_latency.avgtime=0.000157091,getmissing_latency.sum=0.016337565,incomplete_latency.avgcount=0,incomplete_latency.avgtime=0,incomplete_latency.sum=0,initial_latency.avgcount=188,initial_latency.avgtime=0.001833512,initial_latency.sum=0.344700343,notbackfilling_latency.avgcount=0,notbackfilling_latency.avgtime=0,notbackfilling_latency.sum=0,notrecovering_latency.avgcount=0,notrecovering_latency.avgtime=0,notrecovering_latency.sum=0,peering_latency.avgcount=167,peering_latency.avgtime=1.501818082,peering_latency.sum=250.803619796,primary_latency.avgcount=97,primary_latency.avgtime=591.344286378,primary_latency.sum=57360.395778762,recovered_latency.avgcount=104,recovered_latency.avgtime=0.000291138,recovered_latency.sum=0.030278433,recovering_latency.avgcount=2,recovering_latency.avgtime=0.142378096,recovering_latency.sum=0.284756192,replicaactive_latency.avgcount=32,replicaactive_latency.avgtime=1788.474901442,replicaactive_latency.sum=57231.196846165,repnotrecovering_latency.avgcount=34,repnotrecovering_latency.avgtime=1683.273587087,repnotrecovering_latency.sum=57231.301960987,reprecovering_latency.avgcount=2,reprecovering_latency.avgtime=0.418094818,reprecovering_latency.sum=0.836189637,repwaitbackfillreserved_latency.avgcount=0,repwaitbackfillreserved_latency.avgtime=0,repwaitbackfillreserved_latency.sum=0,repwaitrecoveryreserved_latency.avgcount=2,repwaitrecoveryreserved_latency.avgtime=0.000588413,repwaitrecoveryreserved_latency.sum=0.001176827,reset_latency.avgcount=433,reset_latency.avgtime=0.15669689,reset_latency.sum=67.849753631,start_latency.avgcount=433,start_latency.avgtime=0.000412707,start_latency.sum=0.178702508,started_latency.avgcount=245,started_latency.avgtime=468.419544137,started_latency.sum=114762.788313581,stray_latency.avgcount=266,stray_latency.avgtime=1.489291271,stray_latency.sum=396.151478238,waitactingchange_latency.avgcount=1,waitactingchange_latency.avgtime=0.982689906,waitactingchange_latency.sum=0.982689906,waitlocalbackfillreserved_latency.avgcount=1,waitlocalbackfillreserved_latency.avgtime=0.000542092,waitlocalbackfillreserved_latency.sum=0.000542092,waitlocalrecoveryreserved_latency.avgcount=2,waitlocalrecoveryreserved_latency.avgtime=0.00391669,waitlocalrecoveryreserved_latency.sum=0.007833381,waitremotebackfillreserved_latency.avgcount=1,waitremotebackfillreserved_latency.avgtime=0.003110409,waitremotebackfillreserved_latency.sum=0.003110409,waitremoterecoveryreserved_latency.avgcount=2,waitremoterecoveryreserved_latency.avgtime=0.012229338,waitremoterecoveryreserved_latency.sum=0.024458677,waitupthru_latency.avgcount=104,waitupthru_latency.avgtime=1.807608905,waitupthru_latency.sum=187.991326197 1587117698000000000
+ceph,collection=AsyncMessenger::Worker-1,host=stefanosd1,id=1,type=osd msgr_active_connections=1289,msgr_created_connections=9469,msgr_recv_bytes=8348149800,msgr_recv_messages=5048791,msgr_running_fast_dispatch_time=313.754567889,msgr_running_recv_time=372.054833029,msgr_running_send_time=694.900405016,msgr_running_total_time=1656.294769387,msgr_send_bytes=11550148208,msgr_send_messages=5175962 1587117698000000000
+ceph,collection=throttle-bluestore_throttle_bytes,host=stefanosd1,id=1,type=osd get=593150,get_or_fail_fail=0,get_or_fail_success=0,get_started=593150,get_sum=398147414260,max=67108864,put=578129,put_sum=398147414260,take=0,take_sum=0,val=0,wait.avgcount=29,wait.avgtime=0.000972655,wait.sum=0.028207005 1587117698000000000
+ceph,collection=throttle-msgr_dispatch_throttler-ms_objecter,host=stefanosd1,id=1,type=osd get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=104857600,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=cct,host=stefanosd1,id=1,type=osd total_workers=6,unhealthy_workers=0 1587117698000000000
+ceph,collection=mempool,host=stefanosd1,id=1,type=osd bloom_filter_bytes=0,bloom_filter_items=0,bluefs_bytes=13064,bluefs_items=593,bluestore_alloc_bytes=230288,bluestore_alloc_items=28786,bluestore_cache_data_bytes=614400,bluestore_cache_data_items=41,bluestore_cache_onode_bytes=301104,bluestore_cache_onode_items=459,bluestore_cache_other_bytes=230945,bluestore_cache_other_items=26119,bluestore_fsck_bytes=0,bluestore_fsck_items=0,bluestore_txc_bytes=7520,bluestore_txc_items=10,bluestore_writing_bytes=0,bluestore_writing_deferred_bytes=657768,bluestore_writing_deferred_items=172,bluestore_writing_items=0,buffer_anon_bytes=2328515,buffer_anon_items=271,buffer_meta_bytes=5808,buffer_meta_items=66,mds_co_bytes=0,mds_co_items=0,osd_bytes=2406400,osd_items=188,osd_mapbl_bytes=139623,osd_mapbl_items=9,osd_pglog_bytes=6768784,osd_pglog_items=18179,osdmap_bytes=710892,osdmap_items=4426,osdmap_mapping_bytes=0,osdmap_mapping_items=0,pgmap_bytes=0,pgmap_items=0,unittest_1_bytes=0,unittest_1_items=0,unittest_2_bytes=0,unittest_2_items=0 1587117698000000000
+ceph,collection=throttle-msgr_dispatch_throttler-client,host=stefanosd1,id=1,type=osd get=2932513,get_or_fail_fail=0,get_or_fail_success=2932513,get_started=0,get_sum=740620215,max=104857600,put=2932513,put_sum=740620215,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=throttle-msgr_dispatch_throttler-hb_front_server,host=stefanosd1,id=1,type=osd get=2607669,get_or_fail_fail=0,get_or_fail_success=2607669,get_started=0,get_sum=5225768676,max=104857600,put=2607669,put_sum=5225768676,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=finisher-commit_finisher,host=stefanosd1,id=1,type=osd complete_latency.avgcount=10,complete_latency.avgtime=0.002884646,complete_latency.sum=0.028846469,queue_len=0 1587117698000000000
+ceph,collection=finisher-objecter-finisher-0,host=stefanosd1,id=1,type=osd complete_latency.avgcount=0,complete_latency.avgtime=0,complete_latency.sum=0,queue_len=0 1587117698000000000
+ceph,collection=throttle-objecter_bytes,host=stefanosd1,id=2,type=osd get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=104857600,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=finisher-commit_finisher,host=stefanosd1,id=2,type=osd complete_latency.avgcount=11,complete_latency.avgtime=0.002714416,complete_latency.sum=0.029858583,queue_len=0 1587117698000000000
+ceph,collection=finisher-defered_finisher,host=stefanosd1,id=2,type=osd complete_latency.avgcount=0,complete_latency.avgtime=0,complete_latency.sum=0,queue_len=0 1587117698000000000
+ceph,collection=objecter,host=stefanosd1,id=2,type=osd command_active=0,command_resend=0,command_send=0,linger_active=0,linger_ping=0,linger_resend=0,linger_send=0,map_epoch=203,map_full=0,map_inc=19,omap_del=0,omap_rd=0,omap_wr=0,op=0,op_active=0,op_laggy=0,op_pg=0,op_r=0,op_reply=0,op_resend=0,op_rmw=0,op_send=0,op_send_bytes=0,op_w=0,osd_laggy=0,osd_session_close=0,osd_session_open=0,osd_sessions=0,osdop_append=0,osdop_call=0,osdop_clonerange=0,osdop_cmpxattr=0,osdop_create=0,osdop_delete=0,osdop_getxattr=0,osdop_mapext=0,osdop_notify=0,osdop_other=0,osdop_pgls=0,osdop_pgls_filter=0,osdop_read=0,osdop_resetxattrs=0,osdop_rmxattr=0,osdop_setxattr=0,osdop_sparse_read=0,osdop_src_cmpxattr=0,osdop_stat=0,osdop_truncate=0,osdop_watch=0,osdop_write=0,osdop_writefull=0,osdop_writesame=0,osdop_zero=0,poolop_active=0,poolop_resend=0,poolop_send=0,poolstat_active=0,poolstat_resend=0,poolstat_send=0,statfs_active=0,statfs_resend=0,statfs_send=0 1587117698000000000
+ceph,collection=throttle-msgr_dispatch_throttler-hb_back_client,host=stefanosd1,id=2,type=osd get=2607136,get_or_fail_fail=0,get_or_fail_success=2607136,get_started=0,get_sum=5224700544,max=104857600,put=2607136,put_sum=5224700544,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=mempool,host=stefanosd1,id=2,type=osd bloom_filter_bytes=0,bloom_filter_items=0,bluefs_bytes=11624,bluefs_items=522,bluestore_alloc_bytes=230288,bluestore_alloc_items=28786,bluestore_cache_data_bytes=614400,bluestore_cache_data_items=41,bluestore_cache_onode_bytes=228288,bluestore_cache_onode_items=348,bluestore_cache_other_bytes=174158,bluestore_cache_other_items=18527,bluestore_fsck_bytes=0,bluestore_fsck_items=0,bluestore_txc_bytes=8272,bluestore_txc_items=11,bluestore_writing_bytes=0,bluestore_writing_deferred_bytes=670130,bluestore_writing_deferred_items=176,bluestore_writing_items=0,buffer_anon_bytes=2311664,buffer_anon_items=244,buffer_meta_bytes=5456,buffer_meta_items=62,mds_co_bytes=0,mds_co_items=0,osd_bytes=1920000,osd_items=150,osd_mapbl_bytes=155152,osd_mapbl_items=10,osd_pglog_bytes=3393520,osd_pglog_items=9128,osdmap_bytes=710892,osdmap_items=4426,osdmap_mapping_bytes=0,osdmap_mapping_items=0,pgmap_bytes=0,pgmap_items=0,unittest_1_bytes=0,unittest_1_items=0,unittest_2_bytes=0,unittest_2_items=0 1587117698000000000
+ceph,collection=osd,host=stefanosd1,id=2,type=osd agent_evict=0,agent_flush=0,agent_skip=0,agent_wake=0,cached_crc=0,cached_crc_adjusted=0,copyfrom=0,heartbeat_to_peers=7,loadavg=11,map_message_epoch_dups=37,map_message_epochs=56,map_messages=37,messages_delayed_for_map=0,missed_crc=0,numpg=150,numpg_primary=59,numpg_removing=0,numpg_replica=91,numpg_stray=0,object_ctx_cache_hit=705923,object_ctx_cache_total=705951,op=690584,op_before_dequeue_op_lat.avgcount=1155697,op_before_dequeue_op_lat.avgtime=0.000217926,op_before_dequeue_op_lat.sum=251.856487141,op_before_queue_op_lat.avgcount=1148445,op_before_queue_op_lat.avgtime=0.000039696,op_before_queue_op_lat.sum=45.589516462,op_cache_hit=0,op_in_bytes=0,op_latency.avgcount=690584,op_latency.avgtime=0.002488685,op_latency.sum=1718.646504654,op_out_bytes=1026000,op_prepare_latency.avgcount=698700,op_prepare_latency.avgtime=0.000300375,op_prepare_latency.sum=209.872029659,op_process_latency.avgcount=690584,op_process_latency.avgtime=0.00230742,op_process_latency.sum=1593.46739165,op_r=548020,op_r_latency.avgcount=548020,op_r_latency.avgtime=0.000298287,op_r_latency.sum=163.467760649,op_r_out_bytes=1026000,op_r_prepare_latency.avgcount=548020,op_r_prepare_latency.avgtime=0.000186359,op_r_prepare_latency.sum=102.128629183,op_r_process_latency.avgcount=548020,op_r_process_latency.avgtime=0.00012716,op_r_process_latency.sum=69.686468884,op_rw=142562,op_rw_in_bytes=0,op_rw_latency.avgcount=142562,op_rw_latency.avgtime=0.010908597,op_rw_latency.sum=1555.151525732,op_rw_out_bytes=0,op_rw_prepare_latency.avgcount=150678,op_rw_prepare_latency.avgtime=0.000715043,op_rw_prepare_latency.sum=107.741399304,op_rw_process_latency.avgcount=142562,op_rw_process_latency.avgtime=0.01068836,op_rw_process_latency.sum=1523.754107887,op_w=2,op_w_in_bytes=0,op_w_latency.avgcount=2,op_w_latency.avgtime=0.013609136,op_w_latency.sum=0.027218273,op_w_prepare_latency.avgcount=2,op_w_prepare_latency.avgtime=0.001000586,op_w_prepare_latency.sum=0.002001172,op_w_process_latency.avgcount=2,op_w_process_latency.avgtime=0.013407439,op_w_process_latency.sum=0.026814879,op_wip=0,osd_map_bl_cache_hit=15,osd_map_bl_cache_miss=41,osd_map_cache_hit=4241,osd_map_cache_miss=14,osd_map_cache_miss_low=0,osd_map_cache_miss_low_avg.avgcount=0,osd_map_cache_miss_low_avg.sum=0,osd_pg_biginfo=1824,osd_pg_fastinfo=285998,osd_pg_info=294869,osd_tier_flush_lat.avgcount=0,osd_tier_flush_lat.avgtime=0,osd_tier_flush_lat.sum=0,osd_tier_promote_lat.avgcount=0,osd_tier_promote_lat.avgtime=0,osd_tier_promote_lat.sum=0,osd_tier_r_lat.avgcount=0,osd_tier_r_lat.avgtime=0,osd_tier_r_lat.sum=0,pull=0,push=1,push_out_bytes=0,recovery_bytes=0,recovery_ops=0,stat_bytes=107369988096,stat_bytes_avail=106271932416,stat_bytes_used=1098055680,subop=134165,subop_in_bytes=89501237,subop_latency.avgcount=134165,subop_latency.avgtime=0.007313523,subop_latency.sum=981.218888627,subop_pull=0,subop_pull_latency.avgcount=0,subop_pull_latency.avgtime=0,subop_pull_latency.sum=0,subop_push=0,subop_push_in_bytes=0,subop_push_latency.avgcount=0,subop_push_latency.avgtime=0,subop_push_latency.sum=0,subop_w=134165,subop_w_in_bytes=89501237,subop_w_latency.avgcount=134165,subop_w_latency.avgtime=0.007313523,subop_w_latency.sum=981.218888627,tier_clean=0,tier_delay=0,tier_dirty=4,tier_evict=0,tier_flush=0,tier_flush_fail=0,tier_promote=0,tier_proxy_read=0,tier_proxy_write=0,tier_try_flush=0,tier_try_flush_fail=0,tier_whiteout=0 1587117698000000000
+ceph,collection=AsyncMessenger::Worker-1,host=stefanosd1,id=2,type=osd msgr_active_connections=746,msgr_created_connections=15212,msgr_recv_bytes=8633229006,msgr_recv_messages=4284202,msgr_running_fast_dispatch_time=153.820479102,msgr_running_recv_time=282.031655658,msgr_running_send_time=585.444749736,msgr_running_total_time=1231.431789242,msgr_send_bytes=11962769351,msgr_send_messages=4440622 1587117698000000000
+ceph,collection=throttle-msgr_dispatch_throttler-ms_objecter,host=stefanosd1,id=2,type=osd get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=104857600,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=throttle-msgr_dispatch_throttler-hb_front_client,host=stefanosd1,id=2,type=osd get=2607136,get_or_fail_fail=0,get_or_fail_success=2607136,get_started=0,get_sum=5224700544,max=104857600,put=2607136,put_sum=5224700544,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=bluefs,host=stefanosd1,id=2,type=osd bytes_written_slow=0,bytes_written_sst=9065815,bytes_written_wal=901884611,db_total_bytes=4294967296,db_used_bytes=546308096,files_written_sst=3,files_written_wal=2,gift_bytes=0,log_bytes=225726464,log_compactions=1,logged_bytes=1195945984,max_bytes_db=1234173952,max_bytes_slow=0,max_bytes_wal=0,num_files=11,reclaim_bytes=0,slow_total_bytes=0,slow_used_bytes=0,wal_total_bytes=0,wal_used_bytes=0 1587117698000000000
+ceph,collection=recoverystate_perf,host=stefanosd1,id=2,type=osd activating_latency.avgcount=88,activating_latency.avgtime=0.086149065,activating_latency.sum=7.581117751,active_latency.avgcount=29,active_latency.avgtime=1790.849396082,active_latency.sum=51934.632486379,backfilling_latency.avgcount=0,backfilling_latency.avgtime=0,backfilling_latency.sum=0,clean_latency.avgcount=29,clean_latency.avgtime=1790.754765195,clean_latency.sum=51931.888190683,down_latency.avgcount=0,down_latency.avgtime=0,down_latency.sum=0,getinfo_latency.avgcount=134,getinfo_latency.avgtime=0.427567953,getinfo_latency.sum=57.294105786,getlog_latency.avgcount=88,getlog_latency.avgtime=0.011810192,getlog_latency.sum=1.03929697,getmissing_latency.avgcount=88,getmissing_latency.avgtime=0.000104598,getmissing_latency.sum=0.009204673,incomplete_latency.avgcount=0,incomplete_latency.avgtime=0,incomplete_latency.sum=0,initial_latency.avgcount=150,initial_latency.avgtime=0.001251361,initial_latency.sum=0.187704197,notbackfilling_latency.avgcount=0,notbackfilling_latency.avgtime=0,notbackfilling_latency.sum=0,notrecovering_latency.avgcount=0,notrecovering_latency.avgtime=0,notrecovering_latency.sum=0,peering_latency.avgcount=134,peering_latency.avgtime=0.998405763,peering_latency.sum=133.786372331,primary_latency.avgcount=75,primary_latency.avgtime=693.473306562,primary_latency.sum=52010.497992212,recovered_latency.avgcount=88,recovered_latency.avgtime=0.000609715,recovered_latency.sum=0.053654964,recovering_latency.avgcount=1,recovering_latency.avgtime=0.100713031,recovering_latency.sum=0.100713031,replicaactive_latency.avgcount=21,replicaactive_latency.avgtime=1790.852354921,replicaactive_latency.sum=37607.89945336,repnotrecovering_latency.avgcount=21,repnotrecovering_latency.avgtime=1790.852315529,repnotrecovering_latency.sum=37607.898626121,reprecovering_latency.avgcount=0,reprecovering_latency.avgtime=0,reprecovering_latency.sum=0,repwaitbackfillreserved_latency.avgcount=0,repwaitbackfillreserved_latency.avgtime=0,repwaitbackfillreserved_latency.sum=0,repwaitrecoveryreserved_latency.avgcount=0,repwaitrecoveryreserved_latency.avgtime=0,repwaitrecoveryreserved_latency.sum=0,reset_latency.avgcount=346,reset_latency.avgtime=0.126826803,reset_latency.sum=43.882073917,start_latency.avgcount=346,start_latency.avgtime=0.000233277,start_latency.sum=0.080713962,started_latency.avgcount=196,started_latency.avgtime=457.885378797,started_latency.sum=89745.534244237,stray_latency.avgcount=212,stray_latency.avgtime=1.013774396,stray_latency.sum=214.920172121,waitactingchange_latency.avgcount=0,waitactingchange_latency.avgtime=0,waitactingchange_latency.sum=0,waitlocalbackfillreserved_latency.avgcount=0,waitlocalbackfillreserved_latency.avgtime=0,waitlocalbackfillreserved_latency.sum=0,waitlocalrecoveryreserved_latency.avgcount=1,waitlocalrecoveryreserved_latency.avgtime=0.001572379,waitlocalrecoveryreserved_latency.sum=0.001572379,waitremotebackfillreserved_latency.avgcount=0,waitremotebackfillreserved_latency.avgtime=0,waitremotebackfillreserved_latency.sum=0,waitremoterecoveryreserved_latency.avgcount=1,waitremoterecoveryreserved_latency.avgtime=0.012729633,waitremoterecoveryreserved_latency.sum=0.012729633,waitupthru_latency.avgcount=88,waitupthru_latency.avgtime=0.857137729,waitupthru_latency.sum=75.428120205 1587117698000000000
+ceph,collection=throttle-objecter_ops,host=stefanosd1,id=2,type=osd get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=1024,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=bluestore,host=stefanosd1,id=2,type=osd bluestore_allocated=24248320,bluestore_blob_split=0,bluestore_blobs=83,bluestore_buffer_bytes=614400,bluestore_buffer_hit_bytes=161362,bluestore_buffer_miss_bytes=534799,bluestore_buffers=41,bluestore_compressed=0,bluestore_compressed_allocated=0,bluestore_compressed_original=0,bluestore_extent_compress=0,bluestore_extents=83,bluestore_fragmentation_micros=1,bluestore_gc_merged=0,bluestore_onode_hits=723852,bluestore_onode_misses=364,bluestore_onode_reshard=0,bluestore_onode_shard_hits=0,bluestore_onode_shard_misses=0,bluestore_onodes=348,bluestore_read_eio=0,bluestore_reads_with_retries=0,bluestore_stored=1984402,bluestore_txc=295997,bluestore_write_big=0,bluestore_write_big_blobs=0,bluestore_write_big_bytes=0,bluestore_write_small=60,bluestore_write_small_bytes=343843,bluestore_write_small_deferred=22,bluestore_write_small_new=38,bluestore_write_small_pre_read=22,bluestore_write_small_unused=0,commit_lat.avgcount=295997,commit_lat.avgtime=0.006994931,commit_lat.sum=2070.478673619,compress_lat.avgcount=0,compress_lat.avgtime=0,compress_lat.sum=0,compress_rejected_count=0,compress_success_count=0,csum_lat.avgcount=47,csum_lat.avgtime=0.000034434,csum_lat.sum=0.001618423,decompress_lat.avgcount=0,decompress_lat.avgtime=0,decompress_lat.sum=0,deferred_write_bytes=0,deferred_write_ops=0,kv_commit_lat.avgcount=291889,kv_commit_lat.avgtime=0.006347015,kv_commit_lat.sum=1852.624108527,kv_final_lat.avgcount=291885,kv_final_lat.avgtime=0.00004358,kv_final_lat.sum=12.720529751,kv_flush_lat.avgcount=291889,kv_flush_lat.avgtime=0.000000211,kv_flush_lat.sum=0.061636079,kv_sync_lat.avgcount=291889,kv_sync_lat.avgtime=0.006347227,kv_sync_lat.sum=1852.685744606,omap_lower_bound_lat.avgcount=1,omap_lower_bound_lat.avgtime=0.000004482,omap_lower_bound_lat.sum=0.000004482,omap_next_lat.avgcount=6933,omap_next_lat.avgtime=0.000003956,omap_next_lat.sum=0.027427456,omap_seek_to_first_lat.avgcount=309,omap_seek_to_first_lat.avgtime=0.000005879,omap_seek_to_first_lat.sum=0.001816658,omap_upper_bound_lat.avgcount=0,omap_upper_bound_lat.avgtime=0,omap_upper_bound_lat.sum=0,read_lat.avgcount=229,read_lat.avgtime=0.000394981,read_lat.sum=0.090450704,read_onode_meta_lat.avgcount=295,read_onode_meta_lat.avgtime=0.000016832,read_onode_meta_lat.sum=0.004965516,read_wait_aio_lat.avgcount=66,read_wait_aio_lat.avgtime=0.001237841,read_wait_aio_lat.sum=0.081697561,state_aio_wait_lat.avgcount=295997,state_aio_wait_lat.avgtime=0.000000357,state_aio_wait_lat.sum=0.105827433,state_deferred_aio_wait_lat.avgcount=0,state_deferred_aio_wait_lat.avgtime=0,state_deferred_aio_wait_lat.sum=0,state_deferred_cleanup_lat.avgcount=0,state_deferred_cleanup_lat.avgtime=0,state_deferred_cleanup_lat.sum=0,state_deferred_queued_lat.avgcount=0,state_deferred_queued_lat.avgtime=0,state_deferred_queued_lat.sum=0,state_done_lat.avgcount=295986,state_done_lat.avgtime=0.000003017,state_done_lat.sum=0.893199127,state_finishing_lat.avgcount=295986,state_finishing_lat.avgtime=0.000000306,state_finishing_lat.sum=0.090792683,state_io_done_lat.avgcount=295997,state_io_done_lat.avgtime=0.000001066,state_io_done_lat.sum=0.315577655,state_kv_commiting_lat.avgcount=295997,state_kv_commiting_lat.avgtime=0.006423586,state_kv_commiting_lat.sum=1901.362268572,state_kv_done_lat.avgcount=295997,state_kv_done_lat.avgtime=0.00000155,state_kv_done_lat.sum=0.458963064,state_kv_queued_lat.avgcount=295997,state_kv_queued_lat.avgtime=0.000477234,state_kv_queued_lat.sum=141.260101773,state_prepare_lat.avgcount=295997,state_prepare_lat.avgtime=0.000091806,state_prepare_lat.sum=27.174436583,submit_lat.avgcount=295997,submit_lat.avgtime=0.000135729,submit_lat.sum=40.17557682,throttle_lat.avgcount=295997,throttle_lat.avgtime=0.000002734,throttle_lat.sum=0.809479837,write_pad_bytes=151773,write_penalty_read_ops=0 1587117698000000000
+ceph,collection=throttle-bluestore_throttle_bytes,host=stefanosd1,id=2,type=osd get=295997,get_or_fail_fail=0,get_or_fail_success=0,get_started=295997,get_sum=198686579299,max=67108864,put=291889,put_sum=198686579299,take=0,take_sum=0,val=0,wait.avgcount=83,wait.avgtime=0.003670612,wait.sum=0.304660858 1587117698000000000
+ceph,collection=throttle-msgr_dispatch_throttler-cluster,host=stefanosd1,id=2,type=osd get=452060,get_or_fail_fail=0,get_or_fail_success=452060,get_started=0,get_sum=269934345,max=104857600,put=452060,put_sum=269934345,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=throttle-bluestore_throttle_deferred_bytes,host=stefanosd1,id=2,type=osd get=11,get_or_fail_fail=0,get_or_fail_success=11,get_started=0,get_sum=7723117,max=201326592,put=0,put_sum=0,take=0,take_sum=0,val=7723117,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=throttle-msgr_dispatch_throttler-hb_front_server,host=stefanosd1,id=2,type=osd get=2607433,get_or_fail_fail=0,get_or_fail_success=2607433,get_started=0,get_sum=5225295732,max=104857600,put=2607433,put_sum=5225295732,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=finisher-objecter-finisher-0,host=stefanosd1,id=2,type=osd complete_latency.avgcount=0,complete_latency.avgtime=0,complete_latency.sum=0,queue_len=0 1587117698000000000
+ceph,collection=cct,host=stefanosd1,id=2,type=osd total_workers=6,unhealthy_workers=0 1587117698000000000
+ceph,collection=AsyncMessenger::Worker-2,host=stefanosd1,id=2,type=osd msgr_active_connections=670,msgr_created_connections=13455,msgr_recv_bytes=6334605563,msgr_recv_messages=3287843,msgr_running_fast_dispatch_time=137.016615819,msgr_running_recv_time=240.687997039,msgr_running_send_time=471.710658466,msgr_running_total_time=1034.029109337,msgr_send_bytes=9753423475,msgr_send_messages=3439611 1587117698000000000
+ceph,collection=throttle-msgr_dispatch_throttler-client,host=stefanosd1,id=2,type=osd get=710355,get_or_fail_fail=0,get_or_fail_success=710355,get_started=0,get_sum=166306283,max=104857600,put=710355,put_sum=166306283,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=throttle-msgr_dispatch_throttler-hb_back_server,host=stefanosd1,id=2,type=osd get=2607433,get_or_fail_fail=0,get_or_fail_success=2607433,get_started=0,get_sum=5225295732,max=104857600,put=2607433,put_sum=5225295732,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=AsyncMessenger::Worker-0,host=stefanosd1,id=2,type=osd msgr_active_connections=705,msgr_created_connections=17953,msgr_recv_bytes=7261438733,msgr_recv_messages=4496034,msgr_running_fast_dispatch_time=254.716476808,msgr_running_recv_time=272.196741555,msgr_running_send_time=571.102924903,msgr_running_total_time=1338.461077493,msgr_send_bytes=10772250508,msgr_send_messages=4192781 1587117698000000000
+ceph,collection=rocksdb,host=stefanosd1,id=2,type=osd compact=0,compact_queue_len=0,compact_queue_merge=0,compact_range=0,get=1424,get_latency.avgcount=1424,get_latency.avgtime=0.000030752,get_latency.sum=0.043792142,rocksdb_write_delay_time.avgcount=0,rocksdb_write_delay_time.avgtime=0,rocksdb_write_delay_time.sum=0,rocksdb_write_memtable_time.avgcount=0,rocksdb_write_memtable_time.avgtime=0,rocksdb_write_memtable_time.sum=0,rocksdb_write_pre_and_post_time.avgcount=0,rocksdb_write_pre_and_post_time.avgtime=0,rocksdb_write_pre_and_post_time.sum=0,rocksdb_write_wal_time.avgcount=0,rocksdb_write_wal_time.avgtime=0,rocksdb_write_wal_time.sum=0,submit_latency.avgcount=295997,submit_latency.avgtime=0.000173137,submit_latency.sum=51.248072285,submit_sync_latency.avgcount=291889,submit_sync_latency.avgtime=0.006094397,submit_sync_latency.sum=1778.887521449,submit_transaction=295997,submit_transaction_sync=291889 1587117698000000000
+ceph,collection=throttle-osd_client_bytes,host=stefanosd1,id=2,type=osd get=698701,get_or_fail_fail=0,get_or_fail_success=698701,get_started=0,get_sum=165630172,max=524288000,put=920880,put_sum=165630172,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117698000000000
+ceph,collection=mds_sessions,host=stefanmds1,id=stefanmds1,type=mds average_load=0,avg_session_uptime=0,session_add=0,session_count=0,session_remove=0,sessions_open=0,sessions_stale=0,total_load=0 1587117476000000000
+ceph,collection=mempool,host=stefanmds1,id=stefanmds1,type=mds bloom_filter_bytes=0,bloom_filter_items=0,bluefs_bytes=0,bluefs_items=0,bluestore_alloc_bytes=0,bluestore_alloc_items=0,bluestore_cache_data_bytes=0,bluestore_cache_data_items=0,bluestore_cache_onode_bytes=0,bluestore_cache_onode_items=0,bluestore_cache_other_bytes=0,bluestore_cache_other_items=0,bluestore_fsck_bytes=0,bluestore_fsck_items=0,bluestore_txc_bytes=0,bluestore_txc_items=0,bluestore_writing_bytes=0,bluestore_writing_deferred_bytes=0,bluestore_writing_deferred_items=0,bluestore_writing_items=0,buffer_anon_bytes=132069,buffer_anon_items=82,buffer_meta_bytes=0,buffer_meta_items=0,mds_co_bytes=44208,mds_co_items=154,osd_bytes=0,osd_items=0,osd_mapbl_bytes=0,osd_mapbl_items=0,osd_pglog_bytes=0,osd_pglog_items=0,osdmap_bytes=16952,osdmap_items=139,osdmap_mapping_bytes=0,osdmap_mapping_items=0,pgmap_bytes=0,pgmap_items=0,unittest_1_bytes=0,unittest_1_items=0,unittest_2_bytes=0,unittest_2_items=0 1587117476000000000
+ceph,collection=objecter,host=stefanmds1,id=stefanmds1,type=mds command_active=0,command_resend=0,command_send=0,linger_active=0,linger_ping=0,linger_resend=0,linger_send=0,map_epoch=203,map_full=0,map_inc=1,omap_del=0,omap_rd=28,omap_wr=1,op=33,op_active=0,op_laggy=0,op_pg=0,op_r=26,op_reply=33,op_resend=2,op_rmw=0,op_send=35,op_send_bytes=364,op_w=7,osd_laggy=0,osd_session_close=91462,osd_session_open=91468,osd_sessions=6,osdop_append=0,osdop_call=0,osdop_clonerange=0,osdop_cmpxattr=0,osdop_create=0,osdop_delete=5,osdop_getxattr=14,osdop_mapext=0,osdop_notify=0,osdop_other=0,osdop_pgls=0,osdop_pgls_filter=0,osdop_read=8,osdop_resetxattrs=0,osdop_rmxattr=0,osdop_setxattr=0,osdop_sparse_read=0,osdop_src_cmpxattr=0,osdop_stat=2,osdop_truncate=0,osdop_watch=0,osdop_write=0,osdop_writefull=0,osdop_writesame=0,osdop_zero=1,poolop_active=0,poolop_resend=0,poolop_send=0,poolstat_active=0,poolstat_resend=0,poolstat_send=0,statfs_active=0,statfs_resend=0,statfs_send=0 1587117476000000000
+ceph,collection=cct,host=stefanmds1,id=stefanmds1,type=mds total_workers=1,unhealthy_workers=0 1587117476000000000
+ceph,collection=mds_server,host=stefanmds1,id=stefanmds1,type=mds cap_revoke_eviction=0,dispatch_client_request=0,dispatch_server_request=0,handle_client_request=0,handle_client_session=0,handle_slave_request=0,req_create_latency.avgcount=0,req_create_latency.avgtime=0,req_create_latency.sum=0,req_getattr_latency.avgcount=0,req_getattr_latency.avgtime=0,req_getattr_latency.sum=0,req_getfilelock_latency.avgcount=0,req_getfilelock_latency.avgtime=0,req_getfilelock_latency.sum=0,req_link_latency.avgcount=0,req_link_latency.avgtime=0,req_link_latency.sum=0,req_lookup_latency.avgcount=0,req_lookup_latency.avgtime=0,req_lookup_latency.sum=0,req_lookuphash_latency.avgcount=0,req_lookuphash_latency.avgtime=0,req_lookuphash_latency.sum=0,req_lookupino_latency.avgcount=0,req_lookupino_latency.avgtime=0,req_lookupino_latency.sum=0,req_lookupname_latency.avgcount=0,req_lookupname_latency.avgtime=0,req_lookupname_latency.sum=0,req_lookupparent_latency.avgcount=0,req_lookupparent_latency.avgtime=0,req_lookupparent_latency.sum=0,req_lookupsnap_latency.avgcount=0,req_lookupsnap_latency.avgtime=0,req_lookupsnap_latency.sum=0,req_lssnap_latency.avgcount=0,req_lssnap_latency.avgtime=0,req_lssnap_latency.sum=0,req_mkdir_latency.avgcount=0,req_mkdir_latency.avgtime=0,req_mkdir_latency.sum=0,req_mknod_latency.avgcount=0,req_mknod_latency.avgtime=0,req_mknod_latency.sum=0,req_mksnap_latency.avgcount=0,req_mksnap_latency.avgtime=0,req_mksnap_latency.sum=0,req_open_latency.avgcount=0,req_open_latency.avgtime=0,req_open_latency.sum=0,req_readdir_latency.avgcount=0,req_readdir_latency.avgtime=0,req_readdir_latency.sum=0,req_rename_latency.avgcount=0,req_rename_latency.avgtime=0,req_rename_latency.sum=0,req_renamesnap_latency.avgcount=0,req_renamesnap_latency.avgtime=0,req_renamesnap_latency.sum=0,req_rmdir_latency.avgcount=0,req_rmdir_latency.avgtime=0,req_rmdir_latency.sum=0,req_rmsnap_latency.avgcount=0,req_rmsnap_latency.avgtime=0,req_rmsnap_latency.sum=0,req_rmxattr_latency.avgcount=0,req_rmxattr_latency.avgtime=0,req_rmxattr_latency.sum=0,req_setattr_latency.avgcount=0,req_setattr_latency.avgtime=0,req_setattr_latency.sum=0,req_setdirlayout_latency.avgcount=0,req_setdirlayout_latency.avgtime=0,req_setdirlayout_latency.sum=0,req_setfilelock_latency.avgcount=0,req_setfilelock_latency.avgtime=0,req_setfilelock_latency.sum=0,req_setlayout_latency.avgcount=0,req_setlayout_latency.avgtime=0,req_setlayout_latency.sum=0,req_setxattr_latency.avgcount=0,req_setxattr_latency.avgtime=0,req_setxattr_latency.sum=0,req_symlink_latency.avgcount=0,req_symlink_latency.avgtime=0,req_symlink_latency.sum=0,req_unlink_latency.avgcount=0,req_unlink_latency.avgtime=0,req_unlink_latency.sum=0 1587117476000000000
+ceph,collection=AsyncMessenger::Worker-2,host=stefanmds1,id=stefanmds1,type=mds msgr_active_connections=84,msgr_created_connections=68511,msgr_recv_bytes=238078,msgr_recv_messages=2655,msgr_running_fast_dispatch_time=0.004247777,msgr_running_recv_time=25.369012545,msgr_running_send_time=3.743427461,msgr_running_total_time=130.277111559,msgr_send_bytes=172767043,msgr_send_messages=18172 1587117476000000000
+ceph,collection=mds_log,host=stefanmds1,id=stefanmds1,type=mds ev=0,evadd=0,evex=0,evexd=0,evexg=0,evtrm=0,expos=4194304,jlat.avgcount=0,jlat.avgtime=0,jlat.sum=0,rdpos=4194304,replayed=1,seg=1,segadd=0,segex=0,segexd=0,segexg=0,segtrm=0,wrpos=0 1587117476000000000
+ceph,collection=AsyncMessenger::Worker-0,host=stefanmds1,id=stefanmds1,type=mds msgr_active_connections=595,msgr_created_connections=943825,msgr_recv_bytes=78618003,msgr_recv_messages=914080,msgr_running_fast_dispatch_time=0.001544386,msgr_running_recv_time=459.627068807,msgr_running_send_time=469.337032316,msgr_running_total_time=2744.084305898,msgr_send_bytes=61684163658,msgr_send_messages=1858008 1587117476000000000
+ceph,collection=throttle-msgr_dispatch_throttler-mds,host=stefanmds1,id=stefanmds1,type=mds get=1216458,get_or_fail_fail=0,get_or_fail_success=1216458,get_started=0,get_sum=51976882,max=104857600,put=1216458,put_sum=51976882,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117476000000000
+ceph,collection=AsyncMessenger::Worker-1,host=stefanmds1,id=stefanmds1,type=mds msgr_active_connections=226,msgr_created_connections=42679,msgr_recv_bytes=63140151,msgr_recv_messages=299727,msgr_running_fast_dispatch_time=26.316138629,msgr_running_recv_time=36.969916165,msgr_running_send_time=70.457421128,msgr_running_total_time=226.230019936,msgr_send_bytes=193154464,msgr_send_messages=310481 1587117476000000000
+ceph,collection=mds,host=stefanmds1,id=stefanmds1,type=mds caps=0,dir_commit=0,dir_fetch=12,dir_merge=0,dir_split=0,exported=0,exported_inodes=0,forward=0,imported=0,imported_inodes=0,inode_max=2147483647,inodes=10,inodes_bottom=3,inodes_expired=0,inodes_pin_tail=0,inodes_pinned=10,inodes_top=7,inodes_with_caps=0,load_cent=0,openino_backtrace_fetch=0,openino_dir_fetch=0,openino_peer_discover=0,q=0,reply=0,reply_latency.avgcount=0,reply_latency.avgtime=0,reply_latency.sum=0,request=0,subtrees=2,traverse=0,traverse_dir_fetch=0,traverse_discover=0,traverse_forward=0,traverse_hit=0,traverse_lock=0,traverse_remote_ino=0 1587117476000000000
+ceph,collection=purge_queue,host=stefanmds1,id=stefanmds1,type=mds pq_executed=0,pq_executing=0,pq_executing_ops=0 1587117476000000000
+ceph,collection=throttle-write_buf_throttle,host=stefanmds1,id=stefanmds1,type=mds get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=3758096384,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117476000000000
+ceph,collection=throttle-write_buf_throttle-0x5624e9377f40,host=stefanmds1,id=stefanmds1,type=mds get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=3758096384,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117476000000000
+ceph,collection=mds_cache,host=stefanmds1,id=stefanmds1,type=mds ireq_enqueue_scrub=0,ireq_exportdir=0,ireq_flush=0,ireq_fragmentdir=0,ireq_fragstats=0,ireq_inodestats=0,num_recovering_enqueued=0,num_recovering_prioritized=0,num_recovering_processing=0,num_strays=0,num_strays_delayed=0,num_strays_enqueuing=0,recovery_completed=0,recovery_started=0,strays_created=0,strays_enqueued=0,strays_migrated=0,strays_reintegrated=0 1587117476000000000
+ceph,collection=throttle-objecter_bytes,host=stefanmds1,id=stefanmds1,type=mds get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=104857600,put=16,put_sum=1016,take=33,take_sum=1016,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117476000000000
+ceph,collection=throttle-objecter_ops,host=stefanmds1,id=stefanmds1,type=mds get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=1024,put=33,put_sum=33,take=33,take_sum=33,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117476000000000
+ceph,collection=mds_mem,host=stefanmds1,id=stefanmds1,type=mds cap=0,cap+=0,cap-=0,dir=12,dir+=12,dir-=0,dn=10,dn+=10,dn-=0,heap=322284,ino=13,ino+=13,ino-=0,rss=76032 1587117476000000000
+ceph,collection=finisher-PurgeQueue,host=stefanmds1,id=stefanmds1,type=mds complete_latency.avgcount=4,complete_latency.avgtime=0.000176985,complete_latency.sum=0.000707941,queue_len=0 1587117476000000000
+ceph,collection=cct,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw total_workers=0,unhealthy_workers=0 1587117156000000000
+ceph,collection=throttle-objecter_bytes,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw get=791732,get_or_fail_fail=0,get_or_fail_success=791732,get_started=0,get_sum=0,max=104857600,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117156000000000
+ceph,collection=rgw,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw cache_hit=0,cache_miss=791706,failed_req=0,get=0,get_b=0,get_initial_lat.avgcount=0,get_initial_lat.avgtime=0,get_initial_lat.sum=0,keystone_token_cache_hit=0,keystone_token_cache_miss=0,pubsub_event_lost=0,pubsub_event_triggered=0,pubsub_events=0,pubsub_push_failed=0,pubsub_push_ok=0,pubsub_push_pending=0,pubsub_store_fail=0,pubsub_store_ok=0,put=0,put_b=0,put_initial_lat.avgcount=0,put_initial_lat.avgtime=0,put_initial_lat.sum=0,qactive=0,qlen=0,req=791705 1587117156000000000
+ceph,collection=throttle-msgr_dispatch_throttler-radosclient,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw get=2697988,get_or_fail_fail=0,get_or_fail_success=2697988,get_started=0,get_sum=444563051,max=104857600,put=2697988,put_sum=444563051,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117156000000000
+ceph,collection=finisher-radosclient,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw complete_latency.avgcount=2,complete_latency.avgtime=0.003530161,complete_latency.sum=0.007060323,queue_len=0 1587117156000000000
+ceph,collection=throttle-rgw_async_rados_ops,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw get=0,get_or_fail_fail=0,get_or_fail_success=0,get_started=0,get_sum=0,max=64,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117156000000000
+ceph,collection=throttle-objecter_ops,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw get=791732,get_or_fail_fail=0,get_or_fail_success=791732,get_started=0,get_sum=791732,max=24576,put=791732,put_sum=791732,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117156000000000
+ceph,collection=throttle-objecter_bytes-0x5598969981c0,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw get=1637900,get_or_fail_fail=0,get_or_fail_success=1637900,get_started=0,get_sum=0,max=104857600,put=0,put_sum=0,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117156000000000
+ceph,collection=objecter,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw command_active=0,command_resend=0,command_send=0,linger_active=8,linger_ping=1905736,linger_resend=4,linger_send=13,map_epoch=203,map_full=0,map_inc=17,omap_del=0,omap_rd=0,omap_wr=0,op=2697488,op_active=0,op_laggy=0,op_pg=0,op_r=791730,op_reply=2697476,op_resend=1,op_rmw=0,op_send=2697490,op_send_bytes=362,op_w=1905758,osd_laggy=5,osd_session_close=59558,osd_session_open=59566,osd_sessions=8,osdop_append=0,osdop_call=1,osdop_clonerange=0,osdop_cmpxattr=0,osdop_create=8,osdop_delete=0,osdop_getxattr=0,osdop_mapext=0,osdop_notify=0,osdop_other=791714,osdop_pgls=0,osdop_pgls_filter=0,osdop_read=16,osdop_resetxattrs=0,osdop_rmxattr=0,osdop_setxattr=0,osdop_sparse_read=0,osdop_src_cmpxattr=0,osdop_stat=791706,osdop_truncate=0,osdop_watch=1905750,osdop_write=0,osdop_writefull=0,osdop_writesame=0,osdop_zero=0,poolop_active=0,poolop_resend=0,poolop_send=0,poolstat_active=0,poolstat_resend=0,poolstat_send=0,statfs_active=0,statfs_resend=0,statfs_send=0 1587117156000000000
+ceph,collection=AsyncMessenger::Worker-2,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw msgr_active_connections=11,msgr_created_connections=59839,msgr_recv_bytes=342697143,msgr_recv_messages=1441603,msgr_running_fast_dispatch_time=161.807937536,msgr_running_recv_time=118.174064257,msgr_running_send_time=207.679154333,msgr_running_total_time=698.527662129,msgr_send_bytes=530785909,msgr_send_messages=1679950 1587117156000000000
+ceph,collection=mempool,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw bloom_filter_bytes=0,bloom_filter_items=0,bluefs_bytes=0,bluefs_items=0,bluestore_alloc_bytes=0,bluestore_alloc_items=0,bluestore_cache_data_bytes=0,bluestore_cache_data_items=0,bluestore_cache_onode_bytes=0,bluestore_cache_onode_items=0,bluestore_cache_other_bytes=0,bluestore_cache_other_items=0,bluestore_fsck_bytes=0,bluestore_fsck_items=0,bluestore_txc_bytes=0,bluestore_txc_items=0,bluestore_writing_bytes=0,bluestore_writing_deferred_bytes=0,bluestore_writing_deferred_items=0,bluestore_writing_items=0,buffer_anon_bytes=225471,buffer_anon_items=163,buffer_meta_bytes=0,buffer_meta_items=0,mds_co_bytes=0,mds_co_items=0,osd_bytes=0,osd_items=0,osd_mapbl_bytes=0,osd_mapbl_items=0,osd_pglog_bytes=0,osd_pglog_items=0,osdmap_bytes=33904,osdmap_items=278,osdmap_mapping_bytes=0,osdmap_mapping_items=0,pgmap_bytes=0,pgmap_items=0,unittest_1_bytes=0,unittest_1_items=0,unittest_2_bytes=0,unittest_2_items=0 1587117156000000000
+ceph,collection=throttle-msgr_dispatch_throttler-radosclient-0x559896998120,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw get=1652935,get_or_fail_fail=0,get_or_fail_success=1652935,get_started=0,get_sum=276333029,max=104857600,put=1652935,put_sum=276333029,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117156000000000
+ceph,collection=AsyncMessenger::Worker-1,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw msgr_active_connections=17,msgr_created_connections=84859,msgr_recv_bytes=211170759,msgr_recv_messages=922646,msgr_running_fast_dispatch_time=31.487443762,msgr_running_recv_time=83.190789333,msgr_running_send_time=174.670510496,msgr_running_total_time=484.22086275,msgr_send_bytes=1322113179,msgr_send_messages=1636839 1587117156000000000
+ceph,collection=finisher-radosclient-0x559896998080,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw complete_latency.avgcount=0,complete_latency.avgtime=0,complete_latency.sum=0,queue_len=0 1587117156000000000
+ceph,collection=throttle-objecter_ops-0x559896997b80,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw get=1637900,get_or_fail_fail=0,get_or_fail_success=1637900,get_started=0,get_sum=1637900,max=24576,put=1637900,put_sum=1637900,take=0,take_sum=0,val=0,wait.avgcount=0,wait.avgtime=0,wait.sum=0 1587117156000000000
+ceph,collection=AsyncMessenger::Worker-0,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw msgr_active_connections=18,msgr_created_connections=74757,msgr_recv_bytes=489001094,msgr_recv_messages=1986686,msgr_running_fast_dispatch_time=168.60950961,msgr_running_recv_time=142.903031533,msgr_running_send_time=267.911165712,msgr_running_total_time=824.885614951,msgr_send_bytes=707973504,msgr_send_messages=2463727 1587117156000000000
+ceph,collection=objecter-0x559896997720,host=stefanrgw1,id=rgw.stefanrgw1.4219.94113851143184,type=rgw command_active=0,command_resend=0,command_send=0,linger_active=0,linger_ping=0,linger_resend=0,linger_send=0,map_epoch=203,map_full=0,map_inc=8,omap_del=0,omap_rd=0,omap_wr=0,op=1637998,op_active=0,op_laggy=0,op_pg=0,op_r=1062803,op_reply=1637998,op_resend=15,op_rmw=0,op_send=1638013,op_send_bytes=63321099,op_w=575195,osd_laggy=0,osd_session_close=125555,osd_session_open=125563,osd_sessions=8,osdop_append=0,osdop_call=1637886,osdop_clonerange=0,osdop_cmpxattr=0,osdop_create=0,osdop_delete=0,osdop_getxattr=0,osdop_mapext=0,osdop_notify=0,osdop_other=112,osdop_pgls=0,osdop_pgls_filter=0,osdop_read=0,osdop_resetxattrs=0,osdop_rmxattr=0,osdop_setxattr=0,osdop_sparse_read=0,osdop_src_cmpxattr=0,osdop_stat=0,osdop_truncate=0,osdop_watch=0,osdop_write=0,osdop_writefull=0,osdop_writesame=0,osdop_zero=0,poolop_active=0,poolop_resend=0,poolop_send=0,poolstat_active=0,poolstat_resend=0,poolstat_send=0,statfs_active=0,statfs_resend=0,statfs_send=0 1587117156000000000
+```
diff --git a/content/telegraf/v1/input-plugins/cgroup/_index.md b/content/telegraf/v1/input-plugins/cgroup/_index.md
new file mode 100644
index 000000000..3885b8610
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/cgroup/_index.md
@@ -0,0 +1,100 @@
+---
+description: "Telegraf plugin for collecting metrics from CGroup"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: CGroup
+    identifier: input-cgroup
+tags: [CGroup, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# CGroup Input Plugin
+
+This input plugin will capture specific statistics per cgroup.
+
+If you have a large number of cgroups, consider restricting `paths` to the
+set of cgroups you really want to monitor, to avoid cardinality issues.
+
+The following file formats are supported:
+
+* Single value
+
+```text
+VAL\n
+```
+
+* New line separated values
+
+```text
+VAL0\n
+VAL1\n
+```
+
+* Space separated values
+
+```text
+VAL0 VAL1 ...\n
+```
+
+* Space separated keys and value, separated by new line
+
+```text
+KEY0 ... VAL0\n
+KEY1 ... VAL1\n
+```
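+
+As an illustration (values here are made up), a `memory.stat` file using the
+last format above might contain:
+
+```text
+cache 104857600
+rss 4096
+```
+
+and yields one numeric field per key.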
+
+## Metrics
+
+All measurements have the `path` tag.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering,
+among other things. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins)
+for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read specific statistics per cgroup
+# This plugin ONLY supports Linux
+[[inputs.cgroup]]
+  ## Directories in which to look for files, globs are supported.
+  ## Consider restricting paths to the set of cgroups you really
+  ## want to monitor if you have a large number of cgroups, to avoid
+  ## any cardinality issues.
+  # paths = [
+  #   "/sys/fs/cgroup/memory",
+  #   "/sys/fs/cgroup/memory/child1",
+  #   "/sys/fs/cgroup/memory/child2/*",
+  # ]
+  ## cgroup stat fields, as file names, globs are supported.
+  ## these file names are appended to each path from above.
+  # files = ["memory.*usage*", "memory.limit_in_bytes"]
+```
+
+## Example Configurations
+
+```toml
+# [[inputs.cgroup]]
+  # paths = [
+  #   "/sys/fs/cgroup/cpu",              # root cgroup
+  #   "/sys/fs/cgroup/cpu/*",            # all container cgroups
+  #   "/sys/fs/cgroup/cpu/*/*",          # all children cgroups under each container cgroup
+  # ]
+  # files = ["cpuacct.usage", "cpu.cfs_period_us", "cpu.cfs_quota_us"]
+
+# [[inputs.cgroup]]
+  # paths = [
+  #   "/sys/fs/cgroup/unified/*",        # root cgroup
+  # ]
+  # files = ["*"]
+```
+
+## Example Output
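+
+Output lines would resemble the following (path, field names and values are
+illustrative, not taken from a real system):
+
+```text
+cgroup,path=/sys/fs/cgroup/memory,host=localhost memory.limit_in_bytes=9223372036854771712i,memory.usage_in_bytes=3513667584i 1587117476000000000
+```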
diff --git a/content/telegraf/v1/input-plugins/chrony/_index.md b/content/telegraf/v1/input-plugins/chrony/_index.md
new file mode 100644
index 000000000..3b767005d
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/chrony/_index.md
@@ -0,0 +1,96 @@
+---
+description: "Telegraf plugin for collecting metrics from chrony"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: chrony
+    identifier: input-chrony
+tags: [chrony, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# chrony Input Plugin
+
+This plugin queries metrics from a chrony NTP server. For details on the
+meaning of the gathered fields, please check the
+[chronyc manual](https://chrony-project.org/doc/4.4/chronyc.html).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering,
+among other things. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins)
+for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Get standard chrony metrics.
+[[inputs.chrony]]
+  ## Server address of chronyd with address scheme
+  ## If empty or not set, the plugin will mimic the behavior of chronyc and
+  ## check "unixgram:///run/chrony/chronyd.sock", "udp://127.0.0.1:323"
+  ## and "udp://[::1]:323".
+  # server = ""
+
+  ## Timeout for establishing the connection
+  # timeout = "5s"
+
+  ## Try to resolve received addresses to host-names via DNS lookups
+  ## Disabled by default to avoid DNS queries especially for slow DNS servers.
+  # dns_lookup = false
+
+  ## Metrics to query named according to chronyc commands
+  ## Available settings are:
+  ##   activity    -- number of peers online or offline
+  ##   tracking    -- information about system's clock performance
+  ##   serverstats -- chronyd server statistics
+  ##   sources     -- extended information about peers
+  ##   sourcestats -- statistics on peers
+  # metrics = ["tracking"]
+
+  ## Socket group & permissions
+  ## If the user requests collecting metrics via unix socket, then it is created
+  ## with the following group and permissions.
+  # socket_group = "chrony"
+  # socket_perms = "0660"
+```
+
+## Local socket permissions
+
+To use the unix socket, telegraf must be able to talk to it. Please ensure that
+the telegraf user is a member of the `chrony` group or telegraf won't be able to
+use the socket!
+
+The unix socket is required for the `serverstats` metrics. All other metrics
+can be gathered over the UDP connection.
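+
+For example, on a systemd-based host the group membership can be granted like
+this (assuming the default `telegraf` user and the `chrony` group shown in the
+configuration above):
+
+```sh
+sudo usermod -aG chrony telegraf
+sudo systemctl restart telegraf
+```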
+
+## Metrics
+
+- chrony
+  - system_time (float, seconds)
+  - last_offset (float, seconds)
+  - rms_offset (float, seconds)
+  - frequency (float, ppm)
+  - residual_freq (float, ppm)
+  - skew (float, ppm)
+  - root_delay (float, seconds)
+  - root_dispersion (float, seconds)
+  - update_interval (float, seconds)
+
+### Tags
+
+- All measurements have the following tags:
+  - reference_id
+  - stratum
+  - leap_status
+
+## Example Output
+
+```text
+chrony,leap_status=not\ synchronized,reference_id=A29FC87B,stratum=3 frequency=-16.000999450683594,last_offset=0.000012651000361074694,residual_freq=0,rms_offset=0.000025576999178156257,root_delay=0.0016550000291317701,root_dispersion=0.00330700003542006,skew=0.006000000052154064,system_time=0.000020389999917824753,update_interval=507.1999816894531 1706271167571675297
+```
diff --git a/content/telegraf/v1/input-plugins/cisco_telemetry_gnmi/_index.md b/content/telegraf/v1/input-plugins/cisco_telemetry_gnmi/_index.md
new file mode 100644
index 000000000..1b4afad6f
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/cisco_telemetry_gnmi/_index.md
@@ -0,0 +1,29 @@
+---
+description: "Telegraf plugin for collecting metrics from Cisco GNMI Telemetry"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Cisco GNMI Telemetry
+    identifier: input-cisco_telemetry_gnmi
+tags: [Cisco GNMI Telemetry, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Cisco GNMI Telemetry Input Plugin
+
+The Cisco GNMI Telemetry input plugin consumes telemetry data based on the
+gNMI specification. This gRPC-based protocol can utilize TLS for
+authentication and encryption. The plugin supports gNMI telemetry as produced
+by Cisco IOS XR (64-bit) version 6.5.1 and later.
+
+> [!NOTE]
+> The `inputs.cisco_telemetry_gnmi` plugin was renamed to
+> [`gnmi`](/telegraf/v1/input-plugins/gnmi/) in v1.15.0 to better reflect its
+> general support for gNMI devices.
+
+**introduced in:** Telegraf v1.11.0
+**deprecated in:** Telegraf v1.15.0
+**removal in:** Telegraf v1.35.0
+**tags:** networking
+**supported OS:** all
+
diff --git a/content/telegraf/v1/input-plugins/cisco_telemetry_mdt/_index.md b/content/telegraf/v1/input-plugins/cisco_telemetry_mdt/_index.md
new file mode 100644
index 000000000..be7844a98
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/cisco_telemetry_mdt/_index.md
@@ -0,0 +1,174 @@
+---
+description: "Telegraf plugin for collecting metrics from Cisco Model-Driven Telemetry (MDT)"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Cisco Model-Driven Telemetry (MDT)
+    identifier: input-cisco_telemetry_mdt
+tags: [Cisco Model-Driven Telemetry (MDT), "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Cisco Model-Driven Telemetry (MDT) Input Plugin
+
+The Cisco model-driven telemetry (MDT) input plugin consumes telemetry data
+from Cisco IOS XR, IOS XE and NX-OS platforms. It supports TCP and gRPC
+dialout transports. The gRPC-based transport can utilize TLS for
+authentication and encryption. Telemetry data is expected to be GPB-KV
+(self-describing-gpb) encoded.
+
+The GRPC dialout transport is supported on various IOS XR (64-bit) 6.1.x and
+later, IOS XE 16.10 and later, as well as NX-OS 7.x and later platforms.
+
+The TCP dialout transport is supported on IOS XR (32-bit and 64-bit) 6.1.x and
+later.
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics at the
+configured interval. Service plugins start a service that listens and waits
+for metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering,
+among other things. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins)
+for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Cisco model-driven telemetry (MDT) input plugin for IOS XR, IOS XE and NX-OS platforms
+[[inputs.cisco_telemetry_mdt]]
+ ## Telemetry transport can be "tcp" or "grpc".  TLS is only supported when
+ ## using the grpc transport.
+ transport = "grpc"
+
+ ## Address and port to host telemetry listener
+ service_address = ":57000"
+
+ ## Maximum gRPC message size; the default is 4MB and may be increased. The
+ ## value is stored as a uint32 and is limited to 4294967295.
+ max_msg_size = 4000000
+
+ ## Enable TLS; grpc transport only.
+ # tls_cert = "/etc/telegraf/cert.pem"
+ # tls_key = "/etc/telegraf/key.pem"
+
+ ## Enable TLS client authentication and define allowed CA certificates; grpc
+ ##  transport only.
+ # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]
+
+ ## Define (for certain nested telemetry measurements with embedded tags) which fields are tags
+ # embedded_tags = ["Cisco-IOS-XR-qos-ma-oper:qos/interface-table/interface/input/service-policy-names/service-policy-instance/statistics/class-stats/class-name"]
+
+ ## Include the delete field in every telemetry message.
+ # include_delete_field = false
+
+ ## Specify custom name for incoming MDT source field.
+ # source_field_name = "mdt_source"
+
+ ## Define aliases to map telemetry encoding paths to simple measurement names
+ [inputs.cisco_telemetry_mdt.aliases]
+   ifstats = "ietf-interfaces:interfaces-state/interface/statistics"
+ ## Define property transformations; refer to the README and https://pubhub.devnetcloud.com/media/dme-docs-9-3-3/docs/appendix/ for model details.
+ [inputs.cisco_telemetry_mdt.dmes]
+#    Global Property Xformation.
+#    prop1 = "uint64 to int"
+#    prop2 = "uint64 to string"
+#    prop3 = "string to uint64"
+#    prop4 = "string to int64"
+#    prop5 = "string to float64"
+#    auto-prop-xfrom = "auto-float-xfrom" #Xform any property which is string, and has float number to type float64
+#    Per-path property transformation: "Name" is the telemetry path configured under sensor-group ("WORD         Distinguished Name")
+#    Per-path configuration is preferable, as it avoids type collisions between properties.
+#    dnpath = '{"Name": "show ip route summary","prop": [{"Key": "routes","Value": "string"}, {"Key": "best-paths","Value": "string"}]}'
+#    dnpath2 = '{"Name": "show processes cpu","prop": [{"Key": "kernel_percent","Value": "float"}, {"Key": "idle_percent","Value": "float"}, {"Key": "process","Value": "string"}, {"Key": "user_percent","Value": "float"}, {"Key": "onesec","Value": "float"}]}'
+#    dnpath3 = '{"Name": "show processes memory physical","prop": [{"Key": "processname","Value": "string"}]}'
+
+ ## Additional GRPC connection settings.
+ [inputs.cisco_telemetry_mdt.grpc_enforcement_policy]
+  ## GRPC permit keepalives without calls, set to true if your clients are
+  ## sending pings without calls in-flight. This can sometimes happen on IOS-XE
+  ## devices where the GRPC connection is left open but subscriptions have been
+  ## removed, and adding subsequent subscriptions does not keep a stable session.
+  # permit_keepalive_without_calls = false
+
+  ## GRPC minimum timeout between successive pings, decreasing this value may
+  ## help if this plugin is closing connections with ENHANCE_YOUR_CALM (too_many_pings).
+  # keepalive_minimum_time = "5m"
+```
+
+## Metrics
+
+Metrics are named by the encoding path that generated the data, or by the alias
+if the `inputs.cisco_telemetry_mdt.aliases` config section is defined.
+Metric fields are dependent on the device type and path.
+
+Tags included in all metrics:
+
+- source
+- path
+- subscription
+
+Additional tags (such as interface_name) may be included depending on the path.
+
+## Example Output
+
+```text
+ifstats,path=ietf-interfaces:interfaces-state/interface/statistics,host=linux,name=GigabitEthernet2,source=csr1kv,subscription=101 in-unicast-pkts=27i,in-multicast-pkts=0i,discontinuity-time="2019-05-23T07:40:23.000362+00:00",in-octets=5233i,in-errors=0i,out-multicast-pkts=0i,out-discards=0i,in-broadcast-pkts=0i,in-discards=0i,in-unknown-protos=0i,out-unicast-pkts=0i,out-broadcast-pkts=0i,out-octets=0i,out-errors=0i 1559150462624000000
+ifstats,path=ietf-interfaces:interfaces-state/interface/statistics,host=linux,name=GigabitEthernet1,source=csr1kv,subscription=101 in-octets=3394770806i,in-broadcast-pkts=0i,in-multicast-pkts=0i,out-broadcast-pkts=0i,in-unknown-protos=0i,out-octets=350212i,in-unicast-pkts=9477273i,in-discards=0i,out-unicast-pkts=2726i,out-discards=0i,discontinuity-time="2019-05-23T07:40:23.000363+00:00",in-errors=30i,out-multicast-pkts=0i,out-errors=0i 1559150462624000000
+```
+
+### NX-OS Configuration Example
+
+```text
+Requirement      DATA-SOURCE   Configuration
+-----------------------------------------
+Environment      DME           path sys/ch query-condition query-target=subtree&target-subtree-class=eqptPsuSlot,eqptFtSlot,eqptSupCSlot,eqptPsu,eqptFt,eqptSensor,eqptLCSlot
+                 DME           path sys/ch depth 5  (Another configuration option)
+Environment      NXAPI         show environment power
+                 NXAPI         show environment fan
+                 NXAPI         show environment temperature
+Interface Stats  DME           path sys/intf query-condition query-target=subtree&target-subtree-class=rmonIfIn,rmonIfOut,rmonIfHCIn,rmonIfHCOut,rmonEtherStats
+Interface State  DME           path sys/intf depth unbounded query-condition query-target=subtree&target-subtree-class=l1PhysIf,pcAggrIf,l3EncRtdIf,l3LbRtdIf,ethpmPhysIf
+VPC              DME           path sys/vpc query-condition query-target=subtree&target-subtree-class=vpcDom,vpcIf
+Resources cpu    DME           path sys/procsys query-condition query-target=subtree&target-subtree-class=procSystem,procSysCore,procSysCpuSummary,procSysCpu,procIdle,procIrq,procKernel,procNice,procSoftirq,procTotal,procUser,procWait,procSysCpuHistory,procSysLoad
+Resources Mem    DME           path sys/procsys/sysmem/sysmemused
+                               path sys/procsys/sysmem/sysmemusage
+                               path sys/procsys/sysmem/sysmemfree
+Per Process cpu  DME           path sys/proc depth unbounded query-condition rsp-foreign-subtree=ephemeral
+vxlan(svi stats) DME           path sys/bd query-condition query-target=subtree&target-subtree-class=l2VlanStats
+BGP              DME           path sys/bgp query-condition query-target=subtree&target-subtree-class=bgpDom,bgpPeer,bgpPeerAf,bgpDomAf,bgpPeerAfEntry,bgpOperRtctrlL3,bgpOperRttP,bgpOperRttEntry,bgpOperAfCtrl
+mac dynamic      DME           path sys/mac query-condition query-target=subtree&target-subtree-class=l2MacAddressTable
+bfd              DME           path sys/bfd/inst depth unbounded
+lldp             DME           path sys/lldp depth unbounded
+urib             DME           path sys/urib depth unbounded query-condition rsp-foreign-subtree=ephemeral
+u6rib            DME           path sys/u6rib depth unbounded query-condition rsp-foreign-subtree=ephemeral
+multicast flow   DME           path sys/mca/show/flows depth unbounded
+multicast stats  DME           path sys/mca/show/stats depth unbounded
+multicast igmp   NXAPI         show ip igmp groups vrf all
+multicast igmp   NXAPI         show ip igmp interface vrf all
+multicast igmp   NXAPI         show ip igmp snooping
+multicast igmp   NXAPI         show ip igmp snooping groups
+multicast igmp   NXAPI         show ip igmp snooping groups detail
+multicast igmp   NXAPI         show ip igmp snooping groups summary
+multicast igmp   NXAPI         show ip igmp snooping mrouter
+multicast igmp   NXAPI         show ip igmp snooping statistics
+multicast pim    NXAPI         show ip pim interface vrf all
+multicast pim    NXAPI         show ip pim neighbor vrf all
+multicast pim    NXAPI         show ip pim route vrf all
+multicast pim    NXAPI         show ip pim rp vrf all
+multicast pim    NXAPI         show ip pim statistics vrf all
+multicast pim    NXAPI         show ip pim vrf all
+microburst       NATIVE        path microburst
+```
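+
+For reference, a minimal NX-OS dial-out telemetry configuration pairing one of
+the DME paths above with this plugin's listener might look like the following
+(the address, group numbers and sample interval are placeholder assumptions):
+
+```text
+telemetry
+  destination-group 1
+    ip address 192.0.2.10 port 57000 protocol gRPC encoding GPB
+  sensor-group 1
+    data-source DME
+    path sys/intf query-condition query-target=subtree&target-subtree-class=rmonIfIn,rmonIfOut
+  subscription 1
+    dst-grp 1
+    snsr-grp 1 sample-interval 30000
+```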
diff --git a/content/telegraf/v1/input-plugins/clickhouse/_index.md b/content/telegraf/v1/input-plugins/clickhouse/_index.md
new file mode 100644
index 000000000..a5903787e
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/clickhouse/_index.md
@@ -0,0 +1,248 @@
+---
+description: "Telegraf plugin for collecting metrics from ClickHouse"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: ClickHouse
+    identifier: input-clickhouse
+tags: [ClickHouse, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# ClickHouse Input Plugin
+
+This plugin gathers statistics from a
+[ClickHouse](https://github.com/ClickHouse/ClickHouse) server.
+
+Users on ClickHouse Cloud will not see the ZooKeeper metrics, as they may not
+have permission to query those tables.
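+
+For ClickHouse Cloud instances, the `variant` option shown in the
+configuration below can be set to skip those queries, for example (the
+hostname is a placeholder):
+
+```toml
+[[inputs.clickhouse]]
+  username = "default"
+  servers = ["https://example.clickhouse.cloud:8443"]
+  variant = "managed"
+```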
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering,
+among other things. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins)
+for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from one or many ClickHouse servers
+[[inputs.clickhouse]]
+  ## Username for authorization on ClickHouse server
+  username = "default"
+
+  ## Password for authorization on ClickHouse server
+  # password = ""
+
+  ## HTTP(s) timeout while getting metrics values
+  ## The timeout includes connection time, any redirects, and reading the
+  ## response body.
+  # timeout = 5s
+
+  ## List of servers for metrics scraping
+  ## Metrics are scraped via the ClickHouse HTTP(s) interface
+  ## https://clickhouse.tech/docs/en/interfaces/http/
+  servers = ["http://127.0.0.1:8123"]
+
+  ## Server Variant
+  ## When set to "managed", some queries are excluded from being run. This is
+  ## useful for instances hosted in ClickHouse Cloud where certain tables are
+  ## not available.
+  # variant = "self-hosted"
+
+  ## If "auto_discovery" is "true", the plugin tries to connect to all
+  ## servers available in the cluster using the same credentials given in
+  ## the "username" and "password" parameters, and reads the server hostname
+  ## list from the "system.clusters" table. See
+  ## - https://clickhouse.tech/docs/en/operations/system_tables/#system-clusters
+  ## - https://clickhouse.tech/docs/en/operations/server_settings/settings/#server_settings_remote_servers
+  ## - https://clickhouse.tech/docs/en/operations/table_engines/distributed/
+  ## - https://clickhouse.tech/docs/en/operations/table_engines/replication/#creating-replicated-tables
+  # auto_discovery = true
+
+  ## Filter cluster names in "system.clusters" when "auto_discovery" is
+  ## "true". When this filter is present, a "WHERE cluster IN (...)" filter
+  ## is applied. Use only full cluster names here; regexp and glob filters
+  ## are not allowed. For example, given the following
+  ## "/etc/clickhouse-server/config.d/remote.xml":
+  ## <yandex>
+  ##  <remote_servers>
+  ##    <my-own-cluster>
+  ##        <shard>
+  ##          <replica><host>clickhouse-ru-1.local</host><port>9000</port></replica>
+  ##          <replica><host>clickhouse-ru-2.local</host><port>9000</port></replica>
+  ##        </shard>
+  ##        <shard>
+  ##          <replica><host>clickhouse-eu-1.local</host><port>9000</port></replica>
+  ##          <replica><host>clickhouse-eu-2.local</host><port>9000</port></replica>
+  ##        </shard>
+  ##    </my-own-cluster>
+  ##  </remote_servers>
+  ##
+  ## </yandex>
+  ##
+  ## example: cluster_include = ["my-own-cluster"]
+  # cluster_include = []
+
+  ## Filter cluster names in "system.clusters" when "auto_discovery" is
+  ## "true". When this filter is present, a "WHERE cluster NOT IN (...)"
+  ## filter is applied.
+  ##    example: cluster_exclude = ["my-internal-not-discovered-cluster"]
+  # cluster_exclude = []
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
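+Putting the auto-discovery options together, a sketch of a configuration that
+discovers the remaining members of the `my-own-cluster` cluster from the XML
+example above might look like this (hostnames are placeholders):
+
+```toml
+[[inputs.clickhouse]]
+  username = "default"
+  servers = ["http://clickhouse-ru-1.local:8123"]
+
+  ## Read the remaining cluster hostnames from "system.clusters"
+  auto_discovery = true
+  cluster_include = ["my-own-cluster"]
+```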
+
+## Metrics
+
+- clickhouse_events (see [system.events](https://clickhouse.tech/docs/en/operations/system-tables/events/) for details)
+  - tags:
+    - source (ClickHouse server hostname)
+    - cluster (Name of the cluster [optional])
+    - shard_num (Shard number in the cluster [optional])
+  - fields:
+    - all rows from [system.events](https://clickhouse.tech/docs/en/operations/system-tables/events/)
+
+- clickhouse_metrics (see [system.metrics](https://clickhouse.tech/docs/en/operations/system-tables/metrics/) for details)
+  - tags:
+    - source (ClickHouse server hostname)
+    - cluster (Name of the cluster [optional])
+    - shard_num (Shard number in the cluster [optional])
+  - fields:
+    - all rows from [system.metrics](https://clickhouse.tech/docs/en/operations/system-tables/metrics/)
+
+- clickhouse_asynchronous_metrics (see [system.asynchronous_metrics]
+  for details)
+  - tags:
+    - source (ClickHouse server hostname)
+    - cluster (Name of the cluster [optional])
+    - shard_num (Shard number in the cluster [optional])
+  - fields:
+    - all rows from [system.asynchronous_metrics]
+
+- clickhouse_tables
+  - tags:
+    - source (ClickHouse server hostname)
+    - table
+    - database
+    - cluster (Name of the cluster [optional])
+    - shard_num (Shard number in the cluster [optional])
+  - fields:
+    - bytes
+    - parts
+    - rows
+
+- clickhouse_zookeeper (see [system.zookeeper](https://clickhouse.tech/docs/en/operations/system-tables/zookeeper/) for details)
+  - tags:
+    - source (ClickHouse server hostname)
+    - cluster (Name of the cluster [optional])
+    - shard_num (Shard number in the cluster [optional])
+  - fields:
+    - root_nodes (count of nodes where path = `/`)
+
+- clickhouse_replication_queue (see [system.replication_queue] for details)
+  - tags:
+    - source (ClickHouse server hostname)
+    - cluster (Name of the cluster [optional])
+    - shard_num (Shard number in the cluster [optional])
+  - fields:
+    - too_many_tries_replicas (count of replicas which have `num_tries > 1`)
+
+- clickhouse_detached_parts (see [system.detached_parts] for details)
+  - tags:
+    - source (ClickHouse server hostname)
+    - cluster (Name of the cluster [optional])
+    - shard_num (Shard number in the cluster [optional])
+  - fields:
+    - detached_parts (total detached parts for all tables and databases
+      from [system.detached_parts])
+
+- clickhouse_dictionaries (see [system.dictionaries](https://clickhouse.tech/docs/en/operations/system-tables/dictionaries/) for details)
+  - tags:
+    - source (ClickHouse server hostname)
+    - cluster (Name of the cluster [optional])
+    - shard_num (Shard number in the cluster [optional])
+    - dict_origin (the XML filename when the dictionary is created from
+      *_dictionary.xml, or database.table when created from DDL)
+  - fields:
+    - is_loaded (1 when the dictionary data loaded successfully, 0 when
+      loading failed)
+    - bytes_allocated (bytes allocated in RAM after the dictionary is loaded)
+
+- clickhouse_mutations (see [system.mutations](https://clickhouse.tech/docs/en/operations/system-tables/mutations/) for details)
+  - tags:
+    - source (ClickHouse server hostname)
+    - cluster (Name of the cluster [optional])
+    - shard_num (Shard number in the cluster [optional])
+  - fields:
+    - running - gauge showing the number of currently incomplete mutations
+    - failed - counter showing the total number of failed mutations since
+      the clickhouse-server first started
+    - completed - counter showing the total number of successfully finished
+      mutations since the clickhouse-server first started
+
+- clickhouse_disks (see [system.disks](https://clickhouse.tech/docs/en/operations/system-tables/disks/) for details)
+  - tags:
+    - source (ClickHouse server hostname)
+    - cluster (Name of the cluster [optional])
+    - shard_num (Shard number in the cluster [optional])
+    - name (disk name in storage configuration)
+    - path (path to disk)
+  - fields:
+    - free_space_percent - 0-100, gauge showing current free disk space
+      bytes as a percentage of total disk space bytes
+    - keep_free_space_percent - 0-100, gauge showing the required reserved
+      ("keep free") disk space bytes as a percentage of total disk space bytes
+
+- clickhouse_processes (see [system.processes](https://clickhouse.tech/docs/en/operations/system-tables/processes/) for details)
+  - tags:
+    - source (ClickHouse server hostname)
+    - cluster (Name of the cluster [optional])
+    - shard_num (Shard number in the cluster [optional])
+  - fields:
+    - percentile_50 - float gauge showing the 50th percentile (quantile 0.5)
+      of the `elapsed` field of running processes
+    - percentile_90 - float gauge showing the 90th percentile (quantile 0.9)
+      of the `elapsed` field of running processes
+    - longest_running - float gauge showing the maximum value of the
+      `elapsed` field of running processes
+
+- clickhouse_text_log (see [system.text_log] for details)
+  - tags:
+    - source (ClickHouse server hostname)
+    - cluster (Name of the cluster [optional])
+    - shard_num (Shard number in the cluster [optional])
+    - level (message level; only messages with level Notice or more severe
+      are collected)
+  - fields:
+    - messages_last_10_min - gauge showing how many messages were collected
+      in the last 10 minutes
+
+## Example Output
+
+```text
+clickhouse_events,cluster=test_cluster_two_shards_localhost,host=kshvakov,source=localhost,shard_num=1 read_compressed_bytes=212i,arena_alloc_chunks=35i,function_execute=85i,merge_tree_data_writer_rows=3i,rw_lock_acquired_read_locks=421i,file_open=46i,io_buffer_alloc_bytes=86451985i,inserted_bytes=196i,regexp_created=3i,real_time_microseconds=116832i,query=23i,network_receive_elapsed_microseconds=268i,merge_tree_data_writer_compressed_bytes=1080i,arena_alloc_bytes=212992i,disk_write_elapsed_microseconds=556i,inserted_rows=3i,compressed_read_buffer_bytes=81i,read_buffer_from_file_descriptor_read_bytes=148i,write_buffer_from_file_descriptor_write=47i,merge_tree_data_writer_blocks=3i,soft_page_faults=896i,hard_page_faults=7i,select_query=21i,merge_tree_data_writer_uncompressed_bytes=196i,merge_tree_data_writer_blocks_already_sorted=3i,user_time_microseconds=40196i,compressed_read_buffer_blocks=5i,write_buffer_from_file_descriptor_write_bytes=3246i,io_buffer_allocs=296i,created_write_buffer_ordinary=12i,disk_read_elapsed_microseconds=59347044i,network_send_elapsed_microseconds=1538i,context_lock=1040i,insert_query=1i,system_time_microseconds=14582i,read_buffer_from_file_descriptor_read=3i 1569421000000000000
+clickhouse_asynchronous_metrics,cluster=test_cluster_two_shards_localhost,host=kshvakov,source=localhost,shard_num=1 jemalloc.metadata_thp=0i,replicas_max_relative_delay=0i,jemalloc.mapped=1803177984i,jemalloc.allocated=1724839256i,jemalloc.background_thread.run_interval=0i,jemalloc.background_thread.num_threads=0i,uncompressed_cache_cells=0i,replicas_max_absolute_delay=0i,mark_cache_bytes=0i,compiled_expression_cache_count=0i,replicas_sum_queue_size=0i,number_of_tables=35i,replicas_max_merges_in_queue=0i,replicas_max_inserts_in_queue=0i,replicas_sum_merges_in_queue=0i,replicas_max_queue_size=0i,mark_cache_files=0i,jemalloc.background_thread.num_runs=0i,jemalloc.active=1726210048i,uptime=158i,jemalloc.retained=380481536i,replicas_sum_inserts_in_queue=0i,uncompressed_cache_bytes=0i,number_of_databases=2i,jemalloc.metadata=9207704i,max_part_count_for_partition=1i,jemalloc.resident=1742442496i 1569421000000000000
+clickhouse_metrics,cluster=test_cluster_two_shards_localhost,host=kshvakov,source=localhost,shard_num=1 replicated_send=0i,write=0i,ephemeral_node=0i,zoo_keeper_request=0i,distributed_files_to_insert=0i,replicated_fetch=0i,background_schedule_pool_task=0i,interserver_connection=0i,leader_replica=0i,delayed_inserts=0i,global_thread_active=41i,merge=0i,readonly_replica=0i,memory_tracking_in_background_schedule_pool=0i,memory_tracking_for_merges=0i,zoo_keeper_session=0i,context_lock_wait=0i,storage_buffer_bytes=0i,background_pool_task=0i,send_external_tables=0i,zoo_keeper_watch=0i,part_mutation=0i,disk_space_reserved_for_merge=0i,distributed_send=0i,version_integer=19014003i,local_thread=0i,replicated_checks=0i,memory_tracking=0i,memory_tracking_in_background_processing_pool=0i,leader_election=0i,revision=54425i,open_file_for_read=0i,open_file_for_write=0i,storage_buffer_rows=0i,rw_lock_waiting_readers=0i,rw_lock_waiting_writers=0i,rw_lock_active_writers=0i,local_thread_active=0i,query_preempted=0i,tcp_connection=1i,http_connection=1i,read=2i,query_thread=0i,dict_cache_requests=0i,rw_lock_active_readers=1i,global_thread=43i,query=1i 1569421000000000000
+clickhouse_tables,cluster=test_cluster_two_shards_localhost,database=system,host=kshvakov,source=localhost,shard_num=1,table=trace_log bytes=754i,parts=1i,rows=1i 1569421000000000000
+clickhouse_tables,cluster=test_cluster_two_shards_localhost,database=default,host=kshvakov,source=localhost,shard_num=1,table=example bytes=326i,parts=2i,rows=2i 1569421000000000000
+```
+
+[system.asynchronous_metrics]: https://clickhouse.tech/docs/en/operations/system-tables/asynchronous_metrics/
+[system.detached_parts]: https://clickhouse.tech/docs/en/operations/system-tables/detached_parts/
+[system.dictionaries]: https://clickhouse.tech/docs/en/operations/system-tables/dictionaries/
+[system.disks]: https://clickhouse.tech/docs/en/operations/system-tables/disks/
+[system.events]: https://clickhouse.tech/docs/en/operations/system-tables/events/
+[system.metrics]: https://clickhouse.tech/docs/en/operations/system-tables/metrics/
+[system.mutations]: https://clickhouse.tech/docs/en/operations/system-tables/mutations/
+[system.processes]: https://clickhouse.tech/docs/en/operations/system-tables/processes/
+[system.replication_queue]: https://clickhouse.com/docs/en/operations/system-tables/replication_queue/
+[system.text_log]: https://clickhouse.tech/docs/en/operations/system-tables/text_log/
+[system.zookeeper]: https://clickhouse.tech/docs/en/operations/system-tables/zookeeper/
diff --git a/content/telegraf/v1/input-plugins/cloud_pubsub/_index.md b/content/telegraf/v1/input-plugins/cloud_pubsub/_index.md
new file mode 100644
index 000000000..046d43e4e
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/cloud_pubsub/_index.md
@@ -0,0 +1,143 @@
+---
+description: "Telegraf plugin for collecting metrics from Google Cloud PubSub"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Google Cloud PubSub
+    identifier: input-cloud_pubsub
+tags: [Google Cloud PubSub, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Google Cloud PubSub Input Plugin
+
+The GCP PubSub plugin ingests metrics from [Google Cloud PubSub](https://cloud.google.com/pubsub)
+and creates metrics using one of the supported [input data formats](/telegraf/v1/data_formats/input).
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by
+the interval setting. Service plugins start a service that listens and waits
+for metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin-specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from Google PubSub
+[[inputs.cloud_pubsub]]
+  ## Required. Name of Google Cloud Platform (GCP) Project that owns
+  ## the given PubSub subscription.
+  project = "my-project"
+
+  ## Required. Name of PubSub subscription to ingest metrics from.
+  subscription = "my-subscription"
+
+  ## Required. Data format to consume.
+  ## Each data format has its own unique set of configuration options.
+  ## Read more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  data_format = "influx"
+
+  ## Optional. Filepath for GCP credentials JSON file to authorize calls to
+  ## PubSub APIs. If not set explicitly, Telegraf will attempt to use
+  ## Application Default Credentials, which is preferred.
+  # credentials_file = "path/to/my/creds.json"
+
+  ## Optional. Number of seconds to wait before attempting to restart the
+  ## PubSub subscription receiver after an unexpected error.
+  ## If the streaming pull for a PubSub Subscription fails (receiver),
+  ## the agent attempts to restart receiving messages after this many seconds.
+  # retry_delay_seconds = 5
+
+  ## Optional. Maximum byte length of a message to consume.
+  ## Larger messages are dropped with an error. If less than 0 or unspecified,
+  ## treated as no limit.
+  # max_message_len = 1000000
+
+  ## Max undelivered messages
+  ## This plugin uses tracking metrics, which ensure messages are read to
+  ## outputs before acknowledging them to the original broker to ensure data
+  ## is not lost. This option sets the maximum messages to read from the
+  ## broker that have not been written by an output.
+  ##
+  ## This value needs to be picked with awareness of the agent's
+  ## metric_batch_size value as well. Setting max undelivered messages too
+  ## high can result in a constant stream of data batches to the output,
+  ## while setting it too low may never flush the broker's messages.
+  # max_undelivered_messages = 1000
+
+  ## The following are optional Subscription ReceiveSettings in PubSub.
+  ## Read more about these values:
+  ## https://godoc.org/cloud.google.com/go/pubsub#ReceiveSettings
+
+  ## Optional. Maximum number of seconds for which a PubSub subscription
+  ## should auto-extend the PubSub ACK deadline for each message. If less than
+  ## 0, auto-extension is disabled.
+  # max_extension = 0
+
+  ## Optional. Maximum number of unprocessed messages in PubSub
+  ## (unacknowledged but not yet expired in PubSub).
+  ## A value of 0 is treated as the default PubSub value.
+  ## Negative values will be treated as unlimited.
+  # max_outstanding_messages = 0
+
+  ## Optional. Maximum size in bytes of unprocessed messages in PubSub
+  ## (unacknowledged but not yet expired in PubSub).
+  ## A value of 0 is treated as the default PubSub value.
+  ## Negative values will be treated as unlimited.
+  # max_outstanding_bytes = 0
+
+  ## Optional. Max number of goroutines a PubSub Subscription receiver can spawn
+  ## to pull messages from PubSub concurrently. This limit applies to each
+  ## subscription separately and is treated as the PubSub default if less than
+  ## 1. Note this setting does not limit the number of messages that can be
+  ## processed concurrently (use "max_outstanding_messages" instead).
+  # max_receiver_go_routines = 0
+
+  ## Optional. If true, Telegraf will attempt to base64 decode the
+  ## PubSub message data before parsing. Many GCP services that
+  ## output JSON to Google PubSub base64-encode the JSON payload.
+  # base64_data = false
+
+  ## Content encoding for message payloads, can be set to "gzip" or
+  ## "identity" to apply no encoding.
+  # content_encoding = "identity"
+
+  ## If content encoding is not "identity", sets the maximum allowed size,
+  ## in bytes, for a message payload when it's decompressed. Can be increased
+  ## for larger payloads or reduced to protect against decompression bombs.
+  ## Acceptable units are B, KiB, KB, MiB, MB...
+  # max_decompression_size = "500MB"
+```
+
+### Multiple Subscriptions and Topics
+
+This plugin assumes you have already created a PULL subscription for a given
+PubSub topic. To learn how to do so, see [how to create a subscription](https://cloud.google.com/pubsub/docs/admin#create_a_pull_subscription).
+
+Each plugin instance can listen to one subscription at a time, so you will
+need to define multiple instances of the plugin to pull messages from
+multiple subscriptions or topics.
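+For example, a sketch of two plugin instances pulling from two subscriptions
+(the project and subscription names are placeholders):
+
+```toml
+# One instance per subscription
+[[inputs.cloud_pubsub]]
+  project = "my-project"
+  subscription = "subscription-a"
+  data_format = "influx"
+
+[[inputs.cloud_pubsub]]
+  project = "my-project"
+  subscription = "subscription-b"
+  data_format = "influx"
+```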
+
+## Metrics
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/cloud_pubsub_push/_index.md b/content/telegraf/v1/input-plugins/cloud_pubsub_push/_index.md
new file mode 100644
index 000000000..30c6140ec
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/cloud_pubsub_push/_index.md
@@ -0,0 +1,112 @@
+---
+description: "Telegraf plugin for collecting metrics from Google Cloud PubSub Push"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Google Cloud PubSub Push
+    identifier: input-cloud_pubsub_push
+tags: [Google Cloud PubSub Push, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Google Cloud PubSub Push Input Plugin
+
+The Google Cloud PubSub Push listener is a service input plugin that listens
+for messages sent via an HTTP POST from [Google Cloud PubSub](https://cloud.google.com/pubsub).
+The plugin expects messages in Google's Pub/Sub JSON Format ONLY. The intent
+of the plugin is to allow Telegraf to serve as an endpoint of the
+Google Pub/Sub 'Push' service. Google's PubSub service will **only** send
+over HTTPS/TLS, so this plugin must be behind a valid proxy or must be
+configured to use TLS.
+
+Enable TLS by specifying the file names of a service TLS certificate and key.
+
+To enable mutually authenticated TLS and authorize client connections by
+certificate authority, include a list of allowed CA certificate file names
+in `tls_allowed_cacerts`.
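+A sketch of a TLS-enabled listener combining both options (the certificate
+paths are placeholders):
+
+```toml
+[[inputs.cloud_pubsub_push]]
+  service_address = ":8080"
+  data_format = "influx"
+
+  ## Serve TLS directly instead of sitting behind a TLS-terminating proxy
+  tls_cert = "/etc/telegraf/cert.pem"
+  tls_key = "/etc/telegraf/key.pem"
+
+  ## Optionally require client certificates signed by this CA
+  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]
+```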
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by
+the interval setting. Service plugins start a service that listens and waits
+for metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin-specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Google Cloud Pub/Sub Push HTTP listener
+[[inputs.cloud_pubsub_push]]
+  ## Address and port to host HTTP listener on
+  service_address = ":8080"
+
+  ## Application secret to verify messages originate from Cloud Pub/Sub
+  # token = ""
+
+  ## Path to listen to.
+  # path = "/"
+
+  ## Maximum duration before timing out read of the request
+  # read_timeout = "10s"
+  ## Maximum duration before timing out write of the response. This should be
+  ## set to a value large enough that you can send at least 'metric_batch_size'
+  ## number of messages within the duration.
+  # write_timeout = "10s"
+
+  ## Maximum allowed HTTP request body size in bytes.
+  ## 0 means to use the default of 524,288,000 bytes (500 mebibytes)
+  # max_body_size = "500MB"
+
+  ## Whether to add the Pub/Sub metadata, such as message attributes and
+  ## subscription, as tags.
+  # add_meta = false
+
+  ## Max undelivered messages
+  ## This plugin uses tracking metrics, which ensure messages are read to
+  ## outputs before acknowledging them to the original broker to ensure data
+  ## is not lost. This option sets the maximum messages to read from the
+  ## broker that have not been written by an output.
+  ##
+  ## This value needs to be picked with awareness of the agent's
+  ## metric_batch_size value as well. Setting max undelivered messages too
+  ## high can result in a constant stream of data batches to the output,
+  ## while setting it too low may never flush the broker's messages.
+  # max_undelivered_messages = 1000
+
+  ## Set one or more allowed client CA certificate file names to
+  ## enable mutually authenticated TLS connections
+  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]
+
+  ## Add service certificate and key
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+
+  ## Data format to consume.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  data_format = "influx"
+```
+
+This plugin assumes you have already created a PUSH subscription for a given
+PubSub topic.
+
+
+## Metrics
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/cloudwatch/_index.md b/content/telegraf/v1/input-plugins/cloudwatch/_index.md
new file mode 100644
index 000000000..f5bdcc4b3
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/cloudwatch/_index.md
@@ -0,0 +1,350 @@
+---
+description: "Telegraf plugin for collecting metrics from Amazon CloudWatch Statistics"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Amazon CloudWatch Statistics
+    identifier: input-cloudwatch
+tags: [Amazon CloudWatch Statistics, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Amazon CloudWatch Statistics Input Plugin
+
+This plugin will pull Metric Statistics from Amazon CloudWatch.
+
+## Amazon Authentication
+
+This plugin uses a credential chain for authentication with the CloudWatch
+API endpoint. The plugin will attempt to authenticate in the following order:
+
+1. Assumed credentials via STS if `role_arn` attribute is specified
+   (source credentials are evaluated from subsequent rules)
+2. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
+3. Shared profile from `profile` attribute
+4. [Environment Variables](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#environment-variables)
+5. [Shared Credentials](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#shared-credentials-file)
+6. [EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)
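+For example, a sketch of a configuration that assumes a role via STS (the
+role ARN is a placeholder):
+
+```toml
+[[inputs.cloudwatch]]
+  region = "us-east-1"
+  period = "5m"
+  delay = "5m"
+  namespaces = ["AWS/ELB"]
+
+  ## Assume this role via STS; the source credentials are resolved from the
+  ## environment, shared credentials file, or instance profile
+  role_arn = "arn:aws:iam::123456789012:role/telegraf-cloudwatch"
+```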
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Pull Metric Statistics from Amazon CloudWatch
+[[inputs.cloudwatch]]
+  ## Amazon Region
+  region = "us-east-1"
+
+  ## Amazon Credentials
+  ## Credentials are loaded in the following order
+  ## 1) Web identity provider credentials via STS if role_arn and
+  ##    web_identity_token_file are specified
+  ## 2) Assumed credentials via STS if role_arn is specified
+  ## 3) explicit credentials from 'access_key' and 'secret_key'
+  ## 4) shared profile from 'profile'
+  ## 5) environment variables
+  ## 6) shared credentials file
+  ## 7) EC2 Instance Profile
+  # access_key = ""
+  # secret_key = ""
+  # token = ""
+  # role_arn = ""
+  # web_identity_token_file = ""
+  # role_session_name = ""
+  # profile = ""
+  # shared_credential_file = ""
+
+  ## If you are using CloudWatch cross-account observability, you can
+  ## set IncludeLinkedAccounts to true in a monitoring account
+  ## and collect metrics from the linked source accounts
+  # include_linked_accounts = false
+
+  ## Endpoint to make request against, the correct endpoint is automatically
+  ## determined and this option should only be set if you wish to override the
+  ## default.
+  ##   ex: endpoint_url = "http://localhost:8000"
+  # endpoint_url = ""
+
+  ## Set http_proxy
+  # use_system_proxy = false
+  # http_proxy_url = "http://localhost:8888"
+
+  ## The minimum period for Cloudwatch metrics is 1 minute (60s). However not
+  ## all metrics are made available to the 1 minute period. Some are collected
+  ## at 3 minute, 5 minute, or larger intervals.
+  ## See https://aws.amazon.com/cloudwatch/faqs/#monitoring.
+  ## Note that if a period is configured that is smaller than the minimum for a
+  ## particular metric, that metric will not be returned by the Cloudwatch API
+  ## and will not be collected by Telegraf.
+  #
+  ## Requested CloudWatch aggregation Period (required)
+  ## Must be a multiple of 60s.
+  period = "5m"
+
+  ## Collection Delay (required)
+  ## Must account for metrics availability via CloudWatch API
+  delay = "5m"
+
+  ## Recommended: use metric 'interval' that is a multiple of 'period' to avoid
+  ## gaps or overlap in pulled data
+  interval = "5m"
+
+  ## Recommended if "delay" and "period" are both within 3 hours of request
+  ## time. Invalid values will be ignored. Recently Active feature will only
+  ## poll for CloudWatch ListMetrics values that occurred within the last 3h.
+  ## If enabled, it will reduce total API usage of the CloudWatch ListMetrics
+  ## API and require less memory to retain.
+  ## Do not enable if "period" or "delay" is longer than 3 hours, as it will
+  ## not return data more than 3 hours old.
+  ## See https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_ListMetrics.html
+  # recently_active = "PT3H"
+
+  ## Configure the TTL for the internal cache of metrics.
+  # cache_ttl = "1h"
+
+  ## Metric Statistic Namespaces (required)
+  namespaces = ["AWS/ELB"]
+
+  ## Metric Format
+  ## This determines the format of the produced metrics. 'sparse', the
+  ## default, will produce a unique field for each statistic. 'dense' will
+  ## report all statistics in a field called "value" with a "metric_name"
+  ## tag defining the name of the statistic. See the plugin README for
+  ## examples.
+
+  ## Maximum requests per second. Note that the global default AWS rate limit
+  ## is 50 reqs/sec, so if you define multiple namespaces, these should add up
+  ## to a maximum of 50.
+  ## See http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_limits.html
+  # ratelimit = 25
+
+  ## Timeout for http requests made by the cloudwatch client.
+  # timeout = "5s"
+
+  ## Batch Size
+  ## The size of each batch of requests sent to CloudWatch. 500 is the
+  ## suggested largest size. If a request gets too large (413 errors),
+  ## consider reducing this amount.
+
+  ## Namespace-wide statistic filters. These allow fewer queries to be made
+  ## to CloudWatch.
+  # statistic_include = ["average", "sum", "minimum", "maximum", "sample_count"]
+  # statistic_exclude = []
+
+  ## Metrics to Pull
+  ## Defaults to all Metrics in Namespace if nothing is provided
+  ## Refreshes Namespace available metrics every 1h
+  #[[inputs.cloudwatch.metrics]]
+  #  names = ["Latency", "RequestCount"]
+  #
+  #  ## Statistic filters for Metric.  These allow for retrieving specific
+  #  ## statistics for an individual metric.
+  #  # statistic_include = ["average", "sum", "minimum", "maximum", "sample_count"]
+  #  # statistic_exclude = []
+  #
+  #  ## Dimension filters for Metric.
+  #  ## All dimensions defined for the metric names must be specified in order
+  #  ## to retrieve the metric statistics.
+  #  ## 'value' has wildcard / 'glob' matching support such as 'p-*'.
+  #  [[inputs.cloudwatch.metrics.dimensions]]
+  #    name = "LoadBalancerName"
+  #    value = "p-example"
+```
+
+Please note, the `namespace` option is deprecated in favor of the `namespaces`
+list option.
+
+## Requirements and Terminology
+
+The plugin configuration utilizes [CloudWatch concepts](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html) and access
+patterns to allow monitoring of any CloudWatch metric.
+
+- `region` must be a valid AWS [region](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#CloudWatchRegions) value
+- `period` must be a valid CloudWatch [period](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#CloudWatchPeriods) value
+- `namespaces` must be a list of valid CloudWatch [namespace](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#Namespace) value(s)
+- `names` must be valid CloudWatch [metric](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#Metric) names
+- `dimensions` must be valid CloudWatch [dimension](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#Dimension) name/value pairs
+
+Omitting or specifying a value of `'*'` for a dimension value configures all
+available metrics that contain a dimension with the specified name to be
+retrieved. If more than one dimension is specified, the metric must contain
+*all* the configured dimensions, and the value of any wildcard dimension is
+ignored.
+
+Example:
+
+```toml
+[[inputs.cloudwatch]]
+  period = "1m"
+  interval = "5m"
+
+  [[inputs.cloudwatch.metrics]]
+    names = ["Latency"]
+
+    ## Dimension filters for Metric (optional)
+    [[inputs.cloudwatch.metrics.dimensions]]
+      name = "LoadBalancerName"
+      value = "p-example"
+
+    [[inputs.cloudwatch.metrics.dimensions]]
+      name = "AvailabilityZone"
+      value = "*"
+```
+
+If the following ELBs are available:
+
+- name: `p-example`, availabilityZone: `us-east-1a`
+- name: `p-example`, availabilityZone: `us-east-1b`
+- name: `q-example`, availabilityZone: `us-east-1a`
+- name: `q-example`, availabilityZone: `us-east-1b`
+
+Then 2 metrics will be output:
+
+- name: `p-example`, availabilityZone: `us-east-1a`
+- name: `p-example`, availabilityZone: `us-east-1b`
+
+If the `AvailabilityZone` wildcard dimension was omitted, then a single metric
+(name: `p-example`) would be exported containing the aggregate values of the ELB
+across availability zones.
+
+To maximize efficiency and savings, consider making fewer requests by increasing
+`interval` but keeping `period` at the duration you would like metrics to be
+reported. The above example will request metrics from CloudWatch every 5 minutes
+but will output five metrics timestamped one minute apart.
+
+## Restrictions and Limitations
+
+- CloudWatch metrics are not available instantly via the CloudWatch API.
+  You should adjust your collection `delay` to account for this lag in metrics
+  availability based on your [monitoring subscription level](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html)
+- CloudWatch API usage incurs cost - see [GetMetricData Pricing](https://aws.amazon.com/cloudwatch/pricing/)
+
+## Metrics
+
+Each monitored CloudWatch namespace records a measurement with fields for each
+available metric statistic. Namespaces and metrics are represented in
+[snake case](https://en.wikipedia.org/wiki/Snake_case).
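+
+As a rough illustration of this naming convention (a sketch, not the plugin's
+actual implementation), CamelCase names such as `CallCount` become
+`call_count`, while runs of capitals such as `CPUUtilization` collapse without
+separators:
+
+```python
+import re
+
+def snake_case(name: str) -> str:
+    # Insert an underscore before an uppercase letter that follows a
+    # lowercase letter or digit, then lowercase the whole string.
+    return re.sub(r"(?<=[a-z0-9])([A-Z])", r"_\1", name).lower()
+
+print(snake_case("CallCount"))       # call_count
+print(snake_case("CPUUtilization"))  # cpuutilization
+```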
+
+### Sparse Metrics
+
+By default, metrics generated by this plugin are sparse. Use the `metric_format`
+option to override this setting.
+
+Sparse metrics produce a set of fields for every AWS Metric.
+
+- cloudwatch_{namespace}
+  - Fields
+    - {metric}_sum         (metric Sum value)
+    - {metric}_average     (metric Average value)
+    - {metric}_minimum     (metric Minimum value)
+    - {metric}_maximum     (metric Maximum value)
+    - {metric}_sample_count (metric SampleCount value)
+
+For example:
+
+```text
+cloudwatch_aws_usage,class=None,resource=GetSecretValue,service=Secrets\ Manager,type=API call_count_maximum=1,call_count_minimum=1,call_count_sum=8,call_count_sample_count=8,call_count_average=1 1715097720000000000
+```
+
+### Dense Metrics
+
+Dense metrics are generated when `metric_format` is set to `dense`.
+
+Dense metrics use the same fields over and over for every AWS Metric and
+differentiate between AWS Metrics using a tag called `metric_name` with the AWS
+Metric name:
+
+- cloudwatch_{namespace}
+  - Tags
+    - metric_name (AWS Metric name)
+  - Fields
+    - sum         (metric Sum value)
+    - average     (metric Average value)
+    - minimum     (metric Minimum value)
+    - maximum     (metric Maximum value)
+    - sample_count (metric SampleCount value)
+
+For example:
+
+```text
+cloudwatch_aws_usage,class=None,resource=GetSecretValue,service=Secrets\ Manager,metric_name=call_count,type=API sum=6,sample_count=6,average=1,maximum=1,minimum=1 1715097840000000000
+```
+
+### Tags
+
+Each measurement is tagged with the following identifiers to uniquely identify
+the associated metric. Tag dimension names are represented in
+[snake case](https://en.wikipedia.org/wiki/Snake_case).
+
+- All measurements have the following tags:
+  - region           (CloudWatch Region)
+  - {dimension-name} (Cloudwatch Dimension value - one per metric dimension)
+- If `include_linked_accounts` is set to true, the following tag is also provided:
+  - account           (The ID of the account where the metrics are located.)
+
+## Troubleshooting
+
+You can use the AWS CLI to get a list of available metrics and dimensions:
+
+```shell
+aws cloudwatch list-metrics --namespace AWS/EC2 --region us-east-1
+aws cloudwatch list-metrics --namespace AWS/EC2 --region us-east-1 --metric-name CPUCreditBalance
+```
+
+If the expected metrics are not returned, you can try getting them manually
+for a short period of time:
+
+```shell
+aws cloudwatch get-metric-data \
+  --start-time 2018-07-01T00:00:00Z \
+  --end-time 2018-07-01T00:15:00Z \
+  --metric-data-queries '[
+  {
+    "Id": "avgCPUCreditBalance",
+    "MetricStat": {
+      "Metric": {
+        "Namespace": "AWS/EC2",
+        "MetricName": "CPUCreditBalance",
+        "Dimensions": [
+          {
+            "Name": "InstanceId",
+            "Value": "i-deadbeef"
+          }
+        ]
+      },
+      "Period": 300,
+      "Stat": "Average"
+    },
+    "Label": "avgCPUCreditBalance"
+  }
+]'
+```
+
+## Example Output
+
+See the discussion above about sparse vs dense metrics for more details.
+
+```text
+cloudwatch_aws_elb,load_balancer_name=p-example,region=us-east-1 latency_average=0.004810798017284538,latency_maximum=0.1100282669067383,latency_minimum=0.0006084442138671875,latency_sample_count=4029,latency_sum=19.382705211639404 1459542420000000000
+```
+
diff --git a/content/telegraf/v1/input-plugins/cloudwatch_metric_streams/_index.md b/content/telegraf/v1/input-plugins/cloudwatch_metric_streams/_index.md
new file mode 100644
index 000000000..a38e46205
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/cloudwatch_metric_streams/_index.md
@@ -0,0 +1,177 @@
+---
+description: "Telegraf plugin for collecting metrics from CloudWatch Metric Streams"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: CloudWatch Metric Streams
+    identifier: input-cloudwatch_metric_streams
+tags: [CloudWatch Metric Streams, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# CloudWatch Metric Streams Input Plugin
+
+The CloudWatch Metric Streams plugin is a service input plugin that listens
+for metrics sent via HTTP and performs the required processing for
+Metric Streams from AWS.
+
+For cost, see the Metric Streams example in
+[CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/).
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# AWS Metric Streams listener
+[[inputs.cloudwatch_metric_streams]]
+  ## Address and port to host HTTP listener on
+  service_address = ":443"
+
+  ## Paths to listen to.
+  # paths = ["/telegraf"]
+
+  ## maximum duration before timing out read of the request
+  # read_timeout = "10s"
+
+  ## maximum duration before timing out write of the response
+  # write_timeout = "10s"
+
+  ## Maximum allowed http request body size in bytes.
+  ## 0 means to use the default of 524,288,000 bytes (500 mebibytes)
+  # max_body_size = "500MB"
+
+  ## Optional access key for Firehose security.
+  # access_key = "test-key"
+
+  ## An optional flag to keep Metric Streams metrics compatible with
+  ## CloudWatch's API naming
+  # api_compatability = false
+
+  ## Set one or more allowed client CA certificate file names to
+  ## enable mutually authenticated TLS connections
+  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]
+
+  ## Add service certificate and key
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+```
+
+## Metrics
+
+Metrics sent by AWS are Base64-encoded blocks of JSON data.
+The JSON block below is the Base64-decoded data in the `data`
+field of a `record`.
+There can be multiple blocks of JSON for each `data` field
+in each `record`, and there can be multiple `record` entries
+in each request.
+
+The metric when decoded may look like this:
+
+```json
+{
+    "metric_stream_name": "sandbox-dev-cloudwatch-metric-stream",
+    "account_id": "541737779709",
+    "region": "us-west-2",
+    "namespace": "AWS/EC2",
+    "metric_name": "CPUUtilization",
+    "dimensions": {
+        "InstanceId": "i-0efc7ghy09c123428"
+    },
+    "timestamp": 1651679580000,
+    "value": {
+        "max": 10.011666666666667,
+        "min": 10.011666666666667,
+        "sum": 10.011666666666667,
+        "count": 1
+    },
+    "unit": "Percent"
+}
+```
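+
+The decoding described above can be sketched as follows (a minimal
+illustration using a hypothetical single-record request body; real `data`
+fields may contain several newline-delimited JSON blocks):
+
+```python
+import base64
+import json
+
+# A Firehose-style request body: each record carries Base64-encoded data.
+metric = {"namespace": "AWS/EC2", "metric_name": "CPUUtilization",
+          "value": {"max": 10.0, "min": 10.0, "sum": 10.0, "count": 1}}
+request = {"records": [
+    {"data": base64.b64encode(json.dumps(metric).encode()).decode()}
+]}
+
+for record in request["records"]:
+    decoded = json.loads(base64.b64decode(record["data"]))
+    print(decoded["namespace"], decoded["metric_name"])
+```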
+
+### Tags
+
+All tags in the `dimensions` list are added as tags to the metric.
+
+The `account_id` and `region` tags are added to each metric as well.
+
+### Measurements and Fields
+
+The metric name is a combination of `namespace` and `metric_name`,
+separated by `_` and lowercased.
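+
+A hypothetical sketch of this rule (assuming the `/` in the namespace is also
+replaced with `_`, matching the example output below):
+
+```python
+def measurement_name(namespace: str, metric_name: str) -> str:
+    # Join namespace and metric name with "_" and lowercase the result.
+    return f"{namespace.replace('/', '_')}_{metric_name}".lower()
+
+print(measurement_name("AWS/EC2", "CPUUtilization"))  # aws_ec2_cpuutilization
+```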
+
+Each aggregate in the `value` object becomes a field.
+
+These fields are optionally renamed to match the CloudWatch API for
+easier transition from the API to Metric Streams. This relies on
+setting the `api_compatability` flag in the configuration.
+
+The timestamp applied is the timestamp from the metric,
+typically 3-5 minutes older than the time processed due
+to CloudWatch delays.
+
+## Example Output
+
+Example output based on the above JSON, with and without the
+`api_compatability` flag:
+
+**Standard Metric Streams format:**
+
+```text
+aws_ec2_cpuutilization,accountId=541737779709,region=us-west-2,InstanceId=i-0efc7ghy09c123428 max=10.011666666666667,min=10.011666666666667,sum=10.011666666666667,count=1 1651679580000
+```
+
+**API Compatability format:**
+
+```text
+aws_ec2_cpuutilization,accountId=541737779709,region=us-west-2,InstanceId=i-0efc7ghy09c123428 maximum=10.011666666666667,minimum=10.011666666666667,sum=10.011666666666667,samplecount=1 1651679580000
+```
+
+## Troubleshooting
+
+The plugin has its own internal metrics for troubleshooting:
+
+* Requests Received
+  * The number of requests received by the listener.
+* Writes Served
+  * The number of writes served by the listener.
+* Bad Requests
+  * The number of bad requests, separated by the error code as a tag.
+* Request Time
+  * The duration of the request measured in ns.
+* Age Max
+  * The maximum age of a metric in this interval. This is useful for offsetting
+    any lag or latency measurements in a metrics pipeline that measures based
+    on the timestamp.
+* Age Min
+  * The minimum age of a metric in this interval.
+
+Specific errors will be logged and an error will be returned to AWS.
+
+### Troubleshooting Documentation
+
+Additional troubleshooting for a Metric Stream can be found
+in AWS's documentation:
+
+* [CloudWatch Metric Streams](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Metric-Streams.html)
+* [AWS HTTP Specifications](https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html)
+* [Firehose Troubleshooting](https://docs.aws.amazon.com/firehose/latest/dev/http_troubleshooting.html)
+* [CloudWatch Pricing](https://aws.amazon.com/cloudwatch/pricing/)
diff --git a/content/telegraf/v1/input-plugins/conntrack/_index.md b/content/telegraf/v1/input-plugins/conntrack/_index.md
new file mode 100644
index 000000000..1a77bbe53
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/conntrack/_index.md
@@ -0,0 +1,135 @@
+---
+description: "Telegraf plugin for collecting metrics from Conntrack"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Conntrack
+    identifier: input-conntrack
+tags: [Conntrack, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Conntrack Input Plugin
+
+Collects stats from Netfilter's conntrack-tools.
+
+There are two collection mechanisms for this plugin:
+
+## /proc/net/stat/nf_conntrack
+
+When a user specifies the `collect` config option with valid options, the
+plugin will loop through the lines of `/proc/net/stat/nf_conntrack` to find
+CPU-specific values.
+
+## Specific files and dirs
+
+The second mechanism is for the user to specify a set of directories and files
+to search through.
+
+At runtime, conntrack exposes many of those connection statistics within
+`/proc/sys/net`. Depending on your kernel version, these files can be found in
+either `/proc/sys/net/ipv4/netfilter` or `/proc/sys/net/netfilter` and will be
+prefixed with either `ip_` or `nf_`.  This plugin reads the files specified
+in its configuration and publishes each one as a field, with the prefix
+normalized to `ip_`.
+
+In order to simplify configuration in a heterogeneous environment, a superset
+of directories and filenames can be specified.  Any locations that do not
+exist are ignored.
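+
+The superset lookup can be sketched like this (an illustration only, not the
+plugin's Go implementation):
+
+```python
+import os
+
+dirs = ["/proc/sys/net/ipv4/netfilter", "/proc/sys/net/netfilter"]
+files = ["ip_conntrack_count", "ip_conntrack_max",
+         "nf_conntrack_count", "nf_conntrack_max"]
+
+# Try every dir/file combination and silently skip missing locations.
+for d in dirs:
+    for name in files:
+        path = os.path.join(d, name)
+        if os.path.isfile(path):
+            with open(path) as fh:
+                print(path, fh.read().strip())
+```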
+
+For more information on conntrack-tools, see the
+[Netfilter Documentation](http://conntrack-tools.netfilter.org/).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Collects conntrack stats from the configured directories and files.
+# This plugin ONLY supports Linux
+[[inputs.conntrack]]
+  ## The following defaults would work with multiple versions of conntrack.
+  ## Note the nf_ and ip_ filename prefixes are mutually exclusive across
+  ## kernel versions, as are the directory locations.
+
+  ## Look through /proc/net/stat/nf_conntrack for these metrics
+  ## all - aggregated statistics
+  ## percpu - include detailed statistics with cpu tag
+  collect = ["all", "percpu"]
+
+  ## User-specified directories and files to look through
+  ## Directories to search within for the conntrack files above.
+  ## Missing directories will be ignored.
+  dirs = ["/proc/sys/net/ipv4/netfilter","/proc/sys/net/netfilter"]
+
+  ## Superset of filenames to look for within the conntrack dirs.
+  ## Missing files will be ignored.
+  files = ["ip_conntrack_count","ip_conntrack_max",
+          "nf_conntrack_count","nf_conntrack_max"]
+```
+
+## Metrics
+
+A detailed explanation of each field can be found in the
+[kernel documentation](https://www.kernel.org/doc/Documentation/networking/nf_conntrack-sysctl.txt).
+
+- conntrack
+  - `ip_conntrack_count` `(int, count)`: The number of entries in the conntrack table
+  - `ip_conntrack_max` `(int, size)`: The max capacity of the conntrack table
+  - `ip_conntrack_buckets` `(int, size)`: The size of the hash table
+
+With `collect = ["all"]`:
+
+- `entries`: The number of entries in the conntrack table
+- `searched`: The number of conntrack table lookups performed
+- `found`: The number of searched entries which were successful
+- `new`: The number of entries added which were not expected before
+- `invalid`: The number of packets seen which can not be tracked
+- `ignore`: The number of packets seen which are already connected to an entry
+- `delete`: The number of entries which were removed
+- `delete_list`: The number of entries which were put to dying list
+- `insert`: The number of entries inserted into the list
+- `insert_failed`: The number of insertions attempted but failed (same entry exists)
+- `drop`: The number of packets dropped due to conntrack failure
+- `early_drop`: The number of dropped entries to make room for new ones, if maxsize reached
+- `icmp_error`: Subset of invalid. Packets that can't be tracked due to error
+- `expect_new`: Entries added after an expectation was already present
+- `expect_create`: Expectations added
+- `expect_delete`: Expectations deleted
+- `search_restart`: Conntrack table lookups restarted due to hashtable resizes
+
+### Tags
+
+Setting `collect = ["percpu"]` will include detailed statistics per CPU thread.
+
+Without `"percpu"`, the `cpu` tag will have the value `all`.
+
+## Example Output
+
+```text
+conntrack,host=myhost ip_conntrack_count=2,ip_conntrack_max=262144 1461620427667995735
+```
+
+with stats:
+
+```text
+conntrack,cpu=all,host=localhost delete=0i,delete_list=0i,drop=2i,early_drop=0i,entries=5568i,expect_create=0i,expect_delete=0i,expect_new=0i,found=7i,icmp_error=1962i,ignore=2586413402i,insert=0i,insert_failed=2i,invalid=46853i,new=0i,search_restart=453336i,searched=0i 1615233542000000000
+conntrack,host=localhost ip_conntrack_count=464,ip_conntrack_max=262144 1615233542000000000
+```
diff --git a/content/telegraf/v1/input-plugins/consul/_index.md b/content/telegraf/v1/input-plugins/consul/_index.md
new file mode 100644
index 000000000..d510487fc
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/consul/_index.md
@@ -0,0 +1,117 @@
+---
+description: "Telegraf plugin for collecting metrics from Consul"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Consul
+    identifier: input-consul
+tags: [Consul, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Consul Input Plugin
+
+This plugin collects statistics about all health checks registered in
+Consul. It uses the [Consul health API](https://www.consul.io/docs/agent/http/health.html#health_state) to query the data. It does not
+report [telemetry](https://www.consul.io/docs/agent/telemetry.html), but Consul can already report those stats
+using the StatsD protocol if needed.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Gather health check statuses from services registered in Consul
+[[inputs.consul]]
+  ## Consul server address
+  # address = "localhost:8500"
+
+  ## URI scheme for the Consul server, one of "http", "https"
+  # scheme = "http"
+
+  ## Metric version controls the mapping from Consul metrics into
+  ## Telegraf metrics. Version 2 moved all fields with string values
+  ## to tags.
+  ##
+  ##   example: metric_version = 1; deprecated in 1.16
+  ##            metric_version = 2; recommended version
+  # metric_version = 1
+
+  ## ACL token used in every request
+  # token = ""
+
+  ## HTTP Basic Authentication username and password.
+  # username = ""
+  # password = ""
+
+  ## Data center to query the health checks from
+  # datacenter = ""
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = true
+
+  ## Consul checks' tag splitting
+  # When tags are formatted like "key:value" with ":" as a delimiter then
+  # they will be split and reported as proper key:value in Telegraf
+  # tag_delimiter = ":"
+```
+
+## Metrics
+
+### metric_version = 1
+
+- consul_health_checks
+  - tags:
+    - node (node that check/service is registered on)
+    - service_name
+    - check_id
+  - fields:
+    - check_name
+    - service_id
+    - status
+    - passing (integer)
+    - critical (integer)
+    - warning (integer)
+
+### metric_version = 2
+
+- consul_health_checks
+  - tags:
+    - node (node that check/service is registered on)
+    - service_name
+    - check_id
+    - check_name
+    - service_id
+    - status
+  - fields:
+    - passing (integer)
+    - critical (integer)
+    - warning (integer)
+
+`passing`, `critical`, and `warning` are integer representations of the health
+check state. A value of `1` means the health check was in that state at the
+time of the sample. `status` is a string representation of the same state.
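+
+A quick sketch of the encoding (illustrative only, not the plugin's own code):
+
+```python
+def status_fields(status: str) -> dict:
+    # Exactly one field is 1: the current state of the health check.
+    return {state: int(state == status)
+            for state in ("passing", "critical", "warning")}
+
+print(status_fields("critical"))  # {'passing': 0, 'critical': 1, 'warning': 0}
+```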
+
+## Example Output
+
+```text
+consul_health_checks,host=wolfpit,node=consul-server-node,check_id="serfHealth" check_name="Serf Health Status",service_id="",status="passing",passing=1i,critical=0i,warning=0i 1464698464486439902
+consul_health_checks,host=wolfpit,node=consul-server-node,service_name=www.example.com,check_id="service:www-example-com.test01" check_name="Service 'www.example.com' check",service_id="www-example-com.test01",status="critical",passing=0i,critical=1i,warning=0i 1464698464486519036
+```
diff --git a/content/telegraf/v1/input-plugins/consul_agent/_index.md b/content/telegraf/v1/input-plugins/consul_agent/_index.md
new file mode 100644
index 000000000..5450bbb9c
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/consul_agent/_index.md
@@ -0,0 +1,61 @@
+---
+description: "Telegraf plugin for collecting metrics from Hashicorp Consul Agent Metrics"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Hashicorp Consul Agent Metrics
+    identifier: input-consul_agent
+tags: [Hashicorp Consul Agent Metrics, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Hashicorp Consul Agent Metrics Input Plugin
+
+This plugin grabs metrics from a Consul agent. Telegraf may be present on
+every node and connect to the agent locally. In that case, the URL should be
+something like `http://127.0.0.1:8500`.
+
+> Tested on Consul 1.10.4.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from the Consul Agent API
+[[inputs.consul_agent]]
+  ## URL for the Consul agent
+  # url = "http://127.0.0.1:8500"
+
+  ## Use auth token for authorization.
+  ## If both are set, an error is thrown.
+  ## If both are empty, no token will be used.
+  # token_file = "/path/to/auth/token"
+  ## OR
+  # token = "a1234567-40c7-9048-7bae-378687048181"
+
+  ## Set timeout (default 5 seconds)
+  # timeout = "5s"
+
+  ## Optional TLS Config
+  # tls_ca = "/path/to/cafile"
+  # tls_cert = "/path/to/certfile"
+  # tls_key = "/path/to/keyfile"
+```
+
+## Metrics
+
+Consul collects various metrics. For full details, see the following Consul
+documentation:
+
+- [https://www.consul.io/api/agent#view-metrics](https://www.consul.io/api/agent#view-metrics)
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/couchbase/_index.md b/content/telegraf/v1/input-plugins/couchbase/_index.md
new file mode 100644
index 000000000..acb4f8950
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/couchbase/_index.md
@@ -0,0 +1,343 @@
+---
+description: "Telegraf plugin for collecting metrics from Couchbase"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Couchbase
+    identifier: input-couchbase
+tags: [Couchbase, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Couchbase Input Plugin
+
+Couchbase is a distributed NoSQL database.  This plugin gets metrics for each
+Couchbase node, as well as detailed metrics for each bucket, for a given
+Couchbase server.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read per-node and per-bucket metrics from Couchbase
+[[inputs.couchbase]]
+  ## specify servers via a url matching:
+  ##  [protocol://][username:password@]address[:port]
+  ##  e.g.
+  ##    http://couchbase-0.example.com/
+  ##    http://admin:secret@couchbase-0.example.com:8091/
+  ##
+  ## If no servers are specified, then localhost is used as the host.
+  ## If no protocol is specified, HTTP is used.
+  ## If no port is specified, 8091 is used.
+  servers = ["http://localhost:8091"]
+
+  ## Filter bucket fields to include only here.
+  # bucket_stats_included = ["quota_percent_used", "ops_per_sec", "disk_fetches", "item_count", "disk_used", "data_used", "mem_used"]
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification (defaults to false)
+  ## If set to false, tls_cert and tls_key are required
+  # insecure_skip_verify = false
+
+  ## Whether to collect cluster-wide bucket statistics
+  ## It is recommended to disable this in favor of node_stats
+  ## to get a better view of the cluster.
+  # cluster_bucket_stats = true
+
+  ## Whether to collect bucket stats for each individual node
+  # node_bucket_stats = false
+
+  ## List of additional stats to collect, choose from:
+  ##  * autofailover
+  # additional_stats = []
+```
+
+## Metrics
+
+### couchbase_node
+
+Tags:
+
+- cluster: sanitized string from `servers` configuration field
+  e.g.: `http://user:password@couchbase-0.example.com:8091/endpoint` becomes
+  `http://couchbase-0.example.com:8091/endpoint`
+- hostname: Couchbase's name for the node and port, e.g., `172.16.10.187:8091`
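+
+The credential stripping can be sketched with Python's `urllib` (an
+illustration of the sanitization, not the plugin's own code):
+
+```python
+from urllib.parse import urlparse, urlunparse
+
+def sanitize(server: str) -> str:
+    # Drop the user:password@ portion of the netloc, keeping host and port.
+    parts = urlparse(server)
+    netloc = parts.netloc.rsplit("@", 1)[-1]
+    return urlunparse(parts._replace(netloc=netloc))
+
+print(sanitize("http://user:password@couchbase-0.example.com:8091/endpoint"))
+# http://couchbase-0.example.com:8091/endpoint
+```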
+
+Fields:
+
+- memory_free (unit: bytes, example: 23181365248.0)
+- memory_total (unit: bytes, example: 64424656896.0)
+
+### couchbase_autofailover
+
+Tags:
+
+- cluster: sanitized string from `servers` configuration field
+  e.g.: `http://user:password@couchbase-0.example.com:8091/endpoint` becomes
+  `http://couchbase-0.example.com:8091/endpoint`
+
+Fields:
+
+- count (unit: int, example: 1)
+- enabled (unit: bool, example: true)
+- max_count (unit: int, example: 2)
+- timeout (unit: int, example: 72)
+
+### couchbase_bucket and couchbase_node_bucket
+
+Tags:
+
+- cluster: whatever you called it in `servers` in the configuration,
+  e.g. `http://couchbase-0.example.com/`
+- bucket: the name of the couchbase bucket, e.g., `blastro-df`
+- hostname: the hostname of the node the bucket metrics were collected
+  from, e.g. `172.16.10.187:8091` (only present in `couchbase_node_bucket`)
+
+Default bucket fields:
+
+- quota_percent_used (unit: percent, example: 68.85424936294555)
+- ops_per_sec (unit: count, example: 5686.789686789687)
+- disk_fetches (unit: count, example: 0.0)
+- item_count (unit: count, example: 943239752.0)
+- disk_used (unit: bytes, example: 409178772321.0)
+- data_used (unit: bytes, example: 212179309111.0)
+- mem_used (unit: bytes, example: 202156957464.0)
+
+Additional fields that can be configured with the `bucket_stats_included`
+option:
+
+- couch_total_disk_size
+- couch_docs_fragmentation
+- couch_views_fragmentation
+- hit_ratio
+- ep_cache_miss_rate
+- ep_resident_items_rate
+- vb_avg_active_queue_age
+- vb_avg_replica_queue_age
+- vb_avg_pending_queue_age
+- vb_avg_total_queue_age
+- vb_active_resident_items_ratio
+- vb_replica_resident_items_ratio
+- vb_pending_resident_items_ratio
+- avg_disk_update_time
+- avg_disk_commit_time
+- avg_bg_wait_time
+- avg_active_timestamp_drift
+- avg_replica_timestamp_drift
+- ep_dcp_views+indexes_count
+- ep_dcp_views+indexes_items_remaining
+- ep_dcp_views+indexes_producer_count
+- ep_dcp_views+indexes_total_backlog_size
+- ep_dcp_views+indexes_items_sent
+- ep_dcp_views+indexes_total_bytes
+- ep_dcp_views+indexes_backoff
+- bg_wait_count
+- bg_wait_total
+- bytes_read
+- bytes_written
+- cas_badval
+- cas_hits
+- cas_misses
+- cmd_get
+- cmd_lookup
+- cmd_set
+- couch_docs_actual_disk_size
+- couch_docs_data_size
+- couch_docs_disk_size
+- couch_spatial_data_size
+- couch_spatial_disk_size
+- couch_spatial_ops
+- couch_views_actual_disk_size
+- couch_views_data_size
+- couch_views_disk_size
+- couch_views_ops
+- curr_connections
+- curr_items
+- curr_items_tot
+- decr_hits
+- decr_misses
+- delete_hits
+- delete_misses
+- disk_commit_count
+- disk_commit_total
+- disk_update_count
+- disk_update_total
+- disk_write_queue
+- ep_active_ahead_exceptions
+- ep_active_hlc_drift
+- ep_active_hlc_drift_count
+- ep_bg_fetched
+- ep_clock_cas_drift_threshold_exceeded
+- ep_data_read_failed
+- ep_data_write_failed
+- ep_dcp_2i_backoff
+- ep_dcp_2i_count
+- ep_dcp_2i_items_remaining
+- ep_dcp_2i_items_sent
+- ep_dcp_2i_producer_count
+- ep_dcp_2i_total_backlog_size
+- ep_dcp_2i_total_bytes
+- ep_dcp_cbas_backoff
+- ep_dcp_cbas_count
+- ep_dcp_cbas_items_remaining
+- ep_dcp_cbas_items_sent
+- ep_dcp_cbas_producer_count
+- ep_dcp_cbas_total_backlog_size
+- ep_dcp_cbas_total_bytes
+- ep_dcp_eventing_backoff
+- ep_dcp_eventing_count
+- ep_dcp_eventing_items_remaining
+- ep_dcp_eventing_items_sent
+- ep_dcp_eventing_producer_count
+- ep_dcp_eventing_total_backlog_size
+- ep_dcp_eventing_total_bytes
+- ep_dcp_fts_backoff
+- ep_dcp_fts_count
+- ep_dcp_fts_items_remaining
+- ep_dcp_fts_items_sent
+- ep_dcp_fts_producer_count
+- ep_dcp_fts_total_backlog_size
+- ep_dcp_fts_total_bytes
+- ep_dcp_other_backoff
+- ep_dcp_other_count
+- ep_dcp_other_items_remaining
+- ep_dcp_other_items_sent
+- ep_dcp_other_producer_count
+- ep_dcp_other_total_backlog_size
+- ep_dcp_other_total_bytes
+- ep_dcp_replica_backoff
+- ep_dcp_replica_count
+- ep_dcp_replica_items_remaining
+- ep_dcp_replica_items_sent
+- ep_dcp_replica_producer_count
+- ep_dcp_replica_total_backlog_size
+- ep_dcp_replica_total_bytes
+- ep_dcp_views_backoff
+- ep_dcp_views_count
+- ep_dcp_views_items_remaining
+- ep_dcp_views_items_sent
+- ep_dcp_views_producer_count
+- ep_dcp_views_total_backlog_size
+- ep_dcp_views_total_bytes
+- ep_dcp_xdcr_backoff
+- ep_dcp_xdcr_count
+- ep_dcp_xdcr_items_remaining
+- ep_dcp_xdcr_items_sent
+- ep_dcp_xdcr_producer_count
+- ep_dcp_xdcr_total_backlog_size
+- ep_dcp_xdcr_total_bytes
+- ep_diskqueue_drain
+- ep_diskqueue_fill
+- ep_diskqueue_items
+- ep_flusher_todo
+- ep_item_commit_failed
+- ep_kv_size
+- ep_max_size
+- ep_mem_high_wat
+- ep_mem_low_wat
+- ep_meta_data_memory
+- ep_num_non_resident
+- ep_num_ops_del_meta
+- ep_num_ops_del_ret_meta
+- ep_num_ops_get_meta
+- ep_num_ops_set_meta
+- ep_num_ops_set_ret_meta
+- ep_num_value_ejects
+- ep_oom_errors
+- ep_ops_create
+- ep_ops_update
+- ep_overhead
+- ep_queue_size
+- ep_replica_ahead_exceptions
+- ep_replica_hlc_drift
+- ep_replica_hlc_drift_count
+- ep_tmp_oom_errors
+- ep_vb_total
+- evictions
+- get_hits
+- get_misses
+- incr_hits
+- incr_misses
+- mem_used
+- misses
+- ops
+- timestamp
+- vb_active_eject
+- vb_active_itm_memory
+- vb_active_meta_data_memory
+- vb_active_num
+- vb_active_num_non_resident
+- vb_active_ops_create
+- vb_active_ops_update
+- vb_active_queue_age
+- vb_active_queue_drain
+- vb_active_queue_fill
+- vb_active_queue_size
+- vb_active_sync_write_aborted_count
+- vb_active_sync_write_accepted_count
+- vb_active_sync_write_committed_count
+- vb_pending_curr_items
+- vb_pending_eject
+- vb_pending_itm_memory
+- vb_pending_meta_data_memory
+- vb_pending_num
+- vb_pending_num_non_resident
+- vb_pending_ops_create
+- vb_pending_ops_update
+- vb_pending_queue_age
+- vb_pending_queue_drain
+- vb_pending_queue_fill
+- vb_pending_queue_size
+- vb_replica_curr_items
+- vb_replica_eject
+- vb_replica_itm_memory
+- vb_replica_meta_data_memory
+- vb_replica_num
+- vb_replica_num_non_resident
+- vb_replica_ops_create
+- vb_replica_ops_update
+- vb_replica_queue_age
+- vb_replica_queue_drain
+- vb_replica_queue_fill
+- vb_replica_queue_size
+- vb_total_queue_age
+- xdc_ops
+- allocstall
+- cpu_cores_available
+- cpu_irq_rate
+- cpu_stolen_rate
+- cpu_sys_rate
+- cpu_user_rate
+- cpu_utilization_rate
+- hibernated_requests
+- hibernated_waked
+- mem_actual_free
+- mem_actual_used
+- mem_free
+- mem_limit
+- mem_total
+- mem_used_sys
+- odp_report_failed
+- rest_requests
+- swap_total
+- swap_used
+
+## Example Output
+
+```text
+couchbase_node,cluster=http://localhost:8091/,hostname=172.17.0.2:8091 memory_free=7705575424,memory_total=16558182400 1547829754000000000
+couchbase_bucket,bucket=beer-sample,cluster=http://localhost:8091/ quota_percent_used=27.09285736083984,ops_per_sec=0,disk_fetches=0,item_count=7303,disk_used=21662946,data_used=9325087,mem_used=28408920 1547829754000000000
+```
diff --git a/content/telegraf/v1/input-plugins/couchdb/_index.md b/content/telegraf/v1/input-plugins/couchdb/_index.md
new file mode 100644
index 000000000..9e398fcf3
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/couchdb/_index.md
@@ -0,0 +1,103 @@
+---
+description: "Telegraf plugin for collecting metrics from CouchDB"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: CouchDB
+    identifier: input-couchdb
+tags: [CouchDB, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# CouchDB Input Plugin
+
+The CouchDB plugin gathers metrics from CouchDB using the [_stats] endpoint.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [Configuration options](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Read CouchDB Stats from one or more servers
+[[inputs.couchdb]]
+  ## Works with CouchDB stats endpoints out of the box
+  ## Multiple Hosts from which to read CouchDB stats:
+  hosts = ["http://localhost:5984/_stats"]
+
+  ## Use HTTP Basic Authentication.
+  # basic_username = "telegraf"
+  # basic_password = "p@ssw0rd"
+```
+
+## Metrics
+
+Statistics specific to the internals of CouchDB:
+
+- couchdb_auth_cache_misses
+- couchdb_database_writes
+- couchdb_open_databases
+- couchdb_auth_cache_hits
+- couchdb_request_time
+- couchdb_database_reads
+- couchdb_open_os_files
+
+Statistics of HTTP requests by method:
+
+- httpd_request_methods_put
+- httpd_request_methods_get
+- httpd_request_methods_copy
+- httpd_request_methods_delete
+- httpd_request_methods_post
+- httpd_request_methods_head
+
+Statistics of HTTP requests by response code:
+
+- httpd_status_codes_200
+- httpd_status_codes_201
+- httpd_status_codes_202
+- httpd_status_codes_301
+- httpd_status_codes_304
+- httpd_status_codes_400
+- httpd_status_codes_401
+- httpd_status_codes_403
+- httpd_status_codes_404
+- httpd_status_codes_405
+- httpd_status_codes_409
+- httpd_status_codes_412
+- httpd_status_codes_500
+
+httpd statistics:
+
+- httpd_clients_requesting_changes
+- httpd_temporary_view_reads
+- httpd_requests
+- httpd_bulk_requests
+- httpd_view_reads
+
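The `couchdb_*` and `httpd_*` field names are derived by flattening the nested JSON returned by `_stats`. A minimal sketch of that flattening, assuming the `{section: {metric: {stat: value}}}` shape visible in the example output below; the plugin's actual logic may differ:

```python
# Sketch: flatten a nested CouchDB _stats response into Telegraf-style
# field names such as "couchdb_auth_cache_hits_value". The input shape
# is an assumption based on the example output, not the plugin source.
def stats_fields(stats):
    fields = {}
    for section, metrics in stats.items():
        for metric, values in metrics.items():
            for stat, value in values.items():
                fields[f"{section}_{metric}_{stat}"] = value
    return fields

sample = {"couchdb": {"auth_cache_hits": {"value": 0},
                      "request_time": {"min": 0, "max": 25}}}
fields = stats_fields(sample)
# fields contains "couchdb_auth_cache_hits_value" and "couchdb_request_time_max"
```
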
+## Tags
+
+- server (URL of the CouchDB `_stats` endpoint)
+
+## Example Output
+
+### Post CouchDB 2.0
+
+```text
+couchdb,server=http://couchdb22:5984/_node/_local/_stats couchdb_auth_cache_hits_value=0,httpd_request_methods_delete_value=0,couchdb_auth_cache_misses_value=0,httpd_request_methods_get_value=42,httpd_status_codes_304_value=0,httpd_status_codes_400_value=0,httpd_request_methods_head_value=0,httpd_status_codes_201_value=0,couchdb_database_reads_value=0,httpd_request_methods_copy_value=0,couchdb_request_time_max=0,httpd_status_codes_200_value=42,httpd_status_codes_301_value=0,couchdb_open_os_files_value=2,httpd_request_methods_put_value=0,httpd_request_methods_post_value=0,httpd_status_codes_202_value=0,httpd_status_codes_403_value=0,httpd_status_codes_409_value=0,couchdb_database_writes_value=0,couchdb_request_time_min=0,httpd_status_codes_412_value=0,httpd_status_codes_500_value=0,httpd_status_codes_401_value=0,httpd_status_codes_404_value=0,httpd_status_codes_405_value=0,couchdb_open_databases_value=0 1536707179000000000
+```
+
+### Pre CouchDB 2.0
+
+```text
+couchdb,server=http://couchdb16:5984/_stats couchdb_request_time_sum=96,httpd_status_codes_200_sum=37,httpd_status_codes_200_min=0,httpd_requests_mean=0.005,httpd_requests_min=0,couchdb_request_time_stddev=3.833,couchdb_request_time_min=1,httpd_request_methods_get_stddev=0.073,httpd_request_methods_get_min=0,httpd_status_codes_200_mean=0.005,httpd_status_codes_200_max=1,httpd_requests_sum=37,couchdb_request_time_current=96,httpd_request_methods_get_sum=37,httpd_request_methods_get_mean=0.005,httpd_request_methods_get_max=1,httpd_status_codes_200_stddev=0.073,couchdb_request_time_mean=2.595,couchdb_request_time_max=25,httpd_request_methods_get_current=37,httpd_status_codes_200_current=37,httpd_requests_current=37,httpd_requests_stddev=0.073,httpd_requests_max=1 1536707179000000000
+```
+
+[_stats]: http://docs.couchdb.org/en/1.6.1/api/server/common.html?highlight=stats#get--_stats
diff --git a/content/telegraf/v1/input-plugins/cpu/_index.md b/content/telegraf/v1/input-plugins/cpu/_index.md
new file mode 100644
index 000000000..a4933bd4a
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/cpu/_index.md
@@ -0,0 +1,107 @@
+---
+description: "Telegraf plugin for collecting metrics from CPU"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: CPU
+    identifier: input-cpu
+tags: [CPU, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# CPU Input Plugin
+
+The `cpu` plugin gathers metrics about system CPU usage.
+
+## macOS Support
+
+The [gopsutil](https://github.com/shirou/gopsutil/blob/master/cpu/cpu_darwin_nocgo.go) library, which is used to collect CPU data, does not support
+gathering CPU metrics without CGO on macOS, so the plugin reports a "not
+implemented" error in this case. Builds provided by InfluxData are not built
+with CGO.
+
+To produce CPU metrics on macOS, use the builds provided by
+[Homebrew](https://formulae.brew.sh/formula/telegraf), which are built with CGO.
+
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [Configuration options](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics about cpu usage
+[[inputs.cpu]]
+  ## Whether to report per-cpu stats or not
+  percpu = true
+  ## Whether to report total system cpu stats or not
+  totalcpu = true
+  ## If true, collect raw CPU time metrics
+  collect_cpu_time = false
+  ## If true, compute and report the sum of all non-idle CPU states
+  ## NOTE: The resulting 'time_active' field INCLUDES 'iowait'!
+  report_active = false
+  ## If true and the info is available then add core_id and physical_id tags
+  core_tags = false
+```
+
+## Metrics
+
+On Linux, consult `man proc` for details on the meanings of these values.
+
+- cpu
+  - tags:
+    - cpu (CPU ID or `cpu-total`)
+  - fields:
+    - time_user (float)
+    - time_system (float)
+    - time_idle (float)
+    - time_active (float)
+    - time_nice (float)
+    - time_iowait (float)
+    - time_irq (float)
+    - time_softirq (float)
+    - time_steal (float)
+    - time_guest (float)
+    - time_guest_nice (float)
+    - usage_user (float, percent)
+    - usage_system (float, percent)
+    - usage_idle (float, percent)
+    - usage_active (float)
+    - usage_nice (float, percent)
+    - usage_iowait (float, percent)
+    - usage_irq (float, percent)
+    - usage_softirq (float, percent)
+    - usage_steal (float, percent)
+    - usage_guest (float, percent)
+    - usage_guest_nice (float, percent)
+
+## Troubleshooting
+
+On Linux, CPU times are gathered from `/proc/stat`, and percentages are
+computed from the difference between the last two samples. The `core_id` and
+`physical_id` tags are read from `/proc/cpuinfo`.
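
The two-sample percentage calculation can be sketched as follows. This is a simplified model for illustration, not the exact gopsutil algorithm, and the tick counts are made up:

```python
# Sketch: derive usage percentages from two consecutive /proc/stat samples.
# Field order follows `man proc`: user, nice, system, idle, iowait, irq,
# softirq, steal. Each percentage is the share of the total tick delta.
STATES = ["user", "nice", "system", "idle", "iowait", "irq", "softirq", "steal"]

def usage_percent(prev, curr):
    deltas = [c - p for p, c in zip(prev, curr)]
    total = sum(deltas)
    if total == 0:
        return {s: 0.0 for s in STATES}
    return {s: 100.0 * d / total for s, d in zip(STATES, deltas)}

# Illustrative tick counts for one CPU at two consecutive sample times.
prev = [100, 0, 50, 800, 10, 0, 5, 0]
curr = [150, 0, 70, 860, 12, 0, 8, 0]
pct = usage_percent(prev, curr)
```
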
+
+## Example Output
+
+```text
+cpu,cpu=cpu0,host=loaner time_active=202224.15999999992,time_guest=30250.35,time_guest_nice=0,time_idle=1527035.04,time_iowait=1352,time_irq=0,time_nice=169.28,time_softirq=6281.4,time_steal=0,time_system=40097.14,time_user=154324.34 1568760922000000000
+cpu,cpu=cpu0,host=loaner usage_active=31.249999981810106,usage_guest=2.083333333080696,usage_guest_nice=0,usage_idle=68.7500000181899,usage_iowait=0,usage_irq=0,usage_nice=0,usage_softirq=0,usage_steal=0,usage_system=4.166666666161392,usage_user=25.000000002273737 1568760922000000000
+cpu,cpu=cpu1,host=loaner time_active=201890.02000000002,time_guest=30508.41,time_guest_nice=0,time_idle=264641.18,time_iowait=210.44,time_irq=0,time_nice=181.75,time_softirq=4537.88,time_steal=0,time_system=39480.7,time_user=157479.25 1568760922000000000
+cpu,cpu=cpu1,host=loaner usage_active=12.500000010610771,usage_guest=2.0833333328280585,usage_guest_nice=0,usage_idle=87.49999998938922,usage_iowait=0,usage_irq=0,usage_nice=0,usage_softirq=2.0833333332070145,usage_steal=0,usage_system=4.166666665656117,usage_user=4.166666666414029 1568760922000000000
+cpu,cpu=cpu2,host=loaner time_active=201382.78999999998,time_guest=30325.8,time_guest_nice=0,time_idle=264686.63,time_iowait=202.77,time_irq=0,time_nice=162.81,time_softirq=3378.34,time_steal=0,time_system=39270.59,time_user=158368.28 1568760922000000000
+cpu,cpu=cpu2,host=loaner usage_active=15.999999993480742,usage_guest=1.9999999999126885,usage_guest_nice=0,usage_idle=84.00000000651926,usage_iowait=0,usage_irq=0,usage_nice=0,usage_softirq=2.0000000002764864,usage_steal=0,usage_system=3.999999999825377,usage_user=7.999999998923158 1568760922000000000
+cpu,cpu=cpu3,host=loaner time_active=198953.51000000007,time_guest=30344.43,time_guest_nice=0,time_idle=265504.09,time_iowait=187.64,time_irq=0,time_nice=197.47,time_softirq=2301.47,time_steal=0,time_system=39313.73,time_user=156953.2 1568760922000000000
+cpu,cpu=cpu3,host=loaner usage_active=10.41666667424579,usage_guest=0,usage_guest_nice=0,usage_idle=89.58333332575421,usage_iowait=0,usage_irq=0,usage_nice=0,usage_softirq=0,usage_steal=0,usage_system=4.166666666666667,usage_user=6.249999998484175 1568760922000000000
+cpu,cpu=cpu-total,host=loaner time_active=804450.5299999998,time_guest=121429,time_guest_nice=0,time_idle=2321866.96,time_iowait=1952.86,time_irq=0,time_nice=711.32,time_softirq=16499.1,time_steal=0,time_system=158162.17,time_user=627125.08 1568760922000000000
+cpu,cpu=cpu-total,host=loaner usage_active=17.616580305880305,usage_guest=1.036269430422946,usage_guest_nice=0,usage_idle=82.3834196941197,usage_iowait=0,usage_irq=0,usage_nice=0,usage_softirq=1.0362694300459534,usage_steal=0,usage_system=4.145077721691784,usage_user=11.398963731636465 1568760922000000000
+```
diff --git a/content/telegraf/v1/input-plugins/csgo/_index.md b/content/telegraf/v1/input-plugins/csgo/_index.md
new file mode 100644
index 000000000..b0b19e374
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/csgo/_index.md
@@ -0,0 +1,63 @@
+---
+description: "Telegraf plugin for collecting metrics from Counter-Strike Global Offensive (CSGO)"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Counter-Strike Global Offensive (CSGO)
+    identifier: input-csgo
+tags: [Counter-Strike Global Offensive (CSGO), "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Counter-Strike: Global Offensive (CSGO) Input Plugin
+
+The `csgo` plugin gathers metrics from Counter-Strike: Global Offensive servers.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [Configuration options](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Fetch metrics from a CSGO SRCDS
+[[inputs.csgo]]
+  ## Specify servers using the following format:
+  ##    servers = [
+  ##      ["ip1:port1", "rcon_password1"],
+  ##      ["ip2:port2", "rcon_password2"],
+  ##    ]
+  #
+  ## If no servers are specified, no data will be collected
+  servers = []
+```
+
+## Metrics
+
+The plugin retrieves the output of the `stats` command that is executed via
+RCON.
+
+If no servers are specified, no data is collected.
+
+- csgo
+  - tags:
+    - host
+  - fields:
+    - cpu (float)
+    - net_in (float)
+    - net_out (float)
+    - uptime_minutes (float)
+    - maps (float)
+    - fps (float)
+    - players (float)
+    - sv_ms (float)
+    - variance_ms (float)
+    - tick_ms (float)
+
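As a rough illustration, a `stats` response laid out as a header row followed by a row of values could be parsed into float fields like this. The column layout shown is an assumption for the sketch, not the exact server output:

```python
# Sketch: parse a header/values table, as a Source server's `stats`
# command might return it, into float fields keyed by lowercased column
# name. The sample layout below is illustrative only.
def parse_stats(output):
    lines = output.strip().splitlines()
    header = lines[0].split()
    values = lines[1].split()
    return {h.lower(): float(v) for h, v in zip(header, values)}

sample = "CPU NetIn NetOut Uptime Maps FPS Players\n10.0 96.0 128.0 30.0 1.0 127.9 12.0"
fields = parse_stats(sample)
```
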
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/ctrlx_datalayer/_index.md b/content/telegraf/v1/input-plugins/ctrlx_datalayer/_index.md
new file mode 100644
index 000000000..1d49b9bee
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/ctrlx_datalayer/_index.md
@@ -0,0 +1,398 @@
+---
+description: "Telegraf plugin for collecting metrics from ctrlX Data Layer"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: ctrlX Data Layer
+    identifier: input-ctrlx_datalayer
+tags: [ctrlX Data Layer, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# ctrlX Data Layer Input Plugin
+
+The `ctrlx_datalayer` plugin gathers data from the ctrlX Data Layer,
+a communication middleware running on
+[ctrlX CORE devices](https://ctrlx-core.com) from
+[Bosch Rexroth](https://boschrexroth.com). The platform is used for
+professional automation applications like industrial automation, building
+automation, robotics, IoT Gateways or as classical PLC. For more
+information, see [ctrlX AUTOMATION](https://ctrlx-automation.com).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [Configuration options](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# A ctrlX Data Layer server sent event input plugin
+[[inputs.ctrlx_datalayer]]
+   ## Hostname or IP address of the ctrlX CORE Data Layer server
+   ##  example: server = "localhost"        # Telegraf is running directly on the device
+   ##           server = "192.168.1.1"      # Connect to ctrlX CORE remote via IP
+   ##           server = "host.example.com" # Connect to ctrlX CORE remote via hostname
+   ##           server = "10.0.2.2:8443"    # Connect to ctrlX CORE Virtual from development environment
+   server = "localhost"
+
+   ## Authentication credentials
+   username = "boschrexroth"
+   password = "boschrexroth"
+
+   ## Use TLS but skip chain & host verification
+   # insecure_skip_verify = false
+
+   ## Timeout for HTTP requests. (default: "10s")
+   # timeout = "10s"
+
+
+   ## Create a ctrlX Data Layer subscription.
+   ## It is possible to define multiple subscriptions per host. Each subscription can have its own
+   ## sampling properties and a list of nodes to subscribe to.
+   ## All subscriptions share the same credentials.
+   [[inputs.ctrlx_datalayer.subscription]]
+      ## The name of the measurement. (default: "ctrlx")
+      measurement = "memory"
+
+      ## Configure the ctrlX Data Layer nodes which should be subscribed.
+      ## address - node address in ctrlX Data Layer (mandatory)
+      ## name    - field name to use in the output (optional, default: base name of address)
+      ## tags    - extra node tags to be added to the output metric (optional)
+      ## Note: 
+      ## Use either the inline notation or the bracketed notation, not both.
+      ## The tags property is only supported in bracketed notation due to TOML parser restrictions.
+      ## Examples:
+      ## Inline notation 
+      nodes=[
+         {name="available", address="framework/metrics/system/memavailable-mb"},
+         {name="used", address="framework/metrics/system/memused-mb"},
+      ]
+      ## Bracketed notation
+      # [[inputs.ctrlx_datalayer.subscription.nodes]]
+      #    name   ="available"
+      #    address="framework/metrics/system/memavailable-mb"
+      #    ## Define extra tags related to node to be added to the output metric (optional)
+      #    [inputs.ctrlx_datalayer.subscription.nodes.tags]
+      #       node_tag1="node_tag1"
+      #       node_tag2="node_tag2"
+      # [[inputs.ctrlx_datalayer.subscription.nodes]]
+      #    name   ="used"
+      #    address="framework/metrics/system/memused-mb"
+
+      ## The switch "output_json_string" enables output of the measurement as a JSON string.
+      ## That way it can be used in a subsequent processor plugin, e.g. the Starlark processor plugin.
+      # output_json_string = false
+
+      ## Define extra tags related to subscription to be added to the output metric (optional)
+      # [inputs.ctrlx_datalayer.subscription.tags]
+      #    subscription_tag1 = "subscription_tag1"
+      #    subscription_tag2 = "subscription_tag2"
+
+      ## The interval in which messages shall be sent by the ctrlX Data Layer to this plugin. (default: 1s)
+      ## Higher values reduce load on network by queuing samples on server side and sending as a single TCP packet.
+      # publish_interval = "1s"
+
+      ## The interval a "keepalive" message is sent if no change of data occurs. (default: 60s)
+      ## Only used internally to detect broken network connections.
+      # keep_alive_interval = "60s"
+
+      ## The interval an "error" message is sent if an error was received from a node. (default: 10s)
+      ## Higher values reduce load on output target and network in case of errors by limiting frequency of error messages.
+      # error_interval = "10s"
+
+      ## The interval that defines the fastest rate at which the node values should be sampled and values captured. (default: 1s)
+      ## The sampling frequency should be adjusted to the dynamics of the signal to be sampled.
+      ## Higher sampling frequencies increase load on the ctrlX Data Layer.
+      ## The sampling frequency can be higher than the publish interval. Captured samples are queued and sent at the publish interval.
+      ## Note: The minimum sampling interval can be overruled by a global setting in the ctrlX Data Layer configuration ('datalayer/subscriptions/settings').
+      # sampling_interval = "1s"
+
+      ## The requested size of the node value queue. (default: 10)
+      ## Relevant if more values are captured than can be sent.
+      # queue_size = 10
+
+      ## The behaviour of the queue if it is full. (default: "DiscardOldest")
+      ## Possible values: 
+      ## - "DiscardOldest"
+      ##   The oldest value gets deleted from the queue when it is full.
+      ## - "DiscardNewest"
+      ##   The newest value gets deleted from the queue when it is full.
+      # queue_behaviour = "DiscardOldest"
+
+      ## The filter when a new value will be sampled. (default: 0.0)
+      ## Calculation rule: If (abs(lastCapturedValue - newValue) > dead_band_value) capture(newValue).
+      # dead_band_value = 0.0
+
+      ## The conditions on which a sample should be captured and thus will be sent as a message. (default: "StatusValue")
+      ## Possible values:
+      ## - "Status"
+      ##   Capture the value only, when the state of the node changes from or to error state. Value changes are ignored.
+      ## - "StatusValue" 
+      ##   Capture when the value changes or the node changes from or to error state.
+      ##   See also 'dead_band_value' for what is considered as a value change.
+      ## - "StatusValueTimestamp": 
+      ##   Capture even if the value is the same, but the timestamp of the value is newer.
+      ##   Note: This might lead to high load on the network because every sample will be sent as a message
+      ##   even if the value of the node did not change.
+      # value_change = "StatusValue"
+      
+```
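
The dead-band rule quoted in the sample configuration, `If (abs(lastCapturedValue - newValue) > dead_band_value) capture(newValue)`, can be sketched as:

```python
# Sketch of the dead-band capture rule from the sample configuration:
# a new value is captured only if it differs from the last captured
# value by more than dead_band_value.
def capture_samples(values, dead_band=0.0):
    captured = []
    last = None
    for v in values:
        if last is None or abs(last - v) > dead_band:
            captured.append(v)
            last = v
    return captured

# With a dead band of 0.5, small jitter around 20.0 is suppressed.
captured = capture_samples([20.0, 20.2, 20.4, 21.0, 20.9], dead_band=0.5)
```
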
+
+## Metrics
+
+All measurements are tagged with the server address of the device and the
+corresponding node address as defined in the ctrlX Data Layer.
+
+- measurement name
+  - tags:
+    - `source` (ctrlX Data Layer server where the metrics are gathered from)
+    - `node` (Address of the ctrlX Data Layer node)
+  - fields:
+    - `{name}` (for nodes with simple data types)
+    - `{name}_{index}`(for nodes with array data types)
+    - `{name}_{jsonflat.key}` (for nodes with object data types)
+
+### Output Format
+
+The `output_json_string` setting determines the format of the output metric.
+
+#### Output default format
+
+With the output default format
+
+```toml
+output_json_string=false
+```
+
+the output is formatted automatically as follows depending on the data type:
+
+##### Simple data type
+
+The value is passed as-is to a metric with the pattern:
+
+```text
+{name}={value}
+```
+
+Simple data types of ctrlX Data Layer:
+
+```text
+bool8,int8,uint8,int16,uint16,int32,uint32,int64,uint64,float,double,string,timestamp
+```
+
+##### Array data type
+
+Every value in the array is passed to a metric with pattern:
+
+```text
+{name}_{index}={value[index]}
+```
+
+example:
+
+```text
+myarray=[1,2,3] -> myarray_0=1, myarray_1=2, myarray_2=3
+```
+
+Array data types of ctrlX Data Layer:
+
+```text
+arbool8,arint8,aruint8,arint16,aruint16,arint32,aruint32,arint64,aruint64,arfloat,ardouble,arstring,artimestamp
+```
+
+##### Object data type (JSON)
+
+Every value of the flattened json is passed to a metric with pattern:
+
+```text
+{name}_{jsonflat.key}={jsonflat.value}
+```
+
+example:
+
+```text
+myobj={"a":1,"b":2,"c":{"d": 3}} -> myobj_a=1, myobj_b=2, myobj_c_d=3
+```
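
The flattening of object values can be sketched as follows, mirroring the `myobj` example above (a minimal sketch, not the plugin's actual implementation):

```python
# Sketch: flatten a nested JSON object into {name}_{key} field names,
# recursing into sub-objects so "myobj_c_d" is produced for {"c": {"d": 3}}.
def flatten(name, obj):
    fields = {}
    for key, value in obj.items():
        full = f"{name}_{key}"
        if isinstance(value, dict):
            fields.update(flatten(full, value))
        else:
            fields[full] = value
    return fields

fields = flatten("myobj", {"a": 1, "b": 2, "c": {"d": 3}})
```
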
+
+#### Output JSON format
+
+With the output JSON format
+
+```toml
+output_json_string=true
+```
+
+the output is formatted as a JSON string:
+
+```text
+{name}="{value}"
+```
+
+examples:
+
+```text
+input=true -> output="true"
+```
+
+```text
+input=[1,2,3] -> output="[1,2,3]"
+```
+
+```text
+input={"x":4720,"y":9440,"z":{"d": 14160}} -> output="{\"x\":4720,\"y\":9440,\"z\":{\"d\":14160}}"
+```
+
+The JSON output string can be passed to a processor plugin for transformation,
+e.g. the [Parser](/telegraf/v1/processor-plugins/parser/) or
+[Starlark](/telegraf/v1/processor-plugins/starlark/) processor plugin.
+
+
+example:
+
+```toml
+[[inputs.ctrlx_datalayer.subscription]]
+   measurement = "osci"
+   nodes = [
+     {address="oscilloscope/instances/Osci_PLC/rec-values/allsignals"},
+   ]
+   output_json_string = true
+
+[[processors.starlark]]
+   namepass = [
+      'osci',
+   ]
+   script = "oscilloscope.star"
+```
+
+## Troubleshooting
+
+This plugin was contributed by [Bosch Rexroth](https://www.boschrexroth.com).
+For questions regarding ctrlX AUTOMATION and this plugin, check out and be part
+of the [ctrlX AUTOMATION Community](https://ctrlx-automation.com/community) to
+get additional support or leave ideas and feedback.
+
+Also, join the [InfluxData Community Slack](https://influxdata.com/slack) or
+the [InfluxData Community Page](https://community.influxdata.com/) if you have
+questions or comments for the Telegraf engineering teams.
+
+## Example Output
+
+The plugin handles simple, array and object (JSON) data types.
+
+### Example with simple data type
+
+Configuration:
+
+```toml
+[[inputs.ctrlx_datalayer.subscription]]
+   measurement="memory"
+   [inputs.ctrlx_datalayer.subscription.tags]
+      sub_tag1="memory_tag1"
+      sub_tag2="memory_tag2"
+
+   [[inputs.ctrlx_datalayer.subscription.nodes]]
+      name   ="available"
+      address="framework/metrics/system/memavailable-mb"
+      [inputs.ctrlx_datalayer.subscription.nodes.tags]
+         node_tag1="memory_available_tag1"
+         node_tag2="memory_available_tag2"
+
+   [[inputs.ctrlx_datalayer.subscription.nodes]]
+      name   ="used"
+      address="framework/metrics/system/memused-mb"
+      [inputs.ctrlx_datalayer.subscription.nodes.tags]
+         node_tag1="memory_used_node_tag1"
+         node_tag2="memory_used_node_tag2"
+```
+
+Source:
+
+```json
+"framework/metrics/system/memavailable-mb" : 365.93359375
+"framework/metrics/system/memused-mb" : 567.67578125
+```
+
+Metrics:
+
+```text
+memory,source=192.168.1.1,host=host.example.com,node=framework/metrics/system/memavailable-mb,node_tag1=memory_available_tag1,node_tag2=memory_available_tag2,sub_tag1=memory_tag1,sub_tag2=memory_tag2 available=365.93359375 1680093310249627400
+memory,source=192.168.1.1,host=host.example.com,node=framework/metrics/system/memused-mb,node_tag1=memory_used_node_tag1,node_tag2=memory_used_node_tag2,sub_tag1=memory_tag1,sub_tag2=memory_tag2 used=567.67578125 1680093310249667600
+```
+
+### Example with array data type
+
+Configuration:
+
+```toml
+[[inputs.ctrlx_datalayer.subscription]]
+   measurement="array"
+   nodes=[
+      { name="ar_uint8", address="alldata/dynamic/array-of-uint8"},
+      { name="ar_bool8", address="alldata/dynamic/array-of-bool8"},
+   ]
+```
+
+Source:
+
+```json
+"alldata/dynamic/array-of-bool8" : [true, false, true]
+"alldata/dynamic/array-of-uint8" : [0, 255]
+```
+
+Metrics:
+
+```text
+array,source=192.168.1.1,host=host.example.com,node=alldata/dynamic/array-of-bool8 ar_bool8_0=true,ar_bool8_1=false,ar_bool8_2=true 1680095727347018800
+array,source=192.168.1.1,host=host.example.com,node=alldata/dynamic/array-of-uint8 ar_uint8_0=0,ar_uint8_1=255 1680095727347223300
+```
+
+### Example with object data type (JSON)
+
+Configuration:
+
+```toml
+[[inputs.ctrlx_datalayer.subscription]]
+   measurement="motion"
+   nodes=[
+      {name="linear", address="motion/axs/Axis_1/state/values/actual"},
+      {name="rotational", address="motion/axs/Axis_2/state/values/actual"},
+   ]
+```
+
+Source:
+
+```json
+"motion/axs/Axis_1/state/values/actual" : {"actualPos":65.249329860957,"actualVel":5,"actualAcc":0,"actualTorque":0,"distLeft":0,"actualPosUnit":"mm","actualVelUnit":"mm/min","actualAccUnit":"m/s^2","actualTorqueUnit":"Nm","distLeftUnit":"mm"}
+"motion/axs/Axis_2/state/values/actual" : {"actualPos":120,"actualVel":0,"actualAcc":0,"actualTorque":0,"distLeft":0,"actualPosUnit":"deg","actualVelUnit":"rpm","actualAccUnit":"rad/s^2","actualTorqueUnit":"Nm","distLeftUnit":"deg"}
+```
+
+Metrics:
+
+```text
+motion,source=192.168.1.1,host=host.example.com,node=motion/axs/Axis_1/state/values/actual linear_actualVel=5,linear_distLeftUnit="mm",linear_actualAcc=0,linear_distLeft=0,linear_actualPosUnit="mm",linear_actualAccUnit="m/s^2",linear_actualTorqueUnit="Nm",linear_actualPos=65.249329860957,linear_actualVelUnit="mm/min",linear_actualTorque=0 1680258290342523500
+motion,source=192.168.1.1,host=host.example.com,node=motion/axs/Axis_2/state/values/actual rotational_distLeft=0,rotational_actualVelUnit="rpm",rotational_actualAccUnit="rad/s^2",rotational_distLeftUnit="deg",rotational_actualPos=120,rotational_actualVel=0,rotational_actualAcc=0,rotational_actualPosUnit="deg",rotational_actualTorqueUnit="Nm",rotational_actualTorque=0 1680258290342538100
+```
+
+If `output_json_string` is set in the configuration:
+
+```toml
+  output_json_string = true
+```
+
+then the metrics will be generated like this:
+
+```text
+motion,source=192.168.1.1,host=host.example.com,node=motion/axs/Axis_1/state/values/actual linear="{\"actualAcc\":0,\"actualAccUnit\":\"m/s^2\",\"actualPos\":65.249329860957,\"actualPosUnit\":\"mm\",\"actualTorque\":0,\"actualTorqueUnit\":\"Nm\",\"actualVel\":5,\"actualVelUnit\":\"mm/min\",\"distLeft\":0,\"distLeftUnit\":\"mm\"}" 1680258290342523500
+motion,source=192.168.1.1,host=host.example.com,node=motion/axs/Axis_2/state/values/actual rotational="{\"actualAcc\":0,\"actualAccUnit\":\"rad/s^2\",\"actualPos\":120,\"actualPosUnit\":\"deg\",\"actualTorque\":0,\"actualTorqueUnit\":\"Nm\",\"actualVel\":0,\"actualVelUnit\":\"rpm\",\"distLeft\":0,\"distLeftUnit\":\"deg\"}" 1680258290342538100
+```
diff --git a/content/telegraf/v1/input-plugins/dcos/_index.md b/content/telegraf/v1/input-plugins/dcos/_index.md
new file mode 100644
index 000000000..e9746d97c
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/dcos/_index.md
@@ -0,0 +1,246 @@
+---
+description: "Telegraf plugin for collecting metrics from DC/OS"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: DC/OS
+    identifier: input-dcos
+tags: [DC/OS, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# DC/OS Input Plugin
+
+This input plugin gathers metrics from a DC/OS cluster's [metrics
+component](https://docs.mesosphere.com/1.10/metrics/).
+
+## Series Cardinality Warning
+
+Depending on the workload of your DC/OS cluster, this plugin can quickly
+create a high number of series which, when unchecked, can cause high load on
+your database.
+
+- Use the
+  [measurement filtering](https://docs.influxdata.com/telegraf/latest/administration/configuration/#measurement-filtering)
+  options to exclude unneeded tags.
+- Write to a database with an appropriate
+  [retention policy](https://docs.influxdata.com/influxdb/latest/guides/downsampling_and_retention/).
+- Consider using the
+  [Time Series Index](https://docs.influxdata.com/influxdb/latest/concepts/time-series-index/).
+- Monitor your database's
+  [series cardinality](https://docs.influxdata.com/influxdb/latest/query_language/spec/#show-cardinality).
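
A minimal sketch of such filtering, combining this plugin with Telegraf's standard `namepass`/`tagexclude` selectors (the measurement and tag names below are illustrative; adjust them to your workload):

```toml
[[inputs.dcos]]
  cluster_url = "https://dcos-master-1"
  service_account_id = "telegraf"
  service_account_private_key = "/etc/telegraf/telegraf-sa-key.pem"
  ## Keep only node-level measurements to avoid per-container series.
  namepass = ["dcos_node"]
  ## Drop the high-cardinality container_id tag from any remaining series.
  tagexclude = ["container_id"]
```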
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Input plugin for DC/OS metrics
+[[inputs.dcos]]
+  ## The DC/OS cluster URL.
+  cluster_url = "https://dcos-master-1"
+
+  ## The ID of the service account.
+  service_account_id = "telegraf"
+  ## The private key file for the service account.
+  service_account_private_key = "/etc/telegraf/telegraf-sa-key.pem"
+
+  ## Path containing login token.  If set, will read on every gather.
+  # token_file = "/home/dcos/.dcos/token"
+
+  ## In all filter options if both include and exclude are empty all items
+  ## will be collected.  Arrays may contain glob patterns.
+  ##
+  ## Node IDs to collect metrics from.  If a node is excluded, no metrics will
+  ## be collected for its containers or apps.
+  # node_include = []
+  # node_exclude = []
+  ## Container IDs to collect container metrics from.
+  # container_include = []
+  # container_exclude = []
+  ## Container IDs to collect app metrics from.
+  # app_include = []
+  # app_exclude = []
+
+  ## Maximum concurrent connections to the cluster.
+  # max_connections = 10
+  ## Maximum time to receive a response from cluster.
+  # response_timeout = "20s"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## If false, skip chain & host verification
+  # insecure_skip_verify = true
+
+  ## Recommended filtering to reduce series cardinality.
+  # [inputs.dcos.tagdrop]
+  #   path = ["/var/lib/mesos/slave/slaves/*"]
+```
+
+### Enterprise Authentication
+
+When using Enterprise DC/OS, it is recommended to use a service account to
+authenticate with the cluster.
+
+The plugin requires the following permissions:
+
+```text
+dcos:adminrouter:ops:system-metrics full
+dcos:adminrouter:ops:mesos full
+```
+
+Follow the directions to [create a service account and assign permissions](https://docs.mesosphere.com/1.10/security/service-auth/custom-service-auth/).
+
+Quick configuration using the Enterprise CLI:
+
+```text
+dcos security org service-accounts keypair telegraf-sa-key.pem telegraf-sa-cert.pem
+dcos security org service-accounts create -p telegraf-sa-cert.pem -d "Telegraf DC/OS input plugin" telegraf
+dcos security org users grant telegraf dcos:adminrouter:ops:system-metrics full
+dcos security org users grant telegraf dcos:adminrouter:ops:mesos full
+```
+
+### Open Source Authentication
+
+The Open Source DC/OS does not provide service accounts.  Instead you can use
+one of the following options:
+
+1. [Disable authentication](https://dcos.io/docs/1.10/security/managing-authentication/#authentication-opt-out)
+2. Use the `token_file` parameter to read an authentication token from a file.
+
+The `token_file` can then be set by using the DC/OS CLI to log in
+periodically.  The CLI can stay logged in for at most XXX days; you will need
+to ensure the CLI performs a new login before this time expires.
+
+```shell
+dcos auth login --username foo --password bar
+dcos config show core.dcos_acs_token > ~/.dcos/token
+```
+
+Another option to create a `token_file` is to generate a token using the
+cluster secret.  This will allow you to set the expiration date manually or
+even create a never expiring token.  However, if the cluster secret or the
+token is compromised it cannot be revoked and may require a full reinstall of
+the cluster.  For more information on this technique reference
+[this blog post](https://medium.com/@richardgirges/authenticating-open-source-dc-os-with-third-party-services-125fa33a5add).
+
+## Metrics
+
+Please consult the [Metrics Reference](https://docs.mesosphere.com/1.10/metrics/reference/) for details about field
+interpretation.
+
+- dcos_node
+  - tags:
+    - cluster
+    - hostname
+    - path (filesystem fields only)
+    - interface (network fields only)
+  - fields:
+    - system_uptime (float)
+    - cpu_cores (float)
+    - cpu_total (float)
+    - cpu_user (float)
+    - cpu_system (float)
+    - cpu_idle (float)
+    - cpu_wait (float)
+    - load_1min (float)
+    - load_5min (float)
+    - load_15min (float)
+    - filesystem_capacity_total_bytes (int)
+    - filesystem_capacity_used_bytes (int)
+    - filesystem_capacity_free_bytes (int)
+    - filesystem_inode_total (float)
+    - filesystem_inode_used (float)
+    - filesystem_inode_free (float)
+    - memory_total_bytes (int)
+    - memory_free_bytes (int)
+    - memory_buffers_bytes (int)
+    - memory_cached_bytes (int)
+    - swap_total_bytes (int)
+    - swap_free_bytes (int)
+    - swap_used_bytes (int)
+    - network_in_bytes (int)
+    - network_out_bytes (int)
+    - network_in_packets (float)
+    - network_out_packets (float)
+    - network_in_dropped (float)
+    - network_out_dropped (float)
+    - network_in_errors (float)
+    - network_out_errors (float)
+    - process_count (float)
+
+- dcos_container
+  - tags:
+    - cluster
+    - hostname
+    - container_id
+    - task_name
+  - fields:
+    - cpus_limit (float)
+    - cpus_system_time (float)
+    - cpus_throttled_time (float)
+    - cpus_user_time (float)
+    - disk_limit_bytes (int)
+    - disk_used_bytes (int)
+    - mem_limit_bytes (int)
+    - mem_total_bytes (int)
+    - net_rx_bytes (int)
+    - net_rx_dropped (float)
+    - net_rx_errors (float)
+    - net_rx_packets (float)
+    - net_tx_bytes (int)
+    - net_tx_dropped (float)
+    - net_tx_errors (float)
+    - net_tx_packets (float)
+
+- dcos_app
+  - tags:
+    - cluster
+    - hostname
+    - container_id
+    - task_name
+  - fields:
+    - fields are application specific
+
+## Example Output
+
+```text
+dcos_node,cluster=enterprise,hostname=192.168.122.18,path=/boot filesystem_capacity_free_bytes=918188032i,filesystem_capacity_total_bytes=1063256064i,filesystem_capacity_used_bytes=145068032i,filesystem_inode_free=523958,filesystem_inode_total=524288,filesystem_inode_used=330 1511859222000000000
+dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=dummy0 network_in_bytes=0i,network_in_dropped=0,network_in_errors=0,network_in_packets=0,network_out_bytes=0i,network_out_dropped=0,network_out_errors=0,network_out_packets=0 1511859222000000000
+dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=docker0 network_in_bytes=0i,network_in_dropped=0,network_in_errors=0,network_in_packets=0,network_out_bytes=0i,network_out_dropped=0,network_out_errors=0,network_out_packets=0 1511859222000000000
+dcos_node,cluster=enterprise,hostname=192.168.122.18 cpu_cores=2,cpu_idle=81.62,cpu_system=4.19,cpu_total=13.670000000000002,cpu_user=9.48,cpu_wait=0,load_15min=0.7,load_1min=0.22,load_5min=0.6,memory_buffers_bytes=970752i,memory_cached_bytes=1830473728i,memory_free_bytes=1178636288i,memory_total_bytes=3975073792i,process_count=198,swap_free_bytes=859828224i,swap_total_bytes=859828224i,swap_used_bytes=0i,system_uptime=18874 1511859222000000000
+dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=lo network_in_bytes=1090992450i,network_in_dropped=0,network_in_errors=0,network_in_packets=1546938,network_out_bytes=1090992450i,network_out_dropped=0,network_out_errors=0,network_out_packets=1546938 1511859222000000000
+dcos_node,cluster=enterprise,hostname=192.168.122.18,path=/ filesystem_capacity_free_bytes=1668378624i,filesystem_capacity_total_bytes=6641680384i,filesystem_capacity_used_bytes=4973301760i,filesystem_inode_free=3107856,filesystem_inode_total=3248128,filesystem_inode_used=140272 1511859222000000000
+dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=minuteman network_in_bytes=0i,network_in_dropped=0,network_in_errors=0,network_in_packets=0,network_out_bytes=210i,network_out_dropped=0,network_out_errors=0,network_out_packets=3 1511859222000000000
+dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=eth0 network_in_bytes=539886216i,network_in_dropped=1,network_in_errors=0,network_in_packets=979808,network_out_bytes=112395836i,network_out_dropped=0,network_out_errors=0,network_out_packets=891239 1511859222000000000
+dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=spartan network_in_bytes=0i,network_in_dropped=0,network_in_errors=0,network_in_packets=0,network_out_bytes=210i,network_out_dropped=0,network_out_errors=0,network_out_packets=3 1511859222000000000
+dcos_node,cluster=enterprise,hostname=192.168.122.18,path=/var/lib/docker/overlay filesystem_capacity_free_bytes=1668378624i,filesystem_capacity_total_bytes=6641680384i,filesystem_capacity_used_bytes=4973301760i,filesystem_inode_free=3107856,filesystem_inode_total=3248128,filesystem_inode_used=140272 1511859222000000000
+dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=vtep1024 network_in_bytes=0i,network_in_dropped=0,network_in_errors=0,network_in_packets=0,network_out_bytes=0i,network_out_dropped=0,network_out_errors=0,network_out_packets=0 1511859222000000000
+dcos_node,cluster=enterprise,hostname=192.168.122.18,path=/var/lib/docker/plugins filesystem_capacity_free_bytes=1668378624i,filesystem_capacity_total_bytes=6641680384i,filesystem_capacity_used_bytes=4973301760i,filesystem_inode_free=3107856,filesystem_inode_total=3248128,filesystem_inode_used=140272 1511859222000000000
+dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=d-dcos network_in_bytes=0i,network_in_dropped=0,network_in_errors=0,network_in_packets=0,network_out_bytes=0i,network_out_dropped=0,network_out_errors=0,network_out_packets=0 1511859222000000000
+dcos_app,cluster=enterprise,container_id=9a78d34a-3bbf-467e-81cf-a57737f154ee,hostname=192.168.122.18 container_received_bytes_per_sec=0,container_throttled_bytes_per_sec=0 1511859222000000000
+dcos_container,cluster=enterprise,container_id=cbf19b77-3b8d-4bcf-b81f-824b67279629,hostname=192.168.122.18 cpus_limit=0.3,cpus_system_time=307.31,cpus_throttled_time=102.029930607,cpus_user_time=268.57,disk_limit_bytes=268435456i,disk_used_bytes=30953472i,mem_limit_bytes=570425344i,mem_total_bytes=13316096i,net_rx_bytes=0i,net_rx_dropped=0,net_rx_errors=0,net_rx_packets=0,net_tx_bytes=0i,net_tx_dropped=0,net_tx_errors=0,net_tx_packets=0 1511859222000000000
+dcos_app,cluster=enterprise,container_id=cbf19b77-3b8d-4bcf-b81f-824b67279629,hostname=192.168.122.18 container_received_bytes_per_sec=0,container_throttled_bytes_per_sec=0 1511859222000000000
+dcos_container,cluster=enterprise,container_id=5725e219-f66e-40a8-b3ab-519d85f4c4dc,hostname=192.168.122.18,task_name=hello-world cpus_limit=0.6,cpus_system_time=25.6,cpus_throttled_time=327.977109217,cpus_user_time=566.54,disk_limit_bytes=0i,disk_used_bytes=0i,mem_limit_bytes=1107296256i,mem_total_bytes=335941632i,net_rx_bytes=0i,net_rx_dropped=0,net_rx_errors=0,net_rx_packets=0,net_tx_bytes=0i,net_tx_dropped=0,net_tx_errors=0,net_tx_packets=0 1511859222000000000
+dcos_app,cluster=enterprise,container_id=5725e219-f66e-40a8-b3ab-519d85f4c4dc,hostname=192.168.122.18 container_received_bytes_per_sec=0,container_throttled_bytes_per_sec=0 1511859222000000000
+dcos_app,cluster=enterprise,container_id=c76e1488-4fb7-4010-a4cf-25725f8173f9,hostname=192.168.122.18 container_received_bytes_per_sec=0,container_throttled_bytes_per_sec=0 1511859222000000000
+dcos_container,cluster=enterprise,container_id=cbe0b2f9-061f-44ac-8f15-4844229e8231,hostname=192.168.122.18,task_name=telegraf cpus_limit=0.2,cpus_system_time=8.109999999,cpus_throttled_time=93.183916045,cpus_user_time=17.97,disk_limit_bytes=0i,disk_used_bytes=0i,mem_limit_bytes=167772160i,mem_total_bytes=0i,net_rx_bytes=0i,net_rx_dropped=0,net_rx_errors=0,net_rx_packets=0,net_tx_bytes=0i,net_tx_dropped=0,net_tx_errors=0,net_tx_packets=0 1511859222000000000
+dcos_container,cluster=enterprise,container_id=b64115de-3d2a-431d-a805-76e7c46453f1,hostname=192.168.122.18 cpus_limit=0.2,cpus_system_time=2.69,cpus_throttled_time=20.064861214,cpus_user_time=6.56,disk_limit_bytes=268435456i,disk_used_bytes=29360128i,mem_limit_bytes=297795584i,mem_total_bytes=13733888i,net_rx_bytes=0i,net_rx_dropped=0,net_rx_errors=0,net_rx_packets=0,net_tx_bytes=0i,net_tx_dropped=0,net_tx_errors=0,net_tx_packets=0 1511859222000000000
+dcos_app,cluster=enterprise,container_id=b64115de-3d2a-431d-a805-76e7c46453f1,hostname=192.168.122.18 container_received_bytes_per_sec=0,container_throttled_bytes_per_sec=0 1511859222000000000
+```
diff --git a/content/telegraf/v1/input-plugins/directory_monitor/_index.md b/content/telegraf/v1/input-plugins/directory_monitor/_index.md
new file mode 100644
index 000000000..dfd7b3024
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/directory_monitor/_index.md
@@ -0,0 +1,116 @@
+---
+description: "Telegraf plugin for collecting metrics from Directory Monitor"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Directory Monitor
+    identifier: input-directory_monitor
+tags: [Directory Monitor, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Directory Monitor Input Plugin
+
+This plugin monitors a single directory (including sub-directories when
+`recursive` is enabled) and ingests each file placed in the directory.  The
+plugin gathers all files in the directory at the configured interval and
+parses the ones that haven't been picked up yet.
+
+This plugin is intended to read files that are moved or copied to the
+monitored directory; files should not be in use by another process, or they
+may fail to be gathered.  Be advised that this plugin picks up files once they
+have been in the directory for the configurable
+`directory_duration_threshold`, so files should not be written 'live' to the
+monitored directory.  If you absolutely must write files directly, they must
+be guaranteed to finish writing before the `directory_duration_threshold`
+elapses.
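
One common way to satisfy this constraint is to write the file elsewhere on the same filesystem and then move it into the monitored directory, since a same-filesystem `mv` is an atomic rename. A minimal sketch (both directories below are illustrative temporary paths standing in for your real staging and monitored directories):

```shell
# Stage the file outside the monitored directory, then rename it in.
staging=$(mktemp -d)    # stands in for a staging directory
watched=$(mktemp -d)    # stands in for the monitored directory

printf 'cpu usage=1i 1680000000000000000\n' > "$staging/batch.lp"

# On the same filesystem, mv is a rename(2): the file appears in the
# monitored directory fully written, never half-copied.
mv "$staging/batch.lp" "$watched/batch.lp"
```

If the staging directory is on a different filesystem, `mv` degrades to a copy-then-delete and the atomicity guarantee is lost.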
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Ingests files in a directory and then moves them to a target directory.
+[[inputs.directory_monitor]]
+  ## The directory to monitor and read files from (including sub-directories if "recursive" is true).
+  directory = ""
+  #
+  ## The directory to move finished files to (maintaining directory hierarchy from source).
+  finished_directory = ""
+  #
+  ## Setting recursive to true will make the plugin recursively walk the directory and process all sub-directories.
+  # recursive = false
+  #
+  ## The directory to move files to upon file error.
+  ## If not provided, erroring files will stay in the monitored directory.
+  # error_directory = ""
+  #
+  ## The amount of time a file is allowed to sit in the directory before it is picked up.
+  ## This time can generally be low but if you choose to have a very large file written to the directory and it's potentially slow,
+  ## set this higher so that the plugin will wait until the file is fully copied to the directory.
+  # directory_duration_threshold = "50ms"
+  #
+  ## A list of the only file names to monitor, if necessary. Supports regex. If left blank, all files are ingested.
+  # files_to_monitor = ["^.*\\.csv"]
+  #
+  ## A list of files to ignore, if necessary. Supports regex.
+  # files_to_ignore = [".DS_Store"]
+  #
+  ## Maximum lines of the file to process that have not yet been written by the
+  ## output. For best throughput set to the size of the output's metric_buffer_limit.
+  ## Warning: setting this number higher than the output's metric_buffer_limit can cause dropped metrics.
+  # max_buffered_metrics = 10000
+  #
+  ## The maximum amount of file paths to queue up for processing at once, before waiting until files are processed to find more files.
+  ## Lowering this value will result in *slightly* less memory use, with a potential sacrifice in speed efficiency, if absolutely necessary.
+  # file_queue_size = 100000
+  #
+  ## Name a tag containing the name of the file the data was parsed from.  Leave empty
+  ## to disable. Be cautious when file name variation is high, as this can increase the
+  ## cardinality significantly. Read more about cardinality here:
+  ## https://docs.influxdata.com/influxdb/cloud/reference/glossary/#series-cardinality
+  # file_tag = ""
+  #
+  ## Specify if the file can be read completely at once or if it needs to be read line by line (default).
+  ## Possible values: "line-by-line", "at-once"
+  # parse_method = "line-by-line"
+  #
+  ## The dataformat to be read from the files.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  data_format = "influx"
+```
+
+## Metrics
+
+The format of metrics produced by this plugin depends on the content and data
+format of the file.
+
+When the [internal](/telegraf/v1/plugins/#input-internal) input is enabled:
+
+- internal_directory_monitor
+  - fields:
+    - files_processed - How many files have been processed (counter)
+    - files_dropped - How many files have been dropped (counter)
+- internal_directory_monitor
+  - tags:
+    - directory - The monitored directory
+  - fields:
+    - files_processed_per_dir - How many files have been processed (counter)
+    - files_dropped_per_dir - How many files have been dropped (counter)
+    - files_queue_per_dir - How many files to be processed (gauge)
+
+## Example Output
+
+The metrics produced by this plugin depend on the content and data format of
+the file.
diff --git a/content/telegraf/v1/input-plugins/disk/_index.md b/content/telegraf/v1/input-plugins/disk/_index.md
new file mode 100644
index 000000000..676669cbc
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/disk/_index.md
@@ -0,0 +1,108 @@
+---
+description: "Telegraf plugin for collecting metrics from Disk"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Disk
+    identifier: input-disk
+tags: [Disk, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Disk Input Plugin
+
+The disk input plugin gathers metrics about disk usage.
+
+Note that `used_percent` is calculated by doing `used / (used + free)`, _not_
+`used / total`, which is how the unix `df` command does it. See
+[wikipedia - df](https://en.wikipedia.org/wiki/Df_(Unix)) for more details.
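
To make the difference concrete, here is a quick sketch with made-up numbers for a filesystem that reserves 5% of its blocks (as ext4 does by default), so `used + free < total`:

```shell
# Hypothetical filesystem: 100 GB total, 40 GB used, 55 GB free
# (5 GB reserved, counted neither as used nor as free).
awk 'BEGIN {
  used = 40; free = 55; total = 100
  # This plugin: used / (used + free)
  printf "telegraf used_percent: %.2f\n", used / (used + free) * 100
  # df: used / total
  printf "df       used_percent: %.2f\n", used / total * 100
}'
```

With these numbers the plugin reports 42.11% while the `df` formula gives 40.00%.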
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics about disk usage by mount point
+[[inputs.disk]]
+  ## By default stats will be gathered for all mount points.
+  ## Setting mount_points will restrict the stats to only the specified mount points.
+  # mount_points = ["/"]
+
+  ## Ignore mount points by filesystem type.
+  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]
+
+  ## Ignore mount points by mount options.
+  ## The 'mount' command reports options of all mounts in parentheses.
+  ## Bind mounts can be ignored with the special 'bind' option.
+  # ignore_mount_opts = []
+```
+
+### Docker container
+
+To monitor the Docker engine host from within a container you will need to mount
+the host's filesystem into the container and set the `HOST_PROC` environment
+variable to the location of the `/proc` filesystem.  If desired, you can also
+set the `HOST_MOUNT_PREFIX` environment variable to the prefix containing the
+`/proc` directory, when present this variable is stripped from the reported
+`path` tag.
+
+```shell
+docker run -v /:/hostfs:ro -e HOST_MOUNT_PREFIX=/hostfs -e HOST_PROC=/hostfs/proc telegraf
+```
+
+## Metrics
+
+- disk
+  - tags:
+    - fstype (filesystem type)
+    - device (device file)
+    - path (mount point path)
+    - mode (whether the mount is rw or ro)
+    - label (devicemapper labels, only if present)
+  - fields:
+    - free (integer, bytes)
+    - total (integer, bytes)
+    - used (integer, bytes)
+    - used_percent (float, percent)
+    - inodes_free (integer, files)
+    - inodes_total (integer, files)
+    - inodes_used (integer, files)
+    - inodes_used_percent (float, percent)
+
+## Troubleshooting
+
+On Linux, the list of disks is taken from the `/proc/self/mounts` file and a
+[statfs] call is made on the second column.  If any expected filesystems are
+missing, ensure that the `telegraf` user can read these files:
+
+```shell
+$ sudo -u telegraf cat /proc/self/mounts | grep sda2
+/dev/sda2 /home ext4 rw,relatime,data=ordered 0 0
+$ sudo -u telegraf stat /home
+```
+
+It may be desired to use POSIX ACLs to provide additional access:
+
+```shell
+sudo setfacl -R -m u:telegraf:X /var/lib/docker/volumes/
+```
+
+## Example Output
+
+```text
+disk,fstype=hfs,mode=ro,path=/ free=398407520256i,inodes_free=97267461i,inodes_total=121847806i,inodes_used=24580345i,total=499088621568i,used=100418957312i,used_percent=20.131039916242397,inodes_used_percent=20.1729894 1453832006274071563
+disk,fstype=devfs,mode=rw,path=/dev free=0i,inodes_free=0i,inodes_total=628i,inodes_used=628i,total=185856i,used=185856i,used_percent=100,inodes_used_percent=100 1453832006274137913
+disk,fstype=autofs,mode=rw,path=/net free=0i,inodes_free=0i,inodes_total=0i,inodes_used=0i,total=0i,used=0i,used_percent=0,inodes_used_percent=0 1453832006274157077
+disk,fstype=autofs,mode=rw,path=/home free=0i,inodes_free=0i,inodes_total=0i,inodes_used=0i,total=0i,used=0i,used_percent=0,inodes_used_percent=0 1453832006274169688
+disk,device=dm-1,fstype=xfs,label=lvg-lv,mode=rw,path=/mnt inodes_free=8388605i,inodes_used=3i,total=17112760320i,free=16959598592i,used=153161728i,used_percent=0.8950147441789215,inodes_total=8388608i,inodes_used_percent=0.0017530778 1677001387000000000
+```
+
+[statfs]: http://man7.org/linux/man-pages/man2/statfs.2.html
diff --git a/content/telegraf/v1/input-plugins/diskio/_index.md b/content/telegraf/v1/input-plugins/diskio/_index.md
new file mode 100644
index 000000000..ed84e786a
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/diskio/_index.md
@@ -0,0 +1,161 @@
+---
+description: "Telegraf plugin for collecting metrics from DiskIO"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: DiskIO
+    identifier: input-diskio
+tags: [DiskIO, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# DiskIO Input Plugin
+
+The diskio input plugin gathers metrics about disk traffic and timing.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics about disk IO by device
+[[inputs.diskio]]
+  ## Devices to collect stats for
+  ## Wildcards are supported except for disk synonyms like '/dev/disk/by-id'.
+  ## ex. devices = ["sda", "sdb", "vd*", "/dev/disk/by-id/nvme-eui.00123deadc0de123"]
+  # devices = ["*"]
+
+  ## Skip gathering of the disk's serial numbers.
+  # skip_serial_number = true
+
+  ## Device metadata tags to add on systems supporting it (Linux only)
+  ## Use 'udevadm info -q property -n <device>' to get a list of properties.
+  ## Note: Most, but not all, udev properties can be accessed this way. Properties
+  ## that are currently inaccessible include DEVTYPE, DEVNAME, and DEVPATH.
+  # device_tags = ["ID_FS_TYPE", "ID_FS_USAGE"]
+
+  ## Using the same metadata source as device_tags, you can also customize the
+  ## name of the device via templates.
+  ## The 'name_templates' parameter is a list of templates to try and apply to
+  ## the device. The template may contain variables in the form of '$PROPERTY' or
+  ## '${PROPERTY}'. The first template which does not contain any variables not
+  ## present for the device is used as the device name tag.
+  ## The typical use case is for LVM volumes, to get the VG/LV name instead of
+  ## the near-meaningless DM-0 name.
+  # name_templates = ["$ID_FS_LABEL","$DM_VG_NAME/$DM_LV_NAME"]
+```
+
+### Docker container
+
+To monitor the Docker engine host from within a container you will need to
+mount the host's filesystem into the container and set the `HOST_PROC`
+environment variable to the location of the `/proc` filesystem.  Additionally,
+it is required to use privileged mode to provide access to `/dev`.
+
+If you are using the `device_tags` or `name_templates` options, you will need
+to bind mount `/run/udev` into the container.
+
+```shell
+docker run --privileged -v /:/hostfs:ro -v /run/udev:/run/udev:ro -e HOST_PROC=/hostfs/proc telegraf
+```
+
+## Metrics
+
+- diskio
+  - tags:
+    - name (device name)
+    - serial (device serial number)
+  - fields:
+    - reads (integer, counter)
+    - writes (integer, counter)
+    - read_bytes (integer, counter, bytes)
+    - write_bytes (integer, counter, bytes)
+    - read_time (integer, counter, milliseconds)
+    - write_time (integer, counter, milliseconds)
+    - io_time (integer, counter, milliseconds)
+    - weighted_io_time (integer, counter, milliseconds)
+    - iops_in_progress (integer, gauge)
+    - merged_reads (integer, counter)
+    - merged_writes (integer, counter)
+
+On Linux these values correspond to the values in
+[`/proc/diskstats`](https://www.kernel.org/doc/Documentation/ABI/testing/procfs-diskstats)
+and [`/sys/block/<dev>/stat`](https://www.kernel.org/doc/Documentation/block/stat.txt).
+
+### `reads` & `writes`
+
+These values increment when an I/O request completes.
+
+### `read_bytes` & `write_bytes`
+
+These values count the number of bytes read from or written to this
+block device.
+
+### `read_time` & `write_time`
+
+These values count the number of milliseconds that I/O requests have
+waited on this block device.  If there are multiple I/O requests waiting,
+these values will increase at a rate greater than 1000/second; for
+example, if 60 read requests wait for an average of 30 ms, the read_time
+field will increase by 60*30 = 1800.
+
+### `io_time`
+
+This value counts the number of milliseconds during which the device has
+had I/O requests queued.
+
+### `weighted_io_time`
+
+This value counts the number of milliseconds that I/O requests have waited
+on this block device.  If there are multiple I/O requests waiting, this
+value will increase as the product of the number of milliseconds times the
+number of requests waiting (see `read_time` above for an example).
+
+### `iops_in_progress`
+
+This value counts the number of I/O requests that have been issued to
+the device driver but have not yet completed.  It does not include I/O
+requests that are in the queue but not yet issued to the device driver.
+
+### `merged_reads` & `merged_writes`
+
+Reads and writes which are adjacent to each other may be merged for
+efficiency.  Thus two 4K reads may become one 8K read before it is
+ultimately handed to the disk, and so it will be counted (and queued)
+as only one I/O. These fields let you know how often this was done.
+
+## Sample Queries
+
+### Calculate percent IO utilization per disk and host
+
+```sql
+SELECT non_negative_derivative(last("io_time"),1ms) FROM "diskio" WHERE time > now() - 30m GROUP BY "host","name",time(60s)
+```
+
+### Calculate average queue depth
+
+`iops_in_progress` gives you an instantaneous value. This query gives you the
+average between polling intervals.
+
+```sql
+SELECT non_negative_derivative(last("weighted_io_time"),1ms) from "diskio" WHERE time > now() - 30m GROUP BY "host","name",time(60s)
+```
+
+## Example Output
+
+```text
+diskio,name=sda1 merged_reads=0i,reads=2353i,writes=10i,write_bytes=2117632i,write_time=49i,io_time=1271i,weighted_io_time=1350i,read_bytes=31350272i,read_time=1303i,iops_in_progress=0i,merged_writes=0i 1578326400000000000
+diskio,name=centos/var_log reads=1063077i,writes=591025i,read_bytes=139325491712i,write_bytes=144233131520i,read_time=650221i,write_time=24368817i,io_time=852490i,weighted_io_time=25037394i,iops_in_progress=1i,merged_reads=0i,merged_writes=0i 1578326400000000000
+diskio,name=sda write_time=49i,io_time=1317i,weighted_io_time=1404i,reads=2495i,read_time=1357i,write_bytes=2117632i,iops_in_progress=0i,merged_reads=0i,merged_writes=0i,writes=10i,read_bytes=38956544i 1578326400000000000
+```
diff --git a/content/telegraf/v1/input-plugins/disque/_index.md b/content/telegraf/v1/input-plugins/disque/_index.md
new file mode 100644
index 000000000..bc5f84792
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/disque/_index.md
@@ -0,0 +1,61 @@
+---
+description: "Telegraf plugin for collecting metrics from Disque"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Disque
+    identifier: input-disque
+tags: [Disque, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Disque Input Plugin
+
+[Disque](https://github.com/antirez/disque) is an ongoing experiment to build a
+distributed, in-memory, message broker.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from one or many disque servers
+[[inputs.disque]]
+  ## An array of URIs to gather stats about. Specify an ip or hostname
+  ## with optional port and password.
+  ## e.g. disque://localhost, disque://10.10.3.33:18832, 10.0.0.1:10000, etc.
+  ## If no servers are specified, then localhost is used as the host.
+  servers = ["localhost"]
+```
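+To monitor several servers at once, list each URI. The addresses below reuse
+the placeholder examples from the comment above:
+
+```toml
+[[inputs.disque]]
+  servers = ["disque://localhost", "disque://10.10.3.33:18832", "10.0.0.1:10000"]
+```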
+
+## Metrics
+
+- disque
+  - disque_host
+    - uptime_in_seconds
+    - connected_clients
+    - blocked_clients
+    - used_memory
+    - used_memory_rss
+    - used_memory_peak
+    - total_connections_received
+    - total_commands_processed
+    - instantaneous_ops_per_sec
+    - latest_fork_usec
+    - mem_fragmentation_ratio
+    - used_cpu_sys
+    - used_cpu_user
+    - used_cpu_sys_children
+    - used_cpu_user_children
+    - registered_jobs
+    - registered_queues
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/dmcache/_index.md b/content/telegraf/v1/input-plugins/dmcache/_index.md
new file mode 100644
index 000000000..033834344
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/dmcache/_index.md
@@ -0,0 +1,71 @@
+---
+description: "Telegraf plugin for collecting metrics from DMCache"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: DMCache
+    identifier: input-dmcache
+tags: [DMCache, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# DMCache Input Plugin
+
+This plugin provides native collection of dmsetup-based statistics for
+dm-cache.
+
+This plugin requires sudo, so make sure that Telegraf is able to execute
+sudo without a password.
+
+`sudo /sbin/dmsetup status --target cache` is the full command that Telegraf
+runs, which can also be used for debugging purposes.
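+One way to grant this, assuming Telegraf runs as the `telegraf` user, is a
+sudoers drop-in that permits only this exact command without a password (a
+sketch; the file path and binary location are assumptions, adjust them for
+your distribution):
+
+```text
+# /etc/sudoers.d/telegraf -- edit with: visudo -f /etc/sudoers.d/telegraf
+Cmnd_Alias DMSETUP = /sbin/dmsetup status --target cache
+telegraf ALL=(root) NOPASSWD: DMSETUP
+```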
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Provide a native collection for dmsetup based statistics for dm-cache
+# This plugin ONLY supports Linux
+[[inputs.dmcache]]
+  ## Whether to report per-device stats or not
+  per_device = true
+```
+
+## Metrics
+
+- dmcache
+  - length
+  - target
+  - metadata_blocksize
+  - metadata_used
+  - metadata_total
+  - cache_blocksize
+  - cache_used
+  - cache_total
+  - read_hits
+  - read_misses
+  - write_hits
+  - write_misses
+  - demotions
+  - promotions
+  - dirty
+
+## Tags
+
+- All measurements have the following tags:
+  - device
+
+## Example Output
+
+```text
+dmcache,device=example cache_blocksize=0i,read_hits=995134034411520i,read_misses=916807089127424i,write_hits=195107267543040i,metadata_used=12861440i,write_misses=563725346013184i,promotions=3265223720960i,dirty=0i,metadata_blocksize=0i,cache_used=1099511627776i,cache_total=0i,length=0i,metadata_total=1073741824i,demotions=3265223720960i 1491482035000000000
+```
diff --git a/content/telegraf/v1/input-plugins/dns_query/_index.md b/content/telegraf/v1/input-plugins/dns_query/_index.md
new file mode 100644
index 000000000..b0229ea7e
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/dns_query/_index.md
@@ -0,0 +1,101 @@
+---
+description: "Telegraf plugin for collecting metrics from DNS Query"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: DNS Query
+    identifier: input-dns_query
+tags: [DNS Query, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# DNS Query Input Plugin
+
+The DNS Query plugin gathers DNS query times in milliseconds, similar to
+[Dig](https://en.wikipedia.org/wiki/Dig_\(command\)).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Query given DNS server and gives statistics
+[[inputs.dns_query]]
+  ## servers to query
+  servers = ["8.8.8.8"]
+
+  ## Network is the network protocol name.
+  # network = "udp"
+
+  ## Domains or subdomains to query.
+  # domains = ["."]
+
+  ## Query record type.
+  ## Possible values: A, AAAA, CNAME, MX, NS, PTR, TXT, SOA, SPF, SRV.
+  # record_type = "A"
+
+  ## Dns server port.
+  # port = 53
+
+  ## Query timeout
+  # timeout = "2s"
+
+  ## Include the specified additional properties in the resulting metric.
+  ## The following values are supported:
+  ##    "first_ip" -- return IP of the first A and AAAA answer
+  ##    "all_ips"  -- return IPs of all A and AAAA answers
+  # include_fields = []
+```
+
+## Metrics
+
+- dns_query
+  - tags:
+    - server
+    - domain
+    - record_type
+    - result
+    - rcode
+  - fields:
+    - query_time_ms (float)
+    - result_code (int, success = 0, timeout = 1, error = 2)
+    - rcode_value (int)
+
+## Rcode Descriptions
+
+|rcode_value|rcode|Description|
+|---|-----------|-----------------------------------|
+|0  | NoError   | No Error                          |
+|1  | FormErr   | Format Error                      |
+|2  | ServFail  | Server Failure                    |
+|3  | NXDomain  | Non-Existent Domain               |
+|4  | NotImp    | Not Implemented                   |
+|5  | Refused   | Query Refused                     |
+|6  | YXDomain  | Name Exists when it should not    |
+|7  | YXRRSet   | RR Set Exists when it should not  |
+|8  | NXRRSet   | RR Set that should exist does not |
+|9  | NotAuth   | Server Not Authoritative for zone |
+|10 | NotZone   | Name not contained in zone        |
+|16 | BADSIG    | TSIG Signature Failure            |
+|16 | BADVERS   | Bad OPT Version                   |
+|17 | BADKEY    | Key not recognized                |
+|18 | BADTIME   | Signature out of time window      |
+|19 | BADMODE   | Bad TKEY Mode                     |
+|20 | BADNAME   | Duplicate key name                |
+|21 | BADALG    | Algorithm not supported           |
+|22 | BADTRUNC  | Bad Truncation                    |
+|23 | BADCOOKIE | Bad/missing Server Cookie         |
+
+## Example Output
+
+```text
+dns_query,domain=google.com,rcode=NOERROR,record_type=A,result=success,server=127.0.0.1 rcode_value=0i,result_code=0i,query_time_ms=0.13746 1550020750001000000
+```
diff --git a/content/telegraf/v1/input-plugins/docker/_index.md b/content/telegraf/v1/input-plugins/docker/_index.md
new file mode 100644
index 000000000..900f60b4c
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/docker/_index.md
@@ -0,0 +1,423 @@
+---
+description: "Telegraf plugin for collecting metrics from Docker"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Docker
+    identifier: input-docker
+tags: [Docker, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Docker Input Plugin
+
+The docker plugin uses the Docker Engine API to gather metrics on running
+Docker containers. It relies on the [Official Docker Client](https://github.com/moby/moby/tree/master/client) to query the
+[Engine API](https://docs.docker.com/engine/api/v1.24/).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics about docker containers
+[[inputs.docker]]
+  ## Docker Endpoint
+  ##   To use TCP, set endpoint = "tcp://[ip]:[port]"
+  ##   To use environment variables (ie, docker-machine), set endpoint = "ENV"
+  endpoint = "unix:///var/run/docker.sock"
+
+  ## Set to true to collect Swarm metrics(desired_replicas, running_replicas)
+  ## Note: configure this in one of the manager nodes in a Swarm cluster.
+  ## configuring in multiple Swarm managers results in duplication of metrics.
+  gather_services = false
+
+  ## Only collect metrics for these containers. Values will be appended to
+  ## container_name_include.
+  ## Deprecated (1.4.0), use container_name_include
+  container_names = []
+
+  ## Set the source tag for the metrics to the container ID hostname, eg first 12 chars
+  source_tag = false
+
+  ## Containers to include and exclude. Collect all if empty. Globs accepted.
+  container_name_include = []
+  container_name_exclude = []
+
+  ## Container states to include and exclude. Globs accepted.
+  ## When empty only containers in the "running" state will be captured.
+  ## example: container_state_include = ["created", "restarting", "running", "removing", "paused", "exited", "dead"]
+  ## example: container_state_exclude = ["created", "restarting", "running", "removing", "paused", "exited", "dead"]
+  # container_state_include = []
+  # container_state_exclude = []
+
+  ## Objects to include for disk usage query
+  ## Allowed values are "container", "image", "volume" 
+  ## When empty disk usage is excluded
+  storage_objects = []
+
+  ## Timeout for docker list, info, and stats commands
+  timeout = "5s"
+
+  ## Specifies for which classes a per-device metric should be issued
+  ## Possible values are 'cpu' (cpu0, cpu1, ...), 'blkio' (8:0, 8:1, ...) and 'network' (eth0, eth1, ...)
+  ## Please note that this setting has no effect if 'perdevice' is set to 'true'
+  # perdevice_include = ["cpu"]
+
+  ## Specifies for which classes a total metric should be issued. Total is an aggregated of the 'perdevice' values.
+  ## Possible values are 'cpu', 'blkio' and 'network'
+  ## Total 'cpu' is reported directly by Docker daemon, and 'network' and 'blkio' totals are aggregated by this plugin.
+  ## Please note that this setting has no effect if 'total' is set to 'false'
+  # total_include = ["cpu", "blkio", "network"]
+
+  ## docker labels to include and exclude as tags.  Globs accepted.
+  ## Note that an empty array for both will include all labels as tags
+  docker_label_include = []
+  docker_label_exclude = []
+
+  ## Which environment variables should we use as a tag
+  tag_env = ["JAVA_HOME", "HEAP_SIZE"]
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+### Environment Configuration
+
+When using the `"ENV"` endpoint, the connection is configured using the
+[CLI Docker environment variables](https://godoc.org/github.com/moby/moby/client#NewEnvClient).
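+For example, the standard Docker client environment variables can point the
+plugin at a remote TLS-enabled daemon (the values below are placeholders):
+
+```shell
+# Read by the Docker client when endpoint = "ENV"
+export DOCKER_HOST="tcp://10.0.0.5:2376"
+export DOCKER_TLS_VERIFY="1"
+export DOCKER_CERT_PATH="/etc/telegraf/docker-certs"
+```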
+
+### Security
+
+Giving telegraf access to the Docker daemon expands the [attack surface](https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface) that
+could result in an attacker gaining root access to a machine. This is especially
+relevant if the telegraf configuration can be changed by untrusted users.
+
+
+### Docker Daemon Permissions
+
+Typically, telegraf must be given permission to access the docker daemon unix
+socket when using the default endpoint. This can be done by adding the
+`telegraf` unix user (created when installing a Telegraf package) to the
+`docker` unix group with the following command:
+
+```shell
+sudo usermod -aG docker telegraf
+```
+
+If Telegraf is run within a container, the unix socket will need to be exposed
+within the Telegraf container. This can be done in the Docker CLI by adding the
+option `-v /var/run/docker.sock:/var/run/docker.sock`, or by adding the
+following lines to the Telegraf container definition in a Docker Compose file.
+Additionally, the `telegraf` user in the container must be assigned to the
+`docker` group ID from the host:
+
+```yaml
+user: telegraf:<host_docker_gid>
+volumes:
+  - /var/run/docker.sock:/var/run/docker.sock
+```
+
+### source tag
+
+Selecting the measurements of specific containers can be tricky if you have
+many containers with the same name. To alleviate this issue you can set the
+below value to `true`:
+
+```toml
+source_tag = true
+```
+
+This will cause all measurements to have the `source` tag set to the first 12
+characters of the container ID. The first 12 characters are the common hostname
+for containers that have no explicit hostname set, as defined by Docker.
+
+### Kubernetes Labels
+
+Kubernetes may add many labels to your containers, if they are not needed you
+may prefer to exclude them:
+
+```toml
+  docker_label_exclude = ["annotation.kubernetes*"]
+```
+
+### Docker-compose Labels
+
+Docker Compose will add labels to your containers. You can restrict labels
+to selected ones, e.g.
+
+```toml
+  docker_label_include = [
+    "com.docker.compose.config-hash",
+    "com.docker.compose.container-number",
+    "com.docker.compose.oneoff",
+    "com.docker.compose.project",
+    "com.docker.compose.service",
+  ]
+```
+
+## Metrics
+
+- docker
+  - tags:
+    - unit
+    - engine_host
+    - server_version
+  - fields:
+    - n_used_file_descriptors
+    - n_cpus
+    - n_containers
+    - n_containers_running
+    - n_containers_stopped
+    - n_containers_paused
+    - n_images
+    - n_goroutines
+    - n_listener_events
+    - memory_total
+    - pool_blocksize (requires devicemapper storage driver) (deprecated see: `docker_devicemapper`)
+
+The `docker_data` and `docker_metadata` measurements are available only for
+some storage drivers such as devicemapper.
+
+- docker_data (deprecated see: `docker_devicemapper`)
+  - tags:
+    - unit
+    - engine_host
+    - server_version
+  - fields:
+    - available
+    - total
+    - used
+
+- docker_metadata (deprecated see: `docker_devicemapper`)
+  - tags:
+    - unit
+    - engine_host
+    - server_version
+  - fields:
+    - available
+    - total
+    - used
+
+The above measurements for the devicemapper storage driver can now be found in
+the new `docker_devicemapper` measurement:
+
+- docker_devicemapper
+  - tags:
+    - engine_host
+    - server_version
+    - pool_name
+  - fields:
+    - pool_blocksize_bytes
+    - data_space_used_bytes
+    - data_space_total_bytes
+    - data_space_available_bytes
+    - metadata_space_used_bytes
+    - metadata_space_total_bytes
+    - metadata_space_available_bytes
+    - thin_pool_minimum_free_space_bytes
+
+- docker_container_mem
+  - tags:
+    - engine_host
+    - server_version
+    - container_image
+    - container_name
+    - container_status
+    - container_version
+  - fields:
+    - total_pgmajfault
+    - cache
+    - mapped_file
+    - total_inactive_file
+    - pgpgout
+    - rss
+    - total_mapped_file
+    - writeback
+    - unevictable
+    - pgpgin
+    - total_unevictable
+    - pgmajfault
+    - total_rss
+    - total_rss_huge
+    - total_writeback
+    - total_inactive_anon
+    - rss_huge
+    - hierarchical_memory_limit
+    - total_pgfault
+    - total_active_file
+    - active_anon
+    - total_active_anon
+    - total_pgpgout
+    - total_cache
+    - inactive_anon
+    - active_file
+    - pgfault
+    - inactive_file
+    - total_pgpgin
+    - max_usage
+    - usage
+    - failcnt
+    - limit
+    - container_id
+
+- docker_container_cpu
+  - tags:
+    - engine_host
+    - server_version
+    - container_image
+    - container_name
+    - container_status
+    - container_version
+    - cpu
+  - fields:
+    - throttling_periods
+    - throttling_throttled_periods
+    - throttling_throttled_time
+    - usage_in_kernelmode
+    - usage_in_usermode
+    - usage_system
+    - usage_total
+    - usage_percent
+    - container_id
+
+- docker_container_net
+  - tags:
+    - engine_host
+    - server_version
+    - container_image
+    - container_name
+    - container_status
+    - container_version
+    - network
+  - fields:
+    - rx_dropped
+    - rx_bytes
+    - rx_errors
+    - tx_packets
+    - tx_dropped
+    - rx_packets
+    - tx_errors
+    - tx_bytes
+    - container_id
+
+- docker_container_blkio
+  - tags:
+    - engine_host
+    - server_version
+    - container_image
+    - container_name
+    - container_status
+    - container_version
+    - device
+  - fields:
+    - io_service_bytes_recursive_async
+    - io_service_bytes_recursive_read
+    - io_service_bytes_recursive_sync
+    - io_service_bytes_recursive_total
+    - io_service_bytes_recursive_write
+    - io_serviced_recursive_async
+    - io_serviced_recursive_read
+    - io_serviced_recursive_sync
+    - io_serviced_recursive_total
+    - io_serviced_recursive_write
+    - container_id
+
+The `docker_container_health` measurements report on a container's
+[HEALTHCHECK](https://docs.docker.com/engine/reference/builder/#healthcheck)
+status, if configured.
+
+- docker_container_health (container must use the HEALTHCHECK)
+  - tags:
+    - engine_host
+    - server_version
+    - container_image
+    - container_name
+    - container_status
+    - container_version
+  - fields:
+    - health_status (string)
+    - failing_streak (integer)
+
+- docker_container_status
+  - tags:
+    - engine_host
+    - server_version
+    - container_image
+    - container_name
+    - container_status
+    - container_version
+  - fields:
+    - container_id
+    - oomkilled (boolean)
+    - pid (integer)
+    - exitcode (integer)
+    - started_at (integer)
+    - finished_at (integer)
+    - uptime_ns (integer)
+
+- docker_swarm
+  - tags:
+    - service_id
+    - service_name
+    - service_mode
+  - fields:
+    - tasks_desired
+    - tasks_running
+
+- docker_disk_usage
+  - tags:
+    - engine_host
+    - server_version
+    - container_name
+    - container_image
+    - container_version
+    - image_id
+    - image_name
+    - image_version
+    - volume_name
+  - fields:
+    - size_rw
+    - size_root_fs
+    - size
+    - shared_size
+
+## Example Output
+
+```text
+docker,engine_host=debian-stretch-docker,server_version=17.09.0-ce n_containers=6i,n_containers_paused=0i,n_containers_running=1i,n_containers_stopped=5i,n_cpus=2i,n_goroutines=41i,n_images=2i,n_listener_events=0i,n_used_file_descriptors=27i 1524002041000000000
+docker,engine_host=debian-stretch-docker,server_version=17.09.0-ce,unit=bytes memory_total=2101661696i 1524002041000000000
+docker_container_mem,container_image=telegraf,container_name=zen_ritchie,container_status=running,container_version=unknown,engine_host=debian-stretch-docker,server_version=17.09.0-ce active_anon=8327168i,active_file=2314240i,cache=27402240i,container_id="adc4ba9593871bf2ab95f3ffde70d1b638b897bb225d21c2c9c84226a10a8cf4",hierarchical_memory_limit=9223372036854771712i,inactive_anon=0i,inactive_file=25088000i,limit=2101661696i,mapped_file=20582400i,max_usage=36646912i,pgfault=4193i,pgmajfault=214i,pgpgin=9243i,pgpgout=520i,rss=8327168i,rss_huge=0i,total_active_anon=8327168i,total_active_file=2314240i,total_cache=27402240i,total_inactive_anon=0i,total_inactive_file=25088000i,total_mapped_file=20582400i,total_pgfault=4193i,total_pgmajfault=214i,total_pgpgin=9243i,total_pgpgout=520i,total_rss=8327168i,total_rss_huge=0i,total_unevictable=0i,total_writeback=0i,unevictable=0i,usage=36528128i,usage_percent=0.4342225020025297,writeback=0i 1524002042000000000
+docker_container_cpu,container_image=telegraf,container_name=zen_ritchie,container_status=running,container_version=unknown,cpu=cpu-total,engine_host=debian-stretch-docker,server_version=17.09.0-ce container_id="adc4ba9593871bf2ab95f3ffde70d1b638b897bb225d21c2c9c84226a10a8cf4",throttling_periods=0i,throttling_throttled_periods=0i,throttling_throttled_time=0i,usage_in_kernelmode=40000000i,usage_in_usermode=100000000i,usage_percent=0,usage_system=6394210000000i,usage_total=117319068i 1524002042000000000
+docker_container_cpu,container_image=telegraf,container_name=zen_ritchie,container_status=running,container_version=unknown,cpu=cpu0,engine_host=debian-stretch-docker,server_version=17.09.0-ce container_id="adc4ba9593871bf2ab95f3ffde70d1b638b897bb225d21c2c9c84226a10a8cf4",usage_total=20825265i 1524002042000000000
+docker_container_cpu,container_image=telegraf,container_name=zen_ritchie,container_status=running,container_version=unknown,cpu=cpu1,engine_host=debian-stretch-docker,server_version=17.09.0-ce container_id="adc4ba9593871bf2ab95f3ffde70d1b638b897bb225d21c2c9c84226a10a8cf4",usage_total=96493803i 1524002042000000000
+docker_container_net,container_image=telegraf,container_name=zen_ritchie,container_status=running,container_version=unknown,engine_host=debian-stretch-docker,network=eth0,server_version=17.09.0-ce container_id="adc4ba9593871bf2ab95f3ffde70d1b638b897bb225d21c2c9c84226a10a8cf4",rx_bytes=1576i,rx_dropped=0i,rx_errors=0i,rx_packets=20i,tx_bytes=0i,tx_dropped=0i,tx_errors=0i,tx_packets=0i 1524002042000000000
+docker_container_blkio,container_image=telegraf,container_name=zen_ritchie,container_status=running,container_version=unknown,device=254:0,engine_host=debian-stretch-docker,server_version=17.09.0-ce container_id="adc4ba9593871bf2ab95f3ffde70d1b638b897bb225d21c2c9c84226a10a8cf4",io_service_bytes_recursive_async=27398144i,io_service_bytes_recursive_read=27398144i,io_service_bytes_recursive_sync=0i,io_service_bytes_recursive_total=27398144i,io_service_bytes_recursive_write=0i,io_serviced_recursive_async=529i,io_serviced_recursive_read=529i,io_serviced_recursive_sync=0i,io_serviced_recursive_total=529i,io_serviced_recursive_write=0i 1524002042000000000
+docker_container_health,container_image=telegraf,container_name=zen_ritchie,container_status=running,container_version=unknown,engine_host=debian-stretch-docker,server_version=17.09.0-ce failing_streak=0i,health_status="healthy" 1524007529000000000
+docker_swarm,service_id=xaup2o9krw36j2dy1mjx1arjw,service_mode=replicated,service_name=test tasks_desired=3,tasks_running=3 1508968160000000000
+docker_disk_usage,engine_host=docker-desktop,server_version=24.0.5 layers_size=17654519107i 1695742041000000000
+docker_disk_usage,container_image=influxdb,container_name=frosty_wright,container_version=1.8,engine_host=docker-desktop,server_version=24.0.5 size_root_fs=286593526i,size_rw=538i 1695742041000000000
+docker_disk_usage,engine_host=docker-desktop,image_id=7f4a1cc74046,image_name=telegraf,image_version=latest,server_version=24.0.5 shared_size=0i,size=425484494i 1695742041000000000
+docker_disk_usage,engine_host=docker-desktop,server_version=24.0.5,volume_name=docker_influxdb-data size=91989940i 1695742041000000000
+```
diff --git a/content/telegraf/v1/input-plugins/docker_log/_index.md b/content/telegraf/v1/input-plugins/docker_log/_index.md
new file mode 100644
index 000000000..e67d63f64
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/docker_log/_index.md
@@ -0,0 +1,124 @@
+---
+description: "Telegraf plugin for collecting metrics from Docker Log"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Docker Log
+    identifier: input-docker_log
+tags: [Docker Log, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Docker Log Input Plugin
+
+The docker log plugin uses the Docker Engine API to gather logs from running
+Docker containers. It relies on the [Official Docker Client](https://github.com/moby/moby/tree/master/client) to read logs via the
+[Engine API](https://docs.docker.com/engine/api/v1.24/).
+
+**Note:** This plugin works only for containers with the `local`, `json-file`,
+or `journald` logging driver.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Read logging output from the Docker engine
+[[inputs.docker_log]]
+  ## Docker Endpoint
+  ##   To use TCP, set endpoint = "tcp://[ip]:[port]"
+  ##   To use environment variables (ie, docker-machine), set endpoint = "ENV"
+  # endpoint = "unix:///var/run/docker.sock"
+
+  ## When true, container logs are read from the beginning; otherwise reading
+  ## begins at the end of the log. If state-persistence is enabled for Telegraf,
+  ## the reading continues at the last previously processed timestamp.
+  # from_beginning = false
+
+  ## Timeout for Docker API calls.
+  # timeout = "5s"
+
+  ## Containers to include and exclude. Globs accepted.
+  ## Note that an empty array for both will include all containers
+  # container_name_include = []
+  # container_name_exclude = []
+
+  ## Container states to include and exclude. Globs accepted.
+  ## When empty only containers in the "running" state will be captured.
+  # container_state_include = []
+  # container_state_exclude = []
+
+  ## docker labels to include and exclude as tags.  Globs accepted.
+  ## Note that an empty array for both will include all labels as tags
+  # docker_label_include = []
+  # docker_label_exclude = []
+
+  ## Set the source tag for the metrics to the container ID hostname, eg first 12 chars
+  source_tag = false
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+### Environment Configuration
+
+When using the `"ENV"` endpoint, the connection is configured using the
+[CLI Docker environment variables](https://godoc.org/github.com/moby/moby/client#NewEnvClient).
+
+## source tag
+
+Selecting the containers can be tricky if you have many containers with the
+same name. To alleviate this issue you can set the below value to `true`:
+
+```toml
+source_tag = true
+```
+
+This will cause all data points to have the `source` tag set to the first 12
+characters of the container ID. The first 12 characters are the common hostname
+for containers that have no explicit hostname set, as defined by Docker.
+
+## Metrics
+
+- docker_log
+  - tags:
+    - container_image
+    - container_version
+    - container_name
+    - stream (stdout, stderr, or tty)
+    - source
+  - fields:
+    - container_id
+    - message
+
+## Example Output
+
+```text
+docker_log,container_image=telegraf,container_name=sharp_bell,container_version=alpine,stream=stderr container_id="371ee5d3e58726112f499be62cddef800138ca72bbba635ed2015fbf475b1023",message="2019-06-19T03:11:11Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:\"371ee5d3e587\", Flush Interval:10s" 1560913872000000000
+docker_log,container_image=telegraf,container_name=sharp_bell,container_version=alpine,stream=stderr container_id="371ee5d3e58726112f499be62cddef800138ca72bbba635ed2015fbf475b1023",message="2019-06-19T03:11:11Z I! Tags enabled: host=371ee5d3e587" 1560913872000000000
+docker_log,container_image=telegraf,container_name=sharp_bell,container_version=alpine,stream=stderr container_id="371ee5d3e58726112f499be62cddef800138ca72bbba635ed2015fbf475b1023",message="2019-06-19T03:11:11Z I! Loaded outputs: file" 1560913872000000000
+docker_log,container_image=telegraf,container_name=sharp_bell,container_version=alpine,stream=stderr container_id="371ee5d3e58726112f499be62cddef800138ca72bbba635ed2015fbf475b1023",message="2019-06-19T03:11:11Z I! Loaded processors:" 1560913872000000000
+docker_log,container_image=telegraf,container_name=sharp_bell,container_version=alpine,stream=stderr container_id="371ee5d3e58726112f499be62cddef800138ca72bbba635ed2015fbf475b1023",message="2019-06-19T03:11:11Z I! Loaded aggregators:" 1560913872000000000
+docker_log,container_image=telegraf,container_name=sharp_bell,container_version=alpine,stream=stderr container_id="371ee5d3e58726112f499be62cddef800138ca72bbba635ed2015fbf475b1023",message="2019-06-19T03:11:11Z I! Loaded inputs: net" 1560913872000000000
+docker_log,container_image=telegraf,container_name=sharp_bell,container_version=alpine,stream=stderr container_id="371ee5d3e58726112f499be62cddef800138ca72bbba635ed2015fbf475b1023",message="2019-06-19T03:11:11Z I! Using config file: /etc/telegraf/telegraf.conf" 1560913872000000000
+docker_log,container_image=telegraf,container_name=sharp_bell,container_version=alpine,stream=stderr container_id="371ee5d3e58726112f499be62cddef800138ca72bbba635ed2015fbf475b1023",message="2019-06-19T03:11:11Z I! Starting Telegraf 1.10.4" 1560913872000000000
+```
diff --git a/content/telegraf/v1/input-plugins/dovecot/_index.md b/content/telegraf/v1/input-plugins/dovecot/_index.md
new file mode 100644
index 000000000..c7332077a
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/dovecot/_index.md
@@ -0,0 +1,94 @@
+---
+description: "Telegraf plugin for collecting metrics from Dovecot"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Dovecot
+    identifier: input-dovecot
+tags: [Dovecot, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Dovecot Input Plugin
+
+The dovecot plugin uses the Dovecot [v2.1 stats protocol](http://wiki2.dovecot.org/Statistics/Old) to gather
+metrics on configured domains.
+
+When using Dovecot v2.3 you are still able to use this protocol by following
+the [upgrading steps](https://wiki2.dovecot.org/Upgrading/2.3#Statistics_Redesign).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics about dovecot servers
+[[inputs.dovecot]]
+  ## specify dovecot servers via an address:port list
+  ##  e.g.
+  ##    localhost:24242
+  ## or as a UDS socket
+  ##  e.g.
+  ##    /var/run/dovecot/old-stats
+  ##
+  ## If no servers are specified, then localhost is used as the host.
+  servers = ["localhost:24242"]
+
+  ## Type is one of "user", "domain", "ip", or "global"
+  type = "global"
+
+  ## Wildcard matches like "*.com". An empty string "" is same as "*"
+  ## If type = "ip" filters should be <IP/network>
+  filters = [""]
+```
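+For example, to collect per-domain statistics for selected domains only (the
+domain names below are placeholders):
+
+```toml
+[[inputs.dovecot]]
+  servers = ["localhost:24242"]
+  type = "domain"
+  filters = ["example.org", "*.net"]
+```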
+
+## Metrics
+
+- dovecot
+  - tags:
+    - server (hostname)
+    - type (query type)
+    - ip (ip addr)
+    - user (username)
+    - domain (domain name)
+  - fields:
+    - reset_timestamp (string)
+    - last_update (string)
+    - num_logins (integer)
+    - num_cmds (integer)
+    - num_connected_sessions (integer)
+    - user_cpu (float)
+    - sys_cpu (float)
+    - clock_time (float)
+    - min_faults (integer)
+    - maj_faults (integer)
+    - vol_cs (integer)
+    - invol_cs (integer)
+    - disk_input (integer)
+    - disk_output (integer)
+    - read_count (integer)
+    - read_bytes (integer)
+    - write_count (integer)
+    - write_bytes (integer)
+    - mail_lookup_path (integer)
+    - mail_lookup_attr (integer)
+    - mail_read_count (integer)
+    - mail_read_bytes (integer)
+    - mail_cache_hits (integer)
+
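+For illustration, the stats protocol returns tab-separated text: a header row
+of field names followed by value rows that map onto the fields above. The
+following is a minimal parsing sketch with made-up sample data, not the
+plugin's actual implementation:
+
+```python
+# Hypothetical EXPORT response: a tab-separated header row, then value rows.
+response = "num_logins\tnum_cmds\tuser_cpu\n174827\t917469\t219337.48\n"
+
+lines = response.strip().split("\n")
+header = lines[0].split("\t")
+for row in lines[1:]:
+    # Pair each field name with its value for this row
+    fields = dict(zip(header, row.split("\t")))
+    print(fields)
+```
+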
+## Example Output
+
+```text
+dovecot,server=dovecot-1.domain.test,type=global clock_time=101196971074203.94,disk_input=6493168218112i,disk_output=17978638815232i,invol_cs=1198855447i,last_update="2016-04-08 11:04:13.000379245 +0200 CEST",mail_cache_hits=68192209i,mail_lookup_attr=0i,mail_lookup_path=653861i,mail_read_bytes=86705151847i,mail_read_count=566125i,maj_faults=17208i,min_faults=1286179702i,num_cmds=917469i,num_connected_sessions=8896i,num_logins=174827i,read_bytes=30327690466186i,read_count=1772396430i,reset_timestamp="2016-04-08 10:28:45 +0200 CEST",sys_cpu=157965.692,user_cpu=219337.48,vol_cs=2827615787i,write_bytes=17150837661940i,write_count=992653220i 1460106266642153907
+```
+
diff --git a/content/telegraf/v1/input-plugins/dpdk/_index.md b/content/telegraf/v1/input-plugins/dpdk/_index.md
new file mode 100644
index 000000000..d01588d6d
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/dpdk/_index.md
@@ -0,0 +1,331 @@
+---
+description: "Telegraf plugin for collecting metrics from Data Plane Development Kit (DPDK)"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Data Plane Development Kit (DPDK)
+    identifier: input-dpdk
+tags: [Data Plane Development Kit (DPDK), "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Data Plane Development Kit (DPDK) Input Plugin
+
+The `dpdk` plugin collects metrics exposed by applications built with the [Data
+Plane Development Kit](https://www.dpdk.org/) (DPDK), an extensive set of open
+source libraries designed for accelerating packet processing workloads.
+
+DPDK provides APIs that let applications expose various statistics from the
+devices they use as well as KPI metrics directly from the applications
+themselves. Device statistics include common counters available across NICs,
+such as received and sent packets and bytes. In addition to these generic
+statistics, an extended statistics API is available that provides more
+detailed, driver-specific metrics that are not available as generic statistics.
+
+[DPDK Release 20.05](https://doc.dpdk.org/guides/rel_notes/release_20_05.html)
+introduced an updated telemetry interface that enables DPDK libraries and
+applications to provide their telemetry. This is referred to as the `v2`
+version of this socket-based telemetry interface. This release enabled, for
+example, reading driver-specific extended stats (`/ethdev/xstats`) via the new
+interface.
+
+[DPDK Release 20.11](https://doc.dpdk.org/guides/rel_notes/release_20_11.html)
+added support for reading common statistics (`/ethdev/stats`) via the `v2`
+interface in addition to the existing extended statistics (`/ethdev/xstats`).
+
+[DPDK Release 21.11](https://doc.dpdk.org/guides/rel_notes/release_21_11.html)
+added support for reading additional Ethernet device information
+(`/ethdev/info`) via the `v2` interface. This version also added support for
+exposing telemetry from multiple `--in-memory` instances of DPDK via dedicated
+sockets. The plugin supports reading from those sockets when the `in_memory`
+option is set.
+
+Example usage of the `v2` telemetry interface can be found in the [Telemetry
+User Guide](https://doc.dpdk.org/guides/howto/telemetry.html). A variety of
+[DPDK Sample Applications](https://doc.dpdk.org/guides/sample_app_ug/index.html)
+are also available for users to discover and test the capabilities of DPDK
+libraries and to explore the exposed metrics.
+
+> **DPDK Version Info:** This plugin uses the `v2` interface to read telemetry
+> data from applications built with `DPDK version >= 20.05`. The default
+> configuration includes reading common statistics from `/ethdev/stats`, which
+> is available from `DPDK version >= 20.11`. When using
+> `DPDK 20.05 <= version < DPDK 20.11` it is recommended to disable querying
+> `/ethdev/stats` by setting the corresponding `exclude_commands` configuration
+> option.
+>
+> **NOTE:** Since DPDK will most likely run with root privileges, the socket
+> telemetry interface exposed by DPDK will also require root access. This means
+> that either access permissions have to be adjusted for the socket telemetry
+> interface to allow Telegraf to access it, or Telegraf must run with root
+> privileges.
+>
+> **NOTE:** There are known issues with exposing telemetry from multiple
+> `--in-memory` instances while using `DPDK 21.11.1`. The recommended version
+> to use in conjunction with the `in_memory` plugin option is `DPDK 21.11.2`
+> or higher.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure plugin
+ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more
+details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Reads metrics from DPDK applications using v2 telemetry interface.
+# This plugin ONLY supports Linux
+[[inputs.dpdk]]
+  ## Path to DPDK telemetry socket. This shall point to v2 version of DPDK
+  ## telemetry interface.
+  # socket_path = "/var/run/dpdk/rte/dpdk_telemetry.v2"
+
+  ## Duration that defines how long the connected socket client will wait for
+  ## a response before terminating connection.
+  ## This includes both writing to and reading from socket. Since it's local
+  ## socket access to a fast packet processing application, the timeout should
+  ## be sufficient for most users.
+  ## Setting the value to 0 disables the timeout (not recommended)
+  # socket_access_timeout = "200ms"
+
+  ## Enables telemetry data collection for selected device types.
+  ## Adding "ethdev" enables collection of telemetry from DPDK NICs (stats, xstats, link_status, info).
+  ## Adding "rawdev" enables collection of telemetry from DPDK Raw Devices (xstats).
+  # device_types = ["ethdev"]
+
+  ## List of custom, application-specific telemetry commands to query
+  ## The list of available commands depend on the application deployed.
+  ## Applications can register their own commands via telemetry library API
+  ## https://doc.dpdk.org/guides/prog_guide/telemetry_lib.html#registering-commands
+  ## For L3 Forwarding with Power Management Sample Application this could be:
+  ##   additional_commands = ["/l3fwd-power/stats"]
+  # additional_commands = []
+
+  ## List of plugin options.
+  ## Supported options:
+  ##  - "in_memory" option enables reading for multiple sockets when a dpdk application is running with --in-memory option.
+  ##    When option is enabled plugin will try to find additional socket paths related to provided socket_path.
+  ##    Details: https://doc.dpdk.org/guides/howto/telemetry.html#connecting-to-different-dpdk-processes
+  # plugin_options = ["in_memory"]
+
+  ## Specifies plugin behavior regarding unreachable socket (which might not have been initialized yet).
+  ## Available choices:
+  ##   - error: Telegraf will return an error during the startup and gather phases if socket is unreachable
+  ##   - ignore: Telegraf will ignore error regarding unreachable socket on both startup and gather
+  # unreachable_socket_behavior = "error"
+
+  ## List of metadata fields which will be added to every metric produced by the plugin.
+  ## Supported options:
+  ##  - "pid" - exposes PID of DPDK process. Example: pid=2179660i
+  ##  - "version" - exposes version of DPDK. Example: version="DPDK 21.11.2"
+  # metadata_fields = ["pid", "version"]
+
+  ## Allows turning off collecting data for individual "ethdev" commands.
+  ## Remove "/ethdev/link_status" from list to gather link status metrics.
+  [inputs.dpdk.ethdev]
+    exclude_commands = ["/ethdev/link_status"]
+
+  ## When running multiple instances of the plugin it's recommended to add a
+  ## unique tag to each instance to identify metrics exposed by an instance
+  ## of DPDK application. This is useful when multiple DPDK apps run on a
+  ## single host.
+  ##  [inputs.dpdk.tags]
+  ##    dpdk_instance = "my-fwd-app"
+```
+
+This plugin offers multiple configuration options; review the examples below
+for additional usage information.
+
+### Example: Minimal Configuration for NIC metrics
+
+This configuration allows getting metrics for all devices reported via
+`/ethdev/list` command:
+
+* `/ethdev/info` - device information: name, MAC address, buffers size, etc. (since `DPDK 21.11`)
+* `/ethdev/stats` - basic device statistics (since `DPDK 20.11`)
+* `/ethdev/xstats` - extended device statistics
+* `/ethdev/link_status` - up/down link status
+
+```toml
+[[inputs.dpdk]]
+  device_types = ["ethdev"]
+```
+
+Since this configuration will query `/ethdev/link_status`, it's recommended to
+increase the timeout to `socket_access_timeout = "10s"` and the plugin
+collecting interval to `interval = "30s"`.
+
+### Example: Excluding NIC link status from being collected
+
+Depending on the underlying implementation, checking link status may take more
+time to complete. This configuration excludes that telemetry command to allow
+a faster response for metrics.
+
+```toml
+[[inputs.dpdk]]
+  device_types = ["ethdev"]
+
+  [inputs.dpdk.ethdev]
+    exclude_commands = ["/ethdev/link_status"]
+```
+
+A separate plugin instance with higher timeout settings can be used to get
+`/ethdev/link_status` independently. Consult the "Independent NIC link status
+configuration" and "Getting metrics from multiple DPDK instances on same host"
+examples for further details.
+
+### Example: Independent NIC link status configuration
+
+This configuration allows getting `/ethdev/link_status` using separate
+configuration, with higher timeout.
+
+```toml
+[[inputs.dpdk]]
+  interval = "30s"
+  socket_access_timeout = "10s"
+  device_types = ["ethdev"]
+
+  [inputs.dpdk.ethdev]
+    exclude_commands = ["/ethdev/info", "/ethdev/stats", "/ethdev/xstats"]
+```
+
+### Example: Getting application-specific metrics
+
+This configuration allows reading custom metrics exposed by applications. The
+example telemetry command was obtained from the [L3 Forwarding with Power
+Management Sample Application](https://doc.dpdk.org/guides/sample_app_ug/l3_forward_power_man.html).
+
+```toml
+[[inputs.dpdk]]
+  device_types = ["ethdev"]
+  additional_commands = ["/l3fwd-power/stats"]
+
+  [inputs.dpdk.ethdev]
+    exclude_commands = ["/ethdev/link_status"]
+```
+
+Command entries specified in `additional_commands` should match DPDK command
+format:
+
+* Command entry format: either `command` or `command,params` for commands that
+  expect parameters, where comma (`,`) separates command from params.
+* Command entry length (command with params) should be `< 1024` characters.
+* Command length (without params) should be `< 56` characters.
+* Commands have to start with `/`.
+
+Providing invalid commands will prevent the plugin from starting. Additional
+commands allow duplicates, but they will be removed during execution, so each
+command will be executed only once during each metric gathering interval.
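+
+The format rules above can be sketched as a small validation routine. This is
+an illustrative check only, not the plugin's actual code:
+
+```python
+def is_valid_command(entry: str) -> bool:
+    """Check a telemetry command entry against the format rules above."""
+    command, _, _params = entry.partition(",")  # comma separates command from params
+    return (
+        entry.startswith("/")  # commands have to start with "/"
+        and len(entry) < 1024  # command entry length (with params)
+        and len(command) < 56  # command length (without params)
+    )
+
+# Duplicates are allowed in the config but executed only once, which is
+# equivalent to an order-preserving deduplication:
+commands = ["/l3fwd-power/stats", "/l3fwd-power/stats"]
+unique = list(dict.fromkeys(commands))
+```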
+
+
+### Example: Getting metrics from multiple DPDK instances on same host
+
+This configuration allows getting metrics from two separate applications
+exposing their telemetry interfaces via separate sockets. A unique tag added
+via `[inputs.dpdk.tags]` in each plugin instance distinguishes between them.
+
+```toml
+# Instance #1 - L3 Forwarding with Power Management Application
+[[inputs.dpdk]]
+  socket_path = "/var/run/dpdk/rte/l3fwd-power_telemetry.v2"
+  device_types = ["ethdev"]
+  additional_commands = ["/l3fwd-power/stats"]
+
+  [inputs.dpdk.ethdev]
+    exclude_commands = ["/ethdev/link_status"]
+
+  [inputs.dpdk.tags]
+    dpdk_instance = "l3fwd-power"
+
+# Instance #2 - L2 Forwarding with Intel Cache Allocation Technology (CAT)
+# Application
+[[inputs.dpdk]]
+  socket_path = "/var/run/dpdk/rte/l2fwd-cat_telemetry.v2"
+  device_types = ["ethdev"]
+
+  [inputs.dpdk.ethdev]
+    exclude_commands = ["/ethdev/link_status"]
+
+  [inputs.dpdk.tags]
+    dpdk_instance = "l2fwd-cat"
+```
+
+This utilizes Telegraf's standard capability of adding custom tags to an input
+plugin's measurements.
+
+## Metrics
+
+The DPDK socket accepts `command,params` requests and returns metric data in
+JSON format. All metrics from the DPDK socket are flattened using Telegraf's
+JSON flattener and annotated with a set of tags that identify the querying
+hierarchy:
+
+```text
+dpdk,host=dpdk-host,dpdk_instance=l3fwd-power,command=/ethdev/stats,params=0 [fields] [timestamp]
+```
+
+| Tag | Description |
+|-----|-------------|
+| `host` | hostname of the machine (consult [Telegraf Agent configuration](https://github.com/influxdata/telegraf/blob/master/docs/CONFIGURATION.md#agent) for additional details) |
+| `dpdk_instance` | custom tag from `[inputs.dpdk.tags]` (optional) |
+| `command` | executed command (without params) |
+| `params` | command parameter, e.g. for `/ethdev/stats` it is the ID of NIC as exposed by `/ethdev/list`. For DPDK app that uses 2 NICs the metrics will output e.g. `params=0`, `params=1`. |
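+
+As an illustration of the flattening, a nested JSON response (including arrays
+such as per-queue counters) becomes a flat set of fields with suffixed names.
+This is a minimal sketch of the idea, not Telegraf's actual flattener:
+
+```python
+def flatten(obj, prefix=""):
+    """Recursively flatten nested JSON into single-level field names."""
+    fields = {}
+    for key, value in obj.items():
+        name = f"{prefix}{key}"
+        if isinstance(value, dict):
+            fields.update(flatten(value, prefix=f"{name}_"))
+        elif isinstance(value, list):
+            # Arrays become indexed fields, e.g. q_ipackets_0, q_ipackets_1
+            for i, item in enumerate(value):
+                fields[f"{name}_{i}"] = item
+        else:
+            fields[name] = value
+    return fields
+
+# Hypothetical fragment of an /ethdev/stats response
+stats = {"ipackets": 98, "ibytes": 7092, "q_ipackets": [98, 0]}
+flat = flatten(stats)
+```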
+
+When running the plugin configuration below...
+
+```toml
+[[inputs.dpdk]]
+  device_types = ["ethdev"]
+  additional_commands = ["/l3fwd-power/stats"]
+  metadata_fields = []
+  [inputs.dpdk.tags]
+    dpdk_instance = "l3fwd-power"
+```
+
+...the expected output for a `dpdk` plugin instance running on a host named
+`dpdk-host` is:
+
+```text
+dpdk,command=/ethdev/info,dpdk_instance=l3fwd-power,host=dpdk-host,params=0 all_multicast=0,dev_configured=1,dev_flags=74,dev_started=1,ethdev_rss_hf=0,lro=0,mac_addr="E4:3D:1A:DD:13:31",mtu=1500,name="0000:ca:00.1",nb_rx_queues=1,nb_tx_queues=1,numa_node=1,port_id=0,promiscuous=1,rx_mbuf_alloc_fail=0,rx_mbuf_size_min=2176,rx_offloads=0,rxq_state_0=1,scattered_rx=0,state=1,tx_offloads=65536,txq_state_0=1 1659017414000000000
+dpdk,command=/ethdev/stats,dpdk_instance=l3fwd-power,host=dpdk-host,params=0 q_opackets_0=0,q_ipackets_5=0,q_errors_11=0,ierrors=0,q_obytes_5=0,q_obytes_10=0,q_opackets_10=0,q_ipackets_4=0,q_ipackets_7=0,q_ipackets_15=0,q_ibytes_5=0,q_ibytes_6=0,q_ibytes_9=0,obytes=0,q_opackets_1=0,q_opackets_11=0,q_obytes_7=0,q_errors_5=0,q_errors_10=0,q_ibytes_4=0,q_obytes_6=0,q_errors_1=0,q_opackets_5=0,q_errors_3=0,q_errors_12=0,q_ipackets_11=0,q_ipackets_12=0,q_obytes_14=0,q_opackets_15=0,q_obytes_2=0,q_errors_8=0,q_opackets_12=0,q_errors_0=0,q_errors_9=0,q_opackets_14=0,q_ibytes_3=0,q_ibytes_15=0,q_ipackets_13=0,q_ipackets_14=0,q_obytes_3=0,q_errors_13=0,q_opackets_3=0,q_ibytes_0=7092,q_ibytes_2=0,q_ibytes_8=0,q_ipackets_8=0,q_ipackets_10=0,q_obytes_4=0,q_ibytes_10=0,q_ibytes_13=0,q_ibytes_1=0,q_ibytes_12=0,opackets=0,q_obytes_1=0,q_errors_15=0,q_opackets_2=0,oerrors=0,rx_nombuf=0,q_opackets_8=0,q_ibytes_11=0,q_ipackets_3=0,q_obytes_0=0,q_obytes_12=0,q_obytes_11=0,q_obytes_13=0,q_errors_6=0,q_ipackets_1=0,q_ipackets_6=0,q_ipackets_9=0,q_obytes_15=0,q_opackets_7=0,q_ibytes_14=0,ipackets=98,q_ipackets_2=0,q_opackets_6=0,q_ibytes_7=0,imissed=0,q_opackets_4=0,q_opackets_9=0,q_obytes_8=0,q_obytes_9=0,q_errors_4=0,q_errors_14=0,q_opackets_13=0,ibytes=7092,q_ipackets_0=98,q_errors_2=0,q_errors_7=0 1606310780000000000
+dpdk,command=/ethdev/stats,dpdk_instance=l3fwd-power,host=dpdk-host,params=1 q_opackets_0=0,q_ipackets_5=0,q_errors_11=0,ierrors=0,q_obytes_5=0,q_obytes_10=0,q_opackets_10=0,q_ipackets_4=0,q_ipackets_7=0,q_ipackets_15=0,q_ibytes_5=0,q_ibytes_6=0,q_ibytes_9=0,obytes=0,q_opackets_1=0,q_opackets_11=0,q_obytes_7=0,q_errors_5=0,q_errors_10=0,q_ibytes_4=0,q_obytes_6=0,q_errors_1=0,q_opackets_5=0,q_errors_3=0,q_errors_12=0,q_ipackets_11=0,q_ipackets_12=0,q_obytes_14=0,q_opackets_15=0,q_obytes_2=0,q_errors_8=0,q_opackets_12=0,q_errors_0=0,q_errors_9=0,q_opackets_14=0,q_ibytes_3=0,q_ibytes_15=0,q_ipackets_13=0,q_ipackets_14=0,q_obytes_3=0,q_errors_13=0,q_opackets_3=0,q_ibytes_0=7092,q_ibytes_2=0,q_ibytes_8=0,q_ipackets_8=0,q_ipackets_10=0,q_obytes_4=0,q_ibytes_10=0,q_ibytes_13=0,q_ibytes_1=0,q_ibytes_12=0,opackets=0,q_obytes_1=0,q_errors_15=0,q_opackets_2=0,oerrors=0,rx_nombuf=0,q_opackets_8=0,q_ibytes_11=0,q_ipackets_3=0,q_obytes_0=0,q_obytes_12=0,q_obytes_11=0,q_obytes_13=0,q_errors_6=0,q_ipackets_1=0,q_ipackets_6=0,q_ipackets_9=0,q_obytes_15=0,q_opackets_7=0,q_ibytes_14=0,ipackets=98,q_ipackets_2=0,q_opackets_6=0,q_ibytes_7=0,imissed=0,q_opackets_4=0,q_opackets_9=0,q_obytes_8=0,q_obytes_9=0,q_errors_4=0,q_errors_14=0,q_opackets_13=0,ibytes=7092,q_ipackets_0=98,q_errors_2=0,q_errors_7=0 1606310780000000000
+dpdk,command=/ethdev/xstats,dpdk_instance=l3fwd-power,host=dpdk-host,params=0 out_octets_encrypted=0,rx_fcoe_mbuf_allocation_errors=0,tx_q1packets=0,rx_priority0_xoff_packets=0,rx_priority7_xoff_packets=0,rx_errors=0,mac_remote_errors=0,in_pkts_invalid=0,tx_priority3_xoff_packets=0,tx_errors=0,rx_fcoe_bytes=0,rx_flow_control_xon_packets=0,rx_priority4_xoff_packets=0,tx_priority2_xoff_packets=0,rx_illegal_byte_errors=0,rx_xoff_packets=0,rx_management_packets=0,rx_priority7_dropped=0,rx_priority4_dropped=0,in_pkts_unchecked=0,rx_error_bytes=0,rx_size_256_to_511_packets=0,tx_priority4_xoff_packets=0,rx_priority6_xon_packets=0,tx_priority4_xon_to_xoff_packets=0,in_pkts_delayed=0,rx_priority0_mbuf_allocation_errors=0,out_octets_protected=0,tx_priority7_xon_to_xoff_packets=0,tx_priority1_xon_to_xoff_packets=0,rx_fcoe_no_direct_data_placement_ext_buff=0,tx_priority6_xon_to_xoff_packets=0,flow_director_filter_add_errors=0,rx_total_packets=99,rx_crc_errors=0,flow_director_filter_remove_errors=0,rx_missed_errors=0,tx_size_64_packets=0,rx_priority3_dropped=0,flow_director_matched_filters=0,tx_priority2_xon_to_xoff_packets=0,rx_priority1_xon_packets=0,rx_size_65_to_127_packets=99,rx_fragment_errors=0,in_pkts_notusingsa=0,rx_q0bytes=7162,rx_fcoe_dropped=0,rx_priority1_dropped=0,rx_fcoe_packets=0,rx_priority5_xoff_packets=0,out_pkts_protected=0,tx_total_packets=0,rx_priority2_dropped=0,in_pkts_late=0,tx_q1bytes=0,in_pkts_badtag=0,rx_multicast_packets=99,rx_priority6_xoff_packets=0,tx_flow_control_xoff_packets=0,rx_flow_control_xoff_packets=0,rx_priority0_xon_packets=0,in_pkts_untagged=0,tx_fcoe_packets=0,rx_priority7_mbuf_allocation_errors=0,tx_priority0_xon_to_xoff_packets=0,tx_priority5_xon_to_xoff_packets=0,tx_flow_control_xon_packets=0,tx_q0packets=0,tx_xoff_packets=0,rx_size_512_to_1023_packets=0,rx_priority3_xon_packets=0,rx_q0errors=0,rx_oversize_errors=0,tx_priority4_xon_packets=0,tx_priority5_xoff_packets=0,rx_priority5_xon_packets=0,rx_total_missed_packets=0,rx_priority4_mbuf_allocation_errors=0,tx_priority1_xon_packets=0,tx_management_packets=0,rx_priority5_mbuf_allocation_errors=0,rx_fcoe_no_direct_data_placement=0,rx_undersize_errors=0,tx_priority1_xoff_packets=0,rx_q0packets=99,tx_q2packets=0,tx_priority6_xon_packets=0,rx_good_packets=99,tx_priority5_xon_packets=0,tx_size_256_to_511_packets=0,rx_priority6_dropped=0,rx_broadcast_packets=0,tx_size_512_to_1023_packets=0,tx_priority3_xon_to_xoff_packets=0,in_pkts_unknownsci=0,in_octets_validated=0,tx_priority6_xoff_packets=0,tx_priority7_xoff_packets=0,rx_jabber_errors=0,tx_priority7_xon_packets=0,tx_priority0_xon_packets=0,in_pkts_unusedsa=0,tx_priority0_xoff_packets=0,mac_local_errors=33,rx_total_bytes=7162,in_pkts_notvalid=0,rx_length_errors=0,in_octets_decrypted=0,rx_size_128_to_255_packets=0,rx_good_bytes=7162,tx_size_65_to_127_packets=0,rx_mac_short_packet_dropped=0,tx_size_1024_to_max_packets=0,rx_priority2_mbuf_allocation_errors=0,flow_director_added_filters=0,tx_multicast_packets=0,rx_fcoe_crc_errors=0,rx_priority1_xoff_packets=0,flow_director_missed_filters=0,rx_xon_packets=0,tx_size_128_to_255_packets=0,out_pkts_encrypted=0,rx_priority4_xon_packets=0,rx_priority0_dropped=0,rx_size_1024_to_max_packets=0,tx_good_bytes=0,rx_management_dropped=0,rx_mbuf_allocation_errors=0,tx_xon_packets=0,rx_priority3_xoff_packets=0,tx_good_packets=0,tx_fcoe_bytes=0,rx_priority6_mbuf_allocation_errors=0,rx_priority2_xon_packets=0,tx_broadcast_packets=0,tx_q2bytes=0,rx_priority7_xon_packets=0,out_pkts_untagged=0,rx_priority2_xoff_packets=0,rx_priority1_mbuf_allocation_errors=0,tx_q0bytes=0,rx_size_64_packets=0,rx_priority5_dropped=0,tx_priority2_xon_packets=0,in_pkts_nosci=0,flow_director_removed_filters=0,in_pkts_ok=0,rx_l3_l4_xsum_error=0,rx_priority3_mbuf_allocation_errors=0,tx_priority3_xon_packets=0 1606310780000000000
+dpdk,command=/ethdev/xstats,dpdk_instance=l3fwd-power,host=dpdk-host,params=1 tx_priority5_xoff_packets=0,in_pkts_unknownsci=0,tx_q0packets=0,tx_total_packets=0,rx_crc_errors=0,rx_priority4_xoff_packets=0,rx_priority5_dropped=0,tx_size_65_to_127_packets=0,rx_good_packets=98,tx_priority6_xoff_packets=0,tx_fcoe_bytes=0,out_octets_protected=0,out_pkts_encrypted=0,rx_priority1_xon_packets=0,tx_size_128_to_255_packets=0,rx_flow_control_xoff_packets=0,rx_priority7_xoff_packets=0,tx_priority0_xon_to_xoff_packets=0,rx_broadcast_packets=0,tx_priority1_xon_packets=0,rx_xon_packets=0,rx_fragment_errors=0,tx_flow_control_xoff_packets=0,tx_q0bytes=0,out_pkts_untagged=0,rx_priority4_xon_packets=0,tx_priority5_xon_packets=0,rx_priority1_xoff_packets=0,rx_good_bytes=7092,rx_priority4_mbuf_allocation_errors=0,in_octets_decrypted=0,tx_priority2_xon_to_xoff_packets=0,rx_priority3_dropped=0,tx_multicast_packets=0,mac_local_errors=33,in_pkts_ok=0,rx_illegal_byte_errors=0,rx_xoff_packets=0,rx_q0errors=0,flow_director_added_filters=0,rx_size_256_to_511_packets=0,rx_priority3_xon_packets=0,rx_l3_l4_xsum_error=0,rx_priority6_dropped=0,in_pkts_notvalid=0,rx_size_64_packets=0,tx_management_packets=0,rx_length_errors=0,tx_priority7_xon_to_xoff_packets=0,rx_mbuf_allocation_errors=0,rx_missed_errors=0,rx_priority1_mbuf_allocation_errors=0,rx_fcoe_no_direct_data_placement=0,tx_priority3_xoff_packets=0,in_pkts_delayed=0,tx_errors=0,rx_size_512_to_1023_packets=0,tx_priority4_xon_packets=0,rx_q0bytes=7092,in_pkts_unchecked=0,tx_size_512_to_1023_packets=0,rx_fcoe_packets=0,in_pkts_nosci=0,rx_priority6_mbuf_allocation_errors=0,rx_priority1_dropped=0,tx_q2packets=0,rx_priority7_dropped=0,tx_size_1024_to_max_packets=0,rx_management_packets=0,rx_multicast_packets=98,rx_total_bytes=7092,mac_remote_errors=0,tx_priority3_xon_packets=0,rx_priority2_mbuf_allocation_errors=0,rx_priority5_mbuf_allocation_errors=0,tx_q2bytes=0,rx_size_128_to_255_packets=0,in_pkts_badtag=0,out_pkts_protected=0,rx_management_dropped=0,rx_fcoe_bytes=0,flow_director_removed_filters=0,tx_priority2_xoff_packets=0,rx_fcoe_crc_errors=0,rx_priority0_mbuf_allocation_errors=0,rx_priority0_xon_packets=0,rx_fcoe_dropped=0,tx_priority1_xon_to_xoff_packets=0,rx_size_65_to_127_packets=98,rx_q0packets=98,tx_priority0_xoff_packets=0,rx_priority6_xon_packets=0,rx_total_packets=98,rx_undersize_errors=0,flow_director_missed_filters=0,rx_jabber_errors=0,in_pkts_invalid=0,in_pkts_late=0,rx_priority5_xon_packets=0,tx_priority4_xoff_packets=0,out_octets_encrypted=0,tx_q1packets=0,rx_priority5_xoff_packets=0,rx_priority6_xoff_packets=0,rx_errors=0,in_octets_validated=0,rx_priority3_xoff_packets=0,tx_priority4_xon_to_xoff_packets=0,tx_priority5_xon_to_xoff_packets=0,tx_flow_control_xon_packets=0,rx_priority0_dropped=0,flow_director_filter_add_errors=0,tx_q1bytes=0,tx_priority6_xon_to_xoff_packets=0,flow_director_matched_filters=0,tx_priority2_xon_packets=0,rx_fcoe_mbuf_allocation_errors=0,rx_priority2_xoff_packets=0,tx_priority7_xoff_packets=0,rx_priority0_xoff_packets=0,rx_oversize_errors=0,in_pkts_notusingsa=0,tx_size_64_packets=0,rx_size_1024_to_max_packets=0,tx_priority6_xon_packets=0,rx_priority2_dropped=0,rx_priority4_dropped=0,rx_priority7_mbuf_allocation_errors=0,rx_flow_control_xon_packets=0,tx_good_bytes=0,tx_priority3_xon_to_xoff_packets=0,rx_total_missed_packets=0,rx_error_bytes=0,tx_priority7_xon_packets=0,rx_mac_short_packet_dropped=0,tx_priority1_xoff_packets=0,tx_good_packets=0,tx_broadcast_packets=0,tx_xon_packets=0,in_pkts_unusedsa=0,rx_priority2_xon_packets=0,in_pkts_untagged=0,tx_fcoe_packets=0,flow_director_filter_remove_errors=0,rx_priority3_mbuf_allocation_errors=0,tx_priority0_xon_packets=0,rx_priority7_xon_packets=0,rx_fcoe_no_direct_data_placement_ext_buff=0,tx_xoff_packets=0,tx_size_256_to_511_packets=0 1606310780000000000
+dpdk,command=/ethdev/link_status,dpdk_instance=l3fwd-power,host=dpdk-host,params=0 status="UP",link_status=1,speed=10000,duplex="full-duplex" 1606310780000000000
+dpdk,command=/ethdev/link_status,dpdk_instance=l3fwd-power,host=dpdk-host,params=1 status="UP",link_status=1,speed=10000,duplex="full-duplex" 1606310780000000000
+dpdk,command=/l3fwd-power/stats,dpdk_instance=l3fwd-power,host=dpdk-host empty_poll=49506395979901,full_poll=0,busy_percent=0 1606310780000000000
+```
+
+When running the plugin configuration below...
+
+```toml
+[[inputs.dpdk]]
+  interval = "30s"
+  socket_access_timeout = "10s"
+  device_types = ["ethdev"]
+  metadata_fields = ["version", "pid"]
+  plugin_options = ["in_memory"]
+
+  [inputs.dpdk.ethdev]
+    exclude_commands = ["/ethdev/info", "/ethdev/stats", "/ethdev/xstats"]
+```
+
+The expected output for a `dpdk` plugin instance running with the
+`link_status` command and all metadata fields enabled is shown below. An
+additional `link_status` field represents the numeric value of the `status`
+field (`DOWN`=0, `UP`=1):
+
+```text
+dpdk,command=/ethdev/link_status,host=dpdk-host,params=0 pid=100988i,version="DPDK 21.11.2",status="DOWN",link_status=0i 1660295749000000000
+dpdk,command=/ethdev/link_status,host=dpdk-host,params=0 pid=2401624i,version="DPDK 21.11.2",status="UP",link_status=1i 1660295749000000000
+```
diff --git a/content/telegraf/v1/input-plugins/ecs/_index.md b/content/telegraf/v1/input-plugins/ecs/_index.md
new file mode 100644
index 000000000..ff9b5ef6e
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/ecs/_index.md
@@ -0,0 +1,264 @@
+---
+description: "Telegraf plugin for collecting metrics from Amazon ECS"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Amazon ECS
+    identifier: input-ecs
+tags: [Amazon ECS, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Amazon ECS Input Plugin
+
+The Amazon ECS input plugin (Fargate compatible) uses the Amazon ECS metadata
+and stats [v2](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-metadata-endpoint-v2.html) or [v3](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-metadata-endpoint-v3.html) API
+endpoints to gather stats on running containers in a task.
+
+The Telegraf container must run in the same task as the workload it is
+inspecting.
+
+This is similar to (and reuses a few pieces of) the [Docker](/telegraf/v1/plugins/#input-docker) input
+plugin, with some ECS-specific modifications for AWS metadata and stats formats.
+
+The amazon-ecs-agent (though it _is_ a container running on the host) is not
+present in the metadata/stats endpoints.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure plugin
+ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more
+details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics about ECS containers
+[[inputs.ecs]]
+  ## ECS metadata url.
+  ## Metadata v2 API is used if set explicitly. Otherwise,
+  ## v3 metadata endpoint API is used if available.
+  # endpoint_url = ""
+
+  ## Containers to include and exclude. Globs accepted.
+  ## Note that an empty array for both will include all containers
+  # container_name_include = []
+  # container_name_exclude = []
+
+  ## Container states to include and exclude. Globs accepted.
+  ## When empty only containers in the "RUNNING" state will be captured.
+  ## Possible values are "NONE", "PULLED", "CREATED", "RUNNING",
+  ## "RESOURCES_PROVISIONED", "STOPPED".
+  # container_status_include = []
+  # container_status_exclude = []
+
+  ## ecs labels to include and exclude as tags.  Globs accepted.
+  ## Note that an empty array for both will include all labels as tags
+  ecs_label_include = [ "com.amazonaws.ecs.*" ]
+  ecs_label_exclude = []
+
+  ## Timeout for queries.
+  # timeout = "5s"
+```
+
+## Configuration (enforce v2 metadata)
+
+```toml
+# Read metrics about ECS containers
+[[inputs.ecs]]
+  ## ECS metadata url.
+  ## Metadata v2 API is used if set explicitly. Otherwise,
+  ## v3 metadata endpoint API is used if available.
+  endpoint_url = "http://169.254.170.2"
+
+  ## Containers to include and exclude. Globs accepted.
+  ## Note that an empty array for both will include all containers
+  # container_name_include = []
+  # container_name_exclude = []
+
+  ## Container states to include and exclude. Globs accepted.
+  ## When empty only containers in the "RUNNING" state will be captured.
+  ## Possible values are "NONE", "PULLED", "CREATED", "RUNNING",
+  ## "RESOURCES_PROVISIONED", "STOPPED".
+  # container_status_include = []
+  # container_status_exclude = []
+
+  ## ecs labels to include and exclude as tags.  Globs accepted.
+  ## Note that an empty array for both will include all labels as tags
+  ecs_label_include = [ "com.amazonaws.ecs.*" ]
+  ecs_label_exclude = []
+
+  ## Timeout for queries.
+  # timeout = "5s"
+```
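+
+The include/exclude options can also be combined to narrow collection, for
+example to skip the internal pause container injected by ECS and capture only
+running containers. The exclude pattern below is illustrative:
+
+```toml
+[[inputs.ecs]]
+  ## Skip the internal pause container
+  container_name_exclude = ["~internal~ecs~*"]
+
+  ## Capture only containers in the RUNNING state
+  container_status_include = ["RUNNING"]
+
+  ## Keep the standard ECS labels as tags
+  ecs_label_include = [ "com.amazonaws.ecs.*" ]
+```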
+
+## Metrics
+
+- ecs_task
+  - tags:
+    - cluster
+    - task_arn
+    - family
+    - revision
+    - id
+    - name
+  - fields:
+    - desired_status (string)
+    - known_status (string)
+    - limit_cpu (float)
+    - limit_mem (float)
+
+- ecs_container_mem
+  - tags:
+    - cluster
+    - task_arn
+    - family
+    - revision
+    - id
+    - name
+  - fields:
+    - container_id
+    - active_anon
+    - active_file
+    - cache
+    - hierarchical_memory_limit
+    - inactive_anon
+    - inactive_file
+    - mapped_file
+    - pgfault
+    - pgmajfault
+    - pgpgin
+    - pgpgout
+    - rss
+    - rss_huge
+    - total_active_anon
+    - total_active_file
+    - total_cache
+    - total_inactive_anon
+    - total_inactive_file
+    - total_mapped_file
+    - total_pgfault
+    - total_pgmajfault
+    - total_pgpgin
+    - total_pgpgout
+    - total_rss
+    - total_rss_huge
+    - total_unevictable
+    - total_writeback
+    - unevictable
+    - writeback
+    - fail_count
+    - limit
+    - max_usage
+    - usage
+    - usage_percent
+
+- ecs_container_cpu
+  - tags:
+    - cluster
+    - task_arn
+    - family
+    - revision
+    - id
+    - name
+    - cpu
+  - fields:
+    - container_id
+    - usage_total
+    - usage_in_usermode
+    - usage_in_kernelmode
+    - usage_system
+    - throttling_periods
+    - throttling_throttled_periods
+    - throttling_throttled_time
+    - usage_percent
+
+- ecs_container_net
+  - tags:
+    - cluster
+    - task_arn
+    - family
+    - revision
+    - id
+    - name
+    - network
+  - fields:
+    - container_id
+    - rx_packets
+    - rx_dropped
+    - rx_bytes
+    - rx_errors
+    - tx_packets
+    - tx_dropped
+    - tx_bytes
+    - tx_errors
+
+- ecs_container_blkio
+  - tags:
+    - cluster
+    - task_arn
+    - family
+    - revision
+    - id
+    - name
+    - device
+  - fields:
+    - container_id
+    - io_service_bytes_recursive_async
+    - io_service_bytes_recursive_read
+    - io_service_bytes_recursive_sync
+    - io_service_bytes_recursive_total
+    - io_service_bytes_recursive_write
+    - io_serviced_recursive_async
+    - io_serviced_recursive_read
+    - io_serviced_recursive_sync
+    - io_serviced_recursive_total
+    - io_serviced_recursive_write
+
+- ecs_container_meta
+  - tags:
+    - cluster
+    - task_arn
+    - family
+    - revision
+    - id
+    - name
+  - fields:
+    - container_id
+    - docker_name
+    - image
+    - image_id
+    - desired_status
+    - known_status
+    - limit_cpu
+    - limit_mem
+    - created_at
+    - started_at
+    - type
+
+## Example Output
+
+```text
+ecs_task,cluster=test,family=nginx,host=c4b301d4a123,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a desired_status="RUNNING",known_status="RUNNING",limit_cpu=0.5,limit_mem=512 1542641488000000000
+ecs_container_mem,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a active_anon=40960i,active_file=8192i,cache=790528i,pgpgin=1243i,total_pgfault=1298i,total_rss=40960i,limit=1033658368i,max_usage=4825088i,hierarchical_memory_limit=536870912i,rss=40960i,total_active_file=8192i,total_mapped_file=618496i,usage_percent=0.05349543109392212,container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",pgfault=1298i,pgmajfault=6i,pgpgout=1040i,total_active_anon=40960i,total_inactive_file=782336i,total_pgpgin=1243i,usage=552960i,inactive_file=782336i,mapped_file=618496i,total_cache=790528i,total_pgpgout=1040i 1542642001000000000
+ecs_container_cpu,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,cpu=cpu-total,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a usage_in_kernelmode=0i,throttling_throttled_periods=0i,throttling_periods=0i,throttling_throttled_time=0i,container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",usage_percent=0,usage_total=26426156i,usage_in_usermode=20000000i,usage_system=2336100000000i 1542642001000000000
+ecs_container_cpu,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,cpu=cpu0,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",usage_total=26426156i 1542642001000000000
+ecs_container_net,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,network=eth0,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a rx_errors=0i,rx_packets=36i,tx_errors=0i,tx_bytes=648i,container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",rx_dropped=0i,rx_bytes=5338i,tx_packets=8i,tx_dropped=0i 1542642001000000000
+ecs_container_net,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,network=eth5,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a rx_errors=0i,tx_packets=9i,rx_packets=26i,tx_errors=0i,rx_bytes=4641i,tx_dropped=0i,tx_bytes=690i,container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",rx_dropped=0i 1542642001000000000
+ecs_container_net,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,network=total,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a rx_dropped=0i,rx_bytes=9979i,rx_errors=0i,rx_packets=62i,tx_bytes=1338i,container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",tx_packets=17i,tx_dropped=0i,tx_errors=0i 1542642001000000000
+ecs_container_blkio,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,device=253:1,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a io_service_bytes_recursive_sync=790528i,io_service_bytes_recursive_total=790528i,io_serviced_recursive_sync=10i,io_serviced_recursive_write=0i,io_serviced_recursive_async=0i,io_serviced_recursive_total=10i,container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",io_service_bytes_recursive_read=790528i,io_service_bytes_recursive_write=0i,io_service_bytes_recursive_async=0i,io_serviced_recursive_read=10i 1542642001000000000
+ecs_container_blkio,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,device=253:2,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a io_service_bytes_recursive_sync=790528i,io_service_bytes_recursive_total=790528i,io_serviced_recursive_async=0i,io_serviced_recursive_total=10i,container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",io_service_bytes_recursive_read=790528i,io_service_bytes_recursive_write=0i,io_service_bytes_recursive_async=0i,io_serviced_recursive_read=10i,io_serviced_recursive_write=0i,io_serviced_recursive_sync=10i 1542642001000000000
+ecs_container_blkio,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,device=253:4,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a io_service_bytes_recursive_write=0i,io_service_bytes_recursive_sync=790528i,io_service_bytes_recursive_async=0i,io_service_bytes_recursive_total=790528i,io_serviced_recursive_async=0i,container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",io_service_bytes_recursive_read=790528i,io_serviced_recursive_read=10i,io_serviced_recursive_write=0i,io_serviced_recursive_sync=10i,io_serviced_recursive_total=10i 1542642001000000000
+ecs_container_blkio,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,device=202:26368,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a io_serviced_recursive_read=10i,io_serviced_recursive_write=0i,io_serviced_recursive_sync=10i,io_serviced_recursive_async=0i,io_serviced_recursive_total=10i,container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",io_service_bytes_recursive_sync=790528i,io_service_bytes_recursive_total=790528i,io_service_bytes_recursive_async=0i,io_service_bytes_recursive_read=790528i,io_service_bytes_recursive_write=0i 1542642001000000000
+ecs_container_blkio,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,device=total,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a io_serviced_recursive_async=0i,io_serviced_recursive_read=40i,io_serviced_recursive_sync=40i,io_serviced_recursive_write=0i,io_serviced_recursive_total=40i,io_service_bytes_recursive_read=3162112i,io_service_bytes_recursive_write=0i,io_service_bytes_recursive_async=0i,container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",io_service_bytes_recursive_sync=3162112i,io_service_bytes_recursive_total=3162112i 1542642001000000000
+ecs_container_meta,cluster=test,com.amazonaws.ecs.cluster=test,com.amazonaws.ecs.container-name=~internal~ecs~pause,com.amazonaws.ecs.task-arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a,com.amazonaws.ecs.task-definition-family=nginx,com.amazonaws.ecs.task-definition-version=2,family=nginx,host=c4b301d4a123,id=e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba,name=~internal~ecs~pause,revision=2,task_arn=arn:aws:ecs:aws-region-1:012345678901:task/a1234abc-a0a0-0a01-ab01-0abc012a0a0a limit_mem=0,type="CNI_PAUSE",container_id="e6af031b91deb3136a2b7c42f262ed2ab554e2fe2736998c7d8edf4afe708dba",docker_name="ecs-nginx-2-internalecspause",limit_cpu=0,known_status="RESOURCES_PROVISIONED",image="amazon/amazon-ecs-pause:0.1.0",image_id="",desired_status="RESOURCES_PROVISIONED" 1542642001000000000
+```
+
+[docker-input]: /plugins/inputs/docker/README.md
+[task-metadata-endpoint-v2]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-metadata-endpoint-v2.html
+[task-metadata-endpoint-v3]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-metadata-endpoint-v3.html
diff --git a/content/telegraf/v1/input-plugins/elasticsearch/_index.md b/content/telegraf/v1/input-plugins/elasticsearch/_index.md
new file mode 100644
index 000000000..3a22a201e
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/elasticsearch/_index.md
@@ -0,0 +1,905 @@
+---
+description: "Telegraf plugin for collecting metrics from Elasticsearch"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Elasticsearch
+    identifier: input-elasticsearch
+tags: [Elasticsearch, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Elasticsearch Input Plugin
+
+The [elasticsearch](https://www.elastic.co/) plugin queries endpoints to obtain
+[Node Stats](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html) and optionally [Cluster-Health](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html) metrics.
+
+In addition, the following optional queries are only made by the master node:
+[Cluster Stats](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-stats.html), [Indices Stats](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-stats.html), and [Shard Stats](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-stats.html).
+
+Specific Elasticsearch endpoints that are queried:
+
+- Node: either /_nodes/stats or /_nodes/_local/stats depending on the 'local'
+  configuration setting
+- Cluster Health: /_cluster/health?level=indices
+- Cluster Stats: /_cluster/stats
+- Indices Stats: /_all/_stats
+- Shard Stats: /_all/_stats?level=shards
+
+Note that the specific statistics reported can change between Elasticsearch
+versions. In general, this plugin attempts to stay as version-generic as
+possible by tagging only the high-level categories and using a generic JSON
+parser to build unique field names from whatever statistics names are provided
+at the lower levels.
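+
+The version-generic flattening described above can be sketched as a simple
+recursion over the stats JSON; this is an illustrative Python sketch of the
+idea, not the plugin's actual Go implementation:
+
+```python
+def flatten(stats, prefix=""):
+    """Join nested JSON keys with underscores, producing field names
+    such as 'jvm_mem_heap_used_in_bytes' regardless of which exact
+    statistics a given Elasticsearch version reports."""
+    fields = {}
+    for key, value in stats.items():
+        name = f"{prefix}_{key}" if prefix else key
+        if isinstance(value, dict):
+            fields.update(flatten(value, name))
+        else:
+            fields[name] = value
+    return fields
+
+node_stats = {"jvm": {"mem": {"heap_used_in_bytes": 1024}}}
+print(flatten(node_stats))  # {'jvm_mem_heap_used_in_bytes': 1024}
+```
+
+Because only the leaf values are kept, new or renamed intermediate keys in a
+future Elasticsearch release simply produce new field names rather than
+breaking the parser.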
+
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Read stats from one or more Elasticsearch servers or clusters
+[[inputs.elasticsearch]]
+  ## specify a list of one or more Elasticsearch servers
+  ## you can add username and password to your url to use basic authentication:
+  ## servers = ["http://user:pass@localhost:9200"]
+  servers = ["http://localhost:9200"]
+
+  ## HTTP headers to send with each request
+  # headers = { "X-Custom-Header" = "Custom" }
+
+  ## When local is true (the default), the node will read only its own stats.
+  ## Set local to false when you want to read the node stats from all nodes
+  ## of the cluster.
+  local = true
+
+  ## Set cluster_health to true when you want to obtain cluster health stats
+  cluster_health = false
+
+  ## Adjust cluster_health_level when you want to obtain detailed health stats
+  ## The options are
+  ##  - indices (default)
+  ##  - cluster
+  # cluster_health_level = "indices"
+
+  ## Set cluster_stats to true when you want to obtain cluster stats.
+  cluster_stats = false
+
+  ## Only gather cluster_stats from the master node.
+  ## For this to work, local must be set to true.
+  cluster_stats_only_from_master = true
+
+  ## Gather stats from the enrich API
+  # enrich_stats = false
+
+  ## Indices to collect; can be one or more index names or _all
+  ## Use of wildcards is allowed. Use a wildcard at the end to retrieve index
+  ## names that end with a changing value, like a date.
+  indices_include = ["_all"]
+
+  ## One of "shards", "cluster", "indices"
+  ## Currently only "shards" is implemented
+  indices_level = "shards"
+
+  ## node_stats is a list of sub-stats that you want to have gathered.
+  ## Valid options are "indices", "os", "process", "jvm", "thread_pool",
+  ## "fs", "transport", "http", "breaker". Per default, all stats are gathered.
+  # node_stats = ["jvm", "http"]
+
+  ## HTTP Basic Authentication username and password.
+  # username = ""
+  # password = ""
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## If 'use_system_proxy' is set to true, Telegraf will check env vars such as
+  ## HTTP_PROXY, HTTPS_PROXY, and NO_PROXY (or their lowercase counterparts).
+  ## If 'use_system_proxy' is set to false (default) and 'http_proxy_url' is
+  ## provided, Telegraf will use the specified URL as HTTP proxy.
+  # use_system_proxy = false
+  # http_proxy_url = "http://localhost:8888"
+
+  ## Sets the number of most recent indices to return for indices that are
+  ## configured with a date-stamped suffix. Each 'indices_include' entry
+  ## ending with a wildcard (*) or glob matching pattern will group together
+  ## all indices that match it and sort them by the date or number after
+  ## the wildcard. Metrics are then gathered for only the
+  ## 'num_most_recent_indices' most recent indices.
+  # num_most_recent_indices = 0
+```
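+
+The `num_most_recent_indices` behavior described in the comments above can be
+illustrated with a short sketch: group the index names under each wildcard
+pattern, sort them (date-stamped suffixes sort chronologically as strings),
+and keep only the newest ones. This is a hypothetical illustration of the
+documented behavior, not the plugin's actual implementation:
+
+```python
+import fnmatch
+
+def most_recent_indices(index_names, pattern, n):
+    """Return only the n lexically newest indices matching a wildcard
+    pattern; date-stamped suffixes such as '2024.01.02' sort correctly
+    as plain strings."""
+    matched = sorted(fnmatch.filter(index_names, pattern))
+    return matched[-n:] if n > 0 else matched
+
+indices = ["logs-2024.01.01", "logs-2024.01.02", "logs-2024.01.03", "metrics-a"]
+print(most_recent_indices(indices, "logs-*", 2))
+# ['logs-2024.01.02', 'logs-2024.01.03']
+```
+
+With `n = 0` (the default), every matching index is kept, which matches the
+sample configuration's default of `num_most_recent_indices = 0`.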
+
+## Metrics
+
+Emitted when `cluster_health = true`:
+
+- elasticsearch_cluster_health
+  - tags:
+    - name
+  - fields:
+    - active_primary_shards (integer)
+    - active_shards (integer)
+    - active_shards_percent_as_number (float)
+    - delayed_unassigned_shards (integer)
+    - initializing_shards (integer)
+    - number_of_data_nodes (integer)
+    - number_of_in_flight_fetch (integer)
+    - number_of_nodes (integer)
+    - number_of_pending_tasks (integer)
+    - relocating_shards (integer)
+    - status (string, one of green, yellow or red)
+    - status_code (integer, green = 1, yellow = 2, red = 3)
+    - task_max_waiting_in_queue_millis (integer)
+    - timed_out (boolean)
+    - unassigned_shards (integer)
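+
+The `status` to `status_code` mapping listed above can be expressed as a
+simple lookup, which may be handy when post-processing the metric (a
+hypothetical helper, not part of the plugin):
+
+```python
+# Mirrors the documented mapping: green = 1, yellow = 2, red = 3.
+STATUS_CODES = {"green": 1, "yellow": 2, "red": 3}
+
+def status_to_code(status):
+    """Convert the cluster health status string to its numeric code."""
+    return STATUS_CODES[status]
+
+print(status_to_code("yellow"))  # 2
+```
+
+Alerting on the numeric `status_code` field avoids string comparisons in
+downstream queries.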
+
+Emitted when `cluster_health = true` and `cluster_health_level = "indices"`:
+
+- elasticsearch_cluster_health_indices
+  - tags:
+    - index
+    - name
+  - fields:
+    - active_primary_shards (integer)
+    - active_shards (integer)
+    - initializing_shards (integer)
+    - number_of_replicas (integer)
+    - number_of_shards (integer)
+    - relocating_shards (integer)
+    - status (string, one of green, yellow or red)
+    - status_code (integer, green = 1, yellow = 2, red = 3)
+    - unassigned_shards (integer)
+
+Emitted when `cluster_stats = true`:
+
+- elasticsearch_clusterstats_indices
+  - tags:
+    - cluster_name
+    - node_name
+    - status
+  - fields:
+    - completion_size_in_bytes (float)
+    - count (float)
+    - docs_count (float)
+    - docs_deleted (float)
+    - fielddata_evictions (float)
+    - fielddata_memory_size_in_bytes (float)
+    - query_cache_cache_count (float)
+    - query_cache_cache_size (float)
+    - query_cache_evictions (float)
+    - query_cache_hit_count (float)
+    - query_cache_memory_size_in_bytes (float)
+    - query_cache_miss_count (float)
+    - query_cache_total_count (float)
+    - segments_count (float)
+    - segments_doc_values_memory_in_bytes (float)
+    - segments_fixed_bit_set_memory_in_bytes (float)
+    - segments_index_writer_memory_in_bytes (float)
+    - segments_max_unsafe_auto_id_timestamp (float)
+    - segments_memory_in_bytes (float)
+    - segments_norms_memory_in_bytes (float)
+    - segments_points_memory_in_bytes (float)
+    - segments_stored_fields_memory_in_bytes (float)
+    - segments_term_vectors_memory_in_bytes (float)
+    - segments_terms_memory_in_bytes (float)
+    - segments_version_map_memory_in_bytes (float)
+    - shards_index_primaries_avg (float)
+    - shards_index_primaries_max (float)
+    - shards_index_primaries_min (float)
+    - shards_index_replication_avg (float)
+    - shards_index_replication_max (float)
+    - shards_index_replication_min (float)
+    - shards_index_shards_avg (float)
+    - shards_index_shards_max (float)
+    - shards_index_shards_min (float)
+    - shards_primaries (float)
+    - shards_replication (float)
+    - shards_total (float)
+    - store_size_in_bytes (float)
+
+- elasticsearch_clusterstats_nodes
+  - tags:
+    - cluster_name
+    - node_name
+    - status
+  - fields:
+    - count_coordinating_only (float)
+    - count_data (float)
+    - count_ingest (float)
+    - count_master (float)
+    - count_total (float)
+    - fs_available_in_bytes (float)
+    - fs_free_in_bytes (float)
+    - fs_total_in_bytes (float)
+    - jvm_max_uptime_in_millis (float)
+    - jvm_mem_heap_max_in_bytes (float)
+    - jvm_mem_heap_used_in_bytes (float)
+    - jvm_threads (float)
+    - jvm_versions_0_count (float)
+    - jvm_versions_0_version (string)
+    - jvm_versions_0_vm_name (string)
+    - jvm_versions_0_vm_vendor (string)
+    - jvm_versions_0_vm_version (string)
+    - network_types_http_types_security4 (float)
+    - network_types_transport_types_security4 (float)
+    - os_allocated_processors (float)
+    - os_available_processors (float)
+    - os_mem_free_in_bytes (float)
+    - os_mem_free_percent (float)
+    - os_mem_total_in_bytes (float)
+    - os_mem_used_in_bytes (float)
+    - os_mem_used_percent (float)
+    - os_names_0_count (float)
+    - os_names_0_name (string)
+    - os_pretty_names_0_count (float)
+    - os_pretty_names_0_pretty_name (string)
+    - process_cpu_percent (float)
+    - process_open_file_descriptors_avg (float)
+    - process_open_file_descriptors_max (float)
+    - process_open_file_descriptors_min (float)
+    - versions_0 (string)
+
+Emitted when the appropriate `node_stats` options are set:
+
+- elasticsearch_transport
+  - tags:
+    - cluster_name
+    - node_attribute_ml.enabled
+    - node_attribute_ml.machine_memory
+    - node_attribute_ml.max_open_jobs
+    - node_attribute_xpack.installed
+    - node_host
+    - node_id
+    - node_name
+  - fields:
+    - rx_count (float)
+    - rx_size_in_bytes (float)
+    - server_open (float)
+    - tx_count (float)
+    - tx_size_in_bytes (float)
+
+- elasticsearch_breakers
+  - tags:
+    - cluster_name
+    - node_attribute_ml.enabled
+    - node_attribute_ml.machine_memory
+    - node_attribute_ml.max_open_jobs
+    - node_attribute_xpack.installed
+    - node_host
+    - node_id
+    - node_name
+  - fields:
+    - accounting_estimated_size_in_bytes (float)
+    - accounting_limit_size_in_bytes (float)
+    - accounting_overhead (float)
+    - accounting_tripped (float)
+    - fielddata_estimated_size_in_bytes (float)
+    - fielddata_limit_size_in_bytes (float)
+    - fielddata_overhead (float)
+    - fielddata_tripped (float)
+    - in_flight_requests_estimated_size_in_bytes (float)
+    - in_flight_requests_limit_size_in_bytes (float)
+    - in_flight_requests_overhead (float)
+    - in_flight_requests_tripped (float)
+    - parent_estimated_size_in_bytes (float)
+    - parent_limit_size_in_bytes (float)
+    - parent_overhead (float)
+    - parent_tripped (float)
+    - request_estimated_size_in_bytes (float)
+    - request_limit_size_in_bytes (float)
+    - request_overhead (float)
+    - request_tripped (float)
+
+- elasticsearch_fs
+  - tags:
+    - cluster_name
+    - node_attribute_ml.enabled
+    - node_attribute_ml.machine_memory
+    - node_attribute_ml.max_open_jobs
+    - node_attribute_xpack.installed
+    - node_host
+    - node_id
+    - node_name
+  - fields:
+    - data_0_available_in_bytes (float)
+    - data_0_free_in_bytes (float)
+    - data_0_total_in_bytes (float)
+    - io_stats_devices_0_operations (float)
+    - io_stats_devices_0_read_kilobytes (float)
+    - io_stats_devices_0_read_operations (float)
+    - io_stats_devices_0_write_kilobytes (float)
+    - io_stats_devices_0_write_operations (float)
+    - io_stats_total_operations (float)
+    - io_stats_total_read_kilobytes (float)
+    - io_stats_total_read_operations (float)
+    - io_stats_total_write_kilobytes (float)
+    - io_stats_total_write_operations (float)
+    - timestamp (float)
+    - total_available_in_bytes (float)
+    - total_free_in_bytes (float)
+    - total_total_in_bytes (float)
+
+- elasticsearch_http
+  - tags:
+    - cluster_name
+    - node_attribute_ml.enabled
+    - node_attribute_ml.machine_memory
+    - node_attribute_ml.max_open_jobs
+    - node_attribute_xpack.installed
+    - node_host
+    - node_id
+    - node_name
+  - fields:
+    - current_open (float)
+    - total_opened (float)
+
+- elasticsearch_indices
+  - tags:
+    - cluster_name
+    - node_attribute_ml.enabled
+    - node_attribute_ml.machine_memory
+    - node_attribute_ml.max_open_jobs
+    - node_attribute_xpack.installed
+    - node_host
+    - node_id
+    - node_name
+  - fields:
+    - completion_size_in_bytes (float)
+    - docs_count (float)
+    - docs_deleted (float)
+    - fielddata_evictions (float)
+    - fielddata_memory_size_in_bytes (float)
+    - flush_periodic (float)
+    - flush_total (float)
+    - flush_total_time_in_millis (float)
+    - get_current (float)
+    - get_exists_time_in_millis (float)
+    - get_exists_total (float)
+    - get_missing_time_in_millis (float)
+    - get_missing_total (float)
+    - get_time_in_millis (float)
+    - get_total (float)
+    - indexing_delete_current (float)
+    - indexing_delete_time_in_millis (float)
+    - indexing_delete_total (float)
+    - indexing_index_current (float)
+    - indexing_index_failed (float)
+    - indexing_index_time_in_millis (float)
+    - indexing_index_total (float)
+    - indexing_noop_update_total (float)
+    - indexing_throttle_time_in_millis (float)
+    - merges_current (float)
+    - merges_current_docs (float)
+    - merges_current_size_in_bytes (float)
+    - merges_total (float)
+    - merges_total_auto_throttle_in_bytes (float)
+    - merges_total_docs (float)
+    - merges_total_size_in_bytes (float)
+    - merges_total_stopped_time_in_millis (float)
+    - merges_total_throttled_time_in_millis (float)
+    - merges_total_time_in_millis (float)
+    - query_cache_cache_count (float)
+    - query_cache_cache_size (float)
+    - query_cache_evictions (float)
+    - query_cache_hit_count (float)
+    - query_cache_memory_size_in_bytes (float)
+    - query_cache_miss_count (float)
+    - query_cache_total_count (float)
+    - recovery_current_as_source (float)
+    - recovery_current_as_target (float)
+    - recovery_throttle_time_in_millis (float)
+    - refresh_listeners (float)
+    - refresh_total (float)
+    - refresh_total_time_in_millis (float)
+    - request_cache_evictions (float)
+    - request_cache_hit_count (float)
+    - request_cache_memory_size_in_bytes (float)
+    - request_cache_miss_count (float)
+    - search_fetch_current (float)
+    - search_fetch_time_in_millis (float)
+    - search_fetch_total (float)
+    - search_open_contexts (float)
+    - search_query_current (float)
+    - search_query_time_in_millis (float)
+    - search_query_total (float)
+    - search_scroll_current (float)
+    - search_scroll_time_in_millis (float)
+    - search_scroll_total (float)
+    - search_suggest_current (float)
+    - search_suggest_time_in_millis (float)
+    - search_suggest_total (float)
+    - segments_count (float)
+    - segments_doc_values_memory_in_bytes (float)
+    - segments_fixed_bit_set_memory_in_bytes (float)
+    - segments_index_writer_memory_in_bytes (float)
+    - segments_max_unsafe_auto_id_timestamp (float)
+    - segments_memory_in_bytes (float)
+    - segments_norms_memory_in_bytes (float)
+    - segments_points_memory_in_bytes (float)
+    - segments_stored_fields_memory_in_bytes (float)
+    - segments_term_vectors_memory_in_bytes (float)
+    - segments_terms_memory_in_bytes (float)
+    - segments_version_map_memory_in_bytes (float)
+    - store_size_in_bytes (float)
+    - translog_earliest_last_modified_age (float)
+    - translog_operations (float)
+    - translog_size_in_bytes (float)
+    - translog_uncommitted_operations (float)
+    - translog_uncommitted_size_in_bytes (float)
+    - warmer_current (float)
+    - warmer_total (float)
+    - warmer_total_time_in_millis (float)
+
+- elasticsearch_jvm
+  - tags:
+    - cluster_name
+    - node_attribute_ml.enabled
+    - node_attribute_ml.machine_memory
+    - node_attribute_ml.max_open_jobs
+    - node_attribute_xpack.installed
+    - node_host
+    - node_id
+    - node_name
+  - fields:
+    - buffer_pools_direct_count (float)
+    - buffer_pools_direct_total_capacity_in_bytes (float)
+    - buffer_pools_direct_used_in_bytes (float)
+    - buffer_pools_mapped_count (float)
+    - buffer_pools_mapped_total_capacity_in_bytes (float)
+    - buffer_pools_mapped_used_in_bytes (float)
+    - classes_current_loaded_count (float)
+    - classes_total_loaded_count (float)
+    - classes_total_unloaded_count (float)
+    - gc_collectors_old_collection_count (float)
+    - gc_collectors_old_collection_time_in_millis (float)
+    - gc_collectors_young_collection_count (float)
+    - gc_collectors_young_collection_time_in_millis (float)
+    - mem_heap_committed_in_bytes (float)
+    - mem_heap_max_in_bytes (float)
+    - mem_heap_used_in_bytes (float)
+    - mem_heap_used_percent (float)
+    - mem_non_heap_committed_in_bytes (float)
+    - mem_non_heap_used_in_bytes (float)
+    - mem_pools_old_max_in_bytes (float)
+    - mem_pools_old_peak_max_in_bytes (float)
+    - mem_pools_old_peak_used_in_bytes (float)
+    - mem_pools_old_used_in_bytes (float)
+    - mem_pools_survivor_max_in_bytes (float)
+    - mem_pools_survivor_peak_max_in_bytes (float)
+    - mem_pools_survivor_peak_used_in_bytes (float)
+    - mem_pools_survivor_used_in_bytes (float)
+    - mem_pools_young_max_in_bytes (float)
+    - mem_pools_young_peak_max_in_bytes (float)
+    - mem_pools_young_peak_used_in_bytes (float)
+    - mem_pools_young_used_in_bytes (float)
+    - threads_count (float)
+    - threads_peak_count (float)
+    - timestamp (float)
+    - uptime_in_millis (float)
+
+- elasticsearch_os
+  - tags:
+    - cluster_name
+    - node_attribute_ml.enabled
+    - node_attribute_ml.machine_memory
+    - node_attribute_ml.max_open_jobs
+    - node_attribute_xpack.installed
+    - node_host
+    - node_id
+    - node_name
+  - fields:
+    - cgroup_cpu_cfs_period_micros (float)
+    - cgroup_cpu_cfs_quota_micros (float)
+    - cgroup_cpu_stat_number_of_elapsed_periods (float)
+    - cgroup_cpu_stat_number_of_times_throttled (float)
+    - cgroup_cpu_stat_time_throttled_nanos (float)
+    - cgroup_cpuacct_usage_nanos (float)
+    - cpu_load_average_15m (float)
+    - cpu_load_average_1m (float)
+    - cpu_load_average_5m (float)
+    - cpu_percent (float)
+    - mem_free_in_bytes (float)
+    - mem_free_percent (float)
+    - mem_total_in_bytes (float)
+    - mem_used_in_bytes (float)
+    - mem_used_percent (float)
+    - swap_free_in_bytes (float)
+    - swap_total_in_bytes (float)
+    - swap_used_in_bytes (float)
+    - timestamp (float)
+
+- elasticsearch_process
+  - tags:
+    - cluster_name
+    - node_attribute_ml.enabled
+    - node_attribute_ml.machine_memory
+    - node_attribute_ml.max_open_jobs
+    - node_attribute_xpack.installed
+    - node_host
+    - node_id
+    - node_name
+  - fields:
+    - cpu_percent (float)
+    - cpu_total_in_millis (float)
+    - max_file_descriptors (float)
+    - mem_total_virtual_in_bytes (float)
+    - open_file_descriptors (float)
+    - timestamp (float)
+
+- elasticsearch_thread_pool
+  - tags:
+    - cluster_name
+    - node_attribute_ml.enabled
+    - node_attribute_ml.machine_memory
+    - node_attribute_ml.max_open_jobs
+    - node_attribute_xpack.installed
+    - node_host
+    - node_id
+    - node_name
+  - fields:
+    - analyze_active (float)
+    - analyze_completed (float)
+    - analyze_largest (float)
+    - analyze_queue (float)
+    - analyze_rejected (float)
+    - analyze_threads (float)
+    - ccr_active (float)
+    - ccr_completed (float)
+    - ccr_largest (float)
+    - ccr_queue (float)
+    - ccr_rejected (float)
+    - ccr_threads (float)
+    - fetch_shard_started_active (float)
+    - fetch_shard_started_completed (float)
+    - fetch_shard_started_largest (float)
+    - fetch_shard_started_queue (float)
+    - fetch_shard_started_rejected (float)
+    - fetch_shard_started_threads (float)
+    - fetch_shard_store_active (float)
+    - fetch_shard_store_completed (float)
+    - fetch_shard_store_largest (float)
+    - fetch_shard_store_queue (float)
+    - fetch_shard_store_rejected (float)
+    - fetch_shard_store_threads (float)
+    - flush_active (float)
+    - flush_completed (float)
+    - flush_largest (float)
+    - flush_queue (float)
+    - flush_rejected (float)
+    - flush_threads (float)
+    - force_merge_active (float)
+    - force_merge_completed (float)
+    - force_merge_largest (float)
+    - force_merge_queue (float)
+    - force_merge_rejected (float)
+    - force_merge_threads (float)
+    - generic_active (float)
+    - generic_completed (float)
+    - generic_largest (float)
+    - generic_queue (float)
+    - generic_rejected (float)
+    - generic_threads (float)
+    - get_active (float)
+    - get_completed (float)
+    - get_largest (float)
+    - get_queue (float)
+    - get_rejected (float)
+    - get_threads (float)
+    - index_active (float)
+    - index_completed (float)
+    - index_largest (float)
+    - index_queue (float)
+    - index_rejected (float)
+    - index_threads (float)
+    - listener_active (float)
+    - listener_completed (float)
+    - listener_largest (float)
+    - listener_queue (float)
+    - listener_rejected (float)
+    - listener_threads (float)
+    - management_active (float)
+    - management_completed (float)
+    - management_largest (float)
+    - management_queue (float)
+    - management_rejected (float)
+    - management_threads (float)
+    - ml_autodetect_active (float)
+    - ml_autodetect_completed (float)
+    - ml_autodetect_largest (float)
+    - ml_autodetect_queue (float)
+    - ml_autodetect_rejected (float)
+    - ml_autodetect_threads (float)
+    - ml_datafeed_active (float)
+    - ml_datafeed_completed (float)
+    - ml_datafeed_largest (float)
+    - ml_datafeed_queue (float)
+    - ml_datafeed_rejected (float)
+    - ml_datafeed_threads (float)
+    - ml_utility_active (float)
+    - ml_utility_completed (float)
+    - ml_utility_largest (float)
+    - ml_utility_queue (float)
+    - ml_utility_rejected (float)
+    - ml_utility_threads (float)
+    - refresh_active (float)
+    - refresh_completed (float)
+    - refresh_largest (float)
+    - refresh_queue (float)
+    - refresh_rejected (float)
+    - refresh_threads (float)
+    - rollup_indexing_active (float)
+    - rollup_indexing_completed (float)
+    - rollup_indexing_largest (float)
+    - rollup_indexing_queue (float)
+    - rollup_indexing_rejected (float)
+    - rollup_indexing_threads (float)
+    - search_active (float)
+    - search_completed (float)
+    - search_largest (float)
+    - search_queue (float)
+    - search_rejected (float)
+    - search_threads (float)
+    - search_throttled_active (float)
+    - search_throttled_completed (float)
+    - search_throttled_largest (float)
+    - search_throttled_queue (float)
+    - search_throttled_rejected (float)
+    - search_throttled_threads (float)
+    - security-token-key_active (float)
+    - security-token-key_completed (float)
+    - security-token-key_largest (float)
+    - security-token-key_queue (float)
+    - security-token-key_rejected (float)
+    - security-token-key_threads (float)
+    - snapshot_active (float)
+    - snapshot_completed (float)
+    - snapshot_largest (float)
+    - snapshot_queue (float)
+    - snapshot_rejected (float)
+    - snapshot_threads (float)
+    - warmer_active (float)
+    - warmer_completed (float)
+    - warmer_largest (float)
+    - warmer_queue (float)
+    - warmer_rejected (float)
+    - warmer_threads (float)
+    - watcher_active (float)
+    - watcher_completed (float)
+    - watcher_largest (float)
+    - watcher_queue (float)
+    - watcher_rejected (float)
+    - watcher_threads (float)
+    - write_active (float)
+    - write_completed (float)
+    - write_largest (float)
+    - write_queue (float)
+    - write_rejected (float)
+    - write_threads (float)
+
+Emitted when the appropriate `indices_stats` options are set.
+
+- elasticsearch_indices_stats_(primaries|total)
+  - tags:
+    - index_name
+  - fields:
+    - completion_size_in_bytes (float)
+    - docs_count (float)
+    - docs_deleted (float)
+    - fielddata_evictions (float)
+    - fielddata_memory_size_in_bytes (float)
+    - flush_periodic (float)
+    - flush_total (float)
+    - flush_total_time_in_millis (float)
+    - get_current (float)
+    - get_exists_time_in_millis (float)
+    - get_exists_total (float)
+    - get_missing_time_in_millis (float)
+    - get_missing_total (float)
+    - get_time_in_millis (float)
+    - get_total (float)
+    - indexing_delete_current (float)
+    - indexing_delete_time_in_millis (float)
+    - indexing_delete_total (float)
+    - indexing_index_current (float)
+    - indexing_index_failed (float)
+    - indexing_index_time_in_millis (float)
+    - indexing_index_total (float)
+    - indexing_is_throttled (float)
+    - indexing_noop_update_total (float)
+    - indexing_throttle_time_in_millis (float)
+    - merges_current (float)
+    - merges_current_docs (float)
+    - merges_current_size_in_bytes (float)
+    - merges_total (float)
+    - merges_total_auto_throttle_in_bytes (float)
+    - merges_total_docs (float)
+    - merges_total_size_in_bytes (float)
+    - merges_total_stopped_time_in_millis (float)
+    - merges_total_throttled_time_in_millis (float)
+    - merges_total_time_in_millis (float)
+    - query_cache_cache_count (float)
+    - query_cache_cache_size (float)
+    - query_cache_evictions (float)
+    - query_cache_hit_count (float)
+    - query_cache_memory_size_in_bytes (float)
+    - query_cache_miss_count (float)
+    - query_cache_total_count (float)
+    - recovery_current_as_source (float)
+    - recovery_current_as_target (float)
+    - recovery_throttle_time_in_millis (float)
+    - refresh_external_total (float)
+    - refresh_external_total_time_in_millis (float)
+    - refresh_listeners (float)
+    - refresh_total (float)
+    - refresh_total_time_in_millis (float)
+    - request_cache_evictions (float)
+    - request_cache_hit_count (float)
+    - request_cache_memory_size_in_bytes (float)
+    - request_cache_miss_count (float)
+    - search_fetch_current (float)
+    - search_fetch_time_in_millis (float)
+    - search_fetch_total (float)
+    - search_open_contexts (float)
+    - search_query_current (float)
+    - search_query_time_in_millis (float)
+    - search_query_total (float)
+    - search_scroll_current (float)
+    - search_scroll_time_in_millis (float)
+    - search_scroll_total (float)
+    - search_suggest_current (float)
+    - search_suggest_time_in_millis (float)
+    - search_suggest_total (float)
+    - segments_count (float)
+    - segments_doc_values_memory_in_bytes (float)
+    - segments_fixed_bit_set_memory_in_bytes (float)
+    - segments_index_writer_memory_in_bytes (float)
+    - segments_max_unsafe_auto_id_timestamp (float)
+    - segments_memory_in_bytes (float)
+    - segments_norms_memory_in_bytes (float)
+    - segments_points_memory_in_bytes (float)
+    - segments_stored_fields_memory_in_bytes (float)
+    - segments_term_vectors_memory_in_bytes (float)
+    - segments_terms_memory_in_bytes (float)
+    - segments_version_map_memory_in_bytes (float)
+    - store_size_in_bytes (float)
+    - translog_earliest_last_modified_age (float)
+    - translog_operations (float)
+    - translog_size_in_bytes (float)
+    - translog_uncommitted_operations (float)
+    - translog_uncommitted_size_in_bytes (float)
+    - warmer_current (float)
+    - warmer_total (float)
+    - warmer_total_time_in_millis (float)
+
+Emitted when the appropriate `shards_stats` options are set.
+
+- elasticsearch_indices_stats_shards_total
+  - fields:
+    - failed (float)
+    - successful (float)
+    - total (float)
+
+- elasticsearch_indices_stats_shards
+  - tags:
+    - index_name
+    - node_name
+    - shard_name
+    - type
+  - fields:
+    - commit_generation (float)
+    - commit_num_docs (float)
+    - completion_size_in_bytes (float)
+    - docs_count (float)
+    - docs_deleted (float)
+    - fielddata_evictions (float)
+    - fielddata_memory_size_in_bytes (float)
+    - flush_periodic (float)
+    - flush_total (float)
+    - flush_total_time_in_millis (float)
+    - get_current (float)
+    - get_exists_time_in_millis (float)
+    - get_exists_total (float)
+    - get_missing_time_in_millis (float)
+    - get_missing_total (float)
+    - get_time_in_millis (float)
+    - get_total (float)
+    - indexing_delete_current (float)
+    - indexing_delete_time_in_millis (float)
+    - indexing_delete_total (float)
+    - indexing_index_current (float)
+    - indexing_index_failed (float)
+    - indexing_index_time_in_millis (float)
+    - indexing_index_total (float)
+    - indexing_is_throttled (bool)
+    - indexing_noop_update_total (float)
+    - indexing_throttle_time_in_millis (float)
+    - merges_current (float)
+    - merges_current_docs (float)
+    - merges_current_size_in_bytes (float)
+    - merges_total (float)
+    - merges_total_auto_throttle_in_bytes (float)
+    - merges_total_docs (float)
+    - merges_total_size_in_bytes (float)
+    - merges_total_stopped_time_in_millis (float)
+    - merges_total_throttled_time_in_millis (float)
+    - merges_total_time_in_millis (float)
+    - query_cache_cache_count (float)
+    - query_cache_cache_size (float)
+    - query_cache_evictions (float)
+    - query_cache_hit_count (float)
+    - query_cache_memory_size_in_bytes (float)
+    - query_cache_miss_count (float)
+    - query_cache_total_count (float)
+    - recovery_current_as_source (float)
+    - recovery_current_as_target (float)
+    - recovery_throttle_time_in_millis (float)
+    - refresh_external_total (float)
+    - refresh_external_total_time_in_millis (float)
+    - refresh_listeners (float)
+    - refresh_total (float)
+    - refresh_total_time_in_millis (float)
+    - request_cache_evictions (float)
+    - request_cache_hit_count (float)
+    - request_cache_memory_size_in_bytes (float)
+    - request_cache_miss_count (float)
+    - retention_leases_primary_term (float)
+    - retention_leases_version (float)
+    - routing_state (int)
+      (UNASSIGNED = 1, INITIALIZING = 2, STARTED = 3, RELOCATING = 4, other = 0)
+    - search_fetch_current (float)
+    - search_fetch_time_in_millis (float)
+    - search_fetch_total (float)
+    - search_open_contexts (float)
+    - search_query_current (float)
+    - search_query_time_in_millis (float)
+    - search_query_total (float)
+    - search_scroll_current (float)
+    - search_scroll_time_in_millis (float)
+    - search_scroll_total (float)
+    - search_suggest_current (float)
+    - search_suggest_time_in_millis (float)
+    - search_suggest_total (float)
+    - segments_count (float)
+    - segments_doc_values_memory_in_bytes (float)
+    - segments_fixed_bit_set_memory_in_bytes (float)
+    - segments_index_writer_memory_in_bytes (float)
+    - segments_max_unsafe_auto_id_timestamp (float)
+    - segments_memory_in_bytes (float)
+    - segments_norms_memory_in_bytes (float)
+    - segments_points_memory_in_bytes (float)
+    - segments_stored_fields_memory_in_bytes (float)
+    - segments_term_vectors_memory_in_bytes (float)
+    - segments_terms_memory_in_bytes (float)
+    - segments_version_map_memory_in_bytes (float)
+    - seq_no_global_checkpoint (float)
+    - seq_no_local_checkpoint (float)
+    - seq_no_max_seq_no (float)
+    - shard_path_is_custom_data_path (bool)
+    - store_size_in_bytes (float)
+    - translog_earliest_last_modified_age (float)
+    - translog_operations (float)
+    - translog_size_in_bytes (float)
+    - translog_uncommitted_operations (float)
+    - translog_uncommitted_size_in_bytes (float)
+    - warmer_current (float)
+    - warmer_total (float)
+    - warmer_total_time_in_millis (float)
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/elasticsearch_query/_index.md b/content/telegraf/v1/input-plugins/elasticsearch_query/_index.md
new file mode 100644
index 000000000..e02c2adb2
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/elasticsearch_query/_index.md
@@ -0,0 +1,218 @@
+---
+description: "Telegraf plugin for collecting metrics from Elasticsearch Query"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Elasticsearch Query
+    identifier: input-elasticsearch_query
+tags: [Elasticsearch Query, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Elasticsearch Query Input Plugin
+
+This [elasticsearch](https://www.elastic.co/) query plugin queries endpoints
+to obtain metrics from data stored in an Elasticsearch cluster.
+
+The following is supported:
+
+- return number of hits for a search query
+- calculate the avg/max/min/sum for a numeric field, filtered by a query,
+  aggregated per tag
+- count number of terms for a particular field
+
+## Elasticsearch Support
+
+This plugin is tested against Elasticsearch 5.x and 6.x releases.
+It is currently known to break on 7.x and greater versions.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Derive metrics from aggregating Elasticsearch query results
+[[inputs.elasticsearch_query]]
+  ## The full HTTP endpoint URL for your Elasticsearch instance
+  ## Multiple urls can be specified as part of the same cluster,
+  ## this means that only ONE of the urls will be queried at each interval.
+  urls = [ "http://node1.es.example.com:9200" ] # required.
+
+  ## Elasticsearch client timeout, defaults to "5s".
+  # timeout = "5s"
+
+  ## Set to true to ask Elasticsearch for a list of all cluster nodes,
+  ## thus it is not necessary to list all nodes in the urls config option
+  # enable_sniffer = false
+
+  ## Set the interval to check if the Elasticsearch nodes are available
+  ## This option is only used if enable_sniffer is also set (0s to disable it)
+  # health_check_interval = "10s"
+
+  ## HTTP basic authentication details (eg. when using x-pack)
+  # username = "telegraf"
+  # password = "mypassword"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+ 
+  ## If 'use_system_proxy' is set to true, Telegraf will check env vars such as
+  ## HTTP_PROXY, HTTPS_PROXY, and NO_PROXY (or their lowercase counterparts).
+  ## If 'use_system_proxy' is set to false (default) and 'http_proxy_url' is
+  ## provided, Telegraf will use the specified URL as HTTP proxy.
+  # use_system_proxy = false
+  # http_proxy_url = "http://localhost:8888"
+
+  [[inputs.elasticsearch_query.aggregation]]
+    ## measurement name for the results of the aggregation query
+    measurement_name = "measurement"
+
+    ## Elasticsearch indexes to query (accept wildcards).
+    index = "index-*"
+
+    ## The date/time field in the Elasticsearch index (mandatory).
+    date_field = "@timestamp"
+
+    ## If the field used for the date/time field in Elasticsearch is also using
+    ## a custom date/time format it may be required to provide the format to
+    ## correctly parse the field.
+    ##
+    ## If using one of the built in elasticsearch formats this is not required.
+    # date_field_custom_format = ""
+
+    ## Time window to query (eg. "1m" to query documents from last minute).
+    ## Normally this should be set to the same as the collection interval
+    query_period = "1m"
+
+    ## Lucene query to filter results
+    # filter_query = "*"
+
+    ## Fields to aggregate values (must be numeric fields)
+    # metric_fields = ["metric"]
+
+    ## Aggregation function to use on the metric fields
+    ## Must be set if 'metric_fields' is set
+    ## Valid values are: avg, sum, min, max
+    # metric_function = "avg"
+
+    ## Fields to be used as tags
+    ## Must be text, non-analyzed fields. Metric aggregations are performed
+    ## per tag
+    # tags = ["field.keyword", "field2.keyword"]
+
+    ## Set to true to not ignore documents when the tag(s) above are missing
+    # include_missing_tag = false
+
+    ## String value of the tag when the tag does not exist
+    ## Used when include_missing_tag is true
+    # missing_tag_value = "null"
+```
+
+## Examples
+
+Please note that the `[[inputs.elasticsearch_query]]` section is still required
+for all of the examples below.
+
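+For instance, combining the parent table with a minimal aggregation gives a
+complete configuration of this shape (the values are placeholders taken from
+the sample configuration above, not recommendations):
+
+```toml
+[[inputs.elasticsearch_query]]
+  urls = [ "http://node1.es.example.com:9200" ]
+
+  [[inputs.elasticsearch_query.aggregation]]
+    measurement_name = "measurement"
+    index = "index-*"
+    date_field = "@timestamp"
+    query_period = "1m"
+```
+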
+### Search the average response time, per URI and per response status code
+
+```toml
+[[inputs.elasticsearch_query.aggregation]]
+  measurement_name = "http_logs"
+  index = "my-index-*"
+  filter_query = "*"
+  metric_fields = ["response_time"]
+  metric_function = "avg"
+  tags = ["URI.keyword", "response.keyword"]
+  include_missing_tag = true
+  missing_tag_value = "null"
+  date_field = "@timestamp"
+  query_period = "1m"
+```
+
+### Search the maximum response time per method and per URI
+
+```toml
+[[inputs.elasticsearch_query.aggregation]]
+  measurement_name = "http_logs"
+  index = "my-index-*"
+  filter_query = "*"
+  metric_fields = ["response_time"]
+  metric_function = "max"
+  tags = ["method.keyword","URI.keyword"]
+  include_missing_tag = false
+  missing_tag_value = "null"
+  date_field = "@timestamp"
+  query_period = "1m"
+```
+
+### Search number of documents matching a filter query in all indices
+
+```toml
+[[inputs.elasticsearch_query.aggregation]]
+  measurement_name = "http_logs"
+  index = "*"
+  filter_query = "product_1 AND HEAD"
+  query_period = "1m"
+  date_field = "@timestamp"
+```
+
+### Search number of documents matching a filter query, returning per response status code
+
+```toml
+[[inputs.elasticsearch_query.aggregation]]
+  measurement_name = "http_logs"
+  index = "*"
+  filter_query = "downloads"
+  tags = ["response.keyword"]
+  include_missing_tag = false
+  date_field = "@timestamp"
+  query_period = "1m"
+```
+
+### Required parameters
+
+- `measurement_name`: The target measurement in which to store the results of
+  the aggregation query.
+- `index`: The index name to query on Elasticsearch
+- `query_period`: The time window to query (eg. "1m" to query documents from
+  the last minute). This should normally be set to the same value as the
+  collection interval.
+- `date_field`: The date/time field in the Elasticsearch index
+
+### Optional parameters
+
+- `date_field_custom_format`: Not needed if using one of the built in date/time
+  formats of Elasticsearch, but may be required if using a custom date/time
+  format. The format syntax uses the [Joda date format](https://www.elastic.co/guide/en/elasticsearch/reference/6.8/search-aggregations-bucket-daterange-aggregation.html#date-format-pattern).
+- `filter_query`: Lucene query to filter the results (default: "\*")
+- `metric_fields`: The list of fields to perform metric aggregation (these must
+  be indexed as numeric fields)
+- `metric_function`: The single-value metric aggregation function to be performed
+  on the `metric_fields` defined. Currently supported aggregations are "avg",
+  "min", "max", "sum" (see the [aggregation docs](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics.html)).
+- `tags`: The list of fields to be used as tags (these must be indexed as
+  non-analyzed fields). A "terms aggregation" will be done per tag defined
+- `include_missing_tag`: Set to true to not ignore documents where the tag(s)
+  specified above do not exist. (If false, documents without the specified tag
+  field will be ignored in `doc_count` and in the metric aggregation)
+- `missing_tag_value`: The value of the tag that will be set for documents in
+  which the tag field does not exist. Only used when `include_missing_tag` is
+  set to `true`.
+
+## Metrics
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/ethtool/_index.md b/content/telegraf/v1/input-plugins/ethtool/_index.md
new file mode 100644
index 000000000..09f8e032a
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/ethtool/_index.md
@@ -0,0 +1,118 @@
+---
+description: "Telegraf plugin for collecting metrics from Ethtool"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Ethtool
+    identifier: input-ethtool
+tags: [Ethtool, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Ethtool Input Plugin
+
+The ethtool input plugin pulls ethernet device stats. Fields pulled will depend
+on the network device and driver.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Returns ethtool statistics for given interfaces
+# This plugin ONLY supports Linux
+[[inputs.ethtool]]
+  ## List of interfaces to pull metrics for
+  # interface_include = ["eth0"]
+
+  ## List of interfaces to ignore when pulling metrics.
+  # interface_exclude = ["eth1"]
+
+  ## Plugin behavior for downed interfaces
+  ## Available choices:
+  ##   - expose: collect & report metrics for down interfaces
+  ##   - skip: ignore interfaces that are marked down
+  # down_interfaces = "expose"
+
+  ## Reading statistics from interfaces in additional namespaces is also
+  ## supported, so long as the namespaces are named (have a symlink in
+  ## /var/run/netns). The telegraf process will also need the CAP_SYS_ADMIN
+  ## permission.
+  ## By default, only the current namespace will be used. For additional
+  ## namespace support, at least one of `namespace_include` and
+  ## `namespace_exclude` must be provided.
+  ## To include all namespaces, set `namespace_include` to `["*"]`.
+  ## The initial namespace (if anonymous) can be specified with the empty
+  ## string ("").
+
+  ## List of namespaces to pull metrics for
+  # namespace_include = []
+
+  ## List of namespaces to ignore when pulling metrics.
+  # namespace_exclude = []
+
+  ## Some drivers declare statistics with extra whitespace, different spacing,
+  ## and mixed case. This list, when enabled, can be used to clean the keys.
+  ## Here are the current possible normalizations:
+  ##  * snakecase: converts fooBarBaz to foo_bar_baz
+  ##  * trim: removes leading and trailing whitespace
+  ##  * lower: changes all capitalized letters to lowercase
+  ##  * underscore: replaces spaces with underscores
+  # normalize_keys = ["snakecase", "trim", "lower", "underscore"]
+```
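+
+As a rough illustration (not Telegraf's actual implementation), the four
+`normalize_keys` transformations could be sketched in Python like this:
+
+```python
+import re
+
+def normalize_key(key: str) -> str:
+    # snakecase: convert fooBarBaz to foo_bar_baz
+    key = re.sub(r'(?<=[a-z0-9])([A-Z])', r'_\1', key)
+    # trim: remove leading and trailing whitespace
+    key = key.strip()
+    # lower: change all capitalized letters to lowercase
+    key = key.lower()
+    # underscore: replace spaces with underscores
+    key = key.replace(' ', '_')
+    return key
+
+print(normalize_key('  RX fooBarBaz Errors '))  # rx_foo_bar_baz_errors
+```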
+
+Interfaces can be included or ignored using:
+
+- `interface_include`
+- `interface_exclude`
+
+Note that loopback interfaces will be automatically ignored.
+
+## Namespaces
+
+Metrics from interfaces in additional namespaces will be retrieved if either
+`namespace_include` or `namespace_exclude` is configured (to a non-empty list).
+This requires `CAP_SYS_ADMIN` permissions to switch namespaces, which can be
+granted to telegraf in several ways. The two recommended ways are listed below:
+
+### Using systemd capabilities
+
+If you are using systemd to run Telegraf, you may run
+`systemctl edit telegraf.service` and add the following:
+
+```text
+[Service]
+AmbientCapabilities=CAP_SYS_ADMIN
+```
+
+### Configuring executable capabilities
+
+If you are not using systemd to run Telegraf, you can configure the Telegraf
+executable to have `CAP_SYS_ADMIN` when run.
+
+```sh
+sudo setcap CAP_SYS_ADMIN+epi $(which telegraf)
+```
+
+N.B.: This capability is a filesystem attribute on the binary itself. The
+attribute needs to be re-applied if the Telegraf binary is rotated (e.g. on
+installation of a new Telegraf version from the system package manager).
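+
+If in doubt, the currently applied file capabilities can be inspected with
+`getcap` (the exact output format varies between libcap versions):
+
+```sh
+getcap $(which telegraf)
+```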
+
+## Metrics
+
+Metrics are dependent on the network device and driver.
+
+## Example Output
+
+```text
+ethtool,driver=igb,host=test01,interface=mgmt0 tx_queue_1_packets=280782i,rx_queue_5_csum_err=0i,tx_queue_4_restart=0i,tx_multicast=7i,tx_queue_1_bytes=39674885i,rx_queue_2_alloc_failed=0i,tx_queue_5_packets=173970i,tx_single_coll_ok=0i,rx_queue_1_drops=0i,tx_queue_2_restart=0i,tx_aborted_errors=0i,rx_queue_6_csum_err=0i,tx_queue_5_restart=0i,tx_queue_4_bytes=64810835i,tx_abort_late_coll=0i,tx_queue_4_packets=109102i,os2bmc_tx_by_bmc=0i,tx_bytes=427527435i,tx_queue_7_packets=66665i,dropped_smbus=0i,rx_queue_0_csum_err=0i,tx_flow_control_xoff=0i,rx_packets=25926536i,rx_queue_7_csum_err=0i,rx_queue_3_bytes=84326060i,rx_multicast=83771i,rx_queue_4_alloc_failed=0i,rx_queue_3_drops=0i,rx_queue_3_csum_err=0i,rx_errors=0i,tx_errors=0i,tx_queue_6_packets=183236i,rx_broadcast=24378893i,rx_queue_7_packets=88680i,tx_dropped=0i,rx_frame_errors=0i,tx_queue_3_packets=161045i,tx_packets=1257017i,rx_queue_1_csum_err=0i,tx_window_errors=0i,tx_dma_out_of_sync=0i,rx_length_errors=0i,rx_queue_5_drops=0i,tx_timeout_count=0i,rx_queue_4_csum_err=0i,rx_flow_control_xon=0i,tx_heartbeat_errors=0i,tx_flow_control_xon=0i,collisions=0i,tx_queue_0_bytes=29465801i,rx_queue_6_drops=0i,rx_queue_0_alloc_failed=0i,tx_queue_1_restart=0i,rx_queue_0_drops=0i,tx_broadcast=9i,tx_carrier_errors=0i,tx_queue_7_bytes=13777515i,tx_queue_7_restart=0i,rx_queue_5_bytes=50732006i,rx_queue_7_bytes=35744457i,tx_deferred_ok=0i,tx_multi_coll_ok=0i,rx_crc_errors=0i,rx_fifo_errors=0i,rx_queue_6_alloc_failed=0i,tx_queue_2_packets=175206i,tx_queue_0_packets=107011i,rx_queue_4_bytes=201364548i,rx_queue_6_packets=372573i,os2bmc_rx_by_host=0i,multicast=83771i,rx_queue_4_drops=0i,rx_queue_5_packets=130535i,rx_queue_6_bytes=139488035i,tx_fifo_errors=0i,tx_queue_5_bytes=84899130i,rx_queue_0_packets=24529563i,rx_queue_3_alloc_failed=0i,rx_queue_7_drops=0i,tx_queue_6_bytes=96288614i,tx_queue_2_bytes=22132949i,tx_tcp_seg_failed=0i,rx_queue_1_bytes=246703840i,rx_queue_0_bytes=1506870738i,tx_queue_0_restart=0i,rx_queue_2_bytes=111344804i,tx_tcp_seg_good=0i,tx_queue_3_restart=0i,rx_no_buffer_count=0i,rx_smbus=0i,rx_queue_1_packets=273865i,rx_over_errors=0i,os2bmc_tx_by_host=0i,rx_queue_1_alloc_failed=0i,rx_queue_7_alloc_failed=0i,rx_short_length_errors=0i,tx_hwtstamp_timeouts=0i,tx_queue_6_restart=0i,rx_queue_2_packets=207136i,tx_queue_3_bytes=70391970i,rx_queue_3_packets=112007i,rx_queue_4_packets=212177i,tx_smbus=0i,rx_long_byte_count=2480280632i,rx_queue_2_csum_err=0i,rx_missed_errors=0i,rx_bytes=2480280632i,rx_queue_5_alloc_failed=0i,rx_queue_2_drops=0i,os2bmc_rx_by_bmc=0i,rx_align_errors=0i,rx_long_length_errors=0i,interface_up=1i,rx_hwtstamp_cleared=0i,rx_flow_control_xoff=0i,speed=1000i,link=1i,duplex=1i,autoneg=1i 1564658080000000000
+ethtool,driver=igb,host=test02,interface=mgmt0 rx_queue_2_bytes=111344804i,tx_queue_3_bytes=70439858i,multicast=83771i,rx_broadcast=24378975i,tx_queue_0_packets=107011i,rx_queue_6_alloc_failed=0i,rx_queue_6_drops=0i,rx_hwtstamp_cleared=0i,tx_window_errors=0i,tx_tcp_seg_good=0i,rx_queue_1_drops=0i,tx_queue_1_restart=0i,rx_queue_7_csum_err=0i,rx_no_buffer_count=0i,tx_queue_1_bytes=39675245i,tx_queue_5_bytes=84899130i,tx_broadcast=9i,rx_queue_1_csum_err=0i,tx_flow_control_xoff=0i,rx_queue_6_csum_err=0i,tx_timeout_count=0i,os2bmc_tx_by_bmc=0i,rx_queue_6_packets=372577i,rx_queue_0_alloc_failed=0i,tx_flow_control_xon=0i,rx_queue_2_drops=0i,tx_queue_2_packets=175206i,rx_queue_3_csum_err=0i,tx_abort_late_coll=0i,tx_queue_5_restart=0i,tx_dropped=0i,rx_queue_2_alloc_failed=0i,tx_multi_coll_ok=0i,rx_queue_1_packets=273865i,rx_flow_control_xon=0i,tx_single_coll_ok=0i,rx_length_errors=0i,rx_queue_7_bytes=35744457i,rx_queue_4_alloc_failed=0i,rx_queue_6_bytes=139488395i,rx_queue_2_csum_err=0i,rx_long_byte_count=2480288216i,rx_queue_1_alloc_failed=0i,tx_queue_0_restart=0i,rx_queue_0_csum_err=0i,tx_queue_2_bytes=22132949i,rx_queue_5_drops=0i,tx_dma_out_of_sync=0i,rx_queue_3_drops=0i,rx_queue_4_packets=212177i,tx_queue_6_restart=0i,rx_packets=25926650i,rx_queue_7_packets=88680i,rx_frame_errors=0i,rx_queue_3_bytes=84326060i,rx_short_length_errors=0i,tx_queue_7_bytes=13777515i,rx_queue_3_alloc_failed=0i,tx_queue_6_packets=183236i,rx_queue_0_drops=0i,rx_multicast=83771i,rx_queue_2_packets=207136i,rx_queue_5_csum_err=0i,rx_queue_5_packets=130535i,rx_queue_7_alloc_failed=0i,tx_smbus=0i,tx_queue_3_packets=161081i,rx_queue_7_drops=0i,tx_queue_2_restart=0i,tx_multicast=7i,tx_fifo_errors=0i,tx_queue_3_restart=0i,rx_long_length_errors=0i,tx_queue_6_bytes=96288614i,tx_queue_1_packets=280786i,tx_tcp_seg_failed=0i,rx_align_errors=0i,tx_errors=0i,rx_crc_errors=0i,rx_queue_0_packets=24529673i,rx_flow_control_xoff=0i,tx_queue_0_bytes=29465801i,rx_over_errors=0i,rx_queue_4_drops=0i,os2bmc_rx_by_bmc=0i,rx_smbus=0i,dropped_smbus=0i,tx_hwtstamp_timeouts=0i,rx_errors=0i,tx_queue_4_packets=109102i,tx_carrier_errors=0i,tx_queue_4_bytes=64810835i,tx_queue_4_restart=0i,rx_queue_4_csum_err=0i,tx_queue_7_packets=66665i,tx_aborted_errors=0i,rx_missed_errors=0i,tx_bytes=427575843i,collisions=0i,rx_queue_1_bytes=246703840i,rx_queue_5_bytes=50732006i,rx_bytes=2480288216i,os2bmc_rx_by_host=0i,rx_queue_5_alloc_failed=0i,rx_queue_3_packets=112007i,tx_deferred_ok=0i,os2bmc_tx_by_host=0i,tx_heartbeat_errors=0i,rx_queue_0_bytes=1506877506i,tx_queue_7_restart=0i,tx_packets=1257057i,rx_queue_4_bytes=201364548i,interface_up=0i,rx_fifo_errors=0i,tx_queue_5_packets=173970i,speed=1000i,link=1i,duplex=1i,autoneg=1i 1564658090000000000
+```
diff --git a/content/telegraf/v1/input-plugins/eventhub_consumer/_index.md b/content/telegraf/v1/input-plugins/eventhub_consumer/_index.md
new file mode 100644
index 000000000..13a8284b1
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/eventhub_consumer/_index.md
@@ -0,0 +1,154 @@
+---
+description: "Telegraf plugin for collecting metrics from Event Hub Consumer"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Event Hub Consumer
+    identifier: input-eventhub_consumer
+tags: [Event Hub Consumer, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Event Hub Consumer Input Plugin
+
+This plugin provides a consumer for use with Azure Event Hubs and Azure IoT Hub.
+
+## IoT Hub Setup
+
+The main focus for development of this plugin is Azure IoT Hub:
+
+1. Create an Azure IoT Hub by following any of the guides provided here: [Azure
+   IoT Hub](https://docs.microsoft.com/en-us/azure/iot-hub/)
+2. Create a device, for example a [simulated Raspberry
+   Pi](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-raspberry-pi-web-simulator-get-started)
+3. The connection string needed for the plugin is located under *Shared access
+   policies*, both the *iothubowner* and *service* policies should work
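
For reference, an Event Hub connection string (whether supplied via `EVENTHUB_CONNECTION_STRING` or the `connection_string` option) follows the standard Azure format; every value below is a placeholder, and `EntityPath` names the Event Hub itself:

```text
Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<policy>;SharedAccessKey=<key>;EntityPath=<eventhub-name>
```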
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Azure Event Hubs service input plugin
+[[inputs.eventhub_consumer]]
+  ## The default behavior is to create a new Event Hub client from environment variables.
+  ## This requires one of the following sets of environment variables to be set:
+  ##
+  ## 1) Expected Environment Variables:
+  ##    - "EVENTHUB_CONNECTION_STRING"
+  ##
+  ## 2) Expected Environment Variables:
+  ##    - "EVENTHUB_NAMESPACE"
+  ##    - "EVENTHUB_NAME"
+  ##    - "EVENTHUB_KEY_NAME"
+  ##    - "EVENTHUB_KEY_VALUE"
+  ##
+  ## 3) Expected Environment Variables:
+  ##    - "EVENTHUB_NAMESPACE"
+  ##    - "EVENTHUB_NAME"
+  ##    - "AZURE_TENANT_ID"
+  ##    - "AZURE_CLIENT_ID"
+  ##    - "AZURE_CLIENT_SECRET"
+  ##
+  ## Uncommenting the option below will create an Event Hub client based solely on the connection string.
+  ## This can either be the associated environment variable or hard coded directly.
+  ## If this option is uncommented, environment variables will be ignored.
+  ## Connection string should contain EventHubName (EntityPath)
+  # connection_string = ""
+
+  ## Set persistence directory to a valid folder to use a file persister instead of an in-memory persister
+  # persistence_dir = ""
+
+  ## Change the default consumer group
+  # consumer_group = ""
+
+  ## By default the event hub receives all messages present on the broker; alternative modes can be set below.
+  ## The timestamp should be in https://github.com/toml-lang/toml#offset-date-time format (RFC 3339).
+  ## The 3 options below only apply if no valid persister is read from memory or file (e.g. first run).
+  # from_timestamp =
+  # latest = true
+
+  ## Set a custom prefetch count for the receiver(s)
+  # prefetch_count = 1000
+
+  ## Add an epoch to the receiver(s)
+  # epoch = 0
+
+  ## Change to set a custom user agent, "telegraf" is used by default
+  # user_agent = "telegraf"
+
+  ## To consume from a specific partition, set the partition_ids option.
+  ## An empty array will result in receiving from all partitions.
+  # partition_ids = ["0","1"]
+
+  ## Max undelivered messages
+  ## This plugin uses tracking metrics, which ensure messages are read to
+  ## outputs before acknowledging them to the original broker to ensure data
+  ## is not lost. This option sets the maximum messages to read from the
+  ## broker that have not been written by an output.
+  ##
+  ## This value needs to be picked with awareness of the agent's
+  ## metric_batch_size value as well. Setting max undelivered messages too high
+  ## can result in a constant stream of data batches to the output, while
+  ## setting it too low may prevent the broker's messages from ever being flushed.
+  # max_undelivered_messages = 1000
+
+  ## Set either option below to true to use a system property as timestamp.
+  ## You have the choice between EnqueuedTime and IoTHubEnqueuedTime.
+  ## It is recommended to use this setting when the data itself has no timestamp.
+  # enqueued_time_as_ts = true
+  # iot_hub_enqueued_time_as_ts = true
+
+  ## Tags or fields to create from keys present in the application property bag.
+  ## These could for example be set by message enrichments in Azure IoT Hub.
+  # application_property_tags = []
+  # application_property_fields = []
+
+  ## Tag or field name to use for metadata
+  ## By default all metadata is disabled
+  # sequence_number_field = "SequenceNumber"
+  # enqueued_time_field = "EnqueuedTime"
+  # offset_field = "Offset"
+  # partition_id_tag = "PartitionID"
+  # partition_key_tag = "PartitionKey"
+  # iot_hub_device_connection_id_tag = "IoTHubDeviceConnectionID"
+  # iot_hub_auth_generation_id_tag = "IoTHubAuthGenerationID"
+  # iot_hub_connection_auth_method_tag = "IoTHubConnectionAuthMethod"
+  # iot_hub_connection_module_id_tag = "IoTHubConnectionModuleID"
+  # iot_hub_enqueued_time_field = "IoTHubEnqueuedTime"
+
+  ## Data format to consume.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  data_format = "influx"
+```
+
+### Environment Variables
+
+[Full documentation of the available environment variables](https://github.com/Azure/azure-event-hubs-go#environment-variables).
+
+[envvar]: https://github.com/Azure/azure-event-hubs-go#environment-variables
+
+## Metrics
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/exec/_index.md b/content/telegraf/v1/input-plugins/exec/_index.md
new file mode 100644
index 000000000..0d10242fd
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/exec/_index.md
@@ -0,0 +1,108 @@
+---
+description: "Telegraf plugin for collecting metrics from Exec"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Exec
+    identifier: input-exec
+tags: [Exec, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Exec Input Plugin
+
+The `exec` plugin executes all the `commands` in parallel on every interval and
+parses metrics from their output using any one of the accepted
+[input data formats](/telegraf/v1/data_formats/input).
+
+This plugin can be used to poll for custom metrics from any source.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from one or more commands that can output to stdout
+[[inputs.exec]]
+  ## Commands array
+  commands = []
+
+  ## Environment variables
+  ## Array of "key=value" pairs to pass as environment variables
+  ## e.g. "KEY=value", "USERNAME=John Doe",
+  ## "LD_LIBRARY_PATH=/opt/custom/lib64:/usr/local/libs"
+  # environment = []
+
+  ## Timeout for each command to complete.
+  # timeout = "5s"
+
+  ## Measurement name suffix
+  ## Used for separating different commands
+  # name_suffix = ""
+
+  ## Ignore Error Code
+  ## If set to true, a non-zero error code is not considered an error and the
+  ## plugin will continue to parse the output.
+  # ignore_error = false
+
+  ## Data format
+  ## By default, exec expects JSON. This was done for historical reasons and is
+  ## different than other inputs that use the influx line protocol. Each data
+  ## format has its own unique set of configuration options, read more about
+  ## them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  # data_format = "json"
+```
+
+Glob patterns in the `commands` option are matched on every run, so adding new
+scripts that match the pattern will cause them to be picked up immediately.
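
For example (the directory below is hypothetical), a glob in the commands array means any matching script is executed each interval, including scripts added while Telegraf is running:

```toml
[[inputs.exec]]
  ## Hypothetical script directory; every matching script is executed in
  ## parallel on each interval.
  commands = ["/opt/telegraf-scripts/collect_*.sh"]
  timeout = "5s"
  data_format = "influx"
```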
+
+## Example
+
+This script produces static values; since no timestamp is specified, the
+values are assigned the current time. Ensure that integer values are suffixed
+with `i` so they parse correctly.
+
+```sh
+#!/bin/sh
+echo 'example,tag1=a,tag2=b i=42i,j=43i,k=44i'
+```
+
+It can be paired with the following configuration and will be run at the
+`interval` of the agent.
+
+```toml
+[[inputs.exec]]
+  commands = ["sh /tmp/test.sh"]
+  timeout = "5s"
+  data_format = "influx"
+```
+
+## Common Issues
+
+### My script works when I run it by hand, but not when Telegraf is running as a service
+
+This may be related to the Telegraf service running as a different user. The
+official packages run Telegraf as the `telegraf` user and group on Linux
+systems.
+
+### With a PowerShell on Windows, the output of the script appears to be truncated
+
+You may need to set a variable in your script to increase the number of columns
+available for output:
+
+```shell
+$host.UI.RawUI.BufferSize = new-object System.Management.Automation.Host.Size(1024,50)
+```
+
+## Metrics
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/execd/_index.md b/content/telegraf/v1/input-plugins/execd/_index.md
new file mode 100644
index 000000000..d8b42c3a3
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/execd/_index.md
@@ -0,0 +1,115 @@
+---
+description: "Telegraf plugin for collecting metrics from Execd"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Execd
+    identifier: input-execd
+tags: [Execd, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Execd Input Plugin
+
+The `execd` plugin runs an external program as a long-running daemon.  The
+program must output metrics in any one of the accepted [Input Data Formats](/telegraf/v1/data_formats/input)
+on the process's STDOUT and is expected to stay running. If you'd instead like
+the process to collect metrics and then exit, check out the [inputs.exec](../exec/README.md)
+plugin.
+
+The `signal` option can be configured to send a signal to the running daemon on
+each collection interval. Use this when you want Telegraf to notify the plugin
+that it's time to run collection. `STDIN` is recommended; it writes a newline
+to the process's STDIN.
+
+STDERR from the process is relayed to Telegraf's logging facilities. By
+default, all messages on `stderr` are logged as errors. However, you can log
+at other levels by prefixing your message with `E!` for error, `W!` for
+warning, `I!` for info, `D!` for debug, or `T!` for trace, followed by a space
+and the actual message. For example, outputting `I! A log message` creates an
+`info` log line in your Telegraf logging output.
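
As a minimal illustration (not one of the shipped examples), a shell daemon for `signal = "STDIN"` might look like this; the metric name and values are made up:

```shell
#!/bin/sh
# Hypothetical execd daemon: with signal = "STDIN", Telegraf writes a newline
# each collection interval; we answer with one metric on STDOUT and an
# info-level log line ("I!" prefix) on STDERR.
while read -r _; do
  echo "I! collecting sample metric" >&2
  echo "example,source=execd value=42i"
done
```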
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Run executable as long-running input plugin
+[[inputs.execd]]
+  ## One program to run as daemon.
+  ## NOTE: process and each argument should each be their own string
+  command = ["telegraf-smartctl", "-d", "/dev/sda"]
+
+  ## Environment variables
+  ## Array of "key=value" pairs to pass as environment variables
+  ## e.g. "KEY=value", "USERNAME=John Doe",
+  ## "LD_LIBRARY_PATH=/opt/custom/lib64:/usr/local/libs"
+  # environment = []
+
+  ## Define how the process is signaled on each collection interval.
+  ## Valid values are:
+  ##   "none"    : Do not signal anything. (Recommended for service inputs)
+  ##               The process must output metrics by itself.
+  ##   "STDIN"   : Send a newline on STDIN. (Recommended for gather inputs)
+  ##   "SIGHUP"  : Send a HUP signal. Not available on Windows. (not recommended)
+  ##   "SIGUSR1" : Send a USR1 signal. Not available on Windows.
+  ##   "SIGUSR2" : Send a USR2 signal. Not available on Windows.
+  # signal = "none"
+
+  ## Delay before the process is restarted after an unexpected termination
+  # restart_delay = "10s"
+
+  ## Buffer size used to read from the command output stream
+  ## Optional parameter. Default is 64KiB, minimum is 16 bytes
+  # buffer_size = "64Kib"
+
+  ## Disable automatic restart of the program and stop if the program exits
+  ## with an error (i.e. non-zero error code)
+  # stop_on_error = false
+
+  ## Data format to consume.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  # data_format = "influx"
+```
+
+## Example
+
+See the examples directory for basic examples in different languages expecting
+various signals from Telegraf:
+
+- Go: Example expects `signal = "SIGHUP"`
+- Python: Example expects `signal = "none"`
+- Ruby: Example expects `signal = "none"`
+- shell: Example expects `signal = "STDIN"`
+
+## Metrics
+
+Varies depending on the user's data.
+
+## Example Output
+
+Varies depending on the user's data.
diff --git a/content/telegraf/v1/input-plugins/fail2ban/_index.md b/content/telegraf/v1/input-plugins/fail2ban/_index.md
new file mode 100644
index 000000000..f943eef32
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/fail2ban/_index.md
@@ -0,0 +1,94 @@
+---
+description: "Telegraf plugin for collecting metrics from Fail2ban"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Fail2ban
+    identifier: input-fail2ban
+tags: [Fail2ban, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Fail2ban Input Plugin
+
+The fail2ban plugin gathers the count of failed and banned IP addresses using
+[fail2ban](https://www.fail2ban.org).
+
+This plugin runs the `fail2ban-client` command, which generally requires root
+access. The required permissions can be acquired as follows:
+
+- Use sudo
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from fail2ban.
+[[inputs.fail2ban]]
+  ## Use sudo to run fail2ban-client
+  # use_sudo = false
+
+  ## Use the given socket instead of the default one
+  # socket = "/var/run/fail2ban/fail2ban.sock"
+```
+
+## Using sudo
+
+Make sure to set `use_sudo = true` in your configuration file.
+
+You will also need to update your sudoers file.  It is recommended to modify a
+file in the `/etc/sudoers.d` directory using `visudo`:
+
+```bash
+sudo visudo -f /etc/sudoers.d/telegraf
+```
+
+Add the following lines to the file. These commands allow the `telegraf` user
+to call `fail2ban-client` without providing a password and disable logging of
+the call in auth.log. Consult `man 8 visudo` and `man 5 sudoers` for details.
+
+```text
+Cmnd_Alias FAIL2BAN = /usr/bin/fail2ban-client status, /usr/bin/fail2ban-client status *
+telegraf  ALL=(root) NOEXEC: NOPASSWD: FAIL2BAN
+Defaults!FAIL2BAN !logfile, !syslog, !pam_session
+```
+
+## Metrics
+
+- fail2ban
+  - tags:
+    - jail
+  - fields:
+    - failed (integer, count)
+    - banned (integer, count)
+
+## Example Output
+
+```text
+fail2ban,jail=sshd failed=5i,banned=2i 1495868667000000000
+```
+
+### Execute the binary directly
+
+```shell
+# fail2ban-client status sshd
+Status for the jail: sshd
+|- Filter
+|  |- Currently failed: 5
+|  |- Total failed:     20
+|  `- File list:        /var/log/secure
+`- Actions
+   |- Currently banned: 2
+   |- Total banned:     10
+   `- Banned IP list:   192.168.0.1 192.168.0.2
+```
diff --git a/content/telegraf/v1/input-plugins/fibaro/_index.md b/content/telegraf/v1/input-plugins/fibaro/_index.md
new file mode 100644
index 000000000..8a009b29a
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/fibaro/_index.md
@@ -0,0 +1,86 @@
+---
+description: "Telegraf plugin for collecting metrics from Fibaro"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Fibaro
+    identifier: input-fibaro
+tags: [Fibaro, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Fibaro Input Plugin
+
+The Fibaro plugin makes HTTP calls to the Fibaro controller API to gather
+values of hooked devices. Those values could be true (1) or false (0) for
+switches, a percentage for dimmers, a temperature, etc.
+
+By default, this plugin supports HC2 devices. To support HC3 devices, please
+use the device type config option.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read devices value(s) from a Fibaro controller
+[[inputs.fibaro]]
+  ## Required Fibaro controller address/hostname.
+  ## Note: at the time of writing this plugin, Fibaro only implemented http - no https available
+  url = "http://<controller>:80"
+
+  ## Required credentials to access the API (http://<controller/api/<component>)
+  username = "<username>"
+  password = "<password>"
+
+  ## Amount of time allowed to complete the HTTP request
+  # timeout = "5s"
+
+  ## Fibaro Device Type
+  ## By default, this plugin will attempt to read using the HC2 API. For HC3
+  ## devices, set this to "HC3"
+  # device_type = "HC2"
+```
+
+## Metrics
+
+- fibaro
+  - tags:
+    - deviceId (device id)
+    - section (section name)
+    - room (room name)
+    - name (device name)
+    - type (device type)
+  - fields:
+    - batteryLevel (float, when available from device)
+    - energy (float, when available from device)
+    - power (float, when available from device)
+    - value (float)
+    - value2 (float, when available from device)
+
+## Example Output
+
+```text
+fibaro,deviceId=9,host=vm1,name=Fenêtre\ haute,room=Cuisine,section=Cuisine,type=com.fibaro.FGRM222 energy=2.04,power=0.7,value=99,value2=99 1529996807000000000
+fibaro,deviceId=10,host=vm1,name=Escaliers,room=Dégagement,section=Pièces\ communes,type=com.fibaro.binarySwitch value=0 1529996807000000000
+fibaro,deviceId=13,host=vm1,name=Porte\ fenêtre,room=Salon,section=Pièces\ communes,type=com.fibaro.FGRM222 energy=4.33,power=0.7,value=99,value2=99 1529996807000000000
+fibaro,deviceId=21,host=vm1,name=LED\ îlot\ central,room=Cuisine,section=Cuisine,type=com.fibaro.binarySwitch value=0 1529996807000000000
+fibaro,deviceId=90,host=vm1,name=Détérioration,room=Entrée,section=Pièces\ communes,type=com.fibaro.heatDetector value=0 1529996807000000000
+fibaro,deviceId=163,host=vm1,name=Température,room=Cave,section=Cave,type=com.fibaro.temperatureSensor value=21.62 1529996807000000000
+fibaro,deviceId=191,host=vm1,name=Présence,room=Garde-manger,section=Cuisine,type=com.fibaro.FGMS001 value=1 1529996807000000000
+fibaro,deviceId=193,host=vm1,name=Luminosité,room=Garde-manger,section=Cuisine,type=com.fibaro.lightSensor value=195 1529996807000000000
+fibaro,deviceId=200,host=vm1,name=Etat,room=Garage,section=Extérieur,type=com.fibaro.doorSensor value=0 1529996807000000000
+fibaro,deviceId=220,host=vm1,name=CO2\ (ppm),room=Salon,section=Pièces\ communes,type=com.fibaro.multilevelSensor value=536 1529996807000000000
+fibaro,deviceId=221,host=vm1,name=Humidité\ (%),room=Salon,section=Pièces\ communes,type=com.fibaro.humiditySensor value=61 1529996807000000000
+fibaro,deviceId=222,host=vm1,name=Pression\ (mb),room=Salon,section=Pièces\ communes,type=com.fibaro.multilevelSensor value=1013.7 1529996807000000000
+fibaro,deviceId=223,host=vm1,name=Bruit\ (db),room=Salon,section=Pièces\ communes,type=com.fibaro.multilevelSensor value=44 1529996807000000000
+fibaro,deviceId=248,host=vm1,name=Température,room=Garage,section=Extérieur,type=com.fibaro.temperatureSensor batteryLevel=85,value=10.8 1529996807000000000
+```
diff --git a/content/telegraf/v1/input-plugins/file/_index.md b/content/telegraf/v1/input-plugins/file/_index.md
new file mode 100644
index 000000000..92a1585c4
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/file/_index.md
@@ -0,0 +1,75 @@
+---
+description: "Telegraf plugin for collecting metrics from File"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: File
+    identifier: input-file
+tags: [File, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# File Input Plugin
+
+The file plugin parses the **complete** contents of a file **every interval**
+using the selected [input data format](/telegraf/v1/data_formats/input).
+
+**Note:** If you wish to parse only newly appended lines, use the [tail](/telegraf/v1/plugins/#input-tail) input
+plugin instead.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Parse a complete file each interval
+[[inputs.file]]
+  ## Files to parse each interval.  Accept standard unix glob matching rules,
+  ## as well as ** to match recursive files and directories.
+  files = ["/tmp/metrics.out"]
+
+  ## Character encoding to use when interpreting the file contents.  Invalid
+  ## characters are replaced using the unicode replacement character.  When set
+  ## to the empty string the data is not decoded to text.
+  ##   ex: character_encoding = "utf-8"
+  ##       character_encoding = "utf-16le"
+  ##       character_encoding = "utf-16be"
+  ##       character_encoding = ""
+  # character_encoding = ""
+
+  ## Data format to consume.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  data_format = "influx"
+
+  ## Please use caution when using the following options: when file name
+  ## variation is high, this can increase the cardinality significantly. Read
+  ## more about cardinality here:
+  ## https://docs.influxdata.com/influxdb/cloud/reference/glossary/#series-cardinality
+
+  ## Name of tag to store the name of the file. Disabled if not set.
+  # file_tag = ""
+
+  ## Name of tag to store the absolute path and name of the file. Disabled if
+  ## not set.
+  # file_path_tag = ""
+```
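
To try the sample config above, you can hand-write a point in line protocol to the configured path; the measurement and field here are arbitrary:

```shell
# Write one line-protocol point to the path listed in `files`; the file
# plugin re-reads the complete file every interval.
echo 'example,source=file value=1i' > /tmp/metrics.out
cat /tmp/metrics.out
```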
+
+## Metrics
+
+The format of metrics produced by this plugin depends on the content and data
+format of the file.
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/filecount/_index.md b/content/telegraf/v1/input-plugins/filecount/_index.md
new file mode 100644
index 000000000..46acb4201
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/filecount/_index.md
@@ -0,0 +1,79 @@
+---
+description: "Telegraf plugin for collecting metrics from Filecount"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Filecount
+    identifier: input-filecount
+tags: [Filecount, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Filecount Input Plugin
+
+Reports the number and total size of files in specified directories.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Count files in a directory
+[[inputs.filecount]]
+  ## Directories to gather stats about.
+  ## These accept standard unix glob matching rules, but with the addition of
+  ## ** as a "super asterisk", i.e.:
+  ##   /var/log/**    -> recursively find all directories in /var/log and count files in each directory
+  ##   /var/log/*/*   -> find all directories with a parent dir in /var/log and count files in each directory
+  ##   /var/log       -> count all files in /var/log and all of its subdirectories
+  directories = ["/var/cache/apt", "/tmp"]
+
+  ## Only count files that match the name pattern. Defaults to "*".
+  name = "*"
+
+  ## Count files in subdirectories. Defaults to true.
+  recursive = true
+
+  ## Only count regular files. Defaults to true.
+  regular_only = true
+
+  ## Follow all symlinks while walking the directory tree. Defaults to false.
+  follow_symlinks = false
+
+  ## Only count files that are at least this size. If size is
+  ## a negative number, only count files that are smaller than the
+  ## absolute value of size. Acceptable units are B, KiB, MiB, KB, ...
+  ## Without quotes and units, interpreted as size in bytes.
+  size = "0B"
+
+  ## Only count files that have not been touched for at least this
+  ## duration. If mtime is negative, only count files that have been
+  ## touched in this duration. Defaults to "0s".
+  mtime = "0s"
+```
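
As a rough cross-check of what the plugin reports, here is a shell approximation of the `count` and `size_bytes` fields for a single directory (non-recursive, regular files only; the directory and file names are made up for illustration):

```shell
# Create a throwaway directory with two small files, then count regular
# files and sum their sizes in bytes, analogous to count and size_bytes.
dir=$(mktemp -d)
printf 'abc' > "$dir/a.txt"
printf 'de'  > "$dir/b.txt"
count=$(find "$dir" -maxdepth 1 -type f | wc -l)
size_bytes=$(find "$dir" -maxdepth 1 -type f -exec cat {} + | wc -c)
echo "count=$count size_bytes=$size_bytes"
rm -r "$dir"
```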
+
+## Metrics
+
+- filecount
+  - tags:
+    - directory (the directory path)
+  - fields:
+    - count (integer)
+    - size_bytes (integer)
+    - oldest_file_timestamp (int, unix time nanoseconds)
+    - newest_file_timestamp (int, unix time nanoseconds)
+
+## Example Output
+
+```text
+filecount,directory=/var/cache/apt count=7i,size_bytes=7438336i,oldest_file_timestamp=1507152973123456789i,newest_file_timestamp=1507152973123456789i 1530034445000000000
+filecount,directory=/tmp count=17i,size_bytes=28934786i,oldest_file_timestamp=1507152973123456789i,newest_file_timestamp=1507152973123456789i 1530034445000000000
+```
diff --git a/content/telegraf/v1/input-plugins/filestat/_index.md b/content/telegraf/v1/input-plugins/filestat/_index.md
new file mode 100644
index 000000000..d24777748
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/filestat/_index.md
@@ -0,0 +1,60 @@
+---
+description: "Telegraf plugin for collecting metrics from Filestat"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Filestat
+    identifier: input-filestat
+tags: [Filestat, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Filestat Input Plugin
+
+The filestat plugin gathers metrics about file existence, size, and other stats.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read stats about given file(s)
+[[inputs.filestat]]
+  ## Files to gather stats about.
+  ## These accept standard unix glob matching rules, but with the addition of
+  ## ** as a "super asterisk". See https://github.com/gobwas/glob.
+  files = ["/etc/telegraf/telegraf.conf", "/var/log/**.log"]
+
+  ## If true, read the entire file and calculate an md5 checksum.
+  md5 = false
+```
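
If you enable `md5 = true`, you can cross-check the reported checksum by hand. This sketch assumes GNU coreutils `md5sum` (on macOS, `md5 -q` is the equivalent) and uses a throwaway temp file:

```shell
# Compute the md5 of a file's full contents, as the plugin's md5 field does.
f=$(mktemp)
printf 'hello' > "$f"
md5sum "$f" | cut -d ' ' -f 1
rm -f "$f"
```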
+
+## Metrics
+
+### Measurements & Fields
+
+- filestat
+  - exists (int, 0 | 1)
+  - size_bytes (int, bytes)
+  - modification_time (int, unix time nanoseconds)
+  - md5 (optional, string)
+
+### Tags
+
+- All measurements have the following tags:
+  - file (the path to the file, as specified in the config)
+
+## Example Output
+
+```text
+filestat,file=/tmp/foo/bar,host=tyrion exists=0i 1507218518192154351
+filestat,file=/Users/sparrc/ws/telegraf.conf,host=tyrion exists=1i,size=47894i,modification_time=1507152973123456789i  1507218518192154351
+```
diff --git a/content/telegraf/v1/input-plugins/fireboard/_index.md b/content/telegraf/v1/input-plugins/fireboard/_index.md
new file mode 100644
index 000000000..1c73ef653
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/fireboard/_index.md
@@ -0,0 +1,82 @@
+---
+description: "Telegraf plugin for collecting metrics from Fireboard"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Fireboard
+    identifier: input-fireboard
+tags: [Fireboard, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Fireboard Input Plugin
+
+The fireboard plugin gathers real-time temperature data from Fireboard
+thermometers.  In order to use this input plugin, you'll need to sign up to use
+the [Fireboard REST API](https://docs.fireboard.io/reference/restapi.html).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read real time temps from fireboard.io servers
+[[inputs.fireboard]]
+  ## Specify auth token for your account
+  auth_token = "invalidAuthToken"
+  ## You can override the fireboard server URL if necessary
+  # url = https://fireboard.io/api/v1/devices.json
+  ## You can set a different http_timeout if you need to
+  ## You should set a string using a number and a time indicator,
+  ## for example "12s" for 12 seconds.
+  # http_timeout = "4s"
+```
+
+### auth_token
+
+In lieu of requiring a username and password, this plugin requires an
+authentication token that you can generate using the [Fireboard REST
+API](https://docs.fireboard.io/reference/restapi.html#Authentication).
+
+### url
+
+While there should be no reason to override the URL, the option is available
+in case Fireboard changes their site, etc.
+
+### http_timeout
+
+If you need to increase the HTTP timeout, you can do so here. You can set this
+value in seconds. The default value is four (4) seconds.
+
+## Metrics
+
+The Fireboard REST API docs have good examples of the data that is available;
+currently this input only returns the real-time temperatures. Temperature
+values are included if they are less than a minute old.
+
+- fireboard
+  - tags:
+    - channel
+    - scale (Celsius; Fahrenheit)
+    - title (name of the Fireboard)
+    - uuid (UUID of the Fireboard)
+  - fields:
+    - temperature (float, unit)
+
+## Example Output
+
+This section shows example output in Line Protocol format.  You can often use
+`telegraf --input-filter <plugin-name> --test` or use the `file` output to get
+this information.
+
+```text
+fireboard,channel=2,host=patas-mbp,scale=Fahrenheit,title=telegraf-FireBoard,uuid=b55e766c-b308-49b5-93a4-df89fe31efd0 temperature=78.2 1561690040000000000
+```
diff --git a/content/telegraf/v1/input-plugins/fluentd/_index.md b/content/telegraf/v1/input-plugins/fluentd/_index.md
new file mode 100644
index 000000000..8cabea962
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/fluentd/_index.md
@@ -0,0 +1,104 @@
+---
+description: "Telegraf plugin for collecting metrics from Fluentd"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Fluentd
+    identifier: input-fluentd
+tags: [Fluentd, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Fluentd Input Plugin
+
+The fluentd plugin gathers metrics from the plugin endpoint provided by the
+[in_monitor plugin](https://docs.fluentd.org/input/monitor_agent). This plugin
+understands data provided by the /api/plugins.json resource (/api/config.json
+is not covered).
+
+You might need to adjust your fluentd configuration to reduce series
+cardinality if your fluentd restarts frequently: every time fluentd starts,
+the `plugin_id` value is given a new random value. According to the [fluentd
+documentation](https://docs.fluentd.org/configuration/config-file#common-plugin-parameter),
+you can add the `@id` parameter to each plugin to avoid this behaviour and
+define a custom `plugin_id`.
+
+Example configuration with the `@id` parameter for the http plugin:
+
+```text
+<source>
+  @type http
+  @id http
+  port 8888
+</source>
+```
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics exposed by fluentd in_monitor plugin
+[[inputs.fluentd]]
+  ## This plugin reads information exposed by fluentd (using /api/plugins.json endpoint).
+  ##
+  ## Endpoint:
+  ## - only one URI is allowed
+  ## - https is not supported
+  endpoint = "http://localhost:24220/api/plugins.json"
+
+  ## Define which plugins have to be excluded (based on "type" field - e.g. monitor_agent)
+  exclude = [
+    "monitor_agent",
+    "dummy",
+  ]
+```
+
+## Metrics
+
+### Measurements & Fields
+
+Fields may vary depending on the plugin type.
+
+- fluentd
+  - retry_count              (float, unit)
+  - buffer_queue_length      (float, unit)
+  - buffer_total_queued_size (float, unit)
+  - rollback_count           (float, unit)
+  - flush_time_count         (float, unit)
+  - slow_flush_count         (float, unit)
+  - emit_count               (float, unit)
+  - emit_records             (float, unit)
+  - emit_size                (float, unit)
+  - write_count              (float, unit)
+  - buffer_stage_length      (float, unit)
+  - buffer_queue_byte_size   (float, unit)
+  - buffer_stage_byte_size   (float, unit)
+  - buffer_available_buffer_space_ratios (float, unit)
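+As an illustration of the mapping (a Python sketch, not the plugin's actual Go
+code), numeric values from a `/api/plugins.json` entry become float fields
+while the identifiers become tags; the entry below is a hypothetical example:
+
+```python
+# Illustrative only: mimic how one /api/plugins.json entry becomes a
+# Telegraf metric (numeric values -> fields, identifiers -> tags).
+sample_entry = {
+    "plugin_id": "object:8da98c",
+    "plugin_category": "input",
+    "type": "dummy",
+    "retry_count": 0,
+    "buffer_queue_length": 0,
+    "buffer_total_queued_size": 0,
+}
+
+def to_metric(entry):
+    tags = {
+        "plugin_id": entry["plugin_id"],
+        "plugin_category": entry["plugin_category"],
+        "plugin_type": entry["type"],
+    }
+    # Every numeric value becomes a float field; strings stay out.
+    fields = {
+        k: float(v)
+        for k, v in entry.items()
+        if isinstance(v, (int, float)) and not isinstance(v, bool)
+    }
+    return {"name": "fluentd", "tags": tags, "fields": fields}
+
+metric = to_metric(sample_entry)
+```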
+
+### Tags
+
+- All measurements have the following tags:
+  - plugin_id        (unique plugin id)
+  - plugin_type      (type of the plugin e.g. s3)
+  - plugin_category  (plugin category e.g. output)
+
+## Example Output
+
+```text
+fluentd,host=T440s,plugin_id=object:9f748c,plugin_category=input,plugin_type=dummy buffer_total_queued_size=0,buffer_queue_length=0,retry_count=0 1492006105000000000
+fluentd,plugin_category=input,plugin_type=dummy,host=T440s,plugin_id=object:8da98c buffer_queue_length=0,retry_count=0,buffer_total_queued_size=0 1492006105000000000
+fluentd,plugin_id=object:820190,plugin_category=input,plugin_type=monitor_agent,host=T440s retry_count=0,buffer_total_queued_size=0,buffer_queue_length=0 1492006105000000000
+fluentd,plugin_id=object:c5e054,plugin_category=output,plugin_type=stdout,host=T440s buffer_queue_length=0,retry_count=0,buffer_total_queued_size=0 1492006105000000000
+fluentd,plugin_type=s3,host=T440s,plugin_id=object:bd7a90,plugin_category=output buffer_queue_length=0,retry_count=0,buffer_total_queued_size=0 1492006105000000000
+fluentd,plugin_id=output_td,plugin_category=output,plugin_type=tdlog,host=T440s buffer_available_buffer_space_ratios=100,buffer_queue_byte_size=0,buffer_queue_length=0,buffer_stage_byte_size=0,buffer_stage_length=0,buffer_total_queued_size=0,emit_count=0,emit_records=0,flush_time_count=0,retry_count=0,rollback_count=0,slow_flush_count=0,write_count=0 1651474085000000000
+```
diff --git a/content/telegraf/v1/input-plugins/github/_index.md b/content/telegraf/v1/input-plugins/github/_index.md
new file mode 100644
index 000000000..251b8d25e
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/github/_index.md
@@ -0,0 +1,105 @@
+---
+description: "Telegraf plugin for collecting metrics from GitHub"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: GitHub
+    identifier: input-github
+tags: [GitHub, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# GitHub Input Plugin
+
+Gather repository information from [GitHub](https://www.github.com) hosted repositories.
+
+**Note:** Telegraf also contains the [webhook](/telegraf/v1/plugins/#input-github) input which can be used as an
+alternative method for collecting repository information.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Gather repository information from GitHub hosted repositories.
+[[inputs.github]]
+  ## List of repositories to monitor
+  repositories = [
+    "influxdata/telegraf",
+    "influxdata/influxdb"
+  ]
+
+  ## Github API access token.  Unauthenticated requests are limited to 60 per hour.
+  # access_token = ""
+
+  ## Github API enterprise url. Github Enterprise accounts must specify their base url.
+  # enterprise_base_url = ""
+
+  ## Timeout for HTTP requests.
+  # http_timeout = "5s"
+
+  ## List of additional fields to query.
+  ## NOTE: Getting those fields might involve issuing additional API-calls, so please
+  ##       make sure you do not exceed the rate-limit of GitHub.
+  ##
+  ## Available fields are:
+  ##  - pull-requests -- number of open and closed pull requests (2 API-calls per repository)
+  # additional_fields = []
+```
+
+## Metrics
+
+- github_repository
+  - tags:
+    - name - The repository name
+    - owner - The owner of the repository
+    - language - The primary language of the repository
+    - license - The license set for the repository
+  - fields:
+    - forks (int)
+    - open_issues (int)
+    - networks (int)
+    - size (int)
+    - subscribers (int)
+    - stars (int)
+    - watchers (int)
+
+When the [internal](/telegraf/v1/plugins/#input-internal) input is enabled:
+
+- internal_github
+  - tags:
+    - access_token - An obfuscated reference to the configured access token or "Unauthenticated"
+  - fields:
+    - limit - How many requests you are limited to (per hour)
+    - remaining - How many requests you have remaining (per hour)
+    - blocks - How many requests have been blocked due to rate limit
+
+When specifying `additional_fields`, the plugin will collect the listed
+properties. **NOTE:** Querying these additional fields might require additional
+API calls. Please make sure you don't exceed the GitHub rate limit by
+specifying too many additional fields. The available options, the API calls
+they require, and the resulting fields are listed below.
+
+- "pull-requests" (2 API-calls per repository)
+  - fields:
+    - open_pull_requests (int)
+    - closed_pull_requests (int)
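+For example, to collect the pull-request counts at the cost of two extra API
+calls per repository (the token value is a placeholder):
+
+```toml
+[[inputs.github]]
+  repositories = ["influxdata/telegraf"]
+  ## Authenticated requests get a much higher rate limit than 60 per hour.
+  access_token = "my-access-token"
+  additional_fields = ["pull-requests"]
+```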
+
+## Example Output
+
+```text
+github_repository,language=Go,license=MIT\ License,name=telegraf,owner=influxdata forks=2679i,networks=2679i,open_issues=794i,size=23263i,stars=7091i,subscribers=316i,watchers=7091i 1563901372000000000
+internal_github,access_token=Unauthenticated closed_pull_requests=3522i,rate_limit_remaining=59i,rate_limit_limit=60i,rate_limit_blocks=0i,open_pull_requests=260i 1552653551000000000
+```
diff --git a/content/telegraf/v1/input-plugins/gnmi/_index.md b/content/telegraf/v1/input-plugins/gnmi/_index.md
new file mode 100644
index 000000000..7415416a7
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/gnmi/_index.md
@@ -0,0 +1,276 @@
+---
+description: "Telegraf plugin for collecting metrics from gNMI (gRPC Network Management Interface)"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: gNMI (gRPC Network Management Interface)
+    identifier: input-gnmi
+tags: [gNMI (gRPC Network Management Interface), "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# gNMI (gRPC Network Management Interface) Input Plugin
+
+This plugin consumes telemetry data based on the [gNMI](https://github.com/openconfig/reference/blob/master/rpc/gnmi/gnmi-specification.md) Subscribe method. TLS
+is supported for authentication and encryption.  This input plugin is
+vendor-agnostic and is supported on any platform that supports the gNMI spec.
+
+For Cisco devices, the plugin has been optimized to support gNMI telemetry as
+produced by Cisco IOS XR (64-bit) version 6.5.1, Cisco NX-OS 9.3, and Cisco
+IOS XE 16.12 and later.
+
+Please check the troubleshooting section in case of
+problems, e.g. when getting an *empty metric-name warning*!
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `username` and
+`password` options. See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more
+details on how to use them.
+
+## Configuration
+
+```toml @sample.conf
+# gNMI telemetry input plugin
+[[inputs.gnmi]]
+  ## Address and port of the gNMI GRPC server
+  addresses = ["10.49.234.114:57777"]
+
+  ## define credentials
+  username = "cisco"
+  password = "cisco"
+
+  ## gNMI encoding requested (one of: "proto", "json", "json_ietf", "bytes")
+  # encoding = "proto"
+
+  ## redial in case of failures after
+  # redial = "10s"
+
+  ## gRPC Keepalive settings
+  ## See https://pkg.go.dev/google.golang.org/grpc/keepalive
+  ## The client will ping the server to see if the transport is still alive if it has
+  ## not seen any activity for the given time.
+  ## If not set, none of the keep-alive settings (including those below) will be applied.
+  ## If set below 10 seconds, the gRPC library will apply a minimum value of 10s instead.
+  # keepalive_time = ""
+
+  ## Timeout for seeing any activity after the keep-alive probe was
+  ## sent. If no activity is seen the connection is closed.
+  # keepalive_timeout = ""
+
+  ## gRPC Maximum Message Size
+  # max_msg_size = "4MB"
+
+  ## Enable to get the canonical path as field-name
+  # canonical_field_names = false
+
+  ## Remove leading slashes and dots in field-name
+  # trim_field_names = false
+
+  ## Guess the path-tag if an update does not contain a prefix-path
+  ## Supported values are
+  ##   none         -- do not add a 'path' tag
+  ##   common path  -- use the common path elements of all fields in an update
+  ##   subscription -- use the subscription path
+  # path_guessing_strategy = "none"
+
+  ## Prefix tags from path keys with the path element
+  # prefix_tag_key_with_path = false
+
+  ## Optional client-side TLS to authenticate the device
+  ## Set to true/false to enforce TLS being enabled/disabled. If not set,
+  ## enable TLS only if any of the other options are specified.
+  # tls_enable =
+  ## Trusted root certificates for server
+  # tls_ca = "/path/to/cafile"
+  ## Used for TLS client certificate authentication
+  # tls_cert = "/path/to/certfile"
+  ## Used for TLS client certificate authentication
+  # tls_key = "/path/to/keyfile"
+  ## Password for the key file if it is encrypted
+  # tls_key_pwd = ""
+  ## Send the specified TLS server name via SNI
+  # tls_server_name = "kubernetes.example.com"
+  ## Minimal TLS version to accept by the client
+  # tls_min_version = "TLS12"
+  ## List of ciphers to accept, by default all secure ciphers will be accepted
+  ## See https://pkg.go.dev/crypto/tls#pkg-constants for supported values.
+  ## Use "all", "secure" and "insecure" to add all supported ciphers, secure
+  ## suites or insecure suites respectively.
+  # tls_cipher_suites = ["secure"]
+  ## Renegotiation method, "never", "once" or "freely"
+  # tls_renegotiation_method = "never"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## gNMI subscription prefix (optional, can usually be left empty)
+  ## See: https://github.com/openconfig/reference/blob/master/rpc/gnmi/gnmi-specification.md#222-paths
+  # origin = ""
+  # prefix = ""
+  # target = ""
+
+  ## Vendor specific options
+  ## This defines what vendor specific options to load.
+  ## * Juniper Header Extension (juniper_header): some sensors are directly managed by
+  ##   Linecard, which adds the Juniper GNMI Header Extension. Enabling this
+  ##   allows the decoding of the Extension header if present. Currently this knob
+  ##   adds component, component_id & sub_component_id as additional tags
+  # vendor_specific = []
+
+  ## YANG model paths for decoding IETF JSON payloads
+  ## Model files are loaded recursively from the given directories. Disabled if
+  ## no models are specified.
+  # yang_model_paths = []
+
+  ## Define additional aliases to map encoding paths to measurement names
+  # [inputs.gnmi.aliases]
+  #   ifcounters = "openconfig:/interfaces/interface/state/counters"
+
+  [[inputs.gnmi.subscription]]
+    ## Name of the measurement that will be emitted
+    name = "ifcounters"
+
+    ## Origin and path of the subscription
+    ## See: https://github.com/openconfig/reference/blob/master/rpc/gnmi/gnmi-specification.md#222-paths
+    ##
+    ## origin usually refers to a (YANG) data model implemented by the device
+    ## and path to a specific substructure inside it that should be subscribed
+    ## to (similar to an XPath). YANG models can be found e.g. here:
+    ## https://github.com/YangModels/yang/tree/master/vendor/cisco/xr
+    origin = "openconfig-interfaces"
+    path = "/interfaces/interface/state/counters"
+
+    ## Subscription mode ("target_defined", "sample", "on_change") and interval
+    subscription_mode = "sample"
+    sample_interval = "10s"
+
+    ## Suppress redundant transmissions when measured values are unchanged
+    # suppress_redundant = false
+
+    ## If suppression is enabled, send updates at least every X seconds anyway
+    # heartbeat_interval = "60s"
+
+  ## Tag subscriptions are applied as tags to other subscriptions.
+  # [[inputs.gnmi.tag_subscription]]
+  #  ## When applying this value as a tag to other metrics, use this tag name
+  #  name = "descr"
+  #
+  #  ## All other subscription fields are as normal
+  #  origin = "openconfig-interfaces"
+  #  path = "/interfaces/interface/state"
+  #  subscription_mode = "on_change"
+  #
+  #  ## Match strategy to use for the tag.
+  #  ## Tags are only applied for metrics of the same address. The following
+  #  ## settings are valid:
+  #  ##   unconditional -- always match
+  #  ##   name          -- match by the "name" key
+  #  ##                    This resembles the previous 'tag-only' behavior.
+  #  ##   elements      -- match by the keys in the path filtered by the path
+  #  ##                    parts specified `elements` below
+  #  ## By default, 'elements' is used if the 'elements' option is provided,
+  #  ## otherwise match by 'name'.
+  #  # match = ""
+  #
+  #  ## For the 'elements' match strategy, at least one path-element name must
+  #  ## be supplied containing at least one key to match on. Multiple path
+  #  ## elements can be specified in any order. All given keys must be equal
+  #  ## for a match.
+  #  # elements = ["description", "interface"]
+```
+
+## Metrics
+
+Each configured subscription will emit a different measurement. Each leaf in a
+gNMI SubscribeResponse Update message will produce a field reading in the
+measurement. gNMI PathElement keys for leaves will attach tags to the field(s).
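+As a simplified illustration of how PathElement keys become tags (a Python
+sketch, not the plugin's implementation; it assumes key values contain no
+slashes or brackets, which real device paths may violate):
+
+```python
+import re
+
+# Illustrative only: split a gNMI-style string path into elements and
+# their [key=value] pairs. The keys on an element are the information
+# the plugin attaches to metrics as tags.
+def parse_gnmi_path(path):
+    elements = []
+    for name, keys_raw in re.findall(r"([^/\[]+)((?:\[[^\]]+\])*)", path):
+        keys = dict(re.findall(r"\[([^=\]]+)=([^\]]+)\]", keys_raw))
+        elements.append((name, keys))
+    return elements
+
+elems = parse_gnmi_path("interfaces/interface[name=eth0]/state/counters")
+```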
+
+## Example Output
+
+```text
+ifcounters,path=openconfig-interfaces:/interfaces/interface/state/counters,host=linux,name=MgmtEth0/RP0/CPU0/0,source=10.49.234.115,descr/description=Foo in-multicast-pkts=0i,out-multicast-pkts=0i,out-errors=0i,out-discards=0i,in-broadcast-pkts=0i,out-broadcast-pkts=0i,in-discards=0i,in-unknown-protos=0i,in-errors=0i,out-unicast-pkts=0i,in-octets=0i,out-octets=0i,last-clear="2019-05-22T16:53:21Z",in-unicast-pkts=0i 1559145777425000000
+ifcounters,path=openconfig-interfaces:/interfaces/interface/state/counters,host=linux,name=GigabitEthernet0/0/0/0,source=10.49.234.115,descr/description=Bar out-multicast-pkts=0i,out-broadcast-pkts=0i,in-errors=0i,out-errors=0i,in-discards=0i,out-octets=0i,in-unknown-protos=0i,in-unicast-pkts=0i,in-octets=0i,in-multicast-pkts=0i,in-broadcast-pkts=0i,last-clear="2019-05-22T16:54:50Z",out-unicast-pkts=0i,out-discards=0i 1559145777425000000
+```
+
+## Troubleshooting
+
+### Empty metric-name warning
+
+Some devices (e.g. Juniper) report spurious data with response paths not
+corresponding to any subscription. In those cases, Telegraf will not be able
+to determine the metric name for the response and you will get an
+*empty metric-name warning*.
+
+For example, you might subscribe to `/junos/system/linecard/cpu/memory` but the
+corresponding response arrives with the path
+`/components/component/properties/property/...`. To avoid this issue, you can
+manually map the response to a metric name using the `aliases` option:
+
+```toml
+[[inputs.gnmi]]
+  addresses     = ["..."]
+
+  [inputs.gnmi.aliases]
+    memory = "/components"
+
+  [[inputs.gnmi.subscription]]
+    name = "memory"
+    origin = "openconfig"
+    path = "/junos/system/linecard/cpu/memory"
+    subscription_mode = "sample"
+    sample_interval = "60s"
+```
+
+If this does *not* solve the issue, please follow the warning instructions and
+open an issue with the response, your configuration and the metric you expect.
+
+### Missing `path` tag
+
+Some devices (e.g. Arista) omit the prefix and specify the path in the update
+if there is only one value reported. This leads to a missing `path` tag for
+the resulting metrics. In those cases you should set `path_guessing_strategy`
+to `subscription` to use the subscription path as `path` tag.
+
+Other devices might omit the prefix in updates altogether. Here setting
+`path_guessing_strategy` to `common path` can help to infer the `path` tag by
+using the part of the path that is common to all values in the update.
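+A minimal configuration sketch for the cases described above (the address is a
+placeholder):
+
+```toml
+[[inputs.gnmi]]
+  addresses = ["device.example.com:6030"]
+  ## Use "common path" instead for devices that omit the prefix entirely.
+  path_guessing_strategy = "subscription"
+```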
+
+### TLS handshake failure
+
+When receiving an error like
+
+```text
+2024-01-01T00:00:00Z E! [inputs.gnmi] Error in plugin: failed to setup subscription: rpc error: code = Unavailable desc = connection error: desc = "transport: authentication handshake failed: remote error: tls: handshake failure"
+```
+
+this might be due to insecure TLS configurations in the gNMI server. Please
+check the minimum TLS version provided by the server as well as the cipher
+suites used. You might want to use the `tls_min_version` or
+`tls_cipher_suites` settings to work around the issue. Please be careful not
+to undermine the security of the connection between the plugin and the device!
diff --git a/content/telegraf/v1/input-plugins/google_cloud_storage/_index.md b/content/telegraf/v1/input-plugins/google_cloud_storage/_index.md
new file mode 100644
index 000000000..32c65f277
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/google_cloud_storage/_index.md
@@ -0,0 +1,85 @@
+---
+description: "Telegraf plugin for collecting metrics from Google Cloud Storage"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Google Cloud Storage
+    identifier: input-google_cloud_storage
+tags: [Google Cloud Storage, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Google Cloud Storage Input Plugin
+
+The Google Cloud Storage plugin collects metrics from the given Google Cloud
+Storage bucket.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Gather metrics by iterating the files located on a Cloud Storage Bucket.
+[[inputs.google_cloud_storage]]
+  ## Required. Name of Cloud Storage bucket to ingest metrics from.
+  bucket = "my-bucket"
+
+  ## Optional. Prefix of Cloud Storage bucket keys to list metrics from.
+  # key_prefix = "my-bucket"
+
+  ## Key that will store the offsets in order to pick up where the ingestion left off.
+  offset_key = "offset_key"
+
+  ## Number of objects to process per iteration.
+  objects_per_iteration = 10
+
+  ## Required. Data format to consume.
+  ## Each data format has its own unique set of configuration options.
+  ## Read more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  data_format = "influx"
+
+  ## Optional. Filepath for GCP credentials JSON file to authorize calls to
+  ## Google Cloud Storage APIs. If not set explicitly, Telegraf will attempt to use
+  ## Application Default Credentials, which is preferred.
+  # credentials_file = "path/to/my/creds.json"
+```
+
+## Metrics
+
+Measurements are read from objects in the bucket using the configured
+`data_format`.
+
+For example, when the data format is json, an object can look like:
+
+```json
+{
+  "metrics": [
+    {
+      "fields": {
+        "cosine": 10,
+        "sine": -1.0975806427415925e-12
+      },
+      "name": "cpu",
+      "tags": {
+        "datacenter": "us-east-1",
+        "host": "localhost"
+      },
+      "timestamp": 1604148850990
+    }
+  ]
+}
+```
+
+## Example Output
+
+```text
+google_cloud_storage,datacenter=us-east-1,host=localhost cosine=10,sine=-1.0975806427415925e-12 1604148850990000000
+```
diff --git a/content/telegraf/v1/input-plugins/graylog/_index.md b/content/telegraf/v1/input-plugins/graylog/_index.md
new file mode 100644
index 000000000..f643af5aa
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/graylog/_index.md
@@ -0,0 +1,85 @@
+---
+description: "Telegraf plugin for collecting metrics from GrayLog"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: GrayLog
+    identifier: input-graylog
+tags: [GrayLog, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# GrayLog Input Plugin
+
+The Graylog plugin can collect data from remote Graylog service URLs.
+
+The plugin currently supports two types of endpoints:
+
+- multiple  (e.g. `http://[graylog-server-ip]:9000/api/system/metrics/multiple`)
+- namespace (e.g. `http://[graylog-server-ip]:9000/api/system/metrics/namespace/{namespace}`)
+
+The `servers` list can mix one multiple endpoint with several namespace
+endpoints.
+
+Note: if a namespace endpoint is specified, the metrics array will be ignored
+for that call.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read flattened metrics from one or more GrayLog HTTP endpoints
+[[inputs.graylog]]
+  ## API endpoint, currently supported API:
+  ##
+  ##   - multiple  (e.g. http://<host>:9000/api/system/metrics/multiple)
+  ##   - namespace (e.g. http://<host>:9000/api/system/metrics/namespace/{namespace})
+  ##
+  ## For namespace endpoint, the metrics array will be ignored for that call.
+  ## Endpoint can contain namespace and multiple type calls.
+  ##
+  ## Please check http://[graylog-server-ip]:9000/api/api-browser for full list
+  ## of endpoints
+  servers = [
+    "http://[graylog-server-ip]:9000/api/system/metrics/multiple",
+  ]
+
+  ## Set timeout (default 5 seconds)
+  # timeout = "5s"
+
+  ## Metrics list
+  ## List of metrics can be found on Graylog webservice documentation.
+  ## Or by hitting the web service api at:
+  ##   http://[graylog-host]:9000/api/system/metrics
+  metrics = [
+    "jvm.cl.loaded",
+    "jvm.memory.pools.Metaspace.committed"
+  ]
+
+  ## Username and password
+  username = ""
+  password = ""
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+Refer to the Graylog API browser for the full list of metric endpoints:
+`http://host:9000/api/api-browser`
+
+## Metrics
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/haproxy/_index.md b/content/telegraf/v1/input-plugins/haproxy/_index.md
new file mode 100644
index 000000000..32b5e7b81
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/haproxy/_index.md
@@ -0,0 +1,138 @@
+---
+description: "Telegraf plugin for collecting metrics from HAProxy"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: HAProxy
+    identifier: input-haproxy
+tags: [HAProxy, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# HAProxy Input Plugin
+
+The [HAProxy](http://www.haproxy.org/) input plugin gathers [statistics](https://cbonte.github.io/haproxy-dconv/1.9/intro.html#3.3.16)
+using the [stats socket](https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#3.1-stats%20socket) or [HTTP statistics page](https://cbonte.github.io/haproxy-dconv/1.9/management.html#9) of a HAProxy server.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics of HAProxy, via stats socket or http endpoints
+[[inputs.haproxy]]
+  ## List of stats endpoints. Metrics can be collected from both http and socket
+  ## endpoints. Examples of valid endpoints:
+  ##   - http://myhaproxy.com:1936/haproxy?stats
+  ##   - https://myhaproxy.com:8000/stats
+  ##   - socket:/run/haproxy/admin.sock
+  ##   - /run/haproxy/*.sock
+  ##   - tcp://127.0.0.1:1936
+  ##
+  ## Server addresses not starting with 'http://', 'https://', 'tcp://' will be
+  ## treated as possible sockets. When specifying local socket, glob patterns are
+  ## supported.
+  servers = ["http://myhaproxy.com:1936/haproxy?stats"]
+
+  ## By default, some of the fields are renamed from what haproxy calls them.
+  ## Setting this option to true results in the plugin keeping the original
+  ## field names.
+  # keep_field_names = false
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+### HAProxy Configuration
+
+The following information may be useful when getting started, but please consult
+the HAProxy documentation for complete and up to date instructions.
+
+The [`stats enable`](https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-stats%20enable)
+option can be used to add unauthenticated access over HTTP using the default
+settings. To enable the unix socket, begin by reading about the
+[`stats socket`](https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#3.1-stats%20socket)
+option.
+
+### servers
+
+Server addresses must explicitly start with 'http' if you wish to use the
+HAProxy status page. Otherwise, addresses will be assumed to be a UNIX socket
+and any protocol (if present) will be discarded.
+
+When using socket names, wildcard expansion is supported so the plugin can
+gather stats from multiple sockets at once.
+
+To use HTTP Basic Auth add the username and password in the userinfo section of
+the URL: `http://user:password@1.2.3.4/haproxy?stats`.  The credentials are sent
+via the `Authorization` header and not using the request URL.
+
+### keep_field_names
+
+By default, some of the fields are renamed from what haproxy calls them.
+Setting the `keep_field_names` parameter to `true` will result in the plugin
+keeping the original field names.
+
+The following renames are made:
+
+- `pxname` -> `proxy`
+- `svname` -> `sv`
+- `act` -> `active_servers`
+- `bck` -> `backup_servers`
+- `cli_abrt` -> `cli_abort`
+- `srv_abrt` -> `srv_abort`
+- `hrsp_1xx` -> `http_response.1xx`
+- `hrsp_2xx` -> `http_response.2xx`
+- `hrsp_3xx` -> `http_response.3xx`
+- `hrsp_4xx` -> `http_response.4xx`
+- `hrsp_5xx` -> `http_response.5xx`
+- `hrsp_other` -> `http_response.other`
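+
+To keep HAProxy's native names instead, the option can be sketched as (the
+server address is hypothetical):
+
+```toml
+[[inputs.haproxy]]
+  servers = ["http://127.0.0.1:1936/haproxy?stats"]
+  ## emit `pxname`, `hrsp_2xx`, etc. unchanged
+  keep_field_names = true
+```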
+
+## Metrics
+
+For more details about collected metrics reference the
+[HAProxy CSV format documentation](https://cbonte.github.io/haproxy-dconv/1.8/management.html#9.1).
+
+- haproxy
+  - tags:
+    - `server` - address of the server data was gathered from
+    - `proxy` - proxy name
+    - `sv` - service name
+    - `type` - proxy session type
+  - fields:
+    - `status` (string)
+    - `check_status` (string)
+    - `last_chk` (string)
+    - `mode` (string)
+    - `tracked` (string)
+    - `agent_status` (string)
+    - `last_agt` (string)
+    - `addr` (string)
+    - `cookie` (string)
+    - `lastsess` (int)
+    - **all other stats** (int)
+
+
+## Example Output
+
+```text
+haproxy,server=/run/haproxy/admin.sock,proxy=public,sv=FRONTEND,type=frontend http_response.other=0i,req_rate_max=1i,comp_byp=0i,status="OPEN",rate_lim=0i,dses=0i,req_rate=0i,comp_rsp=0i,bout=9287i,comp_in=0i,mode="http",smax=1i,slim=2000i,http_response.1xx=0i,conn_rate=0i,dreq=0i,ereq=0i,iid=2i,rate_max=1i,http_response.2xx=1i,comp_out=0i,intercepted=1i,stot=2i,pid=1i,http_response.5xx=1i,http_response.3xx=0i,http_response.4xx=0i,conn_rate_max=1i,conn_tot=2i,dcon=0i,bin=294i,rate=0i,sid=0i,req_tot=2i,scur=0i,dresp=0i 1513293519000000000
+```
diff --git a/content/telegraf/v1/input-plugins/hddtemp/_index.md b/content/telegraf/v1/input-plugins/hddtemp/_index.md
new file mode 100644
index 000000000..a09a0b0da
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/hddtemp/_index.md
@@ -0,0 +1,75 @@
+---
+description: "Telegraf plugin for collecting metrics from HDDtemp"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: HDDtemp
+    identifier: input-hddtemp
+tags: [HDDtemp, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# HDDtemp Input Plugin
+
+This plugin reads data from the hddtemp daemon.
+
+Hddtemp should be installed and its daemon running.
+
+## OS Support & Alternatives
+
+This plugin depends on the availability of the `hddtemp` binary. The upstream
+project is not active and Debian made the decision to remove it in Debian
+Bookworm. This means the rest of the Debian ecosystem no longer has this binary
+in later releases, like Ubuntu 22.04.
+
+As an alternative consider using the [`smartctl` plugin]. This parses the full
+JSON output from `smartctl`, which includes temperature data, in addition to
+much more data about devices in a system.
+
+[`smartctl` plugin]: ../smartctl/README.md
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Monitor disks' temperatures using hddtemp
+[[inputs.hddtemp]]
+  ## By default, telegraf gathers temperature data from all disks detected
+  ## by hddtemp.
+  ##
+  ## Only collect temps from the selected disks.
+  ##
+  ## A * as the device name will return the temperature values of all disks.
+  ##
+  # address = "127.0.0.1:7634"
+  # devices = ["sda", "*"]
+```
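+
+For example, a minimal configuration gathering temperatures for two specific
+disks from a remote daemon might look like this (the address is hypothetical):
+
+```toml
+[[inputs.hddtemp]]
+  address = "192.168.1.10:7634"
+  devices = ["sda", "sdb"]
+```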
+
+## Metrics
+
+- hddtemp
+  - tags:
+    - device
+    - model
+    - unit
+    - status
+    - source
+  - fields:
+    - temperature
+
+## Example Output
+
+```text
+hddtemp,source=server1,unit=C,status=,device=sdb,model=WDC\ WD740GD-00FLA1 temperature=43i 1481655647000000000
+hddtemp,device=sdc,model=SAMSUNG\ HD103UI,unit=C,source=server1,status= temperature=38i 1481655647000000000
+hddtemp,device=sdd,model=SAMSUNG\ HD103UI,unit=C,source=server1,status= temperature=36i 1481655647000000000
+```
diff --git a/content/telegraf/v1/input-plugins/http/_index.md b/content/telegraf/v1/input-plugins/http/_index.md
new file mode 100644
index 000000000..d39856714
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/http/_index.md
@@ -0,0 +1,175 @@
+---
+description: "Telegraf plugin for collecting metrics from HTTP"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: HTTP
+    identifier: input-http
+tags: [HTTP, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# HTTP Input Plugin
+
+The HTTP input plugin collects metrics from one or more HTTP(S) endpoints.  The
+endpoint should have metrics formatted in one of the supported input data
+formats.  Each data format has its own
+unique set of configuration options which can be added to the input
+configuration.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `username`, `password`,
+`token`, `headers`, and `cookie_auth_headers` options.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets)
+for more details on how to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Read formatted metrics from one or more HTTP endpoints
+[[inputs.http]]
+  ## One or more URLs from which to read formatted metrics.
+  urls = [
+    "http://localhost/metrics",
+    "http+unix:///run/user/420/podman/podman.sock:/d/v4.0.0/libpod/pods/json"
+  ]
+
+  ## HTTP method
+  # method = "GET"
+
+  ## Optional HTTP headers
+  # headers = {"X-Special-Header" = "Special-Value"}
+
+  ## HTTP entity-body to send with POST/PUT requests.
+  # body = ""
+
+  ## HTTP Content-Encoding for write request body, can be set to "gzip" to
+  ## compress body or "identity" to apply no encoding.
+  # content_encoding = "identity"
+
+  ## Optional Bearer token settings to use for the API calls.
+  ## Use either the token itself or the token file if you need a token.
+  # token = "eyJhbGc...Qssw5c"
+  # token_file = "/path/to/file"
+
+  ## Optional HTTP Basic Auth Credentials
+  # username = "username"
+  # password = "pa$$word"
+
+  ## OAuth2 Client Credentials. The options 'client_id', 'client_secret', and 'token_url' are required to use OAuth2.
+  # client_id = "clientid"
+  # client_secret = "secret"
+  # token_url = "https://identityprovider/oauth2/v1/token"
+  # scopes = ["urn:opc:idm:__myscopes__"]
+
+  ## HTTP Proxy support
+  # use_system_proxy = false
+  # http_proxy_url = ""
+
+  ## Optional TLS Config
+  ## Set to true/false to enforce TLS being enabled/disabled. If not set,
+  ## enable TLS only if any of the other options are specified.
+  # tls_enable =
+  ## Trusted root certificates for server
+  # tls_ca = "/path/to/cafile"
+  ## Used for TLS client certificate authentication
+  # tls_cert = "/path/to/certfile"
+  ## Used for TLS client certificate authentication
+  # tls_key = "/path/to/keyfile"
+  ## Password for the key file if it is encrypted
+  # tls_key_pwd = ""
+  ## Send the specified TLS server name via SNI
+  # tls_server_name = "kubernetes.example.com"
+  ## Minimal TLS version to accept by the client
+  # tls_min_version = "TLS12"
+  ## List of ciphers to accept, by default all secure ciphers will be accepted
+  ## See https://pkg.go.dev/crypto/tls#pkg-constants for supported values.
+  ## Use "all", "secure" and "insecure" to add all supported ciphers, secure
+  ## suites or insecure suites respectively.
+  # tls_cipher_suites = ["secure"]
+  ## Renegotiation method, "never", "once" or "freely"
+  # tls_renegotiation_method = "never"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## Optional Cookie authentication
+  # cookie_auth_url = "https://localhost/authMe"
+  # cookie_auth_method = "POST"
+  # cookie_auth_username = "username"
+  # cookie_auth_password = "pa$$word"
+  # cookie_auth_headers = { Content-Type = "application/json", X-MY-HEADER = "hello" }
+  # cookie_auth_body = '{"username": "user", "password": "pa$$word", "authenticate": "me"}'
+  ## cookie_auth_renewal not set or set to "0" will auth once and never renew the cookie
+  # cookie_auth_renewal = "5m"
+
+  ## Amount of time allowed to complete the HTTP request
+  # timeout = "5s"
+
+  ## List of success status codes
+  # success_status_codes = [200]
+
+  ## Data format to consume.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  # data_format = "influx"
+
+```
+
+HTTP requests over Unix domain sockets can be specified via the "http+unix" or
+"https+unix" schemes.
+Request URLs should have the following form:
+
+```text
+http+unix:///path/to/service.sock:/api/endpoint
+```
+
+Note: The path to the Unix domain socket and the request endpoint are separated
+by a colon (":").
+
+## Example Output
+
+This example output was taken from [this instructional article](https://docs.influxdata.com/telegraf/v1/configure_plugins/input_plugins/using_http/).
+
+
+```text
+citibike,station_id=4703 eightd_has_available_keys=false,is_installed=1,is_renting=1,is_returning=1,legacy_id="4703",num_bikes_available=6,num_bikes_disabled=2,num_docks_available=26,num_docks_disabled=0,num_ebikes_available=0,station_status="active" 1641505084000000000
+citibike,station_id=4704 eightd_has_available_keys=false,is_installed=1,is_renting=1,is_returning=1,legacy_id="4704",num_bikes_available=10,num_bikes_disabled=2,num_docks_available=36,num_docks_disabled=0,num_ebikes_available=0,station_status="active" 1641505084000000000
+citibike,station_id=4711 eightd_has_available_keys=false,is_installed=1,is_renting=1,is_returning=1,legacy_id="4711",num_bikes_available=9,num_bikes_disabled=0,num_docks_available=36,num_docks_disabled=0,num_ebikes_available=1,station_status="active" 1641505084000000000
+```
+
+## Metrics
+
+The metrics collected by this input plugin will depend on the configured
+`data_format` and the payload returned by the HTTP endpoint(s).
+
+The default values below are added if the input format does not specify a value:
+
+- http
+  - tags:
+    - url
+
+## Optional Cookie Authentication Settings
+
+The optional Cookie Authentication Settings will retrieve a cookie from the
+given authorization endpoint, and use it in subsequent API requests.  This is
+useful for services that do not provide OAuth or Basic Auth authentication,
+e.g. the [Tesla Powerwall API](https://www.tesla.com/support/energy/powerwall/own/monitoring-from-home-network), which uses a Cookie Auth Body to retrieve
+an authorization cookie.  The Cookie Auth Renewal interval will renew the
+authorization by retrieving a new cookie at the given interval.
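+
+As a sketch, a cookie-authenticated configuration could look like the
+following (the URL, endpoint path, and credentials are hypothetical):
+
+```toml
+[[inputs.http]]
+  urls = ["https://powerwall.local/api/meters/aggregates"]
+  data_format = "json"
+
+  ## retrieve a cookie first, then renew it every 10 minutes
+  cookie_auth_url = "https://powerwall.local/login/Basic"
+  cookie_auth_method = "POST"
+  cookie_auth_body = '{"username": "customer", "password": "secret"}'
+  cookie_auth_renewal = "10m"
+```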
+
diff --git a/content/telegraf/v1/input-plugins/http_listener/_index.md b/content/telegraf/v1/input-plugins/http_listener/_index.md
new file mode 100644
index 000000000..1abe0ae27
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/http_listener/_index.md
@@ -0,0 +1,34 @@
+---
+description: "Telegraf plugin for collecting metrics from HTTP Listener"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: HTTP Listener
+    identifier: input-http_listener
+tags: [HTTP Listener, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# HTTP Listener Input Plugin
+
+This service input plugin listens for requests sent according to the
+[InfluxDB HTTP API](https://docs.influxdata.com/influxdb/v1.8/guides/write_data/).
+The intent of the plugin is to allow Telegraf to serve as a proxy/router for
+the `/write` endpoint of the InfluxDB HTTP API.
+
+> [!NOTE]
+> This plugin was renamed to
+> [`influxdb_listener`](/telegraf/v1/input-plugins/influxdb_listener/) in v1.9
+> and is deprecated since then. If you wish to receive general metrics via HTTP
+> it is recommended to use the
+> [`http_listener_v2`](/telegraf/v1/input-plugins/http_listener_v2/) plugin
+> instead.
+
+**introduced in:** Telegraf v1.30.0
+**deprecated in:** Telegraf v1.9.0
+**removal in:** Telegraf v1.35.0
+**tags:** servers, web
+**supported OS:** all
diff --git a/content/telegraf/v1/input-plugins/http_listener_v2/_index.md b/content/telegraf/v1/input-plugins/http_listener_v2/_index.md
new file mode 100644
index 000000000..aabca75e8
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/http_listener_v2/_index.md
@@ -0,0 +1,154 @@
+---
+description: "Telegraf plugin for collecting metrics from HTTP Listener v2"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: HTTP Listener v2
+    identifier: input-http_listener_v2
+tags: [HTTP Listener v2, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# HTTP Listener v2 Input Plugin
+
+HTTP Listener v2 is a service input plugin that listens for metrics sent via
+HTTP. Metrics may be sent in any supported
+[data format](/telegraf/v1/data_formats/input). For metrics in
+[InfluxDB Line Protocol](https://docs.influxdata.com/influxdb/cloud/reference/syntax/line-protocol/)
+it's recommended to use the
+[`influxdb_listener`](/telegraf/v1/input-plugins/influxdb_listener/) or
+[`influxdb_v2_listener`](/telegraf/v1/input-plugins/influxdb_v2_listener/)
+plugins instead.
+
+**Note:** The plugin previously known as `http_listener` has been renamed
+`influxdb_listener`.  If you would like Telegraf to act as a proxy/relay for
+InfluxDB it is recommended to use
+[`influxdb_listener`](/telegraf/v1/input-plugins/influxdb_listener/) or
+[`influxdb_v2_listener`](/telegraf/v1/input-plugins/influxdb_v2_listener/).
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by
+the interval setting. Service plugins start a service that listens and waits
+for metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Generic HTTP write listener
+[[inputs.http_listener_v2]]
+  ## Address to host HTTP listener on
+  ## Can be prefixed by a protocol ("tcp" or "unix"); defaults to tcp if not
+  ## provided. For the unix network type, it must be followed by the absolute
+  ## path to a unix socket.
+  service_address = "tcp://:8080"
+  ## service_address = "tcp://:8443"
+  ## service_address = "unix:///tmp/telegraf.sock"
+
+  ## Permission for unix sockets (only available for unix sockets)
+  ## This setting may not be respected by some platforms. To safely restrict
+  ## permissions it is recommended to place the socket into a previously
+  ## created directory with the desired permissions.
+  ##   ex: socket_mode = "777"
+  # socket_mode = ""
+
+  ## Paths to listen to.
+  # paths = ["/telegraf"]
+
+  ## Save path as http_listener_v2_path tag if set to true
+  # path_tag = false
+
+  ## HTTP methods to accept.
+  # methods = ["POST", "PUT"]
+
+  ## Optional HTTP headers
+  ## These headers are applied to the server that is listening for HTTP
+  ## requests and included in responses.
+  # http_headers = {"HTTP_HEADER" = "TAG_NAME"}
+
+  ## HTTP Return Success Code
+  ## This is the HTTP code that will be returned on success
+  # http_success_code = 204
+
+  ## maximum duration before timing out read of the request
+  # read_timeout = "10s"
+  ## maximum duration before timing out write of the response
+  # write_timeout = "10s"
+
+  ## Maximum allowed http request body size in bytes.
+  ## 0 means to use the default of 524,288,000 bytes (500 mebibytes)
+  # max_body_size = "500MB"
+
+  ## Part of the request to consume.  Available options are "body" and
+  ## "query".
+  # data_source = "body"
+
+  ## Set one or more allowed client CA certificate file names to
+  ## enable mutually authenticated TLS connections
+  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]
+
+  ## Add service certificate and key
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+
+  ## Minimal TLS version accepted by the server
+  # tls_min_version = "TLS12"
+
+  ## Optional username and password to accept for HTTP basic authentication.
+  ## You probably want to make sure you have TLS configured above for this.
+  # basic_username = "foobar"
+  # basic_password = "barfoo"
+
+  ## Optional setting to map http headers into tags
+  ## If the http header is not present on the request, no corresponding tag will be added
+  ## If multiple instances of the http header are present, only the first value will be used
+  # http_header_tags = {"HTTP_HEADER" = "TAG_NAME"}
+
+  ## Data format to consume.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  data_format = "influx"
+```
+
+## Metrics
+
+Metrics are collected from the part of the request specified by the
+`data_source` param and are parsed depending on the value of `data_format`.
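+
+For example, to parse metrics from the URL query string instead of the
+request body, a sketch might be (pairing `data_source = "query"` with the
+`form_urlencoded` data format is an assumption about a suitable parser):
+
+```toml
+[[inputs.http_listener_v2]]
+  service_address = "tcp://:8080"
+  paths = ["/telegraf"]
+  data_source = "query"
+  data_format = "form_urlencoded"
+```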
+
+## Troubleshooting
+
+Send Line Protocol:
+
+```shell
+curl -i -XPOST 'http://localhost:8080/telegraf' --data-binary 'cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000'
+```
+
+Send JSON:
+
+```shell
+curl -i -XPOST 'http://localhost:8080/telegraf' --data-binary '{"value1": 42, "value2": 42}'
+```
+
+Send query params:
+
+```shell
+curl -i -XGET 'http://localhost:8080/telegraf?host=server01&value=0.42'
+```
+
diff --git a/content/telegraf/v1/input-plugins/http_response/_index.md b/content/telegraf/v1/input-plugins/http_response/_index.md
new file mode 100644
index 000000000..bb4d6293b
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/http_response/_index.md
@@ -0,0 +1,161 @@
+---
+description: "Telegraf plugin for collecting metrics from HTTP Response"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: HTTP Response
+    identifier: input-http_response
+tags: [HTTP Response, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# HTTP Response Input Plugin
+
+This input plugin checks HTTP/HTTPS connections.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `username` and
+`password` options.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets)
+for more details on how to use them.
+
+## Configuration
+
+```toml @sample.conf
+# HTTP/HTTPS request given an address, a method and a timeout
+[[inputs.http_response]]
+  ## List of urls to query.
+  # urls = ["http://localhost"]
+
+  ## Set http_proxy.
+  ## Telegraf uses the system-wide proxy settings if it is not set.
+  # http_proxy = "http://localhost:8888"
+
+  ## Set response_timeout (default 5 seconds)
+  # response_timeout = "5s"
+
+  ## HTTP Request Method
+  # method = "GET"
+
+  ## Whether to follow redirects from the server (defaults to false)
+  # follow_redirects = false
+
+  ## Optional file with Bearer token
+  ## file content is added as an Authorization header
+  # bearer_token = "/path/to/file"
+
+  ## Optional HTTP Basic Auth Credentials
+  # username = "username"
+  # password = "pa$$word"
+
+  ## Optional HTTP Request Body
+  # body = '''
+  # {'fake':'data'}
+  # '''
+
+  ## Optional HTTP Request Body Form
+  ## Key value pairs to encode and set at URL form. Can be used with the POST
+  ## method + application/x-www-form-urlencoded content type to replicate the
+  ## POSTFORM method.
+  # body_form = { "key": "value" }
+
+  ## Optional name of the field that will contain the body of the response.
+  ## By default it is set to an empty string, indicating that the body's
+  ## content won't be added.
+  # response_body_field = ''
+
+  ## Maximum allowed HTTP response body size in bytes.
+  ## 0 means to use the default of 32MiB.
+  ## If the response body size exceeds this limit a "body_read_error" will
+  ## be raised.
+  # response_body_max_size = "32MiB"
+
+  ## Optional substring or regex match in body of the response (case sensitive)
+  # response_string_match = "\"service_status\": \"up\""
+  # response_string_match = "ok"
+  # response_string_match = "\".*_status\".?:.?\"up\""
+
+  ## Expected response status code.
+  ## The status code of the response is compared to this value. If they match,
+  ## the field "response_status_code_match" will be 1, otherwise it will be 0.
+  ## If the expected status code is 0, the check is disabled and the field
+  ## won't be added.
+  # response_status_code = 0
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+  ## Use the given name as the SNI server name on each URL
+  # tls_server_name = ""
+  ## TLS renegotiation method, choose from "never", "once", "freely"
+  # tls_renegotiation_method = "never"
+
+  ## HTTP Request Headers (all values must be strings)
+  # [inputs.http_response.headers]
+  #   Host = "github.com"
+
+  ## Optional setting to map response http headers into tags
+  ## If the http header is not present on the request, no corresponding tag will
+  ## be added. If multiple instances of the http header are present, only the
+  ## first value will be used.
+  # http_header_tags = {"HTTP_HEADER" = "TAG_NAME"}
+
+  ## Interface to use when dialing an address
+  # interface = "eth0"
+
+  ## Optional Cookie authentication
+  # cookie_auth_url = "https://localhost/authMe"
+  # cookie_auth_method = "POST"
+  # cookie_auth_username = "username"
+  # cookie_auth_password = "pa$$word"
+  # cookie_auth_body = '{"username": "user", "password": "pa$$word", "authenticate": "me"}'
+  ## cookie_auth_renewal not set or set to "0" will auth once and never renew the cookie
+  # cookie_auth_renewal = "5m"
+```
+
+## Metrics
+
+- http_response
+  - tags:
+    - server (target URL)
+    - method (request method)
+    - status_code (response status code)
+    - result (see below)
+  - fields:
+    - result_type (string, same value as the `result` tag)
+    - result_code (int, see below)
+    - response_time (float, seconds)
+    - content_length (int, response body length)
+    - response_string_match (int, 0 = mismatch / 1 = match)
+    - response_status_code_match (int, 0 = mismatch / 1 = match)
+    - http_response_code (int, response status code)
+
+The `result` tag describes the outcome of the request, and the `result_code`
+field carries the corresponding numeric value:
+
+|Tag value                     |Corresponding field value|Description|
+|------------------------------|-------------------------|-----------|
+|success                       | 0                       |The HTTP request completed, even if the HTTP code represents an error|
+|response_string_mismatch      | 1                       |The option `response_string_match` was used, and the body of the response didn't match the regex|
+|body_read_error               | 2                       |The body of the response couldn't be read. Or the option `response_body_field` was used and the content of the response body was not valid utf-8. Or the size of the body of the response exceeded the `response_body_max_size`|
+|connection_failed             | 3                       |Catch all for any network error not specifically handled by the plugin|
+|timeout                       | 4                       |The plugin timed out while awaiting the HTTP connection to complete|
+|dns_error                     | 5                       |There was a DNS error while attempting to connect to the host|
+|response_status_code_mismatch | 6                       |The option `response_status_code_match` was used, and the status code of the response didn't match the value.|
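+
+For instance, a probe that checks both the status code and a substring in the
+body could be configured like this (URL and match pattern are hypothetical):
+
+```toml
+[[inputs.http_response]]
+  urls = ["https://example.org/healthz"]
+  method = "GET"
+  response_status_code = 200
+  response_string_match = "\"status\": \"up\""
+```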
+
+## Example Output
+
+```text
+http_response,method=GET,result=success,server=http://github.com,status_code=200 content_length=87878i,http_response_code=200i,response_time=0.937655534,result_code=0i,result_type="success" 1565839598000000000
+```
+
+## Optional Cookie Authentication Settings
+
+The optional Cookie Authentication Settings will retrieve a cookie from the
+given authorization endpoint, and use it in subsequent API requests.  This is
+useful for services that do not provide OAuth or Basic Auth authentication,
+e.g. the [Tesla Powerwall API](https://www.tesla.com/support/energy/powerwall/own/monitoring-from-home-network), which uses a Cookie Auth Body to retrieve
+an authorization cookie.  The Cookie Auth Renewal interval will renew the
+authorization by retrieving a new cookie at the given interval.
+
diff --git a/content/telegraf/v1/input-plugins/hugepages/_index.md b/content/telegraf/v1/input-plugins/hugepages/_index.md
new file mode 100644
index 000000000..f95dba188
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/hugepages/_index.md
@@ -0,0 +1,95 @@
+---
+description: "Telegraf plugin for collecting metrics from Hugepages"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Hugepages
+    identifier: input-hugepages
+tags: [Hugepages, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Hugepages Input Plugin
+
+Transparent Huge Pages (THP) is a Linux memory management system that reduces
+the overhead of Translation Lookaside Buffer (TLB) lookups on machines with
+large amounts of memory by using larger memory pages.
+
+Consult [the website](https://www.kernel.org/doc/html/latest/admin-guide/mm/hugetlbpage.html) for more details.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Gathers huge pages measurements.
+# This plugin ONLY supports Linux
+[[inputs.hugepages]]
+  ## Supported huge page types:
+  ##   - "root"     - based on root huge page control directory:
+  ##                  /sys/kernel/mm/hugepages
+  ##   - "per_node" - based on per NUMA node directories:
+  ##                  /sys/devices/system/node/node[0-9]*/hugepages
+  ##   - "meminfo"  - based on /proc/meminfo file
+  # types = ["root", "per_node"]
+```
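+
+For instance, restricting collection to `/proc/meminfo` (e.g. on hosts where
+the per-node sysfs entries are not of interest) is a matter of:
+
+```toml
+[[inputs.hugepages]]
+  types = ["meminfo"]
+```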
+
+## Metrics
+
+### Measurements
+
+**The following measurements are supported by the Hugepages plugin:**
+
+- hugepages_root (gathered from `/sys/kernel/mm/hugepages`)
+  - tags:
+    - size_kb (integer, kB)
+  - fields:
+    - free (integer)
+    - mempolicy (integer)
+    - overcommit (integer)
+    - reserved (integer)
+    - surplus (integer)
+    - total (integer)
+- hugepages_per_node (gathered from `/sys/devices/system/node/node[0-9]*/hugepages`)
+  - tags:
+    - size_kb (integer, kB)
+    - node (integer)
+  - fields:
+    - free (integer)
+    - surplus (integer)
+    - total (integer)
+- hugepages_meminfo (gathered from `/proc/meminfo` file)
+  - The fields `total`, `free`, `reserved`, and `surplus` are counts of pages
+    of default size. Fields with suffix `_kb` are in kilobytes.
+  - fields:
+    - anonymous_kb (integer, kB)
+    - file_kb (integer, kB)
+    - free (integer)
+    - reserved (integer)
+    - shared_kb (integer, kB)
+    - size_kb (integer, kB)
+    - surplus (integer)
+    - tlb_kb (integer, kB)
+    - total (integer)
+
+## Example Output
+
+```text
+hugepages_root,host=ubuntu,size_kb=1048576 free=0i,mempolicy=8i,overcommit=0i,reserved=0i,surplus=0i,total=8i 1646258020000000000
+hugepages_root,host=ubuntu,size_kb=2048 free=883i,mempolicy=2048i,overcommit=0i,reserved=0i,surplus=0i,total=2048i 1646258020000000000
+hugepages_per_node,host=ubuntu,size_kb=1048576,node=0 free=0i,surplus=0i,total=4i 1646258020000000000
+hugepages_per_node,host=ubuntu,size_kb=2048,node=0 free=434i,surplus=0i,total=1024i 1646258020000000000
+hugepages_per_node,host=ubuntu,size_kb=1048576,node=1 free=0i,surplus=0i,total=4i 1646258020000000000
+hugepages_per_node,host=ubuntu,size_kb=2048,node=1 free=449i,surplus=0i,total=1024i 1646258020000000000
+hugepages_meminfo,host=ubuntu anonymous_kb=0i,file_kb=0i,free=883i,reserved=0i,shared_kb=0i,size_kb=2048i,surplus=0i,tlb_kb=12582912i,total=2048i 1646258020000000000
+```
diff --git a/content/telegraf/v1/input-plugins/icinga2/_index.md b/content/telegraf/v1/input-plugins/icinga2/_index.md
new file mode 100644
index 000000000..476deaed6
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/icinga2/_index.md
@@ -0,0 +1,190 @@
+---
+description: "Telegraf plugin for collecting metrics from Icinga2"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Icinga2
+    identifier: input-icinga2
+tags: [Icinga2, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Icinga2 Input Plugin
+
+This plugin gathers status on running services and hosts using the Icinga2
+remote API. You can read Icinga2's documentation for their remote API
+[here](https://docs.icinga.com/icinga2/latest/doc/module/icinga2/chapter/icinga2-api).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Gather Icinga2 status
+[[inputs.icinga2]]
+  ## Required Icinga2 server address
+  # server = "https://localhost:5665"
+
+  ## Collected Icinga2 objects ("services", "hosts")
+  ## Specify at least one object to collect from /v1/objects endpoint.
+  # objects = ["services"]
+
+  ## Collect metrics from /v1/status endpoint
+  ## Choose from:
+  ##     "ApiListener", "CIB", "IdoMysqlConnection", "IdoPgsqlConnection"
+  # status = []
+
+  ## Credentials for basic HTTP authentication
+  # username = "admin"
+  # password = "admin"
+
+  ## Maximum time to receive response.
+  # response_timeout = "5s"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = true
+```
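+
+A concrete setup collecting both object types plus CIB status could look like
+this (server address and credentials are hypothetical):
+
+```toml
+[[inputs.icinga2]]
+  server = "https://icinga.example.com:5665"
+  objects = ["services", "hosts"]
+  status = ["CIB"]
+  username = "telegraf"
+  password = "secret"
+  insecure_skip_verify = true
+```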
+
+## Metrics
+
+- `icinga2_hosts`
+  - tags
+    - `check_command` - The short name of the check command
+    - `display_name` - The name of the host
+    - `state` - The state: UP/DOWN
+    - `source` - The icinga2 host
+    - `port` - The icinga2 port
+    - `scheme` - The icinga2 protocol (http/https)
+    - `server` - The server the check_command is running for
+  - fields
+    - `name` (string)
+    - `state_code` (int)
+- `icinga2_services`
+  - tags
+    - `check_command` - The short name of the check command
+    - `display_name` - The name of the service
+    - `state` - The state: OK/WARNING/CRITICAL/UNKNOWN for services
+    - `source` - The icinga2 host
+    - `port` - The icinga2 port
+    - `scheme` - The icinga2 protocol (http/https)
+    - `server` - The server the check_command is running for
+  - fields
+    - `name` (string)
+    - `state_code` (int)
+- `icinga2_status`
+  - component:
+    - `ApiListener`
+      - tags
+        - `component` name
+      - fields
+        - `api_num_conn_endpoints`
+        - `api_num_endpoint`
+        - `api_num_http_clients`
+        - `api_num_json_rpc_anonymous_clients`
+        - `api_num_json_rpc_relay_queue_item_rate`
+        - `api_num_json_rpc_relay_queue_items`
+        - `api_num_json_rpc_sync_queue_item_rate`
+        - `api_num_json_rpc_sync_queue_items`
+        - `api_num_json_rpc_work_queue_item_rate`
+        - `api_num_not_conn_endpoints`
+    - `CIB`
+      - tags
+        - `component` name
+      - fields
+        - `active_host_checks`
+        - `active_host_checks_15min`
+        - `active_host_checks_1min`
+        - `active_host_checks_5min`
+        - `active_service_checks`
+        - `active_service_checks_15min`
+        - `active_service_checks_1min`
+        - `active_service_checks_5min`
+        - `avg_execution_time`
+        - `avg_latency`
+        - `current_concurrent_checks`
+        - `current_pending_callbacks`
+        - `max_execution_time`
+        - `max_latency`
+        - `min_execution_time`
+        - `min_latency`
+        - `num_hosts_acknowledged`
+        - `num_hosts_down`
+        - `num_hosts_flapping`
+        - `num_hosts_handled`
+        - `num_hosts_in_downtime`
+        - `num_hosts_pending`
+        - `num_hosts_problem`
+        - `num_hosts_unreachable`
+        - `num_hosts_up`
+        - `num_services_acknowledged`
+        - `num_services_critical`
+        - `num_services_flapping`
+        - `num_services_handled`
+        - `num_services_in_downtime`
+        - `num_services_ok`
+        - `num_services_pending`
+        - `num_services_problem`
+        - `num_services_unknown`
+        - `num_services_unreachable`
+        - `num_services_warning`
+        - `passive_host_checks`
+        - `passive_host_checks_15min`
+        - `passive_host_checks_1min`
+        - `passive_host_checks_5min`
+        - `passive_service_checks`
+        - `passive_service_checks_15min`
+        - `passive_service_checks_1min`
+        - `passive_service_checks_5min`
+        - `remote_check_queue`
+        - `uptime`
+    - `IdoMysqlConnection`
+      - tags
+        - `component` name
+      - fields
+        - `mysql_queries_1min`
+        - `mysql_queries_5mins`
+        - `mysql_queries_15mins`
+        - `mysql_queries_rate`
+        - `mysql_query_queue_item_rate`
+        - `mysql_query_queue_items`
+    - `IdoPgsqlConnection`
+      - tags
+        - `component` name
+      - fields
+        - `pgsql_queries_1min`
+        - `pgsql_queries_5mins`
+        - `pgsql_queries_15mins`
+        - `pgsql_queries_rate`
+        - `pgsql_query_queue_item_rate`
+        - `pgsql_query_queue_items`
+
+## Sample Queries
+
+```sql
+SELECT * FROM "icinga2_services" WHERE state_code = 0 AND time > now() - 24h -- Services with OK status
+SELECT * FROM "icinga2_services" WHERE state_code = 1 AND time > now() - 24h -- Services with WARNING status
+SELECT * FROM "icinga2_services" WHERE state_code = 2 AND time > now() - 24h -- Services with CRITICAL status
+SELECT * FROM "icinga2_services" WHERE state_code = 3 AND time > now() - 24h -- Services with UNKNOWN status
+```
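+
+Because `state` is stored as a tag, per-state counts can also be grouped
+directly. This query is illustrative and assumes the default retention policy:
+
+```sql
+-- Count service checks per state over the last hour
+SELECT count("state_code") FROM "icinga2_services" WHERE time > now() - 1h GROUP BY "state"
+```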
+
+## Example Output
+
+```text
+icinga2_hosts,display_name=router-fr.eqx.fr,check_command=hostalive-custom,host=test-vm,source=localhost,port=5665,scheme=https,state=ok name="router-fr.eqx.fr",state=0 1492021603000000000
+```
diff --git a/content/telegraf/v1/input-plugins/infiniband/_index.md b/content/telegraf/v1/input-plugins/infiniband/_index.md
new file mode 100644
index 000000000..8b64bf286
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/infiniband/_index.md
@@ -0,0 +1,79 @@
+---
+description: "Telegraf plugin for collecting metrics from InfiniBand"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: InfiniBand
+    identifier: input-infiniband
+tags: [InfiniBand, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# InfiniBand Input Plugin
+
+This plugin gathers statistics for all InfiniBand devices and ports on the
+system. These are the counters that can be found in
+`/sys/class/infiniband/<dev>/port/<port>/counters/`.
+
+**Supported Platforms**: Linux
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Gets counters from all InfiniBand cards and ports installed
+# This plugin ONLY supports Linux
+[[inputs.infiniband]]
+  # no configuration
+```
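+
+Conceptually, the collection is a walk over each port's sysfs counters
+directory, mapping file names to integer values. The sketch below is a
+simplified Python illustration (not the plugin's actual Go implementation),
+assuming the sysfs layout described above:
+
+```python
+import os
+
+def read_counters(counters_dir: str) -> dict:
+    """Map each counter file in an InfiniBand sysfs counters directory
+    to its integer value, as the plugin does for every device port."""
+    fields = {}
+    for name in os.listdir(counters_dir):
+        path = os.path.join(counters_dir, name)
+        with open(path) as f:
+            # Each sysfs counter file holds a single integer.
+            fields[name] = int(f.read().strip())
+    return fields
+
+# Usage on a Linux host with an InfiniBand device, e.g.:
+# read_counters("/sys/class/infiniband/mlx5_0/port/1/counters")
+```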
+
+## Metrics
+
+The actual metrics depend on the InfiniBand devices present; the plugin uses a
+simple one-to-one mapping from each counter file to a field value.
+
+Mellanox provides [information about the
+counters](https://community.mellanox.com/s/article/understanding-mlx5-linux-counters-and-status-parameters)
+collected by this plugin.
+
+- infiniband
+  - tags:
+    - device
+    - port
+  - fields:
+    - excessive_buffer_overrun_errors (integer)
+    - link_downed (integer)
+    - link_error_recovery (integer)
+    - local_link_integrity_errors (integer)
+    - multicast_rcv_packets (integer)
+    - multicast_xmit_packets (integer)
+    - port_rcv_constraint_errors (integer)
+    - port_rcv_data (integer)
+    - port_rcv_errors (integer)
+    - port_rcv_packets (integer)
+    - port_rcv_remote_physical_errors (integer)
+    - port_rcv_switch_relay_errors (integer)
+    - port_xmit_constraint_errors (integer)
+    - port_xmit_data (integer)
+    - port_xmit_discards (integer)
+    - port_xmit_packets (integer)
+    - port_xmit_wait (integer)
+    - symbol_error (integer)
+    - unicast_rcv_packets (integer)
+    - unicast_xmit_packets (integer)
+    - VL15_dropped (integer)
+
+## Example Output
+
+```text
+infiniband,device=mlx5_0,port=1 VL15_dropped=0i,excessive_buffer_overrun_errors=0i,link_downed=0i,link_error_recovery=0i,local_link_integrity_errors=0i,multicast_rcv_packets=0i,multicast_xmit_packets=0i,port_rcv_constraint_errors=0i,port_rcv_data=237159415345822i,port_rcv_errors=0i,port_rcv_packets=801977655075i,port_rcv_remote_physical_errors=0i,port_rcv_switch_relay_errors=0i,port_xmit_constraint_errors=0i,port_xmit_data=238334949937759i,port_xmit_discards=0i,port_xmit_packets=803162651391i,port_xmit_wait=4294967295i,symbol_error=0i,unicast_rcv_packets=801977655075i,unicast_xmit_packets=803162651391i 1573125558000000000
+```
diff --git a/content/telegraf/v1/input-plugins/influxdb/_index.md b/content/telegraf/v1/input-plugins/influxdb/_index.md
new file mode 100644
index 000000000..22ec81246
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/influxdb/_index.md
@@ -0,0 +1,469 @@
+---
+description: "Telegraf plugin for collecting metrics from InfluxDB"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: InfluxDB
+    identifier: input-influxdb
+tags: [InfluxDB, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# InfluxDB Input Plugin
+
+The InfluxDB plugin collects metrics from the `/debug/vars` endpoint of the
+configured InfluxDB v1 servers. Read the [documentation](https://docs.influxdata.com/platform/monitoring/influxdata-platform/tools/measurements-internal/)
+for detailed information about the `influxdb` metrics. For InfluxDB v2 and its
+`/metrics` endpoint, see the section below.
+
+This plugin can also gather metrics from other services that expose data in
+the same InfluxDB-formatted JSON.
+
+## InfluxDB v2 Metrics
+
+[InfluxDB v2 metrics](https://docs.influxdata.com/influxdb/latest/reference/internals/metrics/) are produced in the Prometheus plain-text format. To
+collect metrics from the `/metrics` endpoint, use the Prometheus
+input plugin. For example, to collect from a local instance:
+
+```toml
+[[inputs.prometheus]]
+  urls = ["http://localhost:8086/metrics"]
+  metric_version = 1
+```
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Read InfluxDB-formatted JSON metrics from one or more HTTP endpoints
+[[inputs.influxdb]]
+  ## Works with InfluxDB debug endpoints out of the box,
+  ## but other services can use this format too.
+  ## See the influxdb plugin's README for more details.
+
+  ## Multiple URLs from which to read InfluxDB-formatted JSON
+  ## Default is "http://localhost:8086/debug/vars".
+  urls = [
+    "http://localhost:8086/debug/vars"
+  ]
+
+  ## Username and password to send using HTTP Basic Authentication.
+  # username = ""
+  # password = ""
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## http request & header timeout
+  timeout = "5s"
+```
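+
+Conceptually, the plugin fetches each URL's `/debug/vars` JSON and turns every
+entry that carries `name`, `tags`, and `values` keys into a measurement named
+`influxdb_<name>`, tagged with the source `url`. A simplified Python sketch of
+that mapping (a hypothetical helper, not the plugin's Go code):
+
+```python
+def parse_debug_vars(vars_json: dict, url: str) -> list:
+    """Convert /debug/vars JSON into (measurement, tags, fields) tuples,
+    mimicking how the plugin names and tags its metrics."""
+    metrics = []
+    for entry in vars_json.values():
+        # Entries without the {"name", "tags", "values"} shape
+        # (e.g. "cmdline") do not become metrics.
+        if not isinstance(entry, dict) or "name" not in entry:
+            continue
+        tags = dict(entry.get("tags") or {})
+        tags["url"] = url  # every metric is tagged with its source URL
+        metrics.append(("influxdb_" + entry["name"], tags,
+                        dict(entry.get("values") or {})))
+    return metrics
+```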
+
+## Metrics
+
+**Note:** The measurements and fields included in this plugin are dynamically
+built from the InfluxDB source, and may vary between versions:
+
+- **influxdb_ae** _(Enterprise Only)_ : Statistics related to the Anti-Entropy
+  (AE) engine in InfluxDB Enterprise clusters.
+  - **bytesRx**: Number of bytes received by the data node.
+  - **errors**: Total number of anti-entropy jobs that have resulted in errors.
+  - **jobs**: Total number of jobs executed by the data node.
+  - **jobsActive**: Number of active (currently executing) jobs.
+- **influxdb_cluster** _(Enterprise Only)_ : Statistics related to the
+  clustering features of the data nodes in InfluxDB Enterprise clusters.
+  - **copyShardReq**: Number of internal requests made to copy a shard from
+    one data node to another.
+  - **createIteratorReq**: Number of read requests from other data nodes in
+    the cluster.
+  - **expandSourcesReq**: Number of remote node requests made to find
+    measurements on this node that match a particular regular expression.
+  - **fieldDimensionsReq**: Number of remote node requests for information
+    about the fields and associated types, and tag keys of measurements on
+    this data node.
+  - **iteratorCostReq**: Number of internal requests for iterator cost.
+  - **openConnections**: Tracks the number of open connections being handled by
+    the data node (including logical connections multiplexed onto a single
+    yamux connection).
+  - **removeShardReq**: Number of internal requests to delete a shard from this
+    data node. Exclusively incremented by use of the influxd-ctl remove shard
+    command.
+  - **writeShardFail**: Total number of internal write requests from a remote
+    node that failed.
+  - **writeShardPointsReq**: Number of points in every internal write request
+    from any remote node, regardless of success.
+  - **writeShardReq**: Number of internal write requests from a remote data
+    node, regardless of success.
+- **influxdb_cq**: Metrics related to continuous queries (CQs).
+  - **queryFail**: Total number of continuous queries that executed but failed.
+  - **queryOk**: Total number of continuous queries that executed successfully.
+- **influxdb_database**: Per-database metrics for each database that data is
+  collected from.
+  - **numMeasurements**: Current number of measurements in the specified
+    database.
+  - **numSeries**: Current series cardinality of the specified database.
+- **influxdb_hh** _(Enterprise Only)_ : Events resulting in new hinted handoff
+  (HH) processors in InfluxDB Enterprise clusters.
+  - **writeShardReq**: Number of initial write requests handled by the hinted
+    handoff engine for a remote node.
+  - **writeShardReqPoints**: Number of write requests for each point in the
+    initial request to the hinted handoff engine for a remote node.
+- **influxdb_hh_database** _(Enterprise Only)_ : Aggregates all hinted handoff
+  queues for a single database and node.
+  - **bytesRead**: Size, in bytes, of points read from the hinted handoff queue
+    and sent to its destination data node.
+  - **bytesWritten**: Total number of bytes written to the hinted handoff queue.
+  - **queueBytes**: Total number of bytes remaining in the hinted handoff queue.
+  - **queueDepth**: Total number of segments in the hinted handoff queue.
+    The HH queue is a sequence of 10MB “segment” files.
+  - **writeBlocked**: Number of writes blocked because the number of concurrent
+    HH requests exceeds the limit.
+  - **writeDropped**: Number of writes dropped from the HH queue because the
+    write appeared to be corrupted.
+  - **writeNodeReq**: Total number of write requests that succeeded in writing
+    a batch to the destination node.
+  - **writeNodeReqFail**: Total number of write requests that failed in writing
+    a batch of data from the hinted handoff queue to the destination node.
+  - **writeNodeReqPoints**: Total number of points successfully written from
+    the HH queue to the destination node.
+  - **writeShardReq**: Total number of write batch requests enqueued into
+    the hinted handoff queue.
+  - **writeShardReqPoints**: Total number of points enqueued into the hinted
+    handoff queue.
+- **influxdb_hh_processor** _(Enterprise Only)_: Statistics stored for a single
+  queue (shard).
+  - **bytesRead**: Size, in bytes, of points read from the hinted handoff queue
+    and sent to its destination data node.
+  - **bytesWritten**: Total number of bytes written to the hinted handoff queue.
+  - **queueBytes**: Total number of bytes remaining in the hinted handoff queue.
+  - **queueDepth**: Total number of segments in the hinted handoff queue.
+    The HH queue is a sequence of 10MB “segment” files.
+  - **writeBlocked**: Number of writes blocked because the number of concurrent
+    HH requests exceeds the limit.
+  - **writeDropped**: Number of writes dropped from the HH queue because the
+    write appeared to be corrupted.
+  - **writeNodeReq**: Total number of write requests that succeeded in writing
+    a batch to the destination node.
+  - **writeNodeReqFail**: Total number of write requests that failed in writing
+    a batch of data from the hinted handoff queue to the destination node.
+  - **writeNodeReqPoints**: Total number of points successfully written from
+    the HH queue to the destination node.
+  - **writeShardReq**: Total number of write batch requests enqueued into
+    the hinted handoff queue.
+  - **writeShardReqPoints**: Total number of points enqueued into the hinted
+    handoff queue.
+- **influxdb_httpd**: Metrics related to the InfluxDB HTTP server.
+  - **authFail**: Number of HTTP requests that were aborted due to
+    authentication being required, but not supplied or incorrect.
+  - **clientError**: Number of HTTP responses due to client errors, with
+    a 4XX HTTP status code.
+  - **fluxQueryReq**: Number of Flux query requests served.
+  - **fluxQueryReqDurationNs**: Duration (wall-time), in nanoseconds, spent
+    executing Flux query requests.
+  - **pingReq**: Number of times InfluxDB HTTP server served the /ping HTTP
+    endpoint.
+  - **pointsWrittenDropped**: Number of points dropped by the storage engine.
+  - **pointsWrittenFail**: Number of points accepted by the HTTP /write
+    endpoint, but unable to be persisted.
+  - **pointsWrittenOK**: Number of points successfully accepted and persisted
+    by the HTTP /write endpoint.
+  - **promReadReq**: Number of read requests to the Prometheus /read endpoint.
+  - **promWriteReq**: Number of write requests to the Prometheus /write
+    endpoint.
+  - **queryReq**: Number of query requests.
+  - **queryReqDurationNs**: Total query request duration, in nanoseconds (ns).
+  - **queryRespBytes**: Total number of bytes returned in query responses.
+  - **recoveredPanics**: Total number of panics recovered by the HTTP handler.
+  - **req**: Total number of HTTP requests served.
+  - **reqActive**: Number of currently active requests.
+  - **reqDurationNs**: Duration (wall time), in nanoseconds, spent inside HTTP
+    requests.
+  - **serverError**: Number of HTTP responses due to server errors.
+  - **statusReq**: Number of status requests served using the HTTP /status
+    endpoint.
+  - **valuesWrittenOK**: Number of values (fields) successfully accepted and
+    persisted by the HTTP /write endpoint.
+  - **writeReq**: Number of write requests served using the HTTP /write
+    endpoint.
+  - **writeReqActive**: Number of currently active write requests.
+  - **writeReqBytes**: Total number of bytes of line protocol data received by
+    write requests, using the HTTP /write endpoint.
+  - **writeReqDurationNs**: Duration, in nanoseconds, of write requests served
+    using the /write HTTP endpoint.
+- **influxdb_memstats**: Statistics about the memory allocator in the specified
+  database.
+  - **Alloc**: Number of bytes allocated to heap objects.
+  - **BuckHashSys**: Number of bytes of memory in profiling bucket hash tables.
+  - **Frees**: Cumulative count of heap objects freed.
+  - **GCCPUFraction**: Fraction of InfluxDB's available CPU time used by the
+    garbage collector (GC) since InfluxDB started.
+  - **GCSys**: Number of bytes of memory in garbage collection metadata.
+  - **HeapAlloc**: Number of bytes of allocated heap objects.
+  - **HeapIdle**: Number of bytes in idle (unused) spans.
+  - **HeapInuse**: Number of bytes in in-use spans.
+  - **HeapObjects**: Number of allocated heap objects.
+  - **HeapReleased**: Number of bytes of physical memory returned to the OS.
+  - **HeapSys**: Number of bytes of heap memory obtained from the OS.
+  - **LastGC**: Time the last garbage collection finished.
+  - **Lookups**: Number of pointer lookups performed by the runtime.
+  - **MCacheInuse**: Number of bytes of allocated mcache structures.
+  - **MCacheSys**: Number of bytes of memory obtained from the OS for mcache
+    structures.
+  - **MSpanInuse**: Number of bytes of allocated mspan structures.
+  - **MSpanSys**: Number of bytes of memory obtained from the OS for mspan
+    structures.
+  - **Mallocs**: Cumulative count of heap objects allocated.
+  - **NextGC**: Target heap size of the next GC cycle.
+  - **NumForcedGC**: Number of GC cycles that were forced by the application
+    calling the GC function.
+  - **NumGC**: Number of completed GC cycles.
+  - **OtherSys**: Number of bytes of memory in miscellaneous off-heap runtime
+    allocations.
+  - **PauseTotalNs**: Cumulative nanoseconds in GC stop-the-world pauses since
+    the program started.
+  - **StackInuse**: Number of bytes in stack spans.
+  - **StackSys**: Number of bytes of stack memory obtained from the OS.
+  - **Sys**: Total bytes of memory obtained from the OS.
+  - **TotalAlloc**: Cumulative bytes allocated for heap objects.
+- **influxdb_queryExecutor**: Metrics related to usage of the Query Executor
+  of the InfluxDB engine.
+  - **queriesActive**: Number of active queries currently being handled.
+  - **queriesExecuted**: Number of queries executed (started).
+  - **queriesFinished**: Number of queries that have finished executing.
+  - **queryDurationNs**: Total duration, in nanoseconds, of executed queries.
+  - **recoveredPanics**: Number of panics recovered by the Query Executor.
+- **influxdb_rpc** _(Enterprise Only)_ : Statistics related to the use of RPC
+  calls within InfluxDB Enterprise clusters.
+  - **idleStreams**: Number of idle multiplexed streams across all live TCP
+    connections.
+  - **liveConnections**: Current number of live TCP connections to other nodes.
+  - **liveStreams**: Current number of live multiplexed streams across all live
+    TCP connections.
+  - **rpcCalls**: Total number of RPC calls made to remote nodes.
+  - **rpcFailures**: Total number of RPC failures, which are RPCs that did
+    not recover.
+  - **rpcReadBytes**: Total number of RPC bytes read.
+  - **rpcRetries**: Total number of RPC calls that retried at least once.
+  - **rpcWriteBytes**: Total number of RPC bytes written.
+  - **singleUse**: Total number of single-use connections opened using Dial.
+  - **singleUseOpen**: Number of single-use connections currently open.
+  - **totalConnections**: Total number of TCP connections that have been
+    established.
+  - **totalStreams**: Total number of streams established.
+- **influxdb_runtime**: Subset of memstat record statistics for the Go memory
+  allocator.
+  - **Alloc**: Currently allocated number of bytes of heap objects.
+  - **Frees**: Cumulative number of heap objects freed.
+  - **HeapAlloc**: Size, in bytes, of all heap objects.
+  - **HeapIdle**: Number of bytes of idle heap objects.
+  - **HeapInUse**: Number of bytes in in-use spans.
+  - **HeapObjects**: Number of allocated heap objects.
+  - **HeapReleased**: Number of bytes of physical memory returned to the OS.
+  - **HeapSys**: Number of bytes of heap memory obtained from the OS. Measures
+    the amount of virtual address space reserved for the heap.
+  - **Lookups**: Number of pointer lookups performed by the runtime. Primarily
+    useful for debugging runtime internals.
+  - **Mallocs**: Total number of heap objects allocated. The number of live
+    objects is Mallocs - Frees.
+  - **NumGC**: Number of completed GC (garbage collection) cycles.
+  - **NumGoroutine**: Current number of goroutines.
+  - **PauseTotalNs**: Total duration, in nanoseconds, of total GC
+    (garbage collection) pauses.
+  - **Sys**: Total number of bytes of memory obtained from the OS. Measures
+    the virtual address space reserved by the Go runtime for the heap, stacks,
+    and other internal data structures.
+  - **TotalAlloc**: Total number of bytes allocated for heap objects. This
+    statistic does not decrease when objects are freed.
+- **influxdb_shard**: Metrics related to InfluxDB shards.
+  - **diskBytes**: Size, in bytes, of the shard, including the size of the
+    data directory and the WAL directory.
+  - **fieldsCreate**: Number of fields created.
+  - **indexType**: The type of index, `inmem` or `tsi1`.
+  - **n_shards**: Total number of shards in the specified database.
+  - **seriesCreate**: Number of series created.
+  - **writeBytes**: Number of bytes written to the shard.
+  - **writePointsDropped**: Number of requests to write points dropped from
+    a write.
+  - **writePointsErr**: Number of requests to write points that failed to be
+    written due to errors.
+  - **writePointsOk**: Number of points written successfully.
+  - **writeReq**: Total number of write requests.
+  - **writeReqErr**: Total number of write requests that failed due to errors.
+  - **writeReqOk**: Total number of successful write requests.
+- **influxdb_subscriber**: InfluxDB subscription metrics.
+  - **createFailures**: Number of subscriptions that failed to be created.
+  - **pointsWritten**: Total number of points that were successfully written
+    to subscribers.
+  - **writeFailures**: Total number of batches that failed to be written
+    to subscribers.
+- **influxdb_tsm1_cache**: TSM cache metrics.
+  - **cacheAgeMs**: Duration, in milliseconds, since the cache was last
+    snapshotted at sample time.
+  - **cachedBytes**: Total number of bytes that have been written into snapshots.
+  - **diskBytes**: Size, in bytes, of on-disk snapshots.
+  - **memBytes**: Size, in bytes, of in-memory cache.
+  - **snapshotCount**: Current level (number) of active snapshots.
+  - **WALCompactionTimeMs**: Duration, in milliseconds, that the commit lock is
+    held while compacting snapshots.
+  - **writeDropped**: Total number of writes dropped due to timeouts.
+  - **writeErr**: Total number of writes that failed.
+  - **writeOk**: Total number of successful writes.
+- **influxdb_tsm1_engine**: TSM storage engine metrics.
+  - **cacheCompactionDuration**: Duration (wall time), in nanoseconds, spent in
+    cache compactions.
+  - **cacheCompactionErr**: Number of cache compactions that have failed due
+    to errors.
+  - **cacheCompactions**: Total number of cache compactions that have ever run.
+  - **cacheCompactionsActive**: Number of cache compactions that are currently
+    running.
+  - **tsmFullCompactionDuration**: Duration (wall time), in nanoseconds, spent
+    in full compactions.
+  - **tsmFullCompactionErr**: Total number of TSM full compactions that have
+    failed due to errors.
+  - **tsmFullCompactionQueue**: Current number of pending TSM full compactions.
+  - **tsmFullCompactions**: Total number of TSM full compactions that have
+    ever run.
+  - **tsmFullCompactionsActive**: Number of TSM full compactions currently
+    running.
+  - **tsmLevel1CompactionDuration**: Duration (wall time), in nanoseconds,
+    spent in TSM level 1 compactions.
+  - **tsmLevel1CompactionErr**: Total number of TSM level 1 compactions that
+    have failed due to errors.
+  - **tsmLevel1CompactionQueue**: Current number of pending TSM level 1
+    compactions.
+  - **tsmLevel1Compactions**: Total number of TSM level 1 compactions that have
+    ever run.
+  - **tsmLevel1CompactionsActive**: Number of TSM level 1 compactions that are
+    currently running.
+  - **tsmLevel2CompactionDuration**: Duration (wall time), in nanoseconds,
+    spent in TSM level 2 compactions.
+  - **tsmLevel2CompactionErr**: Number of TSM level 2 compactions that have
+    failed due to errors.
+  - **tsmLevel2CompactionQueue**: Current number of pending TSM level 2
+    compactions.
+  - **tsmLevel2Compactions**: Total number of TSM level 2 compactions that
+    have ever run.
+  - **tsmLevel2CompactionsActive**: Number of TSM level 2 compactions that
+    are currently running.
+  - **tsmLevel3CompactionDuration**: Duration (wall time), in nanoseconds,
+    spent in TSM level 3 compactions.
+  - **tsmLevel3CompactionErr**: Number of TSM level 3 compactions that have
+    failed due to errors.
+  - **tsmLevel3CompactionQueue**: Current number of pending TSM level 3
+    compactions.
+  - **tsmLevel3Compactions**: Total number of TSM level 3 compactions that
+    have ever run.
+  - **tsmLevel3CompactionsActive**: Number of TSM level 3 compactions that
+    are currently running.
+  - **tsmOptimizeCompactionDuration**: Duration (wall time), in nanoseconds,
+    spent during TSM optimize compactions.
+  - **tsmOptimizeCompactionErr**: Total number of TSM optimize compactions
+    that have failed due to errors.
+  - **tsmOptimizeCompactionQueue**: Current number of pending TSM optimize
+    compactions.
+  - **tsmOptimizeCompactions**: Total number of TSM optimize compactions that
+    have ever run.
+  - **tsmOptimizeCompactionsActive**: Number of TSM optimize compactions that
+    are currently running.
+- **influxdb_tsm1_filestore**: The TSM file store metrics.
+  - **diskBytes**: Size, in bytes, of disk usage by the TSM file store.
+  - **numFiles**: Total number of files in the TSM file store.
+- **influxdb_tsm1_wal**: The TSM Write Ahead Log (WAL) metrics.
+  - **currentSegmentDiskBytes**: Current size, in bytes, of the current WAL
+    segment on disk.
+  - **oldSegmentDiskBytes**: Total size, in bytes, of old (closed) WAL
+    segments on disk.
+  - **writeErr**: Number of writes that failed due to errors.
+  - **writeOK**: Number of writes that succeeded.
+- **influxdb_write**: Metrics related to InfluxDB writes.
+  - **pointReq**: Total number of points requested to be written.
+  - **pointReqHH** _(Enterprise only)_: Total number of points received for
+    write by this node and then enqueued into hinted handoff for the
+    destination node.
+  - **pointReqLocal** _(Enterprise only)_: Total number of point requests that
+    have been attempted to be written into a shard on the same (local) node.
+  - **pointReqRemote** _(Enterprise only)_: Total number of points received for
+    write by this node but needed to be forwarded into a shard on a remote node.
+  - **pointsWrittenOK**: Number of points written to the HTTP /write endpoint
+    and persisted successfully.
+  - **req**: Total number of batches requested to be written.
+  - **subWriteDrop**: Total number of batches that failed to be sent to the
+    subscription dispatcher.
+  - **subWriteOk**: Total number of batches successfully sent to the
+    subscription dispatcher.
+  - **valuesWrittenOK**: Number of values (fields) written to the HTTP
+    /write endpoint and persisted successfully.
+  - **writeDrop**: Total number of write requests for points that have been
+    dropped due to timestamps not matching any existing retention policies.
+  - **writeError**: Total number of batches of points that were not
+    successfully written, due to a failure to write to a local or remote shard.
+  - **writeOk**: Total number of batches of points written at the requested
+    consistency level.
+  - **writePartial** _(Enterprise only)_: Total number of batches written to
+    at least one node, but did not meet the requested consistency level.
+  - **writeTimeout**: Total number of write requests that failed to complete
+    within the default write timeout duration.
+
+## Example Output
+
+```sh
+telegraf --config ~/ws/telegraf.conf --input-filter influxdb --test
+```
+
+```text
+influxdb_database,database=_internal,host=tyrion,url=http://localhost:8086/debug/vars numMeasurements=10,numSeries=29 1463590500247354636
+influxdb_httpd,bind=:8086,host=tyrion,url=http://localhost:8086/debug/vars req=7,reqActive=1,reqDurationNs=14227734 1463590500247354636
+influxdb_measurement,database=_internal,host=tyrion,measurement=database,url=http://localhost:8086/debug/vars numSeries=1 1463590500247354636
+influxdb_measurement,database=_internal,host=tyrion,measurement=httpd,url=http://localhost:8086/debug/vars numSeries=1 1463590500247354636
+influxdb_measurement,database=_internal,host=tyrion,measurement=measurement,url=http://localhost:8086/debug/vars numSeries=10 1463590500247354636
+influxdb_measurement,database=_internal,host=tyrion,measurement=runtime,url=http://localhost:8086/debug/vars numSeries=1 1463590500247354636
+influxdb_measurement,database=_internal,host=tyrion,measurement=shard,url=http://localhost:8086/debug/vars numSeries=4 1463590500247354636
+influxdb_measurement,database=_internal,host=tyrion,measurement=subscriber,url=http://localhost:8086/debug/vars numSeries=1 1463590500247354636
+influxdb_measurement,database=_internal,host=tyrion,measurement=tsm1_cache,url=http://localhost:8086/debug/vars numSeries=4 1463590500247354636
+influxdb_measurement,database=_internal,host=tyrion,measurement=tsm1_filestore,url=http://localhost:8086/debug/vars numSeries=2 1463590500247354636
+influxdb_measurement,database=_internal,host=tyrion,measurement=tsm1_wal,url=http://localhost:8086/debug/vars numSeries=4 1463590500247354636
+influxdb_measurement,database=_internal,host=tyrion,measurement=write,url=http://localhost:8086/debug/vars numSeries=1 1463590500247354636
+influxdb_memstats,host=tyrion,url=http://localhost:8086/debug/vars alloc=7642384i,buck_hash_sys=1463471i,frees=1169558i,gc_sys=653312i,gc_cpu_fraction=0.00003825652361068311,heap_alloc=7642384i,heap_idle=9912320i,heap_inuse=9125888i,heap_objects=48276i,heap_released=0i,heap_sys=19038208i,last_gc=1463590480877651621i,lookups=90i,mallocs=1217834i,mcache_inuse=4800i,mcache_sys=16384i,mspan_inuse=70920i,mspan_sys=81920i,next_gc=11679787i,num_gc=141i,other_sys=1244233i,pause_total_ns=24034027i,stack_inuse=884736i,stack_sys=884736i,sys=23382264i,total_alloc=679012200i 1463590500277918755
+influxdb_shard,database=_internal,engine=tsm1,host=tyrion,id=4,path=/Users/sparrc/.influxdb/data/_internal/monitor/4,retentionPolicy=monitor,url=http://localhost:8086/debug/vars fieldsCreate=65,seriesCreate=26,writePointsOk=7274,writeReq=280 1463590500247354636
+influxdb_subscriber,host=tyrion,url=http://localhost:8086/debug/vars pointsWritten=7274 1463590500247354636
+influxdb_tsm1_cache,database=_internal,host=tyrion,path=/Users/sparrc/.influxdb/data/_internal/monitor/1,retentionPolicy=monitor,url=http://localhost:8086/debug/vars WALCompactionTimeMs=0,cacheAgeMs=2809192,cachedBytes=0,diskBytes=0,memBytes=0,snapshotCount=0 1463590500247354636
+influxdb_tsm1_cache,database=_internal,host=tyrion,path=/Users/sparrc/.influxdb/data/_internal/monitor/2,retentionPolicy=monitor,url=http://localhost:8086/debug/vars WALCompactionTimeMs=0,cacheAgeMs=2809184,cachedBytes=0,diskBytes=0,memBytes=0,snapshotCount=0 1463590500247354636
+influxdb_tsm1_cache,database=_internal,host=tyrion,path=/Users/sparrc/.influxdb/data/_internal/monitor/3,retentionPolicy=monitor,url=http://localhost:8086/debug/vars WALCompactionTimeMs=0,cacheAgeMs=2809180,cachedBytes=0,diskBytes=0,memBytes=42368,snapshotCount=0 1463590500247354636
+influxdb_tsm1_cache,database=_internal,host=tyrion,path=/Users/sparrc/.influxdb/data/_internal/monitor/4,retentionPolicy=monitor,url=http://localhost:8086/debug/vars WALCompactionTimeMs=0,cacheAgeMs=2799155,cachedBytes=0,diskBytes=0,memBytes=331216,snapshotCount=0 1463590500247354636
+influxdb_tsm1_filestore,database=_internal,host=tyrion,path=/Users/sparrc/.influxdb/data/_internal/monitor/1,retentionPolicy=monitor,url=http://localhost:8086/debug/vars diskBytes=37892 1463590500247354636
+influxdb_tsm1_filestore,database=_internal,host=tyrion,path=/Users/sparrc/.influxdb/data/_internal/monitor/2,retentionPolicy=monitor,url=http://localhost:8086/debug/vars diskBytes=52907 1463590500247354636
+influxdb_tsm1_wal,database=_internal,host=tyrion,path=/Users/sparrc/.influxdb/wal/_internal/monitor/1,retentionPolicy=monitor,url=http://localhost:8086/debug/vars currentSegmentDiskBytes=0,oldSegmentsDiskBytes=0 1463590500247354636
+influxdb_tsm1_wal,database=_internal,host=tyrion,path=/Users/sparrc/.influxdb/wal/_internal/monitor/2,retentionPolicy=monitor,url=http://localhost:8086/debug/vars currentSegmentDiskBytes=0,oldSegmentsDiskBytes=0 1463590500247354636
+influxdb_tsm1_wal,database=_internal,host=tyrion,path=/Users/sparrc/.influxdb/wal/_internal/monitor/3,retentionPolicy=monitor,url=http://localhost:8086/debug/vars currentSegmentDiskBytes=0,oldSegmentsDiskBytes=65651 1463590500247354636
+influxdb_tsm1_wal,database=_internal,host=tyrion,path=/Users/sparrc/.influxdb/wal/_internal/monitor/4,retentionPolicy=monitor,url=http://localhost:8086/debug/vars currentSegmentDiskBytes=495687,oldSegmentsDiskBytes=0 1463590500247354636
+influxdb_write,host=tyrion,url=http://localhost:8086/debug/vars pointReq=7274,pointReqLocal=7274,req=280,subWriteOk=280,writeOk=280 1463590500247354636
+influxdb_shard,host=tyrion n_shards=4i 1463590500247354636
+```
+
+## InfluxDB-formatted endpoints
+
+The influxdb plugin can collect InfluxDB-formatted data from JSON endpoints,
+whether or not the endpoint is associated with an InfluxDB instance.
+
+With a configuration of:
+
+```toml
+[[inputs.influxdb]]
+  urls = [
+    "http://127.0.0.1:8086/debug/vars",
+    "http://192.168.2.1:8086/debug/vars"
+  ]
+```
diff --git a/content/telegraf/v1/input-plugins/influxdb_listener/_index.md b/content/telegraf/v1/input-plugins/influxdb_listener/_index.md
new file mode 100644
index 000000000..dc4b7de02
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/influxdb_listener/_index.md
@@ -0,0 +1,128 @@
+---
+description: "Telegraf plugin for collecting metrics from InfluxDB Listener"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: InfluxDB Listener
+    identifier: input-influxdb_listener
+tags: [InfluxDB Listener, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# InfluxDB Listener Input Plugin
+
+InfluxDB Listener is a service input plugin that listens for requests sent
+according to the [InfluxDB HTTP API](https://docs.influxdata.com/influxdb/v1.8/guides/write_data/).  The intent of the
+plugin is to allow Telegraf to serve as a proxy/router for the `/write`
+endpoint of the InfluxDB HTTP API.
+
+**Note:** This plugin was previously known as `http_listener`.  If you wish to
+send general metrics via HTTP, it is recommended to use the
+[`http_listener_v2`](/telegraf/v1/input-plugins/http_listener_v2/) plugin instead.
+
+The `/write` endpoint supports the `precision` query parameter, which can be
+set to one of `ns`, `u`, `ms`, `s`, `m`, or `h`.  All other parameters are
+ignored and defer to the output plugin's configuration.
+
+When chaining Telegraf instances using this plugin, CREATE DATABASE requests
+receive a 200 OK response with message body `{"results":[]}` but they are not
+relayed. The output configuration of the Telegraf instance which ultimately
+submits data to InfluxDB determines the destination database.
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Accept metrics over InfluxDB 1.x HTTP API
+[[inputs.influxdb_listener]]
+  ## Address and port to host HTTP listener on
+  service_address = ":8186"
+
+  ## maximum duration before timing out read of the request
+  read_timeout = "10s"
+  ## maximum duration before timing out write of the response
+  write_timeout = "10s"
+
+  ## Maximum allowed HTTP request body size in bytes.
+  ## 0 means to use the default of 32MiB.
+  max_body_size = 0
+
+  ## Set one or more allowed client CA certificate file names to
+  ## enable mutually authenticated TLS connections
+  tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]
+
+  ## Add service certificate and key
+  tls_cert = "/etc/telegraf/cert.pem"
+  tls_key = "/etc/telegraf/key.pem"
+
+  ## Optional tag name used to store the database name.
+  ## If the write has a database in the query string then it will be kept in this tag name.
+  ## This tag can be used in downstream outputs.
+  ## The default value of nothing means it will be off and the database will not be recorded.
+  ## If you have a tag that is the same as the one specified below, and supply a database,
+  ## the tag will be overwritten with the database supplied.
+  # database_tag = ""
+
+  ## If set the retention policy specified in the write query will be added as
+  ## the value of this tag name.
+  # retention_policy_tag = ""
+
+  ## Optional username and password to accept for HTTP basic authentication
+  ## or authentication token.
+  ## You probably want to make sure you have TLS configured above for this.
+  ## Use these options for the authentication token in the form
+  ##   Authentication: Token <basic_username>:<basic_password>
+  # basic_username = "foobar"
+  # basic_password = "barfoo"
+
+  ## Optional JWT token authentication for HTTP requests
+  ## Please see the documentation at
+  ##   https://docs.influxdata.com/influxdb/v1.8/administration/authentication_and_authorization/#authenticate-using-jwt-tokens
+  ## for further details.
+  ## Please note: Token authentication and basic authentication cannot be used
+  ##              at the same time.
+  # token_shared_secret = ""
+  # token_username = ""
+
+  ## Influx line protocol parser
+  ## 'internal' is the default. 'upstream' is a newer parser that is faster
+  ## and more memory efficient.
+  # parser_type = "internal"
+```
+
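One way to use the `database_tag` option described above: keep the database from the incoming write's query string in a tag, then route on it in a downstream InfluxDB output. The tag name and URL below are illustrative assumptions, not defaults:

```toml
[[inputs.influxdb_listener]]
  service_address = ":8186"
  ## keep the ?db= value from incoming writes in a "database" tag
  database_tag = "database"

[[outputs.influxdb]]
  urls = ["http://influxdb:8086"]
  ## route each metric to the database named by its "database" tag
  database_tag = "database"
```
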
+## Metrics
+
+Metrics are created from InfluxDB Line Protocol in the request body.
+
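As a rough sketch of the format this listener parses, here is a hypothetical helper (not part of Telegraf) that builds a line of InfluxDB Line Protocol:

```python
# Build an InfluxDB line protocol line: measurement,tags fields timestamp.
# Escaping of spaces/commas in tag and field values is omitted for brevity.
def to_line_protocol(measurement, tags, fields, ts_ns):
    tag_str = "".join(f",{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(
        f"{k}={v}" if isinstance(v, float) else f"{k}={v}i"
        for k, v in fields.items()
    )
    return f"{measurement}{tag_str} {field_str} {ts_ns}"

line = to_line_protocol(
    "cpu_load_short",
    {"host": "server01", "region": "us-west"},
    {"value": 0.64},
    1434055562000000000,
)
print(line)
```

This produces the same line used in the Troubleshooting example below.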
+## Troubleshooting
+
+**Example Query:**
+
+```sh
+curl -i -XPOST 'http://localhost:8186/write' --data-binary 'cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000'
+```
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/influxdb_v2_listener/_index.md b/content/telegraf/v1/input-plugins/influxdb_v2_listener/_index.md
new file mode 100644
index 000000000..6e1cbb88a
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/influxdb_v2_listener/_index.md
@@ -0,0 +1,115 @@
+---
+description: "Telegraf plugin for collecting metrics from InfluxDB V2 Listener"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: InfluxDB V2 Listener
+    identifier: input-influxdb_v2_listener
+tags: [InfluxDB V2 Listener, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# InfluxDB V2 Listener Input Plugin
+
+InfluxDB V2 Listener is a service input plugin that listens for requests sent
+according to the [InfluxDB HTTP API](https://docs.influxdata.com/influxdb/latest/api/).  The intent of the
+plugin is to allow Telegraf to serve as a proxy/router for the `/api/v2/write`
+endpoint of the InfluxDB HTTP API.
+
+The `/api/v2/write` endpoint supports the `precision` query parameter, which
+can be set to one of `ns`, `us`, `ms`, or `s`.  All other parameters are
+ignored and defer to the output plugin's configuration.
+
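As an illustration of what the `precision` parameter implies, this sketch (assumed scaling, outside Telegraf) converts a client timestamp at each supported precision to the nanosecond resolution used internally:

```python
# Nanoseconds per unit for each supported precision value.
PRECISION_NS = {"ns": 1, "us": 10**3, "ms": 10**6, "s": 10**9}

def to_nanoseconds(ts, precision):
    """Scale a client timestamp to nanosecond resolution."""
    return ts * PRECISION_NS[precision]

print(to_nanoseconds(1434055562, "s"))
```
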
+Telegraf minimum version: Telegraf 1.16.0
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `token` option.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Accept metrics over InfluxDB 2.x HTTP API
+[[inputs.influxdb_v2_listener]]
+  ## Address and port to host InfluxDB listener on
+  ## (Double check the port. Could be 9999 if using OSS Beta)
+  service_address = ":8086"
+
+  ## Maximum undelivered metrics before rate limit kicks in.
+  ## When the rate limit kicks in, HTTP status 429 will be returned.
+  ## 0 disables rate limiting
+  # max_undelivered_metrics = 0
+
+  ## Maximum duration before timing out read of the request
+  # read_timeout = "10s"
+  ## Maximum duration before timing out write of the response
+  # write_timeout = "10s"
+
+  ## Maximum allowed HTTP request body size in bytes.
+  ## 0 means to use the default of 32MiB.
+  # max_body_size = "32MiB"
+
+  ## Optional tag to determine the bucket.
+  ## If the write has a bucket in the query string then it will be kept in this tag name.
+  ## This tag can be used in downstream outputs.
+  ## The default value of nothing means it will be off and the database will not be recorded.
+  # bucket_tag = ""
+
+  ## Set one or more allowed client CA certificate file names to
+  ## enable mutually authenticated TLS connections
+  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]
+
+  ## Add service certificate and key
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+
+  ## Optional token to accept for HTTP authentication.
+  ## You probably want to make sure you have TLS configured above for this.
+  # token = "some-long-shared-secret-token"
+
+  ## Influx line protocol parser
+  ## 'internal' is the default. 'upstream' is a newer parser that is faster
+  ## and more memory efficient.
+  # parser_type = "internal"
+```
+
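A sketch of how `bucket_tag` can pair with a downstream InfluxDB v2 output (the tag name, URL, and organization are illustrative assumptions, not defaults):

```toml
[[inputs.influxdb_v2_listener]]
  service_address = ":8086"
  ## keep the ?bucket= value from incoming writes in a "bucket" tag
  bucket_tag = "bucket"

[[outputs.influxdb_v2]]
  urls = ["http://influxdb:8086"]
  token = "$INFLUX_TOKEN"
  organization = "example-org"
  ## route each metric to the bucket named by its "bucket" tag
  bucket_tag = "bucket"
```
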
+## Metrics
+
+Metrics are created from InfluxDB Line Protocol in the request body.
+
+## Troubleshooting
+
+**Example Query:**
+
+```sh
+curl -i -XPOST 'http://localhost:8186/api/v2/write' --data-binary 'cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000'
+```
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/intel_baseband/_index.md b/content/telegraf/v1/input-plugins/intel_baseband/_index.md
new file mode 100644
index 000000000..e5f3d457c
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/intel_baseband/_index.md
@@ -0,0 +1,138 @@
+---
+description: "Telegraf plugin for collecting metrics from Intel Baseband Accelerator"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Intel Baseband Accelerator
+    identifier: input-intel_baseband
+tags: [Intel Baseband Accelerator, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Intel Baseband Accelerator Input Plugin
+
+The Intel Baseband Accelerator Input Plugin collects metrics from both
+dedicated and integrated Intel devices that provide Wireless Baseband hardware
+acceleration. These devices play a key role in accelerating 5G and 4G
+Virtualized Radio Access Networks (vRAN) workloads, increasing the overall
+compute capacity of commercial, off-the-shelf platforms.
+
+Intel Baseband devices integrate various features critical for 5G and
+LTE (Long Term Evolution) networks, including:
+
+- Forward Error Correction (FEC) processing
+- 4G Turbo FEC processing
+- 5G Low Density Parity Check (LDPC) processing
+- a Fast Fourier Transform (FFT) block providing DFT/iDFT processing offload
+  for the 5G Sounding Reference Signal (SRS)
+
+Supported hardware:
+
+- Intel® vRAN Boost integrated accelerators:
+  - 4th Gen Intel® Xeon® Scalable processor with Intel® vRAN Boost (also known as Sapphire Rapids Edge Enhanced / SPR-EE)
+- External expansion cards connected to the PCI bus:
+  - Intel® vRAN Dedicated Accelerator ACC100 SoC (code named Mount Bryce)
+
+## Prerequisites
+
+- Intel Baseband device installed and configured.
+- Minimum Linux kernel version required is 5.7.
+- [pf-bb-config](https://github.com/intel/pf-bb-config) (version >= v23.03) installed and running.
+
+For more information regarding system configuration, please follow DPDK
+installation guides:
+
+- [Intel® vRAN Boost Poll Mode Driver (PMD)](https://doc.dpdk.org/guides/bbdevs/vrb1.html#installation)
+- [Intel® ACC100 5G/4G FEC Poll Mode Drivers](https://doc.dpdk.org/guides/bbdevs/acc100.html#installation)
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Intel Baseband Accelerator Input Plugin collects metrics from both dedicated and integrated
+# Intel devices that provide Wireless Baseband hardware acceleration.
+# This plugin ONLY supports Linux.
+[[inputs.intel_baseband]]
+  ## Path to socket exposed by pf-bb-config for CLI interaction (mandatory).
+  ## In version v23.03 of pf-bb-config the path is created according to the schema:
+  ##   "/tmp/pf_bb_config.0000\:<b>\:<d>.<f>.sock" where 0000\:<b>\:<d>.<f> is the PCI device ID.
+  socket_path = ""
+
+  ## Path to log file exposed by pf-bb-config with telemetry to read (mandatory).
+  ## In version v23.03 of pf-bb-config the path is created according to the schema:
+  ##   "/var/log/pf_bb_cfg_0000\:<b>\:<d>.<f>.log" where 0000\:<b>\:<d>.<f> is the PCI device ID.
+  log_file_path = ""
+
+  ## Specifies plugin behavior regarding unreachable socket (which might not have been initialized yet).
+  ## Available choices:
+  ##   - error: Telegraf will return an error on startup if socket is unreachable
+  ##   - ignore: Telegraf will ignore error regarding unreachable socket on both startup and gather
+  # unreachable_socket_behavior = "error"
+
+  ## Duration that defines how long the connected socket client will wait for
+  ## a response before terminating connection.
+  ## Since it's local socket access to a fast packet processing application, the timeout should
+  ## be sufficient for most users.
+  ## Setting the value to 0 disables the timeout (not recommended).
+  # socket_access_timeout = "1s"
+
+  ## Duration that defines maximum time plugin will wait for pf-bb-config to write telemetry to the log file.
+  ## Timeout may differ depending on the environment.
+  ## Must be equal or larger than 50ms.
+  # wait_for_telemetry_timeout = "1s"
+```
+
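Given the path schemas above, both mandatory paths for one device can be derived from its PCI device ID, as in this hypothetical helper (the BDF value is an example, not a real device):

```python
# Derive pf-bb-config v23.03 socket and log file paths for a PCI device ID.
def pf_bb_paths(pci_id):
    socket_path = f"/tmp/pf_bb_config.{pci_id}.sock"
    log_file_path = f"/var/log/pf_bb_cfg_{pci_id}.log"
    return socket_path, log_file_path

sock, log = pf_bb_paths("0000:f7:00.0")
print(sock)
print(log)
```
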
+## Metrics
+
+Depending on the version of the Intel Baseband device and the version of
+pf-bb-config, a subset of the following measurements may be exposed:
+
+**The following tags and fields are supported by the Intel Baseband plugin:**
+
+| Tag         | Description                                                 |
+|-------------|-------------------------------------------------------------|
+| `metric`    | Type of metric: "code_blocks", "data_bytes", "per_engine".  |
+| `operation` | Type of operation: "5GUL", "5GDL", "4GUL", "4GDL", "FFT".   |
+| `vf`        | Virtual Function number.                                    |
+| `engine`    | Engine number.                                              |
+
+| Metric name (field)  | Description                                                       |
+|----------------------|-------------------------------------------------------------------|
+| `value`              | Metric value for a given operation (non-negative integer, gauge). |
+
+## Example Output
+
+```text
+intel_baseband,host=ubuntu,metric=code_blocks,operation=5GUL,vf=0 value=54i 1685695885000000000
+intel_baseband,host=ubuntu,metric=code_blocks,operation=5GDL,vf=0 value=0i 1685695885000000000
+intel_baseband,host=ubuntu,metric=code_blocks,operation=FFT,vf=0 value=0i 1685695885000000000
+intel_baseband,host=ubuntu,metric=code_blocks,operation=5GUL,vf=1 value=0i 1685695885000000000
+intel_baseband,host=ubuntu,metric=code_blocks,operation=5GDL,vf=1 value=32i 1685695885000000000
+intel_baseband,host=ubuntu,metric=code_blocks,operation=FFT,vf=1 value=0i 1685695885000000000
+intel_baseband,host=ubuntu,metric=data_bytes,operation=5GUL,vf=0 value=18560i 1685695885000000000
+intel_baseband,host=ubuntu,metric=data_bytes,operation=5GDL,vf=0 value=0i 1685695885000000000
+intel_baseband,host=ubuntu,metric=data_bytes,operation=FFT,vf=0 value=0i 1685695885000000000
+intel_baseband,host=ubuntu,metric=data_bytes,operation=5GUL,vf=1 value=0i 1685695885000000000
+intel_baseband,host=ubuntu,metric=data_bytes,operation=5GDL,vf=1 value=86368i 1685695885000000000
+intel_baseband,host=ubuntu,metric=data_bytes,operation=FFT,vf=1 value=0i 1685695885000000000
+intel_baseband,engine=0,host=ubuntu,metric=per_engine,operation=5GUL value=72i 1685695885000000000
+intel_baseband,engine=1,host=ubuntu,metric=per_engine,operation=5GUL value=72i 1685695885000000000
+intel_baseband,engine=2,host=ubuntu,metric=per_engine,operation=5GUL value=72i 1685695885000000000
+intel_baseband,engine=3,host=ubuntu,metric=per_engine,operation=5GUL value=72i 1685695885000000000
+intel_baseband,engine=4,host=ubuntu,metric=per_engine,operation=5GUL value=72i 1685695885000000000
+intel_baseband,engine=0,host=ubuntu,metric=per_engine,operation=5GDL value=132i 1685695885000000000
+intel_baseband,engine=1,host=ubuntu,metric=per_engine,operation=5GDL value=130i 1685695885000000000
+intel_baseband,engine=0,host=ubuntu,metric=per_engine,operation=FFT value=0i 1685695885000000000
+```
diff --git a/content/telegraf/v1/input-plugins/intel_dlb/_index.md b/content/telegraf/v1/input-plugins/intel_dlb/_index.md
new file mode 100644
index 000000000..06e862f42
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/intel_dlb/_index.md
@@ -0,0 +1,119 @@
+---
+description: "Telegraf plugin for collecting metrics from Intel® Dynamic Load Balancer (Intel® DLB)"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Intel® Dynamic Load Balancer (Intel® DLB)
+    identifier: input-intel_dlb
+tags: [Intel® Dynamic Load Balancer (Intel® DLB), "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Intel® Dynamic Load Balancer (Intel® DLB) Input Plugin
+
+The `Intel DLB` plugin collects metrics exposed by applications built with
+[Data Plane Development Kit](https://www.dpdk.org/), an extensive set of
+open source libraries designed for accelerating packet processing workloads.
+More specifically, it targets applications that use Intel DLB as eventdev
+devices accessed via the bifurcated driver (which allows access from both
+kernel space and user space).
+
+## Metrics
+
+There are two sources of metrics:
+
+- DPDK-based app for detailed eventdev metrics per device, per port, and per queue
+- Sysfs entries from the kernel driver for RAS metrics
+
+## About Intel® Dynamic Load Balancer (Intel® DLB)
+
+The Intel® Dynamic Load Balancer (Intel® DLB) is a PCIe device that provides
+load-balanced, prioritized scheduling of events (that is, packets) across
+CPU cores enabling efficient core-to-core communication. It is a hardware
+accelerator located inside the latest Intel® Xeon® devices offered by Intel.
+It supports the event-driven programming model of DPDK's Event Device Library.
+This library is used in packet processing pipelines for multi-core scalability,
+dynamic load-balancing, and a variety of packet distribution and synchronization
+schemes.
+
+## About DPDK Event Device Library
+
+The DPDK Event device library is an abstraction that provides the application
+with features to schedule events. The eventdev framework introduces the event
+driven programming model. In a polling model, lcores poll ethdev ports and
+associated Rx queues directly to look for a packet. By contrast in an event
+driven model, lcores call the scheduler that selects packets for them based on
+programmer-specified criteria. The Eventdev library adds support for an event
+driven programming model, which offers applications automatic multicore scaling,
+dynamic load balancing, pipelining, packet ingress order maintenance and
+synchronization services to simplify application packet processing.
+By introducing an event driven programming model, DPDK can support
+both polling and event driven programming models for packet processing,
+and applications are free to choose whatever model (or combination of the two)
+best suits their needs.
+
+## Prerequisites
+
+- [DLB >= v7.4](https://www.intel.com/content/www/us/en/download/686372/intel-dynamic-load-balancer.html)
+- [DPDK >= 20.11.3](http://core.dpdk.org/download/)
+- Linux kernel >= 5.12
+
+> **NOTE:** Sysfs entries or the socket telemetry interface exposed by the
+> DPDK-based application may require root access. Either adjust the access
+> permissions of the sysfs entries and socket telemetry interface so that
+> Telegraf can access them, or run Telegraf with root privileges.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+## Reads metrics from DPDK using v2 telemetry interface.
+## This plugin ONLY supports Linux
+[[inputs.intel_dlb]]
+  ## Path to DPDK telemetry socket.
+  # socket_path = "/var/run/dpdk/rte/dpdk_telemetry.v2"
+
+  ## Default eventdev command list, it gathers metrics from socket by given commands.
+  ## Supported options:
+  ##   "/eventdev/dev_xstats", "/eventdev/port_xstats",
+  ##   "/eventdev/queue_xstats", "/eventdev/queue_links"
+  # eventdev_commands = ["/eventdev/dev_xstats", "/eventdev/port_xstats", "/eventdev/queue_xstats", "/eventdev/queue_links"]
+
+  ## Detect DLB devices based on device id.
+  ## Currently, only supported and tested device id is `0x2710`.
+  ## Configuration added to support forward compatibility.
+  # dlb_device_types = ["0x2710"]
+
+  ## Specifies plugin behavior regarding unreachable socket (which might not have been initialized yet).
+  ## Available choices:
+  ##   - error: Telegraf will return an error on startup if socket is unreachable
+  ##   - ignore: Telegraf will ignore error regarding unreachable socket on both startup and gather
+  # unreachable_socket_behavior = "error"
+```
+
+The default configuration gathers all metrics reported via the `/eventdev/`
+commands:
+
+- `/eventdev/dev_xstats`
+- `/eventdev/port_xstats`
+- `/eventdev/queue_xstats`
+- `/eventdev/queue_links`
+
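A DPDK v2 telemetry reply is a single JSON object keyed by the command. A minimal parsing sketch (the payload here is illustrative, echoing fields from the example output below, not captured from a live device):

```python
import json

# Shape of a v2 telemetry reply for an eventdev command.
reply = '{"/eventdev/dev_xstats": {"dev_rx_ok": 463126660, "dev_tx_ok": 694694059}}'

command, stats = next(iter(json.loads(reply).items()))
print(command, stats["dev_rx_ok"])
```
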
+## Example Output
+
+```text
+intel_dlb,command=/eventdev/dev_xstats\,0,host=controller1 dev_dir_pool_size=0i,dev_inflight_events=8192i,dev_ldb_pool_size=8192i,dev_nb_events_limit=8192i,dev_pool_size=0i,dev_rx_drop=0i,dev_rx_interrupt_wait=0i,dev_rx_ok=463126660i,dev_rx_umonitor_umwait=0i,dev_total_polls=78422946i,dev_tx_nospc_dir_hw_credits=0i,dev_tx_nospc_hw_credits=584614i,dev_tx_nospc_inflight_credits=0i,dev_tx_nospc_inflight_max=0i,dev_tx_nospc_ldb_hw_credits=584614i,dev_tx_nospc_new_event_limit=59331982i,dev_tx_ok=694694059i,dev_zero_polls=29667908i 1641996791000000000
+intel_dlb,command=/eventdev/queue_links\,0\,1,host=controller1 qid_0=128i,qid_1=128i 1641996791000000000
+intel_dlb_ras,device=pci0000:6d,host=controller1,metric_file=aer_dev_correctable BadDLLP=0i,BadTLP=0i,CorrIntErr=0i,HeaderOF=0i,NonFatalErr=0i,Rollover=0i,RxErr=0i,TOTAL_ERR_COR=0i,Timeout=0i 1641996791000000000
+intel_dlb_ras,device=pci0000:6d,host=controller1,metric_file=aer_dev_fatal ACSViol=0i,AtomicOpBlocked=0i,BlockedTLP=0i,CmpltAbrt=0i,CmpltTO=0i,DLP=0i,ECRC=0i,FCP=0i,MalfTLP=0i,PoisonTLPBlocked=0i,RxOF=0i,SDES=0i,TLP=0i,TLPBlockedErr=0i,TOTAL_ERR_FATAL=0i,UncorrIntErr=0i,Undefined=0i,UnsupReq=0i,UnxCmplt=0i 1641996791000000000
+```
diff --git a/content/telegraf/v1/input-plugins/intel_pmt/_index.md b/content/telegraf/v1/input-plugins/intel_pmt/_index.md
new file mode 100644
index 000000000..423e8f9fe
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/intel_pmt/_index.md
@@ -0,0 +1,427 @@
+---
+description: "Telegraf plugin for collecting metrics from Intel® Platform Monitoring Technology (Intel® PMT)"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Intel® Platform Monitoring Technology (Intel® PMT)
+    identifier: input-intel_pmt
+tags: [Intel® Platform Monitoring Technology (Intel® PMT), "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Intel® Platform Monitoring Technology (Intel® PMT) Input Plugin
+
+This plugin collects metrics via the Linux kernel driver for
+Intel® Platform Monitoring Technology (Intel® PMT).
+Intel® PMT is an architecture capable of enumerating
+and accessing hardware monitoring capabilities on a supported device.
+
+Support has been added to the mainline Linux kernel under the
+platform driver (`drivers/platform/x86/intel/pmt`) which exposes
+the Intel PMT telemetry space as a sysfs entry at
+`/sys/class/intel_pmt/`. Each discovered telemetry aggregator is
+exposed as a directory (with a `telem` prefix) containing a `guid`
+identifying the unique PMT space. This file is associated with a
+set of XML specification files which can be found in the
+[Intel-PMT Repository].
+
+This plugin discovers and parses the telemetry data exposed by
+the kernel driver using the specification inside the XML files.
+It reads low-level samples/counters, evaluates high-level
+samples/counters according to transformation formulas, and
+reports the collected values.
+
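As an illustration of what evaluating a transformation formula means, here is a hypothetical sketch (the formula syntax and raw value are invented for illustration, not taken from the real XML spec):

```python
# Evaluate a high-level sample from a low-level counter using a
# spec-style formula; "$sample" stands in for the raw value.
def evaluate(formula, raw):
    return eval(formula.replace("$sample", str(raw)))

# e.g. a raw residency counter scaled to a percentage
print(evaluate("$sample / 100", 7350))
```
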
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Requirements
+
+The Intel PMT Input Plugin requires an XML specification, provided as a
+filepath in the `spec` option.
+
+The provided filepath should be an absolute path to `pmt.xml` within
+local copies of XML files from the cloned [Intel-PMT Repository].
+
+## Configuration
+
+```toml @sample.conf
+# Intel Platform Monitoring Technology plugin exposes Intel PMT metrics available through the Intel PMT kernel space.
+# This plugin ONLY supports Linux.
+[[inputs.intel_pmt]]
+  ## Filepath to PMT XML within local copies of XML files from PMT repository.
+  ## The filepath should be absolute.
+  spec = "/home/telegraf/Intel-PMT/xml/pmt.xml"
+  
+  ## Enable metrics by their datatype.
+  ## See the Enabling Metrics section in README for more details.
+  ## If empty, all metrics are enabled.
+  ## When used, the alternative option samples_enabled should NOT be used.
+  # datatypes_enabled = []
+  
+  ## Enable metrics by their name.
+  ## See the Enabling Metrics section in README for more details.
+  ## If empty, all metrics are enabled.
+  ## When used, the alternative option datatypes_enabled should NOT be used.
+  # samples_enabled = []
+```
+
+## Example Configuration: C-State residency and temperature with a datatype metric filter
+
+This configuration collects only a subset of the available metrics
+by using a datatype filter:
+
+```toml
+[[inputs.intel_pmt]]
+  spec = "/home/telegraf/Intel-PMT/xml/pmt.xml"
+  datatypes_enabled = ["tbandwidth_28b","ttemperature"]
+```
+
+## Example Configuration: C-State residency and temperature with a sample metric filter
+
+This configuration collects only a subset of the available metrics
+by using a sample filter:
+
+```toml
+[[inputs.intel_pmt]]
+  spec = "/home/telegraf/Intel-PMT/xml/pmt.xml"
+  samples_enabled = ["C0Residency","C1Residency", "Cx_TEMP"]
+```
+
+## Prerequisites
+
+Minimum Linux kernel version 5.11 with:
+
+- the `intel_pmt_telemetry` module loaded (on kernels 5.11-5.14)
+- the `intel_pmt` module loaded (on kernels 5.14+)
+
+Intel PMT is exposed on a limited number of devices, e.g.
+
+- 4th Generation Intel® Xeon® Scalable Processors
+(codenamed Sapphire Rapids / SPR)
+- 6th Generation Intel® Xeon® Scalable Processors
+(codenamed Granite Rapids / GNR)
+
+PMT space is located in `/sys/class/intel_pmt` with `telem` files requiring
+root privileges to read.
+
+### If Telegraf is not running as a root user
+
+By default, the `telem` binary file requires root privileges to be read.
+
+To avoid running Telegraf as root,
+add the following capability to the Telegraf executable:
+
+```sh
+sudo setcap cap_dac_read_search+ep /usr/bin/telegraf
+```
+
+## Metrics
+
+All metrics have the following tags:
+
+- `guid` (unique id of an Intel PMT space).
+- `numa_node` (NUMA node the sample is collected from).
+- `pci_bdf` (PCI Bus:Device.Function (BDF) the sample is collected from).
+- `sample_name` (name of the gathered sample).
+- `sample_group` (name of a group to which the sample belongs).
+- `datatype_idref` (datatype to which the sample belongs).
+
+Metrics with a `sample_name` prefixed in the XMLs with `Cx_`, where `x`
+is the core number, also have the following tag:
+
+- `core` (core to which the metric relates).
+
+Metrics with a `sample_name` prefixed in the XMLs with `CHAx_`, where `x`
+is the CHA number, also have the following tag:
+
+- `cha` (Caching and Home Agent to which the metric relates).
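+
+A hypothetical helper (not the plugin's actual code, which is written in
+Go) showing how such a tag could be derived from the `Cx_`/`CHAx_` prefix
+convention described above:
+
+```python
+# Hypothetical illustration of the Cx_/CHAx_ prefix convention;
+# the plugin derives these tags internally.
+import re
+
+def extra_tag(name):
+    """Return ("core", x) or ("cha", x) for a Cx_/CHAx_ prefixed name, else None."""
+    m = re.match(r"CHA(\d+)_", name)
+    if m:
+        return ("cha", int(m.group(1)))
+    m = re.match(r"C(\d+)_", name)
+    if m:
+        return ("core", int(m.group(1)))
+    return None
+
+print(extra_tag("C10_PVP_THROTTLE_64"))  # -> ('core', 10)
+print(extra_tag("CHA3_RMID0_RDT_CMT"))   # -> ('cha', 3)
+```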
+
+## Enabling metrics
+
+By default, the plugin collects all available metrics.
+
+To limit the metrics collected by the plugin,
+two options are available for selecting metrics:
+
+- enable by datatype (groups of metrics),
+- enable by name.
+
+Note that only one of these two options can be used at a time.
+
+See the table below for available datatypes and related metrics:
+
+| Datatype                | Metric name             | Description                                                                                                                 |
+|-------------------------|-------------------------|-----------------------------------------------------------------------------------------------------------------------------|
+| `txtal_strap`           | `XTAL_FREQ`             | Clock rate of the crystal oscillator on this silicon                                                                        |
+| `tdram_energy`          | `DRAM_ENERGY_LOW`       | DRAM energy consumed by all DIMMS in all Channels (uJ)                                                                      |
+|                         | `DRAM_ENERGY_HIGH`      | DRAM energy consumed by all DIMMS in all Channels (uJ)                                                                      |
+| `tbandwidth_32b`        | `C2U_BW`                | Core to Uncore Bandwidth (per core and per uncore)                                                                          |
+|                         | `U2C_BW`                | Uncore to Core Bandwidth (per core and per uncore)                                                                          |
+|                         | `PC2_LOW`               | Time spent in the Package C-State 2 (PC2)                                                                                   |
+|                         | `PC2_HIGH`              | Time spent in the Package C-State 2 (PC2)                                                                                   |
+|                         | `PC6_LOW`               | Time spent in the Package C-State 6 (PC6)                                                                                   |
+|                         | `PC6_HIGH`              | Time spent in the Package C-State 6 (PC6)                                                                                   |
+|                         | `MEM_RD_BW`             | Memory Read Bandwidth (per channel)                                                                                         |
+|                         | `MEM_WR_BW`             | Memory Write Bandwidth (per channel)                                                                                        |
+|                         | `DDRT_READ_BW`          | DDRT Read Bandwidth (per channel)                                                                                           |
+|                         | `DDRT_WR_BW`            | DDRT Write Bandwidth (per channel)                                                                                          |
+|                         | `THRT_COUNT`            | Number of clock ticks when throttling occurred on IMC channel (per channel)                                                 |
+|                         | `PMSUM`                 | Energy accumulated by IMC channel (per channel)                                                                             |
+|                         | `CMD_CNT_CH0`           | Command count for IMC channel subchannel 0 (per channel)                                                                    |
+|                         | `CMD_CNT_CH1`           | Command count for IMC channel subchannel 1 (per channel)                                                                    |
+| `tU32.0`                | `PEM_ANY`               | Duration for which a core frequency excursion occurred due to a listed or unlisted reason                                   |
+|                         | `PEM_THERMAL`           | Duration for which a core frequency excursion occurred due to EMTTM                                                         |
+|                         | `PEM_EXT_PROCHOT`       | Duration for which a core frequency excursion occurred due to an external PROCHOT assertion                                 |
+|                         | `PEM_PBM`               | Duration for which a core frequency excursion occurred due to PBM                                                           |
+|                         | `PEM_PL1`               | Duration for which a core frequency excursion occurred due to PL1                                                           |
+|                         | `PEM_RESERVED`          | PEM Reserved Counter                                                                                                        |
+|                         | `PEM_PL2`               | Duration for which a core frequency excursion occurred due to PL2                                                           |
+|                         | `PEM_PMAX`              | Duration for which a core frequency excursion occurred due to PMAX                                                          |
+| `tbandwidth_28b`        | `C0Residency`           | Core C0 Residency (per core)                                                                                                |
+|                         | `C1Residency`           | Core C1 Residency (per core)                                                                                                |
+| `tratio`                | `FET`                   | Current Frequency Excursion Threshold. Ratio of the core frequency.                                                         |
+| `tbandwidth_24b`        | `UFS_MAX_RING_TRAFFIC`  | IO Bandwidth for DMI or PCIE port (per port)                                                                                |
+| `ttemperature`          | `TEMP`                  | Current temperature of a core (per core)                                                                                    |
+| `tU8.0`                 | `VERSION`               | For SPR, it's 0. New feature versions will uprev this.                                                                      |
+| `tebb_energy`           | `FIVR_HBM_ENERGY`       | FIVR HBM Energy in uJ (per HBM)                                                                                             |
+| `tBOOL`                 | `OOB_PEM_ENABLE`        | 0x0 (Default)=Inband interface for PEM is enabled. 0x1=OOB interface for PEM is enabled.                                    |
+|                         | `ENABLE_PEM`            | 0 (Default): Disable PEM. 1: Enable PEM                                                                                     |
+|                         | `ANY`                   | Set if a core frequency excursion occurs due to a listed or unlisted reason                                                 |
+|                         | `THERMAL`               | Set if a core frequency excursion occurs due to any thermal event in core/uncore                                            |
+|                         | `EXT_PROCHOT`           | Set if a core frequency excursion occurs due to external PROCHOT assertion                                                  |
+|                         | `PBM`                   | Set if a core frequency excursion occurs due to a power limit (socket RAPL and/or platform RAPL)                            |
+|                         | `PL1`                   | Set if a core frequency excursion occurs due to PL1 input from any interfaces                                               |
+|                         | `PL2`                   | Set if a core frequency excursion occurs due to PL2 input from any interfaces                                               |
+|                         | `PMAX`                  | Set if a core frequency excursion occurs due to PMAX                                                                        |
+| `ttsc`                  | `ART`                   | TSC Delta HBM (per HBM)                                                                                                     |
+| `tproduct_id`           | `PRODUCT_ID`            | Product ID                                                                                                                  |
+| `tstring`               | `LOCAL_REVISION`        | Local Revision ID for this product                                                                                          |
+|                         | `RECORD_TYPE`           | Record Type                                                                                                                 |
+| `tcore_state`           | `EN`                    | Core x is enabled (per core)                                                                                                |
+| `thist_counter`         | `FREQ_HIST_R0`          | Frequency histogram range 0 (core in C6) counter (per core)                                                                 |
+|                         | `FREQ_HIST_R1`          | Frequency histogram range 1 (16.67-800 MHz) counter (per core)                                                              |
+|                         | `FREQ_HIST_R2`          | Frequency histogram range 2 (801-1200 MHz) counter (per core)                                                               |
+|                         | `FREQ_HIST_R3`          | Frequency histogram range 3 (1201-1600 MHz) counter (per core)                                                              |
+|                         | `FREQ_HIST_R4`          | Frequency histogram range 4 (1601-2000 MHz) counter (per core)                                                              |
+|                         | `FREQ_HIST_R5`          | Frequency histogram range 5 (2001-2400 MHz) counter (per core)                                                              |
+|                         | `FREQ_HIST_R6`          | Frequency histogram range 6 (2401-2800 MHz) counter (per core)                                                              |
+|                         | `FREQ_HIST_R7`          | Frequency histogram range 7 (2801-3200 MHz) counter (per core)                                                              |
+|                         | `FREQ_HIST_R8`          | Frequency histogram range 8 (3201-3600 MHz) counter (per core)                                                              |
+|                         | `FREQ_HIST_R9`          | Frequency histogram range 9 (3601-4000 MHz) counter (per core)                                                              |
+|                         | `FREQ_HIST_R10`         | Frequency histogram range 10 (4001-4400 MHz) counter (per core)                                                             |
+|                         | `FREQ_HIST_R11`         | Frequency histogram range 11 (greater than 4400 MHz) counter (per core)                                                     |
+|                         | `VOLT_HIST_R0`          | Voltage histogram range 0 (less than 602 mV) counter (per core)                                                             |
+|                         | `VOLT_HIST_R1`          | Voltage histogram range 1 (602.5-657 mV) counter (per core)                                                                 |
+|                         | `VOLT_HIST_R2`          | Voltage histogram range 2 (657.5-712 mV) counter (per core)                                                                 |
+|                         | `VOLT_HIST_R3`          | Voltage histogram range 3 (712.5-767 mV) counter (per core)                                                                 |
+|                         | `VOLT_HIST_R4`          | Voltage histogram range 4 (767.5-822 mV) counter (per core)                                                                 |
+|                         | `VOLT_HIST_R5`          | Voltage histogram range 5 (822.5-877 mV) counter (per core)                                                                 |
+|                         | `VOLT_HIST_R6`          | Voltage histogram range 6 (877.5-932 mV) counter (per core)                                                                 |
+|                         | `VOLT_HIST_R7`          | Voltage histogram range 7 (932.5-987 mV) counter (per core)                                                                 |
+|                         | `VOLT_HIST_R8`          | Voltage histogram range 8 (987.5-1042 mV) counter (per core)                                                                |
+|                         | `VOLT_HIST_R9`          | Voltage histogram range 9 (1042.5-1097 mV) counter (per core)                                                               |
+|                         | `VOLT_HIST_R10`         | Voltage histogram range 10 (1097.5-1152 mV) counter (per core)                                                              |
+|                         | `VOLT_HIST_R11`         | Voltage histogram range 11 (greater than 1152 mV) counter (per core)                                                        |
+|                         | `TEMP_HIST_R0`          | Temperature histogram range 0 (less than 20°C) counter                                                                      |
+|                         | `TEMP_HIST_R1`          | Temperature histogram range 1 (20.5-27.5°C) counter                                                                         |
+|                         | `TEMP_HIST_R2`          | Temperature histogram range 2 (28-35°C) counter                                                                             |
+|                         | `TEMP_HIST_R3`          | Temperature histogram range 3 (35.5-42.5°C) counter                                                                         |
+|                         | `TEMP_HIST_R4`          | Temperature histogram range 4 (43-50°C) counter                                                                             |
+|                         | `TEMP_HIST_R5`          | Temperature histogram range 5 (50.5-57.5°C) counter                                                                         |
+|                         | `TEMP_HIST_R6`          | Temperature histogram range 6 (58-65°C) counter                                                                             |
+|                         | `TEMP_HIST_R7`          | Temperature histogram range 7 (65.5-72.5°C) counter                                                                         |
+|                         | `TEMP_HIST_R8`          | Temperature histogram range 8 (73-80°C) counter                                                                             |
+|                         | `TEMP_HIST_R9`          | Temperature histogram range 9 (80.5-87.5°C) counter                                                                         |
+|                         | `TEMP_HIST_R10`         | Temperature histogram range 10 (88-95°C) counter                                                                            |
+|                         | `TEMP_HIST_R11`         | Temperature histogram range 11 (greater than 95°C) counter                                                                  |
+| `tpvp_throttle_counter` | `PVP_THROTTLE_64`       | Counter indicating the number of times the core x was throttled in the last 64 cycles window                                |
+|                         | `PVP_THROTTLE_1024`     | Counter indicating the number of times the core x was throttled in the last 1024 cycles window                              |
+| `tpvp_level_res`        | `PVP_LEVEL_RES_128_L0`  | Counter indicating the percentage of residency during the last 2 ms measurement for level 0 of this type of CPU instruction |
+|                         | `PVP_LEVEL_RES_128_L1`  | Counter indicating the percentage of residency during the last 2 ms measurement for level 1 of this type of CPU instruction |
+|                         | `PVP_LEVEL_RES_128_L2`  | Counter indicating the percentage of residency during the last 2 ms measurement for level 2 of this type of CPU instruction |
+|                         | `PVP_LEVEL_RES_128_L3`  | Counter indicating the percentage of residency during the last 2 ms measurement for level 3 of this type of CPU instruction |
+|                         | `PVP_LEVEL_RES_256_L0`  | Counter indicating the percentage of residency during the last 2 ms measurement for level 0 of AVX256 CPU instructions      |
+|                         | `PVP_LEVEL_RES_256_L1`  | Counter indicating the percentage of residency during the last 2 ms measurement for level 1 of AVX256 CPU instructions      |
+|                         | `PVP_LEVEL_RES_256_L2`  | Counter indicating the percentage of residency during the last 2 ms measurement for level 2 of AVX256 CPU instructions      |
+|                         | `PVP_LEVEL_RES_256_L3`  | Counter indicating the percentage of residency during the last 2 ms measurement for level 3 of AVX256 CPU instructions      |
+|                         | `PVP_LEVEL_RES_512_L0`  | Counter indicating the percentage of residency during the last 2 ms measurement for level 0 of AVX512 CPU instructions      |
+|                         | `PVP_LEVEL_RES_512_L1`  | Counter indicating the percentage of residency during the last 2 ms measurement for level 1 of AVX512 CPU instructions      |
+|                         | `PVP_LEVEL_RES_512_L2`  | Counter indicating the percentage of residency during the last 2 ms measurement for level 2 of AVX512 CPU instructions      |
+|                         | `PVP_LEVEL_RES_512_L3`  | Counter indicating the percentage of residency during the last 2 ms measurement for level 3 of AVX512 CPU instructions      |
+|                         | `PVP_LEVEL_RES_TMUL_L0` | Counter indicating the percentage of residency during the last 2 ms measurement for level 0 of TMUL CPU instructions        |
+|                         | `PVP_LEVEL_RES_TMUL_L1` | Counter indicating the percentage of residency during the last 2 ms measurement for level 1 of TMUL CPU instructions        |
+|                         | `PVP_LEVEL_RES_TMUL_L2` | Counter indicating the percentage of residency during the last 2 ms measurement for level 2 of TMUL CPU instructions        |
+|                         | `PVP_LEVEL_RES_TMUL_L3` | Counter indicating the percentage of residency during the last 2 ms measurement for level 3 of TMUL CPU instructions        |
+| `ttsc_timer`            | `TSC_TIMER`             | OOBMSM TSC (Time Stamp Counter) value                                                                                       |
+| `tnum_en_cha`           | `NUM_EN_CHA`            | Number of enabled CHAs                                                                                                      |
+| `trmid_usage_counter`   | `RMID0_RDT_CMT`         | CHA x RMID 0 LLC cache line usage counter (per CHA)                                                                         |
+|                         | `RMID1_RDT_CMT`         | CHA x RMID 1 LLC cache line usage counter (per CHA)                                                                         |
+|                         | `RMID2_RDT_CMT`         | CHA x RMID 2 LLC cache line usage counter (per CHA)                                                                         |
+|                         | `RMID3_RDT_CMT`         | CHA x RMID 3 LLC cache line usage counter (per CHA)                                                                         |
+|                         | `RMID4_RDT_CMT`         | CHA x RMID 4 LLC cache line usage counter (per CHA)                                                                         |
+|                         | `RMID5_RDT_CMT`         | CHA x RMID 5 LLC cache line usage counter (per CHA)                                                                         |
+|                         | `RMID6_RDT_CMT`         | CHA x RMID 6 LLC cache line usage counter (per CHA)                                                                         |
+|                         | `RMID7_RDT_CMT`         | CHA x RMID 7 LLC cache line usage counter (per CHA)                                                                         |
+|                         | `RMID0_RDT_MBM_TOTAL`   | CHA x RMID 0 total memory transactions counter (per CHA)                                                                    |
+|                         | `RMID1_RDT_MBM_TOTAL`   | CHA x RMID 1 total memory transactions counter (per CHA)                                                                    |
+|                         | `RMID2_RDT_MBM_TOTAL`   | CHA x RMID 2 total memory transactions counter (per CHA)                                                                    |
+|                         | `RMID3_RDT_MBM_TOTAL`   | CHA x RMID 3 total memory transactions counter (per CHA)                                                                    |
+|                         | `RMID4_RDT_MBM_TOTAL`   | CHA x RMID 4 total memory transactions counter (per CHA)                                                                    |
+|                         | `RMID5_RDT_MBM_TOTAL`   | CHA x RMID 5 total memory transactions counter (per CHA)                                                                    |
+|                         | `RMID6_RDT_MBM_TOTAL`   | CHA x RMID 6 total memory transactions counter (per CHA)                                                                    |
+|                         | `RMID7_RDT_MBM_TOTAL`   | CHA x RMID 7 total memory transactions counter (per CHA)                                                                    |
+|                         | `RMID0_RDT_MBM_LOCAL`   | CHA x RMID 0 local memory transactions counter (per CHA)                                                                    |
+|                         | `RMID1_RDT_MBM_LOCAL`   | CHA x RMID 1 local memory transactions counter (per CHA)                                                                    |
+|                         | `RMID2_RDT_MBM_LOCAL`   | CHA x RMID 2 local memory transactions counter (per CHA)                                                                    |
+|                         | `RMID3_RDT_MBM_LOCAL`   | CHA x RMID 3 local memory transactions counter (per CHA)                                                                    |
+|                         | `RMID4_RDT_MBM_LOCAL`   | CHA x RMID 4 local memory transactions counter (per CHA)                                                                    |
+|                         | `RMID5_RDT_MBM_LOCAL`   | CHA x RMID 5 local memory transactions counter (per CHA)                                                                    |
+|                         | `RMID6_RDT_MBM_LOCAL`   | CHA x RMID 6 local memory transactions counter (per CHA)                                                                    |
+|                         | `RMID7_RDT_MBM_LOCAL`   | CHA x RMID 7 local memory transactions counter (per CHA)                                                                    |
+| `ttw_unit`              | `TW`                    | Time window. Valid TW range is 0 to 17. The unit is calculated as `2.3 * 2^TW` ms (e.g. `2.3 * 2^17` ms = ~302 seconds).    |
+| `tcore_stress_level`    | `STRESS_LEVEL`          | Accumulating counter indicating relative stress level for a core (per core)                                                 |
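+
+As a worked example of the `ttw_unit` row above, the time window unit is
+`2.3 * 2^TW` ms for `TW` in the valid range 0 to 17. A small arithmetic
+check (not plugin code):
+
+```python
+# Worked example of the ttw_unit formula from the table above:
+# the time window unit is 2.3 * 2^TW milliseconds, TW in 0..17.
+def time_window_ms(tw: int) -> float:
+    if not 0 <= tw <= 17:
+        raise ValueError("valid TW range is 0 to 17")
+    return 2.3 * 2**tw
+
+print(time_window_ms(0))                  # smallest unit: 2.3 ms
+print(round(time_window_ms(17) / 1000, 1))  # largest: ~302 seconds
+```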
+
+## Example Output
+
+Example output with `tpvp_throttle_counter` as a datatype metric filter:
+
+```text
+intel_pmt,core=0,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C0_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1886465i 1693766334000000000
+intel_pmt,core=1,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C1_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=2,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C2_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=3,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C3_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=4,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C4_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1357578i 1693766334000000000
+intel_pmt,core=5,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C5_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=6,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C6_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=2024801i 1693766334000000000
+intel_pmt,core=7,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C7_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=8,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C8_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1390741i 1693766334000000000
+intel_pmt,core=9,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C9_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=10,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C10_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1536483i 1693766334000000000
+intel_pmt,core=11,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C11_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=12,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C12_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=13,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C13_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=14,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C14_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1604964i 1693766334000000000
+intel_pmt,core=15,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C15_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1168673i 1693766334000000000
+intel_pmt,core=16,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C16_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=17,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C17_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=18,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C18_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1276588i 1693766334000000000
+intel_pmt,core=19,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C19_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1139005i 1693766334000000000
+intel_pmt,core=20,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C20_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=21,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C21_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=22,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C22_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=970698i 1693766334000000000
+intel_pmt,core=23,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C23_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=24,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C24_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=25,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C25_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1178462i 1693766334000000000
+intel_pmt,core=26,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C26_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=27,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C27_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=2093384i 1693766334000000000
+intel_pmt,core=28,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C28_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=29,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C29_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=30,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C30_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=31,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C31_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=32,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C32_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=2825174i 1693766334000000000
+intel_pmt,core=33,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C33_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=2592279i 1693766334000000000
+intel_pmt,core=34,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C34_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=35,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C35_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=36,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C36_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1960662i 1693766334000000000
+intel_pmt,core=37,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C37_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1821914i 1693766334000000000
+intel_pmt,core=38,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C38_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=39,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C39_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=40,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C40_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=41,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C41_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=2654651i 1693766334000000000
+intel_pmt,core=42,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C42_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=2230984i 1693766334000000000
+intel_pmt,core=43,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C43_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=44,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C44_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=45,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C45_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=46,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C46_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=2325520i 1693766334000000000
+intel_pmt,core=47,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C47_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=48,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C48_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=49,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C49_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=50,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C50_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=51,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C51_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=52,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C52_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1468880i 1693766334000000000
+intel_pmt,core=53,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C53_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=2151919i 1693766334000000000
+intel_pmt,core=54,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C54_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=55,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C55_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=2065994i 1693766334000000000
+intel_pmt,core=56,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C56_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=57,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C57_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=58,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C58_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1553691i 1693766334000000000
+intel_pmt,core=59,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C59_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=1624177i 1693766334000000000
+intel_pmt,core=60,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C60_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=61,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C61_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=62,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C62_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=63,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C63_PVP_THROTTLE_64,sample_name=PVP_THROTTLE_64 value=0i 1693766334000000000
+intel_pmt,core=0,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C0_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=12977949i 1693766334000000000
+intel_pmt,core=1,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C1_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=2,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C2_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=3,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C3_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=4,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C4_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=7180524i 1693766334000000000
+intel_pmt,core=5,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C5_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=6,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C6_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=8667263i 1693766334000000000
+intel_pmt,core=7,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C7_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=8,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C8_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=5945851i 1693766334000000000
+intel_pmt,core=9,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C9_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=10,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C10_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=6669829i 1693766334000000000
+intel_pmt,core=11,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C11_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=12,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C12_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=13,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C13_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=14,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C14_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=6579832i 1693766334000000000
+intel_pmt,core=15,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C15_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=6101856i 1693766334000000000
+intel_pmt,core=16,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C16_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=17,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C17_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=18,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C18_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=7796183i 1693766334000000000
+intel_pmt,core=19,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C19_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=6849098i 1693766334000000000
+intel_pmt,core=20,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C20_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=21,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C21_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=22,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C22_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=12378942i 1693766334000000000
+intel_pmt,core=23,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C23_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=24,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C24_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=25,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C25_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=8299231i 1693766334000000000
+intel_pmt,core=26,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C26_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=27,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C27_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=7986390i 1693766334000000000
+intel_pmt,core=28,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C28_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=29,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C29_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=30,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C30_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=31,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C31_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=32,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C32_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=9876325i 1693766334000000000
+intel_pmt,core=33,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C33_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=8547471i 1693766334000000000
+intel_pmt,core=34,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C34_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=35,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C35_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=36,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C36_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=9231744i 1693766334000000000
+intel_pmt,core=37,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C37_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=8133031i 1693766334000000000
+intel_pmt,core=38,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C38_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=39,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C39_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=40,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C40_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=41,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C41_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=6136417i 1693766334000000000
+intel_pmt,core=42,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C42_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=6091019i 1693766334000000000
+intel_pmt,core=43,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C43_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=44,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C44_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=45,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C45_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=46,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C46_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=5804639i 1693766334000000000
+intel_pmt,core=47,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C47_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=48,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C48_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=49,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C49_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=50,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C50_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=51,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C51_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=52,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C52_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=5738491i 1693766334000000000
+intel_pmt,core=53,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C53_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=6058504i 1693766334000000000
+intel_pmt,core=54,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C54_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=55,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C55_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=5987093i 1693766334000000000
+intel_pmt,core=56,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C56_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=57,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C57_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=58,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C58_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=10384909i 1693766334000000000
+intel_pmt,core=59,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C59_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=7305786i 1693766334000000000
+intel_pmt,core=60,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C60_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=61,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C61_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=62,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C62_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+intel_pmt,core=63,datatype_idref=tpvp_throttle_counter,guid=0x87b6fef1,pmt,numa_node=0,pci_bdf=0000:e7:03.1,sample_group=C63_PVP_THROTTLE_1024,sample_name=PVP_THROTTLE_1024 value=0i 1693766334000000000
+```
+
+[Intel-PMT repository]: https://github.com/intel/Intel-PMT
diff --git a/content/telegraf/v1/input-plugins/intel_pmu/_index.md b/content/telegraf/v1/input-plugins/intel_pmu/_index.md
new file mode 100644
index 000000000..8f9bf5ee8
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/intel_pmu/_index.md
@@ -0,0 +1,265 @@
+---
+description: "Telegraf plugin for collecting metrics from Intel Performance Monitoring Unit"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Intel Performance Monitoring Unit
+    identifier: input-intel_pmu
+tags: [Intel Performance Monitoring Unit, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Intel Performance Monitoring Unit Plugin
+
+This input plugin exposes Intel PMU (Performance Monitoring Unit) metrics
+available through [Linux Perf](https://perf.wiki.kernel.org/index.php/Main_Page)
+subsystem.
+
+PMU metrics give insight into the performance and health of the IA processor's
+internal components, including core and uncore units. With core counts
+increasing and processor topologies getting more complex, insight into these
+metrics is vital to ensure the best CPU performance and utilization.
+
+Performance counters are CPU hardware registers that count hardware events such
+as instructions executed, cache-misses suffered, or branches mispredicted. They
+form a basis for profiling applications to trace dynamic control flow and
+identify hotspots.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering, etc.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Intel Performance Monitoring Unit plugin exposes Intel PMU metrics available through Linux Perf subsystem
+# This plugin ONLY supports Linux on amd64
+[[inputs.intel_pmu]]
+  ## List of filesystem locations of JSON files that contain PMU event definitions.
+  event_definitions = ["/var/cache/pmu/GenuineIntel-6-55-4-core.json", "/var/cache/pmu/GenuineIntel-6-55-4-uncore.json"]
+
+  ## List of core events measurement entities. There can be more than one core_events sections.
+  [[inputs.intel_pmu.core_events]]
+    ## List of events to be counted. Event names shall match names from event_definitions files.
+    ## Single entry can contain name of the event (case insensitive) augmented with config options and perf modifiers.
+    ## If absent, all core events from provided event_definitions are counted skipping unresolvable ones.
+    events = ["INST_RETIRED.ANY", "CPU_CLK_UNHALTED.THREAD_ANY:config1=0x4043200000000k"]
+
+    ## Limits the counting of events to core numbers specified.
+    ## If absent, events are counted on all cores.
+    ## Single "0", multiple "0,1,2" and range "0-2" notation is supported for each array element.
+    ##   example: cores = ["0,2", "4", "12-16"]
+    cores = ["0"]
+
+    ## Indicator that plugin shall attempt to run core_events.events as a single perf group.
+    ## If absent or set to false, each event is counted individually. Defaults to false.
+    ## This limits the number of events that can be measured to a maximum of available hardware counters per core.
+    ## Could vary depending on type of event, use of fixed counters.
+    # perf_group = false
+
+    ## Optionally set a custom tag value that will be added to every measurement within this events group.
+    ## Can be applied to any group of events, unrelated to perf_group setting.
+    # events_tag = ""
+
+  ## List of uncore event measurement entities. There can be more than one uncore_events sections.
+  [[inputs.intel_pmu.uncore_events]]
+    ## List of events to be counted. Event names shall match names from event_definitions files.
+    ## Single entry can contain name of the event (case insensitive) augmented with config options and perf modifiers.
+    ## If absent, all uncore events from provided event_definitions are counted skipping unresolvable ones.
+    events = ["UNC_CHA_CLOCKTICKS", "UNC_CHA_TOR_OCCUPANCY.IA_MISS"]
+
+    ## Limits the counting of events to specified sockets.
+    ## If absent, events are counted on all sockets.
+    ## Single "0", multiple "0,1" and range "0-1" notation is supported for each array element.
+    ##   example: sockets = ["0-2"]
+    sockets = ["0"]
+
+    ## Indicator that plugin shall provide an aggregated value for multiple units of same type distributed in an uncore.
+    ## If absent or set to false, events for each unit are exposed as separate metric. Defaults to false.
+    # aggregate_uncore_units = false
+
+    ## Optionally set a custom tag value that will be added to every measurement within this events group.
+    # events_tag = ""
+```
+
+### Modifiers
+
+Perf modifiers adjust event-specific perf attributes to fulfill particular
+requirements. Details about the perf attribute structure can be found in the
+[perf_event_open](https://man7.org/linux/man-pages/man2/perf_event_open.2.html)
+syscall manual.
+
+General schema of configuration's `events` list element:
+
+```regexp
+EVENT_NAME(:(config|config1|config2)=(0x[0-9a-f]{1,16})(p|k|u|h|H|I|G|D))*
+```
+
+where:
+
+| Modifier | Underlying attribute            | Description                 |
+|----------|---------------------------------|-----------------------------|
+| config   | perf_event_attr.config          | type-specific configuration |
+| config1  | perf_event_attr.config1         | extension of config         |
+| config2  | perf_event_attr.config2         | extension of config1        |
+| p        | perf_event_attr.precise_ip      | skid constraint             |
+| k        | perf_event_attr.exclude_user    | don't count user            |
+| u        | perf_event_attr.exclude_kernel  | don't count kernel          |
+| h / H    | perf_event_attr.exclude_guest   | don't count in guest        |
+| I        | perf_event_attr.exclude_idle    | don't count when idle       |
+| G        | perf_event_attr.exclude_hv      | don't count hypervisor      |
+| D        | perf_event_attr.pinned          | must always be on PMU       |
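+
+For example, a configuration sketch combining a perf modifier with a config
+option (it reuses event names from the sample configuration above; whether they
+resolve depends on your event_definitions files):
+
+```toml
+[[inputs.intel_pmu.core_events]]
+  events = [
+    ## user-space only: "u" sets perf_event_attr.exclude_kernel
+    "INST_RETIRED.ANY:u",
+    ## custom config1 value plus "k" (perf_event_attr.exclude_user, kernel only)
+    "CPU_CLK_UNHALTED.THREAD_ANY:config1=0x4043200000000k",
+  ]
+```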
+
+## Requirements
+
+The plugin uses the [iaevents](https://github.com/intel/iaevents) library, a
+Go package that makes accessing the Linux kernel's perf interface easier.
+
+The Intel PMU plugin is only intended for use on **Linux 64-bit** systems.
+
+Event definition JSON files for specific architectures can be found on
+[GitHub](https://github.com/intel/perfmon). A script (`event_download.py`) that
+downloads the event definitions appropriate for your system is available in
+[pmu-tools](https://github.com/andikleen/pmu-tools). Please keep these files in
+a safe place on your system.
+
+## Measuring
+
+The plugin allows measuring both core and uncore events. During plugin
+initialization the event names provided by the user are compared with the event
+definitions included in the JSON files and translated to perf attributes. Next,
+those events are activated to start counting. During every Telegraf interval,
+the plugin reads the proper measurement for each previously activated event.
+
+Each single core event may be counted separately on every available CPU
+core. In contrast, uncore events can be placed in many PMUs within a specified
+CPU package. The plugin allows choosing the core IDs (core events) or socket
+IDs (uncore events) on which the counting should be executed. Uncore events are
+separately activated on all of a socket's PMUs, and can be exposed as separate
+measurements or summed up into one measurement.
+
+Obtained measurements are stored as three values: **Raw**, **Enabled** and
+**Running**. Raw is the total count of the event. Enabled and running are the
+total time the event was enabled and running. Normally these are the same. If
+more events are started than there are available counter slots on the PMU, then
+multiplexing occurs and events run only part of the time. Therefore, the plugin
+provides a fourth value called **scaled**, which is calculated using the
+formula: `raw * enabled / running`.
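+
+The multiplexing correction above can be sketched as follows (hypothetical
+counter values, not taken from a real PMU):
+
+```python
+# Sketch of the "scaled" formula described above.
+# raw: events counted while the event actually occupied a counter slot.
+# enabled/running: total time the event was enabled vs. actually counting.
+def scale(raw: int, enabled: int, running: int) -> int:
+    """Approximate the count as if the event had run the whole time."""
+    if running == 0:
+        return 0  # the event never ran; nothing to extrapolate
+    return raw * enabled // running
+
+# Multiplexed: the event ran for only half of the enabled time.
+print(scale(raw=1_000_000, enabled=2_000_000_000, running=1_000_000_000))  # 2000000
+# No multiplexing: scaled equals raw.
+print(scale(raw=1_000_000, enabled=2_000_000_000, running=2_000_000_000))  # 1000000
+```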
+
+Events are measured for all running processes.
+
+### Core event groups
+
+Perf allows assembling events as a group. A perf event group is scheduled onto
+the CPU as a unit: it will be put onto the CPU only if all of the events in the
+group can be put onto the CPU.  This means that the values of the member events
+can be meaningfully compared — added, divided (to get ratios), and so on — with
+each other, since they have counted events for the same set of executed
+instructions [(source)](https://man7.org/linux/man-pages/man2/perf_event_open.2.html).
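+
+For example, a sketch of a grouped configuration whose two counts can be
+meaningfully divided into an IPC ratio (the event names assume they are present
+in your event_definitions files):
+
+```toml
+[[inputs.intel_pmu.core_events]]
+  ## Scheduled as one perf group, so both counts cover the same instructions.
+  events = ["INST_RETIRED.ANY", "CPU_CLK_UNHALTED.THREAD"]
+  cores = ["0-3"]
+  perf_group = true
+  events_tag = "ipc_group"
+```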
+
+> **NOTE:** Be aware that the plugin will throw an error when trying to create
+> a core event group whose size exceeds the number of available core PMU
+> counters. The error message from the perf syscall will be shown as "invalid
+> argument". If you want to check how many counters your Intel CPU's PMU
+> supports, you can use the [cpuid](https://linux.die.net/man/1/cpuid) command.
+
+### Note about file descriptors
+
+The plugin opens a number of file descriptors that depends on the number of
+monitored CPUs and the number of monitored counters. It can easily exceed the
+default per-process limit of allowed file descriptors. Depending on the
+configuration, it might be necessary to increase that limit, for example by
+using the `ulimit -n` command.
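+
+As a rough sketch of sizing the limit (assuming approximately one descriptor
+per monitored event per monitored core, which is an estimate rather than the
+plugin's exact accounting):
+
+```python
+import resource
+
+# Rough estimate: one file descriptor per (event, core) pair.
+def estimated_fds(n_events: int, n_cores: int) -> int:
+    return n_events * n_cores
+
+soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
+needed = estimated_fds(n_events=40, n_cores=64)  # hypothetical workload
+print(f"need ~{needed} descriptors, soft limit is {soft}")
+if needed > soft:
+    print("raise the limit, e.g. with `ulimit -n` before starting Telegraf")
+```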
+
+## Metrics
+
+On each Telegraf interval, the Intel PMU plugin transmits the following data:
+
+### Metric Fields
+
+| Field   | Type   | Description                                                                                                                                   |
+|---------|--------|-----------------------------------------------------------------------------------------------------------------------------------------------|
+| enabled | uint64 | time counter, contains time the associated perf event was enabled                                                                             |
+| running | uint64 | time counter, contains time the event was actually counted                                                                                    |
+| raw     | uint64 | value counter, contains event count value during the time the event was actually counted                                                      |
+| scaled  | uint64 | value counter, contains approximated value of counter if the event was continuously counted, using scaled = raw * (enabled / running) formula |
+
+### Metric Tags - common
+
+| Tag   | Description                  |
+|-------|------------------------------|
+| host  | hostname as read by Telegraf |
+| event | name of the event            |
+
+### Metric Tags - core events
+
+| Tag        | Description                                                                                        |
+|------------|----------------------------------------------------------------------------------------------------|
+| cpu        | CPU ID as identified by the Linux OS (the logical CPU ID when hyper-threading is on, or the physical CPU ID when it is off) |
+| events_tag | (optional) tag as defined in "intel_pmu.core_events" configuration element                           |
+
+### Metric Tags - uncore events
+
+| Tag       | Description                                                                                                                                                                                |
+|-----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| socket    | socket number as identified by the Linux OS (physical_package_id)                                                                                                                    |
+| unit_type | category of the event-capable PMU that the event was counted for, e.g. cbox for uncore_cbox_1, r2pcie for uncore_r2pcie etc.                                                         |
+| unit      | name of the event-capable PMU that the event was counted for, as listed in /sys/bus/event_source/devices/, e.g. uncore_cbox_1, uncore_imc_1 etc. Present for non-aggregated uncore events only |
+| events_tag | (optional) tag as defined in the "intel_pmu.uncore_events" configuration element                                                                                                    |
+
+## Example Output
+
+Event group:
+
+```text
+pmu_metric,cpu=0,event=CPU_CLK_THREAD_UNHALTED.REF_XCLK,events_tag=unhalted,host=xyz enabled=2871237051i,running=2871237051i,raw=1171711i,scaled=1171711i 1621254096000000000
+pmu_metric,cpu=0,event=CPU_CLK_UNHALTED.THREAD_P_ANY,events_tag=unhalted,host=xyz enabled=2871240713i,running=2871240713i,raw=72340716i,scaled=72340716i 1621254096000000000
+pmu_metric,cpu=1,event=CPU_CLK_THREAD_UNHALTED.REF_XCLK,events_tag=unhalted,host=xyz enabled=2871118275i,running=2871118275i,raw=1646752i,scaled=1646752i 1621254096000000000
+pmu_metric,cpu=1,event=CPU_CLK_UNHALTED.THREAD_P_ANY,events_tag=unhalted,host=xyz raw=108802421i,scaled=108802421i,enabled=2871120107i,running=2871120107i 1621254096000000000
+pmu_metric,cpu=2,event=CPU_CLK_THREAD_UNHALTED.REF_XCLK,events_tag=unhalted,host=xyz enabled=2871143950i,running=2871143950i,raw=1316834i,scaled=1316834i 1621254096000000000
+pmu_metric,cpu=2,event=CPU_CLK_UNHALTED.THREAD_P_ANY,events_tag=unhalted,host=xyz enabled=2871074681i,running=2871074681i,raw=68728436i,scaled=68728436i 1621254096000000000
+```
+
+Uncore event not aggregated:
+
+```text
+pmu_metric,event=UNC_CBO_XSNP_RESPONSE.MISS_XCORE,host=xyz,socket=0,unit=uncore_cbox_0,unit_type=cbox enabled=2870630747i,running=2870630747i,raw=183996i,scaled=183996i 1621254096000000000
+pmu_metric,event=UNC_CBO_XSNP_RESPONSE.MISS_XCORE,host=xyz,socket=0,unit=uncore_cbox_1,unit_type=cbox enabled=2870608194i,running=2870608194i,raw=185703i,scaled=185703i 1621254096000000000
+pmu_metric,event=UNC_CBO_XSNP_RESPONSE.MISS_XCORE,host=xyz,socket=0,unit=uncore_cbox_2,unit_type=cbox enabled=2870600211i,running=2870600211i,raw=187331i,scaled=187331i 1621254096000000000
+pmu_metric,event=UNC_CBO_XSNP_RESPONSE.MISS_XCORE,host=xyz,socket=0,unit=uncore_cbox_3,unit_type=cbox enabled=2870593914i,running=2870593914i,raw=184228i,scaled=184228i 1621254096000000000
+pmu_metric,event=UNC_CBO_XSNP_RESPONSE.MISS_XCORE,host=xyz,socket=0,unit=uncore_cbox_4,unit_type=cbox scaled=195355i,enabled=2870558952i,running=2870558952i,raw=195355i 1621254096000000000
+pmu_metric,event=UNC_CBO_XSNP_RESPONSE.MISS_XCORE,host=xyz,socket=0,unit=uncore_cbox_5,unit_type=cbox enabled=2870554131i,running=2870554131i,raw=197756i,scaled=197756i 1621254096000000000
+```
+
+Uncore event aggregated:
+
+```text
+pmu_metric,event=UNC_CBO_XSNP_RESPONSE.MISS_XCORE,host=xyz,socket=0,unit_type=cbox enabled=13199712335i,running=13199712335i,raw=467485i,scaled=467485i 1621254412000000000
+```
+
+Time multiplexing:
+
+```text
+pmu_metric,cpu=0,event=CPU_CLK_THREAD_UNHALTED.REF_XCLK,host=xyz raw=2947727i,scaled=4428970i,enabled=2201071844i,running=1464935978i 1621254412000000000
+pmu_metric,cpu=0,event=CPU_CLK_UNHALTED.THREAD_P_ANY,host=xyz running=1465155618i,raw=302553190i,scaled=454511623i,enabled=2201035323i 1621254412000000000
+pmu_metric,cpu=0,event=CPU_CLK_UNHALTED.REF_XCLK,host=xyz enabled=2200994057i,running=1466812391i,raw=3177535i,scaled=4767982i 1621254412000000000
+pmu_metric,cpu=0,event=CPU_CLK_UNHALTED.REF_XCLK_ANY,host=xyz enabled=2200963921i,running=1470523496i,raw=3359272i,scaled=5027894i 1621254412000000000
+pmu_metric,cpu=0,event=L1D_PEND_MISS.PENDING_CYCLES_ANY,host=xyz enabled=2200933946i,running=1470322480i,raw=23631950i,scaled=35374798i 1621254412000000000
+pmu_metric,cpu=0,event=L1D_PEND_MISS.PENDING_CYCLES,host=xyz raw=18767833i,scaled=28169827i,enabled=2200888514i,running=1466317384i 1621254412000000000
+```
+
+[man]: https://man7.org/linux/man-pages/man2/perf_event_open.2.html
+
+## Changelog
+
+| Version | Description |
+| --- | --- |
+| v1.0.0 | Initial version |
+| v1.1.0 | Added support for the [new perfmon event format](https://github.com/intel/perfmon/issues/22). The old event format is still accepted (a warning is printed in the log) |
diff --git a/content/telegraf/v1/input-plugins/intel_powerstat/_index.md b/content/telegraf/v1/input-plugins/intel_powerstat/_index.md
new file mode 100644
index 000000000..8a4bc5f2c
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/intel_powerstat/_index.md
@@ -0,0 +1,420 @@
+---
+description: "Telegraf plugin for collecting metrics from Intel PowerStat"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Intel PowerStat
+    identifier: input-intel_powerstat
+tags: [Intel PowerStat, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Intel PowerStat Input Plugin
+
+This input plugin monitors power statistics on Intel-based platforms and
+requires a Linux-based OS.
+
+Not all CPUs are supported, please see the software and hardware dependencies
+sections below to ensure platform support.
+
+Main use cases are power saving and workload migration. Telemetry frameworks
+allow users to monitor critical platform-level metrics. The power domain is a
+key source of platform telemetry, enabling MANO Monitoring & Analytics systems
+to take preventive or corrective actions based on platform busyness, CPU
+temperature, actual CPU utilization, and power statistics.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Intel PowerStat plugin enables monitoring of platform metrics (power, TDP)
+# and per-CPU metrics like temperature, power and utilization. Please see the
+# plugin readme for details on software and hardware compatibility.
+# This plugin ONLY supports Linux.
+[[inputs.intel_powerstat]]
+  ## The user can choose which package metrics are monitored by the plugin with
+  ## the package_metrics setting:
+  ## - The default, will collect "current_power_consumption",
+  ##   "current_dram_power_consumption" and "thermal_design_power".
+  ## - Leaving this setting empty means no package metrics will be collected.
+  ## - Finally, a user can specify individual metrics to capture from the
+  ##   supported options list.
+  ## Supported options:
+  ##   "current_power_consumption", "current_dram_power_consumption",
+  ##   "thermal_design_power", "max_turbo_frequency", "uncore_frequency",
+  ##   "cpu_base_frequency"
+  # package_metrics = ["current_power_consumption", "current_dram_power_consumption", "thermal_design_power"]
+
+  ## The user can choose which per-CPU metrics are monitored by the plugin in
+  ## cpu_metrics array.
+  ## Empty or missing array means no per-CPU specific metrics will be collected
+  ## by the plugin.
+  ## Supported options:
+  ##   "cpu_frequency", "cpu_c0_state_residency", "cpu_c1_state_residency",
+  ##   "cpu_c3_state_residency", "cpu_c6_state_residency", "cpu_c7_state_residency",
+  ##   "cpu_temperature", "cpu_busy_frequency", "cpu_c0_substate_c01",
+  ##   "cpu_c0_substate_c02", "cpu_c0_substate_c0_wait"
+  # cpu_metrics = []
+
+  ## Optionally the user can choose for which CPUs metrics configured in cpu_metrics array should be gathered.
+  ## Can't be combined with excluded_cpus.
+  ## Empty or missing array means CPU metrics are gathered for all CPUs.
+  ## e.g. ["0-3", "4,5,6"] or ["1-3,4"]
+  # included_cpus = []
+
+  ## Optionally the user can choose which CPUs should be excluded from gathering metrics configured in cpu_metrics array.
+  ## Can't be combined with included_cpus.
+  ## Empty or missing array means CPU metrics are gathered for all CPUs.
+  ## e.g. ["0-3", "4,5,6"] or ["1-3,4"]
+  # excluded_cpus = []
+
+  ## Filesystem location of JSON file that contains PMU event definitions.
+  ## Mandatory only for perf-related metrics (cpu_c0_substate_c01, cpu_c0_substate_c02, cpu_c0_substate_c0_wait).
+  # event_definitions = ""
+
+  ## The user can set the timeout duration for MSR reading.
+  ## Enabling this timeout can be useful in situations where, on heavily loaded systems,
+  ## the code waits too long for a kernel response to MSR read requests.
+  ## 0 disables the timeout (default).
+  # msr_read_timeout = "0ms"
+```
+
+### Configuration notes
+
+1. The configuration of `included_cpus` or `excluded_cpus` may affect the ability to collect `package_metrics`.
+   Some of them (`max_turbo_frequency`, `cpu_base_frequency`, and `uncore_frequency`) need to read data
+   from exactly one processor for each package. If `included_cpus` or `excluded_cpus` exclude all processors
+   from the package, reading the mentioned metrics for that package will not be possible.
+2. The `event_definitions` JSON file for a specific architecture can be found in the
+   [perfmon](https://github.com/intel/perfmon) repository.
+   A script that downloads the event definition file appropriate for the current
+   environment (`event_download.py`) is available in [pmu-tools](https://github.com/andikleen/pmu-tools).
+   For the perf-related metrics supported by this plugin, an event definition JSON file
+   with events for the `core` is required.
+
+   For example: `sapphirerapids_core.json` or `GenuineIntel-6-8F-core.json`.
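+
+The `included_cpus`/`excluded_cpus` values mix range and comma syntax. As an
+illustration only (a hypothetical helper, not the plugin's actual parser), the
+accepted strings can be expanded like this:
+
+```python
+# Expand CPU selector strings such as ["0-3", "4,5,6"] or ["1-3,4"]
+# into a sorted list of CPU ids.
+def expand_cpu_specs(specs):
+    cpus = set()
+    for spec in specs:
+        for part in spec.split(","):
+            if "-" in part:
+                lo, hi = part.split("-", 1)
+                cpus.update(range(int(lo), int(hi) + 1))
+            else:
+                cpus.add(int(part))
+    return sorted(cpus)
+
+print(expand_cpu_specs(["0-3", "4,5,6"]))  # [0, 1, 2, 3, 4, 5, 6]
+print(expand_cpu_specs(["1-3,4"]))         # [1, 2, 3, 4]
+```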
+
+### Example: Configuration with no per-CPU telemetry
+
+This configuration collects the default processor package metrics;
+no per-CPU metrics are collected:
+
+```toml
+[[inputs.intel_powerstat]]
+  cpu_metrics = []
+```
+
+### Example: Configuration with no per-CPU telemetry - equivalent case
+
+This configuration collects the default processor package metrics;
+no per-CPU metrics are collected:
+
+```toml
+[[inputs.intel_powerstat]]
+```
+
+### Example: Configuration for CPU Temperature and CPU Frequency
+
+This configuration collects the default processor package metrics, plus a
+subset of per-CPU metrics (CPU temperature and CPU frequency) gathered only
+for `cpu_id = 0`:
+
+```toml
+[[inputs.intel_powerstat]]
+  cpu_metrics = ["cpu_frequency", "cpu_temperature"]
+  included_cpus = ["0"]
+```
+
+### Example: Configuration for CPU Temperature and CPU Frequency without default package metrics
+
+This configuration collects only a subset of per-CPU metrics
+(CPU temperature and CPU frequency), gathered for all CPUs
+except CPUs `1-3`:
+
+```toml
+[[inputs.intel_powerstat]]
+  package_metrics = []
+  cpu_metrics = ["cpu_frequency", "cpu_temperature"]
+  excluded_cpus = ["1-3"]
+```
+
+### Example: Configuration with all available metrics
+
+This configuration collects all processor package metrics and all per-CPU
+metrics:
+
+```toml
+[[inputs.intel_powerstat]]
+  package_metrics = ["current_power_consumption", "current_dram_power_consumption", "thermal_design_power", "max_turbo_frequency", "uncore_frequency", "cpu_base_frequency"]
+  cpu_metrics = ["cpu_frequency", "cpu_c0_state_residency", "cpu_c1_state_residency", "cpu_c3_state_residency", "cpu_c6_state_residency", "cpu_c7_state_residency", "cpu_temperature", "cpu_busy_frequency", "cpu_c0_substate_c01", "cpu_c0_substate_c02", "cpu_c0_substate_c0_wait"]
+  event_definitions = "/home/telegraf/.cache/pmu-events/GenuineIntel-6-8F-core.json"
+```
+
+## SW Dependencies
+
+### Kernel modules
+
+The plugin is mostly based on Linux kernel modules that expose specific
+metrics over `sysfs` or `devfs` interfaces. The plugin expects the following
+dependencies:
+
+- `intel-rapl` kernel module, which exposes Intel RAPL (Running Average Power
+  Limit) metrics over `sysfs` (`/sys/devices/virtual/powercap/intel-rapl`)
+- `msr` kernel module, which provides access to processor model specific
+  registers over `devfs` (`/dev/cpu/cpu%d/msr`)
+- `cpufreq` kernel module, which exposes per-CPU frequency over `sysfs`
+  (`/sys/devices/system/cpu/cpu%d/cpufreq/scaling_cur_freq`)
+- `intel-uncore-frequency` kernel module, which exposes Intel uncore frequency
+  metrics over `sysfs` (`/sys/devices/system/cpu/intel_uncore_frequency`)
+
+Make sure that the required kernel modules are loaded and running.
+Modules might have to be enabled manually with `modprobe`.
+Depending on the kernel version, run:
+
+```sh
+# rapl modules:
+## kernel < 4.0
+sudo modprobe intel_rapl
+## kernel >= 4.0
+sudo modprobe rapl
+sudo modprobe intel_rapl_common
+sudo modprobe intel_rapl_msr
+
+# msr module:
+sudo modprobe msr
+
+# cpufreq module:
+## integrated in the kernel
+
+# intel-uncore-frequency module:
+## only for kernel >= 5.6.0
+sudo modprobe intel-uncore-frequency
+```
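+
+The interfaces exposed by these modules can be sanity-checked by testing for
+the paths listed above. A minimal sketch (the `root` parameter exists only to
+make the check testable outside a real system; `cpu0` stands in for `cpu%d`):
+
+```python
+from pathlib import Path
+
+# sysfs/devfs paths expected by the plugin, relative to the filesystem root
+REQUIRED_PATHS = [
+    "sys/devices/virtual/powercap/intel-rapl",
+    "dev/cpu/cpu0/msr",
+    "sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq",
+    "sys/devices/system/cpu/intel_uncore_frequency",
+]
+
+def missing_interfaces(root="/"):
+    """Return the expected kernel-module paths that are absent under root."""
+    return [p for p in REQUIRED_PATHS if not (Path(root) / p).exists()]
+```
+
+An empty result means all four module interfaces are present; any returned path
+points at a module that likely still needs `modprobe`.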
+
+### Kernel's perf interface
+
+For perf-related metrics, when Telegraf is not running as root,
+the following capability should be added to the Telegraf executable:
+
+```sh
+sudo setcap cap_sys_admin+ep <path_to_telegraf_binary>
+```
+
+Alternatively, `/proc/sys/kernel/perf_event_paranoid` has to be set to a
+value less than 1.
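+
+The two access conditions can be summarized as a small predicate (a sketch of
+the documented rules, not plugin code):
+
+```python
+# Perf-related metrics are readable when either the Telegraf binary
+# carries cap_sys_admin, or /proc/sys/kernel/perf_event_paranoid
+# holds a value less than 1.
+def perf_metrics_accessible(paranoid_level, has_cap_sys_admin=False):
+    return has_cap_sys_admin or paranoid_level < 1
+
+print(perf_metrics_accessible(2))        # False: default paranoid level, no cap
+print(perf_metrics_accessible(0))        # True
+print(perf_metrics_accessible(2, True))  # True: capability overrides the level
+```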
+
+Depending on environment and configuration (number of monitored CPUs
+and number of enabled metrics), it might be required to increase
+the limit on the number of open file descriptors allowed.
+This can be done, for example, with the `ulimit -n` command.
+
+### Dependencies of metrics on system configuration
+
+The following table summarizes the dependencies discussed above:
+
+| Configuration option                                                                | Type              | Dependency                                     |
+|-------------------------------------------------------------------------------------|-------------------|------------------------------------------------|
+| `current_power_consumption`                                                         | `package_metrics` | `rapl` kernel module(s)                        |
+| `current_dram_power_consumption`                                                    | `package_metrics` | `rapl` kernel module(s)                        |
+| `thermal_design_power`                                                              | `package_metrics` | `rapl` kernel module(s)                        |
+| `max_turbo_frequency`                                                               | `package_metrics` | `msr` kernel module                            |
+| `uncore_frequency`                                                                  | `package_metrics` | `intel-uncore-frequency`/`msr` kernel modules* |
+| `cpu_base_frequency`                                                                | `package_metrics` | `msr` kernel module                            |
+| `cpu_frequency`                                                                     | `cpu_metrics`     | `cpufreq` kernel module                        |
+| `cpu_c0_state_residency`                                                            | `cpu_metrics`     | `msr` kernel module                            |
+| `cpu_c1_state_residency`                                                            | `cpu_metrics`     | `msr` kernel module                            |
+| `cpu_c3_state_residency`                                                            | `cpu_metrics`     | `msr` kernel module                            |
+| `cpu_c6_state_residency`                                                            | `cpu_metrics`     | `msr` kernel module                            |
+| `cpu_c7_state_residency`                                                            | `cpu_metrics`     | `msr` kernel module                            |
+| `cpu_busy_cycles` (**DEPRECATED** - superseded by `cpu_c0_state_residency_percent`) | `cpu_metrics`     | `msr` kernel module                            |
+| `cpu_temperature`                                                                   | `cpu_metrics`     | `msr` kernel module                            |
+| `cpu_busy_frequency`                                                                | `cpu_metrics`     | `msr` kernel module                            |
+| `cpu_c0_substate_c01`                                                               | `cpu_metrics`     | kernel's `perf` interface                      |
+| `cpu_c0_substate_c02`                                                               | `cpu_metrics`     | kernel's `perf` interface                      |
+| `cpu_c0_substate_c0_wait`                                                           | `cpu_metrics`     | kernel's `perf` interface                      |
+
+*For all metrics enabled by the configuration option `uncore_frequency`,
+starting from kernel version 5.18, only the `intel-uncore-frequency` module
+is required. For older kernel versions, the metric `uncore_frequency_mhz_cur`
+additionally requires the `msr` module to be enabled.
+
+### Root privileges
+
+**Telegraf with Intel PowerStat plugin enabled may require
+root privileges to read all the metrics**
+(depending on OS type or configuration).
+
+Alternatively, the following capabilities can be added to
+the Telegraf executable:
+
+```sh
+#without perf-related metrics:
+sudo setcap cap_sys_rawio,cap_dac_read_search+ep <path_to_telegraf_binary>
+
+#with perf-related metrics:
+sudo setcap cap_sys_rawio,cap_dac_read_search,cap_sys_admin+ep <path_to_telegraf_binary>
+```
+
+## HW Dependencies
+
+Specific metrics require certain processor features to be present; otherwise,
+the Intel PowerStat plugin won't be able to read them. Supported processor
+features can be detected by reading the `/proc/cpuinfo` file.
+The plugin assumes these crucial properties are the same for all CPU cores in
+the system.
+
+The following `processor` properties are examined in more detail
+in this section:
+
+- `vendor_id`
+- `cpu family`
+- `model`
+- `flags`
+
+The following processor properties are required by the plugin:
+
+- Processor `vendor_id` must be `GenuineIntel` and `cpu family` must be `6` -
+  since data used by the plugin are Intel-specific.
+- The following processor flags shall be present:
+  - `msr` shall be present for plugin to read platform data from processor
+    model specific registers and collect the following metrics:
+    - `cpu_c0_state_residency`
+    - `cpu_c1_state_residency`
+    - `cpu_c3_state_residency`
+    - `cpu_c6_state_residency`
+    - `cpu_c7_state_residency`
+    - `cpu_busy_cycles` (**DEPRECATED** - superseded by `cpu_c0_state_residency_percent`)
+    - `cpu_busy_frequency`
+    - `cpu_temperature`
+    - `cpu_base_frequency`
+    - `max_turbo_frequency`
+    - `uncore_frequency` (for kernel < 5.18)
+  - `aperfmperf` shall be present to collect the following metrics:
+    - `cpu_c0_state_residency`
+    - `cpu_c1_state_residency`
+    - `cpu_busy_cycles` (**DEPRECATED** - superseded by `cpu_c0_state_residency_percent`)
+    - `cpu_busy_frequency`
+  - `dts` shall be present to collect:
+    - `cpu_temperature`
+- Please consult the table of supported CPU models for the processor package.
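+
+The vendor, family, and flag requirements above can also be checked
+programmatically. A hedged sketch parsing `/proc/cpuinfo`-style text (the
+helper name and sample input are illustrative, not part of the plugin):
+
+```python
+def meets_plugin_requirements(cpuinfo_text, required_flags=("msr",)):
+    """Check vendor_id, cpu family and flags of the first processor entry."""
+    fields = {}
+    for line in cpuinfo_text.splitlines():
+        key, sep, value = line.partition(":")
+        if sep:
+            # keep only the first processor's entries
+            fields.setdefault(key.strip(), value.strip())
+    flags = set(fields.get("flags", "").split())
+    return (fields.get("vendor_id") == "GenuineIntel"
+            and fields.get("cpu family") == "6"
+            and set(required_flags) <= flags)
+
+sample = """vendor_id\t: GenuineIntel
+cpu family\t: 6
+model\t\t: 143
+flags\t\t: fpu msr aperfmperf dts"""
+print(meets_plugin_requirements(sample, ("msr", "aperfmperf", "dts")))  # True
+```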
+
+### Known issues
+
+Starting from Linux kernel version v5.4.77, due to
+[this kernel change](https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?h=v5.4.77&id=19f6d91bdad42200aac557a683c17b1f65ee6c94), resources such as
+`/sys/devices/virtual/powercap/intel-rapl//*/energy_uj`
+can only be accessed by the root user for security reasons.
+Therefore, this plugin requires root privileges to gather
+`rapl` metrics correctly.
+
+If such strict security restrictions are not relevant, read permissions for
+files in the `/sys/devices/virtual/powercap/intel-rapl/` directory can be
+altered manually, for example, using the `chmod` command.
+For instance, read and execute permissions for all files in the
+`intel-rapl` directory can be granted to all users with:
+
+```bash
+sudo chmod -R a+rx /sys/devices/virtual/powercap/intel-rapl/
+```
+
+
+## Example Output
+
+```text
+powerstat_package,host=ubuntu,package_id=0 thermal_design_power_watts=160 1606494744000000000
+powerstat_package,host=ubuntu,package_id=0 current_power_consumption_watts=35 1606494744000000000
+powerstat_package,host=ubuntu,package_id=0 cpu_base_frequency_mhz=2400i 1669118424000000000
+powerstat_package,host=ubuntu,package_id=0 current_dram_power_consumption_watts=13.94 1606494744000000000
+powerstat_package,host=ubuntu,package_id=0,active_cores=0 max_turbo_frequency_mhz=3000i 1606494744000000000
+powerstat_package,host=ubuntu,package_id=0,active_cores=1 max_turbo_frequency_mhz=2800i 1606494744000000000
+powerstat_package,die=0,host=ubuntu,package_id=0,type=initial uncore_frequency_limit_mhz_min=800,uncore_frequency_limit_mhz_max=2400 1606494744000000000
+powerstat_package,die=0,host=ubuntu,package_id=0,type=current uncore_frequency_mhz_cur=800i,uncore_frequency_limit_mhz_min=800,uncore_frequency_limit_mhz_max=2400 1606494744000000000
+powerstat_core,core_id=0,cpu_id=0,host=ubuntu,package_id=0 cpu_frequency_mhz=1200.29 1606494744000000000
+powerstat_core,core_id=0,cpu_id=0,host=ubuntu,package_id=0 cpu_temperature_celsius=34i 1606494744000000000
+powerstat_core,core_id=0,cpu_id=0,host=ubuntu,package_id=0 cpu_c0_state_residency_percent=0.8 1606494744000000000
+powerstat_core,core_id=0,cpu_id=0,host=ubuntu,package_id=0 cpu_c1_state_residency_percent=6.68 1606494744000000000
+powerstat_core,core_id=0,cpu_id=0,host=ubuntu,package_id=0 cpu_c3_state_residency_percent=0 1606494744000000000
+powerstat_core,core_id=0,cpu_id=0,host=ubuntu,package_id=0 cpu_c6_state_residency_percent=92.52 1606494744000000000
+powerstat_core,core_id=0,cpu_id=0,host=ubuntu,package_id=0 cpu_c7_state_residency_percent=0 1606494744000000000
+powerstat_core,core_id=0,cpu_id=0,host=ubuntu,package_id=0 cpu_busy_frequency_mhz=1213.24 1606494744000000000
+powerstat_core,core_id=0,cpu_id=0,host=ubuntu,package_id=0 cpu_c0_substate_c01_percent=0 1606494744000000000
+powerstat_core,core_id=0,cpu_id=0,host=ubuntu,package_id=0 cpu_c0_substate_c02_percent=5.68 1606494744000000000
+powerstat_core,core_id=0,cpu_id=0,host=ubuntu,package_id=0 cpu_c0_substate_c0_wait_percent=43.74 1606494744000000000
+```
+
+## Supported CPU models
+
+| Model number | Processor name                  | `cpu_c1_state_residency`<br/>`cpu_c6_state_residency`<br/>`cpu_temperature`<br/>`cpu_base_frequency` | `cpu_c3_state_residency` | `cpu_c7_state_residency` | `uncore_frequency` |
+|--------------|---------------------------------|:----------------------------------------------------------------------------------------------------:|:------------------------:|:------------------------:|:------------------:|
+| 0x1E         | Intel Nehalem                   |                                                  ✓                                                   |            ✓             |                          |                    |
+| 0x1F         | Intel Nehalem-G                 |                                                  ✓                                                   |            ✓             |                          |                    |
+| 0x1A         | Intel Nehalem-EP                |                                                  ✓                                                   |            ✓             |                          |                    |
+| 0x2E         | Intel Nehalem-EX                |                                                  ✓                                                   |            ✓             |                          |                    |
+| 0x25         | Intel Westmere                  |                                                  ✓                                                   |            ✓             |                          |                    |
+| 0x2C         | Intel Westmere-EP               |                                                  ✓                                                   |            ✓             |                          |                    |
+| 0x2F         | Intel Westmere-EX               |                                                  ✓                                                   |            ✓             |                          |                    |
+| 0x2A         | Intel Sandybridge               |                                                  ✓                                                   |            ✓             |            ✓             |                    |
+| 0x2D         | Intel Sandybridge-X             |                                                  ✓                                                   |            ✓             |            ✓             |                    |
+| 0x3A         | Intel Ivybridge                 |                                                  ✓                                                   |            ✓             |            ✓             |                    |
+| 0x3E         | Intel Ivybridge-X               |                                                  ✓                                                   |            ✓             |            ✓             |                    |
+| 0x3C         | Intel Haswell                   |                                                  ✓                                                   |            ✓             |            ✓             |                    |
+| 0x3F         | Intel Haswell-X                 |                                                  ✓                                                   |            ✓             |            ✓             |                    |
+| 0x45         | Intel Haswell-L                 |                                                  ✓                                                   |            ✓             |            ✓             |                    |
+| 0x46         | Intel Haswell-G                 |                                                  ✓                                                   |            ✓             |            ✓             |                    |
+| 0x3D         | Intel Broadwell                 |                                                  ✓                                                   |            ✓             |            ✓             |                    |
+| 0x47         | Intel Broadwell-G               |                                                  ✓                                                   |            ✓             |            ✓             |         ✓          |
+| 0x4F         | Intel Broadwell-X               |                                                  ✓                                                   |            ✓             |                          |         ✓          |
+| 0x56         | Intel Broadwell-D               |                                                  ✓                                                   |            ✓             |                          |         ✓          |
+| 0x4E         | Intel Skylake-L                 |                                                  ✓                                                   |            ✓             |            ✓             |                    |
+| 0x5E         | Intel Skylake                   |                                                  ✓                                                   |            ✓             |            ✓             |                    |
+| 0x55         | Intel Skylake-X                 |                                                  ✓                                                   |                          |                          |         ✓          |
+| 0x8E         | Intel KabyLake-L                |                                                  ✓                                                   |            ✓             |            ✓             |                    |
+| 0x9E         | Intel KabyLake                  |                                                  ✓                                                   |            ✓             |            ✓             |                    |
+| 0xA5         | Intel CometLake                 |                                                  ✓                                                   |            ✓             |            ✓             |                    |
+| 0xA6         | Intel CometLake-L               |                                                  ✓                                                   |            ✓             |            ✓             |                    |
+| 0x66         | Intel CannonLake-L              |                                                  ✓                                                   |                          |            ✓             |                    |
+| 0x6A         | Intel IceLake-X                 |                                                  ✓                                                   |                          |                          |         ✓          |
+| 0x6C         | Intel IceLake-D                 |                                                  ✓                                                   |                          |                          |         ✓          |
+| 0x7D         | Intel IceLake                   |                                                  ✓                                                   |                          |                          |                    |
+| 0x7E         | Intel IceLake-L                 |                                                  ✓                                                   |                          |            ✓             |                    |
+| 0x9D         | Intel IceLake-NNPI              |                                                  ✓                                                   |                          |            ✓             |                    |
+| 0xA7         | Intel RocketLake                |                                                  ✓                                                   |                          |            ✓             |                    |
+| 0x8C         | Intel TigerLake-L               |                                                  ✓                                                   |                          |            ✓             |                    |
+| 0x8D         | Intel TigerLake                 |                                                  ✓                                                   |                          |            ✓             |                    |
+| 0x8F         | Intel Sapphire Rapids X         |                                                  ✓                                                   |                          |                          |         ✓          |
+| 0xCF         | Intel Emerald Rapids X          |                                                  ✓                                                   |                          |                          |         ✓          |
+| 0xAD         | Intel Granite Rapids X          |                                                  ✓                                                   |                          |                          |                    |
+| 0x8A         | Intel Lakefield                 |                                                  ✓                                                   |                          |            ✓             |                    |
+| 0x97         | Intel AlderLake                 |                                                  ✓                                                   |                          |            ✓             |         ✓          |
+| 0x9A         | Intel AlderLake-L               |                                                  ✓                                                   |                          |            ✓             |         ✓          |
+| 0xB7         | Intel RaptorLake                |                                                  ✓                                                   |                          |            ✓             |         ✓          |
+| 0xBA         | Intel RaptorLake-P              |                                                  ✓                                                   |                          |            ✓             |         ✓          |
+| 0xBF         | Intel RaptorLake-S              |                                                  ✓                                                   |                          |            ✓             |         ✓          |
+| 0xAC         | Intel MeteorLake                |                                                  ✓                                                   |                          |            ✓             |         ✓          |
+| 0xAA         | Intel MeteorLake-L              |                                                  ✓                                                   |                          |            ✓             |         ✓          |
+| 0xC6         | Intel ArrowLake                 |                                                  ✓                                                   |                          |            ✓             |                    |
+| 0xBD         | Intel LunarLake                 |                                                  ✓                                                   |                          |            ✓             |                    |
+| 0x37         | Intel Atom® Bay Trail           |                                                  ✓                                                   |                          |                          |                    |
+| 0x4D         | Intel Atom® Avaton              |                                                  ✓                                                   |                          |                          |                    |
+| 0x4A         | Intel Atom® Merrifield          |                                                  ✓                                                   |                          |                          |                    |
+| 0x5A         | Intel Atom® Moorefield          |                                                  ✓                                                   |                          |                          |                    |
+| 0x4C         | Intel Atom® Airmont             |                                                  ✓                                                   |            ✓             |                          |                    |
+| 0x5C         | Intel Atom® Apollo Lake         |                                                  ✓                                                   |            ✓             |            ✓             |                    |
+| 0x5F         | Intel Atom® Denverton           |                                                  ✓                                                   |                          |                          |                    |
+| 0x7A         | Intel Atom® Goldmont            |                                                  ✓                                                   |            ✓             |            ✓             |                    |
+| 0x86         | Intel Atom® Jacobsville         |                                                  ✓                                                   |                          |                          |                    |
+| 0x96         | Intel Atom® Elkhart Lake        |                                                  ✓                                                   |                          |            ✓             |                    |
+| 0x9C         | Intel Atom® Jasper Lake         |                                                  ✓                                                   |                          |            ✓             |                    |
+| 0xBE         | Intel AlderLake-N               |                                                  ✓                                                   |                          |            ✓             |                    |
+| 0xAF         | Intel Sierra Forest             |                                                  ✓                                                   |                          |                          |                    |
+| 0xB6         | Intel Grand Ridge               |                                                  ✓                                                   |                          |                          |                    |
+| 0x57         | Intel Xeon® PHI Knights Landing |                                                  ✓                                                   |                          |                          |                    |
+| 0x85         | Intel Xeon® PHI Knights Mill    |                                                  ✓                                                   |                          |                          |                    |
diff --git a/content/telegraf/v1/input-plugins/intel_rdt/_index.md b/content/telegraf/v1/input-plugins/intel_rdt/_index.md
new file mode 100644
index 000000000..0a377feb7
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/intel_rdt/_index.md
@@ -0,0 +1,202 @@
+---
+description: "Telegraf plugin for collecting metrics from Intel RDT"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Intel RDT
+    identifier: input-intel_rdt
+tags: [Intel RDT, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Intel RDT Input Plugin
+
+The `intel_rdt` plugin collects information provided by monitoring features of
+the Intel Resource Director Technology (Intel(R) RDT). Intel RDT provides the
+hardware framework to monitor and control the utilization of shared resources
+(ex: last level cache, memory bandwidth).
+
+## About Intel RDT
+
+Intel’s Resource Director Technology (RDT) framework consists of:
+
+- Cache Monitoring Technology (CMT)
+- Memory Bandwidth Monitoring (MBM)
+- Cache Allocation Technology (CAT)
+- Code and Data Prioritization (CDP)
+
+As multithreaded and multicore platform architectures emerge, the last level
+cache and memory bandwidth are key resources to manage for running workloads in
+single-threaded, multithreaded, or complex virtual machine environments. Intel
+introduces CMT, MBM, CAT and CDP to manage these workloads across shared
+resources.
+
+## Prerequsities - PQoS Tool
+
+To gather Intel RDT metrics, the `intel_rdt` plugin uses the _pqos_ CLI tool,
+which is part of the
+[Intel(R) RDT Software Package](https://github.com/intel/intel-cmt-cat).
+Before using this plugin, make sure _pqos_ is properly installed and
+configured, as the plugin runs _pqos_ in `OS Interface` mode. This plugin
+supports _pqos_ version 4.0.0 and above. Note: the _pqos_ tool needs root
+privileges to work properly.
+
+Metrics are reported continuously, within the given interval, from the
+following `pqos` commands:
+
+### If telegraf does not run as the root user
+
+The `pqos` command requires root-level access to run. There are two options to
+overcome this if you run telegraf as a non-root user.
+
+The first is to set the setuid bit on the pqos binary with `chmod u+s
+/path/to/pqos`. This approach is simple and requires no modification to the
+Telegraf configuration; however, pqos is not a read-only tool and there are
+security implications in making such a command setuid root.
+
+Alternatively, you may enable sudo to allow `pqos` to run correctly, as follows:
+
+Add the following to your sudoers file (assumes telegraf runs as a user named
+`telegraf`):
+
+```sh
+telegraf ALL=(ALL) NOPASSWD:/usr/sbin/pqos -r --iface-os --mon-file-type=csv --mon-interval=*
+```
+
+If you wish to use sudo, you must also add `use_sudo = true` to the Telegraf
+configuration (see below).
+
+### In case of cores monitoring
+
+```sh
+pqos -r --iface-os --mon-file-type=csv --mon-interval=INTERVAL --mon-core=all:[CORES]\;mbt:[CORES]
+```
+
+where `CORES` is the group of cores provided in the config. Multiple groups
+can be provided.
+
+### In case of process monitoring
+
+```sh
+pqos -r --iface-os --mon-file-type=csv --mon-interval=INTERVAL --mon-pid=all:[PIDS]\;mbt:[PIDS]
+```
+
+where `PIDS` is the group of process IDs whose names match a process name
+provided in the config. Multiple process names can be provided, each creating
+its own process group.
+
+In both cases, `INTERVAL` is the `sampling_interval` value from the config.
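+
+As an illustrative sketch (the core group and interval values below are
+examples, not defaults), a cores configuration such as:
+
+```toml
+[[inputs.intel_rdt]]
+  ## 10 x 100ms = 1s, passed to pqos as the --mon-interval value
+  sampling_interval = "10"
+  ## one group consisting of cores 0-3
+  cores = ["0-3"]
+```
+
+would result in _pqos_ being invoked roughly as:
+
+```sh
+pqos -r --iface-os --mon-file-type=csv --mon-interval=10 --mon-core=all:[0-3]\;mbt:[0-3]
+```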
+
+Because PID associations within the system can change at any moment, the Intel
+RDT plugin checks on every interval whether the desired processes have changed
+their PID associations. If a change is detected, the plugin restarts the
+_pqos_ tool with new arguments. If a provided process name does not match any
+running process, it is omitted and the plugin keeps checking for its
+availability.
+
+## Useful links
+
+- Pqos installation process: <https://github.com/intel/intel-cmt-cat/blob/master/INSTALL>
+- Enabling OS interface: <https://github.com/intel/intel-cmt-cat/wiki>, <https://github.com/intel/intel-cmt-cat/wiki/resctrl>
+- More about Intel RDT: <https://www.intel.com/content/www/us/en/architecture-and-technology/resource-director-technology.html>
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Read Intel RDT metrics
+# This plugin ONLY supports non-Windows
+[[inputs.intel_rdt]]
+  ## Optionally set sampling interval to Nx100ms.
+  ## This value is propagated to pqos tool. Interval format is defined by pqos itself.
+  ## If not provided or provided 0, will be set to 10 = 10x100ms = 1s.
+  # sampling_interval = "10"
+
+  ## Optionally specify the path to pqos executable.
+  ## If not provided, auto discovery will be performed.
+  # pqos_path = "/usr/local/bin/pqos"
+
+  ## Optionally specify if IPC and LLC_Misses metrics shouldn't be propagated.
+  ## If not provided, default value is false.
+  # shortened_metrics = false
+
+  ## Specify the list of groups of CPU core(s) to be provided as pqos input.
+  ## Mandatory if processes aren't set and forbidden if processes are specified.
+  ## e.g. ["0-3", "4,5,6"] or ["1-3,4"]
+  # cores = ["0-3"]
+
+  ## Specify the list of processes for which Metrics will be collected.
+  ## Mandatory if cores aren't set and forbidden if cores are specified.
+  ## e.g. ["qemu", "pmd"]
+  # processes = ["process"]
+
+  ## Specify if the pqos process should be called with sudo.
+  ## Mandatory if the telegraf process does not run as root.
+  # use_sudo = false
+```
+
+## Metrics
+
+| Name        | Full name                            | Description                                                                                     |
+|-------------|--------------------------------------|-------------------------------------------------------------------------------------------------|
+| MBL         | Memory Bandwidth on Local NUMA Node  | Memory bandwidth utilization by the relevant CPU core/process on the local NUMA memory channel  |
+| MBR         | Memory Bandwidth on Remote NUMA Node | Memory bandwidth utilization by the relevant CPU core/process on the remote NUMA memory channel |
+| MBT         | Total Memory Bandwidth               | Total memory bandwidth utilized by a CPU core/process on local and remote NUMA memory channels  |
+| LLC         | L3 Cache Occupancy                   | Total Last Level Cache occupancy by a CPU core/process                                          |
+| LLC_Misses* | L3 Cache Misses                      | Total Last Level Cache misses by a CPU core/process                                             |
+| IPC*        | Instructions Per Cycle               | Total instructions per cycle executed by a CPU core/process                                     |
+
+*optional
+
+## Troubleshooting
+
+Pointing to non-existent cores causes _pqos_ to throw an error and the plugin
+will not work properly. Be sure the provided core numbers exist on the target
+system.
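+
+To verify which core numbers exist on a system before listing them in `cores`,
+standard Linux tools can be used, for example:
+
+```sh
+# Total number of logical CPUs (cores are numbered 0 to N-1)
+nproc --all
+
+# Detailed listing of the CPU topology
+lscpu
+```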
+
+Be aware that reading Intel RDT metrics via _pqos_ cannot be done
+simultaneously on the same resource. Do not run any other _pqos_ instance that
+monitors the same cores or PIDs on the system. It is also not possible to
+monitor the same cores or PIDs in different groups.
+
+The PIDs associated with a given process can be checked manually with the
+`pidof` command. E.g:
+
+```sh
+pidof PROCESS
+```
+
+where `PROCESS` is the process name.
+
+## Example Output
+
+```text
+rdt_metric,cores=12\,19,host=r2-compute-20,name=IPC,process=top value=0 1598962030000000000
+rdt_metric,cores=12\,19,host=r2-compute-20,name=LLC_Misses,process=top value=0 1598962030000000000
+rdt_metric,cores=12\,19,host=r2-compute-20,name=LLC,process=top value=0 1598962030000000000
+rdt_metric,cores=12\,19,host=r2-compute-20,name=MBL,process=top value=0 1598962030000000000
+rdt_metric,cores=12\,19,host=r2-compute-20,name=MBR,process=top value=0 1598962030000000000
+rdt_metric,cores=12\,19,host=r2-compute-20,name=MBT,process=top value=0 1598962030000000000
+```
diff --git a/content/telegraf/v1/input-plugins/internal/_index.md b/content/telegraf/v1/input-plugins/internal/_index.md
new file mode 100644
index 000000000..6a3b3b5c3
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/internal/_index.md
@@ -0,0 +1,115 @@
+---
+description: "Telegraf plugin for collecting metrics from Telegraf Internal"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Telegraf Internal
+    identifier: input-internal
+tags: [Telegraf Internal, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Telegraf Internal Input Plugin
+
+The `internal` plugin collects metrics about the Telegraf agent itself.
+
+Note that some metrics are aggregates across all instances of one type of
+plugin.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Collect statistics about itself
+[[inputs.internal]]
+  ## If true, collect telegraf memory stats.
+  # collect_memstats = true
+
+  ## If true, collect metrics from Go's runtime.metrics. For a full list see:
+  ##   https://pkg.go.dev/runtime/metrics
+  # collect_gostats = false
+```
+
+## Metrics
+
+memstats are taken from the Go runtime:
+<https://golang.org/pkg/runtime/#MemStats>
+
+- internal_memstats
+  - alloc_bytes
+  - frees
+  - heap_alloc_bytes
+  - heap_idle_bytes
+  - heap_in_use_bytes
+  - heap_objects_bytes
+  - heap_released_bytes
+  - heap_sys_bytes
+  - mallocs
+  - num_gc
+  - pointer_lookups
+  - sys_bytes
+  - total_alloc_bytes
+
+agent stats collect aggregate stats on all telegraf plugins.
+
+- internal_agent
+  - gather_errors
+  - gather_timeouts
+  - metrics_dropped
+  - metrics_gathered
+  - metrics_written
+
+internal_gather stats collect aggregate stats on all input plugins
+that are of the same input type. They are tagged with `input=<plugin_name>`,
+`version=<telegraf_version>`, and `go_version=<go_build_version>`.
+
+- internal_gather
+  - gather_time_ns
+  - metrics_gathered
+  - gather_timeouts
+
+internal_write stats collect aggregate stats on all output plugins
+that are of the same output type. They are tagged with `output=<plugin_name>`
+and `version=<telegraf_version>`.
+
+- internal_write
+  - buffer_limit
+  - buffer_size
+  - metrics_added
+  - metrics_written
+  - metrics_dropped
+  - metrics_filtered
+  - write_time_ns
+
+internal_<plugin_name> are metrics which are defined on a per-plugin basis, and
+usually contain tags which differentiate each instance of a particular type of
+plugin and `version=<telegraf_version>`.
+
+- internal_<plugin_name>
+  - individual plugin-specific fields, such as requests counts.
+
+## Tags
+
+All measurements for specific plugins are tagged with information relevant
+to each particular plugin and with `version=<telegraf_version>`.
+
+## Example Output
+
+```text
+internal_memstats,host=tyrion alloc_bytes=4457408i,sys_bytes=10590456i,pointer_lookups=7i,mallocs=17642i,frees=7473i,heap_sys_bytes=6848512i,heap_idle_bytes=1368064i,heap_in_use_bytes=5480448i,heap_released_bytes=0i,total_alloc_bytes=6875560i,heap_alloc_bytes=4457408i,heap_objects_bytes=10169i,num_gc=2i 1480682800000000000
+internal_agent,host=tyrion,go_version=1.12.7,version=1.99.0 metrics_written=18i,metrics_dropped=0i,metrics_gathered=19i,gather_errors=0i,gather_timeouts=0i 1480682800000000000
+internal_write,output=file,host=tyrion,version=1.99.0 buffer_limit=10000i,write_time_ns=636609i,metrics_added=18i,metrics_written=18i,buffer_size=0i 1480682800000000000
+internal_gather,input=internal,host=tyrion,version=1.99.0 metrics_gathered=19i,gather_time_ns=442114i,gather_timeouts=0i 1480682800000000000
+internal_gather,input=http_listener,host=tyrion,version=1.99.0 metrics_gathered=0i,gather_time_ns=167285i,gather_timeouts=0i 1480682800000000000
+internal_http_listener,address=:8186,host=tyrion,version=1.99.0 queries_received=0i,writes_received=0i,requests_received=0i,buffers_created=0i,requests_served=0i,pings_received=0i,bytes_received=0i,not_founds_served=0i,pings_served=0i,queries_served=0i,writes_served=0i 1480682800000000000
+internal_mqtt_consumer,host=tyrion,version=1.99.0 messages_received=622i,payload_size=37942i 1657282270000000000
+```
diff --git a/content/telegraf/v1/input-plugins/internet_speed/_index.md b/content/telegraf/v1/input-plugins/internet_speed/_index.md
new file mode 100644
index 000000000..6b48af536
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/internet_speed/_index.md
@@ -0,0 +1,103 @@
+---
+description: "Telegraf plugin for collecting metrics from Internet Speed Monitor"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Internet Speed Monitor
+    identifier: input-internet_speed
+tags: [Internet Speed Monitor, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Internet Speed Monitor Input Plugin
+
+The Internet Speed Monitor plugin collects data about the internet speed on
+the system.
+
+On some systems, the default settings may cause speed tests to fail; if this
+affects you, try enabling `memory_saving_mode`. This reduces the memory
+requirements for the test and may reduce its runtime. However, be aware that
+this may also reduce the accuracy of the test for fast (>30 Mb/s) connections.
+This setting enables the upstream
+[Memory Saving Mode](https://github.com/showwin/speedtest-go#memory-saving-mode).
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Monitors internet speed using speedtest.net service
+[[inputs.internet_speed]]
+  ## This plugin downloads many MB of data each time it is run. As such
+  ## consider setting a higher interval for this plugin to reduce the
+  ## demand on your internet connection.
+  # interval = "60m"
+
+  ## Enable to reduce memory usage
+  # memory_saving_mode = false
+
+  ## Caches the closest server location
+  # cache = false
+
+  ## Number of concurrent connections
+  ## By default, or when set to zero, the number of CPU cores is used. Use this to
+  ## reduce the impact on system performance or to increase the connections on
+  ## faster connections to ensure the fastest speed.
+  # connections = 0
+
+  ## Test mode
+  ## By default, a single server is used for testing. This may work for most,
+  ## however, setting to "multi" will reach out to multiple servers in an
+  ## attempt to get closer to ideal internet speeds.
+  ## And "multi" will use all available servers to calculate average packet loss.
+  # test_mode = "single"
+
+  ## Server ID exclude filter
+  ## Allows the user to exclude or include specific server IDs received by
+  ## speedtest-go. Values in the exclude option will be skipped over. Values in
+  ## the include option are the only options that will be picked from.
+  ##
+  ## See the list of servers speedtest-go will return at:
+  ##     https://www.speedtest.net/api/js/servers?engine=js&limit=10
+  ##
+  # server_id_exclude = []
+  # server_id_include = []
+```
+
+## Metrics
+
+It collects the following fields:
+
+| Name           | Field Name  | Type    | Unit       |
+|----------------|-------------|---------|------------|
+| Download Speed | download    | float64 | Mbps       |
+| Upload Speed   | upload      | float64 | Mbps       |
+| Latency        | latency     | float64 | ms         |
+| Jitter         | jitter      | float64 | ms         |
+| Packet Loss    | packet_loss | float64 | percentage |
+| Location       | location    | string  | -          |
+
+The `packet_loss` field returns -1 if packet loss is not applicable.
+
+And the following tags:
+
+| Name      | tag name  |
+|-----------|-----------|
+| Source    | source    |
+| Server ID | server_id |
+| Test Mode | test_mode |
+
+## Example Output
+
+```text
+internet_speed,source=speedtest02.z4internet.com:8080,server_id=54619,test_mode=single download=318.37580265897725,upload=30.444407341274385,latency=37.73174,jitter=1.99810,packet_loss=0.05377,location="Somewhere, TX" 1675458921000000000
+internet_speed,source=speedtest02.z4internet.com:8080,server_id=54619,test_mode=multi download=318.37580265897725,upload=30.444407341274385,latency=37.73174,jitter=1.99810,packet_loss=-1,location="Somewhere, TX" 1675458921000000000
+```
diff --git a/content/telegraf/v1/input-plugins/interrupts/_index.md b/content/telegraf/v1/input-plugins/interrupts/_index.md
new file mode 100644
index 000000000..8355eb33a
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/interrupts/_index.md
@@ -0,0 +1,109 @@
+---
+description: "Telegraf plugin for collecting metrics from Interrupts"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Interrupts
+    identifier: input-interrupts
+tags: [Interrupts, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Interrupts Input Plugin
+
+The interrupts plugin gathers metrics about IRQs from `/proc/interrupts` and
+`/proc/softirqs`.
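+
+Both files can be inspected directly to see the raw counters the plugin
+parses, for example:
+
+```sh
+# First lines of the per-CPU hardware interrupt counters
+head -n 5 /proc/interrupts
+
+# First lines of the per-CPU software interrupt counters
+head -n 5 /proc/softirqs
+```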
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# This plugin gathers interrupts data from /proc/interrupts and /proc/softirqs.
+[[inputs.interrupts]]
+  ## When set to true, cpu metrics are tagged with the cpu.  Otherwise cpu is
+  ## stored as a field.
+  ##
+  ## The default is false for backwards compatibility, and will be changed to
+  ## true in a future version.  It is recommended to set to true on new
+  ## deployments.
+  # cpu_as_tag = false
+
+  ## To filter which IRQs to collect, make use of tagpass / tagdrop, i.e.
+  # [inputs.interrupts.tagdrop]
+  #   irq = [ "NET_RX", "TASKLET" ]
+```
+
+## Metrics
+
+There are two styles depending on the value of `cpu_as_tag`.
+
+With `cpu_as_tag = false`:
+
+- interrupts
+  - tags:
+    - irq (IRQ name)
+    - type
+    - device (name of the device that is located at the IRQ)
+    - cpu
+  - fields:
+    - cpu (int, number of interrupts per cpu)
+    - total (int, total number of interrupts)
+
+- soft_interrupts
+  - tags:
+    - irq (IRQ name)
+    - type
+    - device (name of the device that is located at the IRQ)
+    - cpu
+  - fields:
+    - cpu (int, number of interrupts per cpu)
+    - total (int, total number of interrupts)
+
+With `cpu_as_tag = true`:
+
+- interrupts
+  - tags:
+    - irq (IRQ name)
+    - type
+    - device (name of the device that is located at the IRQ)
+    - cpu
+  - fields:
+    - count (int, number of interrupts)
+
+- soft_interrupts
+  - tags:
+    - irq (IRQ name)
+    - type
+    - device (name of the device that is located at the IRQ)
+    - cpu
+  - fields:
+    - count (int, number of interrupts)
+
+## Example Output
+
+With `cpu_as_tag = false`:
+
+```text
+interrupts,irq=0,type=IO-APIC,device=2-edge\ timer,cpu=cpu0 count=23i 1489346531000000000
+interrupts,irq=1,type=IO-APIC,device=1-edge\ i8042,cpu=cpu0 count=9i 1489346531000000000
+interrupts,irq=30,type=PCI-MSI,device=65537-edge\ virtio1-input.0,cpu=cpu1 count=1i 1489346531000000000
+soft_interrupts,irq=NET_RX,cpu=cpu0 count=280879i 1489346531000000000
+```
+
+With `cpu_as_tag = true`:
+
+```text
+interrupts,cpu=cpu6,irq=PIW,type=Posted-interrupt\ wakeup\ event count=0i 1543539773000000000
+interrupts,cpu=cpu7,irq=PIW,type=Posted-interrupt\ wakeup\ event count=0i 1543539773000000000
+soft_interrupts,cpu=cpu0,irq=HI count=246441i 1543539773000000000
+soft_interrupts,cpu=cpu1,irq=HI count=159154i 1543539773000000000
+```
diff --git a/content/telegraf/v1/input-plugins/ipmi_sensor/_index.md b/content/telegraf/v1/input-plugins/ipmi_sensor/_index.md
new file mode 100644
index 000000000..e31027a89
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/ipmi_sensor/_index.md
@@ -0,0 +1,213 @@
+---
+description: "Telegraf plugin for collecting metrics from IPMI Sensor"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: IPMI Sensor
+    identifier: input-ipmi_sensor
+tags: [IPMI Sensor, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# IPMI Sensor Input Plugin
+
+Get bare metal metrics using the command line utility
+[`ipmitool`](https://github.com/ipmitool/ipmitool).
+
+If no servers are specified, the plugin will query the local machine sensor
+stats via the following command:
+
+```sh
+ipmitool sdr
+```
+
+or with the version 2 schema:
+
+```sh
+ipmitool sdr elist
+```
+
+When one or more servers are specified, the plugin will use the following
+command to collect remote host sensor stats:
+
+```sh
+ipmitool -I lan -H SERVER -U USERID -P PASSW0RD sdr
+```
+
+Any of the following parameters will be added to the aforementioned query if
+they're configured:
+
+```sh
+-y hex_key -L privilege
+```
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from the bare metal servers via IPMI
+[[inputs.ipmi_sensor]]
+  ## Specify the path to the ipmitool executable
+  # path = "/usr/bin/ipmitool"
+
+  ## Use sudo
+  ## Setting 'use_sudo' to true will make use of sudo to run ipmitool.
+  ## Sudo must be configured to allow the telegraf user to run ipmitool
+  ## without a password.
+  # use_sudo = false
+
+  ## Servers
+  ## Specify one or more servers via a url. If no servers are specified, local
+  ## machine sensor stats will be queried. Uses the format:
+  ##  [username[:password]@][protocol[(address)]]
+  ##  e.g. root:passwd@lan(127.0.0.1)
+  # servers = ["USERID:PASSW0RD@lan(192.168.1.1)"]
+
+  ## Session privilege level
+  ## Choose from: CALLBACK, USER, OPERATOR, ADMINISTRATOR
+  # privilege = "ADMINISTRATOR"
+
+  ## Timeout
+  ## Timeout for the ipmitool command to complete.
+  # timeout = "20s"
+
+  ## Metric schema version
+  ## See the plugin readme for more information on schema versioning.
+  # metric_version = 1
+
+  ## Sensors to collect
+  ## Choose from:
+  ##   * sdr: default, collects sensor data records
+  ##   * chassis_power_status: collects the power status of the chassis
+  ##   * dcmi_power_reading: collects the power readings from the Data Center Management Interface
+  # sensors = ["sdr"]
+
+  ## Hex key
+  ## Optionally provide the hex key for the IPMI connection.
+  # hex_key = ""
+
+  ## Cache
+  ## If ipmitool should use a cache
+  ## Using a cache can speed up collection times depending on your device.
+  # use_cache = false
+
+  ## Path to the ipmitools cache file (defaults to OS temp dir)
+  ## The provided path must exist and must be writable
+  # cache_path = ""
+```
+
+## Sensors
+
+By default, the plugin collects data via the `sdr` command and returns those
+values. However, there are additional sensor options that can be queried:
+
+- `chassis_power_status` - returns 0 or 1 depending on the output of
+  `chassis power status`
+- `dcmi_power_reading` - Returns the watt values from `dcmi power reading`
+
+These sensor options are not affected by the metric version.
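+
+As a sketch, a configuration collecting all three sensor groups could look
+like:
+
+```toml
+[[inputs.ipmi_sensor]]
+  sensors = ["sdr", "chassis_power_status", "dcmi_power_reading"]
+```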
+
+## Metrics
+
+Version 1 schema:
+
+- ipmi_sensor:
+  - tags:
+    - name
+    - unit
+    - host
+    - server (only when retrieving stats from remote servers)
+  - fields:
+    - status (int, 1 if the status code is ok, 0 otherwise)
+    - value (float)
+
+Version 2 schema:
+
+- ipmi_sensor:
+  - tags:
+    - name
+    - entity_id (can help uniquify duplicate names)
+    - status_code (two letter code from IPMI documentation)
+    - status_desc (extended status description field)
+    - unit (only on analog values)
+    - host
+    - server (only when retrieving stats from remote)
+  - fields:
+    - value (float)
+
+### Permissions
+
+When gathering from the local system, Telegraf will need permission to access
+the ipmi device node. When using udev, you can create the device node giving
+`rw` permissions to the `telegraf` user by adding the following rule to
+`/etc/udev/rules.d/52-telegraf-ipmi.rules`:
+
+```sh
+KERNEL=="ipmi*", MODE="660", GROUP="telegraf"
+```
+
+Alternatively, it is possible to use sudo. You will need the following in your
+telegraf config:
+
+```toml
+[[inputs.ipmi_sensor]]
+  use_sudo = true
+```
+
+You will also need to update your sudoers file:
+
+```bash
+$ visudo
+# Add the following line:
+Cmnd_Alias IPMITOOL = /usr/bin/ipmitool *
+telegraf  ALL=(root) NOPASSWD: IPMITOOL
+Defaults!IPMITOOL !logfile, !syslog, !pam_session
+```
+
+## Example Output
+
+### Version 1 Schema
+
+When retrieving stats from a remote server:
+
+```text
+ipmi_sensor,server=10.20.2.203,name=uid_light value=0,status=1i 1517125513000000000
+ipmi_sensor,server=10.20.2.203,name=sys._health_led status=1i,value=0 1517125513000000000
+ipmi_sensor,server=10.20.2.203,name=power_supply_1,unit=watts status=1i,value=110 1517125513000000000
+ipmi_sensor,server=10.20.2.203,name=power_supply_2,unit=watts status=1i,value=120 1517125513000000000
+ipmi_sensor,server=10.20.2.203,name=power_supplies value=0,status=1i 1517125513000000000
+ipmi_sensor,server=10.20.2.203,name=fan_1,unit=percent status=1i,value=43.12 1517125513000000000
+```
+
+When retrieving stats from the local machine (no server specified):
+
+```text
+ipmi_sensor,name=uid_light value=0,status=1i 1517125513000000000
+ipmi_sensor,name=sys._health_led status=1i,value=0 1517125513000000000
+ipmi_sensor,name=power_supply_1,unit=watts status=1i,value=110 1517125513000000000
+ipmi_sensor,name=power_supply_2,unit=watts status=1i,value=120 1517125513000000000
+ipmi_sensor,name=power_supplies value=0,status=1i 1517125513000000000
+ipmi_sensor,name=fan_1,unit=percent status=1i,value=43.12 1517125513000000000
+```
+
+#### Version 2 Schema
+
+When retrieving stats from the local machine (no server specified):
+
+```text
+ipmi_sensor,name=uid_light,entity_id=23.1,status_code=ok,status_desc=ok value=0 1517125474000000000
+ipmi_sensor,name=sys._health_led,entity_id=23.2,status_code=ok,status_desc=ok value=0 1517125474000000000
+ipmi_sensor,entity_id=10.1,name=power_supply_1,status_code=ok,status_desc=presence_detected,unit=watts value=110 1517125474000000000
+ipmi_sensor,name=power_supply_2,entity_id=10.2,status_code=ok,unit=watts,status_desc=presence_detected value=125 1517125474000000000
+ipmi_sensor,name=power_supplies,entity_id=10.3,status_code=ok,status_desc=fully_redundant value=0 1517125474000000000
+ipmi_sensor,entity_id=7.1,name=fan_1,status_code=ok,status_desc=transition_to_running,unit=percent value=43.12 1517125474000000000
+```
diff --git a/content/telegraf/v1/input-plugins/ipset/_index.md b/content/telegraf/v1/input-plugins/ipset/_index.md
new file mode 100644
index 000000000..77d46371b
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/ipset/_index.md
@@ -0,0 +1,97 @@
+---
+description: "Telegraf plugin for collecting metrics from Ipset"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Ipset
+    identifier: input-ipset
+tags: [Ipset, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Ipset Input Plugin
+
+The ipset plugin gathers packets and bytes counters from Linux ipset.
+It uses the output of the command "ipset save".
+Ipsets created without the "counters" option are ignored.
+
+Results are tagged with:
+
+- ipset name
+- ipset entry
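+
+Note that only sets created with the `counters` option are collected. A sketch
+of creating such a set (`myset` is a hypothetical name):
+
+```sh
+# create a set with packet/byte counters (and comments) enabled
+ipset create myset hash:net counters comment
+# add an entry; its counters will appear in the plugin output
+ipset add myset 10.69.152.1 comment "machine A"
+```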
+
+There are 3 ways to grant telegraf the right to run ipset:
+
+- Run as root (strongly discouraged)
+- Use sudo
+- Configure systemd to run telegraf with CAP_NET_ADMIN and CAP_NET_RAW capabilities.
+
+## Using systemd capabilities
+
+You may run `systemctl edit telegraf.service` and add the following:
+
+```text
+[Service]
+CapabilityBoundingSet=CAP_NET_RAW CAP_NET_ADMIN
+AmbientCapabilities=CAP_NET_RAW CAP_NET_ADMIN
+```
+
+## Using sudo
+
+You will need the following in your telegraf config:
+
+```toml
+[[inputs.ipset]]
+  use_sudo = true
+```
+
+You will also need to update your sudoers file:
+
+```bash
+$ visudo
+# Add the following line:
+Cmnd_Alias IPSETSAVE = /sbin/ipset save
+telegraf  ALL=(root) NOPASSWD: IPSETSAVE
+Defaults!IPSETSAVE !logfile, !syslog, !pam_session
+```
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Gather packets and bytes counters from Linux ipsets
+[[inputs.ipset]]
+  ## By default, we only show sets which have already matched at least 1 packet.
+  ## Set include_unmatched_sets = true to gather them all.
+  include_unmatched_sets = false
+  ## Adjust your sudo settings appropriately if using this option ("sudo ipset save")
+  ## You can avoid using sudo or root by setting appropriate privileges for
+  ## the telegraf.service systemd service.
+  use_sudo = false
+  ## The default timeout of 1s for ipset execution can be overridden here:
+  # timeout = "1s"
+```
+
+## Metrics
+
+- ipset
+  - tags:
+    - set (ipset name)
+    - rule (ipset entry)
+  - fields:
+    - packets_total (integer)
+    - bytes_total (integer)
+
+## Example Output
+
+```sh
+$ sudo ipset save
+create myset hash:net family inet hashsize 1024 maxelem 65536 counters comment
+add myset 10.69.152.1 packets 8 bytes 672 comment "machine A"
+```
+
+```text
+ipset,rule=10.69.152.1,host=trashme,set=myset packets_total=8i,bytes_total=672i 1507615028000000000
+```
diff --git a/content/telegraf/v1/input-plugins/iptables/_index.md b/content/telegraf/v1/input-plugins/iptables/_index.md
new file mode 100644
index 000000000..f4262eef0
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/iptables/_index.md
@@ -0,0 +1,146 @@
+---
+description: "Telegraf plugin for collecting metrics from Iptables"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Iptables
+    identifier: input-iptables
+tags: [Iptables, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Iptables Input Plugin
+
+The iptables plugin gathers packets and bytes counters for rules within a set
+of tables and chains from the Linux iptables firewall.
+
+Rules are identified through their associated comment. **Rules without a
+comment are ignored**. A unique ID is needed for each rule, and the rule number
+is not constant: it may vary when rules are inserted or deleted at start-up or
+by automatic tools (interactive firewalls, fail2ban, ...). Also, when the rule
+set becomes big (hundreds of lines), most people are interested in monitoring
+only a small part of it.
+
+Before using this plugin **you must ensure that the rules you want to monitor
+are named with a unique comment**. Comments are added using the `-m comment
+--comment "my comment"` iptables options.
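+
+For example, the following (hypothetical) rule would be monitored and reported
+with its comment as identifier:
+
+```sh
+# allow ssh and name the rule "ssh" so telegraf can identify it
+iptables -A INPUT -p tcp --dport 22 -m comment --comment "ssh" -j ACCEPT
+```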
+
+The iptables command requires CAP_NET_ADMIN and CAP_NET_RAW capabilities. You
+have several options to allow telegraf to run iptables:
+
+* Run telegraf as root. This is strongly discouraged.
+* Configure systemd to run telegraf with CAP_NET_ADMIN and CAP_NET_RAW. This is
+  the simplest and recommended option.
+* Configure sudo to grant telegraf permission to run iptables. This is the most
+  restrictive option, but requires sudo setup.
+
+## Using systemd capabilities
+
+You may run `systemctl edit telegraf.service` and add the following:
+
+```text
+[Service]
+CapabilityBoundingSet=CAP_NET_RAW CAP_NET_ADMIN
+AmbientCapabilities=CAP_NET_RAW CAP_NET_ADMIN
+```
+
+Since telegraf will fork a process to run iptables, `AmbientCapabilities` is
+required to transmit the capabilities bounding set to the forked process.
+
+## Using sudo
+
+You will need the following in your telegraf config:
+
+```toml
+[[inputs.iptables]]
+  use_sudo = true
+```
+
+You will also need to update your sudoers file:
+
+```bash
+$ visudo
+# Add the following line:
+Cmnd_Alias IPTABLESSHOW = /usr/bin/iptables -nvL *
+telegraf  ALL=(root) NOPASSWD: IPTABLESSHOW
+Defaults!IPTABLESSHOW !logfile, !syslog, !pam_session
+```
+
+## Using IPtables lock feature
+
+Defining multiple instances of this plugin in telegraf.conf can lead to
+concurrent iptables access, resulting in "ERROR in input [inputs.iptables]:
+exit status 4" messages in telegraf.log and missing metrics. Setting
+`use_lock = true` in the plugin configuration will run iptables with the `-w`
+switch, acquiring a lock that prevents this error.
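+
+A sketch of two plugin instances monitoring different chains with locking
+enabled (the chain names are illustrative):
+
+```toml
+[[inputs.iptables]]
+  use_lock = true
+  table = "filter"
+  chains = [ "INPUT" ]
+
+[[inputs.iptables]]
+  use_lock = true
+  table = "filter"
+  chains = [ "FORWARD" ]
+```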
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Gather packets and bytes throughput from iptables
+# This plugin ONLY supports Linux
+[[inputs.iptables]]
+  ## iptables require root access on most systems.
+  ## Setting 'use_sudo' to true will make use of sudo to run iptables.
+  ## Users must configure sudo to allow telegraf user to run iptables with
+  ## no password.
+  ## iptables can be restricted to only list command "iptables -nvL".
+  use_sudo = false
+  ## Setting 'use_lock' to true runs iptables with the "-w" option.
+  ## Adjust your sudo settings appropriately if using this option
+  ## ("iptables -w 5 -nvL")
+  use_lock = false
+  ## Define an alternate executable, such as "ip6tables". Default is "iptables".
+  # binary = "ip6tables"
+  ## defines the table to monitor:
+  table = "filter"
+  ## defines the chains to monitor.
+  ## NOTE: iptables rules without a comment will not be monitored.
+  ## Read the plugin documentation for more information.
+  chains = [ "INPUT" ]
+```
+
+## Metrics
+
+### Measurements & Fields
+
+* iptables
+  * pkts (integer, count)
+  * bytes (integer, bytes)
+
+### Tags
+
+* All measurements have the following tags:
+  * table
+  * chain
+  * ruleid
+
+The `ruleid` is the comment associated to the rule.
+
+## Example Output
+
+```shell
+iptables -nvL INPUT
+```
+
+```text
+Chain INPUT (policy DROP 0 packets, 0 bytes)
+pkts bytes target     prot opt in     out     source               destination
+100   1024   ACCEPT     tcp  --  *      *       192.168.0.0/24       0.0.0.0/0            tcp dpt:22 /* ssh */
+ 42   2048   ACCEPT     tcp  --  *      *       192.168.0.0/24       0.0.0.0/0            tcp dpt:80 /* httpd */
+```
+
+```text
+iptables,table=filter,chain=INPUT,ruleid=ssh pkts=100i,bytes=1024i 1453831884664956455
+iptables,table=filter,chain=INPUT,ruleid=httpd pkts=42i,bytes=2048i 1453831884664956455
+```
diff --git a/content/telegraf/v1/input-plugins/ipvs/_index.md b/content/telegraf/v1/input-plugins/ipvs/_index.md
new file mode 100644
index 000000000..8f0a71f51
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/ipvs/_index.md
@@ -0,0 +1,109 @@
+---
+description: "Telegraf plugin for collecting metrics from IPVS"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: IPVS
+    identifier: input-ipvs
+tags: [IPVS, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# IPVS Input Plugin
+
+The IPVS input plugin uses the Linux kernel netlink socket interface to gather
+metrics about IPVS virtual and real servers.
+
+**Supported Platforms:** Linux
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Collect virtual and real server stats from Linux IPVS
+# This plugin ONLY supports Linux
+[[inputs.ipvs]]
+  # no configuration
+```
+
+### Permissions
+
+Assuming you installed the telegraf package via one of the published packages,
+the process will be running as the `telegraf` user. However, in order for this
+plugin to communicate over netlink sockets, the telegraf process must run as
+`root` (or as a user with the `CAP_NET_ADMIN` and `CAP_NET_RAW` capabilities).
+Make sure these permissions are in place before running telegraf with this
+plugin enabled.
+
+## Metrics
+
+Each virtual server is tagged with how it was configured, using either
+`address` + `port` + `protocol` *or* `fwmark`. This mirrors how one would
+normally configure a virtual server using `ipvsadm`.
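+
+For reference, the two styles correspond to how the virtual server was created
+with `ipvsadm` (the addresses and fwmark value below are illustrative):
+
+```sh
+# virtual server identified by protocol, address, and port
+ipvsadm -A -t 172.18.64.234:9000 -s rr
+# virtual server identified by a firewall mark
+ipvsadm -A -f 47 -s rr
+```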
+
+- ipvs_virtual_server
+  - tags:
+    - sched (the scheduler in use)
+    - netmask (the mask used for determining affinity)
+    - address_family (inet/inet6)
+    - address
+    - port
+    - protocol
+    - fwmark
+  - fields:
+    - connections
+    - pkts_in
+    - pkts_out
+    - bytes_in
+    - bytes_out
+    - pps_in
+    - pps_out
+    - cps
+
+- ipvs_real_server
+  - tags:
+    - address
+    - port
+    - address_family (inet/inet6)
+    - virtual_address
+    - virtual_port
+    - virtual_protocol
+    - virtual_fwmark
+  - fields:
+    - active_connections
+    - inactive_connections
+    - connections
+    - pkts_in
+    - pkts_out
+    - bytes_in
+    - bytes_out
+    - pps_in
+    - pps_out
+    - cps
+
+## Example Output
+
+Virtual server is configured using `proto+addr+port` and backed by 2 real
+servers:
+
+```text
+ipvs_virtual_server,address=172.18.64.234,address_family=inet,netmask=32,port=9000,protocol=tcp,sched=rr bytes_in=0i,bytes_out=0i,pps_in=0i,pps_out=0i,cps=0i,connections=0i,pkts_in=0i,pkts_out=0i 1541019340000000000
+ipvs_real_server,address=172.18.64.220,address_family=inet,port=9000,virtual_address=172.18.64.234,virtual_port=9000,virtual_protocol=tcp active_connections=0i,inactive_connections=0i,pkts_in=0i,bytes_out=0i,pps_out=0i,connections=0i,pkts_out=0i,bytes_in=0i,pps_in=0i,cps=0i 1541019340000000000
+ipvs_real_server,address=172.18.64.219,address_family=inet,port=9000,virtual_address=172.18.64.234,virtual_port=9000,virtual_protocol=tcp active_connections=0i,inactive_connections=0i,pps_in=0i,pps_out=0i,connections=0i,pkts_in=0i,pkts_out=0i,bytes_in=0i,bytes_out=0i,cps=0i 1541019340000000000
+```
+
+Virtual server is configured using `fwmark` and backed by 2 real servers:
+
+```text
+ipvs_virtual_server,address_family=inet,fwmark=47,netmask=32,sched=rr cps=0i,connections=0i,pkts_in=0i,pkts_out=0i,bytes_in=0i,bytes_out=0i,pps_in=0i,pps_out=0i 1541019340000000000
+ipvs_real_server,address=172.18.64.220,address_family=inet,port=9000,virtual_fwmark=47 inactive_connections=0i,pkts_out=0i,bytes_out=0i,pps_in=0i,cps=0i,active_connections=0i,pkts_in=0i,bytes_in=0i,pps_out=0i,connections=0i 1541019340000000000
+ipvs_real_server,address=172.18.64.219,address_family=inet,port=9000,virtual_fwmark=47 cps=0i,active_connections=0i,inactive_connections=0i,connections=0i,pkts_in=0i,bytes_out=0i,pkts_out=0i,bytes_in=0i,pps_in=0i,pps_out=0i 1541019340000000000
+```
diff --git a/content/telegraf/v1/input-plugins/jenkins/_index.md b/content/telegraf/v1/input-plugins/jenkins/_index.md
new file mode 100644
index 000000000..e0f57025b
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/jenkins/_index.md
@@ -0,0 +1,145 @@
+---
+description: "Telegraf plugin for collecting metrics from Jenkins"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Jenkins
+    identifier: input-jenkins
+tags: [Jenkins, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Jenkins Input Plugin
+
+The jenkins plugin gathers information about the nodes and jobs running in a
+Jenkins instance.
+
+This plugin does not require any plugin installed on Jenkins; it uses the
+Jenkins API to retrieve all the information needed.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Read jobs and cluster metrics from Jenkins instances
+[[inputs.jenkins]]
+  ## The Jenkins URL in the format "schema://host:port"
+  url = "http://my-jenkins-instance:8080"
+  # username = "admin"
+  # password = "admin"
+
+  ## Set response_timeout
+  response_timeout = "5s"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use SSL but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## Optional Max Job Build Age filter
+  ## Default 1 hour, ignore builds older than max_build_age
+  # max_build_age = "1h"
+
+  ## Optional Sub Job Depth filter
+  ## Jenkins can have unlimited layers of sub jobs
+  ## This config will limit the layers of pulling; the default value 0 means
+  ## unlimited pulling until there are no more sub jobs
+  # max_subjob_depth = 0
+
+  ## Optional Sub Job Per Layer
+  ## In workflow-multibranch-plugin, each branch will be created as a sub job.
+  ## This config will limit the calls to only the latest branches in each layer,
+  ## empty will use default value 10
+  # max_subjob_per_layer = 10
+
+  ## Jobs to include or exclude from gathering
+  ## When using both lists, job_exclude has priority.
+  ## Wildcards are supported: [ "jobA/*", "jobB/subjob1/*"]
+  # job_include = [ "*" ]
+  # job_exclude = [ ]
+
+  ## Nodes to include or exclude from gathering
+  ## When using both lists, node_exclude has priority.
+  # node_include = [ "*" ]
+  # node_exclude = [ ]
+
+  ## Worker pool for jenkins plugin only
+  ## Empty this field will use default value 5
+  # max_connections = 5
+
+  ## When set to true will add node labels as a comma-separated tag. If none
+  ## are found, then a tag with the value of 'none' is used. Finally, if a
+  ## label contains a comma it is replaced with an underscore.
+  # node_labels_as_tag = false
+```
+
+## Metrics
+
+- jenkins
+  - tags:
+    - source
+    - port
+  - fields:
+    - busy_executors
+    - total_executors
+
+- jenkins_node
+  - tags:
+    - arch
+    - disk_path
+    - temp_path
+    - node_name
+    - status ("online", "offline")
+    - source
+    - port
+  - fields:
+    - disk_available (Bytes)
+    - temp_available (Bytes)
+    - memory_available (Bytes)
+    - memory_total (Bytes)
+    - swap_available (Bytes)
+    - swap_total (Bytes)
+    - response_time (ms)
+    - num_executors
+
+- jenkins_job
+  - tags:
+    - name
+    - parents
+    - result
+    - source
+    - port
+  - fields:
+    - duration (ms)
+    - number
+    - result_code (0 = SUCCESS, 1 = FAILURE, 2 = NOT_BUILD, 3 = UNSTABLE, 4 = ABORTED)
+
+## Sample Queries
+
+```sql
+SELECT mean("memory_available") AS "mean_memory_available", mean("memory_total") AS "mean_memory_total", mean("temp_available") AS "mean_temp_available" FROM "jenkins_node" WHERE time > now() - 15m GROUP BY time(:interval:) FILL(null)
+```
+
+```sql
+SELECT mean("duration") AS "mean_duration" FROM "jenkins_job" WHERE time > now() - 24h GROUP BY time(:interval:) FILL(null)
+```
+
+## Example Output
+
+```text
+jenkins,host=myhost,port=80,source=my-jenkins-instance busy_executors=4i,total_executors=8i 1580418261000000000
+jenkins_node,arch=Linux\ (amd64),disk_path=/var/jenkins_home,temp_path=/tmp,host=myhost,node_name=master,source=my-jenkins-instance,port=8080 swap_total=4294963200,memory_available=586711040,memory_total=6089498624,status=online,response_time=1000i,disk_available=152392036352,temp_available=152392036352,swap_available=3503263744,num_executors=2i 1516031535000000000
+jenkins_job,host=myhost,name=JOB1,parents=apps/br1,result=SUCCESS,source=my-jenkins-instance,port=8080 duration=2831i,result_code=0i 1516026630000000000
+jenkins_job,host=myhost,name=JOB2,parents=apps/br2,result=SUCCESS,source=my-jenkins-instance,port=8080 duration=2285i,result_code=0i 1516027230000000000
+```
diff --git a/content/telegraf/v1/input-plugins/jolokia2_agent/_index.md b/content/telegraf/v1/input-plugins/jolokia2_agent/_index.md
new file mode 100644
index 000000000..5561afae0
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/jolokia2_agent/_index.md
@@ -0,0 +1,208 @@
+---
+description: "Telegraf plugin for collecting metrics from Jolokia2 Agent"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Jolokia2 Agent
+    identifier: input-jolokia2_agent
+tags: [Jolokia2 Agent, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Jolokia2 Agent Input Plugin
+
+The `jolokia2_agent` input plugin reads JMX metrics from one or more
+[Jolokia agent](https://jolokia.org/agent/jvm.html) REST endpoints.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Read JMX metrics from a Jolokia REST agent endpoint
+[[inputs.jolokia2_agent]]
+  # default_tag_prefix      = ""
+  # default_field_prefix    = ""
+  # default_field_separator = "."
+
+  # Add agents URLs to query
+  urls = ["http://localhost:8080/jolokia"]
+  # username = ""
+  # password = ""
+  # response_timeout = "5s"
+
+  ## Optional origin URL to include as a header in the request. Some endpoints
+  ## may reject an empty origin.
+  # origin = ""
+
+  ## Optional TLS config
+  # tls_ca   = "/var/private/ca.pem"
+  # tls_cert = "/var/private/client.pem"
+  # tls_key  = "/var/private/client-key.pem"
+  # insecure_skip_verify = false
+
+  ## Add metrics to read
+  [[inputs.jolokia2_agent.metric]]
+    name  = "java_runtime"
+    mbean = "java.lang:type=Runtime"
+    paths = ["Uptime"]
+```
+
+Optionally, specify TLS options for communicating with agents:
+
+```toml
+[[inputs.jolokia2_agent]]
+  urls = ["https://agent:8080/jolokia"]
+  tls_ca   = "/var/private/ca.pem"
+  tls_cert = "/var/private/client.pem"
+  tls_key  = "/var/private/client-key.pem"
+  #insecure_skip_verify = false
+
+  [[inputs.jolokia2_agent.metric]]
+    name  = "jvm_runtime"
+    mbean = "java.lang:type=Runtime"
+    paths = ["Uptime"]
+```
+
+### Metric Configuration
+
+Each `metric` declaration generates a Jolokia request to fetch telemetry from a
+JMX MBean.
+
+| Key            | Required | Description |
+|----------------|----------|-------------|
+| `mbean`        | yes      | The object name of a JMX MBean. MBean property-key values can contain a wildcard `*`, allowing you to fetch multiple MBeans with one declaration. |
+| `paths`        | no       | A list of MBean attributes to read. |
+| `tag_keys`     | no       | A list of MBean property-key names to convert into tags. The property-key name becomes the tag name, while the property-key value becomes the tag value. |
+| `tag_prefix`   | no       | A string to prepend to the tag names produced by this `metric` declaration. |
+| `field_name`   | no       | A string to set as the name of the field produced by this metric; can contain substitutions. |
+| `field_prefix` | no       | A string to prepend to the field names produced by this `metric` declaration; can contain substitutions. |
+
+Use `paths` to refine which fields to collect.
+
+```toml
+[[inputs.jolokia2_agent.metric]]
+  name  = "jvm_memory"
+  mbean = "java.lang:type=Memory"
+  paths = ["HeapMemoryUsage", "NonHeapMemoryUsage", "ObjectPendingFinalizationCount"]
+```
+
+The preceding `jvm_memory` `metric` declaration produces the following output:
+
+```text
+jvm_memory HeapMemoryUsage.committed=4294967296,HeapMemoryUsage.init=4294967296,HeapMemoryUsage.max=4294967296,HeapMemoryUsage.used=1750658992,NonHeapMemoryUsage.committed=67350528,NonHeapMemoryUsage.init=2555904,NonHeapMemoryUsage.max=-1,NonHeapMemoryUsage.used=65821352,ObjectPendingFinalizationCount=0 1503762436000000000
+```
+
+Use `*` wildcards against `mbean` property-key values to create distinct series
+by capturing values into `tag_keys`.
+
+```toml
+[[inputs.jolokia2_agent.metric]]
+  name     = "jvm_garbage_collector"
+  mbean    = "java.lang:name=*,type=GarbageCollector"
+  paths    = ["CollectionTime", "CollectionCount"]
+  tag_keys = ["name"]
+```
+
+Since `name=*` matches both `G1 Old Generation` and `G1 Young Generation`, and
+`name` is used as a tag, the preceding `jvm_garbage_collector` `metric`
+declaration produces two metrics.
+
+```text
+jvm_garbage_collector,name=G1\ Old\ Generation CollectionCount=0,CollectionTime=0 1503762520000000000
+jvm_garbage_collector,name=G1\ Young\ Generation CollectionTime=32,CollectionCount=2 1503762520000000000
+```
+
+Use `tag_prefix` along with `tag_keys` to add detail to tag names.
+
+```toml
+[[inputs.jolokia2_agent.metric]]
+  name       = "jvm_memory_pool"
+  mbean      = "java.lang:name=*,type=MemoryPool"
+  paths      = ["Usage", "PeakUsage", "CollectionUsage"]
+  tag_keys   = ["name"]
+  tag_prefix = "pool_"
+```
+
+The preceding `jvm_memory_pool` `metric` declaration produces six metrics, each
+with a distinct `pool_name` tag.
+
+```text
+jvm_memory_pool,pool_name=Compressed\ Class\ Space PeakUsage.max=1073741824,PeakUsage.committed=3145728,PeakUsage.init=0,Usage.committed=3145728,Usage.init=0,PeakUsage.used=3017976,Usage.max=1073741824,Usage.used=3017976 1503764025000000000
+jvm_memory_pool,pool_name=Code\ Cache PeakUsage.init=2555904,PeakUsage.committed=6291456,Usage.committed=6291456,PeakUsage.used=6202752,PeakUsage.max=251658240,Usage.used=6210368,Usage.max=251658240,Usage.init=2555904 1503764025000000000
+jvm_memory_pool,pool_name=G1\ Eden\ Space CollectionUsage.max=-1,PeakUsage.committed=56623104,PeakUsage.init=56623104,PeakUsage.used=53477376,Usage.max=-1,Usage.committed=49283072,Usage.used=19922944,CollectionUsage.committed=49283072,CollectionUsage.init=56623104,CollectionUsage.used=0,PeakUsage.max=-1,Usage.init=56623104 1503764025000000000
+jvm_memory_pool,pool_name=G1\ Old\ Gen CollectionUsage.max=1073741824,CollectionUsage.committed=0,PeakUsage.max=1073741824,PeakUsage.committed=1017118720,PeakUsage.init=1017118720,PeakUsage.used=137032208,Usage.max=1073741824,CollectionUsage.init=1017118720,Usage.committed=1017118720,Usage.init=1017118720,Usage.used=134708752,CollectionUsage.used=0 1503764025000000000
+jvm_memory_pool,pool_name=G1\ Survivor\ Space Usage.max=-1,Usage.init=0,CollectionUsage.max=-1,CollectionUsage.committed=7340032,CollectionUsage.used=7340032,PeakUsage.committed=7340032,Usage.committed=7340032,Usage.used=7340032,CollectionUsage.init=0,PeakUsage.max=-1,PeakUsage.init=0,PeakUsage.used=7340032 1503764025000000000
+jvm_memory_pool,pool_name=Metaspace PeakUsage.init=0,PeakUsage.used=21852224,PeakUsage.max=-1,Usage.max=-1,Usage.committed=22282240,Usage.init=0,Usage.used=21852224,PeakUsage.committed=22282240 1503764025000000000
+```
+
+Use substitutions to create fields and field prefixes with MBean property-keys
+captured by wildcards. In the following example, `$1` represents the value of
+the property-key `name`, and `$2` represents the value of the property-key
+`topic`.
+
+```toml
+[[inputs.jolokia2_agent.metric]]
+  name         = "kafka_topic"
+  mbean        = "kafka.server:name=*,topic=*,type=BrokerTopicMetrics"
+  field_prefix = "$1"
+  tag_keys     = ["topic"]
+```
+
+The preceding `kafka_topic` `metric` declaration produces a metric per Kafka
+topic. The `name` MBean property-key is used as a field prefix to aid in
+gathering fields together into the single metric.
+
+```text
+kafka_topic,topic=my-topic BytesOutPerSec.MeanRate=0,FailedProduceRequestsPerSec.MeanRate=0,BytesOutPerSec.EventType="bytes",BytesRejectedPerSec.Count=0,FailedProduceRequestsPerSec.RateUnit="SECONDS",FailedProduceRequestsPerSec.EventType="requests",MessagesInPerSec.RateUnit="SECONDS",BytesInPerSec.EventType="bytes",BytesOutPerSec.RateUnit="SECONDS",BytesInPerSec.OneMinuteRate=0,FailedFetchRequestsPerSec.EventType="requests",TotalFetchRequestsPerSec.MeanRate=146.301533938701,BytesOutPerSec.FifteenMinuteRate=0,TotalProduceRequestsPerSec.MeanRate=0,BytesRejectedPerSec.FifteenMinuteRate=0,MessagesInPerSec.FiveMinuteRate=0,BytesInPerSec.Count=0,BytesRejectedPerSec.MeanRate=0,FailedFetchRequestsPerSec.MeanRate=0,FailedFetchRequestsPerSec.FiveMinuteRate=0,FailedFetchRequestsPerSec.FifteenMinuteRate=0,FailedProduceRequestsPerSec.Count=0,TotalFetchRequestsPerSec.FifteenMinuteRate=128.59314292334466,TotalFetchRequestsPerSec.OneMinuteRate=126.71551273850747,TotalFetchRequestsPerSec.Count=1353483,TotalProduceRequestsPerSec.FifteenMinuteRate=0,FailedFetchRequestsPerSec.OneMinuteRate=0,FailedFetchRequestsPerSec.Count=0,FailedProduceRequestsPerSec.FifteenMinuteRate=0,TotalFetchRequestsPerSec.FiveMinuteRate=130.8516148751592,TotalFetchRequestsPerSec.RateUnit="SECONDS",BytesRejectedPerSec.RateUnit="SECONDS",BytesInPerSec.MeanRate=0,FailedFetchRequestsPerSec.RateUnit="SECONDS",BytesRejectedPerSec.OneMinuteRate=0,BytesOutPerSec.Count=0,BytesOutPerSec.OneMinuteRate=0,MessagesInPerSec.FifteenMinuteRate=0,MessagesInPerSec.MeanRate=0,BytesInPerSec.FiveMinuteRate=0,TotalProduceRequestsPerSec.RateUnit="SECONDS",FailedProduceRequestsPerSec.OneMinuteRate=0,TotalProduceRequestsPerSec.EventType="requests",BytesRejectedPerSec.FiveMinuteRate=0,BytesRejectedPerSec.EventType="bytes",BytesOutPerSec.FiveMinuteRate=0,FailedProduceRequestsPerSec.FiveMinuteRate=0,MessagesInPerSec.Count=0,TotalProduceRequestsPerSec.FiveMinuteRate=0,TotalProduceRequestsPerSec.OneMinuteRate=0,MessagesInPerSec.EventType="messages",MessagesInPerSec.OneMinuteRate=0,TotalFetchRequestsPerSec.EventType="requests",BytesInPerSec.RateUnit="SECONDS",BytesInPerSec.FifteenMinuteRate=0,TotalProduceRequestsPerSec.Count=0 1503767532000000000
+```
+
+This plugin supports default configurations that apply to every `metric`
+declaration.
+
+| Key                       | Default Value | Description |
+|---------------------------|---------------|-------------|
+| `default_field_separator` | `.`           | A character to use to join MBean attributes when creating fields. |
+| `default_field_prefix`    | _None_        | A string to prepend to the field names produced by all `metric` declarations. |
+| `default_tag_prefix`      | _None_        | A string to prepend to the tag names produced by all `metric` declarations. |
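+
+For example, the following declaration (a sketch; the `jvm_` prefix and `_`
+separator are illustrative) prefixes every field from every `metric` with
+`jvm_` and joins attribute paths with `_`:
+
+```toml
+[[inputs.jolokia2_agent]]
+  urls = ["http://localhost:8080/jolokia"]
+  default_field_prefix    = "jvm_"
+  default_field_separator = "_"
+
+  [[inputs.jolokia2_agent.metric]]
+    name  = "memory"
+    mbean = "java.lang:type=Memory"
+    paths = ["HeapMemoryUsage"]
+```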
+
+## Metrics
+
+The metrics depend on the definition(s) in the `inputs.jolokia2_agent.metric`
+section(s).
+
+## Example Output
+
+```text
+jvm_memory_pool,pool_name=Compressed\ Class\ Space PeakUsage.max=1073741824,PeakUsage.committed=3145728,PeakUsage.init=0,Usage.committed=3145728,Usage.init=0,PeakUsage.used=3017976,Usage.max=1073741824,Usage.used=3017976 1503764025000000000
+jvm_memory_pool,pool_name=Code\ Cache PeakUsage.init=2555904,PeakUsage.committed=6291456,Usage.committed=6291456,PeakUsage.used=6202752,PeakUsage.max=251658240,Usage.used=6210368,Usage.max=251658240,Usage.init=2555904 1503764025000000000
+jvm_memory_pool,pool_name=G1\ Eden\ Space CollectionUsage.max=-1,PeakUsage.committed=56623104,PeakUsage.init=56623104,PeakUsage.used=53477376,Usage.max=-1,Usage.committed=49283072,Usage.used=19922944,CollectionUsage.committed=49283072,CollectionUsage.init=56623104,CollectionUsage.used=0,PeakUsage.max=-1,Usage.init=56623104 1503764025000000000
+jvm_memory_pool,pool_name=G1\ Old\ Gen CollectionUsage.max=1073741824,CollectionUsage.committed=0,PeakUsage.max=1073741824,PeakUsage.committed=1017118720,PeakUsage.init=1017118720,PeakUsage.used=137032208,Usage.max=1073741824,CollectionUsage.init=1017118720,Usage.committed=1017118720,Usage.init=1017118720,Usage.used=134708752,CollectionUsage.used=0 1503764025000000000
+jvm_memory_pool,pool_name=G1\ Survivor\ Space Usage.max=-1,Usage.init=0,CollectionUsage.max=-1,CollectionUsage.committed=7340032,CollectionUsage.used=7340032,PeakUsage.committed=7340032,Usage.committed=7340032,Usage.used=7340032,CollectionUsage.init=0,PeakUsage.max=-1,PeakUsage.init=0,PeakUsage.used=7340032 1503764025000000000
+jvm_memory_pool,pool_name=Metaspace PeakUsage.init=0,PeakUsage.used=21852224,PeakUsage.max=-1,Usage.max=-1,Usage.committed=22282240,Usage.init=0,Usage.used=21852224,PeakUsage.committed=22282240 1503764025000000000
+```
+
+## Example Configurations
+
+* ActiveMQ
+* BitBucket
+* Cassandra
+* Hadoop-HDFS
+* Java JVM
+* JBoss
+* Kafka
+* Kafka Connect
+* Tomcat
+* Weblogic
+* ZooKeeper
+
+Please help improve this list and contribute new configuration files by opening
+an issue or pull request.
diff --git a/content/telegraf/v1/input-plugins/jolokia2_proxy/_index.md b/content/telegraf/v1/input-plugins/jolokia2_proxy/_index.md
new file mode 100644
index 000000000..2d6aff893
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/jolokia2_proxy/_index.md
@@ -0,0 +1,106 @@
+---
+description: "Telegraf plugin for collecting metrics from Jolokia2 Proxy"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Jolokia2 Proxy
+    identifier: input-jolokia2_proxy
+tags: [Jolokia2 Proxy, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Jolokia2 Proxy Input Plugin
+
+The `jolokia2_proxy` input plugin reads JMX metrics from one or more _targets_
+by interacting with a [Jolokia proxy](https://jolokia.org/features/proxy.html)
+REST endpoint.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read JMX metrics from a Jolokia REST proxy endpoint
+[[inputs.jolokia2_proxy]]
+  # default_tag_prefix      = ""
+  # default_field_prefix    = ""
+  # default_field_separator = "."
+
+  ## Proxy agent
+  url = "http://localhost:8080/jolokia"
+  # username = ""
+  # password = ""
+  # response_timeout = "5s"
+
+  ## Optional origin URL to include as a header in the request. Some endpoints
+  ## may reject an empty origin.
+  # origin = ""
+
+  ## Optional TLS config
+  # tls_ca   = "/var/private/ca.pem"
+  # tls_cert = "/var/private/client.pem"
+  # tls_key  = "/var/private/client-key.pem"
+  # insecure_skip_verify = false
+
+  ## Add proxy targets to query
+  # default_target_username = ""
+  # default_target_password = ""
+  [[inputs.jolokia2_proxy.target]]
+    url = "service:jmx:rmi:///jndi/rmi://targethost:9999/jmxrmi"
+    # username = ""
+    # password = ""
+
+  ## Add metrics to read
+  [[inputs.jolokia2_proxy.metric]]
+    name  = "java_runtime"
+    mbean = "java.lang:type=Runtime"
+    paths = ["Uptime"]
+```
+
+Optionally, specify TLS options for communicating with proxies:
+
+```toml
+[[inputs.jolokia2_proxy]]
+  url = "https://proxy:8080/jolokia"
+
+  tls_ca   = "/var/private/ca.pem"
+  tls_cert = "/var/private/client.pem"
+  tls_key  = "/var/private/client-key.pem"
+  #insecure_skip_verify = false
+
+  #default_target_username = ""
+  #default_target_password = ""
+  [[inputs.jolokia2_proxy.target]]
+    url = "service:jmx:rmi:///jndi/rmi://targethost:9999/jmxrmi"
+    # username = ""
+    # password = ""
+
+  [[inputs.jolokia2_proxy.metric]]
+    name  = "jvm_runtime"
+    mbean = "java.lang:type=Runtime"
+    paths = ["Uptime"]
+```
+
+### Metric Configuration
+
+See the `jolokia2_agent` input plugin documentation for details on `metric`
+configuration.
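+
+As an illustration (a sketch based on the example output below; the attribute
+paths are assumptions), a wildcard MBean pattern combined with tag keys
+produces the `jvm_memory_pool` series:
+
+```toml
+[[inputs.jolokia2_proxy.metric]]
+  name       = "jvm_memory_pool"
+  mbean      = "java.lang:name=*,type=MemoryPool"
+  paths      = ["Usage", "PeakUsage", "CollectionUsage"]
+  tag_keys   = ["name"]
+  tag_prefix = "pool_"
+```
+
+Each matched `name` property of the MBean becomes a `pool_name` tag on the
+resulting metric.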
+
+## Example Output
+
+```text
+jvm_memory_pool,pool_name=Compressed\ Class\ Space PeakUsage.max=1073741824,PeakUsage.committed=3145728,PeakUsage.init=0,Usage.committed=3145728,Usage.init=0,PeakUsage.used=3017976,Usage.max=1073741824,Usage.used=3017976 1503764025000000000
+jvm_memory_pool,pool_name=Code\ Cache PeakUsage.init=2555904,PeakUsage.committed=6291456,Usage.committed=6291456,PeakUsage.used=6202752,PeakUsage.max=251658240,Usage.used=6210368,Usage.max=251658240,Usage.init=2555904 1503764025000000000
+jvm_memory_pool,pool_name=G1\ Eden\ Space CollectionUsage.max=-1,PeakUsage.committed=56623104,PeakUsage.init=56623104,PeakUsage.used=53477376,Usage.max=-1,Usage.committed=49283072,Usage.used=19922944,CollectionUsage.committed=49283072,CollectionUsage.init=56623104,CollectionUsage.used=0,PeakUsage.max=-1,Usage.init=56623104 1503764025000000000
+jvm_memory_pool,pool_name=G1\ Old\ Gen CollectionUsage.max=1073741824,CollectionUsage.committed=0,PeakUsage.max=1073741824,PeakUsage.committed=1017118720,PeakUsage.init=1017118720,PeakUsage.used=137032208,Usage.max=1073741824,CollectionUsage.init=1017118720,Usage.committed=1017118720,Usage.init=1017118720,Usage.used=134708752,CollectionUsage.used=0 1503764025000000000
+jvm_memory_pool,pool_name=G1\ Survivor\ Space Usage.max=-1,Usage.init=0,CollectionUsage.max=-1,CollectionUsage.committed=7340032,CollectionUsage.used=7340032,PeakUsage.committed=7340032,Usage.committed=7340032,Usage.used=7340032,CollectionUsage.init=0,PeakUsage.max=-1,PeakUsage.init=0,PeakUsage.used=7340032 1503764025000000000
+jvm_memory_pool,pool_name=Metaspace PeakUsage.init=0,PeakUsage.used=21852224,PeakUsage.max=-1,Usage.max=-1,Usage.committed=22282240,Usage.init=0,Usage.used=21852224,PeakUsage.committed=22282240 1503764025000000000
+```
diff --git a/content/telegraf/v1/input-plugins/jti_openconfig_telemetry/_index.md b/content/telegraf/v1/input-plugins/jti_openconfig_telemetry/_index.md
new file mode 100644
index 000000000..effbb7b62
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/jti_openconfig_telemetry/_index.md
@@ -0,0 +1,114 @@
+---
+description: "Telegraf plugin for collecting metrics from JTI OpenConfig Telemetry"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: JTI OpenConfig Telemetry
+    identifier: input-jti_openconfig_telemetry
+tags: [JTI OpenConfig Telemetry, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# JTI OpenConfig Telemetry Input Plugin
+
+This plugin reads the Juniper Networks implementation of OpenConfig telemetry
+data from the listed sensors using the Junos Telemetry Interface. Refer to
+[openconfig.net](http://openconfig.net/) for more details about OpenConfig and
+[Junos Telemetry Interface (JTI)](https://www.juniper.net/documentation/en_US/junos/topics/concept/junos-telemetry-interface-oveview.html).
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin-specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Subscribe and receive OpenConfig Telemetry data using JTI
+[[inputs.jti_openconfig_telemetry]]
+  ## List of device addresses to collect telemetry from
+  servers = ["localhost:1883"]
+
+  ## Authentication details. Username and password are required if the device
+  ## expects authentication. The client ID must be unique when connecting from
+  ## multiple instances of telegraf to the same device
+  username = "user"
+  password = "pass"
+  client_id = "telegraf"
+
+  ## Frequency to get data
+  sample_frequency = "1000ms"
+
+  ## Sensors to subscribe to
+  ## An identifier for each sensor can be provided in the path by separating it
+  ## with a space; otherwise the sensor path is used as the identifier.
+  ## When an identifier is used, a list of space-separated sensors can be given.
+  ## A single subscription is created for all these sensors, and the data is
+  ## saved to a measurement named after the identifier.
+  sensors = [
+   "/interfaces/",
+   "collection /components/ /lldp",
+  ]
+
+  ## A reporting rate can be specified per sensor group by prefixing the sensor
+  ## paths / collection name with a duration. Entries without a reporting rate
+  ## use the configured sample frequency.
+  # sensors = [
+  #  "1000ms customReporting /interfaces /lldp",
+  #  "2000ms collection /components",
+  #  "/interfaces",
+  # ]
+
+  ## Timestamp Source
+  ## Set to 'collection' for time of collection, and 'data' for using the time
+  ## provided by the _timestamp field.
+  # timestamp_source = "collection"
+
+  ## Optional TLS Config
+  # enable_tls = false
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Minimal TLS version to accept by the client
+  # tls_min_version = "TLS12"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## Delay between retry attempts of failed RPC calls or streams. Defaults to 1000ms.
+  ## Failed streams/calls will not be retried if 0 is provided
+  retry_delay = "1000ms"
+
+  ## Period for sending keep-alive packets on idle connections
+  ## This is helpful to identify broken connections to the server
+  # keep_alive_period = "10s"
+
+  ## To treat all string values as tags, set this to true
+  str_as_tags = false
+```
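+
+To make the identifier behavior concrete, a minimal sketch (the server address
+and sensor paths are illustrative): with the configuration below, data from
+both `/components/` and `/lldp` is written to a single measurement named
+`collection`, while `/interfaces/` data uses the sensor path itself as its
+identifier.
+
+```toml
+[[inputs.jti_openconfig_telemetry]]
+  servers = ["localhost:1883"]
+  sensors = [
+    "/interfaces/",
+    "collection /components/ /lldp",
+  ]
+```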
+
+## Tags
+
+- All measurements are tagged appropriately using the identifier information
+  in incoming data
+
+## Metrics
+
+The metrics produced depend on the subscribed sensor paths and the data the
+device reports.
+
+## Example Output
+
+The measurement names, tags, and fields vary with the subscribed sensors.
diff --git a/content/telegraf/v1/input-plugins/kafka_consumer/_index.md b/content/telegraf/v1/input-plugins/kafka_consumer/_index.md
new file mode 100644
index 000000000..f87eaeb0a
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/kafka_consumer/_index.md
@@ -0,0 +1,232 @@
+---
+description: "Telegraf plugin for collecting metrics from Kafka Consumer"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Kafka Consumer
+    identifier: input-kafka_consumer
+tags: [Kafka Consumer, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Kafka Consumer Input Plugin
+
+The [Kafka](https://kafka.apache.org) consumer plugin reads from Kafka
+and creates metrics using one of the supported [input data formats](/telegraf/v1/data_formats/input).
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin-specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `sasl_username`,
+`sasl_password` and `sasl_access_token` options.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from Kafka topics
+[[inputs.kafka_consumer]]
+  ## Kafka brokers.
+  brokers = ["localhost:9092"]
+
+  ## Set the minimal supported Kafka version. Use a four-part version string
+  ## (e.g. "0.10.2.0") for 0.x releases and a three-part string (e.g. "2.6.0")
+  ## for 1.0.0 and later. This setting enables the use of new
+  ## Kafka features and APIs. Must be 0.10.2.0 (the default) or greater.
+  ## Please, check the list of supported versions at
+  ## https://pkg.go.dev/github.com/Shopify/sarama#SupportedVersions
+  ##   ex: kafka_version = "2.6.0"
+  ##   ex: kafka_version = "0.10.2.0"
+  # kafka_version = "0.10.2.0"
+
+  ## Topics to consume.
+  topics = ["telegraf"]
+
+  ## Topic regular expressions to consume.  Matches will be added to topics.
+  ## Example: topic_regexps = [ "*test", "metric[0-9A-z]*" ]
+  # topic_regexps = [ ]
+
+  ## When set this tag will be added to all metrics with the topic as the value.
+  # topic_tag = ""
+
+  ## The list of Kafka message headers that should be passed as metric tags.
+  ## Works only for Kafka version 0.11+; on lower versions the message headers
+  ## are not available.
+  # msg_headers_as_tags = []
+
+  ## The name of the Kafka message header whose value should override the metric name.
+  ## If the same header is specified both here and in the msg_headers_as_tags
+  ## option, it is excluded from the msg_headers_as_tags list.
+  # msg_header_as_metric_name = ""
+
+  ## Set metric(s) timestamp using the given source.
+  ## Available options are:
+  ##   metric -- do not modify the metric timestamp
+  ##   inner  -- use the inner message timestamp (Kafka v0.10+)
+  ##   outer  -- use the outer (compressed) block timestamp (Kafka v0.10+)
+  # timestamp_source = "metric"
+
+  ## Optional Client id
+  # client_id = "Telegraf"
+
+  ## Optional TLS Config
+  # enable_tls = false
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## Period between keep alive probes.
+  ## Defaults to the OS configuration if not specified or zero.
+  # keep_alive_period = "15s"
+
+  ## SASL authentication credentials.  These settings should typically be used
+  ## with TLS encryption enabled
+  # sasl_username = "kafka"
+  # sasl_password = "secret"
+
+  ## Optional SASL:
+  ## one of: OAUTHBEARER, PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, GSSAPI
+  ## (defaults to PLAIN)
+  # sasl_mechanism = ""
+
+  ## used if sasl_mechanism is GSSAPI
+  # sasl_gssapi_service_name = ""
+  # ## One of: KRB5_USER_AUTH and KRB5_KEYTAB_AUTH
+  # sasl_gssapi_auth_type = "KRB5_USER_AUTH"
+  # sasl_gssapi_kerberos_config_path = "/"
+  # sasl_gssapi_realm = "realm"
+  # sasl_gssapi_key_tab_path = ""
+  # sasl_gssapi_disable_pafxfast = false
+
+  ## used if sasl_mechanism is OAUTHBEARER
+  # sasl_access_token = ""
+
+  ## SASL protocol version.  When connecting to Azure EventHub set to 0.
+  # sasl_version = 1
+
+  # Disable Kafka metadata full fetch
+  # metadata_full = false
+
+  ## Name of the consumer group.
+  # consumer_group = "telegraf_metrics_consumers"
+
+  ## Compression codec represents the various compression codecs recognized by
+  ## Kafka in messages.
+  ##  0 : None
+  ##  1 : Gzip
+  ##  2 : Snappy
+  ##  3 : LZ4
+  ##  4 : ZSTD
+  # compression_codec = 0
+  ## Initial offset position; one of "oldest" or "newest".
+  # offset = "oldest"
+
+  ## Consumer group partition assignment strategy; one of "range", "roundrobin" or "sticky".
+  # balance_strategy = "range"
+
+  ## Maximum number of retries for metadata operations including
+  ## connecting. Sets Sarama library's Metadata.Retry.Max config value. If 0 or
+  ## unset, use the Sarama default of 3,
+  # metadata_retry_max = 0
+
+  ## Type of retry backoff. Valid options: "constant", "exponential"
+  # metadata_retry_type = "constant"
+
+  ## Amount of time to wait before retrying. When metadata_retry_type is
+  ## "constant", each retry is delayed this amount. When "exponential", the
+  ## first retry is delayed this amount, and subsequent delays are doubled. If 0
+  ## or unset, use the Sarama default of 250 ms
+  # metadata_retry_backoff = 0
+
+  ## Maximum amount of time to wait before retrying when metadata_retry_type is
+  ## "exponential". Ignored for other retry types. If 0, there is no backoff
+  ## limit.
+  # metadata_retry_max_duration = 0
+
+  ## When set to true, this turns each bootstrap broker address into a set of
+  ## IPs, then does a reverse lookup on each one to get its canonical hostname.
+  ## This list of hostnames then replaces the original address list.
+  # resolve_canonical_bootstrap_servers_only = false
+
+  ## Strategy for making connection to kafka brokers. Valid options: "startup",
+  ## "defer". If set to "defer" the plugin is allowed to start before making a
+  ## connection. This is useful if the broker may be down when telegraf is
+  ## started, but if there are any typos in the broker setting, they will cause
+  ## connection failures without warning at startup
+  # connection_strategy = "startup"
+
+  ## Maximum length of a message to consume, in bytes (default 0/unlimited);
+  ## larger messages are dropped
+  max_message_len = 1000000
+
+  ## Max undelivered messages
+  ## This plugin uses tracking metrics, which ensure messages are read to
+  ## outputs before acknowledging them to the original broker to ensure data
+  ## is not lost. This option sets the maximum messages to read from the
+  ## broker that have not been written by an output.
+  ##
+  ## This value needs to be picked with awareness of the agent's
+  ## metric_batch_size value as well. Setting max undelivered messages too high
+  ## can result in a constant stream of data batches to the output, while
+  ## setting it too low may prevent the broker's messages from ever being flushed.
+  # max_undelivered_messages = 1000
+
+  ## Maximum amount of time the consumer should take to process messages. If
+  ## the debug log prints messages from sarama about 'abandoning subscription
+  ## to [topic] because consuming was taking too long', increase this value to
+  ## longer than the time taken by the output plugin(s).
+  ##
+  ## Note that the effective timeout could be between 'max_processing_time' and
+  ## '2 * max_processing_time'.
+  # max_processing_time = "100ms"
+
+  ## The default number of message bytes to fetch from the broker in each
+  ## request (default 1MB). This should be larger than the majority of
+  ## your messages, or else the consumer will spend a lot of time
+  ## negotiating sizes and not actually consuming. Similar to the JVM's
+  ## `fetch.message.max.bytes`.
+  # consumer_fetch_default = "1MB"
+
+  ## Data format to consume.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  data_format = "influx"
+```
+
+
+## Metrics
+
+The plugin accepts arbitrary input and parses it according to the `data_format`
+setting. There is no predefined metric format.
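+
+For instance, a hedged sketch (the broker, topic, and field names are
+placeholders) pairing the consumer with the JSON parser:
+
+```toml
+[[inputs.kafka_consumer]]
+  brokers = ["localhost:9092"]
+  topics  = ["sensors"]
+  topic_tag = "topic"
+
+  ## Parse each message body as JSON; numeric values become fields
+  data_format = "json"
+```
+
+A message body such as `{"temperature": 21.5}` would then be emitted as a
+metric with the field `temperature=21.5` and a `topic=sensors` tag.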
+
+## Example Output
+
+There is no predefined metric format, so output depends on plugin input.
diff --git a/content/telegraf/v1/input-plugins/kapacitor/_index.md b/content/telegraf/v1/input-plugins/kapacitor/_index.md
new file mode 100644
index 000000000..10db09829
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/kapacitor/_index.md
@@ -0,0 +1,476 @@
+---
+description: "Telegraf plugin for collecting metrics from Kapacitor"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Kapacitor
+    identifier: input-kapacitor
+tags: [Kapacitor, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Kapacitor Input Plugin
+
+The Kapacitor plugin collects metrics from the given Kapacitor instances.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read Kapacitor-formatted JSON metrics from one or more HTTP endpoints
+[[inputs.kapacitor]]
+  ## Multiple URLs from which to read Kapacitor-formatted JSON
+  ## Default is "http://localhost:9092/kapacitor/v1/debug/vars".
+  urls = [
+    "http://localhost:9092/kapacitor/v1/debug/vars"
+  ]
+
+  ## Time limit for http requests
+  timeout = "5s"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+## Metrics
+
+### Measurements and fields
+
+- kapacitor
+  - num_enabled_tasks
+  - num_subscriptions
+  - num_tasks
+- kapacitor_alert
+  - notification-dropped
+  - primary-handle-count
+  - secondary-handle-count
+- kapacitor_cluster
+  - dropped_member_events
+  - dropped_user_events
+  - query_handler_errors
+- kapacitor_edges
+  - collected
+  - emitted
+- kapacitor_ingress
+  - points_received
+- kapacitor_load
+  - errors
+- kapacitor_memstats
+  - alloc_bytes
+  - buck_hash_sys_bytes
+  - frees
+  - gc_sys_bytes
+  - gc_cpu_fraction
+  - heap_alloc_bytes
+  - heap_idle_bytes
+  - heap_in_use_bytes
+  - heap_objects
+  - heap_released_bytes
+  - heap_sys_bytes
+  - last_gc_ns
+  - lookups
+  - mallocs
+  - mcache_in_use_bytes
+  - mcache_sys_bytes
+  - mspan_in_use_bytes
+  - mspan_sys_bytes
+  - next_gc_ns
+  - num_gc
+  - other_sys_bytes
+  - pause_total_ns
+  - stack_in_use_bytes
+  - stack_sys_bytes
+  - sys_bytes
+  - total_alloc_bytes
+- kapacitor_nodes
+  - alerts_inhibited
+  - alerts_triggered
+  - avg_exec_time_ns
+  - crits_triggered
+  - errors
+  - infos_triggered
+  - oks_triggered
+  - points_written
+  - warns_triggered
+  - working_cardinality
+  - write_errors
+- kapacitor_topics
+  - collected
+
+---
+
+## kapacitor
+
+The `kapacitor` measurement stores fields with information related to
+[Kapacitor tasks](https://docs.influxdata.com/kapacitor/latest/introduction/getting-started/#kapacitor-tasks) and [subscriptions](https://docs.influxdata.com/kapacitor/latest/administration/subscription-management/).
+
+### num_enabled_tasks
+
+The number of enabled Kapacitor tasks.
+
+### num_subscriptions
+
+The number of Kapacitor/InfluxDB subscriptions.
+
+### num_tasks
+
+The total number of Kapacitor tasks.
+
+---
+
+## kapacitor_alert
+
+The `kapacitor_alert` measurement stores fields with information related to
+[Kapacitor alerts](https://docs.influxdata.com/kapacitor/v1.5/working/alerts/).
+
+### notification-dropped
+
+The number of internal notifications dropped because they arrive too late from
+another Kapacitor node.  If this count is increasing, Kapacitor Enterprise nodes
+aren't able to communicate fast enough to keep up with the volume of alerts.
+
+### primary-handle-count
+
+The number of times this node handled an alert as the primary. This count should
+increase under normal conditions.
+
+### secondary-handle-count
+
+The number of times this node handled an alert as the secondary. An increase in
+this counter indicates that the primary is failing to handle alerts in a timely
+manner.
+
+---
+
+## kapacitor_cluster
+
+The `kapacitor_cluster` measurement reflects the ability of [Kapacitor nodes to
+communicate](https://docs.influxdata.com/enterprise_kapacitor/v1.5/administration/configuration/#cluster-communications)
+with one another. Specifically, these metrics track the gossip communication
+between the Kapacitor nodes.
+
+### dropped_member_events
+
+The number of gossip member events that were dropped.
+
+### dropped_user_events
+
+The number of gossip user events that were dropped.
+
+### query_handler_errors
+
+The number of errors from event handlers.
+
+---
+
+## kapacitor_edges
+
+The `kapacitor_edges` measurement stores fields with information related to
+[edges](https://docs.influxdata.com/kapacitor/latest/tick/introduction/#pipelines) in Kapacitor TICKscripts.
+
+### collected
+
+The number of messages collected by TICKscript edges.
+
+### emitted
+
+The number of messages emitted by TICKscript edges.
+
+---
+
+## kapacitor_ingress
+
+The `kapacitor_ingress` measurement stores fields with information related to
+data coming into Kapacitor.
+
+### points_received
+
+The number of points received by Kapacitor.
+
+---
+
+## kapacitor_load
+
+The `kapacitor_load` measurement stores fields with information related to the
+[Kapacitor Load Directory service](https://docs.influxdata.com/kapacitor/latest/guides/load_directory/).
+
+### errors
+
+The number of errors reported from the load directory service.
+
+---
+
+## kapacitor_memstats
+
+The `kapacitor_memstats` measurement stores fields related to Kapacitor memory
+usage.
+
+### alloc_bytes
+
+The number of bytes of memory allocated by Kapacitor that are still in use.
+
+### buck_hash_sys_bytes
+
+The number of bytes of memory used by the profiling bucket hash table.
+
+### frees
+
+The number of heap objects freed.
+
+### gc_sys_bytes
+
+The number of bytes of memory used for garbage collection system metadata.
+
+### gc_cpu_fraction
+
+The fraction of Kapacitor's available CPU time used by garbage collection since
+Kapacitor started.
+
+### heap_alloc_bytes
+
+The number of reachable and unreachable heap objects garbage collection has
+not freed.
+
+### heap_idle_bytes
+
+The number of heap bytes waiting to be used.
+
+### heap_in_use_bytes
+
+The number of heap bytes in use.
+
+### heap_objects
+
+The number of allocated objects.
+
+### heap_released_bytes
+
+The number of heap bytes released to the operating system.
+
+### heap_sys_bytes
+
+The number of heap bytes obtained from `system`.
+
+### last_gc_ns
+
+The nanosecond epoch time of the last garbage collection.
+
+### lookups
+
+The total number of pointer lookups.
+
+### mallocs
+
+The total number of mallocs.
+
+### mcache_in_use_bytes
+
+The number of bytes in use by mcache structures.
+
+### mcache_sys_bytes
+
+The number of bytes used for mcache structures obtained from `system`.
+
+### mspan_in_use_bytes
+
+The number of bytes in use by mspan structures.
+
+### mspan_sys_bytes
+
+The number of bytes used for mspan structures obtained from `system`.
+
+### next_gc_ns
+
+The nanosecond epoch time of the next garbage collection.
+
+### num_gc
+
+The number of completed garbage collection cycles.
+
+### other_sys_bytes
+
+The number of bytes used for other system allocations.
+
+### pause_total_ns
+
+The total number of nanoseconds spent in garbage collection "stop-the-world"
+pauses since Kapacitor started.
+
+### stack_in_use_bytes
+
+The number of bytes in use by the stack allocator.
+
+### stack_sys_bytes
+
+The number of bytes obtained from `system` for the stack allocator.
+
+### sys_bytes
+
+The number of bytes of memory obtained from `system`.
+
+### total_alloc_bytes
+
+The total number of bytes allocated, even if freed.
+
+---
+
+## kapacitor_nodes
+
+The `kapacitor_nodes` measurement stores fields related to events that occur in
+[TICKscript nodes](https://docs.influxdata.com/kapacitor/latest/nodes/).
+
+### alerts_inhibited
+
+The total number of alerts inhibited by TICKscripts.
+
+### alerts_triggered
+
+The total number of alerts triggered by TICKscripts.
+
+### avg_exec_time_ns
+
+The average execution time of TICKscripts in nanoseconds.
+
+### crits_triggered
+
+The number of critical (`crit`) alerts triggered by TICKscripts.
+
+### errors (from TICKscripts)
+
+The number of errors caused by TICKscripts.
+
+### infos_triggered
+
+The number of info (`info`) alerts triggered by TICKscripts.
+
+### oks_triggered
+
+The number of ok (`ok`) alerts triggered by TICKscripts.
+
+### points_written
+
+The number of points written to InfluxDB or back to Kapacitor.
+
+### warns_triggered
+
+The number of warning (`warn`) alerts triggered by TICKscripts.
+
+### working_cardinality
+
+The total number of unique series processed.
+
+### write_errors
+
+The number of errors that occurred when writing to InfluxDB or other write
+endpoints.
+
+---
+
+## kapacitor_topics
+
+The `kapacitor_topics` measurement stores fields related to
+[Kapacitor topics](https://docs.influxdata.com/kapacitor/latest/working/using_alert_topics/).
+
+### collected (kapacitor_topics)
+
+The number of events collected by Kapacitor topics.
+
+---
+
+__Note:__ The Kapacitor variables `host`, `cluster_id`, and `server_id`
+are currently not recorded due to the potential high cardinality of
+these values.
+
+## Example Output
+
+```text
+kapacitor_memstats,host=hostname.local,kap_version=1.1.0~rc2,url=http://localhost:9092/kapacitor/v1/debug/vars alloc_bytes=6974808i,buck_hash_sys_bytes=1452609i,frees=207281i,gc_sys_bytes=802816i,gc_cpu_fraction=0.00004693548939673313,heap_alloc_bytes=6974808i,heap_idle_bytes=6742016i,heap_in_use_bytes=9183232i,heap_objects=23216i,heap_released_bytes=0i,heap_sys_bytes=15925248i,last_gc_ns=1478791460012676997i,lookups=88i,mallocs=230497i,mcache_in_use_bytes=9600i,mcache_sys_bytes=16384i,mspan_in_use_bytes=98560i,mspan_sys_bytes=131072i,next_gc_ns=11467528i,num_gc=8i,other_sys_bytes=2236087i,pause_total_ns=2994110i,stack_in_use_bytes=1900544i,stack_sys_bytes=1900544i,sys_bytes=22464760i,total_alloc_bytes=35023600i 1478791462000000000
+kapacitor,host=hostname.local,kap_version=1.1.0~rc2,url=http://localhost:9092/kapacitor/v1/debug/vars num_enabled_tasks=5i,num_subscriptions=5i,num_tasks=5i 1478791462000000000
+kapacitor_edges,child=stream0,host=hostname.local,parent=stream,task=deadman-test,type=stream collected=0,emitted=0 1478791462000000000
+kapacitor_ingress,database=_internal,host=hostname.local,measurement=shard,retention_policy=monitor,task_master=main points_received=120 1478791462000000000
+kapacitor_ingress,database=_internal,host=hostname.local,measurement=subscriber,retention_policy=monitor,task_master=main points_received=60 1478791462000000000
+kapacitor_nodes,host=hostname.local,kind=http_out,node=http_out3,task=sys-stats,type=stream avg_exec_time_ns=0i 1478791462000000000
+kapacitor_edges,child=window6,host=hostname.local,parent=derivative5,task=deadman-test,type=stream collected=0,emitted=0 1478791462000000000
+kapacitor_nodes,host=hostname.local,kind=from,node=from1,task=sys-stats,type=stream avg_exec_time_ns=0i 1478791462000000000
+kapacitor_nodes,host=hostname.local,kind=stream,node=stream0,task=test,type=stream avg_exec_time_ns=0i 1478791462000000000
+kapacitor_nodes,host=hostname.local,kind=window,node=window6,task=deadman-test,type=stream avg_exec_time_ns=0i 1478791462000000000
+kapacitor_ingress,database=_internal,host=hostname.local,measurement=cq,retention_policy=monitor,task_master=main points_received=10 1478791462000000000
+kapacitor_edges,child=http_out3,host=hostname.local,parent=window2,task=sys-stats,type=batch collected=0,emitted=0 1478791462000000000
+kapacitor_edges,child=mean4,host=hostname.local,parent=log3,task=deadman-test,type=batch collected=0,emitted=0 1478791462000000000
+kapacitor_ingress,database=_kapacitor,host=hostname.local,measurement=nodes,retention_policy=autogen,task_master=main points_received=207 1478791462000000000
+kapacitor_edges,child=stream0,host=hostname.local,parent=stream,task=sys-stats,type=stream collected=0,emitted=0 1478791462000000000
+kapacitor_edges,child=log6,host=hostname.local,parent=sum5,task=derivative-test,type=stream collected=0,emitted=0 1478791462000000000
+kapacitor_edges,child=from1,host=hostname.local,parent=stream0,task=sys-stats,type=stream collected=0,emitted=0 1478791462000000000
+kapacitor_nodes,host=hostname.local,kind=alert,node=alert2,task=test,type=stream alerts_triggered=0,avg_exec_time_ns=0i,crits_triggered=0,infos_triggered=0,oks_triggered=0,warns_triggered=0 1478791462000000000
+kapacitor_edges,child=log3,host=hostname.local,parent=derivative2,task=derivative-test,type=stream collected=0,emitted=0 1478791462000000000
+kapacitor_ingress,database=_kapacitor,host=hostname.local,measurement=runtime,retention_policy=autogen,task_master=main points_received=9 1478791462000000000
+kapacitor_ingress,database=_internal,host=hostname.local,measurement=tsm1_filestore,retention_policy=monitor,task_master=main points_received=120 1478791462000000000
+kapacitor_edges,child=derivative2,host=hostname.local,parent=from1,task=derivative-test,type=stream collected=0,emitted=0 1478791462000000000
+kapacitor_nodes,host=hostname.local,kind=stream,node=stream0,task=derivative-test,type=stream avg_exec_time_ns=0i 1478791462000000000
+kapacitor_ingress,database=_internal,host=hostname.local,measurement=queryExecutor,retention_policy=monitor,task_master=main points_received=10 1478791462000000000
+kapacitor_ingress,database=_internal,host=hostname.local,measurement=tsm1_wal,retention_policy=monitor,task_master=main points_received=120 1478791462000000000
+kapacitor_nodes,host=hostname.local,kind=log,node=log6,task=derivative-test,type=stream avg_exec_time_ns=0i 1478791462000000000
+kapacitor_edges,child=stream,host=hostname.local,parent=stats,task=task_master:main,type=stream collected=598,emitted=598 1478791462000000000
+kapacitor_ingress,database=_internal,host=hostname.local,measurement=write,retention_policy=monitor,task_master=main points_received=10 1478791462000000000
+kapacitor_edges,child=stream0,host=hostname.local,parent=stream,task=derivative-test,type=stream collected=0,emitted=0 1478791462000000000
+kapacitor_nodes,host=hostname.local,kind=log,node=log3,task=deadman-test,type=stream avg_exec_time_ns=0i 1478791462000000000
+kapacitor_nodes,host=hostname.local,kind=from,node=from1,task=deadman-test,type=stream avg_exec_time_ns=0i 1478791462000000000
+kapacitor_ingress,database=_kapacitor,host=hostname.local,measurement=ingress,retention_policy=autogen,task_master=main points_received=148 1478791462000000000
+kapacitor_nodes,host=hostname.local,kind=eval,node=eval4,task=derivative-test,type=stream avg_exec_time_ns=0i,eval_errors=0 1478791462000000000
+kapacitor_nodes,host=hostname.local,kind=derivative,node=derivative2,task=derivative-test,type=stream avg_exec_time_ns=0i 1478791462000000000
+kapacitor_ingress,database=_internal,host=hostname.local,measurement=runtime,retention_policy=monitor,task_master=main points_received=10 1478791462000000000
+kapacitor_ingress,database=_internal,host=hostname.local,measurement=httpd,retention_policy=monitor,task_master=main points_received=10 1478791462000000000
+kapacitor_edges,child=sum5,host=hostname.local,parent=eval4,task=derivative-test,type=stream collected=0,emitted=0 1478791462000000000
+kapacitor_ingress,database=_kapacitor,host=hostname.local,measurement=kapacitor,retention_policy=autogen,task_master=main points_received=9 1478791462000000000
+kapacitor_nodes,host=hostname.local,kind=from,node=from1,task=test,type=stream avg_exec_time_ns=0i 1478791462000000000
+kapacitor_ingress,database=_internal,host=hostname.local,measurement=tsm1_engine,retention_policy=monitor,task_master=main points_received=120 1478791462000000000
+kapacitor_nodes,host=hostname.local,kind=window,node=window2,task=deadman-test,type=stream avg_exec_time_ns=0i 1478791462000000000
+kapacitor_nodes,host=hostname.local,kind=stream,node=stream0,task=deadman-test,type=stream avg_exec_time_ns=0i 1478791462000000000
+kapacitor_edges,child=influxdb_out4,host=hostname.local,parent=http_out3,task=sys-stats,type=batch collected=0,emitted=0 1478791462000000000
+kapacitor_edges,child=window2,host=hostname.local,parent=from1,task=deadman-test,type=stream collected=0,emitted=0 1478791462000000000
+kapacitor_nodes,host=hostname.local,kind=from,node=from1,task=derivative-test,type=stream avg_exec_time_ns=0i 1478791462000000000
+kapacitor_edges,child=from1,host=hostname.local,parent=stream0,task=deadman-test,type=stream collected=0,emitted=0 1478791462000000000
+kapacitor_ingress,database=_internal,host=hostname.local,measurement=database,retention_policy=monitor,task_master=main points_received=40 1478791462000000000
+kapacitor_edges,child=stream,host=hostname.local,parent=write_points,task=task_master:main,type=stream collected=750,emitted=750 1478791462000000000
+kapacitor_edges,child=log7,host=hostname.local,parent=window6,task=deadman-test,type=batch collected=0,emitted=0 1478791462000000000
+kapacitor_edges,child=window2,host=hostname.local,parent=from1,task=sys-stats,type=stream collected=0,emitted=0 1478791462000000000
+kapacitor_nodes,host=hostname.local,kind=log,node=log7,task=deadman-test,type=stream avg_exec_time_ns=0i 1478791462000000000
+kapacitor_ingress,database=_kapacitor,host=hostname.local,measurement=edges,retention_policy=autogen,task_master=main points_received=225 1478791462000000000
+kapacitor_nodes,host=hostname.local,kind=derivative,node=derivative5,task=deadman-test,type=stream avg_exec_time_ns=0i 1478791462000000000
+kapacitor_edges,child=from1,host=hostname.local,parent=stream0,task=test,type=stream collected=0,emitted=0 1478791462000000000
+kapacitor_edges,child=alert2,host=hostname.local,parent=from1,task=test,type=stream collected=0,emitted=0 1478791462000000000
+kapacitor_nodes,host=hostname.local,kind=log,node=log3,task=derivative-test,type=stream avg_exec_time_ns=0i 1478791462000000000
+kapacitor_nodes,host=hostname.local,kind=influxdb_out,node=influxdb_out4,task=sys-stats,type=stream avg_exec_time_ns=0i,points_written=0,write_errors=0 1478791462000000000
+kapacitor_edges,child=stream0,host=hostname.local,parent=stream,task=test,type=stream collected=0,emitted=0 1478791462000000000
+kapacitor_edges,child=log3,host=hostname.local,parent=window2,task=deadman-test,type=batch collected=0,emitted=0 1478791462000000000
+kapacitor_edges,child=derivative5,host=hostname.local,parent=mean4,task=deadman-test,type=stream collected=0,emitted=0 1478791462000000000
+kapacitor_nodes,host=hostname.local,kind=stream,node=stream0,task=sys-stats,type=stream avg_exec_time_ns=0i 1478791462000000000
+kapacitor_nodes,host=hostname.local,kind=window,node=window2,task=sys-stats,type=stream avg_exec_time_ns=0i 1478791462000000000
+kapacitor_nodes,host=hostname.local,kind=mean,node=mean4,task=deadman-test,type=stream avg_exec_time_ns=0i 1478791462000000000
+kapacitor_edges,child=from1,host=hostname.local,parent=stream0,task=derivative-test,type=stream collected=0,emitted=0 1478791462000000000
+kapacitor_ingress,database=_internal,host=hostname.local,measurement=tsm1_cache,retention_policy=monitor,task_master=main points_received=120 1478791462000000000
+kapacitor_nodes,host=hostname.local,kind=sum,node=sum5,task=derivative-test,type=stream avg_exec_time_ns=0i 1478791462000000000
+kapacitor_edges,child=eval4,host=hostname.local,parent=log3,task=derivative-test,type=stream collected=0,emitted=0 1478791462000000000
+```
diff --git a/content/telegraf/v1/input-plugins/kernel/_index.md b/content/telegraf/v1/input-plugins/kernel/_index.md
new file mode 100644
index 000000000..990341528
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/kernel/_index.md
@@ -0,0 +1,163 @@
+---
+description: "Telegraf plugin for collecting metrics from Kernel"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Kernel
+    identifier: input-kernel
+tags: [Kernel, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Kernel Input Plugin
+
+This plugin is only available on Linux.
+
+The kernel plugin gathers kernel statistics that don't fit into other plugins.
+In general, these are the statistics available in `/proc/stat` that are not
+covered by other plugins, as well as the value of
+`/proc/sys/kernel/random/entropy_avail` and, optionally, Kernel Samepage
+Merging and Pressure Stall Information.
+
+The metrics are documented in `man 5 proc` under the `/proc/stat` section, as
+well as `man 4 random` under the `/proc interfaces` section
+(for `entropy_avail`).
+
+```text
+/proc/sys/kernel/random/entropy_avail
+Contains the value of available entropy
+
+/proc/stat
+kernel/system statistics. Varies with architecture. Common entries include:
+
+page 5741 1808
+The number of pages the system paged in and the number that were paged out (from disk).
+
+swap 1 0
+The number of swap pages that have been brought in and out.
+
+intr 1462898
+This line shows counts of interrupts serviced since boot time, for each of
+the possible system interrupts. The first column is the total of all
+interrupts serviced; each subsequent column is the total for a particular interrupt.
+
+ctxt 115315
+The number of context switches that the system underwent.
+
+btime 769041601
+boot time, in seconds since the Epoch, 1970-01-01 00:00:00 +0000 (UTC).
+
+processes 86031
+Number of forks since boot.
+```
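+
+As a rough illustration (not the plugin's actual implementation), the
+`/proc/stat` entries above map to the plugin's field names like this:
+
+```python
+# Illustrative sketch only: map a few /proc/stat entries to the field names
+# this plugin reports. Parsing details are simplified.
+def parse_proc_stat(text):
+    fields = {}
+    for line in text.splitlines():
+        parts = line.split()
+        if not parts:
+            continue
+        key, values = parts[0], parts[1:]
+        if key == "page":
+            fields["disk_pages_in"] = int(values[0])
+            fields["disk_pages_out"] = int(values[1])
+        elif key == "intr":
+            # The first column is the total across all interrupts.
+            fields["interrupts"] = int(values[0])
+        elif key == "ctxt":
+            fields["context_switches"] = int(values[0])
+        elif key == "btime":
+            fields["boot_time"] = int(values[0])
+        elif key == "processes":
+            fields["processes_forked"] = int(values[0])
+    return fields
+
+sample = "page 5741 1808\nintr 1462898\nctxt 115315\nbtime 769041601\nprocesses 86031"
+print(parse_proc_stat(sample))
+```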
+
+Kernel Samepage Merging is generally documented in the [kernel documentation](https://www.kernel.org/doc/html/latest/mm/ksm.html) and
+the available metrics exposed via sysfs are documented in the [admin guide](https://www.kernel.org/doc/html/latest/admin-guide/mm/ksm.html#ksm-daemon-sysfs-interface).
+
+Pressure Stall Information (PSI) is exposed through `/proc/pressure` and is
+documented in the [kernel documentation](https://www.kernel.org/doc/html/latest/accounting/psi.html).
+Kernel version 4.20 or later is required. Example PSI output:
+
+```shell
+# /proc/pressure/cpu
+some avg10=1.53 avg60=1.87 avg300=1.73 total=1088168194
+
+# /proc/pressure/memory
+some avg10=0.00 avg60=0.00 avg300=0.00 total=3463792
+full avg10=0.00 avg60=0.00 avg300=0.00 total=1429641
+
+# /proc/pressure/io
+some avg10=0.00 avg60=0.00 avg300=0.00 total=68568296
+full avg10=0.00 avg60=0.00 avg300=0.00 total=54982338
+```
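+
+Each PSI line is a type (`some` or `full`) followed by `key=value` pairs, so a
+minimal parser (a sketch, not the plugin's source) looks like:
+
+```python
+def parse_psi_line(resource, line):
+    # e.g. "some avg10=1.53 avg60=1.87 avg300=1.73 total=1088168194"
+    psi_type, *pairs = line.split()
+    values = dict(pair.split("=") for pair in pairs)
+    tags = {"resource": resource, "type": psi_type}
+    # The avg* fields are floats; total is a monotonically increasing integer.
+    avgs = {k: float(values[k]) for k in ("avg10", "avg60", "avg300")}
+    return tags, avgs, int(values["total"])
+
+tags, avgs, total = parse_psi_line("cpu", "some avg10=1.53 avg60=1.87 avg300=1.73 total=1088168194")
+print(tags, avgs, total)
+```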
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Plugin to collect various Linux kernel statistics.
+# This plugin ONLY supports Linux
+[[inputs.kernel]]
+  ## Additional gather options
+  ## Possible options include:
+  ## * ksm - kernel same-page merging
+  ## * psi - pressure stall information
+  # collect = []
+```
+
+## Metrics
+
+- kernel
+  - boot_time (integer, seconds since epoch, `btime`)
+  - context_switches (integer, `ctxt`)
+  - disk_pages_in (integer, `page (0)`)
+  - disk_pages_out (integer, `page (1)`)
+  - interrupts (integer, `intr`)
+  - processes_forked (integer, `processes`)
+  - entropy_avail (integer, `entropy_available`)
+  - ksm_full_scans (integer, how many times all mergeable areas have been scanned, `full_scans`)
+  - ksm_max_page_sharing (integer, maximum sharing allowed for each KSM page, `max_page_sharing`)
+  - ksm_merge_across_nodes (integer, whether pages should be merged across NUMA nodes, `merge_across_nodes`)
+  - ksm_pages_shared (integer, how many shared pages are being used, `pages_shared`)
+  - ksm_pages_sharing (integer, how many more sites are sharing them, `pages_sharing`)
+  - ksm_pages_to_scan (integer, how many pages to scan before ksmd goes to sleep, `pages_to_scan`)
+  - ksm_pages_unshared (integer, how many pages are unique but repeatedly checked for merging, `pages_unshared`)
+  - ksm_pages_volatile (integer, how many pages changing too fast to be placed in a tree, `pages_volatile`)
+  - ksm_run (integer, whether ksm is running or not, `run`)
+  - ksm_sleep_millisecs (integer, how many milliseconds ksmd should sleep between scans, `sleep_millisecs`)
+  - ksm_stable_node_chains (integer, the number of KSM pages that hit the max_page_sharing limit, `stable_node_chains`)
+  - ksm_stable_node_chains_prune_millisecs (integer, how frequently KSM checks the metadata of the pages that hit the deduplication limit, `stable_node_chains_prune_millisecs`)
+  - ksm_stable_node_dups (integer, number of duplicated KSM pages, `stable_node_dups`)
+  - ksm_use_zero_pages (integer, whether empty pages should be treated specially, `use_zero_pages`)
+
+- pressure (if `psi` is included in `collect`)
+  - tags:
+    - resource: cpu, memory, or io
+    - type: some or full
+  - floating-point fields: avg10, avg60, avg300
+  - integer fields: total
+
+## Example Output
+
+Default config:
+
+```text
+kernel boot_time=1690487872i,context_switches=321398652i,entropy_avail=256i,interrupts=141868628i,processes_forked=946492i 1691339564000000000
+```
+
+If `ksm` is included in `collect`:
+
+```text
+kernel boot_time=1690487872i,context_switches=321252729i,entropy_avail=256i,interrupts=141783427i,ksm_full_scans=0i,ksm_max_page_sharing=256i,ksm_merge_across_nodes=1i,ksm_pages_shared=0i,ksm_pages_sharing=0i,ksm_pages_to_scan=100i,ksm_pages_unshared=0i,ksm_pages_volatile=0i,ksm_run=0i,ksm_sleep_millisecs=20i,ksm_stable_node_chains=0i,ksm_stable_node_chains_prune_millisecs=2000i,ksm_stable_node_dups=0i,ksm_use_zero_pages=0i,processes_forked=946467i 1691339522000000000
+```
+
+If `psi` is included in `collect`:
+
+```text
+pressure,resource=cpu,type=some avg10=1.53,avg60=1.87,avg300=1.73 1700000000000000000
+pressure,resource=memory,type=some avg10=0.00,avg60=0.00,avg300=0.00 1700000000000000000
+pressure,resource=memory,type=full avg10=0.00,avg60=0.00,avg300=0.00 1700000000000000000
+pressure,resource=io,type=some avg10=0.0,avg60=0.0,avg300=0.0 1700000000000000000
+pressure,resource=io,type=full avg10=0.0,avg60=0.0,avg300=0.0 1700000000000000000
+pressure,resource=cpu,type=some total=1088168194i 1700000000000000000
+pressure,resource=memory,type=some total=3463792i 1700000000000000000
+pressure,resource=memory,type=full total=1429641i 1700000000000000000
+pressure,resource=io,type=some total=68568296i 1700000000000000000
+pressure,resource=io,type=full total=54982338i 1700000000000000000
+```
+
+Note that the combination for `resource=cpu,type=full` is omitted because it is
+always zero.
diff --git a/content/telegraf/v1/input-plugins/kernel_vmstat/_index.md b/content/telegraf/v1/input-plugins/kernel_vmstat/_index.md
new file mode 100644
index 000000000..03da5f61f
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/kernel_vmstat/_index.md
@@ -0,0 +1,243 @@
+---
+description: "Telegraf plugin for collecting metrics from Kernel VMStat"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Kernel VMStat
+    identifier: input-kernel_vmstat
+tags: [Kernel VMStat, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Kernel VMStat Input Plugin
+
+The `kernel_vmstat` plugin gathers virtual memory statistics by reading
+`/proc/vmstat`. For a full list of available fields, see the `/proc/vmstat`
+section of the [proc man page](http://man7.org/linux/man-pages/man5/proc.5.html).
+For a better idea of what each field represents, see the
+[vmstat man page](http://linux.die.net/man/8/vmstat).
+
+```text
+/proc/vmstat
+kernel/system statistics. Common entries include (from http://www.linuxinsight.com/proc_vmstat.html):
+
+Number of pages that are dirty, under writeback or unstable:
+
+nr_dirty 1550
+nr_writeback 0
+nr_unstable 0
+
+Number of pages allocated to page tables, mapped by files or allocated by the kernel slab allocator:
+
+nr_page_table_pages 699
+nr_mapped 139596
+nr_slab 42723
+
+Number of pageins and pageouts (since the last boot):
+
+pgpgin 33754195
+pgpgout 38985992
+
+Number of swapins and swapouts (since the last boot):
+
+pswpin 2473
+pswpout 2995
+
+Number of page allocations per zone (since the last boot):
+
+pgalloc_high 0
+pgalloc_normal 110123213
+pgalloc_dma32 0
+pgalloc_dma 415219
+
+Number of page frees, activations and deactivations (since the last boot):
+
+pgfree 110549163
+pgactivate 4509729
+pgdeactivate 2136215
+
+Number of minor and major page faults (since the last boot):
+
+pgfault 80663722
+pgmajfault 49813
+
+Number of page refills (per zone, since the last boot):
+
+pgrefill_high 0
+pgrefill_normal 5817500
+pgrefill_dma32 0
+pgrefill_dma 149176
+
+Number of page steals (per zone, since the last boot):
+
+pgsteal_high 0
+pgsteal_normal 10421346
+pgsteal_dma32 0
+pgsteal_dma 142196
+
+Number of pages scanned by the kswapd daemon (per zone, since the last boot):
+
+pgscan_kswapd_high 0
+pgscan_kswapd_normal 10491424
+pgscan_kswapd_dma32 0
+pgscan_kswapd_dma 156130
+
+Number of pages reclaimed directly (per zone, since the last boot):
+
+pgscan_direct_high 0
+pgscan_direct_normal 11904
+pgscan_direct_dma32 0
+pgscan_direct_dma 225
+
+Number of pages reclaimed via inode freeing (since the last boot):
+
+pginodesteal 11
+
+Number of slab objects scanned (since the last boot):
+
+slabs_scanned 8926976
+
+Number of pages reclaimed by kswapd (since the last boot):
+
+kswapd_steal 10551674
+
+Number of pages reclaimed by kswapd via inode freeing (since the last boot):
+
+kswapd_inodesteal 338730
+
+Number of kswapd's calls to page reclaim (since the last boot):
+
+pageoutrun 181908
+
+Number of direct reclaim calls (since the last boot):
+
+allocstall 160
+
+Miscellaneous statistics:
+
+pgrotated 3781
+nr_bounce 0
+```
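+
+Since `/proc/vmstat` is just whitespace-separated `name value` pairs, a minimal
+reader (illustrative, not the plugin's source) is straightforward:
+
+```python
+def parse_vmstat(text):
+    # Each line has the form "<counter_name> <integer value>".
+    counters = {}
+    for line in text.splitlines():
+        name, _, value = line.partition(" ")
+        if name and value.strip().isdigit():
+            counters[name] = int(value)
+    return counters
+
+sample = "pgpgin 33754195\npgpgout 38985992\npswpin 2473\npswpout 2995"
+print(parse_vmstat(sample))
+```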
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Get kernel statistics from /proc/vmstat
+# This plugin ONLY supports Linux
+[[inputs.kernel_vmstat]]
+  # no configuration
+```
+
+## Metrics
+
+- kernel_vmstat
+  - nr_free_pages (integer, `nr_free_pages`)
+  - nr_inactive_anon (integer, `nr_inactive_anon`)
+  - nr_active_anon (integer, `nr_active_anon`)
+  - nr_inactive_file (integer, `nr_inactive_file`)
+  - nr_active_file (integer, `nr_active_file`)
+  - nr_unevictable (integer, `nr_unevictable`)
+  - nr_mlock (integer, `nr_mlock`)
+  - nr_anon_pages (integer, `nr_anon_pages`)
+  - nr_mapped (integer, `nr_mapped`)
+  - nr_file_pages (integer, `nr_file_pages`)
+  - nr_dirty (integer, `nr_dirty`)
+  - nr_writeback (integer, `nr_writeback`)
+  - nr_slab_reclaimable (integer, `nr_slab_reclaimable`)
+  - nr_slab_unreclaimable (integer, `nr_slab_unreclaimable`)
+  - nr_page_table_pages (integer, `nr_page_table_pages`)
+  - nr_kernel_stack (integer, `nr_kernel_stack`)
+  - nr_unstable (integer, `nr_unstable`)
+  - nr_bounce (integer, `nr_bounce`)
+  - nr_vmscan_write (integer, `nr_vmscan_write`)
+  - nr_writeback_temp (integer, `nr_writeback_temp`)
+  - nr_isolated_anon (integer, `nr_isolated_anon`)
+  - nr_isolated_file (integer, `nr_isolated_file`)
+  - nr_shmem (integer, `nr_shmem`)
+  - numa_hit (integer, `numa_hit`)
+  - numa_miss (integer, `numa_miss`)
+  - numa_foreign (integer, `numa_foreign`)
+  - numa_interleave (integer, `numa_interleave`)
+  - numa_local (integer, `numa_local`)
+  - numa_other (integer, `numa_other`)
+  - nr_anon_transparent_hugepages (integer, `nr_anon_transparent_hugepages`)
+  - pgpgin (integer, `pgpgin`)
+  - pgpgout (integer, `pgpgout`)
+  - pswpin (integer, `pswpin`)
+  - pswpout (integer, `pswpout`)
+  - pgalloc_dma (integer, `pgalloc_dma`)
+  - pgalloc_dma32 (integer, `pgalloc_dma32`)
+  - pgalloc_normal (integer, `pgalloc_normal`)
+  - pgalloc_movable (integer, `pgalloc_movable`)
+  - pgfree (integer, `pgfree`)
+  - pgactivate (integer, `pgactivate`)
+  - pgdeactivate (integer, `pgdeactivate`)
+  - pgfault (integer, `pgfault`)
+  - pgmajfault (integer, `pgmajfault`)
+  - pgrefill_dma (integer, `pgrefill_dma`)
+  - pgrefill_dma32 (integer, `pgrefill_dma32`)
+  - pgrefill_normal (integer, `pgrefill_normal`)
+  - pgrefill_movable (integer, `pgrefill_movable`)
+  - pgsteal_dma (integer, `pgsteal_dma`)
+  - pgsteal_dma32 (integer, `pgsteal_dma32`)
+  - pgsteal_normal (integer, `pgsteal_normal`)
+  - pgsteal_movable (integer, `pgsteal_movable`)
+  - pgscan_kswapd_dma (integer, `pgscan_kswapd_dma`)
+  - pgscan_kswapd_dma32 (integer, `pgscan_kswapd_dma32`)
+  - pgscan_kswapd_normal (integer, `pgscan_kswapd_normal`)
+  - pgscan_kswapd_movable (integer, `pgscan_kswapd_movable`)
+  - pgscan_direct_dma (integer, `pgscan_direct_dma`)
+  - pgscan_direct_dma32 (integer, `pgscan_direct_dma32`)
+  - pgscan_direct_normal (integer, `pgscan_direct_normal`)
+  - pgscan_direct_movable (integer, `pgscan_direct_movable`)
+  - zone_reclaim_failed (integer, `zone_reclaim_failed`)
+  - pginodesteal (integer, `pginodesteal`)
+  - slabs_scanned (integer, `slabs_scanned`)
+  - kswapd_steal (integer, `kswapd_steal`)
+  - kswapd_inodesteal (integer, `kswapd_inodesteal`)
+  - kswapd_low_wmark_hit_quickly (integer, `kswapd_low_wmark_hit_quickly`)
+  - kswapd_high_wmark_hit_quickly (integer, `kswapd_high_wmark_hit_quickly`)
+  - kswapd_skip_congestion_wait (integer, `kswapd_skip_congestion_wait`)
+  - pageoutrun (integer, `pageoutrun`)
+  - allocstall (integer, `allocstall`)
+  - pgrotated (integer, `pgrotated`)
+  - compact_blocks_moved (integer, `compact_blocks_moved`)
+  - compact_pages_moved (integer, `compact_pages_moved`)
+  - compact_pagemigrate_failed (integer, `compact_pagemigrate_failed`)
+  - compact_stall (integer, `compact_stall`)
+  - compact_fail (integer, `compact_fail`)
+  - compact_success (integer, `compact_success`)
+  - htlb_buddy_alloc_success (integer, `htlb_buddy_alloc_success`)
+  - htlb_buddy_alloc_fail (integer, `htlb_buddy_alloc_fail`)
+  - unevictable_pgs_culled (integer, `unevictable_pgs_culled`)
+  - unevictable_pgs_scanned (integer, `unevictable_pgs_scanned`)
+  - unevictable_pgs_rescued (integer, `unevictable_pgs_rescued`)
+  - unevictable_pgs_mlocked (integer, `unevictable_pgs_mlocked`)
+  - unevictable_pgs_munlocked (integer, `unevictable_pgs_munlocked`)
+  - unevictable_pgs_cleared (integer, `unevictable_pgs_cleared`)
+  - unevictable_pgs_stranded (integer, `unevictable_pgs_stranded`)
+  - unevictable_pgs_mlockfreed (integer, `unevictable_pgs_mlockfreed`)
+  - thp_fault_alloc (integer, `thp_fault_alloc`)
+  - thp_fault_fallback (integer, `thp_fault_fallback`)
+  - thp_collapse_alloc (integer, `thp_collapse_alloc`)
+  - thp_collapse_alloc_failed (integer, `thp_collapse_alloc_failed`)
+  - thp_split (integer, `thp_split`)
+
+## Example Output
+
+```text
+kernel_vmstat allocstall=81496i,compact_blocks_moved=238196i,compact_fail=135220i,compact_pagemigrate_failed=0i,compact_pages_moved=6370588i,compact_stall=142092i,compact_success=6872i,htlb_buddy_alloc_fail=0i,htlb_buddy_alloc_success=0i,kswapd_high_wmark_hit_quickly=25439i,kswapd_inodesteal=29770874i,kswapd_low_wmark_hit_quickly=8756i,kswapd_skip_congestion_wait=0i,kswapd_steal=291534428i,nr_active_anon=2515657i,nr_active_file=2244914i,nr_anon_pages=1358675i,nr_anon_transparent_hugepages=2034i,nr_bounce=0i,nr_dirty=5690i,nr_file_pages=5153546i,nr_free_pages=78730i,nr_inactive_anon=426259i,nr_inactive_file=2366791i,nr_isolated_anon=0i,nr_isolated_file=0i,nr_kernel_stack=579i,nr_mapped=558821i,nr_mlock=0i,nr_page_table_pages=11115i,nr_shmem=541689i,nr_slab_reclaimable=459806i,nr_slab_unreclaimable=47859i,nr_unevictable=0i,nr_unstable=0i,nr_vmscan_write=6206i,nr_writeback=0i,nr_writeback_temp=0i,numa_foreign=0i,numa_hit=5113399878i,numa_interleave=35793i,numa_local=5113399878i,numa_miss=0i,numa_other=0i,pageoutrun=505006i,pgactivate=375664931i,pgalloc_dma=0i,pgalloc_dma32=122480220i,pgalloc_movable=0i,pgalloc_normal=5233176719i,pgdeactivate=122735906i,pgfault=8699921410i,pgfree=5359765021i,pginodesteal=9188431i,pgmajfault=122210i,pgpgin=219717626i,pgpgout=3495885510i,pgrefill_dma=0i,pgrefill_dma32=1180010i,pgrefill_movable=0i,pgrefill_normal=119866676i,pgrotated=60620i,pgscan_direct_dma=0i,pgscan_direct_dma32=12256i,pgscan_direct_movable=0i,pgscan_direct_normal=31501600i,pgscan_kswapd_dma=0i,pgscan_kswapd_dma32=4480608i,pgscan_kswapd_movable=0i,pgscan_kswapd_normal=287857984i,pgsteal_dma=0i,pgsteal_dma32=4466436i,pgsteal_movable=0i,pgsteal_normal=318463755i,pswpin=2092i,pswpout=6206i,slabs_scanned=93775616i,thp_collapse_alloc=24857i,thp_collapse_alloc_failed=102214i,thp_fault_alloc=346219i,thp_fault_fallback=895453i,thp_split=9817i,unevictable_pgs_cleared=0i,unevictable_pgs_culled=1531i,unevictable_pgs_mlocked=6988i,unevictable_pgs_mlockfreed=0i,unevictable_pgs_munlocked=6988i,unevictable_pgs_rescued=5426i,unevictable_pgs_scanned=0i,unevictable_pgs_stranded=0i,zone_reclaim_failed=0i 1459455200071462843
+```
diff --git a/content/telegraf/v1/input-plugins/kibana/_index.md b/content/telegraf/v1/input-plugins/kibana/_index.md
new file mode 100644
index 000000000..842bd47a3
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/kibana/_index.md
@@ -0,0 +1,107 @@
+---
+description: "Telegraf plugin for collecting metrics from Kibana"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Kibana
+    identifier: input-kibana
+tags: [Kibana, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Kibana Input Plugin
+
+The `kibana` plugin queries the [Kibana](https://www.elastic.co/) API to obtain the service status.
+
+- Telegraf minimum version: 1.8
+- Kibana minimum tested version: 6.0
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read status information from one or more Kibana servers
+[[inputs.kibana]]
+  ## Specify a list of one or more Kibana servers
+  servers = ["http://localhost:5601"]
+
+  ## Timeout for HTTP requests
+  timeout = "5s"
+
+  ## HTTP Basic Auth credentials
+  # username = "username"
+  # password = "pa$$word"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+ 
+  ## If 'use_system_proxy' is set to true, Telegraf will check env vars such as
+  ## HTTP_PROXY, HTTPS_PROXY, and NO_PROXY (or their lowercase counterparts).
+  ## If 'use_system_proxy' is set to false (default) and 'http_proxy_url' is
+  ## provided, Telegraf will use the specified URL as HTTP proxy.
+  # use_system_proxy = false
+  # http_proxy_url = "http://localhost:8888"
+```
+
+## Metrics
+
+- kibana
+  - tags:
+    - name (Kibana reported name)
+    - source (Kibana server hostname or IP)
+    - status (Kibana health: green, yellow, red)
+    - version (Kibana version)
+  - fields:
+    - status_code (integer, green=1 yellow=2 red=3 unknown=0)
+    - heap_total_bytes (integer)
+    - heap_max_bytes (integer; deprecated in 1.13.3: use `heap_total_bytes` field)
+    - heap_used_bytes (integer)
+    - heap_size_limit (integer)
+    - uptime_ms (integer)
+    - response_time_avg_ms (float)
+    - response_time_max_ms (integer)
+    - concurrent_connections (integer)
+    - requests_per_sec (float)
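+
+The `status_code` field is a numeric encoding of the `status` tag; a sketch of
+the mapping implied by the values above:
+
+```python
+# Map Kibana's reported health string to the numeric status_code field.
+STATUS_CODES = {"green": 1, "yellow": 2, "red": 3}
+
+def status_code(status):
+    # Anything unrecognized is reported as 0 (unknown).
+    return STATUS_CODES.get(status, 0)
+
+print(status_code("green"), status_code("degraded"))
+```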
+
+## Example Output
+
+```text
+kibana,host=myhost,name=my-kibana,source=localhost:5601,status=green,version=6.5.4 concurrent_connections=8i,heap_max_bytes=447778816i,heap_total_bytes=447778816i,heap_used_bytes=380603352i,requests_per_sec=1,response_time_avg_ms=57.6,response_time_max_ms=220i,status_code=1i,uptime_ms=6717489805i 1534864502000000000
+```
+
+## Run example environment
+
+Requires the following tools:
+
+- [Docker](https://docs.docker.com/get-docker/)
+- [Docker Compose](https://docs.docker.com/compose/install/)
+
+From the root of this project execute the following script:
+`./plugins/inputs/kibana/test_environment/run_test_env.sh`
+
+This will build the latest Telegraf and then start up Kibana and Elasticsearch.
+Telegraf will begin monitoring Kibana's status and write its results to the
+file `/tmp/metrics.out` in the Telegraf container.
+
+You can then attach to the Telegraf container and inspect the file
+`/tmp/metrics.out` to see whether the status is being reported.
+
+The Visual Studio Code [Remote - Containers](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) extension provides an easy
+user interface to attach to the running container.
+
diff --git a/content/telegraf/v1/input-plugins/kinesis_consumer/_index.md b/content/telegraf/v1/input-plugins/kinesis_consumer/_index.md
new file mode 100644
index 000000000..c6f1c12f0
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/kinesis_consumer/_index.md
@@ -0,0 +1,139 @@
+---
+description: "Telegraf plugin for collecting metrics from Kinesis Consumer"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Kinesis Consumer
+    identifier: input-kinesis_consumer
+tags: [Kinesis Consumer, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Kinesis Consumer Input Plugin
+
+The [Kinesis](https://aws.amazon.com/kinesis/) consumer plugin reads from a Kinesis data stream
+and creates metrics using one of the supported [input data formats](/telegraf/v1/data_formats/input).
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for the AWS Kinesis input.
+[[inputs.kinesis_consumer]]
+  ## Amazon REGION of kinesis endpoint.
+  region = "ap-southeast-2"
+
+  ## Amazon Credentials
+  ## Credentials are loaded in the following order
+  ## 1) Web identity provider credentials via STS if role_arn and web_identity_token_file are specified
+  ## 2) Assumed credentials via STS if role_arn is specified
+  ## 3) explicit credentials from 'access_key' and 'secret_key'
+  ## 4) shared profile from 'profile'
+  ## 5) environment variables
+  ## 6) shared credentials file
+  ## 7) EC2 Instance Profile
+  # access_key = ""
+  # secret_key = ""
+  # token = ""
+  # role_arn = ""
+  # web_identity_token_file = ""
+  # role_session_name = ""
+  # profile = ""
+  # shared_credential_file = ""
+
+  ## Endpoint to make request against, the correct endpoint is automatically
+  ## determined and this option should only be set if you wish to override the
+  ## default.
+  ##   ex: endpoint_url = "http://localhost:8000"
+  # endpoint_url = ""
+
+  ## The Kinesis stream must exist prior to starting Telegraf.
+  streamname = "StreamName"
+
+  ## Shard iterator type (only 'TRIM_HORIZON' and 'LATEST' currently supported)
+  # shard_iterator_type = "TRIM_HORIZON"
+
+  ## Max undelivered messages
+  ## This plugin uses tracking metrics, which ensure messages are read to
+  ## outputs before acknowledging them to the original broker to ensure data
+  ## is not lost. This option sets the maximum messages to read from the
+  ## broker that have not been written by an output.
+  ##
+  ## This value needs to be picked with awareness of the agent's
+  ## metric_batch_size value as well. Setting max undelivered messages too high
+  ## can result in a constant stream of data batches to the output, while
+  ## setting it too low may prevent the broker's messages from ever being flushed.
+  # max_undelivered_messages = 1000
+
+  ## Data format to consume.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  data_format = "influx"
+
+  ## The content encoding of the data from Kinesis.
+  ## If you are processing a CloudWatch Logs stream, set this to "gzip", as AWS
+  ## compresses CloudWatch log data before sending it to Kinesis. (AWS also
+  ## base64-encodes the gzipped bytes before pushing them to the stream; the
+  ## base64 decoding is undone automatically by the Go SDK as data is read.)
+  # content_encoding = "identity"
+
+  ## Optional
+  ## Configuration for a dynamodb checkpoint
+  [inputs.kinesis_consumer.checkpoint_dynamodb]
+    ## unique name for this consumer
+    app_name = "default"
+    table_name = "default"
+```
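+
+The gzip-and-base64 handling described for `content_encoding` above can be
+sketched with the Python standard library (a minimal illustration of the data
+flow, not the plugin's actual code path; the record contents are made up):
+
+```python
+import base64
+import gzip
+import json
+
+# CloudWatch Logs subscription data is gzip-compressed JSON; the transport
+# additionally base64-encodes the bytes. The AWS SDK undoes the base64 step
+# automatically, so the plugin only needs content_encoding = "gzip".
+record = {"logGroup": "/aws/lambda/example", "logEvents": [{"message": "hello"}]}
+
+# Producer side: JSON -> gzip -> base64
+payload = base64.b64encode(gzip.compress(json.dumps(record).encode()))
+
+# Consumer side: base64 -> gunzip -> JSON
+decoded = json.loads(gzip.decompress(base64.b64decode(payload)))
+assert decoded == record
+```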
+
+### Required AWS IAM permissions
+
+Kinesis:
+
+- DescribeStream
+- GetRecords
+- GetShardIterator
+
+DynamoDB:
+
+- GetItem
+- PutItem
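+
+A matching IAM policy might look like the following sketch; the resource ARNs
+are illustrative placeholders that you should scope down to your actual stream
+and table:
+
+```json
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Action": [
+        "kinesis:DescribeStream",
+        "kinesis:GetRecords",
+        "kinesis:GetShardIterator"
+      ],
+      "Resource": "arn:aws:kinesis:*:*:stream/StreamName"
+    },
+    {
+      "Effect": "Allow",
+      "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],
+      "Resource": "arn:aws:dynamodb:*:*:table/default"
+    }
+  ]
+}
+```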
+
+### DynamoDB Checkpoint
+
+The DynamoDB checkpoint stores the last processed record in a DynamoDB table.
+To leverage this functionality, create a table with the following string-type
+keys:
+
+```text
+Partition key: namespace
+Sort key: shard_id
+```
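+
+If you use the AWS CLI, a table with those keys could be created along these
+lines (a sketch; the table name and billing mode are illustrative):
+
+```shell
+aws dynamodb create-table \
+  --table-name default \
+  --attribute-definitions \
+      AttributeName=namespace,AttributeType=S \
+      AttributeName=shard_id,AttributeType=S \
+  --key-schema \
+      AttributeName=namespace,KeyType=HASH \
+      AttributeName=shard_id,KeyType=RANGE \
+  --billing-mode PAY_PER_REQUEST
+```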
+
+
+## Metrics
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/knx_listener/_index.md b/content/telegraf/v1/input-plugins/knx_listener/_index.md
new file mode 100644
index 000000000..ce8a6b75e
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/knx_listener/_index.md
@@ -0,0 +1,104 @@
+---
+description: "Telegraf plugin for collecting metrics from KNX"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: KNX
+    identifier: input-knx_listener
+tags: [KNX, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# KNX Input Plugin
+
+The KNX input plugin listens for messages on the KNX home-automation bus.
+This plugin connects to the KNX bus via a KNX-IP interface.
+Information about supported KNX message datapoint types can be found at the
+underlying "knx-go" project site (<https://github.com/vapourismo/knx-go>).
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Listener capable of handling KNX bus messages provided through a KNX-IP Interface.
+[[inputs.knx_listener]]
+  ## Type of KNX-IP interface.
+  ## Can be either "tunnel_udp", "tunnel_tcp", "tunnel" (alias for tunnel_udp) or "router".
+  # service_type = "tunnel"
+
+  ## Address of the KNX-IP interface.
+  service_address = "localhost:3671"
+
+  ## Measurement definition(s)
+  # [[inputs.knx_listener.measurement]]
+  #   ## Name of the measurement
+  #   name = "temperature"
+  #   ## Datapoint-Type (DPT) of the KNX messages
+  #   dpt = "9.001"
+  #   ## Use the string representation instead of the numerical value for the
+  #   ## datapoint-type and the addresses below
+  #   # as_string = false
+  #   ## List of Group-Addresses (GAs) assigned to the measurement
+  #   addresses = ["5/5/1"]
+
+  # [[inputs.knx_listener.measurement]]
+  #   name = "illumination"
+  #   dpt = "9.004"
+  #   addresses = ["5/5/3"]
+```
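+
+For illustration, DPT 9.xxx values (such as the `9.001` temperature type used
+above) are 2-octet floats of the form `0.01 * M * 2^E`. The following is a
+minimal decoder sketch, not the plugin's actual implementation, which relies on
+the "knx-go" library:
+
+```python
+def decode_dpt9(data: bytes) -> float:
+    """Decode a KNX DPT 9.xxx 2-octet float laid out as S EEEE MMMMMMMMMMM."""
+    raw = int.from_bytes(data, "big")
+    exponent = (raw >> 11) & 0x0F
+    mantissa = raw & 0x07FF
+    if raw & 0x8000:  # sign bit set: mantissa is two's complement
+        mantissa -= 2048
+    return 0.01 * mantissa * (1 << exponent)
+
+# 0x0C1A -> E=1, M=1050 -> 0.01 * 1050 * 2 = 21.0 (e.g. 21.0 °C for DPT 9.001)
+print(decode_dpt9(bytes([0x0C, 0x1A])))
+```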
+
+### Related tools
+
+- [knx-telegraf-config-generator](https://github.com/svsool/knx-telegraf-config-generator) generates configuration from KNX project file
+
+### Measurement configurations
+
+Each measurement contains only one datapoint-type (DPT) and assigns a list of
+addresses to it. You can, for example, group all temperature-sensor messages
+within a "temperature" measurement. However, you are free to split messages of
+one datapoint-type across multiple measurements.
+
+**NOTE: You should not assign a group-address (GA) to multiple measurements!**
+
+## Metrics
+
+Received KNX data is stored in the configured measurement using the "value"
+field. In addition to the value, the following tags are added to the
+datapoint:
+
+- "groupaddress": KNX group-address corresponding to the value
+- "unit":         unit of the value
+- "source":       KNX physical address sending the value
+
+To find out about the datatype of a datapoint, please check your KNX project,
+the KNX specification, or the "knx-go" project for the corresponding DPT.
+
+## Example Output
+
+This section shows example output in Line Protocol format.
+
+```text
+illumination,groupaddress=5/5/4,host=Hugin,source=1.1.12,unit=lux value=17.889999389648438 1582132674999013274
+temperature,groupaddress=5/5/1,host=Hugin,source=1.1.8,unit=°C value=17.799999237060547 1582132663427587361
+windowopen,groupaddress=1/0/1,host=Hugin,source=1.1.3 value=true 1582132630425581320
+```
diff --git a/content/telegraf/v1/input-plugins/kube_inventory/_index.md b/content/telegraf/v1/input-plugins/kube_inventory/_index.md
new file mode 100644
index 000000000..2f8936d33
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/kube_inventory/_index.md
@@ -0,0 +1,379 @@
+---
+description: "Telegraf plugin for collecting metrics from Kubernetes Inventory"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Kubernetes Inventory
+    identifier: input-kube_inventory
+tags: [Kubernetes Inventory, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Kubernetes Inventory Input Plugin
+
+This plugin generates metrics derived from the state of the following
+Kubernetes resources:
+
+- daemonsets
+- deployments
+- endpoints
+- ingress
+- nodes
+- persistentvolumes
+- persistentvolumeclaims
+- pods (containers)
+- services
+- statefulsets
+- resourcequotas
+
+Kubernetes is a fast-moving project, with a new minor release every three
+months. As such, this plugin aims to maintain support only for versions that
+are supported by the major cloud providers, roughly four releases over two years.
+
+**This plugin supports Kubernetes 1.11 and later.**
+
+## Series Cardinality Warning
+
+This plugin may produce a high number of series which, when not controlled
+for, will cause high load on your database. Use the following techniques to
+avoid cardinality issues:
+
+- Use [metric filtering](https://github.com/influxdata/telegraf/blob/master/docs/CONFIGURATION.md#metric-filtering) options to exclude unneeded measurements and tags.
+- Write to a database with an appropriate [retention policy](https://docs.influxdata.com/influxdb/latest/guides/downsampling_and_retention/).
+- Consider using the [Time Series Index](https://docs.influxdata.com/influxdb/latest/concepts/time-series-index/).
+- Monitor your database's [series cardinality](https://docs.influxdata.com/influxdb/latest/query_language/spec/#show-cardinality).
+- Consult the [InfluxDB documentation](https://docs.influxdata.com/influxdb/latest/) for the most up-to-date
+  techniques.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from the Kubernetes api
+[[inputs.kube_inventory]]
+  ## URL for the Kubernetes API.
+  ## If empty, in-cluster config with the pod's service account token is used.
+  # url = ""
+
+  ## URL for the kubelet; if set, it will be used to collect the pods' resource metrics
+  # url_kubelet = "http://127.0.0.1:10255"
+
+  ## Namespace to use. Set to "" to use all namespaces.
+  # namespace = "default"
+
+  ## Node name to filter to. No filtering by default.
+  # node_name = ""
+
+  ## Use bearer token for authorization.
+  ## Ignored if url is empty and in-cluster config is used.
+  # bearer_token = "/var/run/secrets/kubernetes.io/serviceaccount/token"
+
+  ## Set response_timeout (default 5 seconds)
+  # response_timeout = "5s"
+
+  ## Optional Resources to exclude from gathering
+  ## Leave blank to gather everything available.
+  ## Values can be: "daemonsets", "deployments", "endpoints", "ingress",
+  ## "nodes", "persistentvolumes", "persistentvolumeclaims", "pods", "services",
+  ## "statefulsets"
+  # resource_exclude = [ "deployments", "nodes", "statefulsets" ]
+
+  ## Optional Resources to include when gathering
+  ## Overrides resource_exclude if both set.
+  # resource_include = [ "deployments", "nodes", "statefulsets" ]
+
+  ## selectors to include and exclude as tags.  Globs accepted.
+  ## Note that an empty array for both will include all selectors as tags
+  ## selector_exclude overrides selector_include if both set.
+  # selector_include = []
+  # selector_exclude = ["*"]
+
+  ## Optional TLS Config
+  ## Trusted root certificates for server
+  # tls_ca = "/path/to/cafile"
+  ## Used for TLS client certificate authentication
+  # tls_cert = "/path/to/certfile"
+  ## Used for TLS client certificate authentication
+  # tls_key = "/path/to/keyfile"
+  ## Send the specified TLS server name via SNI
+  # tls_server_name = "kubernetes.example.com"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## Uncomment to remove deprecated metrics.
+  # fieldexclude = ["terminated_reason"]
+```
+
+## Kubernetes Permissions
+
+If using [RBAC authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/), you will need to create a cluster role to
+list "persistentvolumes" and "nodes". You will then need to make an
+[aggregated ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles) that will eventually be bound to a user or group.
+
+
+```yaml
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: influx:cluster:viewer
+  labels:
+    rbac.authorization.k8s.io/aggregate-view-telegraf: "true"
+rules:
+  - apiGroups: [""]
+    resources: ["persistentvolumes", "nodes"]
+    verbs: ["get", "list"]
+
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: influx:telegraf
+aggregationRule:
+  clusterRoleSelectors:
+    - matchLabels:
+        rbac.authorization.k8s.io/aggregate-view-telegraf: "true"
+    - matchLabels:
+        rbac.authorization.k8s.io/aggregate-to-view: "true"
+rules: [] # Rules are automatically filled in by the controller manager.
+```
+
+Bind the newly created aggregated ClusterRole with the following config file,
+updating the subjects as needed.
+
+```yaml
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: influx:telegraf:viewer
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: influx:telegraf
+subjects:
+  - kind: ServiceAccount
+    name: telegraf
+    namespace: default
+```
+
+## Quickstart in k3s
+
+When monitoring [k3s](https://k3s.io) server instances, you can reuse the
+already generated administration token. This is less secure than using a
+dedicated, more restrictive Telegraf user, but more convenient to set up.
+
+```console
+# replace `telegraf` with the user the telegraf process is running as
+$ install -o telegraf -m400 /var/lib/rancher/k3s/server/token /run/telegraf-kubernetes-token
+$ install -o telegraf -m400 /var/lib/rancher/k3s/server/tls/client-admin.crt /run/telegraf-kubernetes-cert
+$ install -o telegraf -m400 /var/lib/rancher/k3s/server/tls/client-admin.key /run/telegraf-kubernetes-key
+```
+
+```toml
+[[inputs.kube_inventory]]
+bearer_token = "/run/telegraf-kubernetes-token"
+tls_cert = "/run/telegraf-kubernetes-cert"
+tls_key = "/run/telegraf-kubernetes-key"
+```
+
+## Metrics
+
+- kubernetes_daemonset
+  - tags:
+    - daemonset_name
+    - namespace
+    - selector (\*varies)
+  - fields:
+    - generation
+    - current_number_scheduled
+    - desired_number_scheduled
+    - number_available
+    - number_misscheduled
+    - number_ready
+    - number_unavailable
+    - updated_number_scheduled
+
+- kubernetes_deployment
+  - tags:
+    - deployment_name
+    - namespace
+    - selector (\*varies)
+  - fields:
+    - replicas_available
+    - replicas_unavailable
+    - created
+
+- kubernetes_endpoints
+  - tags:
+    - endpoint_name
+    - namespace
+    - hostname
+    - node_name
+    - port_name
+    - port_protocol
+    - kind (\*varies)
+  - fields:
+    - created
+    - generation
+    - ready
+    - port
+
+- kubernetes_ingress
+  - tags:
+    - ingress_name
+    - namespace
+    - hostname
+    - ip
+    - backend_service_name
+    - path
+    - host
+  - fields:
+    - created
+    - generation
+    - backend_service_port
+    - tls
+
+- kubernetes_node
+  - tags:
+    - node_name
+    - status
+    - condition
+    - cluster_namespace
+  - fields:
+    - capacity_cpu_cores
+    - capacity_millicpu_cores
+    - capacity_memory_bytes
+    - capacity_pods
+    - allocatable_cpu_cores
+    - allocatable_millicpu_cores
+    - allocatable_memory_bytes
+    - allocatable_pods
+    - status_condition
+    - spec_unschedulable
+    - node_count
+
+- kubernetes_persistentvolume
+  - tags:
+    - pv_name
+    - phase
+    - storageclass
+  - fields:
+    - phase_type (int, see below)
+
+- kubernetes_persistentvolumeclaim
+  - tags:
+    - pvc_name
+    - namespace
+    - phase
+    - storageclass
+    - selector (\*varies)
+  - fields:
+    - phase_type (int, see below)
+
+- kubernetes_statefulset
+  - tags:
+    - statefulset_name
+    - namespace
+    - selector (\*varies)
+  - fields:
+    - created
+    - generation
+    - replicas
+    - replicas_current
+    - replicas_ready
+    - replicas_updated
+    - spec_replicas
+    - observed_generation
+
+- kubernetes_resourcequota
+  - tags:
+    - resource
+    - namespace
+  - fields:
+    - hard_cpu_limits
+    - hard_cpu_requests
+    - hard_memory_limit
+    - hard_memory_requests
+    - hard_pods
+    - used_cpu_limits
+    - used_cpu_requests
+    - used_memory_limits
+    - used_memory_requests
+    - used_pods
+
+- kubernetes_certificate
+  - tags:
+    - common_name
+    - signature_algorithm
+    - public_key_algorithm
+    - issuer_common_name
+    - san
+    - verification
+    - name
+    - namespace
+  - fields:
+    - age
+    - expiry
+    - startdate
+    - enddate
+    - verification_code
+
+### kubernetes node status `status`
+
+The `ready` node status condition can take three different values.
+
+| Tag value | Corresponding field value | Meaning  |
+| --------- | ------------------------- | -------- |
+| ready     | 0                         | NotReady |
+| ready     | 1                         | Ready    |
+| ready     | 2                         | Unknown  |
+
+### pv `phase_type`
+
+The persistentvolume "phase" is saved in the `phase` tag with a correlated
+numeric field called `phase_type` corresponding with that tag value.
+
+| Tag value | Corresponding field value |
+| --------- | ------------------------- |
+| bound     | 0                         |
+| failed    | 1                         |
+| pending   | 2                         |
+| released  | 3                         |
+| available | 4                         |
+| unknown   | 5                         |
+
+### pvc `phase_type`
+
+The persistentvolumeclaim "phase" is saved in the `phase` tag with a correlated
+numeric field called `phase_type` corresponding with that tag value.
+
+| Tag value | Corresponding field value |
+| --------- | ------------------------- |
+| bound     | 0                         |
+| lost      | 1                         |
+| pending   | 2                         |
+| unknown   | 3                         |
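+
+When querying, the numeric `phase_type` codes from both tables above can be
+translated back to their labels, for example with a small helper like the
+following (an illustrative sketch, not part of the plugin):
+
+```python
+# Index positions mirror the pv and pvc phase_type tables above.
+PV_PHASE = ["bound", "failed", "pending", "released", "available", "unknown"]
+PVC_PHASE = ["bound", "lost", "pending", "unknown"]
+
+def pv_phase_name(phase_type: int) -> str:
+    # Fall back to "unknown" for out-of-range codes.
+    return PV_PHASE[phase_type] if 0 <= phase_type < len(PV_PHASE) else "unknown"
+
+print(pv_phase_name(3))  # a pv emitted with phase_type=3i was "released"
+```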
+
+## Example Output
+
+```text
+kubernetes_configmap,configmap_name=envoy-config,namespace=default,resource_version=56593031 created=1544103867000000000i 1547597616000000000
+kubernetes_daemonset,daemonset_name=telegraf,selector_select1=s1,namespace=logging number_unavailable=0i,desired_number_scheduled=11i,number_available=11i,number_misscheduled=8i,number_ready=11i,updated_number_scheduled=11i,created=1527758699000000000i,generation=16i,current_number_scheduled=11i 1547597616000000000
+kubernetes_deployment,deployment_name=deployd,selector_select1=s1,namespace=default replicas_unavailable=0i,created=1544103082000000000i,replicas_available=1i 1547597616000000000
+kubernetes_node,host=vjain node_count=8i 1628918652000000000
+kubernetes_node,condition=Ready,host=vjain,node_name=ip-172-17-0-2.internal,status=True status_condition=1i 1629177980000000000
+kubernetes_node,cluster_namespace=tools,condition=Ready,host=vjain,node_name=ip-172-17-0-2.internal,status=True allocatable_cpu_cores=4i,allocatable_memory_bytes=7186567168i,allocatable_millicpu_cores=4000i,allocatable_pods=110i,capacity_cpu_cores=4i,capacity_memory_bytes=7291424768i,capacity_millicpu_cores=4000i,capacity_pods=110i,spec_unschedulable=0i,status_condition=1i 1628918652000000000
+kubernetes_resourcequota,host=vjain,namespace=default,resource=pods-high hard_cpu=1000i,hard_memory=214748364800i,hard_pods=10i,used_cpu=0i,used_memory=0i,used_pods=0i 1629110393000000000
+kubernetes_resourcequota,host=vjain,namespace=default,resource=pods-low hard_cpu=5i,hard_memory=10737418240i,hard_pods=10i,used_cpu=0i,used_memory=0i,used_pods=0i 1629110393000000000
+kubernetes_persistentvolume,phase=Released,pv_name=pvc-aaaaaaaa-bbbb-cccc-1111-222222222222,storageclass=ebs-1-retain phase_type=3i 1547597616000000000
+kubernetes_persistentvolumeclaim,namespace=default,phase=Bound,pvc_name=data-etcd-0,selector_select1=s1,storageclass=ebs-1-retain phase_type=0i 1547597615000000000
+kubernetes_pod,namespace=default,node_name=ip-172-17-0-2.internal,pod_name=tick1 last_transition_time=1547578322000000000i,ready="false" 1547597616000000000
+kubernetes_service,cluster_ip=172.29.61.80,namespace=redis-cache-0001,port_name=redis,port_protocol=TCP,selector_app=myapp,selector_io.kompose.service=redis,selector_role=slave,service_name=redis-slave created=1588690034000000000i,generation=0i,port=6379i,target_port=0i 1547597616000000000
+kubernetes_pod_container,condition=Ready,host=vjain,pod_name=uefi-5997f76f69-xzljt,status=True status_condition=1i 1629177981000000000
+kubernetes_pod_container,container_name=telegraf,namespace=default,node_name=ip-172-17-0-2.internal,node_selector_node-role.kubernetes.io/compute=true,pod_name=tick1,phase=Running,state=running,readiness=ready resource_requests_cpu_units=0.1,resource_limits_memory_bytes=524288000,resource_limits_cpu_units=0.5,restarts_total=0i,state_code=0i,state_reason="",phase_reason="",resource_requests_memory_bytes=524288000 1547597616000000000
+kubernetes_statefulset,namespace=default,selector_select1=s1,statefulset_name=etcd replicas_updated=3i,spec_replicas=3i,observed_generation=1i,created=1544101669000000000i,generation=1i,replicas=3i,replicas_current=3i,replicas_ready=3i 1547597616000000000
+```
+
diff --git a/content/telegraf/v1/input-plugins/kubernetes/_index.md b/content/telegraf/v1/input-plugins/kubernetes/_index.md
new file mode 100644
index 000000000..63c253053
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/kubernetes/_index.md
@@ -0,0 +1,202 @@
+---
+description: "Telegraf plugin for collecting metrics from Kubernetes"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Kubernetes
+    identifier: input-kubernetes
+tags: [Kubernetes, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Kubernetes Input Plugin
+
+The Kubernetes plugin talks to the Kubelet API and gathers metrics about the
+running pods and containers for a single host. It is assumed that this plugin
+is running as part of a DaemonSet within a Kubernetes installation, meaning
+Telegraf runs on every node in the cluster. Therefore, you should configure
+this plugin to talk to its locally running kubelet.
+
+Kubernetes is a fast-moving project, with a new minor release every three
+months. As such, this plugin aims to maintain support only for versions that
+are supported by the major cloud providers, roughly four releases over two years.
+
+## Host IP
+
+To find the IP address of the host you are running on, you can issue a command
+like the following:
+
+```sh
+curl -s $API_URL/api/v1/namespaces/$POD_NAMESPACE/pods/$HOSTNAME \
+  --header "Authorization: Bearer $TOKEN" \
+  --insecure | jq -r '.status.hostIP'
+```
+
+This example uses the downward API to pass in `$POD_NAMESPACE`, while
+`$HOSTNAME` is the hostname of the pod, set by the Kubernetes API.
+See the [Kubernetes docs](https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#without-kubectl-proxy) for a full example of generating a bearer token to
+explore the Kubernetes API.
+
+[Kubernetes docs]: https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#without-kubectl-proxy
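+
+One way to provide `$POD_NAMESPACE` via the downward API is an `env` entry on
+the Telegraf container in the DaemonSet spec, as in this sketch:
+
+```yaml
+env:
+  - name: POD_NAMESPACE
+    valueFrom:
+      fieldRef:
+        fieldPath: metadata.namespace
+  # HOSTNAME is set automatically to the pod name; the token can be read from
+  # /var/run/secrets/kubernetes.io/serviceaccount/token inside the pod.
+```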
+
+## Series Cardinality Warning
+
+This plugin may produce a high number of series which, when not controlled
+for, will cause high load on your database. Use the following techniques to
+avoid cardinality issues:
+
+- Use [metric filtering](https://github.com/influxdata/telegraf/blob/master/docs/CONFIGURATION.md#metric-filtering) options to exclude unneeded measurements and tags.
+- Write to a database with an appropriate [retention policy](https://docs.influxdata.com/influxdb/latest/guides/downsampling_and_retention/).
+- Consider using the [Time Series Index](https://docs.influxdata.com/influxdb/latest/concepts/time-series-index/).
+- Monitor your database's [series cardinality](https://docs.influxdata.com/influxdb/latest/query_language/spec/#show-cardinality).
+- Consult the [InfluxDB documentation](https://docs.influxdata.com/influxdb/latest/) for the most up-to-date techniques.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from the kubernetes kubelet api
+[[inputs.kubernetes]]
+  ## URL for the kubelet; if empty, read metrics from all nodes in the cluster
+  url = "http://127.0.0.1:10255"
+
+  ## Use bearer token for authorization. ('bearer_token' takes priority)
+  ## If both of these are empty, we'll use the default serviceaccount:
+  ## at: /var/run/secrets/kubernetes.io/serviceaccount/token
+  ##
+  ## To re-read the token at each interval, please use a file with the
+  ## bearer_token option. If given a string, Telegraf will always use that
+  ## token.
+  # bearer_token = "/var/run/secrets/kubernetes.io/serviceaccount/token"
+  ## OR
+  # bearer_token_string = "abc_123"
+
+  ## Kubernetes Node Metric Name
+  ## The default Kubernetes node metric name (i.e. kubernetes_node) is the same
+  ## for the kubernetes and kube_inventory plugins. To avoid conflicts, set this
+  ## option to a different value.
+  # node_metric_name = "kubernetes_node"
+
+  ## Pod labels to be added as tags.  An empty array for both include and
+  ## exclude will include all labels.
+  # label_include = []
+  # label_exclude = ["*"]
+
+  ## Set response_timeout (default 5 seconds)
+  # response_timeout = "5s"
+
+  ## Optional TLS Config
+  # tls_ca = "/path/to/cafile"
+  # tls_cert = "/path/to/certfile"
+  # tls_key = "/path/to/keyfile"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+## DaemonSet
+
+For recommendations on running Telegraf as a DaemonSet see [Monitoring
+Kubernetes Architecture](https://www.influxdata.com/blog/monitoring-kubernetes-architecture/) or view the Helm charts:
+
+- [Telegraf](https://github.com/helm/charts/tree/master/stable/telegraf)
+- [InfluxDB](https://github.com/helm/charts/tree/master/stable/influxdb)
+- [Chronograf](https://github.com/helm/charts/tree/master/stable/chronograf)
+- [Kapacitor](https://github.com/helm/charts/tree/master/stable/kapacitor)
+
+## Metrics
+
+- kubernetes_node
+  - tags:
+    - node_name
+  - fields:
+    - cpu_usage_nanocores
+    - cpu_usage_core_nanoseconds
+    - memory_available_bytes
+    - memory_usage_bytes
+    - memory_working_set_bytes
+    - memory_rss_bytes
+    - memory_page_faults
+    - memory_major_page_faults
+    - network_rx_bytes
+    - network_rx_errors
+    - network_tx_bytes
+    - network_tx_errors
+    - fs_available_bytes
+    - fs_capacity_bytes
+    - fs_used_bytes
+    - runtime_image_fs_available_bytes
+    - runtime_image_fs_capacity_bytes
+    - runtime_image_fs_used_bytes
+
+- kubernetes_pod_container
+  - tags:
+    - container_name
+    - namespace
+    - node_name
+    - pod_name
+  - fields:
+    - cpu_usage_nanocores
+    - cpu_usage_core_nanoseconds
+    - memory_usage_bytes
+    - memory_working_set_bytes
+    - memory_rss_bytes
+    - memory_page_faults
+    - memory_major_page_faults
+    - rootfs_available_bytes
+    - rootfs_capacity_bytes
+    - rootfs_used_bytes
+    - logsfs_available_bytes
+    - logsfs_capacity_bytes
+    - logsfs_used_bytes
+
+- kubernetes_pod_volume
+  - tags:
+    - volume_name
+    - namespace
+    - node_name
+    - pod_name
+  - fields:
+    - available_bytes
+    - capacity_bytes
+    - used_bytes
+
+- kubernetes_pod_network
+  - tags:
+    - namespace
+    - node_name
+    - pod_name
+  - fields:
+    - rx_bytes
+    - rx_errors
+    - tx_bytes
+    - tx_errors
+
+## Example Output
+
+```text
+kubernetes_node
+kubernetes_pod_container,container_name=deis-controller,namespace=deis,node_name=ip-10-0-0-0.ec2.internal,pod_name=deis-controller-3058870187-xazsr cpu_usage_core_nanoseconds=2432835i,cpu_usage_nanocores=0i,logsfs_available_bytes=121128271872i,logsfs_capacity_bytes=153567944704i,logsfs_used_bytes=20787200i,memory_major_page_faults=0i,memory_page_faults=175i,memory_rss_bytes=0i,memory_usage_bytes=0i,memory_working_set_bytes=0i,rootfs_available_bytes=121128271872i,rootfs_capacity_bytes=153567944704i,rootfs_used_bytes=1110016i 1476477530000000000
+kubernetes_pod_network,namespace=deis,node_name=ip-10-0-0-0.ec2.internal,pod_name=deis-controller-3058870187-xazsr rx_bytes=120671099i,rx_errors=0i,tx_bytes=102451983i,tx_errors=0i 1476477530000000000
+kubernetes_pod_volume,volume_name=default-token-f7wts,namespace=default,node_name=ip-172-17-0-1.internal,pod_name=storage-7 available_bytes=8415240192i,capacity_bytes=8415252480i,used_bytes=12288i 1546910783000000000
+kubernetes_system_container
+```
+
diff --git a/content/telegraf/v1/input-plugins/lanz/_index.md b/content/telegraf/v1/input-plugins/lanz/_index.md
new file mode 100644
index 000000000..fe4184c60
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/lanz/_index.md
@@ -0,0 +1,124 @@
+---
+description: "Telegraf plugin for collecting metrics from Arista LANZ Consumer"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Arista LANZ Consumer
+    identifier: input-lanz
+tags: [Arista LANZ Consumer, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Arista LANZ Consumer Input Plugin
+
+This plugin provides a consumer for use with Arista Networks’ Latency Analyzer
+(LANZ).
+
+Metrics are read from a stream of data via TCP through port 50001 on the
+switch's management IP. The data is in Protocol Buffers format. For more
+information on Arista LANZ, see:
+
+- <https://www.arista.com/en/um-eos/eos-latency-analyzer-lanz>
+
+This plugin uses Arista's SDK:
+
+- <https://github.com/aristanetworks/goarista>
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics off Arista LANZ, via socket
+[[inputs.lanz]]
+  ## URL to Arista LANZ endpoint
+  servers = [
+    "tcp://switch1.int.example.com:50001",
+    "tcp://switch2.int.example.com:50001",
+  ]
+```
+
+You will need to configure LANZ on the switch and enable streaming of LANZ
+data. See:
+
+- <https://www.arista.com/en/um-eos/eos-section-44-3-configuring-lanz>
+- <https://www.arista.com/en/um-eos/eos-section-44-3-configuring-lanz#ww1149292>
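+
Each `servers` entry is a `tcp://` URL. As an illustration (not part of the plugin), such endpoints can be split into host and port with Python's standard library, falling back to the default LANZ port 50001 mentioned above:

```python
from urllib.parse import urlparse


def parse_lanz_server(url: str, default_port: int = 50001) -> tuple:
    """Split a LANZ endpoint URL into (host, port); port defaults to 50001."""
    parsed = urlparse(url)
    if parsed.scheme != "tcp":
        raise ValueError(f"unsupported scheme: {parsed.scheme!r}")
    return parsed.hostname, parsed.port or default_port


print(parse_lanz_server("tcp://switch1.int.example.com:50001"))
```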
+
+## Metrics
+
+For more details on the metrics, see
+<https://github.com/aristanetworks/goarista/blob/master/lanz/proto/lanz.proto>
+
+- lanz_congestion_record
+  - tags:
+    - intf_name
+    - switch_id
+    - port_id
+    - entry_type
+    - traffic_class
+    - fabric_peer_intf_name
+    - source
+    - port
+  - fields:
+    - timestamp        (integer)
+    - queue_size       (integer)
+    - time_of_max_qlen (integer)
+    - tx_latency       (integer)
+    - q_drop_count     (integer)
+
+- lanz_global_buffer_usage_record
+  - tags:
+    - entry_type
+    - source
+    - port
+  - fields:
+    - timestamp   (integer)
+    - buffer_size (integer)
+    - duration    (integer)
+
+## Sample Queries
+
+Get the max tx_latency for the last hour for all interfaces on all switches.
+
+```sql
+SELECT max("tx_latency") AS "max_tx_latency" FROM "congestion_record" WHERE time > now() - 1h GROUP BY time(10s), "hostname", "intf_name"
+```
+
+Get the max queue_size for the last hour for all interfaces on all switches.
+
+```sql
+SELECT max("queue_size") AS "max_queue_size" FROM "congestion_record" WHERE time > now() - 1h GROUP BY time(10s), "hostname", "intf_name"
+```
+
+Get the max buffer_size over the last hour for all switches.
+
+```sql
+SELECT max("buffer_size") AS "max_buffer_size" FROM "global_buffer_usage_record" WHERE time > now() - 1h GROUP BY time(10s), "hostname"
+```
+
+## Example Output
+
+```text
+lanz_global_buffer_usage_record,entry_type=2,host=telegraf.int.example.com,port=50001,source=switch01.int.example.com timestamp=158334105824919i,buffer_size=505i,duration=0i 1583341058300643815
+lanz_congestion_record,entry_type=2,host=telegraf.int.example.com,intf_name=Ethernet36,port=50001,port_id=61,source=switch01.int.example.com,switch_id=0,traffic_class=1 time_of_max_qlen=0i,tx_latency=564480i,q_drop_count=0i,timestamp=158334105824919i,queue_size=225i 1583341058300636045
+lanz_global_buffer_usage_record,entry_type=2,host=telegraf.int.example.com,port=50001,source=switch01.int.example.com timestamp=158334105824919i,buffer_size=589i,duration=0i 1583341058300457464
+lanz_congestion_record,entry_type=1,host=telegraf.int.example.com,intf_name=Ethernet36,port=50001,port_id=61,source=switch01.int.example.com,switch_id=0,traffic_class=1 q_drop_count=0i,timestamp=158334105824919i,queue_size=232i,time_of_max_qlen=0i,tx_latency=584640i 1583341058300450302
+```
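+
Each line in the output above is InfluxDB line protocol (`measurement,tags fields timestamp`). A deliberately simplified Python sketch of splitting such a line; it ignores the escaping rules that a real line-protocol parser must handle:

```python
def parse_line(line: str) -> dict:
    """Naive line-protocol split; assumes no escaped spaces or commas."""
    head, fields_part, timestamp = line.rsplit(" ", 2)
    measurement, *tag_pairs = head.split(",")
    return {
        "measurement": measurement,
        "tags": dict(p.split("=", 1) for p in tag_pairs),
        "fields": dict(p.split("=", 1) for p in fields_part.split(",")),
        "timestamp": int(timestamp),
    }


rec = parse_line("lanz_global_buffer_usage_record,entry_type=2,port=50001 "
                 "buffer_size=505i,duration=0i 1583341058300643815")
print(rec["measurement"], rec["fields"]["buffer_size"])
```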
diff --git a/content/telegraf/v1/input-plugins/ldap/_index.md b/content/telegraf/v1/input-plugins/ldap/_index.md
new file mode 100644
index 000000000..479ec24fb
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/ldap/_index.md
@@ -0,0 +1,107 @@
+---
+description: "Telegraf plugin for collecting metrics from LDAP"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: LDAP
+    identifier: input-ldap
+tags: [LDAP, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# LDAP Input Plugin
+
+This plugin gathers metrics from an LDAP server's monitoring (`cn=Monitor`)
+backend. Currently the plugin supports [OpenLDAP](https://www.openldap.org/)
+and [389ds](https://www.port389.org/) servers.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# LDAP monitoring plugin
+[[inputs.ldap]]
+  ## Server to monitor
+  ## The scheme determines the mode to use for connection with
+  ##    ldap://...      -- unencrypted (non-TLS) connection
+  ##    ldaps://...     -- TLS connection
+  ##    starttls://...  -- StartTLS connection
+  ## If no port is given, the default ports, 389 for ldap and starttls and
+  ## 636 for ldaps, are used.
+  server = "ldap://localhost"
+
+  ## Server dialect, can be "openldap" or "389ds"
+  # dialect = "openldap"
+
+  ## DN and password to bind with
+  ## If bind_dn is empty an anonymous bind is performed.
+  bind_dn = ""
+  bind_password = ""
+
+  ## Reverse the field names constructed from the monitoring DN
+  # reverse_field_names = false
+
+  ## Optional TLS Config
+  ## Set to true/false to enforce TLS being enabled/disabled. If not set,
+  ## enable TLS only if any of the other options are specified.
+  # tls_enable =
+  ## Trusted root certificates for server
+  # tls_ca = "/path/to/cafile"
+  ## Used for TLS client certificate authentication
+  # tls_cert = "/path/to/certfile"
+  ## Used for TLS client certificate authentication
+  # tls_key = "/path/to/keyfile"
+  ## Password for the key file if it is encrypted
+  # tls_key_pwd = ""
+  ## Send the specified TLS server name via SNI
+  # tls_server_name = "kubernetes.example.com"
+  ## Minimal TLS version to accept by the client
+  # tls_min_version = "TLS12"
+  ## List of ciphers to accept, by default all secure ciphers will be accepted
+  ## See https://pkg.go.dev/crypto/tls#pkg-constants for supported values.
+  ## Use "all", "secure" and "insecure" to add all supported ciphers, secure
+  ## suites or insecure suites respectively.
+  # tls_cipher_suites = ["secure"]
+  ## Renegotiation method, "never", "once" or "freely"
+  # tls_renegotiation_method = "never"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+To use this plugin you must enable the monitoring backend/plugin of your LDAP
+server. See
+[OpenLDAP](https://www.openldap.org/devel/admin/monitoringslapd.html) or 389ds
+documentation for details.
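+
Field names such as `connections_max_file_descriptors` in the example output below are built from monitoring DNs like `cn=Max File Descriptors,cn=Connections,cn=Monitor`. A Python sketch of one plausible derivation, illustrating the naming scheme and what `reverse_field_names` toggles (this is not the plugin's actual code):

```python
def field_name_from_dn(dn: str, reverse: bool = True) -> str:
    """Turn a cn=Monitor sub-DN into a snake_case field name."""
    parts = [rdn.split("=", 1)[1] for rdn in dn.split(",")]
    parts = [p for p in parts if p != "Monitor"]   # drop the backend root
    if reverse:
        parts = parts[::-1]                        # outermost RDN first
    return "_".join(p.lower().replace(" ", "_") for p in parts)


print(field_name_from_dn("cn=Max File Descriptors,cn=Connections,cn=Monitor"))
```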
+
+## Metrics
+
+Depending on the server dialect, different metrics are produced. The
+measurement is named after the selected dialect (`openldap` or `389ds`).
+
+### Tags
+
+- server -- Server name or IP
+- port   -- Port used for connecting
+
+## Example Output
+
+Using the `openldap` dialect
+
+```text
+openldap,server=localhost,port=389 operations_completed=63i,operations_initiated=98i,operations_bind_initiated=10i,operations_unbind_initiated=6i,operations_modrdn_completed=0i,operations_delete_initiated=0i,operations_add_completed=2i,operations_delete_completed=0i,operations_abandon_completed=0i,statistics_entries=1516i,threads_open=2i,threads_active=1i,waiters_read=1i,operations_modify_completed=0i,operations_extended_initiated=4i,threads_pending=0i,operations_search_initiated=36i,operations_compare_initiated=0i,connections_max_file_descriptors=4096i,operations_modify_initiated=0i,operations_modrdn_initiated=0i,threads_max=16i,time_uptime=6017i,connections_total=1037i,connections_current=1i,operations_add_initiated=2i,statistics_bytes=162071i,operations_unbind_completed=6i,operations_abandon_initiated=0i,statistics_pdu=1566i,threads_max_pending=0i,threads_backload=1i,waiters_write=0i,operations_bind_completed=10i,operations_search_completed=35i,operations_compare_completed=0i,operations_extended_completed=4i,statistics_referrals=0i,threads_starting=0i 1516912070000000000
+```
+
+Using the `389ds` dialect
+
+```text
+389ds,port=32805,server=localhost add_operations=0i,anonymous_binds=0i,backends=0i,bind_security_errors=0i,bytes_received=0i,bytes_sent=256i,cache_entries=0i,cache_hits=0i,chainings=0i,compare_operations=0i,connections=1i,connections_in_max_threads=0i,connections_max_threads=0i,copy_entries=0i,current_connections=1i,current_connections_at_max_threads=0i,delete_operations=0i,dtablesize=63936i,entries_returned=2i,entries_sent=2i,errors=2i,in_operations=11i,list_operations=0i,maxthreads_per_conn_hits=0i,modify_operations=1i,modrdn_operations=0i,onelevel_search_operations=0i,operations_completed=10i,operations_initiated=11i,read_operations=0i,read_waiters=0i,referrals=0i,referrals_returned=0i,search_operations=3i,security_errors=0i,simpleauth_binds=1i,strongauth_binds=2i,threads=17i,total_connections=4i,unauth_binds=0i,wholesubtree_search_operations=1i 1695637234047087280
+```
diff --git a/content/telegraf/v1/input-plugins/leofs/_index.md b/content/telegraf/v1/input-plugins/leofs/_index.md
new file mode 100644
index 000000000..5688238d6
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/leofs/_index.md
@@ -0,0 +1,195 @@
+---
+description: "Telegraf plugin for collecting metrics from LeoFS"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: LeoFS
+    identifier: input-leofs
+tags: [LeoFS, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# LeoFS Input Plugin
+
+The LeoFS plugin gathers metrics of LeoGateway, LeoManager, and LeoStorage using
+SNMP. See [LeoFS Documentation / System Administration / System
+Monitoring](https://leo-project.net/leofs/docs/admin/system_admin/monitoring/).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from a LeoFS Server via SNMP
+[[inputs.leofs]]
+  ## An array of URLs of the form:
+  ##   host [ ":" port]
+  servers = ["127.0.0.1:4010"]
+```
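+
Each `servers` entry has the form `host[:port]`. A small illustrative Python sketch (not part of the plugin) of splitting such entries, with a caller-supplied port to fall back on when none is given:

```python
def split_server(addr: str, fallback_port: int) -> tuple:
    """Split 'host[:port]' into (host, port); use fallback_port when absent."""
    host, sep, port = addr.partition(":")
    return host, int(port) if sep else fallback_port


print(split_server("127.0.0.1:4010", 4010))
```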
+
+## Metrics
+
+### Statistics specific to the internals of LeoManager
+
+#### Erlang VM of LeoManager
+
+- 1 min Statistics
+  - num_of_processes
+  - total_memory_usage
+  - system_memory_usage
+  - processes_memory_usage
+  - ets_memory_usage
+  - used_allocated_memory
+  - allocated_memory
+- 5 min Statistics
+  - num_of_processes_5min
+  - total_memory_usage_5min
+  - system_memory_usage_5min
+  - processes_memory_usage_5min
+  - ets_memory_usage_5min
+  - used_allocated_memory_5min
+  - allocated_memory_5min
+
+### Statistics specific to the internals of LeoStorage
+
+#### Erlang VM of LeoStorage
+
+- 1 min Statistics
+  - num_of_processes
+  - total_memory_usage
+  - system_memory_usage
+  - processes_memory_usage
+  - ets_memory_usage
+  - used_allocated_memory
+  - allocated_memory
+- 5 min Statistics
+  - num_of_processes_5min
+  - total_memory_usage_5min
+  - system_memory_usage_5min
+  - processes_memory_usage_5min
+  - ets_memory_usage_5min
+  - used_allocated_memory_5min
+  - allocated_memory_5min
+
+#### Total Number of Requests for LeoStorage
+
+- 1 min Statistics
+  - num_of_writes
+  - num_of_reads
+  - num_of_deletes
+- 5 min Statistics
+  - num_of_writes_5min
+  - num_of_reads_5min
+  - num_of_deletes_5min
+
+#### Total Number of Objects and Total Size of Objects
+
+- num_of_active_objects
+- total_objects
+- total_size_of_active_objects
+- total_size
+
+#### Total Number of MQ Messages
+
+- num_of_replication_messages
+- num_of_sync-vnode_messages
+- num_of_rebalance_messages
+- mq_num_of_msg_recovery_node
+- mq_num_of_msg_deletion_dir
+- mq_num_of_msg_async_deletion_dir
+- mq_num_of_msg_req_deletion_dir
+- mq_mdcr_num_of_msg_req_comp_metadata
+- mq_mdcr_num_of_msg_req_sync_obj
+
+Note: The following items are available since LeoFS v1.4.0:
+
+- mq_num_of_msg_recovery_node
+- mq_num_of_msg_deletion_dir
+- mq_num_of_msg_async_deletion_dir
+- mq_num_of_msg_req_deletion_dir
+- mq_mdcr_num_of_msg_req_comp_metadata
+- mq_mdcr_num_of_msg_req_sync_obj
+
+#### Data Compaction
+
+- comp_state
+- comp_last_start_datetime
+- comp_last_end_datetime
+- comp_num_of_pending_targets
+- comp_num_of_ongoing_targets
+- comp_num_of_out_of_targets
+
+Note: All items are available since LeoFS v1.4.0.
+
+### Statistics specific to the internals of LeoGateway
+
+#### Erlang VM of LeoGateway
+
+- 1 min Statistics
+  - num_of_processes
+  - total_memory_usage
+  - system_memory_usage
+  - processes_memory_usage
+  - ets_memory_usage
+  - used_allocated_memory
+  - allocated_memory
+- 5 min Statistics
+  - num_of_processes_5min
+  - total_memory_usage_5min
+  - system_memory_usage_5min
+  - processes_memory_usage_5min
+  - ets_memory_usage_5min
+  - used_allocated_memory_5min
+  - allocated_memory_5min
+
+#### Total Number of Requests for LeoGateway
+
+- 1 min Statistics
+  - num_of_writes
+  - num_of_reads
+  - num_of_deletes
+- 5 min Statistics
+  - num_of_writes_5min
+  - num_of_reads_5min
+  - num_of_deletes_5min
+
+#### Object Cache
+
+- count_of_cache-hit
+- count_of_cache-miss
+- total_of_files
+- total_cached_size
+
+### Tags
+
+All measurements have the following tags:
+
+- node
+
+## Example Output
+
+### LeoManager
+
+```text
+leofs,host=manager_0,node=manager_0@127.0.0.1 allocated_memory=78255445,allocated_memory_5min=78159025,ets_memory_usage=4611900,ets_memory_usage_5min=4632599,num_of_processes=223,num_of_processes_5min=223,processes_memory_usage=20201316,processes_memory_usage_5min=20186559,system_memory_usage=37172701,system_memory_usage_5min=37189213,total_memory_usage=57373373,total_memory_usage_5min=57374653,used_allocated_memory=67,used_allocated_memory_5min=67 1524105758000000000
+```
+
+### LeoStorage
+
+```text
+leofs,host=storage_0,node=storage_0@127.0.0.1 allocated_memory=63504384,allocated_memory_5min=0,comp_last_end_datetime=0,comp_last_start_datetime=0,comp_num_of_ongoing_targets=0,comp_num_of_out_of_targets=0,comp_num_of_pending_targets=8,comp_state=0,ets_memory_usage=3877824,ets_memory_usage_5min=0,mq_mdcr_num_of_msg_req_comp_metadata=0,mq_mdcr_num_of_msg_req_sync_obj=0,mq_num_of_msg_async_deletion_dir=0,mq_num_of_msg_deletion_dir=0,mq_num_of_msg_recovery_node=0,mq_num_of_msg_req_deletion_dir=0,num_of_active_objects=70,num_of_deletes=0,num_of_deletes_5min=0,num_of_processes=577,num_of_processes_5min=0,num_of_reads=1,num_of_reads_5min=0,num_of_rebalance_messages=0,num_of_replication_messages=0,num_of_sync-vnode_messages=0,num_of_writes=70,num_of_writes_5min=0,processes_memory_usage=20029464,processes_memory_usage_5min=0,system_memory_usage=25900472,system_memory_usage_5min=0,total_memory_usage=45920987,total_memory_usage_5min=0,total_objects=70,total_size=2,total_size_of_active_objects=2,used_allocated_memory=69,used_allocated_memory_5min=0 1524529826000000000
+```
+
+### LeoGateway
+
+```text
+leofs,host=gateway_0,node=gateway_0@127.0.0.1 allocated_memory=87941120,allocated_memory_5min=88067672,count_of_cache-hit=0,count_of_cache-miss=0,ets_memory_usage=4843497,ets_memory_usage_5min=4841574,num_of_deletes=0,num_of_deletes_5min=0,num_of_processes=555,num_of_processes_5min=555,num_of_reads=0,num_of_reads_5min=0,num_of_writes=0,num_of_writes_5min=0,processes_memory_usage=17388052,processes_memory_usage_5min=17413928,system_memory_usage=49531263,system_memory_usage_5min=49577819,total_cached_size=0,total_memory_usage=66917393,total_memory_usage_5min=66989469,total_of_files=0,used_allocated_memory=69,used_allocated_memory_5min=69 1524105894000000000
+```
diff --git a/content/telegraf/v1/input-plugins/libvirt/_index.md b/content/telegraf/v1/input-plugins/libvirt/_index.md
new file mode 100644
index 000000000..1f0ce61b5
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/libvirt/_index.md
@@ -0,0 +1,285 @@
+---
+description: "Telegraf plugin for collecting metrics from Libvirt"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Libvirt
+    identifier: input-libvirt
+tags: [Libvirt, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Libvirt Input Plugin
+
+The `libvirt` plugin collects statistics about virtualized guests on a system
+using the libvirt virtualization API, created by Red Hat's Emerging Technology
+group. Metrics are gathered directly from the hypervisor on a host system,
+which means that Telegraf doesn't have to be installed and configured on a
+guest system.
+
+## Prerequisites
+
+For proper operation of the libvirt plugin, the host system must have:
+
+- virtualization options enabled for the host CPU
+- libvirtd and its dependencies installed and running
+- the qemu hypervisor installed and running
+- at least one virtual machine for statistics monitoring
+
+Useful links:
+
+- [libvirt](https://libvirt.org/)
+- [qemu](https://www.qemu.org/)
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# The libvirt plugin collects statistics from virtualized guests using virtualization libvirt API.
+[[inputs.libvirt]]
+     ## Domain names from which the libvirt plugin gathers statistics.
+     ## By default (empty or missing array) the plugin gathers statistics from each domain registered in the host system.
+     # domains = []
+
+     ## Libvirt connection URI with hypervisor.
+     ## The plugin supports multiple transport protocols and approaches which are configurable via the URI.
+     ## The general URI form: driver[+transport]://[username@][hostname][:port]/[path]
+     ## Supported transport protocols: ssh, tcp, tls, unix
+     ## URI examples for each type of transport protocol:
+     ## 1. SSH:  qemu+ssh://<USER@IP_OR_HOSTNAME>/system?keyfile=/<PATH_TO_PRIVATE_KEY>&known_hosts=/<PATH_TO_known_hosts>
+     ## 2. TCP:  qemu+tcp://<IP_OR_HOSTNAME>/system
+     ## 3. TLS:  qemu+tls://<HOSTNAME>/system?pkipath=/certs_dir/<COMMON_LOCATION_OF_CACERT_AND_SERVER_CLIENT_CERTS>
+     ## 4. UNIX: qemu+unix:///system?socket=/<PATH_TO_libvirt-sock>
+     ## Default URI is qemu:///system
+     # libvirt_uri = "qemu:///system"
+
+     ## Statistics groups for which libvirt plugin will gather statistics.
+     ## Supported statistics groups: state, cpu_total, balloon, vcpu, interface, block, perf, iothread, memory, dirtyrate
+     ## Empty array means no metrics for statistics groups will be exposed by the plugin.
+     ## By default the plugin will gather all available statistics.
+     # statistics_groups = ["state", "cpu_total", "balloon", "vcpu", "interface", "block", "perf", "iothread", "memory", "dirtyrate"]
+
+     ## A list containing additional statistics to be exposed by libvirt plugin.
+     ## Supported additional statistics: vcpu_mapping
+     ## By default (empty or missing array) the plugin will not collect additional statistics.
+     # additional_statistics = []
+
+```
+
+Useful links:
+
+- [Libvirt URI docs](https://libvirt.org/uri.html)
+- [TLS setup for libvirt](https://wiki.libvirt.org/page/TLSSetup)
+
+When one or more of the following occur:
+
+- the global Telegraf `interval` setting has a low value (e.g. 1s),
+- a significant number of VMs are monitored,
+- the medium connecting the plugin to the hypervisor is slow,
+
+the following warning may appear in the logs:
+`Collection took longer than expected`.
+
+In that case, set `interval` inside the plugin configuration and adjust
+it to the plugin's runtime environment.
+
+Example:
+
+```toml
+[[inputs.libvirt]]
+  interval = "30s"
+```
+
+### Example configuration
+
+```toml
+[[inputs.libvirt]]
+  domains = ["ubuntu_20"]
+  libvirt_uri = "qemu:///system"
+  statistics_groups = ["state", "interface"]
+  additional_statistics = ["vcpu_mapping"]
+```
+
+## Metrics
+
+See the table below for a list of metrics produced by the plugin.
+
+The exact metric format depends on the statistics libvirt reports,
+which may vary depending on the version of libvirt on your system.
+
+The metrics are divided into the following groups of statistics:
+
+- state
+- cpu_total
+- balloon
+- vcpu
+- net
+- perf
+- block
+- iothread
+- memory
+- dirtyrate
+- vcpu_mapping - additional statistics
+
+The plugin's statistics groups correspond to the grouping of metrics
+read directly from libvirtd using the `virsh domstats` command.
+More details about metrics can be found at the links below:
+
+- [Domain statistics](https://libvirt.org/manpages/virsh.html#domstats)
+- [Performance monitoring events](https://libvirt.org/formatdomain.html#performance-monitoring-events)
+
+| **Statistics group** | **Metric name** | **Exposed Telegraf field** | **Description** |
+|:---|:---|:---|:---|
+| **state** | state.state | state | state of the VM, returned as number from virDomainState enum |
+||state.reason | reason | reason for entering given state, returned as int from virDomain*Reason enum corresponding to given state |
+| **cpu_total** | cpu.time | time | total cpu time spent for this domain in nanoseconds |
+|| cpu.user | user | user cpu time spent in nanoseconds |
+|| cpu.system | system | system cpu time spent in nanoseconds |
+|| cpu.haltpoll.success.time | haltpoll_success_time | cpu halt polling success time spent in nanoseconds |
+|| cpu.haltpoll.fail.time | haltpoll_fail_time | cpu halt polling fail time spent in nanoseconds |
+|| cpu.cache.monitor.count |count | the number of cache monitors for this domain |
+|| cpu.cache.monitor.\<num\>.name | name | the name of cache monitor \<num\>, not available for kernels from 4.14 upwards |
+|| cpu.cache.monitor.\<num\>.vcpus| vcpus |vcpu list of cache monitor \<num\>, not available for kernels from 4.14 upwards |
+|| cpu.cache.monitor.\<num\>.bank.count | bank_count | the number of cache banks in cache monitor \<num\>, not available for kernels from 4.14 upwards |
+|| cpu.cache.monitor.\<num\>.bank.\<index\>.id | id|host allocated cache id for bank \<index\> in cache monitor \<num\>, not available for kernels from 4.14 upwards |
+|| cpu.cache.monitor.\<num\>.bank.\<index\>.bytes | bytes | the number of bytes of last level cache that the domain is using on cache bank \<index\>, not available for kernels from 4.14 upwards|
+| **balloon** | balloon.current | current | the memory in KiB currently used |
+|| balloon.maximum | maximum | the maximum memory in KiB allowed |
+|| balloon.swap_in | swap_in | the amount of data read from swap space (in KiB) |
+|| balloon.swap_out | swap_out | the amount of memory written out to swap space (in KiB) |
+|| balloon.major_fault | major_fault | the number of page faults when disk IO was required |
+|| balloon.minor_fault | minor_fault | the number of other page faults |
+|| balloon.unused | unused | the amount of memory left unused by the system (in KiB) |
+|| balloon.available | available | the amount of usable memory as seen by the domain (in KiB) |
+|| balloon.rss | rss | Resident Set Size of running domain's process (in KiB) |
+|| balloon.usable | usable | the amount of memory which can be reclaimed by balloon without causing host swapping (in KiB) |
+|| balloon.last-update | last_update | timestamp of the last update of statistics (in seconds) |
+|| balloon.disk_caches | disk_caches | the amount of memory that can be reclaimed without additional I/O, typically disk (in KiB) |
+|| balloon.hugetlb_pgalloc | hugetlb_pgalloc | the number of successful huge page allocations from inside the domain via virtio balloon |
+|| balloon.hugetlb_pgfail | hugetlb_pgfail | the number of failed huge page allocations from inside the domain via virtio balloon |
+| **vcpu** | vcpu.current | current | current number of online virtual CPUs |
+|| vcpu.maximum | maximum | maximum number of online virtual CPUs |
+|| vcpu.\<num\>.state | state | state of the virtual CPU \<num\>, as number from virVcpuState enum |
+|| vcpu.\<num\>.time | time | virtual cpu time spent by virtual CPU \<num\> (in microseconds) |
+|| vcpu.\<num\>.wait | wait | virtual cpu time spent by virtual CPU \<num\> waiting on I/O (in microseconds) |
+|| vcpu.\<num\>.halted | halted | virtual CPU \<num\> is halted: yes or no (may indicate the processor is idle or even disabled, depending on the architecture) |
+|| vcpu.\<num\>.halted | halted_i | virtual CPU \<num\> is halted: 1 (for "yes") or 0 (for other values) (may indicate the processor is idle or even disabled, depending on the architecture) |
+|| vcpu.\<num\>.delay | delay | time the vCPU \<num\> thread was enqueued by the host scheduler, but was waiting in the queue instead of running. Exposed to the VM as a steal time. |
+|| --- | cpu_id | Information about mapping vcpu_id to cpu_id (id of physical cpu). Should only be exposed when statistics_group contains vcpu and additional_statistics contains vcpu_mapping (in config) |
+| **interface** | net.count | count | number of network interfaces on this domain |
+|| net.\<num\>.name | name | name of the interface  \<num\> |
+|| net.\<num\>.rx.bytes | rx_bytes | number of bytes received |
+|| net.\<num\>.rx.pkts | rx_pkts | number of packets received |
+|| net.\<num\>.rx.errs | rx_errs | number of receive errors |
+|| net.\<num\>.rx.drop | rx_drop | number of receive packets dropped |
+|| net.\<num\>.tx.bytes | tx_bytes | number of bytes transmitted |
+|| net.\<num\>.tx.pkts | tx_pkts | number of packets transmitted |
+|| net.\<num\>.tx.errs | tx_errs | number of transmission errors |
+|| net.\<num\>.tx.drop | tx_drop | number of transmit packets dropped |
+| **perf** | perf.cmt | cmt | the cache usage in Byte currently used, not available for kernels from 4.14 upwards |
+|| perf.mbmt | mbmt | total system bandwidth from one level of cache, not available for kernels from 4.14 upwards |
+|| perf.mbml | mbml | bandwidth of memory traffic for a memory controller, not available for kernels from 4.14 upwards |
+|| perf.cpu_cycles | cpu_cycles | the count of cpu cycles (total/elapsed) |
+|| perf.instructions | instructions |  the count of instructions |
+|| perf.cache_references | cache_references | the count of cache hits |
+|| perf.cache_misses | cache_misses | the count of caches misses |
+|| perf.branch_instructions | branch_instructions | the count of branch instructions |
+|| perf.branch_misses | branch_misses | the count of branch misses |
+|| perf.bus_cycles | bus_cycles | the count of bus cycles |
+|| perf.stalled_cycles_frontend | stalled_cycles_frontend | the count of stalled frontend cpu cycles |
+|| perf.stalled_cycles_backend | stalled_cycles_backend | the count of stalled backend cpu cycles |
+|| perf.ref_cpu_cycles | ref_cpu_cycles | the count of ref cpu cycles |
+|| perf.cpu_clock | cpu_clock | the count of cpu clock time |
+|| perf.task_clock | task_clock | the count of task clock time |
+|| perf.page_faults | page_faults | the count of page faults |
+|| perf.context_switches | context_switches | the count of context switches |
+|| perf.cpu_migrations | cpu_migrations | the count of cpu migrations |
+|| perf.page_faults_min | page_faults_min | the count of minor page faults |
+|| perf.page_faults_maj | page_faults_maj | the count of major page faults |
+|| perf.alignment_faults | alignment_faults | the count of alignment faults |
+|| perf.emulation_faults | emulation_faults | the count of emulation faults |
+| **block** | block.count | count | number of block devices being listed |
+|| block.\<num\>.name | name | name of the target of the block device  \<num\> (the same name for multiple entries if --backing is present) |
+|| block.\<num\>.backingIndex | backingIndex | when --backing is present, matches up with the \<backingStore\> index listed in domain XML for backing files |
+|| block.\<num\>.path | path | file source of block device  \<num\>, if it is a local file or block device |
+|| block.\<num\>.rd.reqs | rd_reqs | number of read requests |
+|| block.\<num\>.rd.bytes | rd_bytes | number of read bytes |
+|| block.\<num\>.rd.times | rd_times | total time (ns) spent on reads |
+|| block.\<num\>.wr.reqs | wr_reqs | number of write requests |
+|| block.\<num\>.wr.bytes | wr_bytes | number of written bytes |
+|| block.\<num\>.wr.times | wr_times | total time (ns) spent on writes |
+|| block.\<num\>.fl.reqs | fl_reqs | total flush requests |
+|| block.\<num\>.fl.times | fl_times | total time (ns) spent on cache flushing |
+|| block.\<num\>.errors | errors | Xen only: the 'oo_req' value |
+|| block.\<num\>.allocation | allocation | offset of highest written sector in bytes |
+|| block.\<num\>.capacity | capacity | logical size of source file in bytes |
+|| block.\<num\>.physical | physical | physical size of source file in bytes |
+|| block.\<num\>.threshold | threshold | threshold (in bytes) for delivering the VIR_DOMAIN_EVENT_ID_BLOCK_THRESHOLD event. See domblkthreshold |
+| **iothread** | iothread.count | count | maximum number of IOThreads in the subsequent list as unsigned int. Each IOThread in the list will use its iothread_id value as the \<id\>. There may be fewer \<id\> entries than the iothread.count value if the polling values are not supported |
+|| iothread.\<id\>.poll-max-ns | poll_max_ns | maximum polling time in nanoseconds used by the \<id\> IOThread. A value of 0 (zero) indicates polling is disabled |
+|| iothread.\<id\>.poll-grow | poll_grow | polling time grow value. A value of 0 (zero) indicates growth is managed by the hypervisor |
+|| iothread.\<id\>.poll-shrink | poll_shrink | polling time shrink value. A value of 0 (zero) indicates shrink is managed by the hypervisor |
+| **memory** | memory.bandwidth.monitor.count | count | the number of memory bandwidth monitors for this domain, not available for kernels from 4.14 upwards |
+|| memory.bandwidth.monitor.\<num\>.name | name | the name of monitor \<num\>, not available for kernels from 4.14 upwards |
+|| memory.bandwidth.monitor.\<num\>.vcpus | vcpus | the vcpu list of monitor \<num\>, not available for kernels from 4.14 upwards |
+|| memory.bandwidth.monitor.\<num\>.node.count | node_count | the number of memory controllers in monitor \<num\>, not available for kernels from 4.14 upwards |
+|| memory.bandwidth.monitor.\<num\>.node.\<index\>.id | id | host allocated memory controller id for controller \<index\> of monitor \<num\>, not available for kernels from 4.14 upwards |
+|| memory.bandwidth.monitor.\<num\>.node.\<index\>.bytes.local | bytes_local | the accumulated bytes consumed by \@vcpus that pass through the memory controller in the same processor that the scheduled host CPU belongs to, not available for kernels from 4.14 upwards |
+|| memory.bandwidth.monitor.\<num\>.node.\<index\>.bytes.total | bytes_total | the total bytes consumed by \@vcpus that pass through all memory controllers, either local or remote, not available for kernels from 4.14 upwards |
+| **dirtyrate** | dirtyrate.calc_status | calc_status | the status of last memory dirty rate calculation, returned as number from virDomainDirtyRateStatus enum |
+|| dirtyrate.calc_start_time | calc_start_time | the start time of last memory dirty rate calculation |
+|| dirtyrate.calc_period | calc_period | the period of last memory dirty rate calculation |
+|| dirtyrate.megabytes_per_second | megabytes_per_second | the calculated memory dirty rate in MiB/s |
+|| dirtyrate.calc_mode | calc_mode | the calculation mode used for the last measurement (page-sampling/dirty-bitmap/dirty-ring) |
+|| dirtyrate.vcpu.\<num\>.megabytes_per_second | megabytes_per_second | the calculated memory dirty rate for a virtual cpu in MiB/s |
+
+### Additional statistics
+
+| **Statistics group**           | **Exposed Telegraf tag**      | **Exposed Telegraf field**      |**Description**         |
+|:-------------------------------|:-----------------------------:|:-------------------------------:|:-----------------------|
+| **vcpu_mapping** | vcpu_id | --- | ID of Virtual CPU |
+|| --- | cpu_id | Comma separated list (exposed as a string) of Physical CPU IDs |
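+
+The dotted statistic names above are flattened into a measurement group, a
+numeric ID tag, and an underscored field name. A minimal sketch of that
+mapping (a hypothetical helper for intuition, not the plugin's actual code):
+
+```python
+# Split a dotted libvirt stat key such as "block.0.rd.reqs" into the
+# parts described in the tables: group, numeric ID tag, field name.
+def split_stat_key(key: str):
+    parts = key.split(".")
+    group, rest = parts[0], parts[1:]
+    tag_id = None
+    if rest and rest[0].isdigit():        # the "<num>" / "<id>" component
+        tag_id = int(rest[0])
+        rest = rest[1:]
+    field = "_".join(rest).replace("-", "_")  # "poll-max-ns" -> "poll_max_ns"
+    return group, tag_id, field
+
+print(split_stat_key("block.0.rd.reqs"))        # ('block', 0, 'rd_reqs')
+print(split_stat_key("iothread.0.poll-max-ns"))
+```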
+
+## Example Output
+
+```text
+libvirt_cpu_affinity,domain_name=U22,host=localhost,vcpu_id=0 cpu_id="1,2,3" 1662383707000000000
+libvirt_cpu_affinity,domain_name=U22,host=localhost,vcpu_id=1 cpu_id="1,2,3,4,5,6,7,8,9,10" 1662383707000000000
+libvirt_balloon,domain_name=U22,host=localhost current=4194304i,maximum=4194304i,swap_in=0i,swap_out=0i,major_fault=0i,minor_fault=0i,unused=3928628i,available=4018480i,rss=1036012i,usable=3808724i,last_update=1654611373i,disk_caches=68820i,hugetlb_pgalloc=0i,hugetlb_pgfail=0i 1662383709000000000
+libvirt_vcpu_total,domain_name=U22,host=localhost maximum=2i,current=2i 1662383709000000000
+libvirt_vcpu,domain_name=U22,host=localhost,vcpu_id=0 state=1i,time=17943740000000i,wait=0i,halted="no",halted_i=0i,delay=14246609424i,cpu_id=1i 1662383709000000000
+libvirt_vcpu,domain_name=U22,host=localhost,vcpu_id=1 state=1i,time=18288400000000i,wait=0i,halted="yes",halted_i=1i,delay=12902231142i,cpu_id=3i 1662383709000000000
+libvirt_net_total,domain_name=U22,host=localhost count=1i 1662383709000000000
+libvirt_net,domain_name=U22,host=localhost,interface_id=0 name="vnet0",rx_bytes=110i,rx_pkts=1i,rx_errs=0i,rx_drop=31007i,tx_bytes=0i,tx_pkts=0i,tx_errs=0i,tx_drop=0i 1662383709000000000
+libvirt_block_total,domain_name=U22,host=localhost count=1i 1662383709000000000
+libvirt_block,domain_name=U22,host=localhost,block_id=0 name="vda",backingIndex=1i,path="/tmp/ubuntu_image.img",rd_reqs=11354i,rd_bytes=330314752i,rd_times=6240559566i,wr_reqs=52440i,wr_bytes=1183828480i,wr_times=21887150375i,fl_reqs=32250i,fl_times=23158998353i,errors=0i,allocation=770048000i,capacity=2361393152i,physical=770052096i,threshold=2147483648i 1662383709000000000
+libvirt_perf,domain_name=U22,host=localhost cmt=19087360i,mbmt=77168640i,mbml=67788800i,cpu_cycles=29858995122i,instructions=0i,cache_references=3053301695i,cache_misses=609441024i,branch_instructions=2623890194i,branch_misses=103707961i,bus_cycles=188105628i,stalled_cycles_frontend=0i,stalled_cycles_backend=0i,ref_cpu_cycles=30766094039i,cpu_clock=25166642695i,task_clock=25263578917i,page_faults=2670i,context_switches=294284i,cpu_migrations=17949i,page_faults_min=2670i,page_faults_maj=0i,alignment_faults=0i,emulation_faults=0i 1662383709000000000
+libvirt_dirtyrate,domain_name=U22,host=localhost calc_status=2i,calc_start_time=348414i,calc_period=1i,megabytes_per_second=4i,calc_mode="dirty-ring" 1662383709000000000
+libvirt_dirtyrate_vcpu,domain_name=U22,host=localhost,vcpu_id=0 megabytes_per_second=2i 1662383709000000000
+libvirt_dirtyrate_vcpu,domain_name=U22,host=localhost,vcpu_id=1 megabytes_per_second=2i 1662383709000000000
+libvirt_state,domain_name=U22,host=localhost state=1i,reason=5i 1662383709000000000
+libvirt_cpu,domain_name=U22,host=localhost time=67419144867000i,user=63886161852000i,system=3532983015000i,haltpoll_success_time=516907915i,haltpoll_fail_time=2727253643i 1662383709000000000
+libvirt_cpu_cache_monitor_total,domain_name=U22,host=localhost count=1i 1662383709000000000
+libvirt_cpu_cache_monitor,domain_name=U22,host=localhost,cache_monitor_id=0 name="any_name_vcpus_0-3",vcpus="0-3",bank_count=1i 1662383709000000000
+libvirt_cpu_cache_monitor_bank,domain_name=U22,host=localhost,cache_monitor_id=0,bank_index=0 id=0i,bytes=5406720i 1662383709000000000
+libvirt_iothread_total,domain_name=U22,host=localhost count=1i 1662383709000000000
+libvirt_iothread,domain_name=U22,host=localhost,iothread_id=0 poll_max_ns=32768i,poll_grow=0i,poll_shrink=0i 1662383709000000000
+libvirt_memory_bandwidth_monitor_total,domain_name=U22,host=localhost count=2i 1662383709000000000
+libvirt_memory_bandwidth_monitor,domain_name=U22,host=localhost,memory_bandwidth_monitor_id=0 name="any_name_vcpus_0-4",vcpus="0-4",node_count=2i 1662383709000000000
+libvirt_memory_bandwidth_monitor,domain_name=U22,host=localhost,memory_bandwidth_monitor_id=1 name="vcpus_7",vcpus="7",node_count=2i 1662383709000000000
+libvirt_memory_bandwidth_monitor_node,domain_name=U22,host=localhost,memory_bandwidth_monitor_id=0,controller_index=0 id=0i,bytes_total=10208067584i,bytes_local=4807114752i 1662383709000000000
+libvirt_memory_bandwidth_monitor_node,domain_name=U22,host=localhost,memory_bandwidth_monitor_id=0,controller_index=1 id=1i,bytes_total=8693735424i,bytes_local=5850161152i 1662383709000000000
+libvirt_memory_bandwidth_monitor_node,domain_name=U22,host=localhost,memory_bandwidth_monitor_id=1,controller_index=0 id=0i,bytes_total=853811200i,bytes_local=290701312i 1662383709000000000
+libvirt_memory_bandwidth_monitor_node,domain_name=U22,host=localhost,memory_bandwidth_monitor_id=1,controller_index=1 id=1i,bytes_total=406044672i,bytes_local=229425152i 1662383709000000000
+```
diff --git a/content/telegraf/v1/input-plugins/linux_cpu/_index.md b/content/telegraf/v1/input-plugins/linux_cpu/_index.md
new file mode 100644
index 000000000..8d7136cac
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/linux_cpu/_index.md
@@ -0,0 +1,86 @@
+---
+description: "Telegraf plugin for collecting metrics from Linux CPU"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Linux CPU
+    identifier: input-linux_cpu
+tags: [Linux CPU, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Linux CPU Input Plugin
+
+The `linux_cpu` plugin gathers CPU metrics exposed on Linux-based systems.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Provides Linux CPU metrics
+# This plugin ONLY supports Linux
+[[inputs.linux_cpu]]
+  ## Path for sysfs filesystem.
+  ## See https://www.kernel.org/doc/Documentation/filesystems/sysfs.txt
+  ## Defaults:
+  # host_sys = "/sys"
+
+  ## CPU metrics collected by the plugin.
+  ## Supported options:
+  ## "cpufreq", "thermal"
+  ## Defaults:
+  # metrics = ["cpufreq"]
+```
+
+## Metrics
+
+The following tags are emitted by the plugin under the name `linux_cpu`:
+
+| Tag   | Description           |
+|-------|-----------------------|
+| `cpu` | Identifier of the CPU |
+
+The following fields are emitted by the plugin when selecting `cpufreq`:
+
+| Metric name (field) | Description                                                | Units |
+|---------------------|------------------------------------------------------------|-------|
+| `scaling_cur_freq`  | Current frequency of the CPU as determined by CPUFreq      | kHz   |
+| `scaling_min_freq`  | Minimum frequency the governor can scale to                | kHz   |
+| `scaling_max_freq`  | Maximum frequency the governor can scale to                | kHz   |
+| `cpuinfo_cur_freq`  | Current frequency of the CPU as determined by the hardware | kHz   |
+| `cpuinfo_min_freq`  | Minimum operating frequency of the CPU                     | kHz   |
+| `cpuinfo_max_freq`  | Maximum operating frequency of the CPU                     | kHz   |
+
+The following fields are emitted by the plugin when selecting `thermal`:
+
+| Metric name (field)   | Description                                                 | Units |
+|-----------------------|-------------------------------------------------------------|-------|
+| `throttle_count`      | Number of thermal throttle events reported by the CPU       |       |
+| `throttle_max_time`   | Maximum amount of time CPU was in throttled state           | ms    |
+| `throttle_total_time` | Cumulative time during which the CPU was in throttled state | ms    |
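+
+Each of these values is read from a single sysfs file containing one integer.
+A rough sketch of that mapping (parsing captured file contents here; on a live
+system the sources would be files under
+`/sys/devices/system/cpu/cpu<N>/cpufreq/`):
+
+```python
+# Convert raw sysfs file contents (one integer per file) into the
+# integer fields emitted under the linux_cpu measurement.
+def to_fields(raw: dict) -> dict:
+    return {name: int(text.strip()) for name, text in raw.items()}
+
+sample = {
+    "scaling_cur_freq": "803157\n",
+    "scaling_min_freq": "400000\n",
+    "scaling_max_freq": "4700000\n",
+}
+print(to_fields(sample)["scaling_cur_freq"])  # 803157
+```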
+
+## Example Output
+
+```text
+linux_cpu,cpu=0,host=go scaling_max_freq=4700000i,cpuinfo_min_freq=400000i,cpuinfo_max_freq=4700000i,throttle_count=0i,throttle_max_time=0i,throttle_total_time=0i,scaling_cur_freq=803157i,scaling_min_freq=400000i 1617621150000000000
+linux_cpu,cpu=1,host=go throttle_total_time=0i,scaling_cur_freq=802939i,scaling_min_freq=400000i,scaling_max_freq=4700000i,cpuinfo_min_freq=400000i,cpuinfo_max_freq=4700000i,throttle_count=0i,throttle_max_time=0i 1617621150000000000
+linux_cpu,cpu=10,host=go throttle_max_time=0i,throttle_total_time=0i,scaling_cur_freq=838343i,scaling_min_freq=400000i,scaling_max_freq=4700000i,cpuinfo_min_freq=400000i,cpuinfo_max_freq=4700000i,throttle_count=0i 1617621150000000000
+linux_cpu,cpu=11,host=go cpuinfo_max_freq=4700000i,throttle_count=0i,throttle_max_time=0i,throttle_total_time=0i,scaling_cur_freq=800054i,scaling_min_freq=400000i,scaling_max_freq=4700000i,cpuinfo_min_freq=400000i 1617621150000000000
+linux_cpu,cpu=2,host=go throttle_total_time=0i,scaling_cur_freq=800404i,scaling_min_freq=400000i,scaling_max_freq=4700000i,cpuinfo_min_freq=400000i,cpuinfo_max_freq=4700000i,throttle_count=0i,throttle_max_time=0i 1617621150000000000
+linux_cpu,cpu=3,host=go throttle_total_time=0i,scaling_cur_freq=800126i,scaling_min_freq=400000i,scaling_max_freq=4700000i,cpuinfo_min_freq=400000i,cpuinfo_max_freq=4700000i,throttle_count=0i,throttle_max_time=0i 1617621150000000000
+linux_cpu,cpu=4,host=go cpuinfo_max_freq=4700000i,throttle_count=0i,throttle_max_time=0i,throttle_total_time=0i,scaling_cur_freq=800359i,scaling_min_freq=400000i,scaling_max_freq=4700000i,cpuinfo_min_freq=400000i 1617621150000000000
+linux_cpu,cpu=5,host=go throttle_max_time=0i,throttle_total_time=0i,scaling_cur_freq=800093i,scaling_min_freq=400000i,scaling_max_freq=4700000i,cpuinfo_min_freq=400000i,cpuinfo_max_freq=4700000i,throttle_count=0i 1617621150000000000
+linux_cpu,cpu=6,host=go cpuinfo_max_freq=4700000i,throttle_count=0i,throttle_max_time=0i,throttle_total_time=0i,scaling_cur_freq=741646i,scaling_min_freq=400000i,scaling_max_freq=4700000i,cpuinfo_min_freq=400000i 1617621150000000000
+linux_cpu,cpu=7,host=go scaling_cur_freq=700006i,scaling_min_freq=400000i,scaling_max_freq=4700000i,cpuinfo_min_freq=400000i,cpuinfo_max_freq=4700000i,throttle_count=0i,throttle_max_time=0i,throttle_total_time=0i 1617621150000000000
+linux_cpu,cpu=8,host=go throttle_max_time=0i,throttle_total_time=0i,scaling_cur_freq=700046i,scaling_min_freq=400000i,scaling_max_freq=4700000i,cpuinfo_min_freq=400000i,cpuinfo_max_freq=4700000i,throttle_count=0i 1617621150000000000
+linux_cpu,cpu=9,host=go throttle_count=0i,throttle_max_time=0i,throttle_total_time=0i,scaling_cur_freq=700075i,scaling_min_freq=400000i,scaling_max_freq=4700000i,cpuinfo_min_freq=400000i,cpuinfo_max_freq=4700000i 1617621150000000000
+```
diff --git a/content/telegraf/v1/input-plugins/linux_sysctl_fs/_index.md b/content/telegraf/v1/input-plugins/linux_sysctl_fs/_index.md
new file mode 100644
index 000000000..c4df9bcf1
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/linux_sysctl_fs/_index.md
@@ -0,0 +1,44 @@
+---
+description: "Telegraf plugin for collecting metrics from Linux Sysctl FS"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Linux Sysctl FS
+    identifier: input-linux_sysctl_fs
+tags: [Linux Sysctl FS, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Linux Sysctl FS Input Plugin
+
+The `linux_sysctl_fs` input provides Linux system-level file metrics. The
+documentation on these fields can be found at
+<https://www.kernel.org/doc/Documentation/sysctl/fs.txt>.
+
+Example output:
+
+```text
+linux_sysctl_fs,host=foo dentry-want-pages=0i,file-max=44222i,aio-max-nr=65536i,inode-preshrink-nr=0i,dentry-nr=64340i,dentry-unused-nr=55274i,file-nr=1568i,aio-nr=0i,inode-nr=35952i,inode-free-nr=12957i,dentry-age-limit=45i 1490982022000000000
+```
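+
+Some of the underlying files hold several whitespace-separated counters; for
+example, `/proc/sys/fs/file-nr` contains the allocated, unused, and maximum
+file-handle counts. A hedged sketch of that split (parsing a sample string
+rather than the live file; field names follow the example output above):
+
+```python
+# file-nr holds three counters: allocated handles, unused handles, and
+# the maximum; only the first and last appear as fields here.
+def parse_file_nr(text: str) -> dict:
+    allocated, unused, maximum = (int(v) for v in text.split())
+    return {"file-nr": allocated, "file-max": maximum}
+
+print(parse_file_nr("1568\t0\t44222\n"))  # {'file-nr': 1568, 'file-max': 44222}
+```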
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Provides Linux sysctl fs metrics
+[[inputs.linux_sysctl_fs]]
+  # no configuration
+```
diff --git a/content/telegraf/v1/input-plugins/logparser/_index.md b/content/telegraf/v1/input-plugins/logparser/_index.md
new file mode 100644
index 000000000..816a249ff
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/logparser/_index.md
@@ -0,0 +1,152 @@
+---
+description: "Telegraf plugin for collecting metrics from Logparser"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Logparser
+    identifier: input-logparser
+tags: [Logparser, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Logparser Input Plugin
+
+**Deprecated in Telegraf 1.15: Please use the [tail](/telegraf/v1/plugins/#input-tail) plugin along with the
+[`grok` data format](/telegraf/v1/plugins/#parser-grok).**
+
+The `logparser` plugin streams and parses the given log files. Currently it
+supports parsing "grok" patterns, which also support regular expression
+patterns.
+
+The `tail` plugin now provides all the functionality of the `logparser` plugin.
+Most options can be translated directly to the `tail` plugin:
+
+- For options in the `[inputs.logparser.grok]` section, the equivalent option
+  has the `grok_` prefix when used in the `tail` input.
+- The grok `measurement` option can be replaced using the standard plugin
+  `name_override` option.
+
+This plugin also supports metric filtering
+and some additional common options.
+
+## Example
+
+Migration Example:
+
+```diff
+- [[inputs.logparser]]
+-   files = ["/var/log/apache/access.log"]
+-   from_beginning = false
+-   [inputs.logparser.grok]
+-     patterns = ["%{COMBINED_LOG_FORMAT}"]
+-     measurement = "apache_access_log"
+-     custom_pattern_files = []
+-     custom_patterns = '''
+-     '''
+-     timezone = "Canada/Eastern"
+
++ [[inputs.tail]]
++   files = ["/var/log/apache/access.log"]
++   from_beginning = false
++   grok_patterns = ["%{COMBINED_LOG_FORMAT}"]
++   name_override = "apache_access_log"
++   grok_custom_pattern_files = []
++   grok_custom_patterns = '''
++   '''
++   grok_timezone = "Canada/Eastern"
++   data_format = "grok"
+```
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service to listen and wait for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Stream and parse log file(s).
+[[inputs.logparser]]
+  ## Log files to parse.
+  ## These accept standard unix glob matching rules, but with the addition of
+  ## ** as a "super asterisk". ie:
+  ##   /var/log/**.log     -> recursively find all .log files in /var/log
+  ##   /var/log/*/*.log    -> find all .log files with a parent dir in /var/log
+  ##   /var/log/apache.log -> only tail the apache log file
+  files = ["/var/log/apache/access.log"]
+
+  ## Read files that currently exist from the beginning. Files that are created
+  ## while telegraf is running (and that match the "files" globs) will always
+  ## be read from the beginning.
+  from_beginning = false
+
+  ## Method used to watch for file updates.  Can be either "inotify" or "poll".
+  # watch_method = "inotify"
+
+  ## Parse logstash-style "grok" patterns:
+  [inputs.logparser.grok]
+    ## This is a list of patterns to check the given log file(s) for.
+    ## Note that adding patterns here increases processing time. The most
+    ## efficient configuration is to have one pattern per logparser.
+    ## Other common built-in patterns are:
+    ##   %{COMMON_LOG_FORMAT}   (plain apache & nginx access logs)
+    ##   %{COMBINED_LOG_FORMAT} (access logs + referrer & agent)
+    patterns = ["%{COMBINED_LOG_FORMAT}"]
+
+    ## Name of the output measurement.
+    measurement = "apache_access_log"
+
+    ## Full path(s) to custom pattern files.
+    custom_pattern_files = []
+
+    ## Custom patterns can also be defined here. Put one pattern per line.
+    custom_patterns = '''
+    '''
+
+    ## Timezone allows you to provide an override for timestamps that
+    ## don't already include an offset
+    ## e.g. 04/06/2016 12:41:45 data one two 5.43µs
+    ##
+    ## Default: "" which renders UTC
+    ## Options are as follows:
+    ##   1. Local             -- interpret based on machine localtime
+    ##   2. "Canada/Eastern"  -- Unix TZ values like those found in https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
+    ##   3. UTC               -- or blank/unspecified, will return timestamp in UTC
+    # timezone = "Canada/Eastern"
+
+    ## When set to "disable", the timestamp will not be incremented if there
+    ## is a duplicate.
+    # unique_timestamp = "auto"
+```
+
+## Grok Parser
+
+Reference the [grok parser](/telegraf/v1/plugins/#parser-grok) documentation to set up the grok section of the
+configuration.
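+
+Grok patterns ultimately expand to named regular expressions. For intuition, a
+simplified Python analogue of the `%{COMMON_LOG_FORMAT}` pattern (the real grok
+pattern captures more detail and types the fields):
+
+```python
+import re
+
+# Simplified stand-in for %{COMMON_LOG_FORMAT} (plain access logs).
+COMMON_LOG = re.compile(
+    r'(?P<client_ip>\S+) \S+ (?P<auth>\S+) \[(?P<ts>[^\]]+)\] '
+    r'"(?P<verb>\S+) (?P<request>\S+) HTTP/(?P<http_version>\S+)" '
+    r'(?P<resp_code>\d{3}) (?P<bytes>\d+|-)'
+)
+
+line = '127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326'
+m = COMMON_LOG.match(line)
+print(m.group("verb"), m.group("resp_code"))  # GET 200
+```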
+
+## Additional Resources
+
+- <https://www.influxdata.com/telegraf-correlate-log-metrics-data-performance-bottlenecks/>
+
diff --git a/content/telegraf/v1/input-plugins/logstash/_index.md b/content/telegraf/v1/input-plugins/logstash/_index.md
new file mode 100644
index 000000000..cd96c686d
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/logstash/_index.md
@@ -0,0 +1,191 @@
+---
+description: "Telegraf plugin for collecting metrics from Logstash"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Logstash
+    identifier: input-logstash
+tags: [Logstash, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Logstash Input Plugin
+
+This plugin reads metrics exposed by the [Logstash Monitoring
+API](https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html).
+
+Logstash 5 and later versions are supported.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics exposed by Logstash
+[[inputs.logstash]]
+  ## The URL of the exposed Logstash API endpoint.
+  url = "http://127.0.0.1:9600"
+
+  ## Use Logstash 5 single pipeline API, set to true when monitoring
+  ## Logstash 5.
+  # single_pipeline = false
+
+  ## Enable optional collection components.  Can contain
+  ## "pipelines", "process", and "jvm".
+  # collect = ["pipelines", "process", "jvm"]
+
+  ## Timeout for HTTP requests.
+  # timeout = "5s"
+
+  ## Optional HTTP Basic Auth credentials.
+  # username = "username"
+  # password = "pa$$word"
+
+  ## Optional TLS Config.
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+
+  ## Use TLS but skip chain & host verification.
+  # insecure_skip_verify = false
+ 
+  ## If 'use_system_proxy' is set to true, Telegraf will check env vars such as
+  ## HTTP_PROXY, HTTPS_PROXY, and NO_PROXY (or their lowercase counterparts).
+  ## If 'use_system_proxy' is set to false (default) and 'http_proxy_url' is
+  ## provided, Telegraf will use the specified URL as HTTP proxy.
+  # use_system_proxy = false
+  # http_proxy_url = "http://localhost:8888"
+
+  ## Optional HTTP headers.
+  # [inputs.logstash.headers]
+  #   "X-Special-Header" = "Special-Value"
+```
+
+## Metrics
+
+Additional plugin stats may be collected, because Logstash does not
+consistently expose all stats:
+
+- logstash_jvm
+  - tags:
+    - node_id
+    - node_name
+    - node_host
+    - node_version
+  - fields:
+    - threads_peak_count
+    - mem_pools_survivor_peak_max_in_bytes
+    - mem_pools_survivor_max_in_bytes
+    - mem_pools_old_peak_used_in_bytes
+    - mem_pools_young_used_in_bytes
+    - mem_non_heap_committed_in_bytes
+    - threads_count
+    - mem_pools_old_committed_in_bytes
+    - mem_pools_young_peak_max_in_bytes
+    - mem_heap_used_percent
+    - gc_collectors_young_collection_time_in_millis
+    - mem_pools_survivor_peak_used_in_bytes
+    - mem_pools_young_committed_in_bytes
+    - gc_collectors_old_collection_time_in_millis
+    - gc_collectors_old_collection_count
+    - mem_pools_survivor_used_in_bytes
+    - mem_pools_old_used_in_bytes
+    - mem_pools_young_max_in_bytes
+    - mem_heap_max_in_bytes
+    - mem_non_heap_used_in_bytes
+    - mem_pools_survivor_committed_in_bytes
+    - mem_pools_old_max_in_bytes
+    - mem_heap_committed_in_bytes
+    - mem_pools_old_peak_max_in_bytes
+    - mem_pools_young_peak_used_in_bytes
+    - mem_heap_used_in_bytes
+    - gc_collectors_young_collection_count
+    - uptime_in_millis
+
+- logstash_process
+  - tags:
+    - node_id
+    - node_name
+    - source
+    - node_version
+  - fields:
+    - open_file_descriptors
+    - cpu_load_average_1m
+    - cpu_load_average_5m
+    - cpu_load_average_15m
+    - cpu_total_in_millis
+    - cpu_percent
+    - peak_open_file_descriptors
+    - max_file_descriptors
+    - mem_total_virtual_in_bytes
+
+- logstash_events
+  - tags:
+    - node_id
+    - node_name
+    - source
+    - node_version
+    - pipeline (for Logstash 6+)
+  - fields:
+    - queue_push_duration_in_millis
+    - duration_in_millis
+    - in
+    - filtered
+    - out
+
+- logstash_plugins
+  - tags:
+    - node_id
+    - node_name
+    - source
+    - node_version
+    - pipeline (for Logstash 6+)
+    - plugin_id
+    - plugin_name
+    - plugin_type
+  - fields:
+    - queue_push_duration_in_millis (for input plugins only)
+    - duration_in_millis
+    - in
+    - out
+    - failures (if present)
+    - bulk_requests_failures (for Logstash 7+)
+    - bulk_requests_with_errors (for Logstash 7+)
+    - documents_successes (for Logstash 7+)
+    - documents_retryable_failures (for Logstash 7+)
+
+- logstash_queue
+  - tags:
+    - node_id
+    - node_name
+    - source
+    - node_version
+    - pipeline (for Logstash 6+)
+    - queue_type
+  - fields:
+    - events
+    - free_space_in_bytes
+    - max_queue_size_in_bytes
+    - max_unread_events
+    - page_capacity_in_bytes
+    - queue_size_in_bytes
+
+## Example Output
+
+```text
+logstash_jvm,node_id=3da53ed0-a946-4a33-9cdb-33013f2273f6,node_name=debian-stretch-logstash6.virt,node_version=6.8.1,source=debian-stretch-logstash6.virt gc_collectors_old_collection_count=2,gc_collectors_old_collection_time_in_millis=100,gc_collectors_young_collection_count=26,gc_collectors_young_collection_time_in_millis=1028,mem_heap_committed_in_bytes=1056309248,mem_heap_max_in_bytes=1056309248,mem_heap_used_in_bytes=207216328,mem_heap_used_percent=19,mem_non_heap_committed_in_bytes=160878592,mem_non_heap_used_in_bytes=140838184,mem_pools_old_committed_in_bytes=899284992,mem_pools_old_max_in_bytes=899284992,mem_pools_old_peak_max_in_bytes=899284992,mem_pools_old_peak_used_in_bytes=189468088,mem_pools_old_used_in_bytes=189468088,mem_pools_survivor_committed_in_bytes=17432576,mem_pools_survivor_max_in_bytes=17432576,mem_pools_survivor_peak_max_in_bytes=17432576,mem_pools_survivor_peak_used_in_bytes=17432576,mem_pools_survivor_used_in_bytes=12572640,mem_pools_young_committed_in_bytes=139591680,mem_pools_young_max_in_bytes=139591680,mem_pools_young_peak_max_in_bytes=139591680,mem_pools_young_peak_used_in_bytes=139591680,mem_pools_young_used_in_bytes=5175600,threads_count=20,threads_peak_count=24,uptime_in_millis=739089 1566425244000000000
+logstash_process,node_id=3da53ed0-a946-4a33-9cdb-33013f2273f6,node_name=debian-stretch-logstash6.virt,node_version=6.8.1,source=debian-stretch-logstash6.virt cpu_load_average_15m=0.03,cpu_load_average_1m=0.01,cpu_load_average_5m=0.04,cpu_percent=0,cpu_total_in_millis=83230,max_file_descriptors=16384,mem_total_virtual_in_bytes=3689132032,open_file_descriptors=118,peak_open_file_descriptors=118 1566425244000000000
+logstash_events,node_id=3da53ed0-a946-4a33-9cdb-33013f2273f6,node_name=debian-stretch-logstash6.virt,node_version=6.8.1,pipeline=main,source=debian-stretch-logstash6.virt duration_in_millis=0,filtered=0,in=0,out=0,queue_push_duration_in_millis=0 1566425244000000000
+logstash_plugins,node_id=3da53ed0-a946-4a33-9cdb-33013f2273f6,node_name=debian-stretch-logstash6.virt,node_version=6.8.1,pipeline=main,plugin_id=2807cb8610ba7854efa9159814fcf44c3dda762b43bd088403b30d42c88e69ab,plugin_name=beats,plugin_type=input,source=debian-stretch-logstash6.virt out=0,queue_push_duration_in_millis=0 1566425244000000000
+logstash_plugins,node_id=3da53ed0-a946-4a33-9cdb-33013f2273f6,node_name=debian-stretch-logstash6.virt,node_version=6.8.1,pipeline=main,plugin_id=7a6c973366186a695727c73935634a00bccd52fceedf30d0746983fce572d50c,plugin_name=file,plugin_type=output,source=debian-stretch-logstash6.virt duration_in_millis=0,in=0,out=0 1566425244000000000
+logstash_queue,node_id=3da53ed0-a946-4a33-9cdb-33013f2273f6,node_name=debian-stretch-logstash6.virt,node_version=6.8.1,pipeline=main,queue_type=memory,source=debian-stretch-logstash6.virt events=0 1566425244000000000
+```
diff --git a/content/telegraf/v1/input-plugins/lustre2/_index.md b/content/telegraf/v1/input-plugins/lustre2/_index.md
new file mode 100644
index 000000000..4867aef7d
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/lustre2/_index.md
@@ -0,0 +1,217 @@
+---
+description: "Telegraf plugin for collecting metrics from Lustre"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Lustre
+    identifier: input-lustre2
+tags: [Lustre, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Lustre Input Plugin
+
+The [Lustre](http://lustre.org/)® file system is an open-source, parallel file
+system that supports many requirements of leadership-class HPC simulation
+environments.
+
+This plugin monitors the Lustre file system using its entries in the proc
+filesystem.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from local Lustre service on OST, MDS
+# This plugin ONLY supports Linux
+[[inputs.lustre2]]
+  ## An array of /proc globs to search for Lustre stats
+  ## If not specified, the default will work on Lustre 2.12.x
+  ##
+  # mgs_procfiles = [
+  #   "/sys/fs/lustre/mgs/*/eviction_count",
+  # ]
+  # ost_procfiles = [
+  #   "/proc/fs/lustre/obdfilter/*/stats",
+  #   "/proc/fs/lustre/osd-ldiskfs/*/stats",
+  #   "/proc/fs/lustre/obdfilter/*/job_stats",
+  #   "/proc/fs/lustre/obdfilter/*/exports/*/stats",
+  #   "/proc/fs/lustre/osd-ldiskfs/*/brw_stats",
+  #   "/proc/fs/lustre/osd-zfs/*/brw_stats",
+  #   "/sys/fs/lustre/obdfilter/*/eviction_count",
+  # ]
+  # mds_procfiles = [
+  #   "/proc/fs/lustre/mdt/*/md_stats",
+  #   "/proc/fs/lustre/mdt/*/job_stats",
+  #   "/proc/fs/lustre/mdt/*/exports/*/stats",
+  #   "/proc/fs/lustre/osd-ldiskfs/*/brw_stats",
+  #   "/proc/fs/lustre/osd-zfs/*/brw_stats",
+  #   "/sys/fs/lustre/mdt/*/eviction_count",
+  # ]
+```
+
+## Metrics
+
+From `/sys/fs/lustre/health_check`:
+
+- lustre2
+  - tags:
+  - fields:
+    - health
+
+From `/proc/fs/lustre/obdfilter/*/stats` and
+`/proc/fs/lustre/osd-ldiskfs/*/stats`:
+
+- lustre2
+  - tags:
+    - name
+  - fields:
+    - write_bytes
+    - write_calls
+    - read_bytes
+    - read_calls
+    - cache_hit
+    - cache_miss
+    - cache_access
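+
+Entries in an obdfilter `stats` file look roughly like
+`read_bytes 11354 samples [bytes] 0 4096 330314752`, where the second column
+is the sample count and the last column is the running sum. A simplified
+sketch of how such a line maps to the fields above (the plugin's actual
+parser handles more cases):
+
+```python
+# Parse one stats line: "<name> <count> samples [<unit>] <min> <max> <sum>".
+# The sample count becomes <op>_calls; the byte sum keeps the field name.
+def parse_stats_line(line: str) -> dict:
+    parts = line.split()
+    name, samples = parts[0], int(parts[1])
+    fields = {name.split("_")[0] + "_calls": samples}
+    if len(parts) == 7 and parts[3] == "[bytes]":
+        fields[name] = int(parts[6])
+    return fields
+
+print(parse_stats_line("read_bytes 11354 samples [bytes] 0 4096 330314752"))
+```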
+
+From `/proc/fs/lustre/obdfilter/*/exports/*/stats`:
+
+- lustre2
+  - tags:
+    - name
+    - client
+  - fields:
+    - write_bytes
+    - write_calls
+    - read_bytes
+    - read_calls
+
+From `/proc/fs/lustre/obdfilter/*/job_stats`:
+
+- lustre2
+  - tags:
+    - name
+    - jobid
+  - fields:
+    - jobstats_ost_getattr
+    - jobstats_ost_setattr
+    - jobstats_ost_sync
+    - jobstats_punch
+    - jobstats_destroy
+    - jobstats_create
+    - jobstats_ost_statfs
+    - jobstats_get_info
+    - jobstats_set_info
+    - jobstats_quotactl
+    - jobstats_read_bytes
+    - jobstats_read_calls
+    - jobstats_read_max_size
+    - jobstats_read_min_size
+    - jobstats_write_bytes
+    - jobstats_write_calls
+    - jobstats_write_max_size
+    - jobstats_write_min_size
+
+From `/proc/fs/lustre/mdt/*/md_stats`:
+
+- lustre2
+  - tags:
+    - name
+  - fields:
+    - open
+    - close
+    - mknod
+    - link
+    - unlink
+    - mkdir
+    - rmdir
+    - rename
+    - getattr
+    - setattr
+    - getxattr
+    - setxattr
+    - statfs
+    - sync
+    - samedir_rename
+    - crossdir_rename
+
+From `/proc/fs/lustre/mdt/*/exports/*/stats`:
+
+- lustre2
+  - tags:
+    - name
+    - client
+  - fields:
+    - open
+    - close
+    - mknod
+    - link
+    - unlink
+    - mkdir
+    - rmdir
+    - rename
+    - getattr
+    - setattr
+    - getxattr
+    - setxattr
+    - statfs
+    - sync
+    - samedir_rename
+    - crossdir_rename
+
+From `/proc/fs/lustre/mdt/*/job_stats`:
+
+- lustre2
+  - tags:
+    - name
+    - jobid
+  - fields:
+    - jobstats_close
+    - jobstats_crossdir_rename
+    - jobstats_getattr
+    - jobstats_getxattr
+    - jobstats_link
+    - jobstats_mkdir
+    - jobstats_mknod
+    - jobstats_open
+    - jobstats_rename
+    - jobstats_rmdir
+    - jobstats_samedir_rename
+    - jobstats_setattr
+    - jobstats_setxattr
+    - jobstats_statfs
+    - jobstats_sync
+    - jobstats_unlink
+
+From `/proc/fs/lustre/*/*/eviction_count`:
+
+- lustre2
+  - tags:
+    - name
+  - fields:
+    - evictions
+
+## Troubleshooting
+
+Check for the default or custom procfiles in the proc filesystem, and reference
+the [Lustre Monitoring and Statistics Guide](http://wiki.lustre.org/Lustre_Monitoring_and_Statistics_Guide).  This plugin does not
+report all information from these files, only a limited set of items
+corresponding to the above metric fields.
+
+## Example Output
+
+```text
+lustre2,host=oss2,jobid=42990218,name=wrk-OST0041 jobstats_ost_setattr=0i,jobstats_ost_sync=0i,jobstats_punch=0i,jobstats_read_bytes=4096i,jobstats_read_calls=1i,jobstats_read_max_size=4096i,jobstats_read_min_size=4096i,jobstats_write_bytes=310206488i,jobstats_write_calls=7423i,jobstats_write_max_size=53048i,jobstats_write_min_size=8820i 1556525847000000000
+lustre2,host=mds1,jobid=42992017,name=wrk-MDT0000 jobstats_close=31798i,jobstats_crossdir_rename=0i,jobstats_getattr=34146i,jobstats_getxattr=15i,jobstats_link=0i,jobstats_mkdir=658i,jobstats_mknod=0i,jobstats_open=31797i,jobstats_rename=0i,jobstats_rmdir=0i,jobstats_samedir_rename=0i,jobstats_setattr=1788i,jobstats_setxattr=0i,jobstats_statfs=0i,jobstats_sync=0i,jobstats_unlink=0i 1556525828000000000
+```
+
diff --git a/content/telegraf/v1/input-plugins/lvm/_index.md b/content/telegraf/v1/input-plugins/lvm/_index.md
new file mode 100644
index 000000000..6e3d3d37e
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/lvm/_index.md
@@ -0,0 +1,112 @@
+---
+description: "Telegraf plugin for collecting metrics from LVM"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: LVM
+    identifier: input-lvm
+tags: [LVM, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# LVM Input Plugin
+
+The Logical Volume Management (LVM) input plugin collects information about
+physical volumes, volume groups, and logical volumes.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics about LVM physical volumes, volume groups, logical volumes.
+[[inputs.lvm]]
+  ## Use sudo to run LVM commands
+  use_sudo = false
+
+  ## The default location of the pvs binary can be overridden with:
+  #pvs_binary = "/usr/sbin/pvs"
+
+  ## The default location of the vgs binary can be overridden with:
+  #vgs_binary = "/usr/sbin/vgs"
+
+  ## The default location of the lvs binary can be overridden with:
+  #lvs_binary = "/usr/sbin/lvs"
+```
+
+The LVM commands require elevated permissions. If your user is configured with
+sudo access to run these commands, set `use_sudo` to true.
+
+### Using sudo
+
+If your account cannot already run commands with passwordless sudo, you need to
+update the sudoers file. Below is an example that allows the required LVM
+commands:
+
+First, use the `visudo` command to start editing the sudoers file. Then add
+the following content, where `<username>` is the username of the user that
+needs this access:
+
+```text
+Cmnd_Alias LVM = /usr/sbin/pvs *, /usr/sbin/vgs *, /usr/sbin/lvs *
+<username>  ALL=(root) NOPASSWD: LVM
+Defaults!LVM !logfile, !syslog, !pam_session
+```
+
+The binary paths must match those in the configuration file (`pvs_binary`,
+`vgs_binary`, and `lvs_binary`).
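
For example, a configuration that runs the LVM commands via sudo with explicit binary locations might look like this (the paths shown are the defaults and purely illustrative):

```toml
[[inputs.lvm]]
  use_sudo = true
  ## These must match the Cmnd_Alias paths in the sudoers entry
  # pvs_binary = "/usr/sbin/pvs"
  # vgs_binary = "/usr/sbin/vgs"
  # lvs_binary = "/usr/sbin/lvs"
```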
+
+## Metrics
+
+Metrics are broken out by physical volume (pv), volume group (vg), and logical
+volume (lv):
+
+- lvm_physical_vol
+  - tags
+    - path
+    - vol_group
+  - fields
+    - size
+    - free
+    - used
+    - used_percent
+- lvm_vol_group
+  - tags
+    - name
+  - fields
+    - size
+    - free
+    - used_percent
+    - physical_volume_count
+    - logical_volume_count
+    - snapshot_count
+- lvm_logical_vol
+  - tags
+    - name
+    - vol_group
+  - fields
+    - size
+    - data_percent
+    - metadata_percent
+
+## Example Output
+
+The following example shows a system with the root partition on an LVM group
+as well as a Docker thin-provisioned LVM group on a second drive:
+
+```text
+lvm_physical_vol,path=/dev/sda2,vol_group=vgroot free=0i,size=249510756352i,used=249510756352i,used_percent=100 1631823026000000000
+lvm_physical_vol,path=/dev/sdb,vol_group=docker free=3858759680i,size=128316342272i,used=124457582592i,used_percent=96.99277612525741 1631823026000000000
+lvm_vol_group,name=vgroot free=0i,logical_volume_count=1i,physical_volume_count=1i,size=249510756352i,snapshot_count=0i,used_percent=100 1631823026000000000
+lvm_vol_group,name=docker free=3858759680i,logical_volume_count=1i,physical_volume_count=1i,size=128316342272i,snapshot_count=0i,used_percent=96.99277612525741 1631823026000000000
+lvm_logical_vol,name=lvroot,vol_group=vgroot data_percent=0,metadata_percent=0,size=249510756352i 1631823026000000000
+lvm_logical_vol,name=thinpool,vol_group=docker data_percent=0.36000001430511475,metadata_percent=1.3300000429153442,size=121899057152i 1631823026000000000
+```
diff --git a/content/telegraf/v1/input-plugins/mailchimp/_index.md b/content/telegraf/v1/input-plugins/mailchimp/_index.md
new file mode 100644
index 000000000..90169d7d5
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/mailchimp/_index.md
@@ -0,0 +1,82 @@
+---
+description: "Telegraf plugin for collecting metrics from Mailchimp"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Mailchimp
+    identifier: input-mailchimp
+tags: [Mailchimp, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Mailchimp Input Plugin
+
+Pulls campaign reports from the [Mailchimp API](https://developer.mailchimp.com/).
+
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Gathers metrics from the /3.0/reports MailChimp API
+[[inputs.mailchimp]]
+  ## MailChimp API key
+  ## get from https://admin.mailchimp.com/account/api/
+  api_key = "" # required
+
+  ## Reports for campaigns sent more than days_old ago will not be collected.
+  ## 0 means collect all and is the default value.
+  days_old = 0
+
+  ## Campaign ID to get, if empty gets all campaigns, this option overrides days_old
+  # campaign_id = ""
+```
+
+## Metrics
+
+- mailchimp
+  - tags:
+    - id
+    - campaign_title
+  - fields:
+    - emails_sent (integer, emails)
+    - abuse_reports (integer, reports)
+    - unsubscribed (integer, unsubscribes)
+    - hard_bounces (integer, emails)
+    - soft_bounces (integer, emails)
+    - syntax_errors (integer, errors)
+    - forwards_count (integer, emails)
+    - forwards_opens (integer, emails)
+    - opens_total (integer, emails)
+    - unique_opens (integer, emails)
+    - open_rate (double, percentage)
+    - clicks_total (integer, clicks)
+    - unique_clicks (integer, clicks)
+    - unique_subscriber_clicks (integer, clicks)
+    - click_rate (double, percentage)
+    - facebook_recipient_likes (integer, likes)
+    - facebook_unique_likes (integer, likes)
+    - facebook_likes (integer, likes)
+    - industry_type (string, type)
+    - industry_open_rate (double, percentage)
+    - industry_click_rate (double, percentage)
+    - industry_bounce_rate (double, percentage)
+    - industry_unopen_rate (double, percentage)
+    - industry_unsub_rate (double, percentage)
+    - industry_abuse_rate (double, percentage)
+    - list_stats_sub_rate (double, percentage)
+    - list_stats_unsub_rate (double, percentage)
+    - list_stats_open_rate (double, percentage)
+    - list_stats_click_rate (double, percentage)
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/marklogic/_index.md b/content/telegraf/v1/input-plugins/marklogic/_index.md
new file mode 100644
index 000000000..c4b7f86d2
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/marklogic/_index.md
@@ -0,0 +1,86 @@
+---
+description: "Telegraf plugin for collecting metrics from MarkLogic"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: MarkLogic
+    identifier: input-marklogic
+tags: [MarkLogic, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# MarkLogic Input Plugin
+
+The MarkLogic Telegraf plugin gathers health status metrics from one or more
+hosts.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Retrieves information on a specific host in a MarkLogic Cluster
+[[inputs.marklogic]]
+  ## Base URL of the MarkLogic HTTP Server.
+  url = "http://localhost:8002"
+
+  ## List of specific hostnames to retrieve information. At least (1) required.
+  # hosts = ["hostname1", "hostname2"]
+
+  ## Using HTTP Basic Authentication. Management API requires 'manage-user' role privileges
+  # username = "myuser"
+  # password = "mypassword"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+## Metrics
+
+- marklogic
+  - tags:
+    - source (the hostname of the server address, ex. `ml1.local`)
+    - id (the host node unique id ex. `2592913110757471141`)
+  - fields:
+    - online
+    - total_load
+    - total_rate
+    - ncpus
+    - ncores
+    - total_cpu_stat_user
+    - total_cpu_stat_system
+    - total_cpu_stat_idle
+    - total_cpu_stat_iowait
+    - memory_process_size
+    - memory_process_rss
+    - memory_system_total
+    - memory_system_free
+    - memory_process_swap_size
+    - memory_size
+    - host_size
+    - log_device_space
+    - data_dir_space
+    - query_read_bytes
+    - query_read_load
+    - merge_read_bytes
+    - merge_write_load
+    - http_server_receive_bytes
+    - http_server_send_bytes
+
+## Example Output
+
+```text
+marklogic,host=localhost,id=2592913110757471141,source=ml1.local total_cpu_stat_iowait=0.0125649003311992,memory_process_swap_size=0i,host_size=380i,data_dir_space=28216i,query_read_load=0i,ncpus=1i,log_device_space=28216i,query_read_bytes=13947332i,merge_write_load=0i,http_server_receive_bytes=225893i,online=true,ncores=4i,total_cpu_stat_user=0.150778993964195,total_cpu_stat_system=0.598927974700928,total_cpu_stat_idle=99.2210006713867,memory_system_total=3947i,memory_system_free=2669i,memory_size=4096i,total_rate=14.7697010040283,http_server_send_bytes=0i,memory_process_size=903i,memory_process_rss=486i,merge_read_load=0i,total_load=0.00502600101754069 1566373000000000000
+```
diff --git a/content/telegraf/v1/input-plugins/mcrouter/_index.md b/content/telegraf/v1/input-plugins/mcrouter/_index.md
new file mode 100644
index 000000000..7d9b2c2be
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/mcrouter/_index.md
@@ -0,0 +1,122 @@
+---
+description: "Telegraf plugin for collecting metrics from Mcrouter"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Mcrouter
+    identifier: input-mcrouter
+tags: [Mcrouter, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Mcrouter Input Plugin
+
+This plugin gathers statistics data from a Mcrouter server.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from one or many mcrouter servers.
+[[inputs.mcrouter]]
+  ## An array of address to gather stats about. Specify an ip or hostname
+  ## with port. ie tcp://localhost:11211, tcp://10.0.0.1:11211, etc.
+  servers = ["tcp://localhost:11211", "unix:///var/run/mcrouter.sock"]
+
+  ## Timeout for metric collections from all servers.  Minimum timeout is "1s".
+  # timeout = "5s"
+```
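
Server addresses use URL-style schemes: `tcp://` for network sockets and `unix://` for Unix domain sockets. A small illustrative sketch (not the plugin's code) of how such addresses break down:

```python
from urllib.parse import urlparse

def split_server(address: str):
    """Split a mcrouter server address into (scheme, target)."""
    u = urlparse(address)
    if u.scheme == "unix":
        return ("unix", u.path)  # socket file path
    return (u.scheme, f"{u.hostname}:{u.port}")  # host:port pair

print(split_server("tcp://localhost:11211"))
print(split_server("unix:///var/run/mcrouter.sock"))
```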
+
+## Metrics
+
+The fields from this plugin are gathered in the *mcrouter* measurement.
+
+Descriptions of the gathered fields can be found in the
+[mcrouter stats list](https://github.com/facebook/mcrouter/wiki/Stats-list).
+
+Fields:
+
+* uptime
+* num_servers
+* num_servers_new
+* num_servers_up
+* num_servers_down
+* num_servers_closed
+* num_clients
+* num_suspect_servers
+* destination_batches_sum
+* destination_requests_sum
+* outstanding_route_get_reqs_queued
+* outstanding_route_update_reqs_queued
+* outstanding_route_get_avg_queue_size
+* outstanding_route_update_avg_queue_size
+* outstanding_route_get_avg_wait_time_sec
+* outstanding_route_update_avg_wait_time_sec
+* retrans_closed_connections
+* destination_pending_reqs
+* destination_inflight_reqs
+* destination_batch_size
+* asynclog_requests
+* proxy_reqs_processing
+* proxy_reqs_waiting
+* client_queue_notify_period
+* rusage_system
+* rusage_user
+* ps_num_minor_faults
+* ps_num_major_faults
+* ps_user_time_sec
+* ps_system_time_sec
+* ps_vsize
+* ps_rss
+* fibers_allocated
+* fibers_pool_size
+* fibers_stack_high_watermark
+* successful_client_connections
+* duration_us
+* destination_max_pending_reqs
+* destination_max_inflight_reqs
+* retrans_per_kbyte_max
+* cmd_get_count
+* cmd_delete_out
+* cmd_lease_get
+* cmd_set
+* cmd_get_out_all
+* cmd_get_out
+* cmd_lease_set_count
+* cmd_other_out_all
+* cmd_lease_get_out
+* cmd_set_count
+* cmd_lease_set_out
+* cmd_delete_count
+* cmd_other
+* cmd_delete
+* cmd_get
+* cmd_lease_set
+* cmd_set_out
+* cmd_lease_get_count
+* cmd_other_out
+* cmd_lease_get_out_all
+* cmd_set_out_all
+* cmd_other_count
+* cmd_delete_out_all
+* cmd_lease_set_out_all
+
+## Tags
+
+* Mcrouter measurements have the following tags:
+  * server (the host name from which metrics are gathered)
+
+## Example Output
+
+```text
+mcrouter,server=localhost:11211 uptime=166,num_servers=1,num_servers_new=1,num_servers_up=0,num_servers_down=0,num_servers_closed=0,num_clients=1,num_suspect_servers=0,destination_batches_sum=0,destination_requests_sum=0,outstanding_route_get_reqs_queued=0,outstanding_route_update_reqs_queued=0,outstanding_route_get_avg_queue_size=0,outstanding_route_update_avg_queue_size=0,outstanding_route_get_avg_wait_time_sec=0,outstanding_route_update_avg_wait_time_sec=0,retrans_closed_connections=0,destination_pending_reqs=0,destination_inflight_reqs=0,destination_batch_size=0,asynclog_requests=0,proxy_reqs_processing=1,proxy_reqs_waiting=0,client_queue_notify_period=0,rusage_system=0.040966,rusage_user=0.020483,ps_num_minor_faults=2490,ps_num_major_faults=11,ps_user_time_sec=0.02,ps_system_time_sec=0.04,ps_vsize=697741312,ps_rss=10563584,fibers_allocated=0,fibers_pool_size=0,fibers_stack_high_watermark=0,successful_client_connections=18,duration_us=0,destination_max_pending_reqs=0,destination_max_inflight_reqs=0,retrans_per_kbyte_max=0,cmd_get_count=0,cmd_delete_out=0,cmd_lease_get=0,cmd_set=0,cmd_get_out_all=0,cmd_get_out=0,cmd_lease_set_count=0,cmd_other_out_all=0,cmd_lease_get_out=0,cmd_set_count=0,cmd_lease_set_out=0,cmd_delete_count=0,cmd_other=0,cmd_delete=0,cmd_get=0,cmd_lease_set=0,cmd_set_out=0,cmd_lease_get_count=0,cmd_other_out=0,cmd_lease_get_out_all=0,cmd_set_out_all=0,cmd_other_count=0,cmd_delete_out_all=0,cmd_lease_set_out_all=0 1453831884664956455
+```
diff --git a/content/telegraf/v1/input-plugins/mdstat/_index.md b/content/telegraf/v1/input-plugins/mdstat/_index.md
new file mode 100644
index 000000000..eecfc9fce
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/mdstat/_index.md
@@ -0,0 +1,77 @@
+---
+description: "Telegraf plugin for collecting metrics from mdstat"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: mdstat
+    identifier: input-mdstat
+tags: [mdstat, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# mdstat Input Plugin
+
+The mdstat plugin gathers statistics about any Linux MD RAID arrays configured
+on the host by reading `/proc/mdstat`. For a full list of available fields, see
+the `/proc/mdstat` section of the [proc man page](http://man7.org/linux/man-pages/man5/proc.5.html). For a better idea
+of what each field represents, see the [mdstat man page](https://raid.wiki.kernel.org/index.php/Mdstat).
+
+Stat collection is based on Prometheus' [mdstat collection library](https://github.com/prometheus/procfs/blob/master/mdstat.go).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Get kernel statistics from /proc/mdstat
+# This plugin ONLY supports Linux
+[[inputs.mdstat]]
+  ## Sets file path
+  ## If not specified, then default is /proc/mdstat
+  # file_name = "/proc/mdstat"
+```
+
+## Metrics
+
+- mdstat
+  - BlocksSynced (if the array is rebuilding/checking, this is the count of
+    blocks that have been scanned)
+  - BlocksSyncedFinishTime (the expected finish time of the rebuild scan,
+    listed in minutes remaining)
+  - BlocksSyncedPct (the percentage of the rebuild scan left)
+  - BlocksSyncedSpeed (the current speed the rebuild is running at, listed
+    in K/sec)
+  - BlocksTotal (the total count of blocks in the array)
+  - DisksActive (the number of disks that are currently considered healthy
+    in the array)
+  - DisksFailed (the current count of failed disks in the array)
+  - DisksDown (the count of disks currently marked down in the array)
+  - DisksSpare (the current count of "spare" disks in the array)
+  - DisksTotal (total count of disks in the array)
+
+## Tags
+
+- mdstat
+  - ActivityState (`active` or `inactive`)
+  - Devices (comma separated list of devices that make up the array)
+  - Name (name of the array)
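
The disk counts above are derived from the `[n/m]` pair in each array's `/proc/mdstat` status line. An illustrative sketch of that mapping (not the plugin's actual code):

```python
import re

def parse_disk_counts(status_line: str) -> dict:
    """Extract disk counts from an mdstat status line such as
    '231299072 blocks super 1.2 [2/2] [UU]', where the pair is [total/active]."""
    m = re.search(r"\[(\d+)/(\d+)\]", status_line)
    if m is None:
        raise ValueError("no [total/active] pair found")
    total, active = int(m.group(1)), int(m.group(2))
    return {
        "DisksTotal": total,
        "DisksActive": active,
        "DisksDown": total - active,
    }

print(parse_disk_counts("231299072 blocks super 1.2 [2/2] [UU]"))
```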
+
+## Example Output
+
+```text
+mdstat,ActivityState=active,Devices=sdm1\,sdn1,Name=md1 BlocksSynced=231299072i,BlocksSyncedFinishTime=0,BlocksSyncedPct=0,BlocksSyncedSpeed=0,BlocksTotal=231299072i,DisksActive=2i,DisksFailed=0i,DisksSpare=0i,DisksTotal=2i,DisksDown=0i 1617814276000000000
+mdstat,ActivityState=active,Devices=sdm5\,sdn5,Name=md2 BlocksSynced=2996224i,BlocksSyncedFinishTime=0,BlocksSyncedPct=0,BlocksSyncedSpeed=0,BlocksTotal=2996224i,DisksActive=2i,DisksFailed=0i,DisksSpare=0i,DisksTotal=2i,DisksDown=0i 1617814276000000000
+```
diff --git a/content/telegraf/v1/input-plugins/mem/_index.md b/content/telegraf/v1/input-plugins/mem/_index.md
new file mode 100644
index 000000000..818828454
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/mem/_index.md
@@ -0,0 +1,84 @@
+---
+description: "Telegraf plugin for collecting metrics from Memory"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Memory
+    identifier: input-mem
+tags: [Memory, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Memory Input Plugin
+
+The mem plugin collects system memory metrics.
+
+For a more complete explanation of the difference between *used* and
+*actual_used* RAM, see [Linux ate my ram](http://www.linuxatemyram.com/).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics about memory usage
+[[inputs.mem]]
+  # no configuration
+```
+
+## Metrics
+
+Available fields are dependent on platform.
+
+- mem
+  - fields:
+    - active (integer, Darwin, FreeBSD, Linux, OpenBSD)
+    - available (integer)
+    - available_percent (float)
+    - buffered (integer, FreeBSD, Linux)
+    - cached (integer, FreeBSD, Linux, OpenBSD)
+    - commit_limit (integer, Linux)
+    - committed_as (integer, Linux)
+    - dirty (integer, Linux)
+    - free (integer, Darwin, FreeBSD, Linux, OpenBSD)
+    - high_free (integer, Linux)
+    - high_total (integer, Linux)
+    - huge_pages_free (integer, Linux)
+    - huge_page_size (integer, Linux)
+    - huge_pages_total (integer, Linux)
+    - inactive (integer, Darwin, FreeBSD, Linux, OpenBSD)
+    - laundry (integer, FreeBSD)
+    - low_free (integer, Linux)
+    - low_total (integer, Linux)
+    - mapped (integer, Linux)
+    - page_tables (integer, Linux)
+    - shared (integer, Linux)
+    - slab (integer, Linux)
+    - sreclaimable (integer, Linux)
+    - sunreclaim (integer, Linux)
+    - swap_cached (integer, Linux)
+    - swap_free (integer, Linux)
+    - swap_total (integer, Linux)
+    - total (integer)
+    - used (integer)
+    - used_percent (float)
+    - vmalloc_chunk (integer, Linux)
+    - vmalloc_total (integer, Linux)
+    - vmalloc_used (integer, Linux)
+    - wired (integer, Darwin, FreeBSD, OpenBSD)
+    - write_back (integer, Linux)
+    - write_back_tmp (integer, Linux)
+
+## Example Output
+
+```text
+mem active=9299595264i,available=16818249728i,available_percent=80.41654254645131,buffered=2383761408i,cached=13316689920i,commit_limit=14751920128i,committed_as=11781156864i,dirty=122880i,free=1877688320i,high_free=0i,high_total=0i,huge_page_size=2097152i,huge_pages_free=0i,huge_pages_total=0i,inactive=7549939712i,low_free=0i,low_total=0i,mapped=416763904i,page_tables=19787776i,shared=670679040i,slab=2081071104i,sreclaimable=1923395584i,sunreclaim=157675520i,swap_cached=1302528i,swap_free=4286128128i,swap_total=4294963200i,total=20913917952i,used=3335778304i,used_percent=15.95004011996231,vmalloc_chunk=0i,vmalloc_total=35184372087808i,vmalloc_used=0i,wired=0i,write_back=0i,write_back_tmp=0i 1574712869000000000
+```
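
The percentage fields are simple derived ratios. As a quick sanity check against the sample line, assuming `used_percent = used / total * 100`:

```python
# Values taken from the example line-protocol output above
used, total = 3335778304, 20913917952
available = 16818249728

used_percent = used / total * 100
available_percent = available / total * 100

# The sample line reports used_percent=15.95004011996231
# and available_percent=80.41654254645131
print(round(used_percent, 6), round(available_percent, 6))
```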
diff --git a/content/telegraf/v1/input-plugins/memcached/_index.md b/content/telegraf/v1/input-plugins/memcached/_index.md
new file mode 100644
index 000000000..36a911391
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/memcached/_index.md
@@ -0,0 +1,137 @@
+---
+description: "Telegraf plugin for collecting metrics from Memcached"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Memcached
+    identifier: input-memcached
+tags: [Memcached, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Memcached Input Plugin
+
+This plugin gathers statistics data from a Memcached server.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from one or many memcached servers.
+[[inputs.memcached]]
+  # An array of addresses to gather stats about. Specify an ip or hostname
+  # with optional port. ie localhost, 10.0.0.1:11211, etc.
+  servers = ["localhost:11211"]
+  # An array of unix memcached sockets to gather stats about.
+  # unix_sockets = ["/var/run/memcached.sock"]
+
+  ## Optional TLS Config
+  # enable_tls = false
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## If false, skip chain & host verification
+  # insecure_skip_verify = true
+```
+
+## Metrics
+
+The fields from this plugin are gathered in the *memcached* measurement.
+
+Fields:
+
+* accepting_conns - Whether or not server is accepting conns
+* auth_cmds - Number of authentication commands handled, success or failure
+* auth_errors - Number of failed authentications
+* bytes - Current number of bytes used to store items
+* bytes_read - Total number of bytes read by this server from network
+* bytes_written - Total number of bytes sent by this server to network
+* cas_badval - Number of CAS reqs for which a key was found, but the CAS value
+  did not match
+* cas_hits - Number of successful CAS reqs
+* cas_misses - Number of CAS reqs against missing keys
+* cmd_flush - Cumulative number of flush reqs
+* cmd_get - Cumulative number of retrieval reqs
+* cmd_set - Cumulative number of storage reqs
+* cmd_touch - Cumulative number of touch reqs
+* conn_yields - Number of times any connection yielded to another due to
+  hitting the -R limit
+* connection_structures - Number of connection structures allocated by the
+  server
+* curr_connections - Number of open connections
+* curr_items - Current number of items stored
+* decr_hits - Number of successful decr reqs
+* decr_misses - Number of decr reqs against missing keys
+* delete_hits - Number of deletion reqs resulting in an item being removed
+* delete_misses - Number of deletions reqs for missing keys
+* evicted_active - Items evicted from LRU that had been hit recently but did
+  not jump to top of LRU
+* evicted_unfetched - Items evicted from LRU that were never touched by
+  get/incr/append/etc
+* evictions - Number of valid items removed from cache to free memory for
+  new items
+* expired_unfetched - Items pulled from LRU that were never touched by
+  get/incr/append/etc before expiring
+* get_expired - Number of items that have been requested but had already
+  expired
+* get_flushed - Number of items that have been requested but have been flushed
+  via flush_all
+* get_hits - Number of keys that have been requested and found present
+* get_misses - Number of items that have been requested and not found
+* hash_bytes - Bytes currently used by hash tables
+* hash_is_expanding - Indicates if the hash table is being grown to a new size
+* hash_power_level - Current size multiplier for hash table
+* incr_hits - Number of successful incr reqs
+* incr_misses - Number of incr reqs against missing keys
+* limit_maxbytes - Number of bytes this server is allowed to use for storage
+* listen_disabled_num - Number of times server has stopped accepting new
+  connections (maxconns)
+* max_connections - Max number of simultaneous connections
+* reclaimed - Number of times an entry was stored using memory from an
+  expired entry
+* rejected_connections - Conns rejected in maxconns_fast mode
+* store_no_memory - Number of rejected storage requests caused by exhaustion
+  of the memory limit when evictions are disabled
+* store_too_large - Number of rejected storage requests caused by attempting
+  to write a value larger than the item size limit
+* threads - Number of worker threads requested
+* total_connections - Total number of connections opened since the server
+  started running
+* total_items - Total number of items stored since the server started
+* touch_hits - Number of keys that have been touched with a new expiration time
+* touch_misses - Number of items that have been touched and not found
+* uptime - Number of secs since the server started
+
+Descriptions of the gathered fields are taken from the
+[memcached protocol docs](https://github.com/memcached/memcached/blob/master/doc/protocol.txt).
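
Under the hood, a memcached server answers the text-protocol `stats` command with `STAT <name> <value>` lines, one per field. A minimal illustrative parser (not the plugin's actual code):

```python
def parse_stats(response: str) -> dict:
    """Convert a memcached text-protocol `stats` response into a
    field map, coercing values to int or float where possible."""
    fields = {}
    for line in response.splitlines():
        parts = line.split()
        if len(parts) != 3 or parts[0] != "STAT":
            continue  # skip END and malformed lines
        name, value = parts[1], parts[2]
        try:
            fields[name] = int(value)
        except ValueError:
            try:
                fields[name] = float(value)
            except ValueError:
                fields[name] = value  # leave as string
    return fields

response = "STAT uptime 3\r\nSTAT curr_connections 2\r\nSTAT rusage_user 0.020483\r\nEND\r\n"
print(parse_stats(response))
```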
+
+## Tags
+
+* Memcached measurements have the following tags:
+  * server (the host name from which metrics are gathered)
+
+## Sample Queries
+
+You can use the following query to get the average get hit and miss ratio, as
+well as the total average size of cached items, number of cached items and
+average connection counts per server.
+
+```sql
+SELECT mean(get_hits) / mean(cmd_get) as get_ratio, mean(get_misses) / mean(cmd_get) as get_misses_ratio, mean(bytes), mean(curr_items), mean(curr_connections) FROM memcached WHERE time > now() - 1h GROUP BY server
+```
+
+## Example Output
+
+```text
+memcached,server=localhost:11211 accepting_conns=1i,auth_cmds=0i,auth_errors=0i,bytes=0i,bytes_read=7i,bytes_written=0i,cas_badval=0i,cas_hits=0i,cas_misses=0i,cmd_flush=0i,cmd_get=0i,cmd_set=0i,cmd_touch=0i,conn_yields=0i,connection_structures=3i,curr_connections=2i,curr_items=0i,decr_hits=0i,decr_misses=0i,delete_hits=0i,delete_misses=0i,evicted_active=0i,evicted_unfetched=0i,evictions=0i,expired_unfetched=0i,get_expired=0i,get_flushed=0i,get_hits=0i,get_misses=0i,hash_bytes=524288i,hash_is_expanding=0i,hash_power_level=16i,incr_hits=0i,incr_misses=0i,limit_maxbytes=67108864i,listen_disabled_num=0i,max_connections=1024i,reclaimed=0i,rejected_connections=0i,store_no_memory=0i,store_too_large=0i,threads=4i,total_connections=3i,total_items=0i,touch_hits=0i,touch_misses=0i,uptime=3i 1644771989000000000
+```
diff --git a/content/telegraf/v1/input-plugins/mesos/_index.md b/content/telegraf/v1/input-plugins/mesos/_index.md
new file mode 100644
index 000000000..1b0b2a80d
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/mesos/_index.md
@@ -0,0 +1,374 @@
+---
+description: "Telegraf plugin for collecting metrics from Mesos"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Mesos
+    identifier: input-mesos
+tags: [Mesos, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Mesos Input Plugin
+
+This input plugin gathers metrics from Mesos. For more information, see the
+[Mesos Observability Metrics](http://mesos.apache.org/documentation/latest/monitoring/) page.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, and more.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Telegraf plugin for gathering metrics from N Mesos masters
+[[inputs.mesos]]
+  ## Timeout, in ms.
+  timeout = 100
+
+  ## A list of Mesos masters.
+  masters = ["http://localhost:5050"]
+
+  ## Master metrics groups to be collected, by default, all enabled.
+  master_collections = [
+    "resources",
+    "master",
+    "system",
+    "agents",
+    "frameworks",
+    "framework_offers",
+    "tasks",
+    "messages",
+    "evqueue",
+    "registrar",
+    "allocator",
+  ]
+
+  ## A list of Mesos slaves, default is []
+  # slaves = []
+
+  ## Slave metrics groups to be collected, by default, all enabled.
+  # slave_collections = [
+  #   "resources",
+  #   "agent",
+  #   "system",
+  #   "executors",
+  #   "tasks",
+  #   "messages",
+  # ]
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+By default this plugin is not configured to gather metrics from Mesos. Since a
+Mesos cluster can be deployed in numerous ways, the plugin does not provide any
+default values. You need to specify the master and/or slave nodes this plugin
+will gather metrics from.
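+
+As a starting point, a minimal configuration that collects all default metric
+groups from one master and one slave might look like this (the addresses below
+are placeholders for your own deployment):
+
+```toml
+[[inputs.mesos]]
+  ## Timeout, in ms.
+  timeout = 100
+  masters = ["http://mesos-master-1:5050"]
+  slaves = ["http://mesos-slave-1:5051"]
+```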
+
+## Metrics
+
+Mesos master metric groups
+
+- resources
+  - master/cpus_percent
+  - master/cpus_used
+  - master/cpus_total
+  - master/cpus_revocable_percent
+  - master/cpus_revocable_total
+  - master/cpus_revocable_used
+  - master/disk_percent
+  - master/disk_used
+  - master/disk_total
+  - master/disk_revocable_percent
+  - master/disk_revocable_total
+  - master/disk_revocable_used
+  - master/gpus_percent
+  - master/gpus_used
+  - master/gpus_total
+  - master/gpus_revocable_percent
+  - master/gpus_revocable_total
+  - master/gpus_revocable_used
+  - master/mem_percent
+  - master/mem_used
+  - master/mem_total
+  - master/mem_revocable_percent
+  - master/mem_revocable_total
+  - master/mem_revocable_used
+
+- master
+  - master/elected
+  - master/uptime_secs
+
+- system
+  - system/cpus_total
+  - system/load_15min
+  - system/load_5min
+  - system/load_1min
+  - system/mem_free_bytes
+  - system/mem_total_bytes
+
+- slaves
+  - master/slave_registrations
+  - master/slave_removals
+  - master/slave_reregistrations
+  - master/slave_shutdowns_scheduled
+  - master/slave_shutdowns_canceled
+  - master/slave_shutdowns_completed
+  - master/slaves_active
+  - master/slaves_connected
+  - master/slaves_disconnected
+  - master/slaves_inactive
+  - master/slave_unreachable_canceled
+  - master/slave_unreachable_completed
+  - master/slave_unreachable_scheduled
+  - master/slaves_unreachable
+
+- frameworks
+  - master/frameworks_active
+  - master/frameworks_connected
+  - master/frameworks_disconnected
+  - master/frameworks_inactive
+  - master/outstanding_offers
+
+- framework offers
+  - master/frameworks/subscribed
+  - master/frameworks/calls_total
+  - master/frameworks/calls
+  - master/frameworks/events_total
+  - master/frameworks/events
+  - master/frameworks/operations_total
+  - master/frameworks/operations
+  - master/frameworks/tasks/active
+  - master/frameworks/tasks/terminal
+  - master/frameworks/offers/sent
+  - master/frameworks/offers/accepted
+  - master/frameworks/offers/declined
+  - master/frameworks/offers/rescinded
+  - master/frameworks/roles/suppressed
+
+- tasks
+  - master/tasks_error
+  - master/tasks_failed
+  - master/tasks_finished
+  - master/tasks_killed
+  - master/tasks_lost
+  - master/tasks_running
+  - master/tasks_staging
+  - master/tasks_starting
+  - master/tasks_dropped
+  - master/tasks_gone
+  - master/tasks_gone_by_operator
+  - master/tasks_killing
+  - master/tasks_unreachable
+
+- messages
+  - master/invalid_executor_to_framework_messages
+  - master/invalid_framework_to_executor_messages
+  - master/invalid_status_update_acknowledgements
+  - master/invalid_status_updates
+  - master/dropped_messages
+  - master/messages_authenticate
+  - master/messages_deactivate_framework
+  - master/messages_decline_offers
+  - master/messages_executor_to_framework
+  - master/messages_exited_executor
+  - master/messages_framework_to_executor
+  - master/messages_kill_task
+  - master/messages_launch_tasks
+  - master/messages_reconcile_tasks
+  - master/messages_register_framework
+  - master/messages_register_slave
+  - master/messages_reregister_framework
+  - master/messages_reregister_slave
+  - master/messages_resource_request
+  - master/messages_revive_offers
+  - master/messages_status_update
+  - master/messages_status_update_acknowledgement
+  - master/messages_unregister_framework
+  - master/messages_unregister_slave
+  - master/messages_update_slave
+  - master/recovery_slave_removals
+  - master/slave_removals/reason_registered
+  - master/slave_removals/reason_unhealthy
+  - master/slave_removals/reason_unregistered
+  - master/valid_framework_to_executor_messages
+  - master/valid_status_update_acknowledgements
+  - master/valid_status_updates
+  - master/task_lost/source_master/reason_invalid_offers
+  - master/task_lost/source_master/reason_slave_removed
+  - master/task_lost/source_slave/reason_executor_terminated
+  - master/valid_executor_to_framework_messages
+  - master/invalid_operation_status_update_acknowledgements
+  - master/messages_operation_status_update_acknowledgement
+  - master/messages_reconcile_operations
+  - master/messages_suppress_offers
+  - master/valid_operation_status_update_acknowledgements
+
+- evqueue
+  - master/event_queue_dispatches
+  - master/event_queue_http_requests
+  - master/event_queue_messages
+  - master/operator_event_stream_subscribers
+
+- registrar
+  - registrar/state_fetch_ms
+  - registrar/state_store_ms
+  - registrar/state_store_ms/max
+  - registrar/state_store_ms/min
+  - registrar/state_store_ms/p50
+  - registrar/state_store_ms/p90
+  - registrar/state_store_ms/p95
+  - registrar/state_store_ms/p99
+  - registrar/state_store_ms/p999
+  - registrar/state_store_ms/p9999
+  - registrar/state_store_ms/count
+  - registrar/log/ensemble_size
+  - registrar/log/recovered
+  - registrar/queued_operations
+  - registrar/registry_size_bytes
+
+- allocator
+  - allocator/allocation_run_ms
+  - allocator/allocation_run_ms/count
+  - allocator/allocation_run_ms/max
+  - allocator/allocation_run_ms/min
+  - allocator/allocation_run_ms/p50
+  - allocator/allocation_run_ms/p90
+  - allocator/allocation_run_ms/p95
+  - allocator/allocation_run_ms/p99
+  - allocator/allocation_run_ms/p999
+  - allocator/allocation_run_ms/p9999
+  - allocator/allocation_runs
+  - allocator/allocation_run_latency_ms
+  - allocator/allocation_run_latency_ms/count
+  - allocator/allocation_run_latency_ms/max
+  - allocator/allocation_run_latency_ms/min
+  - allocator/allocation_run_latency_ms/p50
+  - allocator/allocation_run_latency_ms/p90
+  - allocator/allocation_run_latency_ms/p95
+  - allocator/allocation_run_latency_ms/p99
+  - allocator/allocation_run_latency_ms/p999
+  - allocator/allocation_run_latency_ms/p9999
+  - allocator/roles/shares/dominant
+  - allocator/event_queue_dispatches
+  - allocator/offer_filters/roles/active
+  - allocator/quota/roles/resources/offered_or_allocated
+  - allocator/quota/roles/resources/guarantee
+  - allocator/resources/cpus/offered_or_allocated
+  - allocator/resources/cpus/total
+  - allocator/resources/disk/offered_or_allocated
+  - allocator/resources/disk/total
+  - allocator/resources/mem/offered_or_allocated
+  - allocator/resources/mem/total
+
+Mesos slave metric groups
+
+- resources
+  - slave/cpus_percent
+  - slave/cpus_used
+  - slave/cpus_total
+  - slave/cpus_revocable_percent
+  - slave/cpus_revocable_total
+  - slave/cpus_revocable_used
+  - slave/disk_percent
+  - slave/disk_used
+  - slave/disk_total
+  - slave/disk_revocable_percent
+  - slave/disk_revocable_total
+  - slave/disk_revocable_used
+  - slave/gpus_percent
+  - slave/gpus_used
+  - slave/gpus_total
+  - slave/gpus_revocable_percent
+  - slave/gpus_revocable_total
+  - slave/gpus_revocable_used
+  - slave/mem_percent
+  - slave/mem_used
+  - slave/mem_total
+  - slave/mem_revocable_percent
+  - slave/mem_revocable_total
+  - slave/mem_revocable_used
+
+- agent
+  - slave/registered
+  - slave/uptime_secs
+
+- system
+  - system/cpus_total
+  - system/load_15min
+  - system/load_5min
+  - system/load_1min
+  - system/mem_free_bytes
+  - system/mem_total_bytes
+
+- executors
+  - containerizer/mesos/container_destroy_errors
+  - slave/container_launch_errors
+  - slave/executors_preempted
+  - slave/frameworks_active
+  - slave/executor_directory_max_allowed_age_secs
+  - slave/executors_registering
+  - slave/executors_running
+  - slave/executors_terminated
+  - slave/executors_terminating
+  - slave/recovery_errors
+
+- tasks
+  - slave/tasks_failed
+  - slave/tasks_finished
+  - slave/tasks_killed
+  - slave/tasks_lost
+  - slave/tasks_running
+  - slave/tasks_staging
+  - slave/tasks_starting
+
+- messages
+  - slave/invalid_framework_messages
+  - slave/invalid_status_updates
+  - slave/valid_framework_messages
+  - slave/valid_status_updates
+
+## Tags
+
+- All master/slave measurements have the following tags:
+  - server (network location of server: `host:port`)
+  - url (URL origin of server: `scheme://host:port`)
+  - role (master/slave)
+
+- All master measurements have the extra tags:
+  - state (leader/follower)
+
+## Example Output
+
+```text
+mesos,role=master,state=leader,host=172.17.8.102,server=172.17.8.101
+allocator/event_queue_dispatches=0,master/cpus_percent=0,
+master/cpus_revocable_percent=0,master/cpus_revocable_total=0,
+master/cpus_revocable_used=0,master/cpus_total=2,
+master/cpus_used=0,master/disk_percent=0,master/disk_revocable_percent=0,
+master/disk_revocable_total=0,master/disk_revocable_used=0,master/disk_total=10823,
+master/disk_used=0,master/dropped_messages=2,master/elected=1,
+master/event_queue_dispatches=10,master/event_queue_http_requests=0,
+master/event_queue_messages=0,master/frameworks_active=2,master/frameworks_connected=2,
+master/frameworks_disconnected=0,master/frameworks_inactive=0,
+master/invalid_executor_to_framework_messages=0,
+master/invalid_framework_to_executor_messages=0,
+master/invalid_status_update_acknowledgements=0,master/invalid_status_updates=0,master/mem_percent=0,
+master/mem_revocable_percent=0,master/mem_revocable_total=0,
+master/mem_revocable_used=0,master/mem_total=1002,
+master/mem_used=0,master/messages_authenticate=0,
+master/messages_deactivate_framework=0 ...
+```
diff --git a/content/telegraf/v1/input-plugins/minecraft/_index.md b/content/telegraf/v1/input-plugins/minecraft/_index.md
new file mode 100644
index 000000000..6dcd76fb3
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/minecraft/_index.md
@@ -0,0 +1,114 @@
+---
+description: "Telegraf plugin for collecting metrics from Minecraft"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Minecraft
+    identifier: input-minecraft
+tags: [Minecraft, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Minecraft Input Plugin
+
+The `minecraft` plugin connects to a Minecraft server using the RCON protocol
+to collect scores from the server [scoreboard](http://minecraft.gamepedia.com/Scoreboard).
+
+This plugin is known to support Minecraft Java Edition versions 1.11 - 1.14.
+When using a version of Minecraft earlier than 1.13, be aware that the values
+for some criteria have changed and may need to be modified.
+
+## Server Setup
+
+Enable [RCON](http://wiki.vg/RCON) on the Minecraft server by adding the following to your server
+configuration in the [server.properties](https://minecraft.gamepedia.com/Server.properties) file:
+
+```conf
+enable-rcon=true
+rcon.password=<your password>
+rcon.port=<1-65535>
+```
+
+Scoreboard [Objectives](https://minecraft.gamepedia.com/Scoreboard#Objectives) must be added for the
+plugin to have scores to collect. These can be added in game by players with
+op status, from the server console, or over an RCON connection.
+
+When getting started, pick an easy-to-test objective. This command adds an
+objective that counts the number of times a player has jumped:
+
+```sh
+/scoreboard objectives add jumps minecraft.custom:minecraft.jump
+```
+
+Once a player has triggered the event, they will be added to the scoreboard.
+You can then list all players with recorded scores:
+
+```sh
+/scoreboard players list
+```
+
+View the current scores with a command, substituting your player name:
+
+```sh
+/scoreboard players list Etho
+```
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, and more.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Collects scores from a Minecraft server's scoreboard using the RCON protocol
+[[inputs.minecraft]]
+  ## Address of the Minecraft server.
+  # server = "localhost"
+
+  ## Server RCON Port.
+  # port = "25575"
+
+  ## Server RCON Password.
+  password = ""
+
+  ## Uncomment to remove deprecated metric components.
+  # tagdrop = ["server"]
+```
+
+## Metrics
+
+- minecraft
+  - tags:
+    - player
+    - port (port of the server)
+    - server (hostname:port, deprecated in 1.11; use `source` and `port` tags)
+    - source (hostname of the server)
+  - fields:
+    - `<objective_name>` (integer, count)
+
+## Sample Queries
+
+Get the number of jumps per player in the last hour:
+
+```sql
+SELECT SPREAD("jumps") FROM "minecraft" WHERE time > now() - 1h GROUP BY "player"
+```
+
+## Example Output
+
+```text
+minecraft,player=notch,source=127.0.0.1,port=25575 jumps=178i 1498261397000000000
+minecraft,player=dinnerbone,source=127.0.0.1,port=25575 deaths=1i,jumps=1999i,cow_kills=1i 1498261397000000000
+minecraft,player=jeb,source=127.0.0.1,port=25575 d_pickaxe=1i,damage_dealt=80i,d_sword=2i,hunger=20i,health=20i,kills=1i,level=33i,jumps=264i,armor=15i 1498261397000000000
+```
diff --git a/content/telegraf/v1/input-plugins/mock/_index.md b/content/telegraf/v1/input-plugins/mock/_index.md
new file mode 100644
index 000000000..441334a40
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/mock/_index.md
@@ -0,0 +1,97 @@
+---
+description: "Telegraf plugin for collecting metrics from Mock Data"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Mock Data
+    identifier: input-mock
+tags: [Mock Data, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Mock Data Input Plugin
+
+The mock input plugin generates random data based on a selection of different
+algorithms. For example, it can produce random data between a set of values,
+fake stock data, sine waves, and step-wise values.
+
+Additionally, users can set the measurement name and tags to whatever is
+required to mock their situation.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, and more.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Generate metrics for test and demonstration purposes
+[[inputs.mock]]
+  ## Set the metric name to use for reporting
+  metric_name = "mock"
+
+  ## Optional string key-value pairs of tags to add to all metrics
+  # [inputs.mock.tags]
+  # "key" = "value"
+
+  ## One or more mock data fields *must* be defined.
+  # [[inputs.mock.constant]]
+  #   name = "constant"
+  #   value = value_of_any_type
+  # [[inputs.mock.random]]
+  #   name = "rand"
+  #   min = 1.0
+  #   max = 6.0
+  # [[inputs.mock.sine_wave]]
+  #   name = "wave"
+  #   amplitude = 1.0
+  #   period = 0.5
+  #   base_line = 0.0
+  # [[inputs.mock.step]]
+  #   name = "plus_one"
+  #   start = 0.0
+  #   step = 1.0
+  # [[inputs.mock.stock]]
+  #   name = "abc"
+  #   price = 50.00
+  #   volatility = 0.2
+```
+
+The mock plugin only requires that:
+
+1) A metric name is set
+2) At least one data field algorithm is defined
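+
+For example, a minimal configuration that satisfies both requirements using
+only the random-float generator (the metric and field names here are
+arbitrary):
+
+```toml
+[[inputs.mock]]
+  metric_name = "mock"
+
+  [[inputs.mock.random]]
+    name = "rand"
+    min = 1.0
+    max = 6.0
+```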
+
+## Available Algorithms
+
+The available algorithms for generating mock data include:
+
+* Constant - generate a field with the given value of type string, float, int
+  or bool
+* Random Float - generate a random float, inclusive of min and max
+* Sine Wave - produce a sine wave with a certain amplitude, period and baseline
+* Step - always add the step value; negative values are accepted
+* Stock - generate fake, stock-like price values based on a volatility variable
+
+## Metrics
+
+Metrics are entirely based on the user's own configuration and settings.
+
+## Example Output
+
+The following example shows all available algorithms configured, with two
+additional tags added:
+
+```text
+mock_sensors,building=5A,site=FTC random=4.875966794516125,abc=50,wave=0,plus_one=0 1632170840000000000
+mock_sensors,building=5A,site=FTC random=5.738651873834452,abc=45.095549448434774,wave=5.877852522924732,plus_one=1 1632170850000000000
+mock_sensors,building=5A,site=FTC random=1.0429328917205203,abc=51.928560083072924,wave=9.510565162951535,plus_one=2 1632170860000000000
+mock_sensors,building=5A,site=FTC random=5.290188595384418,abc=44.41090520217027,wave=9.510565162951536,plus_one=3 1632170870000000000
+mock_sensors,building=5A,site=FTC random=2.0724967227069135,abc=47.212167806890314,wave=5.877852522924733,plus_one=4 1632170880000000000
+```
diff --git a/content/telegraf/v1/input-plugins/modbus/_index.md b/content/telegraf/v1/input-plugins/modbus/_index.md
new file mode 100644
index 000000000..b73d5933f
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/modbus/_index.md
@@ -0,0 +1,886 @@
+---
+description: "Telegraf plugin for collecting metrics from Modbus"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Modbus
+    identifier: input-modbus
+tags: [Modbus, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+<!-- markdownlint-disable MD024 -->
+# Modbus Input Plugin
+
+The Modbus plugin collects Discrete Inputs, Coils, Input Registers and Holding
+Registers via Modbus TCP or Modbus RTU/ASCII.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, and more.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample_general_begin.conf @sample_register.conf @sample_request.conf @sample_metric.conf @sample_general_end.conf
+# Retrieve data from MODBUS slave devices
+[[inputs.modbus]]
+  ## Connection Configuration
+  ##
+  ## The plugin supports connections to PLCs via MODBUS/TCP, RTU over TCP, ASCII over TCP or
+  ## via serial line communication in binary (RTU) or readable (ASCII) encoding
+  ##
+  ## Device name
+  name = "Device"
+
+  ## Slave ID - addresses a MODBUS device on the bus
+  ## Range: 0 - 255 [0 = broadcast; 248 - 255 = reserved]
+  slave_id = 1
+
+  ## Timeout for each request
+  timeout = "1s"
+
+  ## Maximum number of retries and the time to wait between retries
+  ## when a slave-device is busy.
+  # busy_retries = 0
+  # busy_retries_wait = "100ms"
+
+  # TCP - connect via Modbus/TCP
+  controller = "tcp://localhost:502"
+
+  ## Serial (RS485; RS232)
+  ## For RS485 specific setting check the end of the configuration.
+  ## For unix-like operating systems use:
+  # controller = "file:///dev/ttyUSB0"
+  ## For Windows operating systems use:
+  # controller = "COM1"
+  # baud_rate = 9600
+  # data_bits = 8
+  # parity = "N"
+  # stop_bits = 1
+
+  ## Transmission mode for Modbus packets depending on the controller type.
+  ## For Modbus over TCP you can choose between "TCP" , "RTUoverTCP" and
+  ## "ASCIIoverTCP".
+  ## For Serial controllers you can choose between "RTU" and "ASCII".
+  ## By default this is set to "auto" selecting "TCP" for ModbusTCP connections
+  ## and "RTU" for serial connections.
+  # transmission_mode = "auto"
+
+  ## Trace the connection to the modbus device
+  # log_level = "trace"
+
+  ## Define the configuration schema
+  ##  |---register -- define fields per register type in the original style (only supports one slave ID)
+  ##  |---request  -- define fields on a requests base
+  ##  |---metric   -- define fields on a metric base
+  configuration_type = "register"
+
+  ## --- "register" configuration style ---
+
+  ## Measurements
+  ##
+
+  ## Digital Variables, Discrete Inputs and Coils
+  ## measurement - the (optional) measurement name, defaults to "modbus"
+  ## name        - the variable name
+  ## data_type   - the (optional) output type, can be BOOL or UINT16 (default)
+  ## address     - variable address
+
+  discrete_inputs = [
+    { name = "start",          address = [0]},
+    { name = "stop",           address = [1]},
+    { name = "reset",          address = [2]},
+    { name = "emergency_stop", address = [3]},
+  ]
+  coils = [
+    { name = "motor1_run",     address = [0]},
+    { name = "motor1_jog",     address = [1]},
+    { name = "motor1_stop",    address = [2]},
+  ]
+
+  ## Analog Variables, Input Registers and Holding Registers
+  ## measurement - the (optional) measurement name, defaults to "modbus"
+  ## name        - the variable name
+  ## byte_order  - the ordering of bytes
+  ##  |---AB, ABCD   - Big Endian
+  ##  |---BA, DCBA   - Little Endian
+  ##  |---BADC       - Mid-Big Endian
+  ##  |---CDAB       - Mid-Little Endian
+  ## data_type   - BIT (single bit of a register)
+  ##               INT8L, INT8H, UINT8L, UINT8H (low and high byte variants)
+  ##               INT16, UINT16, INT32, UINT32, INT64, UINT64,
+  ##               FLOAT16-IEEE, FLOAT32-IEEE, FLOAT64-IEEE (IEEE 754 binary representation)
+  ##               FIXED, UFIXED (fixed-point representation on input)
+  ##               STRING (byte-sequence converted to string)
+  ## bit         - (optional) bit of the register, ONLY valid for BIT type
+  ## scale       - the final numeric variable representation
+  ## address     - variable address
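+
+  ## As an illustration (not part of the field definitions below): reading the
+  ## two consecutive registers 0x0102 and 0x0304 as a 32-bit value yields
+  ##   ABCD -> 0x01020304    DCBA -> 0x04030201
+  ##   BADC -> 0x02010403    CDAB -> 0x03040102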
+
+  holding_registers = [
+    { name = "power_factor", byte_order = "AB",   data_type = "FIXED", scale=0.01,  address = [8]},
+    { name = "voltage",      byte_order = "AB",   data_type = "FIXED", scale=0.1,   address = [0]},
+    { name = "energy",       byte_order = "ABCD", data_type = "FIXED", scale=0.001, address = [5,6]},
+    { name = "current",      byte_order = "ABCD", data_type = "FIXED", scale=0.001, address = [1,2]},
+    { name = "frequency",    byte_order = "AB",   data_type = "UFIXED", scale=0.1,  address = [7]},
+    { name = "power",        byte_order = "ABCD", data_type = "UFIXED", scale=0.1,  address = [3,4]},
+    { name = "firmware",     byte_order = "AB",   data_type = "STRING", address = [5, 6, 7, 8, 9, 10, 11, 12]},
+  ]
+  input_registers = [
+    { name = "tank_level",   byte_order = "AB",   data_type = "INT16",   scale=1.0,     address = [0]},
+    { name = "tank_ph",      byte_order = "AB",   data_type = "INT16",   scale=1.0,     address = [1]},
+    { name = "pump1_speed",  byte_order = "ABCD", data_type = "INT32",   scale=1.0,     address = [3,4]},
+  ]
+
+  ## --- "request" configuration style ---
+
+  ## Per request definition
+  ##
+
+  ## Define a request sent to the device
+  ## Multiple of those requests can be defined. Data will be collated into metrics at the end of data collection.
+  [[inputs.modbus.request]]
+    ## ID of the modbus slave device to query.
+    ## If you need to query multiple slave-devices, create several "request" definitions.
+    slave_id = 1
+
+    ## Byte order of the data.
+    ##  |---ABCD -- Big Endian (Motorola)
+    ##  |---DCBA -- Little Endian (Intel)
+    ##  |---BADC -- Big Endian with byte swap
+    ##  |---CDAB -- Little Endian with byte swap
+    byte_order = "ABCD"
+
+    ## Type of the register for the request
+    ## Can be "coil", "discrete", "holding" or "input"
+    register = "coil"
+
+    ## Name of the measurement.
+    ## Can be overridden by the individual field definitions. Defaults to "modbus"
+    # measurement = "modbus"
+
+    ## Request optimization algorithm.
+    ##  |---none       -- Do not perform any optimization and use the given layout (default)
+    ##  |---shrink     -- Shrink requests to actually requested fields
+    ##  |                 by stripping leading and trailing omits
+    ##  |---rearrange  -- Rearrange request boundaries within consecutive address ranges
+    ##  |                 to reduce the number of requested registers by keeping
+    ##  |                 the number of requests.
+    ##  |---max_insert -- Rearrange request keeping the number of extra fields below the value
+    ##                    provided in "optimization_max_register_fill". It is not necessary to define 'omitted'
+    ##                    fields as the optimization will add such fields only where needed.
+    # optimization = "none"
+
+    ## Maximum number of registers the optimizer is allowed to insert between two fields to
+    ## save requests.
+    ## This option is only used for the 'max_insert' optimization strategy.
+    ## NOTE: All omitted fields are ignored, so this option denotes the effective hole
+    ## size to fill.
+    # optimization_max_register_fill = 50
+
+    ## Field definitions
+    ## Analog Variables, Input Registers and Holding Registers
+    ## address        - address of the register to query. For coil and discrete inputs this is the bit address.
+    ## name *1        - field name
+    ## type *1,2      - type of the modbus field, can be
+    ##                  BIT (single bit of a register)
+    ##                  INT8L, INT8H, UINT8L, UINT8H (low and high byte variants)
+    ##                  INT16, UINT16, INT32, UINT32, INT64, UINT64 and
+    ##                  FLOAT16, FLOAT32, FLOAT64 (IEEE 754 binary representation)
+    ##                  STRING (byte-sequence converted to string)
+    ## length *1,2    - (optional) number of registers, ONLY valid for STRING type
+    ## bit *1,2       - (optional) bit of the register, ONLY valid for BIT type
+    ## scale *1,2,4   - (optional) factor to scale the variable with
+    ## output *1,3,4  - (optional) type of resulting field, can be INT64, UINT64 or FLOAT64.
+    ##                  Defaults to FLOAT64 for numeric fields if "scale" is provided.
+    ##                  Otherwise the input "type" class is used (e.g. INT* -> INT64).
+    ## measurement *1 - (optional) measurement name, defaults to the setting of the request
+    ## omit           - (optional) omit this field. Useful to leave out single values when querying many registers
+    ##                  with a single request. Defaults to "false".
+    ##
+    ## *1: These fields are ignored if field is omitted ("omit"=true)
+    ## *2: These fields are ignored for both "coil" and "discrete"-input type of registers.
+    ## *3: This field can only be "UINT16" or "BOOL" if specified for both "coil"
+    ##     and "discrete"-input type of registers. By default the fields are
+    ##     output as zero or one in UINT16 format unless "BOOL" is used.
+    ## *4: These fields cannot be used with "STRING"-type fields.
+
+    ## Coil / discrete input example
+    fields = [
+      { address=0, name="motor1_run" },
+      { address=1, name="jog", measurement="motor" },
+      { address=2, name="motor1_stop", omit=true },
+      { address=3, name="motor1_overheating", output="BOOL" },
+      { address=4, name="firmware", type="STRING", length=8 },
+    ]
+
+    [inputs.modbus.request.tags]
+      machine = "impresser"
+      location = "main building"
+
+  [[inputs.modbus.request]]
+    ## Holding example
+    ## All of those examples will result in FLOAT64 field outputs
+    slave_id = 1
+    byte_order = "DCBA"
+    register = "holding"
+    fields = [
+      { address=0, name="voltage",      type="INT16",   scale=0.1   },
+      { address=1, name="current",      type="INT32",   scale=0.001 },
+      { address=3, name="power",        type="UINT32",  omit=true   },
+      { address=5, name="energy",       type="FLOAT32", scale=0.001, measurement="W" },
+      { address=7, name="frequency",    type="UINT32",  scale=0.1   },
+      { address=8, name="power_factor", type="INT64",   scale=0.01  },
+    ]
+
+    [inputs.modbus.request.tags]
+      machine = "impresser"
+      location = "main building"
+
+  [[inputs.modbus.request]]
+    ## Input example with type conversions
+    slave_id = 1
+    byte_order = "ABCD"
+    register = "input"
+    fields = [
+      { address=0, name="rpm",         type="INT16"                   },  # will result in INT64 field
+      { address=1, name="temperature", type="INT16", scale=0.1        },  # will result in FLOAT64 field
+      { address=2, name="force",       type="INT32", output="FLOAT64" },  # will result in FLOAT64 field
+      { address=4, name="hours",       type="UINT32"                  },  # will result in UINT64 field
+    ]
+
+    [inputs.modbus.request.tags]
+      machine = "impresser"
+      location = "main building"
+
+  ## --- "metric" configuration style ---
+
+  ## Per metric definition
+  ##
+
+  ## Request optimization algorithm across metrics
+  ##  |---none       -- Do not perform any optimization and just group requests
+  ##  |                 within metrics (default)
+  ##  |---max_insert -- Collate registers across all defined metrics and fill in
+  ##                    holes to optimize the number of requests.
+  # optimization = "none"
+
+  ## Maximum number of registers the optimizer is allowed to insert between
+  ## non-consecutive registers to save requests.
+  ## This option is only used for the 'max_insert' optimization strategy and
+  ## effectively denotes the hole size between registers to fill.
+  # optimization_max_register_fill = 50
+
+  ## Define a metric produced by the requests to the device
+  ## Multiple of those metrics can be defined. The referenced registers will
+  ## be collated into requests sent to the device
+  [[inputs.modbus.metric]]
+    ## ID of the modbus slave device to query
+    ## If you need to query multiple slave-devices, create several "metric" definitions.
+    slave_id = 1
+
+    ## Byte order of the data
+    ##  |---ABCD -- Big Endian (Motorola)
+    ##  |---DCBA -- Little Endian (Intel)
+    ##  |---BADC -- Big Endian with byte swap
+    ##  |---CDAB -- Little Endian with byte swap
+    # byte_order = "ABCD"
+
+    ## Name of the measurement
+    # measurement = "modbus"
+
+    ## Field definitions
+    ## register    - type of the modbus register, can be "coil", "discrete",
+    ##               "holding" or "input". Defaults to "holding".
+    ## address     - address of the register to query. For coil and discrete inputs this is the bit address.
+    ## name        - field name
+    ## type *1     - type of the modbus field, can be
+    ##                 BIT (single bit of a register)
+    ##                 INT8L, INT8H, UINT8L, UINT8H (low and high byte variants)
+    ##                 INT16, UINT16, INT32, UINT32, INT64, UINT64 and
+    ##                 FLOAT16, FLOAT32, FLOAT64 (IEEE 754 binary representation)
+    ##                 STRING (byte-sequence converted to string)
+    ## length *1   - (optional) number of registers, ONLY valid for STRING type
+    ## bit *1,2    - (optional) bit of the register, ONLY valid for BIT type
+    ## scale *1,3  - (optional) factor to scale the variable with
+    ## output *2,3 - (optional) type of resulting field, can be INT64, UINT64 or FLOAT64. Defaults to FLOAT64 if
+    ##               "scale" is provided and to the input "type" class otherwise (i.e. INT* -> INT64, etc).
+    ##
+    ## *1: These fields are ignored for both "coil" and "discrete"-input type of registers.
+    ## *2: This field can only be "UINT16" or "BOOL" if specified for both "coil"
+    ##     and "discrete"-input type of registers. By default the fields are
+    ##     output as zero or one in UINT16 format unless "BOOL" is used.
+    ## *3: These fields cannot be used with "STRING"-type fields.
+    fields = [
+      { register="coil",    address=0, name="door_open"},
+      { register="coil",    address=1, name="status_ok"},
+      { register="holding", address=0, name="voltage",      type="INT16"   },
+      { address=1, name="current",      type="INT32",   scale=0.001 },
+      { address=5, name="energy",       type="FLOAT32", scale=0.001 },
+      { address=7, name="frequency",    type="UINT32",  scale=0.1   },
+      { address=8, name="power_factor", type="INT64",   scale=0.01  },
+      { address=9, name="firmware",     type="STRING",  length=8    },
+    ]
+
+    ## Tags assigned to the metric
+    # [inputs.modbus.metric.tags]
+    #   machine = "impresser"
+    #   location = "main building"
+
+  ## RS485 specific settings. Only take effect for serial controllers.
+  ## Note: This has to be at the end of the modbus configuration due to
+  ## TOML constraints.
+  # [inputs.modbus.rs485]
+    ## Delay RTS prior to sending
+    # delay_rts_before_send = "0ms"
+    ## Delay RTS after sending
+    # delay_rts_after_send = "0ms"
+    ## Pull RTS line to high during sending
+    # rts_high_during_send = false
+    ## Pull RTS line to high after sending
+    # rts_high_after_send = false
+    ## Enable receiving (Rx) during transmission (Tx)
+    # rx_during_tx = false
+
+  ## Enable workarounds required by some devices to work correctly
+  # [inputs.modbus.workarounds]
+    ## Pause after connect delays the first request by the specified time.
+    ## This might be necessary for (slow) devices.
+    # pause_after_connect = "0ms"
+
+    ## Pause between read requests sent to the device.
+    ## This might be necessary for (slow) serial devices.
+    # pause_between_requests = "0ms"
+
+    ## Close the connection after every gather cycle.
+    ## Usually the plugin closes the connection after a certain idle-timeout,
+    ## however, if you query a device with limited simultaneous connectivity
+    ## (e.g. serial devices) from multiple instances you might want to only
+    ## stay connected during gather and disconnect afterwards.
+    # close_connection_after_gather = false
+
+    ## Force the plugin to read each field in a separate request.
+    ## This might be necessary for devices not conforming to the spec,
+    ## see https://github.com/influxdata/telegraf/issues/12071.
+    # one_request_per_field = false
+
+    ## Enforce the starting address to be zero for the first request on
+    ## coil registers. This is necessary for some devices see
+    ## https://github.com/influxdata/telegraf/issues/8905
+    # read_coils_starting_at_zero = false
+
+    ## String byte-location in registers AFTER byte-order conversion
+    ## Some devices (e.g. EM340) place the string bytes in only the upper or
+    ## lower byte location of a register, see
+    ## https://github.com/influxdata/telegraf/issues/14748
+    ## Available settings:
+    ##   lower -- use only lower byte of the register i.e. 00XX 00XX 00XX 00XX
+    ##   upper -- use only upper byte of the register i.e. XX00 XX00 XX00 XX00
+    ## By default both bytes of the register are used i.e. XXXX XXXX.
+    # string_register_location = ""
+```
+
+## Notes
+
+You can debug Modbus connection issues by enabling `debug_connection`. To see
+those debug messages, Telegraf has to be started with debugging enabled
+(i.e. with the `--debug` option). Please be aware that connection tracing will
+produce a lot of messages and should __NOT__ be used in production environments.
+
+Please use `pause_after_connect` / `pause_between_requests` with care. Ensure
+the total gather time, including the pause(s), does not exceed the configured
+collection interval. Note that pauses add up if multiple requests are sent!
+
+## Configuration styles
+
+The modbus plugin supports multiple configuration styles that can be set using
+the `configuration_type` setting. The different styles are described
+below. Please note that styles cannot be mixed, i.e. only the settings belonging
+to the configured `configuration_type` are used for constructing _modbus_
+requests and creation of metrics.
+
+Directly jump to the styles:
+
+- original / register plugin style
+- per-request style
+- per-metric style
+
+---
+
+### `register` configuration style
+
+This is the original style used by this plugin. It allows a per-register
+configuration for a single slave-device.
+
+> [!NOTE]
+> For legacy reasons this configuration style is not completely consistent with the other styles.
+
+#### Usage of `data_type`
+
+The field `data_type` defines the representation of the data value on input from
+the modbus registers.  The input values are then converted from the given
+`data_type` to a type that is appropriate when sending the value to the output
+plugin. These output types are usually an integer or floating-point-number. The
+size of the output type is assumed to be large enough for all supported input
+types. The mapping from the input type to the output type is fixed and cannot
+be configured.
+
+##### Booleans: `BOOL`
+
+This type is only valid for _coil_ and _discrete_ registers. The value will be
+`true` if the register has a non-zero (ON) value and `false` otherwise.
+
+##### Integers: `INT8L`, `INT8H`, `UINT8L`, `UINT8H`
+
+These types are used for 8-bit integer values. Select the one that matches your
+modbus data source. The `L` and `H` suffixes denote the low and high byte of
+the register, respectively.
+
+##### Integers: `INT16`, `UINT16`, `INT32`, `UINT32`, `INT64`, `UINT64`
+
+These types are used for integer input values. Select the one that matches your
+modbus data source. For _coil_ and _discrete_ registers only `UINT16` is valid.
+
+##### Floating Point: `FLOAT16-IEEE`, `FLOAT32-IEEE`, `FLOAT64-IEEE`
+
+Use these types if your modbus registers contain a value that is encoded in this
+format. These types always include the sign, so no unsigned variant exists.
+
+##### Fixed Point: `FIXED`, `UFIXED`
+
+These types are handled as an integer type on input, but are converted to
+floating point representation for further processing (e.g. scaling). Use one of
+these types when the input value is a decimal fixed point representation of a
+non-integer value.
+
+Select the type `UFIXED` when the input type is declared to hold unsigned
+integer values, which cannot be negative. The documentation of your modbus
+device should indicate this by a term like 'uint16 containing fixed-point
+representation with N decimal places'.
+
+Select the type `FIXED` when the input type is declared to hold signed integer
+values. Your documentation of the modbus device should indicate this with a term
+like 'int32 containing fixed-point representation with N decimal places'.
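+For example, assuming the register-style field syntax shown in the sample
+configuration, a signed 16-bit register holding a value with two decimal
+places could be declared as follows (field name and address are illustrative):
+
+```toml
+  holding_registers = [
+    ## raw register value 1234 interpreted as signed fixed-point -> 12.34
+    { name = "power_factor", byte_order = "AB", data_type = "FIXED", scale=0.01, address = [8]},
+  ]
+```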
+
+##### String: `STRING`
+
+This type is used to query the number of registers specified in the `address`
+setting and convert the byte-sequence to a string. Please note, if the
+byte-sequence contains a `null` byte, the string is truncated at this position.
+You cannot use the `scale` setting for string fields.
+
+##### Bit: `BIT`
+
+This type is used to query a single bit of a register specified in the `address`
+setting and convert the value to an unsigned integer. This type __requires__ the
+`bit` setting to be specified.
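+For example, assuming the register-style field syntax, a single status bit
+could be declared as follows (field name, bit and address are illustrative):
+
+```toml
+  holding_registers = [
+    ## extracts bit 7 of the 16-bit register at address 0, yielding 0 or 1
+    { name = "alarm_active", byte_order = "AB", data_type = "BIT", bit = 7, address = [0]},
+  ]
+```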
+
+---
+
+### `request` configuration style
+
+This style can be used to specify the modbus requests directly. It enables
+specifying multiple `[[inputs.modbus.request]]` sections including multiple
+slave-devices. This way, _modbus_ gateway devices can be queried. Please note
+that _requests_ might be split for non-consecutive addresses. If you want to
+avoid this behavior, please add _fields_ with the `omit` flag set to fill the
+gaps between addresses.
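+For example, two slave devices behind the same gateway could be queried with
+two request sections like this (addresses and field names are illustrative):
+
+```toml
+  [[inputs.modbus.request]]
+    slave_id = 1
+    byte_order = "ABCD"
+    register = "holding"
+    fields = [
+      { address=0, name="voltage", type="INT16", scale=0.1 },
+    ]
+
+  [[inputs.modbus.request]]
+    slave_id = 2
+    byte_order = "ABCD"
+    register = "input"
+    fields = [
+      { address=0, name="temperature", type="INT16", scale=0.1 },
+    ]
+```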
+
+#### Slave device
+
+You can use the `slave_id` setting to specify the ID of the slave device to
+query. It should be specified for each request, otherwise it defaults to
+zero. Please note, only one `slave_id` can be specified per request.
+
+#### Byte order of the register
+
+The `byte_order` setting specifies the byte and word-order of the registers. It
+can be set to `ABCD` for _big endian (Motorola)_ or `DCBA` for _little endian
+(Intel)_ format as well as `BADC` and `CDAB` for _big endian_ or _little endian_
+with _byte swap_.
+
+#### Register type
+
+The `register` setting specifies the modbus register-set to query and can be set
+to `coil`, `discrete`, `holding` or `input`.
+
+#### Per-request measurement setting
+
+You can specify the name of the measurement for the following field definitions
+using the `measurement` setting. If the setting is omitted `modbus` is
+used. Furthermore, the measurement value can be overridden by each field
+individually.
+
+#### Optimization setting
+
+__Please only use request optimization if you understand the implications!__
+The `optimization` setting can be used to optimize the actual requests sent to
+the device. The following algorithms are available:
+
+##### `none` (_default_)
+
+Do not perform any optimization. Please note that the requests still obey the
+maximum request sizes. Furthermore, completely empty requests, i.e. those where
+all fields specify `omit=true`, are removed. Otherwise, the requests are sent
+as specified by the user, including requests for omitted fields. This setting
+should be used if you want full control over the requests, e.g. to accommodate
+device constraints.
+
+##### `shrink`
+
+This optimization removes leading and trailing omitted fields from requests.
+This can reduce the number and size of requests in cases where you specify
+large numbers of omitted fields, e.g. for documentation purposes.
+
+##### `rearrange`
+
+Requests are processed similarly to `shrink`, but the request boundaries are
+rearranged such that usually fewer registers are read while keeping the same
+number of requests. This optimization algorithm only works on consecutive
+address ranges and respects user-defined gaps in the field addresses.
+
+__Please note:__ This optimization might take a long time if there are many
+non-consecutive, non-omitted fields!
+
+##### `aggressive`
+
+Requests are processed similarly to `rearrange`, but user-defined gaps in the
+field addresses are filled automatically. This usually reduces the number of requests,
+but will increase the number of registers read due to larger requests.
+This algorithm might be useful if you only want to specify the fields you are
+interested in but want to minimize the number of requests sent to the device.
+
+__Please note:__ This optimization might take a long time if there are many
+non-consecutive, non-omitted fields!
+
+##### `max_insert`
+
+Fields are assigned to the same request as long as the hole between the fields
+does not exceed the maximum fill size given in `optimization_max_register_fill`.
+User-defined omitted fields are ignored and interpreted as holes, so it is best
+not to manually insert omitted fields for this optimizer. This lets you specify
+only the fields actually used and leaves the request organization to the
+optimizer, which can dramatically improve query time. The trade-off here is
+between the cost of reading additional registers that are discarded later and
+the cost of many requests.
+
+__Please note:__ The optimal value for `optimization_max_register_fill` depends
+on the network and the queried device. It is hence recommended to test several
+values and assess performance in order to find the best value. Use the
+`--test --debug` flags to monitor how many requests are sent and the number of
+touched registers.
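+As a sketch, with a fill size of 20 the two fields below end up in a single
+request covering registers 0 through 16, since the 14-register hole between
+them is below the fill size (addresses and field names are illustrative):
+
+```toml
+  [[inputs.modbus.request]]
+    slave_id = 1
+    byte_order = "ABCD"
+    register = "holding"
+    optimization = "max_insert"
+    optimization_max_register_fill = 20
+    fields = [
+      { address=0,  name="voltage", type="INT16" },
+      { address=15, name="current", type="INT32" },
+    ]
+```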
+
+#### Field definitions
+
+Each `request` can contain a list of fields to collect from the modbus device.
+
+##### address
+
+A field is identified by an `address` that reflects the modbus register
+address. You can usually find the address values for the different data-points
+in the datasheet of your modbus device. This is a mandatory setting.
+
+For _coil_ and _discrete input_ registers this setting specifies the __bit__
+containing the value of the field.
+
+##### name
+
+Using the `name` setting you can specify the field-name in the metric as output
+by the plugin. This setting is ignored if the field's `omit` is set to `true`
+and can be omitted in this case.
+
+__Please note:__ There cannot be multiple fields with the same `name` in one
+metric identified by `measurement`, `slave_id` and `register`.
+
+##### register datatype
+
+The `type` setting specifies the datatype of the modbus register and can be
+set to `INT8L`, `INT8H`, `UINT8L`, `UINT8H` where `L` is the lower byte of the
+register and `H` is the higher byte.
+Furthermore, the types `INT16`, `UINT16`, `INT32`, `UINT32`, `INT64` or `UINT64`
+for integer types or `FLOAT16`, `FLOAT32` and `FLOAT64` for IEEE 754 binary
+representations of floating point values exist. `FLOAT16` denotes a
+half-precision float with a 16-bit representation.
+Usually the datatype of the register is listed in the datasheet of your modbus
+device in relation to the `address` described above.
+
+The `STRING` datatype is special in that it requires the `length` setting,
+which specifies the length of the string in number of registers. The returned
+byte-sequence is interpreted as a string and truncated at the first `null`
+byte found, if any. The `scale` and `output` settings cannot be used for this
+`type`.
+
+This setting is ignored if the field's `omit` is set to `true` or if the
+`register` type is a bit-type (`coil` or `discrete`) and can be omitted in
+these cases.
+
+##### scaling
+
+You can use the `scale` setting to scale the register values, e.g. if the
+register contains a fixed-point value in `UINT32` format with two decimal
+places. To convert the read register value to the actual value, set
+`scale=0.01`. The scale is used as a factor, i.e. `field_value * scale`.
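+For example, if the register at address 0 holds the raw value `1234`
+representing a current with two decimal places, the following yields a field
+value of `12.34` (field name and address are illustrative):
+
+```toml
+    fields = [
+      ## 1234 * 0.01 = 12.34; output type defaults to FLOAT64 due to "scale"
+      { address=0, name="current", type="UINT32", scale=0.01 },
+    ]
+```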
+
+This setting is ignored if the field's `omit` is set to `true` or if the
+`register` type is a bit-type (`coil` or `discrete`) and can be omitted in these
+cases.
+
+__Please note:__ The resulting field-type will be set to `FLOAT64` if no output
+format is specified.
+
+##### output datatype
+
+Using the `output` setting you can explicitly specify the output
+field-datatype. The `output` type can be `INT64`, `UINT64` or `FLOAT64`. If not
+set explicitly, the output type is guessed as follows: If `scale` is set to a
+non-zero value, the output type is `FLOAT64`. Otherwise, the output type
+corresponds to the register datatype _class_, i.e. `INT*` will result in
+`INT64`, `UINT*` in `UINT64` and `FLOAT*` in `FLOAT64`.
+
+This setting is ignored if the field's `omit` is set to `true` and can be
+omitted. In case the `register` type is a bit-type (`coil` or `discrete`) only
+`UINT16` or `BOOL` are valid with the former being the default if omitted.
+For `coil` and `discrete` registers the field-value is output as zero or one in
+`UINT16` format or as `true` and `false` in `BOOL` format.
+
+#### per-field measurement setting
+
+The `measurement` setting can be used to override the measurement name on a
+per-field basis. This might be useful if you want to split the fields of one
+request into multiple measurements. If not specified, the value from the
+`request` section is used or, if that is also omitted, `modbus`.
+
+This setting is ignored if the field's `omit` is set to `true` and can be
+omitted in this case.
+
+#### omitting a field
+
+When specifying `omit=true`, the corresponding field will be ignored when
+collecting the metric but is taken into account when constructing the modbus
+requests. This way, you can fill "holes" in the addresses to construct
+consecutive address ranges resulting in a single request. Using a single modbus
+request can be beneficial as the values are all collected at the same point in
+time.
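+For example, the omitted field below fills the single-register hole between
+`voltage` and `current`, so all three values are collected in one request
+(field names and addresses are illustrative):
+
+```toml
+    fields = [
+      { address=0, name="voltage",  type="INT16", scale=0.1   },
+      { address=1, name="reserved", type="INT16", omit=true   },
+      { address=2, name="current",  type="INT32", scale=0.001 },
+    ]
+```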
+
+#### Tags definitions
+
+Each `request` can be accompanied by tags valid for this request.
+
+__Please note:__ These tags take precedence over predefined tags such as `name`,
+`type` or `slave_id`.
+
+---
+
+### `metric` configuration style
+
+This style can be used to specify the desired metrics directly instead of
+focusing on the modbus view. Multiple `[[inputs.modbus.metric]]` sections
+including multiple slave-devices can be specified. This way, _modbus_ gateway
+devices can be queried. The plugin automatically collects registers across
+the specified metrics, groups them per slave and register-type and (optionally)
+optimizes the resulting requests for non-consecutive addresses.
+
+#### Slave device
+
+You can use the `slave_id` setting to specify the ID of the slave device to
+query. It should be specified for each metric section, otherwise it defaults to
+zero. Please note, only one `slave_id` can be specified per metric section.
+
+#### Byte order of the registers
+
+The `byte_order` setting specifies the byte and word-order of the registers. It
+can be set to `ABCD` for _big endian (Motorola)_ or `DCBA` for _little endian
+(Intel)_ format as well as `BADC` and `CDAB` for _big endian_ or _little endian_
+with _byte swap_.
+
+#### Measurement name
+
+You can specify the name of the measurement for the fields defined in the
+given section using the `measurement` setting. If the setting is omitted
+`modbus` is used.
+
+#### Optimization setting
+
+__Please only use request optimization if you do understand the implications!__
+The `optimization` setting can be specified globally, i.e. __NOT__ per metric
+section, and is used to optimize the actual requests sent to the device. Here,
+the optimization is applied across _all metric sections_! The following
+algorithms are available:
+
+##### `none` (_default_)
+
+Do not perform any optimization. Please note that consecutive registers are
+still grouped into one request while obeying the maximum request sizes. This
+setting should be used if you want to touch as few registers as possible at
+the cost of more requests sent to the device.
+
+##### `max_insert`
+
+Fields are assigned to the same request as long as the hole between the touched
+registers does not exceed the maximum fill size given via
+`optimization_max_register_fill`. This optimization might lead to a drastically
+reduced number of requests and thus an improved query time. The trade-off here
+is between the cost of reading additional registers that are discarded later
+and the cost of many requests.
+
+__Please note:__ The optimal value for `optimization_max_register_fill` depends
+on the network and the queried device. It is hence recommended to test several
+values and assess performance in order to find the best value. Use the
+`--test --debug` flags to monitor how many requests are sent and the number of
+touched registers.
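+As a sketch, the optimization settings sit at the plugin level and act across
+all metric sections (addresses and field names are illustrative):
+
+```toml
+  ## Applied across all metric sections below
+  optimization = "max_insert"
+  optimization_max_register_fill = 20
+
+  [[inputs.modbus.metric]]
+    slave_id = 1
+    fields = [
+      { register="holding", address=0,  name="voltage", type="INT16" },
+      { register="holding", address=12, name="current", type="INT32" },
+    ]
+```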
+
+#### Field definitions
+
+Each `metric` can contain a list of fields to collect from the modbus device.
+The specified fields directly correspond to the fields of the resulting metric.
+
+##### register
+
+The `register` setting specifies the modbus register-set to query and can be set
+to `coil`, `discrete`, `holding` or `input`.
+
+##### address
+
+A field is identified by an `address` that reflects the modbus register
+address. You can usually find the address values for the different data-points
+in the datasheet of your modbus device. This is a mandatory setting.
+
+For _coil_ and _discrete input_ registers this setting specifies the __bit__
+containing the value of the field.
+
+##### name
+
+Using the `name` setting you can specify the field-name in the metric as output
+by the plugin.
+
+__Please note:__ There cannot be multiple fields with the same `name` in one
+metric identified by `measurement`, `slave_id`, `register` and tag-set.
+
+##### register datatype
+
+The `type` setting specifies the datatype of the modbus register and can be
+set to `INT8L`, `INT8H`, `UINT8L`, `UINT8H` where `L` is the lower byte of the
+register and `H` is the higher byte.
+Furthermore, the types `INT16`, `UINT16`, `INT32`, `UINT32`, `INT64` or `UINT64`
+for integer types or `FLOAT16`, `FLOAT32` and `FLOAT64` for IEEE 754 binary
+representations of floating point values exist. `FLOAT16` denotes a
+half-precision float with a 16-bit representation.
+Usually the datatype of the register is listed in the datasheet of your modbus
+device in relation to the `address` described above.
+
+The `STRING` datatype is special in that it requires the `length` setting,
+which specifies the length of the string in number of registers. The returned
+byte-sequence is interpreted as a string and truncated at the first `null`
+byte found, if any. The `scale` and `output` settings cannot be used for this
+`type`.
+
+This setting is ignored if the `register` is a bit-type (`coil` or `discrete`)
+and can be omitted in these cases.
+
+##### scaling
+
+You can use the `scale` setting to scale the register values, e.g. if the
+register contains a fixed-point value in `UINT32` format with two decimal
+places. To convert the read register value to the actual value, set
+`scale=0.01`. The scale is used as a factor, i.e. `field_value * scale`.
+
+This setting is ignored if the `register` is a bit-type (`coil` or `discrete`)
+and can be omitted in these cases.
+
+__Please note:__ The resulting field-type will be set to `FLOAT64` if no output
+format is specified.
+
+##### output datatype
+
+Using the `output` setting you can explicitly specify the output
+field-datatype. The `output` type can be `INT64`, `UINT64` or `FLOAT64`. If not
+set explicitly, the output type is guessed as follows: If `scale` is set to a
+non-zero value, the output type is `FLOAT64`. Otherwise, the output type
+corresponds to the register datatype _class_, i.e. `INT*` will result in
+`INT64`, `UINT*` in `UINT64` and `FLOAT*` in `FLOAT64`.
+
+In case the `register` is a bit-type (`coil` or `discrete`) only `UINT16` or
+`BOOL` are valid with the former being the default if omitted. For `coil` and
+`discrete` registers the field-value is output as zero or one in `UINT16` format
+or as `true` and `false` in `BOOL` format.
+
+#### Tags definitions
+
+Each `metric` can be accompanied by a set of tags. These tags directly
+correspond to the tags of the resulting metric.
+
+__Please note:__ These tags take precedence over predefined tags such as `name`,
+`type` or `slave_id`.
+
+---
+
+## Metrics
+
+Metrics are custom and configured using the `discrete_inputs`, `coils`,
+`holding_registers` and `input_registers` options.
+
+## Troubleshooting
+
+### Strange data
+
+Modbus documentation is often a mess. People confuse memory-address (starts at
+one) and register address (starts at zero) or are unsure about the word-order
+used. Furthermore, there are some non-standard implementations that also swap
+the bytes within the register word (16-bit).
+
+If you get an error or don't get the expected values from your device, you can
+try the following steps (assuming a 32-bit value).
+
+If you are using a serial device and get a `permission denied` error, check the
+permissions of your serial device and change them accordingly.
+
+In case you get an `exception '2' (illegal data address)` error, you might try
+to offset your `address` entries by minus one, as it is very likely that
+memory and register addresses were confused.
+
+If you see strange values, the `byte_order` might be wrong. You can either probe
+all combinations (`ABCD`, `CDAB`, `BADC` or `DCBA`) or set `byte_order="ABCD"
+data_type="UINT32"` and use the resulting value(s) in an online converter like
+[this one](https://www.scadacore.com/tools/programming-calculators/online-hex-converter/).
+This especially makes sense if you don't want to mess with the device, deal
+with 64-bit values and/or don't know the `data_type` of your register
+(e.g. fixed-point values vs. IEEE floating point).
+
+If your data still looks corrupted, please post your configuration, the error
+message and/or the output of `byte_order="ABCD" data_type="UINT32"` to one of
+the Telegraf support channels (forum, Slack or as an issue).
+
+
+### Workarounds
+
+Some Modbus devices need special read characteristics when reading data and will
+fail otherwise. For example, some serial devices need a pause between register
+read requests. Others might only support a limited number of simultaneously
+connected devices, like serial devices or some ModbusTCP devices. In case you
+need to access those devices in parallel you might want to disconnect
+immediately after the plugin finishes reading.
+
+To enable this plugin to also handle those "special" devices, there is the
+`workarounds` configuration option. In case your documentation states certain
+read requirements or you get read timeouts or other read errors, you might want
+to try one or more workaround options. If your device needs a workaround that
+is not yet implemented, please open an issue or submit a pull-request.
+
+## Example Output
+
+```text
+modbus.InputRegisters,host=orangepizero Current=0,Energy=0,Frequency=60,Power=0,PowerFactor=0,Voltage=123.9000015258789 1554079521000000000
+```
diff --git a/content/telegraf/v1/input-plugins/mongodb/_index.md b/content/telegraf/v1/input-plugins/mongodb/_index.md
new file mode 100644
index 000000000..579cfffeb
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/mongodb/_index.md
@@ -0,0 +1,349 @@
+---
+description: "Telegraf plugin for collecting metrics from MongoDB"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: MongoDB
+    identifier: input-mongodb
+tags: [MongoDB, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# MongoDB Input Plugin
+
+See the [MongoDB Software Lifecycle Schedules](https://www.mongodb.com/support-policy/lifecycles) for supported
+versions.
+
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from one or many MongoDB servers
+[[inputs.mongodb]]
+  ## An array of URLs of the form:
+  ##   "mongodb://" [user ":" pass "@"] host [ ":" port]
+  ## For example:
+  ##   mongodb://user:auth_key@10.10.3.30:27017,
+  ##   mongodb://10.10.3.33:18832,
+  ##
+  ## If connecting to a cluster, users must include the "?connect=direct" in
+  ## the URL to ensure that the connection goes directly to the specified node
+  ## and not have all connections passed to the master node.
+  servers = ["mongodb://127.0.0.1:27017/?connect=direct"]
+
+  ## When true, collect cluster status.
+  ## Note that the query that counts jumbo chunks triggers a COLLSCAN, which
+  ## may have an impact on performance.
+  # gather_cluster_status = true
+
+  ## When true, collect per database stats
+  # gather_perdb_stats = false
+
+  ## When true, collect per collection stats
+  # gather_col_stats = false
+
+  ## When true, collect usage statistics for each collection
+  ## (insert, update, queries, remove, getmore, commands etc...).
+  # gather_top_stat = false
+
+  ## List of db where collections stats are collected
+  ## If empty, all db are concerned
+  # col_stats_dbs = ["local"]
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## Specifies plugin behavior regarding disconnected servers
+  ## Available choices :
+  ##   - error: telegraf will return an error on startup if one of the servers is unreachable
+  ##   - skip: telegraf will skip unreachable servers on both startup and gather
+  # disconnected_servers_behavior = "error"
+```
+
+### Permissions
+
+If your MongoDB instance has access control enabled you will need to connect
+as a user with sufficient rights.
+
+With MongoDB 3.4 and higher, the `clusterMonitor` role can be used.  In
+version 3.2 you may also need these additional permissions:
+
+```shell
+> db.grantRolesToUser("user", [{role: "read", actions: "find", db: "local"}])
+```
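+For example, a dedicated monitoring user with the `clusterMonitor` role could
+be created like this (user name and password are placeholders):
+
+```shell
+> use admin
+> db.createUser({
+    user: "telegraf",
+    pwd: "a-strong-password",
+    roles: [{role: "clusterMonitor", db: "admin"}]
+  })
+```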
+
+If the user is missing required privileges you may see an error in the
+Telegraf logs similar to:
+
+```shell
+Error in input [mongodb]: not authorized on admin to execute command { serverStatus: 1, recordStats: 0 }
+```
+
+Some permission-related errors are logged at the debug level. You can check
+these messages by setting `debug = true` in the agent section of the
+configuration or by running Telegraf with the `--debug` argument.
+
+## Metrics
+
+- mongodb
+  - tags:
+    - hostname
+    - node_type
+    - rs_name
+  - fields:
+    - active_reads (integer)
+    - active_writes (integer)
+    - aggregate_command_failed (integer)
+    - aggregate_command_total (integer)
+    - assert_msg (integer)
+    - assert_regular (integer)
+    - assert_rollovers (integer)
+    - assert_user (integer)
+    - assert_warning (integer)
+    - available_reads (integer)
+    - available_writes (integer)
+    - commands (integer)
+    - connections_available (integer)
+    - connections_current (integer)
+    - connections_total_created (integer)
+    - count_command_failed (integer)
+    - count_command_total (integer)
+    - cursor_no_timeout_count (integer)
+    - cursor_pinned_count (integer)
+    - cursor_timed_out_count (integer)
+    - cursor_total_count (integer)
+    - delete_command_failed (integer)
+    - delete_command_total (integer)
+    - deletes (integer)
+    - distinct_command_failed (integer)
+    - distinct_command_total (integer)
+    - document_deleted (integer)
+    - document_inserted (integer)
+    - document_returned (integer)
+    - document_updated (integer)
+    - find_and_modify_command_failed (integer)
+    - find_and_modify_command_total (integer)
+    - find_command_failed (integer)
+    - find_command_total (integer)
+    - flushes (integer)
+    - flushes_total_time_ns (integer)
+    - get_more_command_failed (integer)
+    - get_more_command_total (integer)
+    - getmores (integer)
+    - insert_command_failed (integer)
+    - insert_command_total (integer)
+    - inserts (integer)
+    - jumbo_chunks (integer)
+    - latency_commands_count (integer)
+    - latency_commands (integer)
+    - latency_reads_count (integer)
+    - latency_reads (integer)
+    - latency_writes_count (integer)
+    - latency_writes (integer)
+    - member_status (string)
+    - net_in_bytes_count (integer)
+    - net_out_bytes_count (integer)
+    - open_connections (integer)
+    - operation_scan_and_order (integer)
+    - operation_write_conflicts (integer)
+    - page_faults (integer)
+    - percent_cache_dirty (float)
+    - percent_cache_used (float)
+    - queries (integer)
+    - queued_reads (integer)
+    - queued_writes (integer)
+    - repl_apply_batches_num (integer)
+    - repl_apply_batches_total_millis (integer)
+    - repl_apply_ops (integer)
+    - repl_buffer_count (integer)
+    - repl_buffer_size_bytes (integer)
+    - repl_commands (integer)
+    - repl_deletes (integer)
+    - repl_executor_pool_in_progress_count (integer)
+    - repl_executor_queues_network_in_progress (integer)
+    - repl_executor_queues_sleepers (integer)
+    - repl_executor_unsignaled_events (integer)
+    - repl_getmores (integer)
+    - repl_inserts (integer)
+    - repl_lag (integer)
+    - repl_network_bytes (integer)
+    - repl_network_getmores_num (integer)
+    - repl_network_getmores_total_millis (integer)
+    - repl_network_ops (integer)
+    - repl_queries (integer)
+    - repl_updates (integer)
+    - repl_oplog_window_sec (integer)
+    - repl_state (integer)
+    - repl_member_health (integer)
+    - repl_health_avg (float)
+    - resident_megabytes (integer)
+    - state (string)
+    - storage_freelist_search_bucket_exhausted (integer)
+    - storage_freelist_search_requests (integer)
+    - storage_freelist_search_scanned (integer)
+    - tcmalloc_central_cache_free_bytes (integer)
+    - tcmalloc_current_allocated_bytes (integer)
+    - tcmalloc_current_total_thread_cache_bytes (integer)
+    - tcmalloc_heap_size (integer)
+    - tcmalloc_max_total_thread_cache_bytes (integer)
+    - tcmalloc_pageheap_commit_count (integer)
+    - tcmalloc_pageheap_committed_bytes (integer)
+    - tcmalloc_pageheap_decommit_count (integer)
+    - tcmalloc_pageheap_free_bytes (integer)
+    - tcmalloc_pageheap_reserve_count (integer)
+    - tcmalloc_pageheap_scavenge_count (integer)
+    - tcmalloc_pageheap_total_commit_bytes (integer)
+    - tcmalloc_pageheap_total_decommit_bytes (integer)
+    - tcmalloc_pageheap_total_reserve_bytes (integer)
+    - tcmalloc_pageheap_unmapped_bytes (integer)
+    - tcmalloc_spinlock_total_delay_ns (integer)
+    - tcmalloc_thread_cache_free_bytes (integer)
+    - tcmalloc_total_free_bytes (integer)
+    - tcmalloc_transfer_cache_free_bytes (integer)
+    - total_available (integer)
+    - total_created (integer)
+    - total_docs_scanned (integer)
+    - total_in_use (integer)
+    - total_keys_scanned (integer)
+    - total_refreshing (integer)
+    - total_tickets_reads (integer)
+    - total_tickets_writes (integer)
+    - ttl_deletes (integer)
+    - ttl_passes (integer)
+    - update_command_failed (integer)
+    - update_command_total (integer)
+    - updates (integer)
+    - uptime_ns (integer)
+    - version (string)
+    - vsize_megabytes (integer)
+    - wt_connection_files_currently_open (integer)
+    - wt_data_handles_currently_active (integer)
+    - wtcache_app_threads_page_read_count (integer)
+    - wtcache_app_threads_page_read_time (integer)
+    - wtcache_app_threads_page_write_count (integer)
+    - wtcache_bytes_read_into (integer)
+    - wtcache_bytes_written_from (integer)
+    - wtcache_pages_read_into (integer)
+    - wtcache_pages_requested_from (integer)
+    - wtcache_current_bytes (integer)
+    - wtcache_max_bytes_configured (integer)
+    - wtcache_internal_pages_evicted (integer)
+    - wtcache_modified_pages_evicted (integer)
+    - wtcache_unmodified_pages_evicted (integer)
+    - wtcache_pages_evicted_by_app_thread (integer)
+    - wtcache_pages_queued_for_eviction (integer)
+    - wtcache_server_evicting_pages (integer)
+    - wtcache_tracked_dirty_bytes (integer)
+    - wtcache_worker_thread_evictingpages (integer)
+    - commands_per_sec (integer, deprecated in 1.10; use `commands`)
+    - cursor_no_timeout (integer, opened/sec, deprecated in 1.10; use `cursor_no_timeout_count`)
+    - cursor_pinned (integer, opened/sec, deprecated in 1.10; use `cursor_pinned_count`)
+    - cursor_timed_out (integer, opened/sec, deprecated in 1.10; use `cursor_timed_out_count`)
+    - cursor_total (integer, opened/sec, deprecated in 1.10; use `cursor_total_count`)
+    - deletes_per_sec (integer, deprecated in 1.10; use `deletes`)
+    - flushes_per_sec (integer, deprecated in 1.10; use `flushes`)
+    - getmores_per_sec (integer, deprecated in 1.10; use `getmores`)
+    - inserts_per_sec (integer, deprecated in 1.10; use `inserts`)
+    - net_in_bytes (integer, bytes/sec, deprecated in 1.10; use `net_in_bytes_count`)
+    - net_out_bytes (integer, bytes/sec, deprecated in 1.10; use `net_out_bytes_count`)
+    - queries_per_sec (integer, deprecated in 1.10; use `queries`)
+    - repl_commands_per_sec (integer, deprecated in 1.10; use `repl_commands`)
+    - repl_deletes_per_sec (integer, deprecated in 1.10; use `repl_deletes`)
+    - repl_getmores_per_sec (integer, deprecated in 1.10; use `repl_getmores`)
+    - repl_inserts_per_sec (integer, deprecated in 1.10; use `repl_inserts`)
+    - repl_queries_per_sec (integer, deprecated in 1.10; use `repl_queries`)
+    - repl_updates_per_sec (integer, deprecated in 1.10; use `repl_updates`)
+    - ttl_deletes_per_sec (integer, deprecated in 1.10; use `ttl_deletes`)
+    - ttl_passes_per_sec (integer, deprecated in 1.10; use `ttl_passes`)
+    - updates_per_sec (integer, deprecated in 1.10; use `updates`)
+
+- mongodb_db_stats
+  - tags:
+    - db_name
+    - hostname
+  - fields:
+    - avg_obj_size (float)
+    - collections (integer)
+    - data_size (integer)
+    - index_size (integer)
+    - indexes (integer)
+    - num_extents (integer)
+    - objects (integer)
+    - ok (integer)
+    - storage_size (integer)
+    - type (string)
+    - fs_used_size (integer)
+    - fs_total_size (integer)
+
+- mongodb_col_stats
+  - tags:
+    - hostname
+    - collection
+    - db_name
+  - fields:
+    - size (integer)
+    - avg_obj_size (integer)
+    - storage_size (integer)
+    - total_index_size (integer)
+    - ok (integer)
+    - count (integer)
+    - type (string)
+
+- mongodb_shard_stats
+  - tags:
+    - hostname
+  - fields:
+    - in_use (integer)
+    - available (integer)
+    - created (integer)
+    - refreshing (integer)
+
+- mongodb_top_stats
+  - tags:
+    - collection
+  - fields:
+    - total_time (integer)
+    - total_count (integer)
+    - read_lock_time (integer)
+    - read_lock_count (integer)
+    - write_lock_time (integer)
+    - write_lock_count (integer)
+    - queries_time (integer)
+    - queries_count (integer)
+    - get_more_time (integer)
+    - get_more_count (integer)
+    - insert_time (integer)
+    - insert_count (integer)
+    - update_time (integer)
+    - update_count (integer)
+    - remove_time (integer)
+    - remove_count (integer)
+    - commands_time (integer)
+    - commands_count (integer)
+
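+If you do not need the deprecated `*_per_sec` rate fields, they can be dropped
+with Telegraf's global metric filtering. A sketch (assuming the MongoDB input
+is already configured elsewhere in your file):
+
+```toml
+[[inputs.mongodb]]
+  servers = ["mongodb://127.0.0.1:27017"]
+  fieldexclude = ["*_per_sec"]
+```
+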
+## Example Output
+
+```text
+mongodb,hostname=127.0.0.1:27017 active_reads=1i,active_writes=0i,aggregate_command_failed=0i,aggregate_command_total=0i,assert_msg=0i,assert_regular=0i,assert_rollovers=0i,assert_user=0i,assert_warning=0i,available_reads=127i,available_writes=128i,commands=65i,commands_per_sec=4i,connections_available=51199i,connections_current=1i,connections_total_created=5i,count_command_failed=0i,count_command_total=7i,cursor_no_timeout=0i,cursor_no_timeout_count=0i,cursor_pinned=0i,cursor_pinned_count=0i,cursor_timed_out=0i,cursor_timed_out_count=0i,cursor_total=0i,cursor_total_count=0i,delete_command_failed=0i,delete_command_total=1i,deletes=1i,deletes_per_sec=0i,distinct_command_failed=0i,distinct_command_total=0i,document_deleted=0i,document_inserted=0i,document_returned=0i,document_updated=0i,find_and_modify_command_failed=0i,find_and_modify_command_total=0i,find_command_failed=0i,find_command_total=1i,flushes=52i,flushes_per_sec=0i,flushes_total_time_ns=364000000i,get_more_command_failed=0i,get_more_command_total=0i,getmores=0i,getmores_per_sec=0i,insert_command_failed=0i,insert_command_total=0i,inserts=0i,inserts_per_sec=0i,jumbo_chunks=0i,latency_commands=5740i,latency_commands_count=46i,latency_reads=348i,latency_reads_count=7i,latency_writes=0i,latency_writes_count=0i,net_in_bytes=296i,net_in_bytes_count=4262i,net_out_bytes=29322i,net_out_bytes_count=242103i,open_connections=1i,operation_scan_and_order=0i,operation_write_conflicts=0i,page_faults=1i,percent_cache_dirty=0,percent_cache_used=0,queries=1i,queries_per_sec=0i,queued_reads=0i,queued_writes=0i,resident_megabytes=33i,storage_freelist_search_bucket_exhausted=0i,storage_freelist_search_requests=0i,storage_freelist_search_scanned=0i,tcmalloc_central_cache_free_bytes=0i,tcmalloc_current_allocated_bytes=0i,tcmalloc_current_total_thread_cache_bytes=0i,tcmalloc_heap_size=0i,tcmalloc_max_total_thread_cache_bytes=0i,tcmalloc_pageheap_commit_count=0i,tcmalloc_pageheap_committed_bytes=0i,tcmalloc_pageheap_decommit_count=
0i,tcmalloc_pageheap_free_bytes=0i,tcmalloc_pageheap_reserve_count=0i,tcmalloc_pageheap_scavenge_count=0i,tcmalloc_pageheap_total_commit_bytes=0i,tcmalloc_pageheap_total_decommit_bytes=0i,tcmalloc_pageheap_total_reserve_bytes=0i,tcmalloc_pageheap_unmapped_bytes=0i,tcmalloc_spinlock_total_delay_ns=0i,tcmalloc_thread_cache_free_bytes=0i,tcmalloc_total_free_bytes=0i,tcmalloc_transfer_cache_free_bytes=0i,total_available=0i,total_created=0i,total_docs_scanned=0i,total_in_use=0i,total_keys_scanned=0i,total_refreshing=0i,total_tickets_reads=128i,total_tickets_writes=128i,ttl_deletes=0i,ttl_deletes_per_sec=0i,ttl_passes=51i,ttl_passes_per_sec=0i,update_command_failed=0i,update_command_total=0i,updates=0i,updates_per_sec=0i,uptime_ns=6135152000000i,version="4.0.19",vsize_megabytes=5088i,wt_connection_files_currently_open=13i,wt_data_handles_currently_active=18i,wtcache_app_threads_page_read_count=99i,wtcache_app_threads_page_read_time=44528i,wtcache_app_threads_page_write_count=19i,wtcache_bytes_read_into=3248195i,wtcache_bytes_written_from=170612i,wtcache_current_bytes=3648788i,wtcache_internal_pages_evicted=0i,wtcache_max_bytes_configured=8053063680i,wtcache_modified_pages_evicted=0i,wtcache_pages_evicted_by_app_thread=0i,wtcache_pages_queued_for_eviction=0i,wtcache_pages_read_into=234i,wtcache_pages_requested_from=18235i,wtcache_server_evicting_pages=0i,wtcache_tracked_dirty_bytes=0i,wtcache_unmodified_pages_evicted=0i,wtcache_worker_thread_evictingpages=0i 1595691605000000000
+mongodb,hostname=127.0.0.1:27017,node_type=PRI,rs_name=rs0 active_reads=1i,active_writes=0i,aggregate_command_failed=0i,aggregate_command_total=0i,assert_msg=0i,assert_regular=0i,assert_rollovers=0i,assert_user=25i,assert_warning=0i,available_reads=127i,available_writes=128i,commands=345i,commands_per_sec=4i,connections_available=838853i,connections_current=7i,connections_total_created=13i,count_command_failed=0i,count_command_total=5i,cursor_no_timeout=0i,cursor_no_timeout_count=0i,cursor_pinned=0i,cursor_pinned_count=2i,cursor_timed_out=0i,cursor_timed_out_count=0i,cursor_total=0i,cursor_total_count=4i,delete_command_failed=0i,delete_command_total=0i,deletes=0i,deletes_per_sec=0i,distinct_command_failed=0i,distinct_command_total=0i,document_deleted=0i,document_inserted=2i,document_returned=56i,document_updated=0i,find_and_modify_command_failed=0i,find_and_modify_command_total=0i,find_command_failed=0i,find_command_total=23i,flushes=4i,flushes_per_sec=0i,flushes_total_time_ns=43000000i,get_more_command_failed=0i,get_more_command_total=88i,getmores=88i,getmores_per_sec=0i,insert_command_failed=0i,insert_command_total=2i,inserts=2i,inserts_per_sec=0i,jumbo_chunks=0i,latency_commands=82532i,latency_commands_count=337i,latency_reads=30633i,latency_reads_count=111i,latency_writes=0i,latency_writes_count=0i,member_status="PRI",net_in_bytes=636i,net_in_bytes_count=172300i,net_out_bytes=38849i,net_out_bytes_count=335459i,open_connections=7i,operation_scan_and_order=1i,operation_write_conflicts=0i,page_faults=1i,percent_cache_dirty=0,percent_cache_used=0,queries=23i,queries_per_sec=2i,queued_reads=0i,queued_writes=0i,repl_apply_batches_num=0i,repl_apply_batches_total_millis=0i,repl_apply_ops=0i,repl_buffer_count=0i,repl_buffer_size_bytes=0i,repl_commands=0i,repl_commands_per_sec=0i,repl_deletes=0i,repl_deletes_per_sec=0i,repl_executor_pool_in_progress_count=0i,repl_executor_queues_network_in_progress=0i,repl_executor_queues_sleepers=3i,repl_executor_unsignaled_events=0i,re
pl_getmores=0i,repl_getmores_per_sec=0i,repl_inserts=0i,repl_inserts_per_sec=0i,repl_lag=0i,repl_network_bytes=0i,repl_network_getmores_num=0i,repl_network_getmores_total_millis=0i,repl_network_ops=0i,repl_oplog_window_sec=140i,repl_queries=0i,repl_queries_per_sec=0i,repl_state=1i,repl_updates=0i,repl_updates_per_sec=0i,resident_megabytes=81i,state="PRIMARY",storage_freelist_search_bucket_exhausted=0i,storage_freelist_search_requests=0i,storage_freelist_search_scanned=0i,tcmalloc_central_cache_free_bytes=322128i,tcmalloc_current_allocated_bytes=143566680i,tcmalloc_current_total_thread_cache_bytes=1098968i,tcmalloc_heap_size=181317632i,tcmalloc_max_total_thread_cache_bytes=260046848i,tcmalloc_pageheap_commit_count=53i,tcmalloc_pageheap_committed_bytes=149106688i,tcmalloc_pageheap_decommit_count=1i,tcmalloc_pageheap_free_bytes=3244032i,tcmalloc_pageheap_reserve_count=51i,tcmalloc_pageheap_scavenge_count=1i,tcmalloc_pageheap_total_commit_bytes=183074816i,tcmalloc_pageheap_total_decommit_bytes=33968128i,tcmalloc_pageheap_total_reserve_bytes=181317632i,tcmalloc_pageheap_unmapped_bytes=32210944i,tcmalloc_spinlock_total_delay_ns=0i,tcmalloc_thread_cache_free_bytes=1098968i,tcmalloc_total_free_bytes=2295976i,tcmalloc_transfer_cache_free_bytes=874880i,total_available=0i,total_created=0i,total_docs_scanned=56i,total_in_use=0i,total_keys_scanned=2i,total_refreshing=0i,total_tickets_reads=128i,total_tickets_writes=128i,ttl_deletes=0i,ttl_deletes_per_sec=0i,ttl_passes=2i,ttl_passes_per_sec=0i,update_command_failed=0i,update_command_total=0i,updates=0i,updates_per_sec=0i,uptime_ns=166481000000i,version="4.0.19",vsize_megabytes=1482i,wt_connection_files_currently_open=26i,wt_data_handles_currently_active=44i,wtcache_app_threads_page_read_count=0i,wtcache_app_threads_page_read_time=0i,wtcache_app_threads_page_write_count=56i,wtcache_bytes_read_into=0i,wtcache_bytes_written_from=130403i,wtcache_current_bytes=100312i,wtcache_internal_pages_evicted=0i,wtcache_max_bytes_configured=5064
62208i,wtcache_modified_pages_evicted=0i,wtcache_pages_evicted_by_app_thread=0i,wtcache_pages_queued_for_eviction=0i,wtcache_pages_read_into=0i,wtcache_pages_requested_from=2085i,wtcache_server_evicting_pages=0i,wtcache_tracked_dirty_bytes=63929i,wtcache_unmodified_pages_evicted=0i,wtcache_worker_thread_evictingpages=0i 1595691605000000000
+mongodb_db_stats,db_name=admin,hostname=127.0.0.1:27017 avg_obj_size=241,collections=2i,data_size=723i,index_size=49152i,indexes=3i,num_extents=0i,objects=3i,ok=1i,storage_size=53248i,type="db_stat" 1547159491000000000
+mongodb_db_stats,db_name=local,hostname=127.0.0.1:27017 avg_obj_size=813.9705882352941,collections=6i,data_size=55350i,index_size=102400i,indexes=5i,num_extents=0i,objects=68i,ok=1i,storage_size=204800i,type="db_stat" 1547159491000000000
+mongodb_col_stats,collection=foo,db_name=local,hostname=127.0.0.1:27017 size=375005928i,avg_obj_size=5494,type="col_stat",storage_size=249307136i,total_index_size=2138112i,ok=1i,count=68251i 1547159491000000000
+mongodb_shard_stats,hostname=127.0.0.1:27017 in_use=3i,available=3i,created=4i,refreshing=0i 1522799074000000000
+mongodb_top_stats,collection=foo total_time=1471,total_count=158,read_lock_time=49614,read_lock_count=657,write_lock_time=49125456,write_lock_count=9841,queries_time=174,queries_count=495,get_more_time=498,get_more_count=46,insert_time=2651,insert_count=1265,update_time=0,update_count=0,remove_time=0,remove_count=0,commands_time=498611,commands_count=4615
+```
diff --git a/content/telegraf/v1/input-plugins/monit/_index.md b/content/telegraf/v1/input-plugins/monit/_index.md
new file mode 100644
index 000000000..cd767dc08
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/monit/_index.md
@@ -0,0 +1,260 @@
+---
+description: "Telegraf plugin for collecting metrics from Monit"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Monit
+    identifier: input-monit
+tags: [Monit, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Monit Input Plugin
+
+The `monit` plugin gathers metrics and status information about local
+processes, remote hosts, files, file systems, directories, and network
+interfaces managed and watched over by [Monit](https://mmonit.com/).
+
+To use this plugin you should first enable the [HTTPD TCP port](https://mmonit.com/monit/documentation/monit.html#TCP-PORT) in
+Monit.
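+
+A minimal sketch of the corresponding `monitrc` snippet (the port and allow
+rule are assumptions; adjust them for your setup):
+
+```text
+set httpd port 2812
+    allow localhost
+```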
+
+The minimum Monit version tested with this plugin is 5.16.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics and status information about processes managed by Monit
+[[inputs.monit]]
+  ## Monit HTTPD address
+  address = "http://127.0.0.1:2812"
+
+  ## Username and Password for Monit
+  # username = ""
+  # password = ""
+
+  ## Amount of time allowed to complete the HTTP request
+  # timeout = "5s"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+## Metrics
+
+- monit_filesystem
+  - tags:
+    - address
+    - version
+    - service
+    - platform_name
+    - status
+    - monitoring_status
+    - monitoring_mode
+  - fields:
+    - status_code
+    - monitoring_status_code
+    - monitoring_mode_code
+    - mode
+    - block_percent
+    - block_usage
+    - block_total
+    - inode_percent
+    - inode_usage
+    - inode_total
+
+- monit_directory
+  - tags:
+    - address
+    - version
+    - service
+    - platform_name
+    - status
+    - monitoring_status
+    - monitoring_mode
+  - fields:
+    - status_code
+    - monitoring_status_code
+    - monitoring_mode_code
+    - permissions
+
+- monit_file
+  - tags:
+    - address
+    - version
+    - service
+    - platform_name
+    - status
+    - monitoring_status
+    - monitoring_mode
+  - fields:
+    - status_code
+    - monitoring_status_code
+    - monitoring_mode_code
+    - size
+    - permissions
+
+- monit_process
+  - tags:
+    - address
+    - version
+    - service
+    - platform_name
+    - status
+    - monitoring_status
+    - monitoring_mode
+  - fields:
+    - status_code
+    - monitoring_status_code
+    - monitoring_mode_code
+    - cpu_percent
+    - cpu_percent_total
+    - mem_kb
+    - mem_kb_total
+    - mem_percent
+    - mem_percent_total
+    - pid
+    - parent_pid
+    - threads
+    - children
+
+- monit_remote_host
+  - tags:
+    - address
+    - version
+    - service
+    - platform_name
+    - status
+    - monitoring_status
+    - monitoring_mode
+  - fields:
+    - status_code
+    - monitoring_status_code
+    - monitoring_mode_code
+    - hostname
+    - port_number
+    - request
+    - response_time
+    - protocol
+    - type
+
+- monit_system
+  - tags:
+    - address
+    - version
+    - service
+    - platform_name
+    - status
+    - monitoring_status
+    - monitoring_mode
+  - fields:
+    - status_code
+    - monitoring_status_code
+    - monitoring_mode_code
+    - cpu_system
+    - cpu_user
+    - cpu_wait
+    - cpu_load_avg_1m
+    - cpu_load_avg_5m
+    - cpu_load_avg_15m
+    - mem_kb
+    - mem_percent
+    - swap_kb
+    - swap_percent
+
+- monit_fifo
+  - tags:
+    - address
+    - version
+    - service
+    - platform_name
+    - status
+    - monitoring_status
+    - monitoring_mode
+  - fields:
+    - status_code
+    - monitoring_status_code
+    - monitoring_mode_code
+    - permissions
+
+- monit_program
+  - tags:
+    - address
+    - version
+    - service
+    - platform_name
+    - status
+    - monitoring_status
+    - monitoring_mode
+  - fields:
+    - status_code
+    - monitoring_status_code
+    - monitoring_mode_code
+
+- monit_network
+  - tags:
+    - address
+    - version
+    - service
+    - platform_name
+    - status
+    - monitoring_status
+    - monitoring_mode
+  - fields:
+    - status_code
+    - monitoring_status_code
+    - monitoring_mode_code
+
+## Example Output
+
+```text
+monit_file,monitoring_mode=active,monitoring_status=monitored,pending_action=none,platform_name=Linux,service=rsyslog_pid,source=xyzzy.local,status=running,version=5.20.0 mode=644i,monitoring_mode_code=0i,monitoring_status_code=1i,pending_action_code=0i,size=3i,status_code=0i 1579735047000000000
+monit_process,monitoring_mode=active,monitoring_status=monitored,pending_action=none,platform_name=Linux,service=rsyslog,source=xyzzy.local,status=running,version=5.20.0 children=0i,cpu_percent=0,cpu_percent_total=0,mem_kb=3148i,mem_kb_total=3148i,mem_percent=0.2,mem_percent_total=0.2,monitoring_mode_code=0i,monitoring_status_code=1i,parent_pid=1i,pending_action_code=0i,pid=318i,status_code=0i,threads=4i 1579735047000000000
+monit_program,monitoring_mode=active,monitoring_status=initializing,pending_action=none,platform_name=Linux,service=echo,source=xyzzy.local,status=running,version=5.20.0 monitoring_mode_code=0i,monitoring_status_code=2i,pending_action_code=0i,program_started=0i,program_status=0i,status_code=0i 1579735047000000000
+monit_system,monitoring_mode=active,monitoring_status=monitored,pending_action=none,platform_name=Linux,service=debian-stretch-monit.virt,source=xyzzy.local,status=running,version=5.20.0 cpu_load_avg_15m=0,cpu_load_avg_1m=0,cpu_load_avg_5m=0,cpu_system=0,cpu_user=0,cpu_wait=0,mem_kb=42852i,mem_percent=2.1,monitoring_mode_code=0i,monitoring_status_code=1i,pending_action_code=0i,status_code=0i,swap_kb=0,swap_percent=0 1579735047000000000
+monit_remote_host,dc=new-12,host=palladium,monitoring_mode=active,monitoring_status=monitored,pending_action=none,platform_name=Linux,rack=rack-0,service=blog.kalvad.com,source=palladium,status=running,version=5.27.0 monitoring_status_code=1i,monitoring_mode_code=0i,response_time=0.664412,type="TCP",pending_action_code=0i,remote_hostname="blog.kalvad.com",port_number=443i,request="/",protocol="HTTP",status_code=0i 1599138990000000000
+```
diff --git a/content/telegraf/v1/input-plugins/mqtt_consumer/_index.md b/content/telegraf/v1/input-plugins/mqtt_consumer/_index.md
new file mode 100644
index 000000000..29a07d469
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/mqtt_consumer/_index.md
@@ -0,0 +1,281 @@
+---
+description: "Telegraf plugin for collecting metrics from MQTT Consumer"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: MQTT Consumer
+    identifier: input-mqtt_consumer
+tags: [MQTT Consumer, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# MQTT Consumer Input Plugin
+
+The [MQTT](https://mqtt.org) consumer plugin reads from the specified MQTT topics
+and creates metrics using one of the supported [input data formats](/telegraf/v1/data_formats/input).
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin-specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Startup error behavior options <!-- @/docs/includes/startup_error_behavior.md -->
+
+In addition to the plugin-specific and global configuration settings the plugin
+supports options for specifying the behavior when experiencing startup errors
+using the `startup_error_behavior` setting. Available values are:
+
+- `error`:  Telegraf will stop and exit in case of startup errors. This is the
+            default behavior.
+- `ignore`: Telegraf will ignore startup errors for this plugin and disable it,
+            but continue processing all other plugins.
+- `retry`:  Telegraf will retry starting the plugin in every gather or write
+            cycle in case of startup errors. The plugin is disabled until
+            the startup succeeds.
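+
+For example, a sketch of opting a plugin instance into the retry behavior:
+
+```toml
+[[inputs.mqtt_consumer]]
+  servers = ["tcp://127.0.0.1:1883"]
+  startup_error_behavior = "retry"
+```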
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `username` and
+`password` options.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
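+
+For example, a sketch referencing secrets from a hypothetical secret store with
+the id `mystore` (the store id and secret names are assumptions):
+
+```toml
+[[inputs.mqtt_consumer]]
+  servers = ["tcp://127.0.0.1:1883"]
+  username = "@{mystore:mqtt_username}"
+  password = "@{mystore:mqtt_password}"
+```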
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from MQTT topic(s)
+[[inputs.mqtt_consumer]]
+  ## Broker URLs for the MQTT server or cluster.  To connect to multiple
+  ## clusters or standalone servers, use a separate plugin instance.
+  ##   example: servers = ["tcp://localhost:1883"]
+  ##            servers = ["ssl://localhost:1883"]
+  ##            servers = ["ws://localhost:1883"]
+  servers = ["tcp://127.0.0.1:1883"]
+
+  ## Topics that will be subscribed to.
+  topics = [
+    "telegraf/host01/cpu",
+    "telegraf/+/mem",
+    "sensors/#",
+  ]
+
+  ## The message topic will be stored in a tag specified by this value.  If set
+  ## to the empty string no topic tag will be created.
+  # topic_tag = "topic"
+
+  ## QoS policy for messages
+  ##   0 = at most once
+  ##   1 = at least once
+  ##   2 = exactly once
+  ##
+  ## When using a QoS of 1 or 2, you should enable persistent_session to allow
+  ## resuming unacknowledged messages.
+  # qos = 0
+
+  ## Connection timeout for initial connection in seconds
+  # connection_timeout = "30s"
+
+  ## Interval and ping timeout for keep-alive messages
+  ## The sum of those options defines when a connection loss is detected.
+  ## Note: The keep-alive interval needs to be greater or equal one second and
+  ## fractions of a second are not supported.
+  # keepalive = "60s"
+  # ping_timeout = "10s"
+
+  ## Max undelivered messages
+  ## This plugin uses tracking metrics, which ensure messages are delivered to
+  ## outputs before acknowledging them to the original broker so that data
+  ## is not lost. This option sets the maximum number of messages to read
+  ## from the broker that have not yet been written by an output.
+  ##
+  ## This value needs to be picked with awareness of the agent's
+  ## metric_batch_size value as well. Setting max undelivered messages too high
+  ## can result in a constant stream of data batches to the output, while
+  ## setting it too low may cause the broker's messages to never be flushed.
+  # max_undelivered_messages = 1000
+
+  ## Persistent session disables clearing of the client session on connection.
+  ## In order for this option to work you must also set client_id to identify
+  ## the client.  To receive messages that arrived while the client is offline,
+  ## also set the qos option to 1 or 2 and don't forget to also set the QoS when
+  ## publishing. Finally, using a persistent session will use the initial
+  ## connection topics and not subscribe to any new topics even after
+  ## reconnecting or restarting without a change in client ID.
+  # persistent_session = false
+
+  ## If unset, a random client ID will be generated.
+  # client_id = ""
+
+  ## Username and password to connect to the MQTT server.
+  # username = "telegraf"
+  # password = "metricsmetricsmetricsmetrics"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## Client trace messages
+  ## When set to true, and debug mode enabled in the agent settings, the MQTT
+  ## client's messages are included in telegraf logs. These messages are very
+  ## noisy, but essential for debugging issues.
+  # client_trace = false
+
+  ## Data format to consume.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  data_format = "influx"
+
+  ## Enable extracting tag values from MQTT topics
+  ## _ denotes an ignored entry in the topic path,
+  ## # denotes a variable length path element (can only be used once per setting)
+  # [[inputs.mqtt_consumer.topic_parsing]]
+  #   topic = ""
+  #   measurement = ""
+  #   tags = ""
+  #   fields = ""
+  ## Supported values are int, float, uint
+  #   [inputs.mqtt_consumer.topic_parsing.types]
+  #      key = type
+```
+
+## Example Output
+
+```text
+mqtt_consumer,host=pop-os,topic=telegraf/host01/cpu value=45i 1653579140440951943
+mqtt_consumer,host=pop-os,topic=telegraf/host01/cpu value=100i 1653579153147395661
+```
+
+## About Topic Parsing
+
+The MQTT topic as a whole is stored as a tag, but this can be far too coarse to
+be easily used when utilizing the data further down the line. Topic parsing
+allows tag values to be extracted from the MQTT topic, letting you store the
+information provided in the topic in a meaningful way. An `_` denotes an
+ignored entry in the topic path. See the following example.
+
+### Topic Parsing Example
+
+```toml
+[[inputs.mqtt_consumer]]
+  ## Broker URLs for the MQTT server or cluster.  To connect to multiple
+  ## clusters or standalone servers, use a separate plugin instance.
+  ##   example: servers = ["tcp://localhost:1883"]
+  ##            servers = ["ssl://localhost:1883"]
+  ##            servers = ["ws://localhost:1883"]
+  servers = ["tcp://127.0.0.1:1883"]
+
+  ## Topics that will be subscribed to.
+  topics = [
+    "telegraf/+/cpu/23",
+  ]
+
+  ## Data format to consume.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  data_format = "value"
+  data_type = "float"
+
+  [[inputs.mqtt_consumer.topic_parsing]]
+    topic = "telegraf/one/cpu/23"
+    measurement = "_/_/measurement/_"
+    tags = "tag/_/_/_"
+    fields = "_/_/_/test"
+    [inputs.mqtt_consumer.topic_parsing.types]
+      test = "int"
+```
+
+This configuration results in the following metric:
+
+```text
+cpu,host=pop-os,tag=telegraf,topic=telegraf/one/cpu/23 value=45,test=23i 1637014942460689291
+```
+
+## Field Pivoting Example
+
+You can use the pivot processor to rotate single-valued metrics into a
+multi-field metric. For more information, see the
+[pivot processor](https://github.com/influxdata/telegraf/tree/master/plugins/processors/pivot).
+
+For this example these are the topics:
+
+```text
+/sensors/CLE/v1/device5/temp
+/sensors/CLE/v1/device5/rpm
+/sensors/CLE/v1/device5/ph
+/sensors/CLE/v1/device5/spin
+```
+
+And these are the metrics:
+
+```text
+sensors,site=CLE,version=v1,device_name=device5,field=temp value=390
+sensors,site=CLE,version=v1,device_name=device5,field=rpm value=45.0
+sensors,site=CLE,version=v1,device_name=device5,field=ph value=1.45
+```
+
+Using the pivot processor in the configuration rotates the metrics into a
+single multi-field metric:
+
+```toml
+[[inputs.mqtt_consumer]]
+    ....
+    topics = ["/sensors/#"]
+    [[inputs.mqtt_consumer.topic_parsing]]
+        measurement = "/measurement/_/_/_/_"
+        tags = "/_/site/version/device_name/field"
+[[processors.pivot]]
+    tag_key = "field"
+    value_key = "value"
+```
+
+This configuration results in the following metric:
+
+```text
+sensors,site=CLE,version=v1,device_name=device5 temp=390,rpm=45.0,ph=1.45
+```
+
+## Metrics
+
+- All measurements are tagged with the incoming topic, for example
+  `topic=telegraf/host01/cpu`.
+- When `[[inputs.mqtt_consumer.topic_parsing]]` is configured, additional tags
+  and fields are extracted from the topic.
+- When the `[[inputs.internal]]` plugin is enabled, the following fields are
+  also reported:
+  - payload_size (int): the cumulative size in bytes of received messages
+  - messages_received (int): the number of messages received from MQTT
+
+This results in the following metric:
+
+```text
+internal_mqtt_consumer host=pop-os version=1.24.0 messages_received=622i payload_size=37942i 1657282270000000000
+```
+
diff --git a/content/telegraf/v1/input-plugins/multifile/_index.md b/content/telegraf/v1/input-plugins/multifile/_index.md
new file mode 100644
index 000000000..942dbb43b
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/multifile/_index.md
@@ -0,0 +1,105 @@
+---
+description: "Telegraf plugin for collecting metrics from Multifile"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Multifile
+    identifier: input-multifile
+tags: [Multifile, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Multifile Input Plugin
+
+The multifile input plugin allows Telegraf to combine data from multiple files
+into a single metric, creating one field or tag per file. This is often useful
+for creating custom metrics from the `/sys` or `/proc` filesystems.
+
+> Note: If you wish to parse metrics from a single file formatted in one of
+> the supported [input data formats](/telegraf/v1/data_formats/input), you should use the [file](/telegraf/v1/plugins/#input-file) input
+> plugin instead.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Aggregates the contents of multiple files into a single point
+[[inputs.multifile]]
+  ## Base directory where telegraf will look for files.
+  ## Omit this option to use absolute paths.
+  base_dir = "/sys/bus/i2c/devices/1-0076/iio:device0"
+
+  ## If true, discard all data when a single file can't be read.
+  ## Otherwise, Telegraf omits the field generated from that file.
+  # fail_early = true
+
+  ## Files to parse each interval.
+  [[inputs.multifile.file]]
+    file = "in_pressure_input"
+    dest = "pressure"
+    conversion = "float"
+  [[inputs.multifile.file]]
+    file = "in_temp_input"
+    dest = "temperature"
+    conversion = "float(3)"
+  [[inputs.multifile.file]]
+    file = "in_humidityrelative_input"
+    dest = "humidityrelative"
+    conversion = "float(3)"
+```
+
+## Metrics
+
+Each file table can contain the following options:
+
+* `file`:
+Path of the file to be parsed, relative to the `base_dir`.
+* `dest`:
+Name of the field/tag key, defaults to `$(basename file)`.
+* `conversion`:
+Data format used to parse the file contents:
+  * `float(X)`: Converts the input value into a float and divides by the Xth
+    power of 10, effectively moving the decimal point X places to the left.
+    For example, a value of `123` with `float(2)` results in `1.23`.
+  * `float`: Converts the value into a float with no adjustment.
+    Same as `float(0)`.
+  * `int`: Converts the value into an integer.
+  * `string`, `""`: No conversion.
+  * `bool`: Converts the value into a boolean.
+  * `tag`: File content is used as a tag.
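+
+As an illustrative sketch (the `name` file and the `sensor` key here are
+hypothetical, not part of any kernel driver), a file whose content should
+become a tag rather than a field can be declared with `conversion = "tag"`:
+
+```toml
+[[inputs.multifile]]
+  base_dir = "/sys/bus/i2c/devices/1-0076/iio:device0"
+
+  ## Hypothetical file whose content becomes the value of the "sensor" tag
+  [[inputs.multifile.file]]
+    file = "name"
+    dest = "sensor"
+    conversion = "tag"
+```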
+
+## Example Output
+
+This example shows a BME280 connected to a Raspberry Pi, using the sample
+config.
+
+```text
+multifile pressure=101.343285156,temperature=20.4,humidityrelative=48.9 1547202076000000000
+```
+
+To reproduce this, connect a BME280 to the board's GPIO pins and register the
+BME280 device driver:
+
+```sh
+cd /sys/bus/i2c/devices/i2c-1
+echo bme280 0x76 > new_device
+```
+
+The kernel driver provides the following files in
+`/sys/bus/i2c/devices/1-0076/iio:device0`:
+
+* `in_humidityrelative_input`: `48900`
+* `in_pressure_input`: `101.343285156`
+* `in_temp_input`: `20400`
+
diff --git a/content/telegraf/v1/input-plugins/mysql/_index.md b/content/telegraf/v1/input-plugins/mysql/_index.md
new file mode 100644
index 000000000..76b71c1a0
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/mysql/_index.md
@@ -0,0 +1,425 @@
+---
+description: "Telegraf plugin for collecting metrics from MySQL"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: MySQL
+    identifier: input-mysql
+tags: [MySQL, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# MySQL Input Plugin
+
+This plugin gathers statistics from MySQL servers:
+
+* Global statuses
+* Global variables
+* Slave statuses
+* Binlog size
+* Process list
+* User Statistics
+* Info schema auto increment columns
+* InnoDB metrics
+* Table I/O waits
+* Index I/O waits
+* Perf Schema table lock waits
+* Perf Schema event waits
+* Perf Schema events statements
+* File events statistics
+* Table schema statistics
+
+In order to gather metrics from the performance schema, it must first be
+enabled in the MySQL configuration. See the performance schema
+[quick start](https://dev.mysql.com/doc/refman/8.0/en/performance-schema-quick-start.html).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from one or many mysql servers
+[[inputs.mysql]]
+  ## specify servers via a url matching:
+  ##  [username[:password]@][protocol[(address)]]/[?tls=[true|false|skip-verify|custom]]
+  ##  see https://github.com/go-sql-driver/mysql#dsn-data-source-name
+  ##  e.g.
+  ##    servers = ["user:passwd@tcp(127.0.0.1:3306)/?tls=false"]
+  ##    servers = ["user@tcp(127.0.0.1:3306)/?tls=false"]
+  #
+  ## If no servers are specified, then localhost is used as the host.
+  servers = ["tcp(127.0.0.1:3306)/"]
+
+  ## Selects the metric output format.
+  ##
+  ## This option exists to maintain backwards compatibility, if you have
+  ## existing metrics do not set or change this value until you are ready to
+  ## migrate to the new format.
+  ##
+  ## If you do not have existing metrics from this plugin, set this to the
+  ## latest version.
+  ##
+  ## Telegraf >=1.6: metric_version = 2
+  ##           <1.6: metric_version = 1 (or unset)
+  metric_version = 2
+
+  ## if the list is empty, then metrics are gathered from all database tables
+  # table_schema_databases = []
+
+  ## gather metrics from INFORMATION_SCHEMA.TABLES for databases provided
+  ## in the list above
+  # gather_table_schema = false
+
+  ## gather thread state counts from INFORMATION_SCHEMA.PROCESSLIST
+  # gather_process_list = false
+
+  ## gather user statistics from INFORMATION_SCHEMA.USER_STATISTICS
+  # gather_user_statistics = false
+
+  ## gather auto_increment columns and max values from information schema
+  # gather_info_schema_auto_inc = false
+
+  ## gather metrics from INFORMATION_SCHEMA.INNODB_METRICS
+  # gather_innodb_metrics = false
+
+  ## gather metrics from all channels from SHOW SLAVE STATUS command output
+  # gather_all_slave_channels = false
+
+  ## gather metrics from SHOW SLAVE STATUS command output
+  # gather_slave_status = false
+
+  ## gather metrics from SHOW REPLICA STATUS command output
+  # gather_replica_status = false
+
+  ## use SHOW ALL SLAVES STATUS command output for MariaDB
+  ## use SHOW ALL REPLICAS STATUS command if gather_replica_status is enabled
+  # mariadb_dialect = false
+
+  ## gather metrics from SHOW BINARY LOGS command output
+  # gather_binary_logs = false
+
+  ## gather metrics from SHOW GLOBAL VARIABLES command output
+  # gather_global_variables = true
+
+  ## gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMARY_BY_TABLE
+  # gather_table_io_waits = false
+
+  ## gather metrics from PERFORMANCE_SCHEMA.TABLE_LOCK_WAITS
+  # gather_table_lock_waits = false
+
+  ## gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMARY_BY_INDEX_USAGE
+  # gather_index_io_waits = false
+
+  ## gather metrics from PERFORMANCE_SCHEMA.EVENT_WAITS
+  # gather_event_waits = false
+
+  ## gather metrics from PERFORMANCE_SCHEMA.FILE_SUMMARY_BY_EVENT_NAME
+  # gather_file_events_stats = false
+
+  ## gather metrics from PERFORMANCE_SCHEMA.EVENTS_STATEMENTS_SUMMARY_BY_DIGEST
+  # gather_perf_events_statements             = false
+  #
+  ## gather metrics from PERFORMANCE_SCHEMA.EVENTS_STATEMENTS_SUMMARY_BY_ACCOUNT_BY_EVENT_NAME
+  # gather_perf_sum_per_acc_per_event         = false
+  #
+  ## list of events to be gathered for gather_perf_sum_per_acc_per_event
+  ## in case of empty list all events will be gathered
+  # perf_summary_events                       = []
+
+  ## the limits for metrics from perf_events_statements
+  # perf_events_statements_digest_text_limit = 120
+  # perf_events_statements_limit = 250
+  # perf_events_statements_time_limit = 86400
+
+  ## Some queries we may want to run less often (such as SHOW GLOBAL VARIABLES)
+  ##   example: interval_slow = "30m"
+  # interval_slow = ""
+
+  ## Optional TLS Config (used if tls=custom parameter specified in server uri)
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+### String Data
+
+Some fields may return string data. This is unhelpful for some outputs where
+numeric data is required (e.g. Prometheus). In these cases, users can make use
+of the enum processor to convert string values to numeric values. Below is an
+example using the `slave_slave_io_running` field, which can have a variety of
+string values:
+
+```toml
+[[processors.enum]]
+  namepass = ["mysql"]
+  [[processors.enum.mapping]]
+    field = "slave_slave_io_running"
+    dest = "slave_slave_io_running_int"
+    default = 4
+    [processors.enum.mapping.value_mappings]
+      Yes = 0
+      No = 1
+      Preparing = 2
+      Connecting = 3
+```
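+
+Assuming the mapping above, a metric whose `slave_slave_io_running` field is
+`Yes` would gain a numeric companion field, roughly like the following (the
+server tag shown is illustrative):
+
+```text
+mysql,server=127.0.0.1:3306 slave_slave_io_running="Yes",slave_slave_io_running_int=0i
+```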
+
+### Metric Version
+
+When `metric_version = 2`, a variety of field type issues and naming
+inconsistencies are corrected. If you have existing data on the original
+version, enabling this feature will cause a `field type error` when the data is
+inserted into InfluxDB due to the change of types. For this reason, you should
+keep `metric_version` unset until you are ready to migrate to the new format.
+
+If preserving your old data is not required you may wish to drop conflicting
+measurements:
+
+```sql
+DROP SERIES FROM mysql
+DROP SERIES FROM mysql_variables
+DROP SERIES FROM mysql_innodb
+```
+
+Otherwise, migration can be performed using the following steps:
+
+1. Duplicate your `mysql` plugin configuration and add a `name_suffix` and
+`metric_version = 2`. This results in collection using both the old and new
+style concurrently:
+
+   ```toml
+   [[inputs.mysql]]
+     servers = ["tcp(127.0.0.1:3306)/"]
+
+   [[inputs.mysql]]
+     name_suffix = "_v2"
+     metric_version = 2
+
+     servers = ["tcp(127.0.0.1:3306)/"]
+   ```
+
+2. Upgrade all affected Telegraf clients to version >=1.6.
+
+   New measurements will be created with the `name_suffix`, for example:
+   * `mysql_v2`
+   * `mysql_variables_v2`
+
+3. Update charts, alerts, and other supporting code to the new format.
+4. You can now remove the old `mysql` plugin configuration and remove old
+   measurements.
+
+If you wish to remove the `name_suffix` you may use Kapacitor to copy the
+historical data to the default name.  Do this only after retiring the old
+measurement name.
+
+1. Use the technique described above to write to multiple locations:
+
+   ```toml
+   [[inputs.mysql]]
+     servers = ["tcp(127.0.0.1:3306)/"]
+     metric_version = 2
+
+   [[inputs.mysql]]
+     name_suffix = "_v2"
+     metric_version = 2
+
+     servers = ["tcp(127.0.0.1:3306)/"]
+   ```
+
+2. Create a TICKScript to copy the historical data:
+
+   ```js
+   dbrp "telegraf"."autogen"
+
+   batch
+       |query('''
+           SELECT * FROM "telegraf"."autogen"."mysql_v2"
+       ''')
+           .period(5m)
+           .every(5m)
+       |influxDBOut()
+           .database('telegraf')
+           .retentionPolicy('autogen')
+           .measurement('mysql')
+   ```
+
+3. Define a task for your script:
+
+   ```sh
+   kapacitor define copy-measurement -tick copy-measurement.task
+   ```
+
+4. Run the task over the data you would like to migrate:
+
+   ```sh
+   kapacitor replay-live batch -start 2018-03-30T20:00:00Z -stop 2018-04-01T12:00:00Z -rec-time -task copy-measurement
+   ```
+
+5. Verify copied data and repeat for other measurements.
+
+## Metrics
+
+* Global statuses - all numeric and boolean values of `SHOW GLOBAL STATUS`
+* Global variables - all numeric and boolean values of `SHOW GLOBAL VARIABLES`
+* Slave status - metrics from `SHOW SLAVE STATUS`. These metrics are gathered
+only when single-source replication is on; with multi-source replication this
+metric does not work unless you set `gather_all_slave_channels = true`. For
+MariaDB, `mariadb_dialect = true` should be set to address the differences in
+field names and commands. If `gather_replica_status` is enabled, metrics are
+gathered from `SHOW REPLICA STATUS` (`SHOW ALL REPLICAS STATUS` for MariaDB).
+  * slave_[column name]
+* Binary logs - all metrics including size and count of all binary files.
+Requires `gather_binary_logs = true` in the configuration.
+  * binary_size_bytes(int, number)
+  * binary_files_count(int, number)
+* Process list - connection metrics from the processlist for each user. It has
+  the following fields:
+  * connections(int, number)
+* User Statistics - connection metrics from user statistics for each user.
+  It has the following fields:
+  * access_denied
+  * binlog_bytes_written
+  * busy_time
+  * bytes_received
+  * bytes_sent
+  * commit_transactions
+  * concurrent_connections
+  * connected_time
+  * cpu_time
+  * denied_connections
+  * empty_queries
+  * hostlost_connections
+  * other_commands
+  * rollback_transactions
+  * rows_fetched
+  * rows_updated
+  * select_commands
+  * server
+  * table_rows_read
+  * total_connections
+  * total_ssl_connections
+  * update_commands
+  * user
+* Perf Table IO waits - total count and time of I/O wait events for each table
+and process. It has the following fields:
+  * table_io_waits_total_fetch(float, number)
+  * table_io_waits_total_insert(float, number)
+  * table_io_waits_total_update(float, number)
+  * table_io_waits_total_delete(float, number)
+  * table_io_waits_seconds_total_fetch(float, milliseconds)
+  * table_io_waits_seconds_total_insert(float, milliseconds)
+  * table_io_waits_seconds_total_update(float, milliseconds)
+  * table_io_waits_seconds_total_delete(float, milliseconds)
+* Perf index IO waits - total count and time of I/O wait events for each index
+and process. It has the following fields:
+  * index_io_waits_total_fetch(float, number)
+  * index_io_waits_seconds_total_fetch(float, milliseconds)
+  * index_io_waits_total_insert(float, number)
+  * index_io_waits_total_update(float, number)
+  * index_io_waits_total_delete(float, number)
+  * index_io_waits_seconds_total_insert(float, milliseconds)
+  * index_io_waits_seconds_total_update(float, milliseconds)
+  * index_io_waits_seconds_total_delete(float, milliseconds)
+* Info schema autoincrement statuses - autoincrement fields and their max
+values. It has the following fields:
+  * auto_increment_column(int, number)
+  * auto_increment_column_max(int, number)
+* InnoDB metrics - all metrics of information_schema.INNODB_METRICS with a
+  status "enabled". For MariaDB, set `mariadb_dialect = true` to use `ENABLED=1`.
+* Perf table lock waits - gathers total number and time of SQL and external
+lock wait events for each table and operation. It has the following fields;
+the unit of each field varies by the tags.
+  * read_normal(float, number/milliseconds)
+  * read_with_shared_locks(float, number/milliseconds)
+  * read_high_priority(float, number/milliseconds)
+  * read_no_insert(float, number/milliseconds)
+  * write_normal(float, number/milliseconds)
+  * write_allow_write(float, number/milliseconds)
+  * write_concurrent_insert(float, number/milliseconds)
+  * write_low_priority(float, number/milliseconds)
+  * read(float, number/milliseconds)
+  * write(float, number/milliseconds)
+* Perf events waits - gathers total time and number of event waits
+  * events_waits_total(float, number)
+  * events_waits_seconds_total(float, milliseconds)
+* Perf file events statuses - gathers file events statuses
+  * file_events_total(float, number)
+  * file_events_seconds_total(float, milliseconds)
+  * file_events_bytes_total(float, bytes)
+* Perf events statements - gathers attributes of each event
+  * events_statements_total(float, number)
+  * events_statements_seconds_total(float, milliseconds)
+  * events_statements_errors_total(float, number)
+  * events_statements_warnings_total(float, number)
+  * events_statements_rows_affected_total(float, number)
+  * events_statements_rows_sent_total(float, number)
+  * events_statements_rows_examined_total(float, number)
+  * events_statements_tmp_tables_total(float, number)
+  * events_statements_tmp_disk_tables_total(float, number)
+  * events_statements_sort_merge_passes_totals(float, number)
+  * events_statements_sort_rows_total(float, number)
+  * events_statements_no_index_used_total(float, number)
+* Table schema - gathers statistics per schema. It has the following fields:
+  * info_schema_table_rows(float, number)
+  * info_schema_table_size_data_length(float, number)
+  * info_schema_table_size_index_length(float, number)
+  * info_schema_table_size_data_free(float, number)
+  * info_schema_table_version(float, number)
+
+## Tags
+
+* All measurements have the following tags:
+  * server (the host name from which the metrics are gathered)
+* The process list measurement has the following tags:
+  * user (the username for whom the metrics are gathered)
+* The user statistics measurement has the following tags:
+  * user (the username for whom the metrics are gathered)
+* The perf table IO waits measurement has the following tags:
+  * schema
+  * name (object name for event or process)
+* Perf index IO waits has the following tags:
+  * schema
+  * name
+  * index
+* Info schema autoincrement statuses has the following tags:
+  * schema
+  * table
+  * column
+* Perf table lock waits has the following tags:
+  * schema
+  * table
+  * sql_lock_waits_total(fields including this tag have numeric unit)
+  * external_lock_waits_total(fields including this tag have numeric unit)
+  * sql_lock_waits_seconds_total(fields including this tag have millisecond unit)
+  * external_lock_waits_seconds_total(fields including this tag have
+    millisecond unit)
+* Perf events statements has the following tags:
+  * event_name
+* Perf file events statuses has the following tags:
+  * event_name
+  * mode
+* Perf file events statements has the following tags:
+  * schema
+  * digest
+  * digest_text
+* Table schema has the following tags:
+  * schema
+  * table
+  * component
+  * type
+  * engine
+  * row_format
+  * create_options
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/nats/_index.md b/content/telegraf/v1/input-plugins/nats/_index.md
new file mode 100644
index 000000000..028ccf2e4
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/nats/_index.md
@@ -0,0 +1,67 @@
+---
+description: "Telegraf plugin for collecting metrics from NATS"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: NATS
+    identifier: input-nats
+tags: [NATS, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# NATS Input Plugin
+
+The [NATS](http://www.nats.io/about/) monitoring plugin gathers metrics from the
+NATS [monitoring HTTP server](https://docs.nats.io/running-a-nats-service/nats_admin/monitoring).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Provides metrics about the state of a NATS server
+# This plugin does NOT support FreeBSD
+[[inputs.nats]]
+  ## The address of the monitoring endpoint of the NATS server
+  server = "http://localhost:8222"
+
+  ## Maximum time to receive response
+  # response_timeout = "5s"
+```
+
+## Metrics
+
+- nats
+  - tags
+    - server
+  - fields:
+    - uptime (integer, nanoseconds)
+    - mem (integer, bytes)
+    - subscriptions (integer, count)
+    - out_bytes (integer, bytes)
+    - connections (integer, count)
+    - in_msgs (integer, bytes)
+    - total_connections (integer, count)
+    - cores (integer, count)
+    - cpu (integer, count)
+    - slow_consumers (integer, count)
+    - routes (integer, count)
+    - remotes (integer, count)
+    - out_msgs (integer, count)
+    - in_bytes (integer, bytes)
+
+## Example Output
+
+```text
+nats,server=http://localhost:8222 uptime=117158348682i,mem=6647808i,subscriptions=0i,out_bytes=0i,connections=0i,in_msgs=0i,total_connections=0i,cores=2i,cpu=0,slow_consumers=0i,routes=0i,remotes=0i,out_msgs=0i,in_bytes=0i 1517015107000000000
+```
diff --git a/content/telegraf/v1/input-plugins/nats_consumer/_index.md b/content/telegraf/v1/input-plugins/nats_consumer/_index.md
new file mode 100644
index 000000000..61914eca1
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/nats_consumer/_index.md
@@ -0,0 +1,128 @@
+---
+description: "Telegraf plugin for collecting metrics from NATS Consumer"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: NATS Consumer
+    identifier: input-nats_consumer
+tags: [NATS Consumer, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# NATS Consumer Input Plugin
+
+The [NATS](https://www.nats.io/about/) consumer plugin reads from the specified NATS subjects and
+creates metrics using one of the supported [input data formats](/telegraf/v1/data_formats/input).
+
+A [Queue Group](https://www.nats.io/documentation/concepts/nats-queueing/) is used when subscribing to subjects so multiple
+instances of telegraf can read from a NATS cluster in parallel.
+
+There are three methods of (optionally) authenticating with NATS:
+[username/password](https://docs.nats.io/using-nats/developer/connecting/userpass), [a NATS creds file](https://docs.nats.io/using-nats/developer/connecting/creds) (NATS 2.0), or
+an [nkey seed file](https://docs.nats.io/using-nats/developer/connecting/nkey) (NATS 2.0).
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics at the
+configured interval. Service plugins start a service that listens and waits
+for metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from NATS subject(s)
+[[inputs.nats_consumer]]
+  ## urls of NATS servers
+  servers = ["nats://localhost:4222"]
+
+  ## subject(s) to consume
+  ## If you use jetstream you need to set the subjects
+  ## in jetstream_subjects
+  subjects = ["telegraf"]
+
+  ## jetstream subjects
+  ## JetStream is a streaming technology inside of NATS.
+  ## With JetStream the nats-server persists messages and
+  ## a consumer can consume historical messages. This is
+  ## useful so that when Telegraf restarts it doesn't miss
+  ## any messages. You need to configure the nats-server.
+  ## https://docs.nats.io/nats-concepts/jetstream.
+  jetstream_subjects = ["js_telegraf"]
+
+  ## name a queue group
+  queue_group = "telegraf_consumers"
+
+  ## Optional authentication with username and password credentials
+  # username = ""
+  # password = ""
+
+  ## Optional authentication with NATS credentials file (NATS 2.0)
+  # credentials = "/etc/telegraf/nats.creds"
+
+  ## Optional authentication with nkey seed file (NATS 2.0)
+  # nkey_seed = "/etc/telegraf/seed.txt"
+
+  ## Use Transport Layer Security
+  # secure = false
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## Sets the limits for pending msgs and bytes for each subscription
+  ## These shouldn't need to be adjusted except in very high throughput scenarios
+  # pending_message_limit = 65536
+  # pending_bytes_limit = 67108864
+
+  ## Max undelivered messages
+  ## This plugin uses tracking metrics, which ensure messages are read to
+  ## outputs before acknowledging them to the original broker to ensure data
+  ## is not lost. This option sets the maximum messages to read from the
+  ## broker that have not been written by an output.
+  ##
+  ## This value needs to be picked with awareness of the agent's
+  ## metric_batch_size value as well. Setting max undelivered messages too high
+  ## can result in a constant stream of data batches to the output, while
+  ## setting it too low may prevent the broker's messages from ever flushing.
+  # max_undelivered_messages = 1000
+
+  ## Data format to consume.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  data_format = "influx"
+```
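+
+As a sketch of the relationship described in the comments above, the agent's
+`metric_batch_size` can be kept at or below `max_undelivered_messages` so
+batches flush regularly (the values shown are illustrative, not
+recommendations):
+
+```toml
+[agent]
+  ## Flush batches no larger than the undelivered-message window below
+  metric_batch_size = 1000
+
+[[inputs.nats_consumer]]
+  servers = ["nats://localhost:4222"]
+  subjects = ["telegraf"]
+  max_undelivered_messages = 1000
+  data_format = "influx"
+```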
+
+## Metrics
+
+The metrics produced depend on the subjects consumed from NATS and the
+configured data format.
+
+## Example Output
+
+The output depends on the consumed NATS subject, for example:
+
+```text
+nats_consumer,host=foo,subject=recvsubj value=1.9 1655972309339341000
+```
diff --git a/content/telegraf/v1/input-plugins/neptune_apex/_index.md b/content/telegraf/v1/input-plugins/neptune_apex/_index.md
new file mode 100644
index 000000000..f4c09bfb7
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/neptune_apex/_index.md
@@ -0,0 +1,182 @@
+---
+description: "Telegraf plugin for collecting metrics from Neptune Apex"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Neptune Apex
+    identifier: input-neptune_apex
+tags: [Neptune Apex, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Neptune Apex Input Plugin
+
+The Neptune Apex controller family allows an aquarium hobbyist to monitor and
+control their tanks based on various probes. The data is taken directly from the
+`/cgi-bin/status.xml` at the interval specified in the telegraf.conf
+configuration file.
+
+The [Neptune Apex](https://www.neptunesystems.com/) input plugin collects
+real-time data from the Apex's status.xml page.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Neptune Apex data collector
+[[inputs.neptune_apex]]
+  ## The Neptune Apex plugin reads the publicly available status.xml data from a local Apex.
+  ## Measurements will be logged under "apex".
+
+  ## The base URL of the local Apex(es). If you specify more than one server, they will
+  ## be differentiated by the "source" tag.
+  servers = [
+    "http://apex.local",
+  ]
+
+  ## The response_timeout specifies how long to wait for a reply from the Apex.
+  #response_timeout = "5s"
+
+```
+
+## Metrics
+
+No manipulation is done on any of the fields to ensure future changes to the
+status.xml do not introduce conversion bugs to this plugin. When reasonable and
+predictable, some tags are derived to make graphing easier and without front-end
+programming. These tags are clearly marked in the list below and should be
+considered a convenience rather than authoritative.
+
+- neptune_apex (All metrics have this measurement name)
+  - tags:
+    - host (mandatory, string) is the host on which telegraf runs.
+    - source (mandatory, string) contains the hostname of the apex device. This can be used to differentiate between
+    different units. By using the source instead of the serial number, replacement units won't disturb graphs.
+    - type (mandatory, string) maps to the different types of data. Values can be "controller" (The Apex controller
+    itself), "probe" for the different input probes, or "output" for any physical or virtual outputs. The Watt and Amp
+    probes attached to the physical 120V outlets are aggregated under the output type.
+    - hardware (mandatory, string) controller hardware version
+    - software (mandatory, string) software version
+    - probe_type (optional, string) contains the probe type as reported by the Apex.
+    - name (optional, string) contains the name of the probe or output.
+    - output_id (optional, string) represents the internal unique output ID. This is different from the device_id.
+    - device_id (optional, string) maps to either the aquabus address or the internal reference.
+    - output_type (optional, string) categorizes the output into different categories. This tag is DERIVED from the
+    device_id. Possible values are: "variable" for the 0-10V signal ports, "outlet" for physical 120V sockets, "alert"
+    for alarms (email, sound), "virtual" for user-defined outputs, and "unknown" for everything else.
+  - fields:
+    - value (float, various unit) represents the probe reading.
+    - state (string) represents the output state as defined by the Apex. Examples include "AOF" for Auto (OFF), "TBL"
+    for operating according to a table, and "PF*" for different programs.
+    - amp (float, Ampere) is the amount of current flowing through the 120V outlet.
+    - watt (float, Watt) represents the amount of energy flowing through the 120V outlet.
+    - xstatus (string) indicates the xstatus of an outlet. Found on wireless Vortech devices.
+    - power_failed (int64, Unix epoch in ns) when the controller last lost power. Omitted if the apex reports it as "none".
+    - power_restored (int64, Unix epoch in ns) when the controller last powered on. Omitted if the apex reports it as "none".
+    - serial (string, serial number)
+  - time:
+    - The time used for the metric is parsed from the status.xml page. This helps when cross-referencing events with
+     the local system or Apex Fusion. Since the Apex uses NTP, this should not matter in most scenarios.
+
+## Sample Queries
+
+Get the max, mean, and min for the temperature in the last hour:
+
+```sql
+SELECT max("value"), mean("value"), min("value") FROM "neptune_apex" WHERE ("probe_type" = 'Temp') AND time >= now() - 1h
+```
+
+## Troubleshooting
+
+### sendRequest failure
+
+This indicates a problem communicating with the local Apex controller. On
+macOS or Linux, use curl to isolate the problem:
+
+```sh
+curl apex.local/cgi-bin/status.xml
+```
+
+### parseXML errors
+
+Ensure the XML being returned is valid. If you get valid XML back, open a bug
+request.
+
+### Missing fields/data
+
+The neptune_apex plugin is strict on its input to prevent any conversion
+errors. If you have fields in the status.xml output that are not converted to a
+metric, open a feature request and paste your whole status.xml file.
+
+## Example Output
+
+```text
+neptune_apex,hardware=1.0,host=ubuntu,software=5.04_7A18,source=apex,type=controller power_failed=1544814000000000000i,power_restored=1544833875000000000i,serial="AC5:12345" 1545978278000000000
+neptune_apex,device_id=base_Var1,hardware=1.0,host=ubuntu,name=VarSpd1_I1,output_id=0,output_type=variable,software=5.04_7A18,source=apex,type=output state="PF1" 1545978278000000000
+neptune_apex,device_id=base_Var2,hardware=1.0,host=ubuntu,name=VarSpd2_I2,output_id=1,output_type=variable,software=5.04_7A18,source=apex,type=output state="PF2" 1545978278000000000
+neptune_apex,device_id=base_Var3,hardware=1.0,host=ubuntu,name=VarSpd3_I3,output_id=2,output_type=variable,software=5.04_7A18,source=apex,type=output state="PF3" 1545978278000000000
+neptune_apex,device_id=base_Var4,hardware=1.0,host=ubuntu,name=VarSpd4_I4,output_id=3,output_type=variable,software=5.04_7A18,source=apex,type=output state="PF4" 1545978278000000000
+neptune_apex,device_id=base_Alarm,hardware=1.0,host=ubuntu,name=SndAlm_I6,output_id=4,output_type=alert,software=5.04_7A18,source=apex,type=output state="AOF" 1545978278000000000
+neptune_apex,device_id=base_Warn,hardware=1.0,host=ubuntu,name=SndWrn_I7,output_id=5,output_type=alert,software=5.04_7A18,source=apex,type=output state="AOF" 1545978278000000000
+neptune_apex,device_id=base_email,hardware=1.0,host=ubuntu,name=EmailAlm_I5,output_id=6,output_type=alert,software=5.04_7A18,source=apex,type=output state="AOF" 1545978278000000000
+neptune_apex,device_id=base_email2,hardware=1.0,host=ubuntu,name=Email2Alm_I9,output_id=7,output_type=alert,software=5.04_7A18,source=apex,type=output state="AOF" 1545978278000000000
+neptune_apex,device_id=2_1,hardware=1.0,host=ubuntu,name=RETURN_2_1,output_id=8,output_type=outlet,software=5.04_7A18,source=apex,type=output amp=0.3,state="AON",watt=34 1545978278000000000
+neptune_apex,device_id=2_2,hardware=1.0,host=ubuntu,name=Heater1_2_2,output_id=9,output_type=outlet,software=5.04_7A18,source=apex,type=output amp=0,state="AOF",watt=0 1545978278000000000
+neptune_apex,device_id=2_3,hardware=1.0,host=ubuntu,name=FREE_2_3,output_id=10,output_type=outlet,software=5.04_7A18,source=apex,type=output amp=0,state="OFF",watt=1 1545978278000000000
+neptune_apex,device_id=2_4,hardware=1.0,host=ubuntu,name=LIGHT_2_4,output_id=11,output_type=outlet,software=5.04_7A18,source=apex,type=output amp=0,state="OFF",watt=1 1545978278000000000
+neptune_apex,device_id=2_5,hardware=1.0,host=ubuntu,name=LHead_2_5,output_id=12,output_type=outlet,software=5.04_7A18,source=apex,type=output amp=0,state="AON",watt=4 1545978278000000000
+neptune_apex,device_id=2_6,hardware=1.0,host=ubuntu,name=SKIMMER_2_6,output_id=13,output_type=outlet,software=5.04_7A18,source=apex,type=output amp=0.1,state="AON",watt=12 1545978278000000000
+neptune_apex,device_id=2_7,hardware=1.0,host=ubuntu,name=FREE_2_7,output_id=14,output_type=outlet,software=5.04_7A18,source=apex,type=output amp=0,state="OFF",watt=1 1545978278000000000
+neptune_apex,device_id=2_8,hardware=1.0,host=ubuntu,name=CABLIGHT_2_8,output_id=15,output_type=outlet,software=5.04_7A18,source=apex,type=output amp=0,state="AON",watt=1 1545978278000000000
+neptune_apex,device_id=2_9,hardware=1.0,host=ubuntu,name=LinkA_2_9,output_id=16,output_type=unknown,software=5.04_7A18,source=apex,type=output state="AOF" 1545978278000000000
+neptune_apex,device_id=2_10,hardware=1.0,host=ubuntu,name=LinkB_2_10,output_id=17,output_type=unknown,software=5.04_7A18,source=apex,type=output state="AOF" 1545978278000000000
+neptune_apex,device_id=3_1,hardware=1.0,host=ubuntu,name=RVortech_3_1,output_id=18,output_type=unknown,software=5.04_7A18,source=apex,type=output state="TBL",xstatus="OK" 1545978278000000000
+neptune_apex,device_id=3_2,hardware=1.0,host=ubuntu,name=LVortech_3_2,output_id=19,output_type=unknown,software=5.04_7A18,source=apex,type=output state="TBL",xstatus="OK" 1545978278000000000
+neptune_apex,device_id=4_1,hardware=1.0,host=ubuntu,name=OSMOLATO_4_1,output_id=20,output_type=outlet,software=5.04_7A18,source=apex,type=output amp=0,state="AOF",watt=0 1545978278000000000
+neptune_apex,device_id=4_2,hardware=1.0,host=ubuntu,name=HEATER2_4_2,output_id=21,output_type=outlet,software=5.04_7A18,source=apex,type=output amp=0,state="AOF",watt=0 1545978278000000000
+neptune_apex,device_id=4_3,hardware=1.0,host=ubuntu,name=NUC_4_3,output_id=22,output_type=outlet,software=5.04_7A18,source=apex,type=output amp=0.1,state="AON",watt=8 1545978278000000000
+neptune_apex,device_id=4_4,hardware=1.0,host=ubuntu,name=CABFAN_4_4,output_id=23,output_type=outlet,software=5.04_7A18,source=apex,type=output amp=0,state="AON",watt=1 1545978278000000000
+neptune_apex,device_id=4_5,hardware=1.0,host=ubuntu,name=RHEAD_4_5,output_id=24,output_type=outlet,software=5.04_7A18,source=apex,type=output amp=0,state="AON",watt=3 1545978278000000000
+neptune_apex,device_id=4_6,hardware=1.0,host=ubuntu,name=FIRE_4_6,output_id=25,output_type=outlet,software=5.04_7A18,source=apex,type=output amp=0,state="AON",watt=3 1545978278000000000
+neptune_apex,device_id=4_7,hardware=1.0,host=ubuntu,name=LightGW_4_7,output_id=26,output_type=outlet,software=5.04_7A18,source=apex,type=output amp=0,state="AON",watt=1 1545978278000000000
+neptune_apex,device_id=4_8,hardware=1.0,host=ubuntu,name=GBSWITCH_4_8,output_id=27,output_type=outlet,software=5.04_7A18,source=apex,type=output amp=0,state="AON",watt=0 1545978278000000000
+neptune_apex,device_id=4_9,hardware=1.0,host=ubuntu,name=LinkA_4_9,output_id=28,output_type=unknown,software=5.04_7A18,source=apex,type=output state="AOF" 1545978278000000000
+neptune_apex,device_id=4_10,hardware=1.0,host=ubuntu,name=LinkB_4_10,output_id=29,output_type=unknown,software=5.04_7A18,source=apex,type=output state="AOF" 1545978278000000000
+neptune_apex,device_id=5_1,hardware=1.0,host=ubuntu,name=LinkA_5_1,output_id=30,output_type=unknown,software=5.04_7A18,source=apex,type=output state="AOF" 1545978278000000000
+neptune_apex,device_id=Cntl_A1,hardware=1.0,host=ubuntu,name=ATO_EMPTY,output_id=31,output_type=virtual,software=5.04_7A18,source=apex,type=output state="AOF" 1545978278000000000
+neptune_apex,device_id=Cntl_A2,hardware=1.0,host=ubuntu,name=LEAK,output_id=32,output_type=virtual,software=5.04_7A18,source=apex,type=output state="AOF" 1545978278000000000
+neptune_apex,device_id=Cntl_A3,hardware=1.0,host=ubuntu,name=SKMR_NOPWR,output_id=33,output_type=virtual,software=5.04_7A18,source=apex,type=output state="AOF" 1545978278000000000
+neptune_apex,hardware=1.0,host=ubuntu,name=Tmp,probe_type=Temp,software=5.04_7A18,source=apex,type=probe value=78.1 1545978278000000000
+neptune_apex,hardware=1.0,host=ubuntu,name=pH,probe_type=pH,software=5.04_7A18,source=apex,type=probe value=7.93 1545978278000000000
+neptune_apex,hardware=1.0,host=ubuntu,name=ORP,probe_type=ORP,software=5.04_7A18,source=apex,type=probe value=191 1545978278000000000
+neptune_apex,hardware=1.0,host=ubuntu,name=Salt,probe_type=Cond,software=5.04_7A18,source=apex,type=probe value=29.4 1545978278000000000
+neptune_apex,hardware=1.0,host=ubuntu,name=Volt_2,software=5.04_7A18,source=apex,type=probe value=117 1545978278000000000
+neptune_apex,hardware=1.0,host=ubuntu,name=Volt_4,software=5.04_7A18,source=apex,type=probe value=118 1545978278000000000
+```
+
+## Contributing
+
+This plugin is used for mission-critical aquatic life support. A bug could very
+well result in the death of animals. Neptune does not publish a schema file and
+as such, we have made this plugin very strict on input with no provisions for
+automatically adding fields. We are also careful to not add default values when
+none are presented to prevent automation errors.
+
+When writing unit tests, use actual Apex output to run tests. It's acceptable to
+abridge the number of repeated fields but never inner fields or parameters.
diff --git a/content/telegraf/v1/input-plugins/net/_index.md b/content/telegraf/v1/input-plugins/net/_index.md
new file mode 100644
index 000000000..646a162b1
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/net/_index.md
@@ -0,0 +1,111 @@
+---
+description: "Telegraf plugin for collecting metrics from Net"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Net
+    identifier: input-net
+tags: [Net, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Net Input Plugin
+
+This plugin gathers metrics about network interface usage and, on Linux only,
+protocol usage.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Gather metrics about network interfaces
+[[inputs.net]]
+  ## By default, telegraf gathers stats from any up interface (excluding loopback)
+  ## Setting interfaces will tell it to gather these explicit interfaces,
+  ## regardless of status. When specifying an interface, glob-style
+  ## patterns are also supported.
+  # interfaces = ["eth*", "enp0s[0-1]", "lo"]
+
+  ## On linux systems telegraf also collects protocol stats.
+  ## Setting ignore_protocol_stats to true will skip reporting of protocol metrics.
+  ##
+  ## DEPRECATION NOTICE: A value of 'false' is deprecated and discouraged!
+  ##                     Please set this to `true` and use the 'inputs.nstat'
+  ##                     plugin instead.
+  # ignore_protocol_stats = false
+```
+
+## Metrics
+
+The fields from this plugin are gathered in the _net_ measurement.
+
+Fields (all platforms):
+
+* bytes_sent - The total number of bytes sent by the interface
+* bytes_recv - The total number of bytes received by the interface
+* packets_sent - The total number of packets sent by the interface
+* packets_recv - The total number of packets received by the interface
+* err_in - The total number of receive errors detected by the interface
+* err_out - The total number of transmit errors detected by the interface
+* drop_in - The total number of received packets dropped by the interface
+* drop_out - The total number of transmitted packets dropped by the interface
+* speed - The interface's latest or current speed value, in Mbits/sec. May be -1 if unsupported by the interface
+
+Different platforms gather the data above with different mechanisms. Telegraf
+uses the [gopsutil](https://github.com/shirou/gopsutil) package, which under
+Linux reads the `/proc/net/dev` file. Under FreeBSD/OpenBSD and Darwin the
+plugin uses netstat.
+
+Additionally, for the time being _only under Linux_, the plugin gathers
+system-wide stats for different network protocols using `/proc/net/snmp` (tcp,
+udp, icmp, etc.). Explaining the different metrics exposed by snmp is out of
+the scope of this document. The best way to find information is to trace the
+constants in the [Linux kernel source](https://elixir.bootlin.com/linux/latest/source/net/ipv4/proc.c) and their usage. If
+`/proc/net/snmp` cannot be read for some reason, Telegraf silently ignores the
+error.
+
+## Tags
+
+* Net measurements have the following tags:
+  * interface (the interface from which metrics are gathered)
+
+Under Linux, the system-wide protocol metrics have the `interface=all` tag.
+
+## Sample Queries
+
+You can use the following query to get the upload/download traffic rate per
+second for all interfaces in the last hour. The query uses the
+[derivative function](https://docs.influxdata.com/influxdb/v1.2/query_language/functions#derivative),
+which calculates the rate of change between subsequent field values.
+
+```sql
+SELECT derivative(first(bytes_recv), 1s) as "download bytes/sec", derivative(first(bytes_sent), 1s) as "upload bytes/sec" FROM net WHERE time > now() - 1h AND interface != 'all' GROUP BY time(10s), interface fill(0);
+```
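+
+To illustrate what `derivative(first(bytes_recv), 1s)` computes, here is a
+small sketch (the sample counter values are made up) that derives per-second
+rates from consecutive readings:
+
+```python
+def per_second_rate(samples):
+    """Return per-second rates between consecutive (time_s, counter) samples,
+    analogous to InfluxQL's derivative(..., 1s)."""
+    return [(v1 - v0) / (t1 - t0)
+            for (t0, v0), (t1, v1) in zip(samples, samples[1:])]
+
+# Hypothetical bytes_recv counter sampled every 10 seconds
+samples = [(0, 1_000_000), (10, 1_250_000), (20, 1_750_000)]
+print(per_second_rate(samples))  # [25000.0, 50000.0]
+```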
+
+## Example Output
+
+### All platforms
+
+```text
+net,interface=eth0,host=HOST bytes_sent=451838509i,bytes_recv=3284081640i,packets_sent=2663590i,packets_recv=3585442i,err_in=0i,err_out=0i,drop_in=4i,drop_out=0i 1492834180000000000
+```
+
+### Linux
+
+```text
+net,interface=eth0,host=HOST bytes_sent=451838509i,bytes_recv=3284081640i,packets_sent=2663590i,packets_recv=3585442i,err_in=0i,err_out=0i,drop_in=4i,drop_out=0i 1492834180000000000
+net,interface=all,host=HOST ip_reasmfails=0i,icmp_insrcquenchs=0i,icmp_outtimestamps=0i,ip_inhdrerrors=0i,ip_inunknownprotos=0i,icmp_intimeexcds=10i,icmp_outaddrmasks=0i,icmp_indestunreachs=11005i,icmpmsg_outtype0=6i,tcp_retranssegs=14669i,udplite_outdatagrams=0i,ip_reasmtimeout=0i,ip_outnoroutes=2577i,ip_inaddrerrors=186i,icmp_outaddrmaskreps=0i,tcp_incsumerrors=0i,tcp_activeopens=55965i,ip_reasmoks=0i,icmp_inechos=6i,icmp_outdestunreachs=9417i,ip_reasmreqds=0i,icmp_outtimestampreps=0i,tcp_rtoalgorithm=1i,icmpmsg_intype3=11005i,icmpmsg_outtype69=129i,tcp_outsegs=2777459i,udplite_rcvbuferrors=0i,ip_fragoks=0i,icmp_inmsgs=13398i,icmp_outerrors=0i,tcp_outrsts=14951i,udplite_noports=0i,icmp_outmsgs=11517i,icmp_outechoreps=6i,icmpmsg_intype11=10i,icmp_inparmprobs=0i,ip_forwdatagrams=0i,icmp_inechoreps=1909i,icmp_outredirects=0i,icmp_intimestampreps=0i,icmpmsg_intype5=468i,tcp_rtomax=120000i,tcp_maxconn=-1i,ip_fragcreates=0i,ip_fragfails=0i,icmp_inredirects=468i,icmp_outtimeexcds=0i,icmp_outechos=1965i,icmp_inaddrmasks=0i,tcp_inerrs=389i,tcp_rtomin=200i,ip_defaultttl=64i,ip_outrequests=3366408i,ip_forwarding=2i,udp_incsumerrors=0i,udp_indatagrams=522136i,udplite_incsumerrors=0i,ip_outdiscards=871i,icmp_inerrors=958i,icmp_outsrcquenchs=0i,icmpmsg_intype0=1909i,tcp_insegs=3580226i,udp_outdatagrams=577265i,udp_rcvbuferrors=0i,udplite_sndbuferrors=0i,icmp_incsumerrors=0i,icmp_outparmprobs=0i,icmpmsg_outtype3=9417i,tcp_attemptfails=2652i,udplite_inerrors=0i,udplite_indatagrams=0i,ip_inreceives=4172969i,icmpmsg_outtype8=1965i,tcp_currestab=59i,udp_noports=5961i,ip_indelivers=4099279i,ip_indiscards=0i,tcp_estabresets=5818i,udp_sndbuferrors=3i,icmp_intimestamps=0i,icmpmsg_intype8=6i,udp_inerrors=0i,icmp_inaddrmaskreps=0i,tcp_passiveopens=452i 1492831540000000000
+```
diff --git a/content/telegraf/v1/input-plugins/net_response/_index.md b/content/telegraf/v1/input-plugins/net_response/_index.md
new file mode 100644
index 000000000..1986c4a82
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/net_response/_index.md
@@ -0,0 +1,77 @@
+---
+description: "Telegraf plugin for collecting metrics from Network Response"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Network Response
+    identifier: input-net_response
+tags: [Network Response, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Network Response Input Plugin
+
+This plugin tests UDP/TCP connection response time and can optionally verify
+text in the response.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Collect response time of a TCP or UDP connection
+[[inputs.net_response]]
+  ## Protocol, must be "tcp" or "udp"
+  ## NOTE: because the "udp" protocol does not respond to requests, it requires
+  ## a send/expect string pair (see below).
+  protocol = "tcp"
+  ## Server address (default localhost)
+  address = "localhost:80"
+
+  ## Set timeout
+  # timeout = "1s"
+
+  ## Set read timeout (only used if expecting a response)
+  # read_timeout = "1s"
+
+  ## The following options are required for UDP checks. For TCP, they are
+  ## optional. The plugin will send the given string to the server and then
+  ## expect to receive the given 'expect' string back.
+  ## string sent to the server
+  # send = "ssh"
+  ## expected string in answer
+  # expect = "ssh"
+
+  ## Uncomment to remove deprecated fields; recommended for new deploys
+  # fieldexclude = ["result_type", "string_found"]
+```
+
+## Metrics
+
+- net_response
+  - tags:
+    - server
+    - port
+    - protocol
+    - result
+  - fields:
+    - response_time (float, seconds)
+    - result_code (int, success = 0, timeout = 1, connection_failed = 2, read_failed = 3, string_mismatch = 4)
+    - result_type (string) **DEPRECATED in 1.7; use result tag**
+    - string_found (boolean) **DEPRECATED in 1.4; use result tag**
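+
+When post-processing these metrics outside Telegraf, the numeric `result_code`
+can be translated back to the `result` tag values; a minimal lookup sketch
+(the helper names are hypothetical):
+
+```python
+# net_response result_code values and the matching result tag strings
+RESULT_CODES = {
+    0: "success",
+    1: "timeout",
+    2: "connection_failed",
+    3: "read_failed",
+    4: "string_mismatch",
+}
+
+def describe(code: int) -> str:
+    """Return the result tag string for a result_code field value."""
+    return RESULT_CODES.get(code, "unknown")
+
+print(describe(2))  # connection_failed
+```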
+
+## Example Output
+
+```text
+net_response,port=8086,protocol=tcp,result=success,server=localhost response_time=0.000092948,result_code=0i,result_type="success" 1525820185000000000
+net_response,port=8080,protocol=tcp,result=connection_failed,server=localhost result_code=2i,result_type="connection_failed" 1525820088000000000
+net_response,port=8080,protocol=udp,result=read_failed,server=localhost result_code=3i,result_type="read_failed",string_found=false 1525820088000000000
+```
diff --git a/content/telegraf/v1/input-plugins/netflow/_index.md b/content/telegraf/v1/input-plugins/netflow/_index.md
new file mode 100644
index 000000000..9f1a9fe40
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/netflow/_index.md
@@ -0,0 +1,216 @@
+---
+description: "Telegraf plugin for collecting metrics from Netflow"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Netflow
+    identifier: input-netflow
+tags: [Netflow, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Netflow Input Plugin
+
+The `netflow` plugin acts as a collector for Netflow v5, Netflow v9 and IPFIX
+flow information. The Layer 4 protocol numbers are gathered from the
+[official IANA assignments](https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml).
+The internal field mappings for Netflow v5 fields are defined according to
+[Cisco's Netflow v5 documentation](https://www.cisco.com/c/en/us/td/docs/net_mgmt/netflow_collection_engine/3-6/user/guide/format.html#wp1006186), Netflow v9 fields are defined
+according to [Cisco's Netflow v9 documentation](https://www.cisco.com/en/US/technologies/tk648/tk362/technologies_white_paper09186a00800a3db9.html) and the
+[ASA extensions](https://www.cisco.com/c/en/us/td/docs/security/asa/special/netflow/asa_netflow.html).
+Definitions for IPFIX are according to [IANA assignment document](https://www.iana.org/assignments/ipfix/ipfix.xhtml#ipfix-nat-type).
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Netflow v5, Netflow v9 and IPFIX collector
+[[inputs.netflow]]
+  ## Address to listen for netflow, ipfix or sflow packets.
+  ##   example: service_address = "udp://:2055"
+  ##            service_address = "udp4://:2055"
+  ##            service_address = "udp6://:2055"
+  service_address = "udp://:2055"
+
+  ## Set the size of the operating system's receive buffer.
+  ##   example: read_buffer_size = "64KiB"
+  ## Uses the system's default if not set.
+  # read_buffer_size = ""
+
+  ## Protocol version to use for decoding.
+  ## Available options are
+  ##   "ipfix"      -- IPFIX / Netflow v10 protocol (also works for Netflow v9)
+  ##   "netflow v5" -- Netflow v5 protocol
+  ##   "netflow v9" -- Netflow v9 protocol (also works for IPFIX)
+  ##   "sflow v5"   -- sFlow v5 protocol
+  # protocol = "ipfix"
+
+  ## Private Enterprise Numbers (PEN) mappings for decoding
+  ## This option allows you to specify vendor-specific mapping files to use
+  ## during decoding.
+  # private_enterprise_number_files = []
+
+  ## Log incoming packets for tracing issues
+  # log_level = "trace"
+```
+
+## Private Enterprise Number mapping
+
+Using the `private_enterprise_number_files` option you can specify mappings for
+vendor-specific element-IDs with a PEN specification. The mapping has to be a
+comma-separated values (CSV) file containing the element's `ID`, its `name` and
+the `data-type`. A comma (`,`) is used as the separator and comments are allowed
+using the hash (`#`) prefix.
+The element `ID` has the form `<pen-number>.<element-id>`, the `name` has to be
+a valid field-name and the `data-type` denotes the mapping of the raw-byte value
+to the field's type. For example
+
+```csv
+# PEN.ID, name, data type
+35632.349,in_src_osi_sap,hex
+35632.471,nprobe_ipv4_address,ip
+35632.1028,protocol_ntop,string
+35632.1036,l4_srv_port,uint
+```
+
+specifies four elements (`349`, `471`, `1028` and `1036`) for PEN `35632` (ntop)
+with the corresponding name and data-type.
+
+Currently the following `data-type`s are supported:
+
+- `uint`   unsigned integer with 8, 16, 32 or 64 bits
+- `hex`    hex-encoding of the raw byte sequence with `0x` prefix
+- `string` string interpretation of the raw byte sequence
+- `ip`     IPv4 or IPv6 address
+- `proto`  mapping of layer-4 protocol numbers to names
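+
+Telegraf parses these mapping files itself; purely to illustrate the format, a
+minimal sketch (the helper name is hypothetical) that reads such a file into a
+lookup table:
+
+```python
+import csv
+import io
+
+def parse_pen_mappings(text):
+    """Parse PEN mapping CSV lines of the form
+    '<pen>.<element-id>,<name>,<data-type>' into
+    {(pen, element_id): (name, data_type)}; '#' starts a comment line."""
+    mappings = {}
+    for row in csv.reader(io.StringIO(text)):
+        if not row or row[0].lstrip().startswith("#"):
+            continue
+        element, name, data_type = (col.strip() for col in row)
+        pen, element_id = element.split(".")
+        mappings[(int(pen), int(element_id))] = (name, data_type)
+    return mappings
+
+sample = "# PEN.ID, name, data type\n35632.349,in_src_osi_sap,hex\n35632.1036,l4_srv_port,uint\n"
+print(parse_pen_mappings(sample)[(35632, 349)])  # ('in_src_osi_sap', 'hex')
+```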
+
+## Troubleshooting
+
+### `Error template not found` warnings
+
+Those warnings usually occur in cases where Telegraf is restarted or reloaded
+while the flow-device is already streaming data.
+As background, the Netflow and IPFIX protocols rely on templates sent by the
+flow-device to decode fields. Without those templates, it is not clear what the
+data-type and size of the payload is and this makes it impossible to correctly
+interpret the data. However, templates are sent by the flow-device, usually at
+the start of streaming and in regular intervals (configurable in the device) and
+Telegraf has no means to trigger sending of the templates. Therefore, Telegraf
+must skip packets until the device resends the templates.
+
+### Metrics are missing at the output
+
+The metrics produced by this plugin are not tagged in a connection-specific
+manner, therefore outputs relying on a unique series key (e.g. InfluxDB)
+require the metrics to contain tags for the protocol, the connection source and
+the connection destination. Otherwise, metrics might be overwritten and are
+thus missing.
+
+The required tagging can be achieved using the `converter` processor:
+
+```toml
+[[processors.converter]]
+  [processors.converter.fields]
+    tag = ["protocol", "src", "src_port", "dst", "dst_port"]
+```
+
+__Please be careful as this will produce metrics with high cardinality!__
+
+## Metrics
+
+Metrics depend on the format used as well as on the information provided
+by the exporter. Furthermore, proprietary information might be sent requiring
+further decoding information. Most exporters should provide at least the
+following information:
+
+- netflow
+  - tags:
+    - source (IP of the exporter sending the data)
+    - version (flow protocol version)
+  - fields:
+    - src (IP address, address of the source of the packets)
+    - src_mask (uint64, mask for the IP address in bits)
+    - dst (IP address, address of the destination of the packets)
+    - dst_mask (uint64, mask for the IP address in bits)
+    - src_port (uint64, source port)
+    - dst_port (uint64, destination port)
+    - protocol (string, Layer 4 protocol name)
+    - in_bytes (uint64, number of incoming bytes)
+    - in_packets (uint64, number of incoming packets)
+    - tcp_flags (string, TCP flags for the flow)
+
+## Example Output
+
+The specific fields vary for the different protocol versions; here are some
+examples.
+
+### IPFIX
+
+```text
+netflow,source=127.0.0.1,version=IPFIX protocol="tcp",vlan_src=0u,src_tos="0x00",flow_end_ms=1666345513807u,src="192.168.119.100",dst="44.233.90.52",src_port=51008u,total_bytes_exported=0u,flow_end_reason="end of flow",flow_start_ms=1666345513807u,in_total_bytes=52u,in_total_packets=1u,dst_port=443u
+netflow,source=127.0.0.1,version=IPFIX src_tos="0x00",src_port=54330u,rev_total_bytes_exported=0u,last_switched=9u,vlan_src=0u,flow_start_ms=1666345513807u,in_total_packets=1u,flow_end_reason="end of flow",flow_end_ms=1666345513816u,in_total_bytes=40u,dst_port=443u,src="192.168.119.100",dst="104.17.240.92",total_bytes_exported=0u,protocol="tcp"
+netflow,source=127.0.0.1,version=IPFIX flow_start_ms=1666345513807u,flow_end_ms=1666345513977u,src="192.168.119.100",dst_port=443u,total_bytes_exported=0u,last_switched=170u,src_tos="0x00",in_total_bytes=40u,dst="44.233.90.52",src_port=51024u,protocol="tcp",flow_end_reason="end of flow",in_total_packets=1u,rev_total_bytes_exported=0u,vlan_src=0u
+netflow,source=127.0.0.1,version=IPFIX src_port=58246u,total_bytes_exported=1u,flow_start_ms=1666345513806u,flow_end_ms=1666345513806u,in_total_bytes=156u,src="192.168.119.100",rev_total_bytes_exported=0u,last_switched=0u,flow_end_reason="forced end",dst="192.168.119.17",dst_port=53u,protocol="udp",in_total_packets=2u,vlan_src=0u,src_tos="0x00"
+netflow,source=127.0.0.1,version=IPFIX protocol="udp",vlan_src=0u,src_port=58879u,dst_port=53u,flow_end_ms=1666345513832u,src_tos="0x00",src="192.168.119.100",total_bytes_exported=1u,rev_total_bytes_exported=0u,flow_end_reason="forced end",last_switched=33u,in_total_bytes=221u,in_total_packets=2u,flow_start_ms=1666345513799u,dst="192.168.119.17"
+```
+
+### Netflow v5
+
+```text
+netflow,source=127.0.0.1,version=NetFlowV5 protocol="tcp",src="140.82.121.3",src_port=443u,dst="192.168.119.100",dst_port=55516u,flows=8u,in_bytes=87477u,in_packets=78u,first_switched=86400660u,last_switched=86403316u,tcp_flags="...PA...",engine_type="19",engine_id="0x56",sys_uptime=90003000u,src_tos="0x00",bgp_src_as=0u,bgp_dst_as=0u,src_mask=0u,dst_mask=0u,in_snmp=0u,out_snmp=0u,next_hop="0.0.0.0",seq_number=0u,sampling_interval=0u
+netflow,source=127.0.0.1,version=NetFlowV5 protocol="tcp",src="140.82.121.6",src_port=443u,dst="192.168.119.100",dst_port=36408u,flows=8u,in_bytes=5009u,in_packets=21u,first_switched=86400447u,last_switched=86403267u,tcp_flags="...PA...",engine_type="19",engine_id="0x56",sys_uptime=90003000u,src_tos="0x00",bgp_src_as=0u,bgp_dst_as=0u,src_mask=0u,dst_mask=0u,in_snmp=0u,out_snmp=0u,next_hop="0.0.0.0",seq_number=0u,sampling_interval=0u
+netflow,source=127.0.0.1,version=NetFlowV5 protocol="tcp",src="140.82.112.22",src_port=443u,dst="192.168.119.100",dst_port=39638u,flows=8u,in_bytes=925u,in_packets=6u,first_switched=86400324u,last_switched=86403214u,tcp_flags="...PA...",engine_type="19",engine_id="0x56",sys_uptime=90003000u,src_tos="0x00",bgp_src_as=0u,bgp_dst_as=0u,src_mask=0u,dst_mask=0u,in_snmp=0u,out_snmp=0u,next_hop="0.0.0.0",seq_number=0u,sampling_interval=0u
+netflow,source=127.0.0.1,version=NetFlowV5 protocol="tcp",src="140.82.114.26",src_port=443u,dst="192.168.119.100",dst_port=49398u,flows=8u,in_bytes=250u,in_packets=2u,first_switched=86403131u,last_switched=86403362u,tcp_flags="...PA...",engine_type="19",engine_id="0x56",sys_uptime=90003000u,src_tos="0x00",bgp_src_as=0u,bgp_dst_as=0u,src_mask=0u,dst_mask=0u,in_snmp=0u,out_snmp=0u,next_hop="0.0.0.0",seq_number=0u,sampling_interval=0u
+netflow,source=127.0.0.1,version=NetFlowV5 protocol="tcp",src="192.168.119.100",src_port=55516u,dst="140.82.121.3",dst_port=443u,flows=8u,in_bytes=4969u,in_packets=37u,first_switched=86400652u,last_switched=86403269u,tcp_flags="...PA...",engine_type="19",engine_id="0x56",sys_uptime=90003000u,src_tos="0x00",bgp_src_as=0u,bgp_dst_as=0u,src_mask=0u,dst_mask=0u,in_snmp=0u,out_snmp=0u,next_hop="0.0.0.0",seq_number=0u,sampling_interval=0u
+netflow,source=127.0.0.1,version=NetFlowV5 protocol="tcp",src="192.168.119.100",src_port=36408u,dst="140.82.121.6",dst_port=443u,flows=8u,in_bytes=2736u,in_packets=21u,first_switched=86400438u,last_switched=86403258u,tcp_flags="...PA...",engine_type="19",engine_id="0x56",sys_uptime=90003000u,src_tos="0x00",bgp_src_as=0u,bgp_dst_as=0u,src_mask=0u,dst_mask=0u,in_snmp=0u,out_snmp=0u,next_hop="0.0.0.0",seq_number=0u,sampling_interval=0u
+netflow,source=127.0.0.1,version=NetFlowV5 protocol="tcp",src="192.168.119.100",src_port=39638u,dst="140.82.112.22",dst_port=443u,flows=8u,in_bytes=1560u,in_packets=6u,first_switched=86400225u,last_switched=86403255u,tcp_flags="...PA...",engine_type="19",engine_id="0x56",sys_uptime=90003000u,src_tos="0x00",bgp_src_as=0u,bgp_dst_as=0u,src_mask=0u,dst_mask=0u,in_snmp=0u,out_snmp=0u,next_hop="0.0.0.0",seq_number=0u,sampling_interval=0u
+netflow,source=127.0.0.1,version=NetFlowV5 protocol="tcp",src="192.168.119.100",src_port=49398u,dst="140.82.114.26",dst_port=443u,flows=8u,in_bytes=697u,in_packets=4u,first_switched=86403030u,last_switched=86403362u,tcp_flags="...PA...",engine_type="19",engine_id="0x56",sys_uptime=90003000u,src_tos="0x00",bgp_src_as=0u,bgp_dst_as=0u,src_mask=0u,dst_mask=0u,in_snmp=0u,out_snmp=0u,next_hop="0.0.0.0",seq_number=0u,sampling_interval=0u
+```
+
+### Netflow v9
+
+```text
+netflow,source=127.0.0.1,version=NetFlowV9 protocol="tcp",src="140.82.121.3",src_port=443u,dst="192.168.119.100",dst_port=55516u,in_bytes=87477u,in_packets=78u,flow_start_ms=1666350478660u,flow_end_ms=1666350481316u,tcp_flags="...PA...",engine_type="17",engine_id="0x01",icmp_type=0u,icmp_code=0u,fwd_status="unknown",fwd_reason="unknown",src_tos="0x00"
+netflow,source=127.0.0.1,version=NetFlowV9 protocol="tcp",src="140.82.121.6",src_port=443u,dst="192.168.119.100",dst_port=36408u,in_bytes=5009u,in_packets=21u,flow_start_ms=1666350478447u,flow_end_ms=1666350481267u,tcp_flags="...PA...",engine_type="17",engine_id="0x01",icmp_type=0u,icmp_code=0u,fwd_status="unknown",fwd_reason="unknown",src_tos="0x00"
+netflow,source=127.0.0.1,version=NetFlowV9 protocol="tcp",src="140.82.112.22",src_port=443u,dst="192.168.119.100",dst_port=39638u,in_bytes=925u,in_packets=6u,flow_start_ms=1666350478324u,flow_end_ms=1666350481214u,tcp_flags="...PA...",engine_type="17",engine_id="0x01",icmp_type=0u,icmp_code=0u,fwd_status="unknown",fwd_reason="unknown",src_tos="0x00"
+netflow,source=127.0.0.1,version=NetFlowV9 protocol="tcp",src="140.82.114.26",src_port=443u,dst="192.168.119.100",dst_port=49398u,in_bytes=250u,in_packets=2u,flow_start_ms=1666350481131u,flow_end_ms=1666350481362u,tcp_flags="...PA...",engine_type="17",engine_id="0x01",icmp_type=0u,icmp_code=0u,fwd_status="unknown",fwd_reason="unknown",src_tos="0x00"
+netflow,source=127.0.0.1,version=NetFlowV9 protocol="tcp",src="192.168.119.100",src_port=55516u,dst="140.82.121.3",dst_port=443u,in_bytes=4969u,in_packets=37u,flow_start_ms=1666350478652u,flow_end_ms=1666350481269u,tcp_flags="...PA...",engine_type="17",engine_id="0x01",icmp_type=0u,icmp_code=0u,fwd_status="unknown",fwd_reason="unknown",src_tos="0x00"
+netflow,source=127.0.0.1,version=NetFlowV9 protocol="tcp",src="192.168.119.100",src_port=36408u,dst="140.82.121.6",dst_port=443u,in_bytes=2736u,in_packets=21u,flow_start_ms=1666350478438u,flow_end_ms=1666350481258u,tcp_flags="...PA...",engine_type="17",engine_id="0x01",icmp_type=0u,icmp_code=0u,fwd_status="unknown",fwd_reason="unknown",src_tos="0x00"
+netflow,source=127.0.0.1,version=NetFlowV9 protocol="tcp",src="192.168.119.100",src_port=39638u,dst="140.82.112.22",dst_port=443u,in_bytes=1560u,in_packets=6u,flow_start_ms=1666350478225u,flow_end_ms=1666350481255u,tcp_flags="...PA...",engine_type="17",engine_id="0x01",icmp_type=0u,icmp_code=0u,fwd_status="unknown",fwd_reason="unknown",src_tos="0x00"
+netflow,source=127.0.0.1,version=NetFlowV9 protocol="tcp",src="192.168.119.100",src_port=49398u,dst="140.82.114.26",dst_port=443u,in_bytes=697u,in_packets=4u,flow_start_ms=1666350481030u,flow_end_ms=1666350481362u,tcp_flags="...PA...",engine_type="17",engine_id="0x01",icmp_type=0u,icmp_code=0u,fwd_status="unknown",fwd_reason="unknown",src_tos="0x00"
+```
+
+### sFlow v5
+
+```text
+netflow,source=127.0.0.1,version=sFlowV5 out_errors=0i,out_bytes=3946i,status="up",in_unknown_protocol=4294967295i,out_unicast_packets_total=29i,agent_subid=100000i,interface_type=6i,in_unicast_packets_total=28i,out_dropped_packets=0i,in_bytes=3910i,in_broadcast_packets_total=4294967295i,ip_version="IPv4",agent_ip="192.168.119.184",in_snmp=3i,in_errors=0i,promiscuous=0i,interface=3i,in_mcast_packets_total=4294967295i,in_dropped_packets=0i,sys_uptime=12414i,seq_number=2i,speed=1000000000i,out_mcast_packets_total=4294967295i,out_broadcast_packets_total=4294967295i 12414000000
+netflow,source=127.0.0.1,version=sFlowV5 sys_uptime=17214i,agent_ip="192.168.119.184",agent_subid=100000i,seq_number=2i,in_phy_interface=1i,ip_version="IPv4" 17214000000
+netflow,source=127.0.0.1,version=sFlowV5 in_errors=0i,out_unicast_packets_total=36i,interface=3i,in_broadcast_packets_total=4294967295i,ip_version="IPv4",speed=1000000000i,out_bytes=4408i,out_mcast_packets_total=4294967295i,status="up",in_snmp=3i,in_mcast_packets_total=4294967295i,out_broadcast_packets_total=4294967295i,promiscuous=0i,in_bytes=5568i,out_dropped_packets=0i,sys_uptime=22014i,agent_subid=100000i,in_unknown_protocol=4294967295i,interface_type=6i,in_dropped_packets=0i,in_unicast_packets_total=37i,out_errors=0i,agent_ip="192.168.119.184",seq_number=3i 22014000000
+```
diff --git a/content/telegraf/v1/input-plugins/netstat/_index.md b/content/telegraf/v1/input-plugins/netstat/_index.md
new file mode 100644
index 000000000..2bb535fa3
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/netstat/_index.md
@@ -0,0 +1,89 @@
+---
+description: "Telegraf plugin for collecting metrics from Netstat"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Netstat
+    identifier: input-netstat
+tags: [Netstat, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Netstat Input Plugin
+
+This plugin collects TCP connection state and UDP socket counts using
+`lsof`.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering. See
+[CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read TCP metrics such as established, time wait and sockets counts.
+[[inputs.netstat]]
+  # no configuration
+```
+
+## Metrics
+
+The supported TCP connection states are as follows.
+
+- established
+- syn_sent
+- syn_recv
+- fin_wait1
+- fin_wait2
+- time_wait
+- close
+- close_wait
+- last_ack
+- listen
+- closing
+- none
+
+## TCP Connection State measurements
+
+Meta:
+
+- units: counts
+
+Measurement names:
+
+- tcp_established
+- tcp_syn_sent
+- tcp_syn_recv
+- tcp_fin_wait1
+- tcp_fin_wait2
+- tcp_time_wait
+- tcp_close
+- tcp_close_wait
+- tcp_last_ack
+- tcp_listen
+- tcp_closing
+- tcp_none
+
+If there are no connections in a given state, the corresponding metric is not
+reported.
+
+## UDP socket count measurements
+
+Meta:
+
+- units: counts
+
+Measurement names:
+
+- udp_socket
+
+## Example Output
+
+```text
+netstat tcp_close=0i,tcp_close_wait=0i,tcp_closing=0i,tcp_established=14i,tcp_fin_wait1=0i,tcp_fin_wait2=0i,tcp_last_ack=0i,tcp_listen=1i,tcp_none=46i,tcp_syn_recv=0i,tcp_syn_sent=0i,tcp_time_wait=0i,udp_socket=10i 1668520568000000000
+```
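+
+The line above is InfluxDB line protocol. As a minimal illustration of how the
+fields are encoded, here is a sketch of extracting them (it handles only this
+simple tag-less measurement, not general line protocol):
+
+```python
+# Parse the integer fields out of a single netstat point.
+line = ("netstat tcp_close=0i,tcp_close_wait=0i,tcp_established=14i,"
+        "tcp_listen=1i,udp_socket=10i 1668520568000000000")
+
+measurement, field_str, timestamp = line.split(" ")
+fields = {}
+for pair in field_str.split(","):
+    key, value = pair.split("=")
+    fields[key] = int(value.rstrip("i"))  # the trailing 'i' marks integer fields
+
+print(measurement, fields["tcp_established"], fields["udp_socket"])
+```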
diff --git a/content/telegraf/v1/input-plugins/nfsclient/_index.md b/content/telegraf/v1/input-plugins/nfsclient/_index.md
new file mode 100644
index 000000000..8ce5610de
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/nfsclient/_index.md
@@ -0,0 +1,227 @@
+---
+description: "Telegraf plugin for collecting metrics from NFS Client"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: NFS Client
+    identifier: input-nfsclient
+tags: [NFS Client, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# NFS Client Input Plugin
+
+The NFS Client input plugin collects data from `/proc/self/mountstats`. By
+default, only a limited number of general system-level metrics are collected,
+including basic read/write counts.  If `fullstat` is set, many additional
+metrics are collected, as detailed below.
+
+__NOTE__ Many of the metrics, even if tagged with a mount point, are really
+_per-server_.  Thus, if you mount these two shares: `nfs01:/vol/foo/bar` and
+`nfs01:/vol/foo/baz`, there will be two nearly identical entries in
+`/proc/self/mountstats`.  This is a limitation of the metrics exposed by the
+kernel, not of the Telegraf plugin.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering. See
+[CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read per-mount NFS client metrics from /proc/self/mountstats
+[[inputs.nfsclient]]
+  ## Read more low-level metrics (optional, defaults to false)
+  # fullstat = false
+
+  ## List of mounts to explicitly include or exclude (optional)
+  ## The pattern (Go regexp) is matched against the mount point (not the
+  ## device being mounted).  If include_mounts is set, all mounts are ignored
+  ## unless present in the list. If a mount is listed in both include_mounts
+  ## and exclude_mounts, it is excluded.  Go regexp patterns can be used.
+  # include_mounts = []
+  # exclude_mounts = []
+
+  ## List of operations to include or exclude from collecting.  This applies
+  ## only when fullstat=true.  Semantics are similar to {include,exclude}_mounts:
+  ## the default is to collect everything; when include_operations is set, only
+  ## those OPs are collected; when exclude_operations is set, all are collected
+  ## except those listed.  If include and exclude are set, the OP is excluded.
+  ## See /proc/self/mountstats for a list of valid operations; note that
+  ## NFSv3 and NFSv4 have different lists.  While it is not possible to
+  ## have different include/exclude lists for NFSv3/4, unused elements
+  ## in the list should be okay.  It is possible to have different lists
+  ## for different mountpoints:  use multiple [[input.nfsclient]] stanzas,
+  ## with their own lists.  See "include_mounts" above, and be careful of
+  ## duplicate metrics.
+  # include_operations = []
+  # exclude_operations = []
+```
+
+### Configuration Options
+
+- __fullstat__ bool: Collect per-operation type metrics.  Defaults to false.
+- __include_mounts__ list(string): gather metrics for only these mounts.  Default is to watch all mounts.
+- __exclude_mounts__ list(string): gather metrics for all mounts, except those listed in this option. Excludes take precedence over includes.
+- __include_operations__ list(string): List of specific NFS operations to track.  See /proc/self/mountstats (the "per-op statistics" section) for complete lists of valid options for NFSv3 and NFSv4.  The default is to gather all metrics, but this is almost certainly _not_ what you want (there are 22 operations for NFSv3, and well over 50 for NFSv4).  A suggested 'minimal' list of operations to collect for basic usage:  `['READ','WRITE','ACCESS','GETATTR','READDIR','LOOKUP']`
+- __exclude_operations__ list(string): Gather all metrics, except those listed.  Excludes take precedence over includes.
+
+_N.B._ the `include_mounts` and `exclude_mounts` arguments are both applied to
+the local mount location (e.g. /mnt/NFS), not the server export
+(e.g. nfsserver:/vol/NFS).  Go regexp patterns can be used in either.
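+
+For example, a sketch limiting collection to mounts under `/mnt` while
+skipping a scratch mount (paths and patterns here are illustrative):
+
+```toml
+[[inputs.nfsclient]]
+  ## Only mounts whose mount point matches one of these Go regexps are collected.
+  include_mounts = ["^/mnt/"]
+  ## Excludes take precedence over includes.
+  exclude_mounts = ["^/mnt/scratch"]
+```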
+
+## Location of mountstats
+
+If you have mounted the /proc file system in a container, to tell this plugin
+where to find the new location, set the `MOUNT_PROC` environment variable. For
+example, in a Docker compose file, if /proc is mounted to /host/proc, then use:
+
+```yaml
+MOUNT_PROC: /host/proc/self/mountstats
+```
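+
+In context, a minimal Docker Compose sketch (the service name and image tag
+are illustrative):
+
+```yaml
+services:
+  telegraf:
+    image: telegraf
+    volumes:
+      - /proc:/host/proc:ro
+    environment:
+      MOUNT_PROC: /host/proc/self/mountstats
+```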
+
+### References
+
+1. [nfsiostat](http://git.linux-nfs.org/?p=steved/nfs-utils.git;a=summary)
+2. [net/sunrpc/stats.c - Linux source code](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/sunrpc/stats.c)
+3. [What is in /proc/self/mountstats for NFS mounts: an introduction](https://utcc.utoronto.ca/~cks/space/blog/linux/NFSMountstatsIndex)
+4. [The xprt: data for NFS mounts in /proc/self/mountstats](https://utcc.utoronto.ca/~cks/space/blog/linux/NFSMountstatsXprt)
+
+## Metrics
+
+### Fields
+
+- nfsstat
+  - bytes (integer, bytes) - The total number of bytes exchanged doing this operation. This is bytes sent _and_ received, including overhead _and_ payload.  (bytes = OP_bytes_sent + OP_bytes_recv.  See nfs_ops below)
+  - ops (integer, count) - The number of operations of this type executed.
+  - retrans (integer, count) - The number of times an operation had to be retried (retrans = OP_trans - OP_ops.  See nfs_ops below)
+  - exe (integer, milliseconds) - The number of milliseconds it took to process the operations.
+  - rtt (integer, milliseconds) - The total round-trip time for all operations.
+  - rtt_per_op (float, milliseconds) - The average round-trip time per operation.
+
+In addition, enabling `fullstat` will make many more metrics available.
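+
+The derivations noted above (`bytes`, `retrans`, `rtt_per_op`) can be sketched
+from the per-op counters; the counter values here are illustrative:
+
+```python
+# Illustrative per-op counters from the "per-op statistics" section of
+# /proc/self/mountstats (values are made up for this sketch).
+op_ops = 600        # operations issued
+op_trans = 601      # transmissions, including retransmissions
+op_bytes_sent = 603
+op_bytes_recv = 604
+op_rtt = 606        # cumulative round-trip time, in milliseconds
+
+bytes_total = op_bytes_sent + op_bytes_recv  # the 'bytes' field
+retrans = op_trans - op_ops                  # the 'retrans' field
+rtt_per_op = op_rtt / op_ops                 # the 'rtt_per_op' field
+
+print(bytes_total, retrans, rtt_per_op)
+```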
+
+### Tags
+
+- All measurements have the following tags:
+  - mountpoint - The local mountpoint, for instance: "/var/www"
+  - serverexport - The full server export, for instance: "nfsserver.example.org:/export"
+
+- Measurements nfsstat and nfs_ops will also include:
+  - operation - the NFS operation in question.  `READ` or `WRITE` for nfsstat, but potentially one of ~20 or ~50, depending on NFS version.  A complete list of operations supported is visible in `/proc/self/mountstats`.
+
+## Additional metrics
+
+When `fullstat` is true, additional measurements are collected.  Tags are the
+same as above.
+
+### NFS Operations
+
+Most descriptions come from [Reference](https://utcc.utoronto.ca/~cks/space/blog/linux/NFSMountstatsIndex) and `nfs_iostat.h`.  Field order
+and names are the same as in `/proc/self/mountstats` and the Kernel source.
+
+Please refer to `/proc/self/mountstats` for a list of supported NFS operations,
+as it changes occasionally.
+
+- nfs_bytes
+  - fields:
+    - normalreadbytes (int, bytes): Bytes read from the server via `read()`
+    - normalwritebytes (int, bytes): Bytes written to the server via `write()`
+    - directreadbytes (int, bytes): Bytes read with O_DIRECT set
+    - directwritebytes (int, bytes): Bytes written with O_DIRECT set
+    - serverreadbytes (int, bytes): Bytes read via NFS READ (via `mmap()`)
+    - serverwritebytes (int, bytes): Bytes written via NFS WRITE (via `mmap()`)
+    - readpages (int, count): Number of pages read
+    - writepages (int, count): Number of pages written
+
+- nfs_events (Per-event metrics)
+  - fields:
+    - inoderevalidates (int, count): How many times cached inode attributes have to be re-validated from the server.
+    - dentryrevalidates (int, count): How many times cached dentry nodes have to be re-validated.
+    - datainvalidates (int, count): How many times an inode had its cached data thrown out.
+    - attrinvalidates (int, count): How many times an inode has had cached inode attributes invalidated.
+    - vfsopen (int, count): How many times files or directories have been `open()`'d.
+    - vfslookup (int, count): How many name lookups in directories there have been.
+    - vfsaccess (int, count): Number of calls to `access()`. (formerly called "vfspermission")
+    - vfsupdatepage (int, count): Count of updates (and potential writes) to pages.
+    - vfsreadpage (int, count): Number of pages read.
+    - vfsreadpages (int, count): Count of how many times a _group_ of pages was read (possibly via `mmap()`?).
+    - vfswritepage (int, count): Number of pages written.
+    - vfswritepages (int, count): Count of how many times a _group_ of pages was written (possibly via `mmap()`?)
+    - vfsgetdents (int, count): Count of directory entry reads with getdents(). These reads can be served from cache and don't necessarily imply actual NFS requests. (formerly called "vfsreaddir")
+    - vfssetattr (int, count): How many times we've set attributes on inodes.
+    - vfsflush (int, count): Count of times pending writes have been forcibly flushed to the server.
+    - vfsfsync (int, count): Count of calls to `fsync()` on directories and files.
+    - vfslock (int, count): Number of times a lock was attempted on a file (regardless of success or not).
+    - vfsrelease (int, count): Number of calls to `close()`.
+    - congestionwait (int, count): Believed to be unused by the Linux kernel, but it is part of the NFS spec.
+    - setattrtrunc (int, count): How many times files have had their size truncated.
+    - extendwrite (int, count): How many times a file has been grown because you're writing beyond the existing end of the file.
+    - sillyrenames (int, count): Number of times an in-use file was removed (thus creating a temporary ".nfsXXXXXX" file)
+    - shortreads (int, count): Number of times the NFS server returned less data than requested.
+    - shortwrites (int, count): Number of times NFS server reports it wrote less data than requested.
+    - delay (int, count): Occurrences of EJUKEBOX ("Jukebox Delay", probably unused)
+    - pnfsreads (int, count): Count of NFS v4.1+ pNFS reads.
+    - pnfswrites (int, count): Count of NFS v4.1+ pNFS writes.
+
+- nfs_xprt_tcp
+  - fields:
+    - bind_count (int, count): Number of _completely new_ mounts to this server (sometimes 0?)
+    - connect_count (int, count): How many times the client has connected to the server in question
+    - connect_time (int, jiffies): How long the NFS client has spent waiting for its connection(s) to the server to be established.
+    - idle_time (int, seconds): How long (in seconds) since the NFS mount saw any RPC traffic.
+    - rpcsends (int, count): How many RPC requests this mount has sent to the server.
+    - rpcreceives (int, count): How many RPC replies this mount has received from the server.
+    - badxids (int, count): Count of XIDs sent by the server that the client doesn't know about.
+    - inflightsends (int, count): Number of outstanding requests; always >1. (See reference #4 for comment on this field)
+    - backlogutil (int, count): Cumulative backlog count
+
+- nfs_xprt_udp
+  - fields:
+    - [same as nfs_xprt_tcp, except for connect_count, connect_time, and idle_time]
+
+- nfs_ops
+  - fields (In all cases, the `operations` tag is set to the uppercase name of the NFS operation, _e.g._ "READ", "FSINFO", _etc_.  See /proc/self/mountstats for a full list):
+    - ops (int, count): Total operations of this type.
+    - trans (int, count): Total transmissions of this type, including retransmissions: `OP_trans - OP_ops = total_retransmissions` (lower is better).
+    - timeouts (int, count): Number of major timeouts.
+    - bytes_sent (int, count): Bytes sent, including headers (should also be close to on-wire size).
+    - bytes_recv (int, count): Bytes received, including headers (should be close to on-wire size).
+    - queue_time (int, milliseconds): Cumulative time a request waited in the queue before sending this OP type.
+    - response_time (int, milliseconds): Cumulative time waiting for a response for this OP type.
+    - total_time (int, milliseconds): Cumulative total time for requests of this OP type, from enqueue through completion.
+    - errors (int, count): Total number of operations that completed with tk_status < 0 (usually errors).  This is a new field, present in kernel >=5.3, mountstats version 1.1.
+
+## Example Output
+
+Basic metrics showing per-server read and write data:
+
+```text
+nfsstat,mountpoint=/NFS,operation=READ,serverexport=1.2.3.4:/storage/NFS ops=600i,retrans=1i,bytes=1207i,rtt=606i,exe=607i 1612651512000000000
+nfsstat,mountpoint=/NFS,operation=WRITE,serverexport=1.2.3.4:/storage/NFS bytes=1407i,rtt=706i,exe=707i,ops=700i,retrans=1i 1612651512000000000
+```
+
+With `fullstat=true`, additional measurements are collected for `nfs_bytes`,
+`nfs_events`, and `nfs_xprt_tcp` (and `nfs_xprt_udp` if present).
+Additionally, per-OP metrics are collected, with examples for NULL, READ, and
+WRITE shown.  Please refer to `/proc/self/mountstats` for a list of supported
+NFS operations, as it changes periodically.
+
+```text
+nfs_bytes,mountpoint=/home,serverexport=nfs01:/vol/home directreadbytes=0i,directwritebytes=0i,normalreadbytes=42648757667i,normalwritebytes=0i,readpages=10404603i,serverreadbytes=42617098139i,serverwritebytes=0i,writepages=0i 1608787697000000000
+nfs_events,mountpoint=/home,serverexport=nfs01:/vol/home attrinvalidates=116i,congestionwait=0i,datainvalidates=65i,delay=0i,dentryrevalidates=5911243i,extendwrite=0i,inoderevalidates=200378i,pnfsreads=0i,pnfswrites=0i,setattrtrunc=0i,shortreads=0i,shortwrites=0i,sillyrenames=0i,vfsaccess=7203852i,vfsflush=117405i,vfsfsync=0i,vfsgetdents=3368i,vfslock=0i,vfslookup=740i,vfsopen=157281i,vfsreadpage=16i,vfsreadpages=86874i,vfsrelease=155526i,vfssetattr=0i,vfsupdatepage=0i,vfswritepage=0i,vfswritepages=215514i 1608787697000000000
+nfs_xprt_tcp,mountpoint=/home,serverexport=nfs01:/vol/home backlogutil=0i,badxids=0i,bind_count=1i,connect_count=1i,connect_time=0i,idle_time=0i,inflightsends=15659826i,rpcreceives=2173896i,rpcsends=2173896i 1608787697000000000
+
+nfs_ops,mountpoint=/NFS,operation=NULL,serverexport=1.2.3.4:/storage/NFS trans=0i,timeouts=0i,bytes_sent=0i,bytes_recv=0i,queue_time=0i,response_time=0i,total_time=0i,ops=0i 1612651512000000000
+nfs_ops,mountpoint=/NFS,operation=READ,serverexport=1.2.3.4:/storage/NFS bytes=1207i,timeouts=602i,total_time=607i,exe=607i,trans=601i,bytes_sent=603i,bytes_recv=604i,queue_time=605i,ops=600i,retrans=1i,rtt=606i,response_time=606i 1612651512000000000
+nfs_ops,mountpoint=/NFS,operation=WRITE,serverexport=1.2.3.4:/storage/NFS ops=700i,bytes=1407i,exe=707i,trans=701i,timeouts=702i,response_time=706i,total_time=707i,retrans=1i,rtt=706i,bytes_sent=703i,bytes_recv=704i,queue_time=705i 1612651512000000000
+```
diff --git a/content/telegraf/v1/input-plugins/nginx/_index.md b/content/telegraf/v1/input-plugins/nginx/_index.md
new file mode 100644
index 000000000..a534b1b1d
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/nginx/_index.md
@@ -0,0 +1,86 @@
+---
+description: "Telegraf plugin for collecting metrics from Nginx"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Nginx
+    identifier: input-nginx
+tags: [Nginx, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Nginx Input Plugin
+
+This plugin gathers basic status from the open source web server Nginx. Nginx
+Plus is a commercial version. For more information about the differences between
+Nginx (F/OSS) and Nginx Plus, see the Nginx [documentation](https://www.nginx.com/blog/whats-difference-nginx-foss-nginx-plus/).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering. See
+[CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read Nginx's basic status information (ngx_http_stub_status_module)
+[[inputs.nginx]]
+  ## An array of Nginx stub_status URIs to gather stats.
+  urls = ["http://localhost/server_status"]
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## HTTP response timeout (default: 5s)
+  response_timeout = "5s"
+```
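+
+The endpoint must be exposed by `ngx_http_stub_status_module` on the Nginx
+side. A minimal sketch of a matching server block (the location path and
+access rules are illustrative):
+
+```nginx
+server {
+    listen 80;
+
+    location /server_status {
+        stub_status;
+        # Restrict access to the collector host.
+        allow 127.0.0.1;
+        deny all;
+    }
+}
+```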
+
+## Metrics
+
+- Measurement
+  - accepts
+  - active
+  - handled
+  - reading
+  - requests
+  - waiting
+  - writing
+
+## Tags
+
+- All measurements have the following tags:
+  - port
+  - server
+
+## Example Output
+
+Using this configuration:
+
+```toml
+[[inputs.nginx]]
+  ## An array of Nginx stub_status URIs to gather stats.
+  urls = ["http://localhost/status"]
+```
+
+When run with:
+
+```sh
+./telegraf --config telegraf.conf --input-filter nginx --test
+```
+
+It produces:
+
+```text
+nginx,port=80,server=localhost accepts=605i,active=2i,handled=605i,reading=0i,requests=12132i,waiting=1i,writing=1i 1456690994701784331
+```
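+
+For reference, the stub_status page itself returns plain text like the
+following (values illustrative); the plugin maps `Active connections` to
+`active`, the three counters to `accepts`, `handled`, and `requests`, and the
+last line to `reading`, `writing`, and `waiting`:
+
+```text
+Active connections: 2
+server accepts handled requests
+ 605 605 12132
+Reading: 0 Writing: 1 Waiting: 1
+```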
diff --git a/content/telegraf/v1/input-plugins/nginx_plus/_index.md b/content/telegraf/v1/input-plugins/nginx_plus/_index.md
new file mode 100644
index 000000000..42dd11899
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/nginx_plus/_index.md
@@ -0,0 +1,163 @@
+---
+description: "Telegraf plugin for collecting metrics from Nginx Plus"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Nginx Plus
+    identifier: input-nginx_plus
+tags: [Nginx Plus, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Nginx Plus Input Plugin
+
+Nginx Plus is a commercial version of the open source web server Nginx. To use
+this plugin you will need a license. For more information about the differences
+between Nginx (F/OSS) and Nginx Plus, see the Nginx [documentation](https://www.nginx.com/blog/whats-difference-nginx-foss-nginx-plus/).
+
+Structures for Nginx Plus have been built based on the history of the [status
+module documentation](http://nginx.org/en/docs/http/ngx_http_status_module.html).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering. See
+[CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read Nginx Plus' advanced status information
+[[inputs.nginx_plus]]
+  ## An array of Nginx status URIs to gather stats.
+  urls = ["http://localhost/status"]
+
+  # HTTP response timeout (default: 5s)
+  response_timeout = "5s"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+## Metrics
+
+- nginx_plus_processes
+  - respawned
+- nginx_plus_connections
+  - accepted
+  - dropped
+  - active
+  - idle
+- nginx_plus_ssl
+  - handshakes
+  - handshakes_failed
+  - session_reuses
+- nginx_plus_requests
+  - total
+  - current
+- nginx_plus_upstream, nginx_plus_stream_upstream
+  - keepalive
+  - zombies
+- nginx_plus_upstream_peer, nginx_plus_stream_upstream_peer
+  - requests
+  - unavail
+  - healthchecks_checks
+  - header_time
+  - response_time
+  - state
+  - active
+  - downstart
+  - healthchecks_last_passed
+  - weight
+  - responses_1xx
+  - responses_2xx
+  - responses_3xx
+  - responses_4xx
+  - responses_5xx
+  - received
+  - selected
+  - healthchecks_fails
+  - healthchecks_unhealthy
+  - backup
+  - responses_total
+  - sent
+  - fails
+  - downtime
+
+### Tags
+
+- nginx_plus_processes, nginx_plus_connections, nginx_plus_ssl, nginx_plus_requests
+  - server
+  - port
+
+- nginx_plus_upstream, nginx_plus_stream_upstream
+  - upstream
+  - server
+  - port
+
+- nginx_plus_upstream_peer, nginx_plus_stream_upstream_peer
+  - id
+  - upstream
+  - server
+  - port
+  - upstream_address
+
+## Example Output
+
+Using this configuration:
+
+```toml
+[[inputs.nginx_plus]]
+  ## An array of Nginx Plus status URIs to gather stats.
+  urls = ["http://localhost/status"]
+```
+
+When run with:
+
+```sh
+./telegraf -config telegraf.conf -input-filter nginx_plus -test
+```
+
+It produces:
+
+```text
+* Plugin: inputs.nginx_plus, Collection 1
+> nginx_plus_processes,server=localhost,port=12021,host=word.local respawned=0i 1505782513000000000
+> nginx_plus_connections,server=localhost,port=12021,host=word.local accepted=5535735212i,dropped=10140186i,active=9541i,idle=67540i 1505782513000000000
+> nginx_plus_ssl,server=localhost,port=12021,host=word.local handshakes=0i,handshakes_failed=0i,session_reuses=0i 1505782513000000000
+> nginx_plus_requests,server=localhost,port=12021,host=word.local total=186780541173i,current=9037i 1505782513000000000
+> nginx_plus_upstream,port=12021,host=word.local,upstream=dataserver80,server=localhost keepalive=0i,zombies=0i 1505782513000000000
+> nginx_plus_upstream_peer,upstream=dataserver80,upstream_address=10.10.102.181:80,id=0,server=localhost,port=12021,host=word.local sent=53806910399i,received=7516943964i,fails=207i,downtime=2325979i,selected=1505782512000i,backup=false,active=6i,responses_4xx=6935i,header_time=80i,response_time=80i,healthchecks_last_passed=true,responses_1xx=0i,responses_2xx=36299890i,responses_5xx=360450i,responses_total=36667275i,unavail=154i,downstart=0i,state="up",requests=36673741i,responses_3xx=0i,healthchecks_unhealthy=5i,weight=1i,healthchecks_checks=177209i,healthchecks_fails=29i 1505782513000000000
+> nginx_plus_stream_upstream,server=localhost,port=12021,host=word.local,upstream=dataserver443 zombies=0i 1505782513000000000
+> nginx_plus_stream_upstream_peer,server=localhost,upstream_address=10.10.102.181:443,id=0,port=12021,host=word.local,upstream=dataserver443 active=1i,healthchecks_unhealthy=1i,weight=1i,unavail=0i,connect_time=24i,first_byte_time=78i,healthchecks_last_passed=true,state="up",sent=4457713140i,received=698065272i,fails=0i,healthchecks_checks=178421i,downstart=0i,selected=1505782512000i,response_time=5156i,backup=false,connections=56251i,healthchecks_fails=20i,downtime=391017i 1505782513000000000
+```
+
+### Reference material
+
+Successive versions of the status response structure are documented here:
+
+- [version 1](http://web.archive.org/web/20130805111222/http://nginx.org/en/docs/http/ngx_http_status_module.html)
+
+- [version 2](http://web.archive.org/web/20131218101504/http://nginx.org/en/docs/http/ngx_http_status_module.html)
+
+- version 3 - not available
+
+- [version 4](http://web.archive.org/web/20141218170938/http://nginx.org/en/docs/http/ngx_http_status_module.html)
+
+- [version 5](http://web.archive.org/web/20150414043916/http://nginx.org/en/docs/http/ngx_http_status_module.html)
+
+- [version 6](http://web.archive.org/web/20150918163811/http://nginx.org/en/docs/http/ngx_http_status_module.html)
+
+- [version 7](http://web.archive.org/web/20161107221028/http://nginx.org/en/docs/http/ngx_http_status_module.html)
diff --git a/content/telegraf/v1/input-plugins/nginx_plus_api/_index.md b/content/telegraf/v1/input-plugins/nginx_plus_api/_index.md
new file mode 100644
index 000000000..a99d1609c
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/nginx_plus_api/_index.md
@@ -0,0 +1,330 @@
+---
+description: "Telegraf plugin for collecting metrics from Nginx Plus API"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Nginx Plus API
+    identifier: input-nginx_plus_api
+tags: [Nginx Plus API, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Nginx Plus API Input Plugin
+
+Nginx Plus is a commercial version of the open source web server Nginx. To use
+this plugin you will need a license. For more information about the differences
+between Nginx (F/OSS) and Nginx Plus, see the Nginx [documentation](https://www.nginx.com/blog/whats-difference-nginx-foss-nginx-plus/).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, and more.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read Nginx Plus API advanced status information
+[[inputs.nginx_plus_api]]
+  ## An array of Nginx API URIs to gather stats.
+  urls = ["http://localhost/api"]
+  # Nginx API version, default: 3
+  # api_version = 3
+
+  # HTTP response timeout (default: 5s)
+  response_timeout = "5s"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
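+
+The API endpoint itself must be enabled in the Nginx Plus configuration before
+the plugin can collect anything. A minimal sketch of such a server block (the
+listen port and allowed address are examples; adjust for your environment):
+
+```nginx
+# Expose the read-only Nginx Plus API so Telegraf can scrape http://localhost/api.
+server {
+    listen 80;
+
+    location /api {
+        api;              # endpoint provided by ngx_http_api_module
+        allow 127.0.0.1;  # only the host running Telegraf
+        deny all;
+    }
+}
+```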
+
+## Migration from Nginx Plus (Status) input plugin
+
+| Nginx Plus                      | Nginx Plus API                       |
+|---------------------------------|--------------------------------------|
+| nginx_plus_processes            | nginx_plus_api_processes             |
+| nginx_plus_connections          | nginx_plus_api_connections           |
+| nginx_plus_ssl                  | nginx_plus_api_ssl                   |
+| nginx_plus_requests             | nginx_plus_api_http_requests         |
+| nginx_plus_zone                 | nginx_plus_api_http_server_zones     |
+| nginx_plus_upstream             | nginx_plus_api_http_upstreams        |
+| nginx_plus_upstream_peer        | nginx_plus_api_http_upstream_peers   |
+| nginx_plus_cache                | nginx_plus_api_http_caches           |
+| nginx_plus_stream_upstream      | nginx_plus_api_stream_upstreams      |
+| nginx_plus_stream_upstream_peer | nginx_plus_api_stream_upstream_peers |
+| nginx.stream.zone               | nginx_plus_api_stream_server_zones   |
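+
+If existing dashboards or queries still reference the old measurement names,
+one possible transition aid is Telegraf's `rename` processor, which can map a
+new name back to its legacy equivalent until consumers are updated. A sketch
+for a single measurement (repeat the `replace` table per measurement):
+
+```toml
+# Emit nginx_plus_api_connections under its legacy nginx_plus_connections name.
+[[processors.rename]]
+  namepass = ["nginx_plus_api_connections"]
+
+  [[processors.rename.replace]]
+    measurement = "nginx_plus_api_connections"
+    dest = "nginx_plus_connections"
+```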
+
+## Measurements by API version
+
+| Measurement                          | API version (api_version) |
+|--------------------------------------|---------------------------|
+| nginx_plus_api_processes             | >= 3                      |
+| nginx_plus_api_connections           | >= 3                      |
+| nginx_plus_api_ssl                   | >= 3                      |
+| nginx_plus_api_slabs_pages           | >= 3                      |
+| nginx_plus_api_slabs_slots           | >= 3                      |
+| nginx_plus_api_http_requests         | >= 3                      |
+| nginx_plus_api_http_server_zones     | >= 3                      |
+| nginx_plus_api_http_upstreams        | >= 3                      |
+| nginx_plus_api_http_upstream_peers   | >= 3                      |
+| nginx_plus_api_http_caches           | >= 3                      |
+| nginx_plus_api_stream_upstreams      | >= 3                      |
+| nginx_plus_api_stream_upstream_peers | >= 3                      |
+| nginx_plus_api_stream_server_zones   | >= 3                      |
+| nginx_plus_api_http_location_zones   | >= 5                      |
+| nginx_plus_api_resolver_zones        | >= 5                      |
+| nginx_plus_api_http_limit_reqs       | >= 6                      |
+
+## Metrics
+
+- nginx_plus_api_processes
+  - respawned
+- nginx_plus_api_connections
+  - accepted
+  - dropped
+  - active
+  - idle
+- nginx_plus_api_slabs_pages
+  - used
+  - free
+- nginx_plus_api_slabs_slots
+  - used
+  - free
+  - reqs
+  - fails
+- nginx_plus_api_ssl
+  - handshakes
+  - handshakes_failed
+  - session_reuses
+- nginx_plus_api_http_requests
+  - total
+  - current
+- nginx_plus_api_http_server_zones
+  - processing
+  - requests
+  - responses_1xx
+  - responses_2xx
+  - responses_3xx
+  - responses_4xx
+  - responses_5xx
+  - responses_total
+  - received
+  - sent
+  - discarded
+- nginx_plus_api_http_upstreams
+  - keepalive
+  - zombies
+- nginx_plus_api_http_upstream_peers
+  - requests
+  - unavail
+  - healthchecks_checks
+  - header_time
+  - state
+  - response_time
+  - active
+  - healthchecks_last_passed
+  - weight
+  - responses_1xx
+  - responses_2xx
+  - responses_3xx
+  - responses_4xx
+  - responses_5xx
+  - received
+  - healthchecks_fails
+  - healthchecks_unhealthy
+  - backup
+  - responses_total
+  - sent
+  - fails
+  - downtime
+- nginx_plus_api_http_caches
+  - size
+  - max_size
+  - cold
+  - hit_responses
+  - hit_bytes
+  - stale_responses
+  - stale_bytes
+  - updating_responses
+  - updating_bytes
+  - revalidated_responses
+  - revalidated_bytes
+  - miss_responses
+  - miss_bytes
+  - miss_responses_written
+  - miss_bytes_written
+  - expired_responses
+  - expired_bytes
+  - expired_responses_written
+  - expired_bytes_written
+  - bypass_responses
+  - bypass_bytes
+  - bypass_responses_written
+  - bypass_bytes_written
+- nginx_plus_api_stream_upstreams
+  - zombies
+- nginx_plus_api_stream_upstream_peers
+  - unavail
+  - healthchecks_checks
+  - healthchecks_fails
+  - healthchecks_unhealthy
+  - healthchecks_last_passed
+  - response_time
+  - state
+  - active
+  - weight
+  - received
+  - backup
+  - sent
+  - fails
+  - downtime
+- nginx_plus_api_stream_server_zones
+  - processing
+  - connections
+  - received
+  - sent
+- nginx_plus_api_http_location_zones
+  - requests
+  - responses_1xx
+  - responses_2xx
+  - responses_3xx
+  - responses_4xx
+  - responses_5xx
+  - responses_total
+  - received
+  - sent
+  - discarded
+- nginx_plus_api_resolver_zones
+  - name
+  - srv
+  - addr
+  - noerror
+  - formerr
+  - servfail
+  - nxdomain
+  - notimp
+  - refused
+  - timedout
+  - unknown
+- nginx_plus_api_http_limit_reqs
+  - passed
+  - delayed
+  - rejected
+  - delayed_dry_run
+  - rejected_dry_run
+
+### Tags
+
+- nginx_plus_api_processes, nginx_plus_api_connections, nginx_plus_api_ssl, nginx_plus_api_http_requests
+  - source
+  - port
+
+- nginx_plus_api_http_upstreams, nginx_plus_api_stream_upstreams
+  - upstream
+  - source
+  - port
+
+- nginx_plus_api_http_server_zones, nginx_plus_api_stream_server_zones, nginx_plus_api_http_location_zones, nginx_plus_api_resolver_zones, nginx_plus_api_slabs_pages
+  - source
+  - port
+  - zone
+
+- nginx_plus_api_slabs_slots
+  - source
+  - port
+  - zone
+  - slot
+
+- nginx_plus_api_http_upstream_peers, nginx_plus_api_stream_upstream_peers
+  - id
+  - upstream
+  - source
+  - port
+  - upstream_address
+
+- nginx_plus_api_http_caches
+  - source
+  - port
+
+- nginx_plus_api_http_limit_reqs
+  - source
+  - port
+  - limit
+
+## Example Output
+
+Using this configuration:
+
+```toml
+[[inputs.nginx_plus_api]]
+  ## An array of Nginx Plus API URIs to gather stats.
+  urls = ["http://localhost/api"]
+```
+
+When run with:
+
+```sh
+./telegraf -config telegraf.conf -input-filter nginx_plus_api -test
+```
+
+It produces:
+
+```text
+nginx_plus_api_processes,port=80,source=demo.nginx.com respawned=0i 1570696321000000000
+nginx_plus_api_connections,port=80,source=demo.nginx.com accepted=68998606i,active=7i,dropped=0i,idle=57i 1570696322000000000
+nginx_plus_api_slabs_pages,port=80,source=demo.nginx.com,zone=hg.nginx.org used=1i,free=503i 1570696322000000000
+nginx_plus_api_slabs_pages,port=80,source=demo.nginx.com,zone=trac.nginx.org used=3i,free=500i 1570696322000000000
+nginx_plus_api_slabs_slots,port=80,source=demo.nginx.com,zone=hg.nginx.org,slot=8 used=1i,free=503i,reqs=10i,fails=0i 1570696322000000000
+nginx_plus_api_slabs_slots,port=80,source=demo.nginx.com,zone=hg.nginx.org,slot=16 used=3i,free=500i,reqs=1024i,fails=0i 1570696322000000000
+nginx_plus_api_slabs_slots,port=80,source=demo.nginx.com,zone=trac.nginx.org,slot=8 used=1i,free=503i,reqs=10i,fails=0i 1570696322000000000
+nginx_plus_api_slabs_slots,port=80,source=demo.nginx.com,zone=trac.nginx.org,slot=16 used=0i,free=1520i,reqs=0i,fails=1i 1570696322000000000
+nginx_plus_api_ssl,port=80,source=demo.nginx.com handshakes=9398978i,handshakes_failed=289353i,session_reuses=1004389i 1570696322000000000
+nginx_plus_api_http_requests,port=80,source=demo.nginx.com current=51i,total=264649353i 1570696322000000000
+nginx_plus_api_http_server_zones,port=80,source=demo.nginx.com,zone=hg.nginx.org discarded=5i,processing=0i,received=24123604i,requests=60138i,responses_1xx=0i,responses_2xx=59353i,responses_3xx=531i,responses_4xx=249i,responses_5xx=0i,responses_total=60133i,sent=830165221i 1570696322000000000
+nginx_plus_api_http_server_zones,port=80,source=demo.nginx.com,zone=trac.nginx.org discarded=250i,processing=0i,received=2184618i,requests=12404i,responses_1xx=0i,responses_2xx=8579i,responses_3xx=2513i,responses_4xx=583i,responses_5xx=479i,responses_total=12154i,sent=139384159i 1570696322000000000
+nginx_plus_api_http_server_zones,port=80,source=demo.nginx.com,zone=lxr.nginx.org discarded=1i,processing=0i,received=1011701i,requests=4523i,responses_1xx=0i,responses_2xx=4332i,responses_3xx=28i,responses_4xx=39i,responses_5xx=123i,responses_total=4522i,sent=72631354i 1570696322000000000
+nginx_plus_api_http_upstreams,port=80,source=demo.nginx.com,upstream=trac-backend keepalive=0i,zombies=0i 1570696322000000000
+nginx_plus_api_http_upstream_peers,id=0,port=80,source=demo.nginx.com,upstream=trac-backend,upstream_address=10.0.0.1:8080 active=0i,backup=false,downtime=0i,fails=0i,header_time=235i,healthchecks_checks=0i,healthchecks_fails=0i,healthchecks_unhealthy=0i,received=88581178i,requests=3180i,response_time=235i,responses_1xx=0i,responses_2xx=3168i,responses_3xx=5i,responses_4xx=6i,responses_5xx=0i,responses_total=3179i,sent=1321720i,state="up",unavail=0i,weight=1i 1570696322000000000
+nginx_plus_api_http_upstream_peers,id=1,port=80,source=demo.nginx.com,upstream=trac-backend,upstream_address=10.0.0.1:8081 active=0i,backup=true,downtime=0i,fails=0i,healthchecks_checks=0i,healthchecks_fails=0i,healthchecks_unhealthy=0i,received=0i,requests=0i,responses_1xx=0i,responses_2xx=0i,responses_3xx=0i,responses_4xx=0i,responses_5xx=0i,responses_total=0i,sent=0i,state="up",unavail=0i,weight=1i 1570696322000000000
+nginx_plus_api_http_upstreams,port=80,source=demo.nginx.com,upstream=hg-backend keepalive=0i,zombies=0i 1570696322000000000
+nginx_plus_api_http_upstream_peers,id=0,port=80,source=demo.nginx.com,upstream=hg-backend,upstream_address=10.0.0.1:8088 active=0i,backup=false,downtime=0i,fails=0i,header_time=22i,healthchecks_checks=0i,healthchecks_fails=0i,healthchecks_unhealthy=0i,received=909402572i,requests=18514i,response_time=88i,responses_1xx=0i,responses_2xx=17799i,responses_3xx=531i,responses_4xx=179i,responses_5xx=0i,responses_total=18509i,sent=10608107i,state="up",unavail=0i,weight=5i 1570696322000000000
+nginx_plus_api_http_upstream_peers,id=1,port=80,source=demo.nginx.com,upstream=hg-backend,upstream_address=10.0.0.1:8089 active=0i,backup=true,downtime=0i,fails=0i,healthchecks_checks=0i,healthchecks_fails=0i,healthchecks_unhealthy=0i,received=0i,requests=0i,responses_1xx=0i,responses_2xx=0i,responses_3xx=0i,responses_4xx=0i,responses_5xx=0i,responses_total=0i,sent=0i,state="up",unavail=0i,weight=1i 1570696322000000000
+nginx_plus_api_http_upstreams,port=80,source=demo.nginx.com,upstream=lxr-backend keepalive=0i,zombies=0i 1570696322000000000
+nginx_plus_api_http_upstream_peers,id=0,port=80,source=demo.nginx.com,upstream=lxr-backend,upstream_address=unix:/tmp/cgi.sock active=0i,backup=false,downtime=0i,fails=123i,header_time=91i,healthchecks_checks=0i,healthchecks_fails=0i,healthchecks_unhealthy=0i,received=71782888i,requests=4354i,response_time=91i,responses_1xx=0i,responses_2xx=4230i,responses_3xx=0i,responses_4xx=0i,responses_5xx=0i,responses_total=4230i,sent=3088656i,state="up",unavail=0i,weight=1i 1570696322000000000
+nginx_plus_api_http_upstream_peers,id=1,port=80,source=demo.nginx.com,upstream=lxr-backend,upstream_address=unix:/tmp/cgib.sock active=0i,backup=true,downtime=0i,fails=0i,healthchecks_checks=0i,healthchecks_fails=0i,healthchecks_unhealthy=0i,max_conns=42i,received=0i,requests=0i,responses_1xx=0i,responses_2xx=0i,responses_3xx=0i,responses_4xx=0i,responses_5xx=0i,responses_total=0i,sent=0i,state="up",unavail=0i,weight=1i 1570696322000000000
+nginx_plus_api_http_upstreams,port=80,source=demo.nginx.com,upstream=demo-backend keepalive=0i,zombies=0i 1570696322000000000
+nginx_plus_api_http_upstream_peers,id=0,port=80,source=demo.nginx.com,upstream=demo-backend,upstream_address=10.0.0.2:15431 active=0i,backup=false,downtime=0i,fails=0i,healthchecks_checks=0i,healthchecks_fails=0i,healthchecks_unhealthy=0i,received=0i,requests=0i,responses_1xx=0i,responses_2xx=0i,responses_3xx=0i,responses_4xx=0i,responses_5xx=0i,responses_total=0i,sent=0i,state="up",unavail=0i,weight=1i 1570696322000000000
+nginx_plus_api_http_caches,cache=http_cache,port=80,source=demo.nginx.com bypass_bytes=0i,bypass_bytes_written=0i,bypass_responses=0i,bypass_responses_written=0i,cold=false,expired_bytes=381518640i,expired_bytes_written=363449785i,expired_responses=42114i,expired_responses_written=39954i,hit_bytes=6321885979i,hit_responses=596730i,max_size=536870912i,miss_bytes=48512185i,miss_bytes_written=155600i,miss_responses=6052i,miss_responses_written=136i,revalidated_bytes=0i,revalidated_responses=0i,size=765952i,stale_bytes=0i,stale_responses=0i,updating_bytes=0i,updating_responses=0i 1570696323000000000
+nginx_plus_api_stream_server_zones,port=80,source=demo.nginx.com,zone=postgresql_loadbalancer connections=0i,processing=0i,received=0i,sent=0i 1570696323000000000
+nginx_plus_api_stream_server_zones,port=80,source=demo.nginx.com,zone=dns_loadbalancer connections=0i,processing=0i,received=0i,sent=0i 1570696323000000000
+nginx_plus_api_stream_upstreams,port=80,source=demo.nginx.com,upstream=postgresql_backends zombies=0i 1570696323000000000
+nginx_plus_api_stream_upstream_peers,id=0,port=80,source=demo.nginx.com,upstream=postgresql_backends,upstream_address=10.0.0.2:15432 active=0i,backup=false,connections=0i,downtime=0i,fails=0i,healthchecks_checks=0i,healthchecks_fails=0i,healthchecks_unhealthy=0i,received=0i,sent=0i,state="up",unavail=0i,weight=1i 1570696323000000000
+nginx_plus_api_stream_upstream_peers,id=1,port=80,source=demo.nginx.com,upstream=postgresql_backends,upstream_address=10.0.0.2:15433 active=0i,backup=false,connections=0i,downtime=0i,fails=0i,healthchecks_checks=0i,healthchecks_fails=0i,healthchecks_unhealthy=0i,received=0i,sent=0i,state="up",unavail=0i,weight=1i 1570696323000000000
+nginx_plus_api_stream_upstream_peers,id=2,port=80,source=demo.nginx.com,upstream=postgresql_backends,upstream_address=10.0.0.2:15434 active=0i,backup=false,connections=0i,downtime=0i,fails=0i,healthchecks_checks=0i,healthchecks_fails=0i,healthchecks_unhealthy=0i,received=0i,sent=0i,state="up",unavail=0i,weight=1i 1570696323000000000
+nginx_plus_api_stream_upstream_peers,id=3,port=80,source=demo.nginx.com,upstream=postgresql_backends,upstream_address=10.0.0.2:15435 active=0i,backup=false,connections=0i,downtime=0i,fails=0i,healthchecks_checks=0i,healthchecks_fails=0i,healthchecks_unhealthy=0i,received=0i,sent=0i,state="down",unavail=0i,weight=1i 1570696323000000000
+nginx_plus_api_stream_upstreams,port=80,source=demo.nginx.com,upstream=dns_udp_backends zombies=0i 1570696323000000000
+nginx_plus_api_stream_upstream_peers,id=0,port=80,source=demo.nginx.com,upstream=dns_udp_backends,upstream_address=10.0.0.5:53 active=0i,backup=false,connections=0i,downtime=0i,fails=0i,healthchecks_checks=0i,healthchecks_fails=0i,healthchecks_unhealthy=0i,received=0i,sent=0i,state="up",unavail=0i,weight=2i 1570696323000000000
+nginx_plus_api_stream_upstream_peers,id=1,port=80,source=demo.nginx.com,upstream=dns_udp_backends,upstream_address=10.0.0.2:53 active=0i,backup=false,connections=0i,downtime=0i,fails=0i,healthchecks_checks=0i,healthchecks_fails=0i,healthchecks_unhealthy=0i,received=0i,sent=0i,state="up",unavail=0i,weight=1i 1570696323000000000
+nginx_plus_api_stream_upstream_peers,id=2,port=80,source=demo.nginx.com,upstream=dns_udp_backends,upstream_address=10.0.0.7:53 active=0i,backup=false,connections=0i,downtime=0i,fails=0i,healthchecks_checks=0i,healthchecks_fails=0i,healthchecks_unhealthy=0i,received=0i,sent=0i,state="down",unavail=0i,weight=1i 1570696323000000000
+nginx_plus_api_stream_upstreams,port=80,source=demo.nginx.com,upstream=unused_tcp_backends zombies=0i 1570696323000000000
+nginx_plus_api_http_location_zones,port=80,source=demo.nginx.com,zone=swagger discarded=0i,received=1622i,requests=8i,responses_1xx=0i,responses_2xx=7i,responses_3xx=0i,responses_4xx=1i,responses_5xx=0i,responses_total=8i,sent=638333i 1570696323000000000
+nginx_plus_api_http_location_zones,port=80,source=demo.nginx.com,zone=api-calls discarded=64i,received=337530181i,requests=1726513i,responses_1xx=0i,responses_2xx=1726428i,responses_3xx=0i,responses_4xx=21i,responses_5xx=0i,responses_total=1726449i,sent=1902577668i 1570696323000000000
+nginx_plus_api_resolver_zones,port=80,source=demo.nginx.com,zone=resolver1 addr=0i,formerr=0i,name=0i,noerror=0i,notimp=0i,nxdomain=0i,refused=0i,servfail=0i,srv=0i,timedout=0i,unknown=0i 1570696324000000000
+nginx_plus_api_http_limit_reqs,port=80,source=demo.nginx.com,limit=limit_1 delayed=0i,delayed_dry_run=0i,passed=6i,rejected=9i,rejected_dry_run=0i 1570696322000000000
+nginx_plus_api_http_limit_reqs,port=80,source=demo.nginx.com,limit=limit_2 delayed=13i,delayed_dry_run=3i,passed=6i,rejected=1i,rejected_dry_run=31i 1570696322000000000
+```
+
+### Reference material
+
+- [API documentation](http://demo.nginx.com/swagger-ui/#/)
+- [ngx_http_api_module documentation](http://nginx.org/en/docs/http/ngx_http_api_module.html)
diff --git a/content/telegraf/v1/input-plugins/nginx_sts/_index.md b/content/telegraf/v1/input-plugins/nginx_sts/_index.md
new file mode 100644
index 000000000..61abef0d4
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/nginx_sts/_index.md
@@ -0,0 +1,138 @@
+---
+description: "Telegraf plugin for collecting metrics from Nginx Stream STS"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Nginx Stream STS
+    identifier: input-nginx_sts
+tags: [Nginx Stream STS, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Nginx Stream STS Input Plugin
+
+This plugin gathers Nginx status using the external virtual host traffic status
+module <https://github.com/vozlt/nginx-module-sts>. This is an Nginx module
+that provides access to stream host status information, including the current
+status of servers, upstreams, and caches. This is similar to the live activity
+monitoring of Nginx Plus. For module configuration details, see the module's
+[documentation](https://github.com/vozlt/nginx-module-sts#synopsis).
+
+Telegraf minimum version: Telegraf 1.15.0
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, and more.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read Nginx virtual host traffic status module information (nginx-module-sts)
+[[inputs.nginx_sts]]
+  ## An array of ngx_http_status_module or status URI to gather stats.
+  urls = ["http://localhost/status"]
+
+  ## HTTP response timeout (default: 5s)
+  response_timeout = "5s"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
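+
+Telegraf only scrapes the status endpoint; the module itself has to be enabled
+in Nginx. A minimal http-side setup adapted from the module's synopsis (see the
+module documentation for the complete stream-side configuration):
+
+```nginx
+http {
+    stream_server_traffic_status_zone;
+
+    server {
+        location /status {
+            stream_server_traffic_status_display;
+            stream_server_traffic_status_display_format json;  # Telegraf parses JSON
+        }
+    }
+}
+```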
+
+## Metrics
+
+- nginx_sts_connections
+  - tags:
+    - source
+    - port
+  - fields:
+    - active
+    - reading
+    - writing
+    - waiting
+    - accepted
+    - handled
+    - requests
+
+- nginx_sts_server
+  - tags:
+    - source
+    - port
+    - zone
+  - fields:
+    - connects
+    - in_bytes
+    - out_bytes
+    - response_1xx_count
+    - response_2xx_count
+    - response_3xx_count
+    - response_4xx_count
+    - response_5xx_count
+    - session_msec_counter
+    - session_msec
+
+- nginx_sts_filter
+  - tags:
+    - source
+    - port
+    - filter_name
+    - filter_key
+  - fields:
+    - connects
+    - in_bytes
+    - out_bytes
+    - response_1xx_count
+    - response_2xx_count
+    - response_3xx_count
+    - response_4xx_count
+    - response_5xx_count
+    - session_msec_counter
+    - session_msec
+
+- nginx_sts_upstream
+  - tags:
+    - source
+    - port
+    - upstream
+    - upstream_address
+  - fields:
+    - connects
+    - in_bytes
+    - out_bytes
+    - response_1xx_count
+    - response_2xx_count
+    - response_3xx_count
+    - response_4xx_count
+    - response_5xx_count
+    - session_msec_counter
+    - session_msec
+    - upstream_session_msec_counter
+    - upstream_session_msec
+    - upstream_connect_msec_counter
+    - upstream_connect_msec
+    - upstream_firstbyte_msec_counter
+    - upstream_firstbyte_msec
+    - weight
+    - max_fails
+    - fail_timeout
+    - backup
+    - down
+
+## Example Output
+
+```text
+nginx_sts_upstream,host=localhost,port=80,source=127.0.0.1,upstream=backend_cluster,upstream_address=1.2.3.4:8080 upstream_connect_msec_counter=0i,out_bytes=0i,down=false,connects=0i,session_msec=0i,upstream_session_msec=0i,upstream_session_msec_counter=0i,upstream_connect_msec=0i,upstream_firstbyte_msec_counter=0i,response_3xx_count=0i,session_msec_counter=0i,weight=1i,max_fails=1i,backup=false,upstream_firstbyte_msec=0i,in_bytes=0i,response_1xx_count=0i,response_2xx_count=0i,response_4xx_count=0i,response_5xx_count=0i,fail_timeout=10i 1584699180000000000
+nginx_sts_upstream,host=localhost,port=80,source=127.0.0.1,upstream=backend_cluster,upstream_address=9.8.7.6:8080 upstream_firstbyte_msec_counter=0i,response_2xx_count=0i,down=false,upstream_session_msec_counter=0i,out_bytes=0i,response_5xx_count=0i,weight=1i,max_fails=1i,fail_timeout=10i,connects=0i,session_msec_counter=0i,upstream_session_msec=0i,in_bytes=0i,response_1xx_count=0i,response_3xx_count=0i,response_4xx_count=0i,session_msec=0i,upstream_connect_msec=0i,upstream_connect_msec_counter=0i,upstream_firstbyte_msec=0i,backup=false 1584699180000000000
+nginx_sts_server,host=localhost,port=80,source=127.0.0.1,zone=* response_2xx_count=0i,response_4xx_count=0i,response_5xx_count=0i,session_msec_counter=0i,in_bytes=0i,out_bytes=0i,session_msec=0i,response_1xx_count=0i,response_3xx_count=0i,connects=0i 1584699180000000000
+nginx_sts_connections,host=localhost,port=80,source=127.0.0.1 waiting=1i,accepted=146i,handled=146i,requests=13421i,active=3i,reading=0i,writing=2i 1584699180000000000
+```
diff --git a/content/telegraf/v1/input-plugins/nginx_upstream_check/_index.md b/content/telegraf/v1/input-plugins/nginx_upstream_check/_index.md
new file mode 100644
index 000000000..631a61999
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/nginx_upstream_check/_index.md
@@ -0,0 +1,105 @@
+---
+description: "Telegraf plugin for collecting metrics from Nginx Upstream Check"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Nginx Upstream Check
+    identifier: input-nginx_upstream_check
+tags: [Nginx Upstream Check, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Nginx Upstream Check Input Plugin
+
+Read the status output of the
+[nginx_upstream_check](https://github.com/yaoweibin/nginx_upstream_check_module)
+module. This module periodically checks the servers in an Nginx upstream with a
+configured request and interval to determine if they are still available. If a
+check fails, the server is marked as "down" and receives no requests until the
+check passes and the server is marked as "up" again.
+
+The status page displays the current status of all upstreams and servers, as
+well as the number of failed and successful checks. This information can be
+exported in JSON format and parsed by this input.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, and more.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read nginx_upstream_check module status information (https://github.com/yaoweibin/nginx_upstream_check_module)
+[[inputs.nginx_upstream_check]]
+  ## A URL where the Nginx upstream check module is enabled
+  ## It should be set to return a JSON formatted response
+  url = "http://127.0.0.1/status?format=json"
+
+  ## HTTP method
+  # method = "GET"
+
+  ## Optional HTTP headers
+  # headers = {"X-Special-Header" = "Special-Value"}
+
+  ## Override HTTP "Host" header
+  # host_header = "check.example.com"
+
+  ## Timeout for HTTP requests
+  timeout = "5s"
+
+  ## Optional HTTP Basic Auth credentials
+  # username = "username"
+  # password = "pa$$word"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+## Metrics
+
+- nginx_upstream_check
+  - fall (the number of failed server check attempts, counter)
+  - rise (the number of successful server check attempts, counter)
+  - status (the reported server status as a string)
+  - status_code (the server status code: 1 - up, 2 - down, 0 - other)
+
+The "status_code" field will most likely be the most useful one because it
+allows you to determine the current state of every server and, possibly, to
+build monitoring on top of it. InfluxDB can store string values, so the
+"status" field can be used instead, but for most other monitoring solutions
+the integer code is more appropriate.
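+
+As an illustration of how the string status relates to the integer code, here
+is a minimal sketch; the `status_code` helper and the sample server list are
+hypothetical, not part of the plugin:
+
+```python
+def status_code(status: str) -> int:
+    """Map the textual status to its integer code: 1 - up, 2 - down, 0 - other."""
+    return {"up": 1, "down": 2}.get(status.lower(), 0)
+
+# Sample entries mimicking the module's ?format=json "servers" array.
+servers = [
+    {"name": "192.168.0.1:8080", "status": "up"},
+    {"name": "192.168.0.2:8080", "status": "down"},
+    {"name": "192.168.0.3:8080", "status": "checking"},
+]
+
+codes = {s["name"]: status_code(s["status"]) for s in servers}
+print(codes)
+# → {'192.168.0.1:8080': 1, '192.168.0.2:8080': 2, '192.168.0.3:8080': 0}
+```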
+
+### Tags
+
+- All measurements have the following tags:
+  - name (The hostname or IP of the upstream server)
+  - port (The alternative check port, 0 if the default one is used)
+  - type (The check type, http/tcp)
+  - upstream (The name of the upstream block in the Nginx configuration)
+  - url (The status URL used by Telegraf)
+
+## Example Output
+
+When run with:
+
+```sh
+./telegraf --config telegraf.conf --input-filter nginx_upstream_check --test
+```
+
+It produces:
+
+```text
+nginx_upstream_check,host=node1,name=192.168.0.1:8080,port=0,type=http,upstream=my_backends,url=http://127.0.0.1:80/status?format\=json fall=0i,rise=100i,status="up",status_code=1i 1529088524000000000
+nginx_upstream_check,host=node2,name=192.168.0.2:8080,port=0,type=http,upstream=my_backends,url=http://127.0.0.1:80/status?format\=json fall=100i,rise=0i,status="down",status_code=2i 1529088524000000000
+```
diff --git a/content/telegraf/v1/input-plugins/nginx_vts/_index.md b/content/telegraf/v1/input-plugins/nginx_vts/_index.md
new file mode 100644
index 000000000..358163536
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/nginx_vts/_index.md
@@ -0,0 +1,157 @@
+---
+description: "Telegraf plugin for collecting metrics from Nginx Virtual Host Traffic (VTS)"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Nginx Virtual Host Traffic (VTS)
+    identifier: input-nginx_vts
+tags: [Nginx Virtual Host Traffic (VTS), "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Nginx Virtual Host Traffic (VTS) Input Plugin
+
+This plugin gathers Nginx status using the external virtual host traffic status
+module <https://github.com/vozlt/nginx-module-vts>. This is an Nginx module
+that provides access to virtual host status information, including the current
+status of servers, upstreams, and caches. This is similar to the live activity
+monitoring of Nginx Plus. For module configuration details, see the module's
+[documentation](https://github.com/vozlt/nginx-module-vts#synopsis).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, and more.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read Nginx virtual host traffic status module information (nginx-module-vts)
+[[inputs.nginx_vts]]
+  ## An array of ngx_http_status_module or status URI to gather stats.
+  urls = ["http://localhost/status"]
+
+  ## HTTP response timeout (default: 5s)
+  response_timeout = "5s"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
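+
+As with the other status modules, the endpoint must first be enabled in Nginx.
+A minimal setup adapted from the nginx-module-vts synopsis (the location path
+is an example):
+
+```nginx
+http {
+    vhost_traffic_status_zone;
+
+    server {
+        location /status {
+            vhost_traffic_status_display;
+            vhost_traffic_status_display_format json;  # Telegraf parses JSON
+        }
+    }
+}
+```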
+
+## Metrics
+
+- nginx_vts_connections
+  - active
+  - reading
+  - writing
+  - waiting
+  - accepted
+  - handled
+  - requests
+- nginx_vts_server, nginx_vts_filter
+  - requests
+  - request_time
+  - in_bytes
+  - out_bytes
+  - response_1xx_count
+  - response_2xx_count
+  - response_3xx_count
+  - response_4xx_count
+  - response_5xx_count
+  - cache_miss
+  - cache_bypass
+  - cache_expired
+  - cache_stale
+  - cache_updating
+  - cache_revalidated
+  - cache_hit
+  - cache_scarce
+- nginx_vts_upstream
+  - requests
+  - request_time
+  - response_time
+  - in_bytes
+  - out_bytes
+  - response_1xx_count
+  - response_2xx_count
+  - response_3xx_count
+  - response_4xx_count
+  - response_5xx_count
+  - weight
+  - max_fails
+  - fail_timeout
+  - backup
+  - down
+- nginx_vts_cache
+  - max_bytes
+  - used_bytes
+  - in_bytes
+  - out_bytes
+  - miss
+  - bypass
+  - expired
+  - stale
+  - updating
+  - revalidated
+  - hit
+  - scarce
+
+### Tags
+
+- nginx_vts_connections
+  - source
+  - port
+- nginx_vts_server
+  - source
+  - port
+  - zone
+- nginx_vts_filter
+  - source
+  - port
+  - filter_name
+  - filter_key
+- nginx_vts_upstream
+  - source
+  - port
+  - upstream
+  - upstream_address
+- nginx_vts_cache
+  - source
+  - port
+  - zone
+
+## Example Output
+
+Using this configuration:
+
+```toml
+[[inputs.nginx_vts]]
+  ## An array of Nginx status URIs to gather stats.
+  urls = ["http://localhost/status"]
+```
+
+When run with:
+
+```sh
+./telegraf -config telegraf.conf -input-filter nginx_vts -test
+```
+
+It produces:
+
+```shell
+nginx_vts_connections,source=localhost,port=80,host=localhost waiting=30i,accepted=295333i,handled=295333i,requests=6833487i,active=33i,reading=0i,writing=3i 1518341521000000000
+nginx_vts_server,zone=example.com,port=80,host=localhost,source=localhost cache_hit=158915i,in_bytes=1935528964i,out_bytes=6531366419i,response_2xx_count=809994i,response_4xx_count=16664i,cache_bypass=0i,cache_stale=0i,cache_revalidated=0i,requests=2187977i,response_1xx_count=0i,response_3xx_count=1360390i,cache_miss=2249i,cache_updating=0i,cache_scarce=0i,request_time=13i,response_5xx_count=929i,cache_expired=0i 1518341521000000000
+nginx_vts_server,host=localhost,source=localhost,port=80,zone=* requests=6775284i,in_bytes=5003242389i,out_bytes=36858233827i,cache_expired=318881i,cache_updating=0i,request_time=51i,response_1xx_count=0i,response_2xx_count=4385916i,response_4xx_count=83680i,response_5xx_count=1186i,cache_bypass=0i,cache_revalidated=0i,cache_hit=1972222i,cache_scarce=0i,response_3xx_count=2304502i,cache_miss=408251i,cache_stale=0i 1518341521000000000
+nginx_vts_filter,filter_key=FI,filter_name=country,port=80,host=localhost,source=localhost request_time=0i,in_bytes=139701i,response_3xx_count=0i,out_bytes=2644495i,response_1xx_count=0i,cache_expired=0i,cache_scarce=0i,requests=179i,cache_miss=0i,cache_bypass=0i,cache_stale=0i,cache_updating=0i,cache_revalidated=0i,cache_hit=0i,response_2xx_count=177i,response_4xx_count=2i,response_5xx_count=0i 1518341521000000000
+nginx_vts_upstream,port=80,host=localhost,upstream=backend_cluster,upstream_address=127.0.0.1:6000,source=localhost fail_timeout=10i,backup=false,request_time=31i,response_5xx_count=1081i,response_2xx_count=1877498i,max_fails=1i,in_bytes=2763336289i,out_bytes=19470265071i,weight=1i,down=false,response_time=31i,response_1xx_count=0i,response_4xx_count=76125i,requests=3379232i,response_3xx_count=1424528i 1518341521000000000
+nginx_vts_cache,source=localhost,port=80,host=localhost,zone=example stale=0i,used_bytes=64334336i,miss=394573i,bypass=0i,expired=318788i,updating=0i,revalidated=0i,hit=689883i,scarce=0i,max_bytes=9223372036854775296i,in_bytes=1111161581i,out_bytes=19175548290i 1518341521000000000
+```
diff --git a/content/telegraf/v1/input-plugins/nomad/_index.md b/content/telegraf/v1/input-plugins/nomad/_index.md
new file mode 100644
index 000000000..4e1621d01
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/nomad/_index.md
@@ -0,0 +1,55 @@
+---
+description: "Telegraf plugin for collecting metrics from Hashicorp Nomad"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Hashicorp Nomad
+    identifier: input-nomad
+tags: [Hashicorp Nomad, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Hashicorp Nomad Input Plugin
+
+The Nomad plugin grabs metrics from every Nomad agent of the cluster. Telegraf
+may be present on every node and connect to the agent locally. In this case,
+the URL should be something like `http://127.0.0.1:4646`.
+
+> Tested on Nomad 1.1.6
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering, etc.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from the Nomad API
+[[inputs.nomad]]
+  ## URL for the Nomad agent
+  # url = "http://127.0.0.1:4646"
+
+  ## Set response_timeout (default 5 seconds)
+  # response_timeout = "5s"
+
+  ## Optional TLS Config
+  # tls_ca = /path/to/cafile
+  # tls_cert = /path/to/certfile
+  # tls_key = /path/to/keyfile
+```
+
+## Metrics
+
+Both Nomad servers and agents collect various metrics. For full details, see
+the following Nomad documentation:
+
+- [https://www.nomadproject.io/docs/operations/metrics](https://www.nomadproject.io/docs/operations/metrics)
+- [https://www.nomadproject.io/docs/operations/telemetry](https://www.nomadproject.io/docs/operations/telemetry)
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/nsd/_index.md b/content/telegraf/v1/input-plugins/nsd/_index.md
new file mode 100644
index 000000000..fb16fbeaa
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/nsd/_index.md
@@ -0,0 +1,202 @@
+---
+description: "Telegraf plugin for collecting metrics from NSD"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: NSD
+    identifier: input-nsd
+tags: [NSD, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# NSD Input Plugin
+
+This plugin gathers stats from
+[NSD](https://www.nlnetlabs.nl/projects/nsd/about) - an authoritative DNS name
+server.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering, etc.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# A plugin to collect stats from the NSD DNS resolver
+[[inputs.nsd]]
+  ## Address of server to connect to, optionally ':port'. Defaults to the
+  ## address in the nsd config file.
+  server = "127.0.0.1:8953"
+
+  ## If running as a restricted user you can prepend sudo for additional access:
+  # use_sudo = false
+
+  ## The default location of the nsd-control binary can be overridden with:
+  # binary = "/usr/sbin/nsd-control"
+
+  ## The default location of the nsd config file can be overridden with:
+  # config_file = "/etc/nsd/nsd.conf"
+
+  ## The default timeout of 1s can be overridden with:
+  # timeout = "1s"
+```
+
+### Permissions
+
+It's important to note that this plugin references nsd-control, which may
+require additional permissions to execute successfully.  Depending on the
+user/group permissions of the telegraf user executing this plugin, you may
+need to alter the group membership, set facls, or use sudo.
+
+**Group membership (Recommended)**:
+
+```bash
+$ groups telegraf
+telegraf : telegraf
+
+$ usermod -a -G nsd telegraf
+
+$ groups telegraf
+telegraf : telegraf nsd
+```
+
+**Sudo privileges**:
+If you use this method, you will need the following in your telegraf config:
+
+```toml
+[[inputs.nsd]]
+  use_sudo = true
+```
+
+You will also need to update your sudoers file:
+
+```bash
+$ visudo
+# Add the following line:
+Cmnd_Alias NSDCONTROLCTL = /usr/sbin/nsd-control
+telegraf  ALL=(ALL) NOPASSWD: NSDCONTROLCTL
+Defaults!NSDCONTROLCTL !logfile, !syslog, !pam_session
+```
+
+Please use the solution you see as most appropriate.
+
+## Metrics
+
+This is the full list of stats provided by nsd-control. In the output, the
+dots in the nsd-control stat names are replaced by underscores (see
+<https://www.nlnetlabs.nl/documentation/nsd/nsd-control/> for details).
+
+- nsd
+  - fields:
+    - num_queries
+    - time_boot
+    - time_elapsed
+    - size_db_disk
+    - size_db_mem
+    - size_xfrd_mem
+    - size_config_disk
+    - size_config_mem
+    - num_type_TYPE0
+    - num_type_A
+    - num_type_NS
+    - num_type_MD
+    - num_type_MF
+    - num_type_CNAME
+    - num_type_SOA
+    - num_type_MB
+    - num_type_MG
+    - num_type_MR
+    - num_type_NULL
+    - num_type_WKS
+    - num_type_PTR
+    - num_type_HINFO
+    - num_type_MINFO
+    - num_type_MX
+    - num_type_TXT
+    - num_type_RP
+    - num_type_AFSDB
+    - num_type_X25
+    - num_type_ISDN
+    - num_type_RT
+    - num_type_NSAP
+    - num_type_SIG
+    - num_type_KEY
+    - num_type_PX
+    - num_type_AAAA
+    - num_type_LOC
+    - num_type_NXT
+    - num_type_SRV
+    - num_type_NAPTR
+    - num_type_KX
+    - num_type_CERT
+    - num_type_DNAME
+    - num_type_OPT
+    - num_type_APL
+    - num_type_DS
+    - num_type_SSHFP
+    - num_type_IPSECKEY
+    - num_type_RRSIG
+    - num_type_NSEC
+    - num_type_DNSKEY
+    - num_type_DHCID
+    - num_type_NSEC3
+    - num_type_NSEC3PARAM
+    - num_type_TLSA
+    - num_type_SMIMEA
+    - num_type_CDS
+    - num_type_CDNSKEY
+    - num_type_OPENPGPKEY
+    - num_type_CSYNC
+    - num_type_SPF
+    - num_type_NID
+    - num_type_L32
+    - num_type_L64
+    - num_type_LP
+    - num_type_EUI48
+    - num_type_EUI64
+    - num_type_TYPE252
+    - num_type_TYPE253
+    - num_type_TYPE255
+    - num_opcode_QUERY
+    - num_opcode_NOTIFY
+    - num_class_CLASS0
+    - num_class_IN
+    - num_class_CH
+    - num_rcode_NOERROR
+    - num_rcode_FORMERR
+    - num_rcode_SERVFAIL
+    - num_rcode_NXDOMAIN
+    - num_rcode_NOTIMP
+    - num_rcode_REFUSED
+    - num_rcode_YXDOMAIN
+    - num_rcode_NOTAUTH
+    - num_edns
+    - num_ednserr
+    - num_udp
+    - num_udp6
+    - num_tcp
+    - num_tcp6
+    - num_tls
+    - num_tls6
+    - num_answer_wo_aa
+    - num_rxerr
+    - num_txerr
+    - num_raxfr
+    - num_truncated
+    - num_dropped
+    - zone_master
+    - zone_slave
+
+- nsd_servers
+  - tags:
+    - server
+  - fields:
+    - queries
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/nsq/_index.md b/content/telegraf/v1/input-plugins/nsq/_index.md
new file mode 100644
index 000000000..c5842e319
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/nsq/_index.md
@@ -0,0 +1,47 @@
+---
+description: "Telegraf plugin for collecting metrics from NSQ"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: NSQ
+    identifier: input-nsq
+tags: [NSQ, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# NSQ Input Plugin
+
+This plugin gathers metrics from [NSQ](https://nsq.io/).
+
+See the [NSQD API docs](https://nsq.io/components/nsqd.html) for endpoints that
+the plugin can read.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering, etc.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read NSQ topic and channel statistics.
+[[inputs.nsq]]
+  ## An array of NSQD HTTP API endpoints
+  endpoints  = ["http://localhost:4151"]
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+## Metrics
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/nsq_consumer/_index.md b/content/telegraf/v1/input-plugins/nsq_consumer/_index.md
new file mode 100644
index 000000000..c750a782c
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/nsq_consumer/_index.md
@@ -0,0 +1,76 @@
+---
+description: "Telegraf plugin for collecting metrics from NSQ Consumer"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: NSQ Consumer
+    identifier: input-nsq_consumer
+tags: [NSQ Consumer, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# NSQ Consumer Input Plugin
+
+The [NSQ](https://nsq.io) consumer plugin reads from NSQD and creates metrics using one
+of the supported [input data formats](/telegraf/v1/data_formats/input).
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering, etc.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from NSQD topic(s)
+[[inputs.nsq_consumer]]
+  ## An array representing the NSQD TCP endpoints
+  nsqd = ["localhost:4150"]
+
+  ## An array representing the NSQLookupd HTTP Endpoints
+  nsqlookupd = ["localhost:4161"]
+  topic = "telegraf"
+  channel = "consumer"
+  max_in_flight = 100
+
+  ## Max undelivered messages
+  ## This plugin uses tracking metrics, which ensure messages are read to
+  ## outputs before acknowledging them to the original broker to ensure data
+  ## is not lost. This option sets the maximum messages to read from the
+  ## broker that have not been written by an output.
+  ##
+  ## This value needs to be picked with awareness of the agent's
+  ## metric_batch_size value as well. Setting max undelivered messages too high
+  ## can result in a constant stream of data batches to the output. While
+  ## setting it too low may never flush the broker's messages.
+  # max_undelivered_messages = 1000
+
+  ## Data format to consume.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  data_format = "influx"
+```
+
+## Metrics
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/nstat/_index.md b/content/telegraf/v1/input-plugins/nstat/_index.md
new file mode 100644
index 000000000..a4363de6c
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/nstat/_index.md
@@ -0,0 +1,376 @@
+---
+description: "Telegraf plugin for collecting metrics from Nstat"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Nstat
+    identifier: input-nstat
+tags: [Nstat, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Nstat Input Plugin
+
+This plugin collects network metrics from the `/proc/net/netstat`,
+`/proc/net/snmp` and `/proc/net/snmp6` files.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering, etc.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Collect kernel snmp counters and network interface statistics
+[[inputs.nstat]]
+  ## file paths for proc files. If empty default paths will be used:
+  ##    /proc/net/netstat, /proc/net/snmp, /proc/net/snmp6
+  ## These can also be overridden with env variables, see README.
+  proc_net_netstat = "/proc/net/netstat"
+  proc_net_snmp = "/proc/net/snmp"
+  proc_net_snmp6 = "/proc/net/snmp6"
+  ## dump metrics with 0 values too
+  dump_zeros       = true
+```
+
+The plugin first tries to read the file paths from the configuration values.
+If these are empty, it reads the paths from the following environment
+variables:
+
+* `PROC_NET_NETSTAT`
+* `PROC_NET_SNMP`
+* `PROC_NET_SNMP6`
+
+If these variables are also not set, the plugin reads the proc root from the
+`PROC_ROOT` environment variable and falls back to `/proc` if `PROC_ROOT` is
+also empty.
+
+It then appends the default file paths:
+
+* `/net/netstat`
+* `/net/snmp`
+* `/net/snmp6`
+
+So if nothing is given, neither paths in the config nor in environment
+variables, the plugin uses the default paths:
+
+* `/proc/net/netstat`
+* `/proc/net/snmp`
+* `/proc/net/snmp6`
+
+If the `proc_net_snmp6` path doesn't exist (e.g. IPv6 is not enabled), no
+error is raised.
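
The resolution order described above can be sketched in Python (a hypothetical illustration of the documented behavior, not Telegraf's actual implementation):

```python
import os

def resolve_proc_path(config_value: str, env_var: str, default_suffix: str) -> str:
    # 1. An explicit config value always wins.
    if config_value:
        return config_value
    # 2. Otherwise, try the per-file environment variable.
    if os.environ.get(env_var):
        return os.environ[env_var]
    # 3. Otherwise, use PROC_ROOT (default "/proc") plus the default suffix.
    root = os.environ.get("PROC_ROOT", "/proc")
    return root + default_suffix

print(resolve_proc_path("", "PROC_NET_SNMP", "/net/snmp"))
```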
+
+## Metrics
+
+* nstat
+  * Icmp6InCsumErrors
+  * Icmp6InDestUnreachs
+  * Icmp6InEchoReplies
+  * Icmp6InEchos
+  * Icmp6InErrors
+  * Icmp6InGroupMembQueries
+  * Icmp6InGroupMembReductions
+  * Icmp6InGroupMembResponses
+  * Icmp6InMLDv2Reports
+  * Icmp6InMsgs
+  * Icmp6InNeighborAdvertisements
+  * Icmp6InNeighborSolicits
+  * Icmp6InParmProblems
+  * Icmp6InPktTooBigs
+  * Icmp6InRedirects
+  * Icmp6InRouterAdvertisements
+  * Icmp6InRouterSolicits
+  * Icmp6InTimeExcds
+  * Icmp6OutDestUnreachs
+  * Icmp6OutEchoReplies
+  * Icmp6OutEchos
+  * Icmp6OutErrors
+  * Icmp6OutGroupMembQueries
+  * Icmp6OutGroupMembReductions
+  * Icmp6OutGroupMembResponses
+  * Icmp6OutMLDv2Reports
+  * Icmp6OutMsgs
+  * Icmp6OutNeighborAdvertisements
+  * Icmp6OutNeighborSolicits
+  * Icmp6OutParmProblems
+  * Icmp6OutPktTooBigs
+  * Icmp6OutRedirects
+  * Icmp6OutRouterAdvertisements
+  * Icmp6OutRouterSolicits
+  * Icmp6OutTimeExcds
+  * Icmp6OutType133
+  * Icmp6OutType135
+  * Icmp6OutType143
+  * IcmpInAddrMaskReps
+  * IcmpInAddrMasks
+  * IcmpInCsumErrors
+  * IcmpInDestUnreachs
+  * IcmpInEchoReps
+  * IcmpInEchos
+  * IcmpInErrors
+  * IcmpInMsgs
+  * IcmpInParmProbs
+  * IcmpInRedirects
+  * IcmpInSrcQuenchs
+  * IcmpInTimeExcds
+  * IcmpInTimestampReps
+  * IcmpInTimestamps
+  * IcmpMsgInType3
+  * IcmpMsgOutType3
+  * IcmpOutAddrMaskReps
+  * IcmpOutAddrMasks
+  * IcmpOutDestUnreachs
+  * IcmpOutEchoReps
+  * IcmpOutEchos
+  * IcmpOutErrors
+  * IcmpOutMsgs
+  * IcmpOutParmProbs
+  * IcmpOutRedirects
+  * IcmpOutSrcQuenchs
+  * IcmpOutTimeExcds
+  * IcmpOutTimestampReps
+  * IcmpOutTimestamps
+  * Ip6FragCreates
+  * Ip6FragFails
+  * Ip6FragOKs
+  * Ip6InAddrErrors
+  * Ip6InBcastOctets
+  * Ip6InCEPkts
+  * Ip6InDelivers
+  * Ip6InDiscards
+  * Ip6InECT0Pkts
+  * Ip6InECT1Pkts
+  * Ip6InHdrErrors
+  * Ip6InMcastOctets
+  * Ip6InMcastPkts
+  * Ip6InNoECTPkts
+  * Ip6InNoRoutes
+  * Ip6InOctets
+  * Ip6InReceives
+  * Ip6InTooBigErrors
+  * Ip6InTruncatedPkts
+  * Ip6InUnknownProtos
+  * Ip6OutBcastOctets
+  * Ip6OutDiscards
+  * Ip6OutForwDatagrams
+  * Ip6OutMcastOctets
+  * Ip6OutMcastPkts
+  * Ip6OutNoRoutes
+  * Ip6OutOctets
+  * Ip6OutRequests
+  * Ip6ReasmFails
+  * Ip6ReasmOKs
+  * Ip6ReasmReqds
+  * Ip6ReasmTimeout
+  * IpDefaultTTL
+  * IpExtInBcastOctets
+  * IpExtInBcastPkts
+  * IpExtInCEPkts
+  * IpExtInCsumErrors
+  * IpExtInECT0Pkts
+  * IpExtInECT1Pkts
+  * IpExtInMcastOctets
+  * IpExtInMcastPkts
+  * IpExtInNoECTPkts
+  * IpExtInNoRoutes
+  * IpExtInOctets
+  * IpExtInTruncatedPkts
+  * IpExtOutBcastOctets
+  * IpExtOutBcastPkts
+  * IpExtOutMcastOctets
+  * IpExtOutMcastPkts
+  * IpExtOutOctets
+  * IpForwDatagrams
+  * IpForwarding
+  * IpFragCreates
+  * IpFragFails
+  * IpFragOKs
+  * IpInAddrErrors
+  * IpInDelivers
+  * IpInDiscards
+  * IpInHdrErrors
+  * IpInReceives
+  * IpInUnknownProtos
+  * IpOutDiscards
+  * IpOutNoRoutes
+  * IpOutRequests
+  * IpReasmFails
+  * IpReasmOKs
+  * IpReasmReqds
+  * IpReasmTimeout
+  * TcpActiveOpens
+  * TcpAttemptFails
+  * TcpCurrEstab
+  * TcpEstabResets
+  * TcpExtArpFilter
+  * TcpExtBusyPollRxPackets
+  * TcpExtDelayedACKLocked
+  * TcpExtDelayedACKLost
+  * TcpExtDelayedACKs
+  * TcpExtEmbryonicRsts
+  * TcpExtIPReversePathFilter
+  * TcpExtListenDrops
+  * TcpExtListenOverflows
+  * TcpExtLockDroppedIcmps
+  * TcpExtOfoPruned
+  * TcpExtOutOfWindowIcmps
+  * TcpExtPAWSActive
+  * TcpExtPAWSEstab
+  * TcpExtPAWSPassive
+  * TcpExtPruneCalled
+  * TcpExtRcvPruned
+  * TcpExtSyncookiesFailed
+  * TcpExtSyncookiesRecv
+  * TcpExtSyncookiesSent
+  * TcpExtTCPACKSkippedChallenge
+  * TcpExtTCPACKSkippedFinWait2
+  * TcpExtTCPACKSkippedPAWS
+  * TcpExtTCPACKSkippedSeq
+  * TcpExtTCPACKSkippedSynRecv
+  * TcpExtTCPACKSkippedTimeWait
+  * TcpExtTCPAbortFailed
+  * TcpExtTCPAbortOnClose
+  * TcpExtTCPAbortOnData
+  * TcpExtTCPAbortOnLinger
+  * TcpExtTCPAbortOnMemory
+  * TcpExtTCPAbortOnTimeout
+  * TcpExtTCPAutoCorking
+  * TcpExtTCPBacklogDrop
+  * TcpExtTCPChallengeACK
+  * TcpExtTCPDSACKIgnoredNoUndo
+  * TcpExtTCPDSACKIgnoredOld
+  * TcpExtTCPDSACKOfoRecv
+  * TcpExtTCPDSACKOfoSent
+  * TcpExtTCPDSACKOldSent
+  * TcpExtTCPDSACKRecv
+  * TcpExtTCPDSACKUndo
+  * TcpExtTCPDeferAcceptDrop
+  * TcpExtTCPDirectCopyFromBacklog
+  * TcpExtTCPDirectCopyFromPrequeue
+  * TcpExtTCPFACKReorder
+  * TcpExtTCPFastOpenActive
+  * TcpExtTCPFastOpenActiveFail
+  * TcpExtTCPFastOpenCookieReqd
+  * TcpExtTCPFastOpenListenOverflow
+  * TcpExtTCPFastOpenPassive
+  * TcpExtTCPFastOpenPassiveFail
+  * TcpExtTCPFastRetrans
+  * TcpExtTCPForwardRetrans
+  * TcpExtTCPFromZeroWindowAdv
+  * TcpExtTCPFullUndo
+  * TcpExtTCPHPAcks
+  * TcpExtTCPHPHits
+  * TcpExtTCPHPHitsToUser
+  * TcpExtTCPHystartDelayCwnd
+  * TcpExtTCPHystartDelayDetect
+  * TcpExtTCPHystartTrainCwnd
+  * TcpExtTCPHystartTrainDetect
+  * TcpExtTCPKeepAlive
+  * TcpExtTCPLossFailures
+  * TcpExtTCPLossProbeRecovery
+  * TcpExtTCPLossProbes
+  * TcpExtTCPLossUndo
+  * TcpExtTCPLostRetransmit
+  * TcpExtTCPMD5NotFound
+  * TcpExtTCPMD5Unexpected
+  * TcpExtTCPMTUPFail
+  * TcpExtTCPMTUPSuccess
+  * TcpExtTCPMemoryPressures
+  * TcpExtTCPMinTTLDrop
+  * TcpExtTCPOFODrop
+  * TcpExtTCPOFOMerge
+  * TcpExtTCPOFOQueue
+  * TcpExtTCPOrigDataSent
+  * TcpExtTCPPartialUndo
+  * TcpExtTCPPrequeueDropped
+  * TcpExtTCPPrequeued
+  * TcpExtTCPPureAcks
+  * TcpExtTCPRcvCoalesce
+  * TcpExtTCPRcvCollapsed
+  * TcpExtTCPRenoFailures
+  * TcpExtTCPRenoRecovery
+  * TcpExtTCPRenoRecoveryFail
+  * TcpExtTCPRenoReorder
+  * TcpExtTCPReqQFullDoCookies
+  * TcpExtTCPReqQFullDrop
+  * TcpExtTCPRetransFail
+  * TcpExtTCPSACKDiscard
+  * TcpExtTCPSACKReneging
+  * TcpExtTCPSACKReorder
+  * TcpExtTCPSYNChallenge
+  * TcpExtTCPSackFailures
+  * TcpExtTCPSackMerged
+  * TcpExtTCPSackRecovery
+  * TcpExtTCPSackRecoveryFail
+  * TcpExtTCPSackShiftFallback
+  * TcpExtTCPSackShifted
+  * TcpExtTCPSchedulerFailed
+  * TcpExtTCPSlowStartRetrans
+  * TcpExtTCPSpuriousRTOs
+  * TcpExtTCPSpuriousRtxHostQueues
+  * TcpExtTCPSynRetrans
+  * TcpExtTCPTSReorder
+  * TcpExtTCPTimeWaitOverflow
+  * TcpExtTCPTimeouts
+  * TcpExtTCPToZeroWindowAdv
+  * TcpExtTCPWantZeroWindowAdv
+  * TcpExtTCPWinProbe
+  * TcpExtTW
+  * TcpExtTWKilled
+  * TcpExtTWRecycled
+  * TcpInCsumErrors
+  * TcpInErrs
+  * TcpInSegs
+  * TcpMaxConn
+  * TcpOutRsts
+  * TcpOutSegs
+  * TcpPassiveOpens
+  * TcpRetransSegs
+  * TcpRtoAlgorithm
+  * TcpRtoMax
+  * TcpRtoMin
+  * Udp6IgnoredMulti
+  * Udp6InCsumErrors
+  * Udp6InDatagrams
+  * Udp6InErrors
+  * Udp6NoPorts
+  * Udp6OutDatagrams
+  * Udp6RcvbufErrors
+  * Udp6SndbufErrors
+  * UdpIgnoredMulti
+  * UdpInCsumErrors
+  * UdpInDatagrams
+  * UdpInErrors
+  * UdpLite6InCsumErrors
+  * UdpLite6InDatagrams
+  * UdpLite6InErrors
+  * UdpLite6NoPorts
+  * UdpLite6OutDatagrams
+  * UdpLite6RcvbufErrors
+  * UdpLite6SndbufErrors
+  * UdpLiteIgnoredMulti
+  * UdpLiteInCsumErrors
+  * UdpLiteInDatagrams
+  * UdpLiteInErrors
+  * UdpLiteNoPorts
+  * UdpLiteOutDatagrams
+  * UdpLiteRcvbufErrors
+  * UdpLiteSndbufErrors
+  * UdpNoPorts
+  * UdpOutDatagrams
+  * UdpRcvbufErrors
+  * UdpSndbufErrors
+
+### Tags
+
+* All measurements have the following tags
+  * host (host of the system)
+  * name (the type of the metric: snmp, snmp6 or netstat)
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/ntpq/_index.md b/content/telegraf/v1/input-plugins/ntpq/_index.md
new file mode 100644
index 000000000..661dfc458
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/ntpq/_index.md
@@ -0,0 +1,109 @@
+---
+description: "Telegraf plugin for collecting metrics from ntpq"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: ntpq
+    identifier: input-ntpq
+tags: [ntpq, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# ntpq Input Plugin
+
+Get standard NTP query metrics. Requires the `ntpq` executable.
+
+Below is the documentation of the various headers returned from the NTP query
+command when running `ntpq -p`.
+
+- remote – The remote peer or server being synced to. "LOCAL" is this local host
+(included in case there are no remote peers or servers available).
+- refid – Where or what the remote peer or server is itself synchronised to.
+- st (stratum) – The stratum of the remote peer or server.
+- t (type) – Type (u: unicast or manycast client, b: broadcast or multicast client,
+l: local reference clock, s: symmetric peer, A: manycast server,
+B: broadcast server, M: multicast server; see "Automatic Server Discovery").
+- when – When last polled (seconds ago, "h" hours ago, or "d" days ago).
+- poll – Polling frequency: RFC 5905 suggests this ranges in NTPv4 from 4 (16 s)
+to 17 (36 h) as log2 seconds; however, observation suggests the actual displayed
+value is in seconds for a much smaller range of 64 (2^6) to 1024 (2^10) seconds.
+- reach – An 8-bit left-shift register value recording polls (bit set =
+successful, bit reset = fail), displayed in octal.
+- delay – Round-trip communication delay to the remote peer or server (milliseconds).
+- offset – Mean offset (phase) in the times reported between this local host and
+the remote peer or server (RMS, milliseconds).
+- jitter – Mean deviation (jitter) in the time reported for that remote peer or
+server (RMS of difference of multiple time samples, milliseconds).
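
The log2 scale of the poll column can be sanity-checked with a quick calculation:

```python
# 'poll' as defined by the protocol is a log2 exponent: a value of p
# means the peer is polled every 2**p seconds.
print(2 ** 4)          # poll 4 -> 16 seconds
print(2 ** 17 / 3600)  # poll 17 -> roughly 36 hours
```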
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering, etc.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Get standard NTP query metrics, requires ntpq executable.
+[[inputs.ntpq]]
+  ## Servers to query with ntpq.
+  ## If no server is given, the local machine is queried.
+  # servers = []
+
+  ## If false, set the -n ntpq flag. Can reduce metric gather time.
+  ## DEPRECATED since 1.24.0: add '-n' to 'options' instead to skip DNS lookup
+  # dns_lookup = true
+
+  ## Options to pass to the ntpq command.
+  # options = "-p"
+
+  ## Output format for the 'reach' field.
+  ## Available values are
+  ##   octal   --  output as is in octal representation e.g. 377 (default)
+  ##   decimal --  convert value to decimal representation e.g. 371 -> 249
+  ##   count   --  count the number of bits in the value. This represents
+  ##               the number of successful reaches, e.g. 37 -> 5
+  ##   ratio   --  output the ratio of successful attempts e.g. 37 -> 5/8 = 0.625
+  # reach_format = "octal"
+```
+
+You can pass arbitrary options accepted by the `ntpq` command using the
+`options` setting. For example, to skip DNS lookups, use:
+
+```toml
+  options = "-p -n"
+```
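
The conversions listed for `reach_format` can be reproduced in a few lines of Python (illustrative only, not Telegraf's implementation):

```python
# The 'reach' column is an octal 8-bit shift register of the last polls.
reach_octal = "371"
print(int(reach_octal, 8))                  # decimal representation: 249

reach_octal = "37"
bits = bin(int(reach_octal, 8)).count("1")  # count of successful polls
print(bits)                                 # 5
print(bits / 8)                             # ratio: 5/8 = 0.625
```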
+
+## Metrics
+
+- ntpq
+  - delay (float, milliseconds)
+  - jitter (float, milliseconds)
+  - offset (float, milliseconds)
+  - poll (int, seconds)
+  - reach (int)
+  - when (int, seconds)
+
+### Tags
+
+All measurements have the following tags:
+
+- refid
+- remote
+- type
+- stratum
+
+If you specify `servers`, the measurement has an additional `source` tag.
+
+## Example Output
+
+```text
+ntpq,refid=.GPSs.,remote=*time.apple.com,stratum=1,type=u delay=91.797,jitter=3.735,offset=12.841,poll=64i,reach=377i,when=35i 1457960478909556134
+```
diff --git a/content/telegraf/v1/input-plugins/nvidia_smi/_index.md b/content/telegraf/v1/input-plugins/nvidia_smi/_index.md
new file mode 100644
index 000000000..40c4208b9
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/nvidia_smi/_index.md
@@ -0,0 +1,173 @@
+---
+description: "Telegraf plugin for collecting metrics from Nvidia System Management Interface (SMI)"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Nvidia System Management Interface (SMI)
+    identifier: input-nvidia_smi
+tags: [Nvidia System Management Interface (SMI), "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Nvidia System Management Interface (SMI) Input Plugin
+
+This plugin uses a query on the
+[`nvidia-smi`](https://developer.nvidia.com/nvidia-system-management-interface)
+binary to pull GPU stats including memory and GPU usage, temperature, and others.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering, etc.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Startup error behavior options
+
+In addition to the plugin-specific and global configuration settings the plugin
+supports options for specifying the behavior when experiencing startup errors
+using the `startup_error_behavior` setting. Available values are:
+
+- `error`:  Telegraf will stop and exit in case of startup errors. This is the
+            default behavior.
+- `ignore`: Telegraf will ignore startup errors for this plugin, disable it,
+            and continue processing all other plugins.
+- `retry`:  Not available for this plugin.
+
+## Configuration
+
+```toml @sample.conf
+# Pulls statistics from nvidia GPUs attached to the host
+[[inputs.nvidia_smi]]
+  ## Optional: path to nvidia-smi binary, defaults "/usr/bin/nvidia-smi"
+  ## We will first try to locate the nvidia-smi binary with the explicitly specified value (or default value),
+  ## if it is not found, we will try to locate it on PATH(exec.LookPath), if it is still not found, an error will be returned
+  # bin_path = "/usr/bin/nvidia-smi"
+
+  ## Optional: timeout for GPU polling
+  # timeout = "5s"
+```
+
+### Linux
+
+On Linux, `nvidia-smi` is generally located at `/usr/bin/nvidia-smi`.
+
+### Windows
+
+On Windows, `nvidia-smi` is generally located at `C:\Program Files\NVIDIA
+Corporation\NVSMI\nvidia-smi.exe`. On Windows 10, you may also find it at
+`C:\Windows\System32\nvidia-smi.exe`.
+
+You'll need to escape the `\` within `telegraf.conf`, like this: `C:\\Program
+Files\\NVIDIA Corporation\\NVSMI\\nvidia-smi.exe`
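
As a concrete illustration (assuming the common default install path shown above; adjust for your system), the escaped Windows configuration might look like:

```toml
[[inputs.nvidia_smi]]
  bin_path = "C:\\Program Files\\NVIDIA Corporation\\NVSMI\\nvidia-smi.exe"
```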
+
+## Metrics
+
+- measurement: `nvidia_smi`
+  - tags
+    - `name` (type of GPU e.g. `GeForce GTX 1070 Ti`)
+    - `compute_mode` (The compute mode of the GPU e.g. `Default`)
+    - `index` (The port index where the GPU is connected to the motherboard e.g. `1`)
+    - `pstate` (Overclocking state for the GPU e.g. `P0`)
+    - `uuid` (A unique identifier for the GPU e.g. `GPU-f9ba66fc-a7f5-94c5-da19-019ef2f9c665`)
+  - fields
+    - `fan_speed` (integer, percentage)
+    - `fbc_stats_session_count` (integer)
+    - `fbc_stats_average_fps` (integer)
+    - `fbc_stats_average_latency` (integer)
+    - `memory_free` (integer, MiB)
+    - `memory_used` (integer, MiB)
+    - `memory_total` (integer, MiB)
+    - `memory_reserved` (integer, MiB)
+    - `retired_pages_multiple_single_bit` (integer)
+    - `retired_pages_double_bit` (integer)
+    - `retired_pages_blacklist` (string)
+    - `retired_pages_pending` (string)
+    - `remapped_rows_correctable` (integer)
+    - `remapped_rows_uncorrectable` (integer)
+    - `remapped_rows_pending` (string)
+    - `remapped_rows_failure` (string)
+    - `power_draw` (float, W)
+    - `temperature_gpu` (integer, degrees C)
+    - `utilization_gpu` (integer, percentage)
+    - `utilization_memory` (integer, percentage)
+    - `utilization_encoder` (integer, percentage)
+    - `utilization_decoder` (integer, percentage)
+    - `pcie_link_gen_current` (integer)
+    - `pcie_link_width_current` (integer)
+    - `encoder_stats_session_count` (integer)
+    - `encoder_stats_average_fps` (integer)
+    - `encoder_stats_average_latency` (integer)
+    - `clocks_current_graphics` (integer, MHz)
+    - `clocks_current_sm` (integer, MHz)
+    - `clocks_current_memory` (integer, MHz)
+    - `clocks_current_video` (integer, MHz)
+    - `driver_version` (string)
+    - `cuda_version` (string)
+
+## Sample Query
+
+The query below could be used to alert on the average temperature of your
+GPUs over the last minute:
+
+```sql
+SELECT mean("temperature_gpu") FROM "nvidia_smi" WHERE time > now() - 5m GROUP BY time(1m), "index", "name", "host"
+```
+
+## Troubleshooting
+
+Check the full output by running `nvidia-smi` binary manually.
+
+Linux:
+
+```sh
+sudo -u telegraf -- /usr/bin/nvidia-smi -q -x
+```
+
+Windows:
+
+```sh
+"C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe" -q -x
+```
+
+Please include the output of this command if opening a GitHub issue.
+
+## Example Output
+
+```text
+nvidia_smi,compute_mode=Default,host=8218cf,index=0,name=GeForce\ GTX\ 1070,pstate=P2,uuid=GPU-823bc202-6279-6f2c-d729-868a30f14d96 fan_speed=100i,memory_free=7563i,memory_total=8112i,memory_used=549i,temperature_gpu=53i,utilization_gpu=100i,utilization_memory=90i 1523991122000000000
+nvidia_smi,compute_mode=Default,host=8218cf,index=1,name=GeForce\ GTX\ 1080,pstate=P2,uuid=GPU-f9ba66fc-a7f5-94c5-da19-019ef2f9c665 fan_speed=100i,memory_free=7557i,memory_total=8114i,memory_used=557i,temperature_gpu=50i,utilization_gpu=100i,utilization_memory=85i 1523991122000000000
+nvidia_smi,compute_mode=Default,host=8218cf,index=2,name=GeForce\ GTX\ 1080,pstate=P2,uuid=GPU-d4cfc28d-0481-8d07-b81a-ddfc63d74adf fan_speed=100i,memory_free=7557i,memory_total=8114i,memory_used=557i,temperature_gpu=58i,utilization_gpu=100i,utilization_memory=86i 1523991122000000000
+```
+
+## Limitations
+
+Note that there appears to be an issue with getting current memory clock values
+when the memory is overclocked. This may not affect everyone, but it is
+confirmed to be an issue on an EVGA 2080 Ti.
+
+**NOTE:** For use with Docker, either generate your own custom Docker image
+based on nvidia/cuda that also installs a telegraf package, or use [volume mount
+binding](https://docs.docker.com/storage/bind-mounts/) to inject the required
+binary into the Docker container. In particular, you will need to pass through
+the `/dev/nvidia*` devices, the `nvidia-smi` binary, and the NVIDIA libraries.
+A minimal docker-compose example of how to do this is:
+
+```yaml
+  telegraf:
+    image: telegraf
+    runtime: nvidia
+    devices:
+      - /dev/nvidiactl:/dev/nvidiactl
+      - /dev/nvidia0:/dev/nvidia0
+    volumes:
+      - ./telegraf/etc/telegraf.conf:/etc/telegraf/telegraf.conf:ro
+      - /usr/bin/nvidia-smi:/usr/bin/nvidia-smi:ro
+      - /usr/lib/x86_64-linux-gnu/nvidia:/usr/lib/x86_64-linux-gnu/nvidia:ro
+    environment:
+      - LD_PRELOAD=/usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so
+```
diff --git a/content/telegraf/v1/input-plugins/opcua/_index.md b/content/telegraf/v1/input-plugins/opcua/_index.md
new file mode 100644
index 000000000..e19c25f44
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/opcua/_index.md
@@ -0,0 +1,287 @@
+---
+description: "Telegraf plugin for collecting metrics from OPC UA Client Reader"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: OPC UA Client Reader
+    identifier: input-opcua
+tags: [OPC UA Client Reader, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# OPC UA Client Reader Input Plugin
+
+The `opcua` plugin retrieves data from OPC UA Server devices.
+
+Telegraf minimum version: Telegraf 1.16
+Plugin minimum tested version: 1.16
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `username` and
+`password` option.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Retrieve data from OPCUA devices
+[[inputs.opcua]]
+  ## Metric name
+  # name = "opcua"
+  #
+  ## OPC UA Endpoint URL
+  # endpoint = "opc.tcp://localhost:4840"
+  #
+  ## Maximum time allowed to establish a connection to the endpoint.
+  # connect_timeout = "10s"
+  #
+  ## Maximum time allowed for a request over the established connection.
+  # request_timeout = "5s"
+
+  # Maximum time that a session shall remain open without activity.
+  # session_timeout = "20m"
+  #
+  ## Security policy, one of "None", "Basic128Rsa15", "Basic256",
+  ## "Basic256Sha256", or "auto"
+  # security_policy = "auto"
+  #
+  ## Security mode, one of "None", "Sign", "SignAndEncrypt", or "auto"
+  # security_mode = "auto"
+  #
+  ## Path to cert.pem. Required when security mode or policy isn't "None".
+  ## If cert path is not supplied, self-signed cert and key will be generated.
+  # certificate = "/etc/telegraf/cert.pem"
+  #
+  ## Path to private key.pem. Required when security mode or policy isn't "None".
+  ## If key path is not supplied, self-signed cert and key will be generated.
+  # private_key = "/etc/telegraf/key.pem"
+  #
+  ## Authentication Method, one of "Certificate", "UserName", or "Anonymous".  To
+  ## authenticate using a specific ID, select 'Certificate' or 'UserName'
+  # auth_method = "Anonymous"
+  #
+  ## Username. Required for auth_method = "UserName"
+  # username = ""
+  #
+  ## Password. Required for auth_method = "UserName"
+  # password = ""
+  #
+  ## Option to select the metric timestamp to use. Valid options are:
+  ##     "gather" -- uses the time of receiving the data in telegraf
+  ##     "server" -- uses the timestamp provided by the server
+  ##     "source" -- uses the timestamp provided by the source
+  # timestamp = "gather"
+  #
+  ## Client trace messages
+  ## When set to true, and debug mode enabled in the agent settings, the OPCUA
+  ## client's messages are included in telegraf logs. These messages are very
+  ## noisy, but essential for debugging issues.
+  # client_trace = false
+  #
+  ## Include additional Fields in each metric
+  ## Available options are:
+  ##   DataType -- OPC-UA Data Type (string)
+  # optional_fields = []
+  #
+  ## Node ID configuration
+  ## name              - field name to use in the output
+  ## namespace         - OPC UA namespace of the node (integer value 0 through 3)
+  ## identifier_type   - OPC UA ID type (s=string, i=numeric, g=guid, b=opaque)
+  ## identifier        - OPC UA ID (tag as shown in opcua browser)
+  ## default_tags      - extra tags to be added to the output metric (optional)
+  ##
+  ## Use either the inline notation or the bracketed notation, not both.
+  #
+  ## Inline notation (default_tags not supported yet)
+  # nodes = [
+  #   {name="", namespace="", identifier_type="", identifier=""},
+  # ]
+  #
+  ## Bracketed notation
+  # [[inputs.opcua.nodes]]
+  #   name = "node1"
+  #   namespace = ""
+  #   identifier_type = ""
+  #   identifier = ""
+  #   default_tags = { tag1 = "value1", tag2 = "value2" }
+  #
+  # [[inputs.opcua.nodes]]
+  #   name = "node2"
+  #   namespace = ""
+  #   identifier_type = ""
+  #   identifier = ""
+  #
+  ## Node Group
+  ## Sets defaults so they aren't required in every node.
+  ## Default values can be set for:
+  ## * Metric name
+  ## * OPC UA namespace
+  ## * Identifier
+  ## * Default tags
+  ##
+  ## Multiple node groups are allowed
+  #[[inputs.opcua.group]]
+  ## Group Metric name. Overrides the top level name.  If unset, the
+  ## top level name is used.
+  # name =
+  #
+  ## Group default namespace. If a node in the group doesn't set its
+  ## namespace, this is used.
+  # namespace =
+  #
+  ## Group default identifier type. If a node in the group doesn't set its
+  ## identifier type, this is used.
+  # identifier_type =
+  #
+  ## Default tags that are applied to every node in this group. Can be
+  ## overwritten in a node by setting a different value for the tag name.
+  ##   example: default_tags = { tag1 = "value1" }
+  # default_tags = {}
+  #
+  ## Node ID Configuration.  Array of nodes with the same settings as above.
+  ## Use either the inline notation or the bracketed notation, not both.
+  #
+  ## Inline notation (default_tags not supported yet)
+  # nodes = [
+  #  {name="node1", namespace="", identifier_type="", identifier=""},
+  #  {name="node2", namespace="", identifier_type="", identifier=""},
+  #]
+  #
+  ## Bracketed notation
+  # [[inputs.opcua.group.nodes]]
+  #   name = "node1"
+  #   namespace = ""
+  #   identifier_type = ""
+  #   identifier = ""
+  #   default_tags = { tag1 = "override1", tag2 = "value2" }
+  #
+  # [[inputs.opcua.group.nodes]]
+  #   name = "node2"
+  #   namespace = ""
+  #   identifier_type = ""
+  #   identifier = ""
+
+  ## Enable workarounds required by some devices to work correctly
+  # [inputs.opcua.workarounds]
+    ## Set additional valid status codes, StatusOK (0x0) is always considered valid
+    # additional_valid_status_codes = ["0xC0"]
+
+  # [inputs.opcua.request_workarounds]
+    ## Use unregistered reads instead of registered reads
+    # use_unregistered_reads = false
+```
+
+## Node Configuration
+
+An OPC UA node ID may resemble: "ns=3;s=Temperature". In this example:
+
+- `ns=3` indicates that the `namespace` is 3
+- `s=Temperature` indicates that the `identifier_type` is a string and the `identifier` value is 'Temperature'
+- This example temperature node has a value of 79.0
+
+To gather data from this node, enter the following line into the 'nodes' property above:
+
+```text
+{name="temp", namespace="3", identifier_type="s", identifier="Temperature"},
+```
+
+This node configuration produces a metric like this:
+
+```text
+opcua,id=ns\=3;s\=Temperature temp=79.0,Quality="OK (0x0)" 1597820490000000000
+```
+
+With 'DataType' entered in Additional Metrics, this node configuration
+produces a metric like this:
+
+```text
+opcua,id=ns\=3;s\=Temperature temp=79.0,Quality="OK (0x0)",DataType="Float" 1597820490000000000
+```
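+
+Putting this together, a minimal plugin configuration for the node above could
+look like the following sketch (the endpoint is illustrative):
+
+```toml
+[[inputs.opcua]]
+  endpoint = "opc.tcp://localhost:4840"
+  ## Inline node notation
+  nodes = [
+    {name="temp", namespace="3", identifier_type="s", identifier="Temperature"},
+  ]
+```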
+
+## Group Configuration
+
+Groups can set default values for the namespace, identifier type, and
+tags settings.  The default values apply to all the nodes in the
+group.  If a default is set, a node may omit the setting altogether.
+This simplifies node configuration, especially when many nodes share
+the same namespace or identifier type.
+
+The output metric will include tags set in the group and the node.  If
+a tag with the same name is set in both places, the tag value from the
+node is used.
+
+This example group configuration has three groups with two nodes each:
+
+```toml
+  # Group 1
+  [[inputs.opcua.group]]
+    name = "group1_metric_name"
+    namespace = "3"
+    identifier_type = "i"
+    default_tags = { group1_tag = "val1" }
+    [[inputs.opcua.group.nodes]]
+      name = "name"
+      identifier = "1001"
+      default_tags = { node1_tag = "val2" }
+    [[inputs.opcua.group.nodes]]
+      name = "name"
+      identifier = "1002"
+      default_tags = {node1_tag = "val3"}
+
+  # Group 2
+  [[inputs.opcua.group]]
+    name = "group2_metric_name"
+    namespace = "3"
+    identifier_type = "i"
+    default_tags = { group2_tag = "val3" }
+    [[inputs.opcua.group.nodes]]
+      name = "saw"
+      identifier = "1003"
+      default_tags = { node2_tag = "val4" }
+    [[inputs.opcua.group.nodes]]
+      name = "sin"
+      identifier = "1004"
+
+  # Group 3
+  [[inputs.opcua.group]]
+    name = "group3_metric_name"
+    namespace = "3"
+    identifier_type = "i"
+    default_tags = { group3_tag = "val5" }
+    nodes = [
+      {name="name", identifier="1001"},
+      {name="name", identifier="1002"},
+    ]
+```
+
+## Connection Service
+
+This plugin actively reads the configured nodes to retrieve data from the
+OPC UA server. A read request is issued every `interval`.
+
+## Metrics
+
+The metrics collected by this input plugin will depend on the
+configured `nodes` and `group`.
+
+## Example Output
+
+```text
+group1_metric_name,group1_tag=val1,id=ns\=3;i\=1001,node1_tag=val2 name=0,Quality="OK (0x0)" 1606893246000000000
+group1_metric_name,group1_tag=val1,id=ns\=3;i\=1002,node1_tag=val3 name=-1.389117,Quality="OK (0x0)" 1606893246000000000
+group2_metric_name,group2_tag=val3,id=ns\=3;i\=1003,node2_tag=val4 Quality="OK (0x0)",saw=-1.6 1606893246000000000
+group2_metric_name,group2_tag=val3,id=ns\=3;i\=1004 sin=1.902113,Quality="OK (0x0)" 1606893246000000000
+```
diff --git a/content/telegraf/v1/input-plugins/opcua_listener/_index.md b/content/telegraf/v1/input-plugins/opcua_listener/_index.md
new file mode 100644
index 000000000..47b52a2bf
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/opcua_listener/_index.md
@@ -0,0 +1,375 @@
+---
+description: "Telegraf plugin for collecting metrics from OPC UA Client Listener"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: OPC UA Client Listener
+    identifier: input-opcua_listener
+tags: [OPC UA Client Listener, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# OPC UA Client Listener Input Plugin
+
+The `opcua_listener` plugin subscribes to data from OPC UA Server devices.
+
+Telegraf minimum version: Telegraf 1.25
+Plugin minimum tested version: 1.25
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `username` and
+`password` option.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Retrieve data from OPCUA devices
+[[inputs.opcua_listener]]
+  ## Metric name
+  # name = "opcua_listener"
+  #
+  ## OPC UA Endpoint URL
+  # endpoint = "opc.tcp://localhost:4840"
+  #
+  ## Maximum time allowed to establish a connection to the endpoint.
+  # connect_timeout = "10s"
+  #
+  ## Behavior when the plugin fails to connect to the endpoint on initialization. Valid options are:
+  ##     "error": throw an error and exit Telegraf
+  ##     "ignore": ignore this plugin if errors are encountered
+  ##     "retry": retry connecting at each interval
+  # connect_fail_behavior = "error"
+  #
+  ## Maximum time allowed for a request over the established connection.
+  # request_timeout = "5s"
+  #
+  # Maximum time that a session shall remain open without activity.
+  # session_timeout = "20m"
+  #
+  ## The interval at which the server should at least update its monitored items.
+  ## Please note that the OPC UA server might reject the specified interval if it cannot meet the required update rate.
+  ## Therefore, always refer to the hardware/software documentation of your server to ensure the specified interval is supported.
+  # subscription_interval = "100ms"
+  #
+  ## Security policy, one of "None", "Basic128Rsa15", "Basic256",
+  ## "Basic256Sha256", or "auto"
+  # security_policy = "auto"
+  #
+  ## Security mode, one of "None", "Sign", "SignAndEncrypt", or "auto"
+  # security_mode = "auto"
+  #
+  ## Path to cert.pem. Required when security mode or policy isn't "None".
+  ## If cert path is not supplied, self-signed cert and key will be generated.
+  # certificate = "/etc/telegraf/cert.pem"
+  #
+  ## Path to private key.pem. Required when security mode or policy isn't "None".
+  ## If key path is not supplied, self-signed cert and key will be generated.
+  # private_key = "/etc/telegraf/key.pem"
+  #
+  ## Authentication Method, one of "Certificate", "UserName", or "Anonymous".  To
+  ## authenticate using a specific ID, select 'Certificate' or 'UserName'
+  # auth_method = "Anonymous"
+  #
+  ## Username. Required for auth_method = "UserName"
+  # username = ""
+  #
+  ## Password. Required for auth_method = "UserName"
+  # password = ""
+  #
+  ## Option to select the metric timestamp to use. Valid options are:
+  ##     "gather" -- uses the time of receiving the data in telegraf
+  ##     "server" -- uses the timestamp provided by the server
+  ##     "source" -- uses the timestamp provided by the source
+  # timestamp = "gather"
+  #
+  ## The default timestamp format is RFC3339Nano.
+  ## Other timestamp layouts can be configured using the Go language time
+  ## layout specification from https://golang.org/pkg/time/#Time.Format
+  ## e.g.: timestamp_format = "2006-01-02T15:04:05Z07:00"
+  # timestamp_format = ""
+  #
+  ## Client trace messages
+  ## When set to true, and debug mode enabled in the agent settings, the OPCUA
+  ## client's messages are included in telegraf logs. These messages are very
+  ## noisy, but essential for debugging issues.
+  # client_trace = false
+  #
+  ## Include additional Fields in each metric
+  ## Available options are:
+  ##   DataType -- OPC-UA Data Type (string)
+  # optional_fields = []
+  #
+  ## Node ID configuration
+  ## name              - field name to use in the output
+  ## namespace         - OPC UA namespace of the node (integer value 0 through 3)
+  ## identifier_type   - OPC UA ID type (s=string, i=numeric, g=guid, b=opaque)
+  ## identifier        - OPC UA ID (tag as shown in opcua browser)
+  ## default_tags      - extra tags to be added to the output metric (optional)
+  ## monitoring_params - additional settings for the monitored node (optional)
+  ##
+  ## Monitoring parameters
+  ## sampling_interval  - interval at which the server should check for data
+  ##                      changes (default: 0s)
+  ## queue_size         - size of the notification queue (default: 10)
+  ## discard_oldest     - how notifications should be handled in case of full
+  ##                      notification queues, possible values:
+  ##                      true: oldest value added to queue gets replaced with new
+  ##                            (default)
+  ##                      false: last value added to queue gets replaced with new
+  ## data_change_filter - defines the condition under which a notification should
+  ##                      be reported
+  ##
+  ## Data change filter
+  ## trigger        - specify the conditions under which a data change notification
+  ##                  should be reported, possible values:
+  ##                  "Status": only report notifications if the status changes
+  ##                            (default if parameter is omitted)
+  ##                  "StatusValue": report notifications if either status or value
+  ##                                 changes
+  ##                  "StatusValueTimestamp": report notifications if either status,
+  ##                                          value or timestamp changes
+  ## deadband_type  - type of the deadband filter to be applied, possible values:
+  ##                  "Absolute": absolute change in a data value to report a notification
+  ##                  "Percent": works only with nodes that have an EURange property set
+  ##                             and is defined as: send notification if
+  ##                             (last value - current value) >
+  ##                             (deadband_value/100.0) * ((high–low) of EURange)
+  ## deadband_value - value to deadband_type, must be a float value, no filter is set
+  ##                  for negative values
+  ##
+  ## Use either the inline notation or the bracketed notation, not both.
+  #
+  ## Inline notation (default_tags and monitoring_params not supported yet)
+  # nodes = [
+  #   {name="node1", namespace="", identifier_type="", identifier=""},
+  #   {name="node2", namespace="", identifier_type="", identifier=""}
+  # ]
+  #
+  ## Bracketed notation
+  # [[inputs.opcua_listener.nodes]]
+  #   name = "node1"
+  #   namespace = ""
+  #   identifier_type = ""
+  #   identifier = ""
+  #   default_tags = { tag1 = "value1", tag2 = "value2" }
+  #
+  # [[inputs.opcua_listener.nodes]]
+  #   name = "node2"
+  #   namespace = ""
+  #   identifier_type = ""
+  #   identifier = ""
+  #
+  #   [inputs.opcua_listener.nodes.monitoring_params]
+  #     sampling_interval = "0s"
+  #     queue_size = 10
+  #     discard_oldest = true
+  #
+  #     [inputs.opcua_listener.nodes.monitoring_params.data_change_filter]
+  #       trigger = "Status"
+  #       deadband_type = "Absolute"
+  #       deadband_value = 0.0
+  #
+  ## Node Group
+  ## Sets defaults so they aren't required in every node.
+  ## Default values can be set for:
+  ## * Metric name
+  ## * OPC UA namespace
+  ## * Identifier
+  ## * Default tags
+  ## * Sampling interval
+  ##
+  ## Multiple node groups are allowed
+  #[[inputs.opcua_listener.group]]
+  ## Group Metric name. Overrides the top level name.  If unset, the
+  ## top level name is used.
+  # name =
+  #
+  ## Group default namespace. If a node in the group doesn't set its
+  ## namespace, this is used.
+  # namespace =
+  #
+  ## Group default identifier type. If a node in the group doesn't set its
+  ## identifier type, this is used.
+  # identifier_type =
+  #
+  ## Default tags that are applied to every node in this group. Can be
+  ## overwritten in a node by setting a different value for the tag name.
+  ##   example: default_tags = { tag1 = "value1" }
+  # default_tags = {}
+  #
+  ## Group default sampling interval. If a node in the group doesn't set its
+  ## sampling interval, this is used.
+  # sampling_interval = "0s"
+  #
+  ## Node ID Configuration.  Array of nodes with the same settings as above.
+  ## Use either the inline notation or the bracketed notation, not both.
+  #
+  ## Inline notation (default_tags and monitoring_params not supported yet)
+  # nodes = [
+  #  {name="node1", namespace="", identifier_type="", identifier=""},
+  #  {name="node2", namespace="", identifier_type="", identifier=""}
+  #]
+  #
+  ## Bracketed notation
+  # [[inputs.opcua_listener.group.nodes]]
+  #   name = "node1"
+  #   namespace = ""
+  #   identifier_type = ""
+  #   identifier = ""
+  #   default_tags = { tag1 = "override1", tag2 = "value2" }
+  #
+  # [[inputs.opcua_listener.group.nodes]]
+  #   name = "node2"
+  #   namespace = ""
+  #   identifier_type = ""
+  #   identifier = ""
+  #
+  #   [inputs.opcua_listener.group.nodes.monitoring_params]
+  #     sampling_interval = "0s"
+  #     queue_size = 10
+  #     discard_oldest = true
+  #
+  #     [inputs.opcua_listener.group.nodes.monitoring_params.data_change_filter]
+  #       trigger = "Status"
+  #       deadband_type = "Absolute"
+  #       deadband_value = 0.0
+  #
+
+  ## Enable workarounds required by some devices to work correctly
+  # [inputs.opcua_listener.workarounds]
+    ## Set additional valid status codes, StatusOK (0x0) is always considered valid
+    # additional_valid_status_codes = ["0xC0"]
+
+  # [inputs.opcua_listener.request_workarounds]
+    ## Use unregistered reads instead of registered reads
+    # use_unregistered_reads = false
+```
+
+## Node Configuration
+
+An OPC UA node ID may resemble: "ns=3;s=Temperature". In this example:
+
+- `ns=3` indicates that the `namespace` is 3
+- `s=Temperature` indicates that the `identifier_type` is a string and the `identifier` value is 'Temperature'
+- This example temperature node has a value of 79.0
+
+To gather data from this node, enter the following line into the 'nodes' property above:
+
+```text
+{name="temp", namespace="3", identifier_type="s", identifier="Temperature"},
+```
+
+This node configuration produces a metric like this:
+
+```text
+opcua,id=ns\=3;s\=Temperature temp=79.0,Quality="OK (0x0)" 1597820490000000000
+```
+
+With 'DataType' entered in Additional Metrics, this node configuration
+produces a metric like this:
+
+```text
+opcua,id=ns\=3;s\=Temperature temp=79.0,Quality="OK (0x0)",DataType="Float" 1597820490000000000
+```
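+
+Putting this together, a minimal subscription configuration for the node above
+could look like the following sketch (the endpoint, intervals, and filter
+values are illustrative):
+
+```toml
+[[inputs.opcua_listener]]
+  endpoint = "opc.tcp://localhost:4840"
+  subscription_interval = "100ms"
+
+  [[inputs.opcua_listener.nodes]]
+    name = "temp"
+    namespace = "3"
+    identifier_type = "s"
+    identifier = "Temperature"
+
+    ## Only report notifications when the status or value changes
+    [inputs.opcua_listener.nodes.monitoring_params]
+      sampling_interval = "100ms"
+      queue_size = 10
+
+      [inputs.opcua_listener.nodes.monitoring_params.data_change_filter]
+        trigger = "StatusValue"
+```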
+
+## Group Configuration
+
+Groups can set default values for the namespace, identifier type, tags
+settings and sampling interval.  The default values apply to all the
+nodes in the group.  If a default is set, a node may omit the setting
+altogether. This simplifies node configuration, especially when many
+nodes share the same namespace or identifier type.
+
+The output metric will include tags set in the group and the node.  If
+a tag with the same name is set in both places, the tag value from the
+node is used.
+
+This example group configuration has three groups with two nodes each:
+
+```toml
+  # Group 1
+  [[inputs.opcua_listener.group]]
+    name = "group1_metric_name"
+    namespace = "3"
+    identifier_type = "i"
+    default_tags = { group1_tag = "val1" }
+    [[inputs.opcua_listener.group.nodes]]
+      name = "name"
+      identifier = "1001"
+      default_tags = { node1_tag = "val2" }
+    [[inputs.opcua_listener.group.nodes]]
+      name = "name"
+      identifier = "1002"
+      default_tags = {node1_tag = "val3"}
+
+  # Group 2
+  [[inputs.opcua_listener.group]]
+    name = "group2_metric_name"
+    namespace = "3"
+    identifier_type = "i"
+    default_tags = { group2_tag = "val3" }
+    [[inputs.opcua_listener.group.nodes]]
+      name = "saw"
+      identifier = "1003"
+      default_tags = { node2_tag = "val4" }
+    [[inputs.opcua_listener.group.nodes]]
+      name = "sin"
+      identifier = "1004"
+
+  # Group 3
+  [[inputs.opcua_listener.group]]
+    name = "group3_metric_name"
+    namespace = "3"
+    identifier_type = "i"
+    default_tags = { group3_tag = "val5" }
+    nodes = [
+      {name="name", identifier="1001"},
+      {name="name", identifier="1002"},
+    ]
+```
+
+## Connection Service
+
+This plugin subscribes to the specified nodes to receive data from
+the OPC server. The updates are received at most as fast as the
+`subscription_interval`.
+
+## Metrics
+
+The metrics collected by this input plugin will depend on the
+configured `nodes` and `group`.
+
+## Example Output
+
+```text
+group1_metric_name,group1_tag=val1,id=ns\=3;i\=1001,node1_tag=val2 name=0,Quality="OK (0x0)" 1606893246000000000
+group1_metric_name,group1_tag=val1,id=ns\=3;i\=1002,node1_tag=val3 name=-1.389117,Quality="OK (0x0)" 1606893246000000000
+group2_metric_name,group2_tag=val3,id=ns\=3;i\=1003,node2_tag=val4 Quality="OK (0x0)",saw=-1.6 1606893246000000000
+group2_metric_name,group2_tag=val3,id=ns\=3;i\=1004 sin=1.902113,Quality="OK (0x0)" 1606893246000000000
+```
diff --git a/content/telegraf/v1/input-plugins/openldap/_index.md b/content/telegraf/v1/input-plugins/openldap/_index.md
new file mode 100644
index 000000000..7c965ab59
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/openldap/_index.md
@@ -0,0 +1,124 @@
+---
+description: "Telegraf plugin for collecting metrics from OpenLDAP"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: OpenLDAP
+    identifier: input-openldap
+tags: [OpenLDAP, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# OpenLDAP Input Plugin
+
+This plugin gathers metrics from OpenLDAP's cn=Monitor backend.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# OpenLDAP cn=Monitor plugin
+[[inputs.openldap]]
+  host = "localhost"
+  port = 389
+
+  # ldaps, starttls, or no encryption. default is an empty string, disabling all encryption.
+  # note that port will likely need to be changed to 636 for ldaps
+  # valid options: "" | "starttls" | "ldaps"
+  tls = ""
+
+  # skip peer certificate verification. Default is false.
+  insecure_skip_verify = false
+
+  # Path to PEM-encoded Root certificate to use to verify server certificate
+  tls_ca = "/etc/ssl/certs.pem"
+
+  # dn/password to bind with. If bind_dn is empty, an anonymous bind is performed.
+  bind_dn = ""
+  bind_password = ""
+
+  # reverse metric names so they sort more naturally
+  # Defaults to false if unset, but is set to true when generating a new config
+  reverse_metric_names = true
+```
+
+To use this plugin you must enable the [slapd
+monitoring](https://www.openldap.org/devel/admin/monitoringslapd.html) backend.
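+
+A sketch of what enabling the monitor backend might look like in a
+`slapd.conf`-style configuration (the bind DN is an assumption; adapt it to
+your directory):
+
+```text
+# Enable the cn=Monitor backend
+database monitor
+
+# Allow a dedicated monitoring user (example DN) to read the monitor subtree
+access to dn.subtree="cn=Monitor"
+    by dn.exact="cn=telegraf,dc=example,dc=com" read
+    by * none
+```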
+
+## Metrics
+
+All **monitorCounter**, **monitoredInfo**, **monitorOpInitiated**, and
+**monitorOpCompleted** attributes are gathered based on this LDAP query:
+
+```text
+(|(objectClass=monitorCounterObject)(objectClass=monitorOperation)(objectClass=monitoredObject))
+```
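+
+To inspect what the plugin will see, you can run a similar query manually with
+`ldapsearch` (host and bind options are illustrative; add `-D`/`-w` if an
+anonymous bind is not allowed):
+
+```sh
+ldapsearch -H ldap://localhost:389 -x -b 'cn=Monitor' -s sub \
+  '(|(objectClass=monitorCounterObject)(objectClass=monitorOperation)(objectClass=monitoredObject))' \
+  monitorCounter monitoredInfo monitorOpInitiated monitorOpCompleted
+```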
+
+Metric names are derived from the entry DN with the cn=Monitor base removed. If
+`reverse_metric_names` is set to `true`, the DN components are reversed before
+being joined; for example, the DN `cn=Current,cn=Connections,cn=Monitor`
+produces `connections_current` instead of `current_connections`. This is
+recommended as it allows the names to sort more naturally.
+
+Metrics for the **monitorOp*** attributes have **_initiated** and **_completed**
+added to the base name as appropriate.
+
+An OpenLDAP 2.4 server will provide these metrics:
+
+- openldap
+  - connections_current
+  - connections_max_file_descriptors
+  - connections_total
+  - operations_abandon_completed
+  - operations_abandon_initiated
+  - operations_add_completed
+  - operations_add_initiated
+  - operations_bind_completed
+  - operations_bind_initiated
+  - operations_compare_completed
+  - operations_compare_initiated
+  - operations_delete_completed
+  - operations_delete_initiated
+  - operations_extended_completed
+  - operations_extended_initiated
+  - operations_modify_completed
+  - operations_modify_initiated
+  - operations_modrdn_completed
+  - operations_modrdn_initiated
+  - operations_search_completed
+  - operations_search_initiated
+  - operations_unbind_completed
+  - operations_unbind_initiated
+  - statistics_bytes
+  - statistics_entries
+  - statistics_pdu
+  - statistics_referrals
+  - threads_active
+  - threads_backload
+  - threads_max
+  - threads_max_pending
+  - threads_open
+  - threads_pending
+  - threads_starting
+  - time_uptime
+  - waiters_read
+  - waiters_write
+
+### Tags
+
+- server= # value from config
+- port= # value from config
+
+## Example Output
+
+```text
+openldap,server=localhost,port=389,host=niska.ait.psu.edu operations_bind_initiated=10i,operations_unbind_initiated=6i,operations_modrdn_completed=0i,operations_delete_initiated=0i,operations_add_completed=2i,operations_delete_completed=0i,operations_abandon_completed=0i,statistics_entries=1516i,threads_open=2i,threads_active=1i,waiters_read=1i,operations_modify_completed=0i,operations_extended_initiated=4i,threads_pending=0i,operations_search_initiated=36i,operations_compare_initiated=0i,connections_max_file_descriptors=4096i,operations_modify_initiated=0i,operations_modrdn_initiated=0i,threads_max=16i,time_uptime=6017i,connections_total=1037i,connections_current=1i,operations_add_initiated=2i,statistics_bytes=162071i,operations_unbind_completed=6i,operations_abandon_initiated=0i,statistics_pdu=1566i,threads_max_pending=0i,threads_backload=1i,waiters_write=0i,operations_bind_completed=10i,operations_search_completed=35i,operations_compare_completed=0i,operations_extended_completed=4i,statistics_referrals=0i,threads_starting=0i 1516912070000000000
+```
diff --git a/content/telegraf/v1/input-plugins/openntpd/_index.md b/content/telegraf/v1/input-plugins/openntpd/_index.md
new file mode 100644
index 000000000..be9484536
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/openntpd/_index.md
@@ -0,0 +1,118 @@
+---
+description: "Telegraf plugin for collecting metrics from OpenNTPD"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: OpenNTPD
+    identifier: input-openntpd
+tags: [OpenNTPD, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# OpenNTPD Input Plugin
+
+Get standard NTP query metrics from [OpenNTPD](http://www.openntpd.org/) using
+the `ntpctl` command.
+
+Below is the documentation of the various headers returned from the NTP query
+command when running `ntpctl -s peers`:
+
+- remote – the remote peer or server being synced to
+- wt – the peer weight
+- tl – the peer trust level
+- st (stratum) – the stratum of the remote peer or server
+- next – number of seconds until the next poll
+- poll – polling interval in seconds
+- delay – round-trip communication delay to the remote peer or server
+  (milliseconds)
+- offset – mean offset (phase) in the times reported between this local host
+  and the remote peer or server (RMS, milliseconds)
+- jitter – mean deviation (jitter) in the time reported for that remote peer
+  or server (RMS of difference of multiple time samples, milliseconds)
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Get standard NTP query metrics from OpenNTPD.
+[[inputs.openntpd]]
+  ## Run ntpctl binary with sudo.
+  # use_sudo = false
+
+  ## Location of the ntpctl binary.
+  # binary = "/usr/sbin/ntpctl"
+
+  ## Maximum time the ntpctl binary is allowed to run.
+  # timeout = "5ms"
+```
+
+## Metrics
+
+- ntpctl
+  - tags:
+    - remote
+    - stratum
+  - fields:
+    - delay (float, milliseconds)
+    - jitter (float, milliseconds)
+    - offset (float, milliseconds)
+    - poll (int, seconds)
+    - next (int, seconds)
+    - wt (int)
+    - tl (int)
+
+## Permissions
+
+It's important to note that this plugin references ntpctl, which may require
+additional permissions to execute successfully.
+Depending on the user/group permissions of the telegraf user executing this
+plugin, you may need to alter the group membership, set facls, or use sudo.
+
+**Group membership (Recommended)**:
+
+```bash
+$ groups telegraf
+telegraf : telegraf
+
+$ usermod -a -G ntpd telegraf
+
+$ groups telegraf
+telegraf : telegraf ntpd
+```
+
+**Sudo privileges**:
+If you use this method, you will need the following in your telegraf config:
+
+```toml
+[[inputs.openntpd]]
+  use_sudo = true
+```
+
+You will also need to update your sudoers file:
+
+```bash
+$ visudo
+# Add the following lines:
+Cmnd_Alias NTPCTL = /usr/sbin/ntpctl
+telegraf ALL=(ALL) NOPASSWD: NTPCTL
+Defaults!NTPCTL !logfile, !syslog, !pam_session
+```
+
+Please use the solution you see as most appropriate.
+
+## Example Output
+
+```text
+openntpd,remote=194.57.169.1,stratum=2,host=localhost tl=10i,poll=1007i,offset=2.295,jitter=3.896,delay=53.766,next=266i,wt=1i 1514454299000000000
+```
diff --git a/content/telegraf/v1/input-plugins/opensearch_query/_index.md b/content/telegraf/v1/input-plugins/opensearch_query/_index.md
new file mode 100644
index 000000000..33cb326f2
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/opensearch_query/_index.md
@@ -0,0 +1,270 @@
+---
+description: "Telegraf plugin for collecting metrics from OpenSearch Query"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: OpenSearch Query
+    identifier: input-opensearch_query
+tags: [OpenSearch Query, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# OpenSearch Query Input Plugin
+
+This [OpenSearch](https://opensearch.org/) plugin queries endpoints
+to derive metrics from data stored in an OpenSearch cluster.
+
+The following is supported:
+
+- return the number of hits for a search query
+- calculate the `avg`/`max`/`min`/`sum` for a numeric field, filtered by a
+  query, aggregated per tag
+- `value_count` returns the number of documents for a particular field
+- `stats` returns `sum`, `min`, `max`, `avg`, and `value_count` in one query
+- `extended_stats` returns `stats` plus additional statistics such as sum of
+  squares, variance, and standard deviation
+- `percentiles` returns the 1st, 5th, 25th, 50th, 75th, 95th, and 99th
+  percentiles
+
+## OpenSearch Support
+
+This plugin is tested against OpenSearch 2.5.0 and 1.3.7.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Derive metrics from aggregating OpenSearch query results
+[[inputs.opensearch_query]]
+  ## OpenSearch cluster endpoint(s). Multiple urls can be specified as part
+  ## of the same cluster.  Only one successful call will be made per interval.
+  urls = [ "https://node1.os.example.com:9200" ] # required.
+
+  ## OpenSearch client timeout, defaults to "5s".
+  # timeout = "5s"
+
+  ## HTTP basic authentication details
+  # username = "admin"
+  # password = "admin"
+
+  ## Skip TLS validation.  Useful for local testing and self-signed certs.
+  # insecure_skip_verify = false
+
+  [[inputs.opensearch_query.aggregation]]
+    ## measurement name for the results of the aggregation query
+    measurement_name = "measurement"
+
+    ## OpenSearch index or index pattern to search
+    index = "index-*"
+
+    ## The date/time field in the OpenSearch index (mandatory).
+    date_field = "@timestamp"
+
+    ## If the field used for the date/time field in OpenSearch is also using
+    ## a custom date/time format it may be required to provide the format to
+    ## correctly parse the field.
+    ##
+    ## If using one of the built in OpenSearch formats this is not required.
+    ## https://opensearch.org/docs/2.4/opensearch/supported-field-types/date/#built-in-formats
+    # date_field_custom_format = ""
+
+    ## Time window to query (eg. "1m" to query documents from last minute).
+    ## Normally should be set to same as collection interval
+    query_period = "1m"
+
+    ## Lucene query to filter results
+    # filter_query = "*"
+
+    ## Fields to aggregate values (must be numeric fields)
+    # metric_fields = ["metric"]
+
+    ## Aggregation function to use on the metric fields
+    ## Must be set if 'metric_fields' is set
+    ## Valid values are: avg, sum, min, max, value_count, stats, extended_stats, percentiles
+    # metric_function = "avg"
+
+    ## Fields to be used as tags.  Must be text, non-analyzed fields. Metric
+    ## aggregations are performed per tag
+    # tags = ["field.keyword", "field2.keyword"]
+
+    ## Set to true to not ignore documents when the tag(s) above are missing
+    # include_missing_tag = false
+
+    ## String value of the tag when the tag does not exist
+    ## Required when include_missing_tag is true
+    # missing_tag_value = "null"
+```
+
+### Required parameters
+
+- `measurement_name`: The target measurement in which to store the results of
+  the aggregation query.
+- `index`: The index name to query on OpenSearch
+- `query_period`: The time window to query (e.g. "1m" to query documents from
+  the last minute). Normally this should be set to the same value as the
+  collection interval.
+- `date_field`: The date/time field in the OpenSearch index
+
+### Optional parameters
+
+- `date_field_custom_format`: Not needed if using one of the built in date/time
+  formats of OpenSearch, but may be required if using a custom date/time
+  format. The format syntax uses the [Joda date format](https://opensearch.org/docs/2.4/opensearch/supported-field-types/date/#custom-formats).
+- `filter_query`: Lucene query to filter the results (default: "\*")
+- `metric_fields`: The list of fields to perform metric aggregation (these must
+  be indexed as numeric fields)
+- `metric_function`: The single-value metric aggregation function to perform
+  on the defined `metric_fields`. Currently supported aggregations are "avg",
+  "min", "max", "sum", "value_count", "stats", "extended_stats", "percentiles"
+  (see the [aggregation docs](https://opensearch.org/docs/2.4/opensearch/aggregations/)).
+- `tags`: The list of fields to be used as tags (these must be indexed as
+  non-analyzed fields). A "terms aggregation" will be done per tag defined
+- `include_missing_tag`: Set to true to not ignore documents where the tag(s)
+  specified above does not exist. (If false, documents without the specified tag
+  field will be ignored in `doc_count` and in the metric aggregation)
+- `missing_tag_value`: The value of the tag that will be set for documents in
+  which the tag field does not exist. Only used when `include_missing_tag` is
+  set to `true`.
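+
+To make the interaction of these options concrete, the following sketch shows
+roughly the kind of OpenSearch request body they translate to. This is an
+illustration only, assuming the standard OpenSearch query DSL; it is not the
+plugin's actual implementation, and the field names are hypothetical:
+
+```python
+def build_request(date_field, query_period, filter_query,
+                  metric_field, metric_function, tag_field):
+    return {
+        "size": 0,  # aggregation results only, no document hits
+        "query": {
+            "bool": {
+                "filter": [
+                    # restrict to the query_period time window
+                    {"range": {date_field: {"gte": f"now-{query_period}"}}},
+                    # apply the Lucene filter_query
+                    {"query_string": {"query": filter_query}},
+                ]
+            }
+        },
+        "aggregations": {
+            # one terms ("group by") aggregation per tag field ...
+            tag_field: {
+                "terms": {"field": tag_field},
+                "aggregations": {
+                    # ... with the metric aggregation nested inside
+                    f"{metric_field}_{metric_function}": {
+                        metric_function: {"field": metric_field}
+                    }
+                },
+            }
+        },
+    }
+
+body = build_request("@timestamp", "1m", "*",
+                     "response_time", "avg", "URI.keyword")
+```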
+
+### Example configurations
+
+#### Search the average response time, per URI and per response status code
+
+```toml
+[[inputs.opensearch_query.aggregation]]
+  measurement_name = "http_logs"
+  index = "my-index-*"
+  filter_query = "*"
+  metric_fields = ["response_time"]
+  metric_function = "avg"
+  tags = ["URI.keyword", "response.keyword"]
+  include_missing_tag = true
+  missing_tag_value = "null"
+  date_field = "@timestamp"
+  query_period = "1m"
+```
+
+#### Search the maximum response time per method and per URI
+
+```toml
+[[inputs.opensearch_query.aggregation]]
+  measurement_name = "http_logs"
+  index = "my-index-*"
+  filter_query = "*"
+  metric_fields = ["response_time"]
+  metric_function = "max"
+  tags = ["method.keyword","URI.keyword"]
+  include_missing_tag = false
+  missing_tag_value = "null"
+  date_field = "@timestamp"
+  query_period = "1m"
+```
+
+#### Search number of documents matching a filter query in all indices
+
+```toml
+[[inputs.opensearch_query.aggregation]]
+  measurement_name = "http_logs"
+  index = "*"
+  filter_query = "product_1 AND HEAD"
+  query_period = "1m"
+  date_field = "@timestamp"
+```
+
+#### Search number of documents matching a filter query, returning per response status code
+
+```toml
+[[inputs.opensearch_query.aggregation]]
+  measurement_name = "http_logs"
+  index = "*"
+  filter_query = "downloads"
+  tags = ["response.keyword"]
+  include_missing_tag = false
+  date_field = "@timestamp"
+  query_period = "1m"
+```
+
+#### Search all documents and generate common statistics, returning per response status code
+
+```toml
+[[inputs.opensearch_query.aggregation]]
+  measurement_name = "http_logs"
+  index = "*"
+  tags = ["response.keyword"]
+  include_missing_tag = false
+  date_field = "@timestamp"
+  query_period = "1m"
+```
+
+## Metrics
+
+All metrics derive from aggregating OpenSearch query results. Queries must
+conform to the OpenSearch
+[aggregations](https://opensearch.org/docs/latest/opensearch/aggregations/)
+API; see that documentation for more information.
+
+Metric names are composed of a combination of the field name, metric aggregation
+function, and the result field name.
+
+For simple metrics, the result field name is `value`, and so getting the `avg`
+on a field named `size` would produce the result `size_value_avg`.
+
+For functions with multiple metrics, we use the resulting field.  For example,
+the `stats` function returns five different results, so for a field `size`,
+we would see five metric fields, named `size_stats_min`,
+`size_stats_max`, `size_stats_sum`, `size_stats_avg`, and `size_stats_count`.
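+
+A sketch of the naming rules described above (illustrative Python, not the
+plugin's actual implementation):
+
+```python
+def simple_metric_name(field: str, function: str) -> str:
+    # Single-value functions use the "value" result field,
+    # e.g. avg on "size" -> "size_value_avg".
+    return f"{field}_value_{function}"
+
+def stats_metric_names(field: str) -> list[str]:
+    # Multi-value functions use each resulting field,
+    # e.g. "size" -> size_stats_min, size_stats_max, ...
+    return [f"{field}_stats_{s}"
+            for s in ("min", "max", "sum", "avg", "count")]
+
+print(simple_metric_name("size", "avg"))  # size_value_avg
+print(stats_metric_names("size"))
+```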
+
+Nested results will build on their parent field names, for example, results for
+percentile take the form:
+
+```json
+{
+  "aggregations" : {
+    "size_percentiles" : {
+      "values" : {
+        "1.0" : 21.984375,
+        "5.0" : 27.984375,
+        "25.0" : 44.96875,
+        "50.0" : 64.22061688311689,
+        "75.0" : 93.0,
+        "95.0" : 156.0,
+        "99.0" : 222.0
+      }
+    }
+  }
+}
+```
+
+Thus, our results would take the form `size_percentiles_values_1.0`.  This
+structure applies to `percentiles` and `extended_stats` functions.
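+
+Flattening nested results like the JSON above can be sketched as follows
+(illustrative, not the plugin's actual implementation):
+
+```python
+def flatten(node: dict, prefix: str = "", out: dict | None = None) -> dict:
+    # Join nested keys with "_" so that
+    # size_percentiles -> values -> "1.0"
+    # becomes the field name "size_percentiles_values_1.0".
+    if out is None:
+        out = {}
+    for key, value in node.items():
+        name = f"{prefix}_{key}" if prefix else key
+        if isinstance(value, dict):
+            flatten(value, name, out)
+        else:
+            out[name] = value
+    return out
+
+result = {"size_percentiles": {"values": {"1.0": 21.984375, "99.0": 222.0}}}
+print(flatten(result))
+```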
+
+Note: `extended_stats` is currently limited to 2 standard deviations only.
+
+## Example Output
+
+```toml
+[[inputs.opensearch_query.aggregation]]
+    measurement_name = "bytes_stats"
+    index = "opensearch_dashboards_sample_data_logs"
+    date_field = "timestamp"
+    query_period = "10m"
+    filter_query = "*"
+    metric_fields = ["bytes"]
+    metric_function = "stats"
+    tags = ["response.keyword"]
+```
+
+```text
+bytes_stats,host=localhost,response_keyword=200 bytes_stats_sum=22231,doc_count=4i,bytes_stats_count=4,bytes_stats_min=941,bytes_stats_max=9544,bytes_stats_avg=5557.75 1672327840000000000
+bytes_stats,host=localhost,response_keyword=404 bytes_stats_min=5330,bytes_stats_max=5330,bytes_stats_avg=5330,doc_count=1i,bytes_stats_sum=5330,bytes_stats_count=1 1672327840000000000
+```
diff --git a/content/telegraf/v1/input-plugins/opensmtpd/_index.md b/content/telegraf/v1/input-plugins/opensmtpd/_index.md
new file mode 100644
index 000000000..7999eac72
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/opensmtpd/_index.md
@@ -0,0 +1,128 @@
+---
+description: "Telegraf plugin for collecting metrics from OpenSMTPD"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: OpenSMTPD
+    identifier: input-opensmtpd
+tags: [OpenSMTPD, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# OpenSMTPD Input Plugin
+
+This plugin gathers stats from [OpenSMTPD](https://www.opensmtpd.org/), a free
+implementation of the server-side SMTP protocol.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# A plugin to collect stats from OpenSMTPD - a free implementation of the server-side SMTP protocol
+[[inputs.opensmtpd]]
+  ## If running as a restricted user you can prepend sudo for additional access:
+  # use_sudo = false
+
+  ## The default location of the smtpctl binary can be overridden with:
+  binary = "/usr/sbin/smtpctl"
+
+  ## The default timeout of 1s can be overridden with:
+  # timeout = "1s"
+```
+
+## Metrics
+
+This is the full list of stats provided by smtpctl and potentially collected
+by Telegraf, depending on your smtpctl configuration.
+
+- smtpctl
+  - bounce_envelope
+  - bounce_message
+  - bounce_session
+  - control_session
+  - mda_envelope
+  - mda_pending
+  - mda_running
+  - mda_user
+  - mta_connector
+  - mta_domain
+  - mta_envelope
+  - mta_host
+  - mta_relay
+  - mta_route
+  - mta_session
+  - mta_source
+  - mta_task
+  - mta_task_running
+  - queue_bounce
+  - queue_evpcache_load_hit
+  - queue_evpcache_size
+  - queue_evpcache_update_hit
+  - scheduler_delivery_ok
+  - scheduler_delivery_permfail
+  - scheduler_delivery_tempfail
+  - scheduler_envelope
+  - scheduler_envelope_expired
+  - scheduler_envelope_incoming
+  - scheduler_envelope_inflight
+  - scheduler_ramqueue_envelope
+  - scheduler_ramqueue_message
+  - scheduler_ramqueue_update
+  - smtp_session
+  - smtp_session_inet4
+  - smtp_session_local
+  - uptime
+
+## Permissions
+
+It's important to note that this plugin references smtpctl, which may require
+additional permissions to execute successfully.  Depending on the user/group
+permissions of the telegraf user executing this plugin, you may need to alter
+the group membership, set facls, or use sudo.
+
+**Group membership (Recommended)**:
+
+```bash
+$ groups telegraf
+telegraf : telegraf
+
+$ usermod -a -G opensmtpd telegraf
+
+$ groups telegraf
+telegraf : telegraf opensmtpd
+```
+
+**Sudo privileges**:
+If you use this method, you will need the following in your telegraf config:
+
+```toml
+[[inputs.opensmtpd]]
+  use_sudo = true
+```
+
+You will also need to update your sudoers file:
+
+```bash
+$ visudo
+# Add the following line:
+Cmnd_Alias SMTPCTL = /usr/sbin/smtpctl
+telegraf  ALL=(ALL) NOPASSWD: SMTPCTL
+Defaults!SMTPCTL !logfile, !syslog, !pam_session
+```
+
+Please use the solution you see as most appropriate.
+
+## Example Output
+
+```text
+opensmtpd,host=localhost scheduler_delivery_tempfail=822,mta_host=10,mta_task_running=4,queue_bounce=13017,scheduler_delivery_permfail=51022,mta_relay=7,queue_evpcache_size=2,scheduler_envelope_expired=26,bounce_message=0,mta_domain=7,queue_evpcache_update_hit=848,smtp_session_local=12294,bounce_envelope=0,queue_evpcache_load_hit=4389703,scheduler_ramqueue_update=0,mta_route=3,scheduler_delivery_ok=2149489,smtp_session_inet4=2131997,control_session=1,scheduler_envelope_incoming=0,uptime=10346728,scheduler_ramqueue_envelope=2,smtp_session=0,bounce_session=0,mta_envelope=2,mta_session=6,mta_task=2,scheduler_ramqueue_message=2,mta_connector=7,mta_source=1,scheduler_envelope=2,scheduler_envelope_inflight=2 1510220300000000000
+```
diff --git a/content/telegraf/v1/input-plugins/openstack/_index.md b/content/telegraf/v1/input-plugins/openstack/_index.md
new file mode 100644
index 000000000..50d6a5b69
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/openstack/_index.md
@@ -0,0 +1,389 @@
+---
+description: "Telegraf plugin for collecting metrics from OpenStack"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: OpenStack
+    identifier: input-openstack
+tags: [OpenStack, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# OpenStack Input Plugin
+
+Collects metrics from the following OpenStack services:
+
+* Cinder (Block Storage)
+* Glance (Image service)
+* Heat (Orchestration)
+* Keystone (Identity service)
+* Neutron (Networking)
+* Nova (Compute service)
+
+At present this plugin requires the following APIs:
+
+* blockstorage v3
+* compute v2
+* identity v3
+* networking v2
+* orchestration v1
+
+## Recommendations
+
+Due to the large number of unique tags that this plugin generates, it is
+**highly recommended** to use modifiers such as `tagexclude` to keep the
+cardinality down. For larger deployments, polling a large number of systems
+will impact performance. Use the `interval` option to change how often the
+plugin is run:
+
+`interval`: How often a metric is gathered. Setting this value at the plugin
+level overrides the global agent interval setting.
+
+Also, consider polling OpenStack services at different intervals depending on
+your requirements. This will help with load and cardinality as well.
+
+```toml
+[[inputs.openstack]]
+  interval = "5m"
+  ....
+  authentication_endpoint = "https://my.openstack.cloud:5000"
+  ...
+  enabled_services = ["nova_services"]
+  ....
+
+[[inputs.openstack]]
+  interval = "30m"
+  ....
+  authentication_endpoint = "https://my.openstack.cloud:5000"
+  ...
+  enabled_services = ["services", "projects", "hypervisors", "flavors", "networks", "volumes"]
+  ....
+```
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Collects performance metrics from OpenStack services
+[[inputs.openstack]]
+  ## The recommended interval to poll is '30m'
+
+  ## The identity endpoint to authenticate against and get the service catalog from.
+  authentication_endpoint = "https://my.openstack.cloud:5000"
+
+  ## The domain to authenticate against when using a V3 identity endpoint.
+  # domain = "default"
+
+  ## The project to authenticate as.
+  # project = "admin"
+
+  ## User authentication credentials. Must have admin rights.
+  username = "admin"
+  password = "password"
+
+  ## Available services are:
+  ## "agents", "aggregates", "cinder_services", "flavors", "hypervisors",
+  ## "networks", "nova_services", "ports", "projects", "servers",
+  ## "serverdiagnostics", "services", "stacks", "storage_pools", "subnets",
+  ## "volumes"
+  # enabled_services = ["services", "projects", "hypervisors", "flavors", "networks", "volumes"]
+
+  ## Query all instances of all tenants for the volumes and server services
+  ## NOTE: Usually this is only permitted for administrators!
+  # query_all_tenants = true
+
+  ## Output secrets (such as adminPass for servers and UserID for volumes).
+  # output_secrets = false
+
+  ## Amount of time allowed to complete the HTTP(s) request.
+  # timeout = "5s"
+
+  ## HTTP Proxy support
+  # http_proxy_url = ""
+
+  ## Optional TLS Config
+  # tls_ca = /path/to/cafile
+  # tls_cert = /path/to/certfile
+  # tls_key = /path/to/keyfile
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## Options for tags received from Openstack
+  # tag_prefix = "openstack_tag_"
+  # tag_value = "true"
+
+  ## Timestamp format for timestamp data received from Openstack.
+  ## If false format is unix nanoseconds.
+  # human_readable_timestamps = false
+
+  ## Measure Openstack call duration
+  # measure_openstack_requests = false
+```
+
+## Metrics
+
+* openstack_aggregate
+  * name
+  * aggregate_host  [string]
+  * aggregate_hosts  [integer]
+  * created_at  [string]
+  * deleted  [boolean]
+  * deleted_at  [string]
+  * id  [integer]
+  * updated_at  [string]
+* openstack_flavor
+  * is_public
+  * name
+  * disk  [integer]
+  * ephemeral  [integer]
+  * id  [string]
+  * ram  [integer]
+  * rxtx_factor  [float]
+  * swap  [integer]
+  * vcpus  [integer]
+* openstack_hypervisor
+  * cpu_arch
+  * cpu_feature_tsc
+  * cpu_feature_tsc-deadline
+  * cpu_feature_tsc_adjust
+  * cpu_feature_tsx-ctrl
+  * cpu_feature_vme
+  * cpu_feature_vmx
+  * cpu_feature_x2apic
+  * cpu_feature_xgetbv1
+  * cpu_feature_xsave
+  * cpu_model
+  * cpu_vendor
+  * hypervisor_hostname
+  * hypervisor_type
+  * hypervisor_version
+  * service_host
+  * service_id
+  * state
+  * status
+  * cpu_topology_cores  [integer]
+  * cpu_topology_sockets  [integer]
+  * cpu_topology_threads  [integer]
+  * current_workload  [integer]
+  * disk_available_least  [integer]
+  * free_disk_gb  [integer]
+  * free_ram_mb  [integer]
+  * host_ip  [string]
+  * id  [string]
+  * local_gb  [integer]
+  * local_gb_used  [integer]
+  * memory_mb  [integer]
+  * memory_mb_used  [integer]
+  * running_vms  [integer]
+  * vcpus  [integer]
+  * vcpus_used  [integer]
+* openstack_identity
+  * description
+  * domain_id
+  * name
+  * parent_id
+  * enabled  [boolean]
+  * id  [string]
+  * is_domain  [boolean]
+  * projects  [integer]
+* openstack_network
+  * name
+  * openstack_tags_xyz
+  * project_id
+  * status
+  * tenant_id
+  * admin_state_up  [boolean]
+  * availability_zone_hints  [string]
+  * created_at  [string]
+  * id  [string]
+  * shared  [boolean]
+  * subnet_id  [string]
+  * subnets  [integer]
+  * updated_at  [string]
+* openstack_neutron_agent
+  * agent_host
+  * agent_type
+  * availability_zone
+  * binary
+  * topic
+  * admin_state_up  [boolean]
+  * alive  [boolean]
+  * created_at  [string]
+  * heartbeat_timestamp  [string]
+  * id  [string]
+  * resources_synced  [boolean]
+  * started_at  [string]
+* openstack_nova_service
+  * host_machine
+  * name
+  * state
+  * status
+  * zone
+  * disabled_reason  [string]
+  * forced_down  [boolean]
+  * id  [string]
+  * updated_at  [string]
+* openstack_port
+  * device_id
+  * device_owner
+  * name
+  * network_id
+  * project_id
+  * status
+  * tenant_id
+  * admin_state_up  [boolean]
+  * allowed_address_pairs  [integer]
+  * fixed_ips  [integer]
+  * id  [string]
+  * ip_address  [string]
+  * mac_address  [string]
+  * security_groups  [string]
+  * subnet_id  [string]
+* openstack_request_duration
+  * agents  [integer]
+  * aggregates  [integer]
+  * flavors  [integer]
+  * hypervisors  [integer]
+  * networks  [integer]
+  * nova_services  [integer]
+  * ports  [integer]
+  * projects  [integer]
+  * servers  [integer]
+  * stacks  [integer]
+  * storage_pools  [integer]
+  * subnets  [integer]
+  * volumes  [integer]
+* openstack_server
+  * flavor
+  * host_id
+  * host_name
+  * image
+  * key_name
+  * name
+  * project
+  * status
+  * tenant_id
+  * user_id
+  * accessIPv4  [string]
+  * accessIPv6  [string]
+  * addresses  [integer]
+  * adminPass  [string]
+  * created  [string]
+  * disk_gb  [integer]
+  * fault_code  [integer]
+  * fault_created  [string]
+  * fault_details  [string]
+  * fault_message  [string]
+  * id  [string]
+  * progress  [integer]
+  * ram_mb  [integer]
+  * security_groups  [integer]
+  * updated  [string]
+  * vcpus  [integer]
+  * volume_id  [string]
+  * volumes_attached  [integer]
+* openstack_server_diagnostics
+  * disk_name
+  * no_of_disks
+  * no_of_ports
+  * port_name
+  * server_id
+  * cpu0_time  [float]
+  * cpu1_time  [float]
+  * cpu2_time  [float]
+  * cpu3_time  [float]
+  * cpu4_time  [float]
+  * cpu5_time  [float]
+  * cpu6_time  [float]
+  * cpu7_time  [float]
+  * disk_errors  [float]
+  * disk_read  [float]
+  * disk_read_req  [float]
+  * disk_write  [float]
+  * disk_write_req  [float]
+  * memory  [float]
+  * memory-actual  [float]
+  * memory-rss  [float]
+  * memory-swap_in  [float]
+  * port_rx  [float]
+  * port_rx_drop  [float]
+  * port_rx_errors  [float]
+  * port_rx_packets  [float]
+  * port_tx  [float]
+  * port_tx_drop  [float]
+  * port_tx_errors  [float]
+  * port_tx_packets  [float]
+* openstack_service
+  * name
+  * service_enabled  [boolean]
+  * service_id  [string]
+* openstack_storage_pool
+  * driver_version
+  * name
+  * storage_protocol
+  * vendor_name
+  * volume_backend_name
+  * free_capacity_gb  [float]
+  * total_capacity_gb  [float]
+* openstack_subnet
+  * cidr
+  * gateway_ip
+  * ip_version
+  * name
+  * network_id
+  * openstack_tags_subnet_type_PRV
+  * project_id
+  * tenant_id
+  * allocation_pools  [string]
+  * dhcp_enabled  [boolean]
+  * dns_nameservers  [string]
+  * id  [string]
+* openstack_volume
+  * attachment_attachment_id
+  * attachment_device
+  * attachment_host_name
+  * availability_zone
+  * bootable
+  * description
+  * name
+  * status
+  * user_id
+  * volume_type
+  * attachment_attached_at  [string]
+  * attachment_server_id  [string]
+  * created_at  [string]
+  * encrypted  [boolean]
+  * id  [string]
+  * multiattach  [boolean]
+  * size  [integer]
+  * total_attachments  [integer]
+  * updated_at  [string]
+
+## Example Output
+
+```text
+openstack_neutron_agent,agent_host=vim2,agent_type=DHCP\ agent,availability_zone=nova,binary=neutron-dhcp-agent,host=telegraf_host,topic=dhcp_agent admin_state_up=true,alive=true,created_at="2021-01-07T03:40:53Z",heartbeat_timestamp="2021-10-14T07:46:40Z",id="17e1e446-d7da-4656-9e32-67d3690a306f",resources_synced=false,started_at="2021-07-02T21:47:42Z" 1634197616000000000
+openstack_aggregate,host=telegraf_host,name=non-dpdk aggregate_host="vim3",aggregate_hosts=2i,created_at="2021-02-01T18:28:00Z",deleted=false,deleted_at="0001-01-01T00:00:00Z",id=3i,updated_at="0001-01-01T00:00:00Z" 1634197617000000000
+openstack_flavor,host=telegraf_host,is_public=true,name=hwflavor disk=20i,ephemeral=0i,id="f89785c0-6b9f-47f5-a02e-f0fcbb223163",ram=8192i,rxtx_factor=1,swap=0i,vcpus=8i 1634197617000000000
+openstack_hypervisor,cpu_arch=x86_64,cpu_feature_3dnowprefetch=true,cpu_feature_abm=true,cpu_feature_acpi=true,cpu_feature_adx=true,cpu_feature_aes=true,cpu_feature_apic=true,cpu_feature_xtpr=true,cpu_model=C-Server,cpu_vendor=xyz,host=telegraf_host,hypervisor_hostname=vim3,hypervisor_type=QEMU,hypervisor_version=4002000,service_host=vim3,service_id=192,state=up,status=enabled cpu_topology_cores=28i,cpu_topology_sockets=1i,cpu_topology_threads=2i,current_workload=0i,disk_available_least=2596i,free_disk_gb=2744i,free_ram_mb=374092i,host_ip="xx:xx:xx:x::xxx",id="12",local_gb=3366i,local_gb_used=622i,memory_mb=515404i,memory_mb_used=141312i,running_vms=15i,vcpus=0i,vcpus_used=72i 1634197618000000000
+openstack_network,host=telegraf_host,name=Network\ 2,project_id=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,status=active,tenant_id=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx admin_state_up=true,availability_zone_hints="",created_at="2021-07-29T15:58:25Z",id="f5af5e71-e890-4245-a377-d4d86273c319",shared=false,subnet_id="2f7341c6-074d-42aa-9abc-71c662d9b336",subnets=1i,updated_at="2021-09-02T16:46:48Z" 1634197618000000000
+openstack_nova_service,host=telegraf_host,host_machine=vim3,name=nova-compute,state=up,status=enabled,zone=nova disabled_reason="",forced_down=false,id="192",updated_at="2021-10-14T07:46:52Z" 1634197619000000000
+openstack_port,device_id=a043b8b3-2831-462a-bba8-19088f3db45a,device_owner=compute:nova,host=telegraf_host,name=offload-port1,network_id=6b40d744-9a48-43f2-a4c8-2e0ccb45ac96,project_id=71f9bc44621234f8af99a3949258fc7b,status=ACTIVE,tenant_id=71f9bc44621234f8af99a3949258fc7b admin_state_up=true,allowed_address_pairs=0i,fixed_ips=1i,id="fb64626a-07e1-4d78-a70d-900e989537cc",ip_address="1.1.1.5",mac_address="xx:xx:xx:xx:xx:xx",security_groups="",subnet_id="eafa1eca-b318-4746-a55a-682478466689" 1634197620000000000
+openstack_identity,domain_id=default,host=telegraf_host,name=service,parent_id=default enabled=true,id="a0877dd2ed1d4b5f952f5689bc04b0cb",is_domain=false,projects=7i 1634197621000000000
+openstack_server,flavor=0d438971-56cf-4f86-801f-7b04b29384cb,host=telegraf_host,host_id=c0fe05b14261d35cf8748a3f5aae1234b88c2fd62b69fe24ca4a27e9,host_name=vim1,image=b295f1f3-1w23-470c-8734-197676eedd16,name=test-VM7,project=admin,status=active,tenant_id=80ac889731f540498fb1dc78e4bcd5ed,user_id=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx accessIPv4="",accessIPv6="",addresses=1i,adminPass="",created="2021-09-07T14:40:11Z",disk_gb=8i,fault_code=0i,fault_created="0001-01-01T00:00:00Z",fault_details="",fault_message="",id="db92ee0d-459b-458e-9fe3-2be5ec7c87e1",progress=0i,ram_mb=16384i,security_groups=1i,updated="2021-09-07T14:40:19Z",vcpus=4i,volumes_attached=0i 1634197656000000000
+openstack_service,host=telegraf_host,name=identity service_enabled=true,service_id="ad605eff92444a158d0f78768f2c4668" 1634197656000000000
+openstack_storage_pool,driver_version=1.0.0,host=telegraf_host,name=storage_bloack_1,storage_protocol=nfs,vendor_name=xyz,volume_backend_name=abc free_capacity_gb=4847.54,total_capacity_gb=4864 1634197658000000000
+openstack_subnet,cidr=10.10.20.10/28,gateway_ip=10.10.20.17,host=telegraf_host,ip_version=4,name=IPv4_Subnet_2,network_id=73c6e1d3-f522-4a3f-8e3c-762a0c06d68b,openstack_tags_lab=True,project_id=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,tenant_id=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx allocation_pools="10.10.20.11-10.10.20.30",dhcp_enabled=true,dns_nameservers="",id="db69fbb2-9ca1-4370-8c78-82a27951c94b" 1634197660000000000
+openstack_volume,attachment_attachment_id=c83ca0d6-c467-44a0-ac1f-f87d769c0c65,attachment_device=/dev/vda,attachment_host_name=vim1,availability_zone=nova,bootable=true,host=telegraf_host,status=in-use,user_id=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,volume_type=storage_bloack_1 attachment_attached_at="2021-01-12T21:02:04Z",attachment_server_id="c0c6b4af-0d26-4a0b-a6b4-4ea41fa3bb4a",created_at="2021-01-12T21:01:47Z",encrypted=false,id="d4204f1b-b1ae-1233-b25c-a57d91d2846e",multiattach=false,size=80i,total_attachments=1i,updated_at="2021-01-12T21:02:04Z" 1634197660000000000
+openstack_request_duration,host=telegraf_host networks=703214354i 1634197660000000000
+openstack_server_diagnostics,disk_name=vda,host=telegraf_host,no_of_disks=1,no_of_ports=2,port_name=vhu1234566c-9c,server_id=fdddb58c-bbb9-1234-894b-7ae140178909 cpu0_time=4924220000000,cpu1_time=218809610000000,cpu2_time=218624300000000,cpu3_time=220505700000000,disk_errors=-1,disk_read=619156992,disk_read_req=35423,disk_write=8432728064,disk_write_req=882445,memory=8388608,memory-actual=8388608,memory-rss=37276,memory-swap_in=0,port_rx=410516469288,port_rx_drop=13373626,port_rx_errors=-1,port_rx_packets=52140392,port_tx=417312195654,port_tx_drop=0,port_tx_errors=0,port_tx_packets=321385978 1634197660000000000
+```
diff --git a/content/telegraf/v1/input-plugins/opentelemetry/_index.md b/content/telegraf/v1/input-plugins/opentelemetry/_index.md
new file mode 100644
index 000000000..c0c5588b4
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/opentelemetry/_index.md
@@ -0,0 +1,186 @@
+---
+description: "Telegraf plugin for collecting metrics from OpenTelemetry"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: OpenTelemetry
+    identifier: input-opentelemetry
+tags: [OpenTelemetry, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# OpenTelemetry Input Plugin
+
+This plugin receives traces, metrics and logs from
+[OpenTelemetry](https://opentelemetry.io) clients and agents via gRPC.
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Receive OpenTelemetry traces, metrics, and logs over gRPC
+[[inputs.opentelemetry]]
+  ## Override the default (0.0.0.0:4317) destination OpenTelemetry gRPC service
+  ## address:port
+  # service_address = "0.0.0.0:4317"
+
+  ## Override the default (5s) new connection timeout
+  # timeout = "5s"
+
+  ## gRPC Maximum Message Size
+  # max_msg_size = "4MB"
+
+  ## Override the default span attributes to be used as line protocol tags.
+  ## These are always included as tags:
+  ## - trace ID
+  ## - span ID
+  ## Common attributes can be found here:
+  ## - https://github.com/open-telemetry/opentelemetry-collector/tree/main/semconv
+  # span_dimensions = ["service.name", "span.name"]
+
+  ## Override the default log record attributes to be used as line protocol tags.
+  ## These are always included as tags, if available:
+  ## - trace ID
+  ## - span ID
+  ## Common attributes can be found here:
+  ## - https://github.com/open-telemetry/opentelemetry-collector/tree/main/semconv
+  ## When using InfluxDB for both logs and traces, be certain that log_record_dimensions
+  ## matches the span_dimensions value.
+  # log_record_dimensions = ["service.name"]
+
+  ## Override the default profile attributes to be used as line protocol tags.
+  ## These are always included as tags, if available:
+  ## - profile_id
+  ## - address
+  ## - sample
+  ## - sample_name
+  ## - sample_unit
+  ## - sample_type
+  ## - sample_type_unit
+  ## Common attributes can be found here:
+  ## - https://github.com/open-telemetry/opentelemetry-collector/tree/main/semconv
+  # profile_dimensions = []
+
+  ## Override the default (prometheus-v1) metrics schema.
+  ## Supports: "prometheus-v1", "prometheus-v2"
+  ## For more information about the alternatives, read the Prometheus input
+  ## plugin notes.
+  # metrics_schema = "prometheus-v1"
+
+  ## Optional TLS Config.
+  ## For advanced options: https://github.com/influxdata/telegraf/blob/v1.18.3/docs/TLS.md
+  ##
+  ## Set one or more allowed client CA certificate file names to
+  ## enable mutually authenticated TLS connections.
+  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]
+  ## Add service certificate and key.
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+```
+
+### Schema
+
+The OpenTelemetry->InfluxDB conversion [schema](https://github.com/influxdata/influxdb-observability/blob/main/docs/index.md) and [implementation](https://github.com/influxdata/influxdb-observability/tree/main/otel2influx) are
+hosted at <https://github.com/influxdata/influxdb-observability>.
+
+Spans are stored in measurement `spans`.
+Logs are stored in measurement `logs`.
+
+For metrics, two output schemas exist. Metrics received with
+`metrics_schema=prometheus-v1` are stored in a measurement named after the
+OTel field `Metric.name`. Metrics received with `metrics_schema=prometheus-v2`
+are stored in the measurement `prometheus`.
+
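+The schema difference can be sketched in plain Python. These are hypothetical
+helper functions illustrating the mapping only, not the plugin's actual code:
+
+```python
+def histogram_v1(name, bounds, counts, total, count):
+    # prometheus-v1: the measurement is the metric name; each bucket bound
+    # becomes a field key, alongside "sum" and "count" fields.
+    fields = {str(b): c for b, c in zip(bounds, counts)}
+    fields["sum"] = total
+    fields["count"] = count
+    return name + " " + ",".join(f"{k}={v}" for k, v in fields.items())
+
+
+def histogram_v2(name, bounds, counts, total, count):
+    # prometheus-v2: fixed measurement "prometheus"; the bucket bound becomes
+    # an "le" tag and the metric name becomes the field key.
+    lines = [f"prometheus,le={b} {name}_bucket={c}"
+             for b, c in zip(bounds, counts)]
+    lines.append(f"prometheus {name}_count={count},{name}_sum={total}")
+    return lines
+```
+
+Compare the `prometheus-v1` and `prometheus-v2` example outputs below for the
+`http_request_duration_seconds` histogram to see both shapes side by side.
+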
+Also see the OpenTelemetry output plugin for Telegraf.
+
+## Example Output
+
+### Tracing Spans
+
+```text
+spans end_time_unix_nano="2021-02-19 20:50:25.6893952 +0000 UTC",instrumentation_library_name="tracegen",kind="SPAN_KIND_INTERNAL",name="okey-dokey",net.peer.ip="1.2.3.4",parent_span_id="d5270e78d85f570f",peer.service="tracegen-client",service.name="tracegen",span.kind="server",span_id="4c28227be6a010e1",status_code="STATUS_CODE_OK",trace_id="7d4854815225332c9834e6dbf85b9380" 1613767825689169000
+spans end_time_unix_nano="2021-02-19 20:50:25.6893952 +0000 UTC",instrumentation_library_name="tracegen",kind="SPAN_KIND_INTERNAL",name="lets-go",net.peer.ip="1.2.3.4",peer.service="tracegen-server",service.name="tracegen",span.kind="client",span_id="d5270e78d85f570f",status_code="STATUS_CODE_OK",trace_id="7d4854815225332c9834e6dbf85b9380" 1613767825689135000
+spans end_time_unix_nano="2021-02-19 20:50:25.6895667 +0000 UTC",instrumentation_library_name="tracegen",kind="SPAN_KIND_INTERNAL",name="okey-dokey",net.peer.ip="1.2.3.4",parent_span_id="b57e98af78c3399b",peer.service="tracegen-client",service.name="tracegen",span.kind="server",span_id="a0643a156d7f9f7f",status_code="STATUS_CODE_OK",trace_id="fd6b8bb5965e726c94978c644962cdc8" 1613767825689388000
+spans end_time_unix_nano="2021-02-19 20:50:25.6895667 +0000 UTC",instrumentation_library_name="tracegen",kind="SPAN_KIND_INTERNAL",name="lets-go",net.peer.ip="1.2.3.4",peer.service="tracegen-server",service.name="tracegen",span.kind="client",span_id="b57e98af78c3399b",status_code="STATUS_CODE_OK",trace_id="fd6b8bb5965e726c94978c644962cdc8" 1613767825689303300
+spans end_time_unix_nano="2021-02-19 20:50:25.6896741 +0000 UTC",instrumentation_library_name="tracegen",kind="SPAN_KIND_INTERNAL",name="okey-dokey",net.peer.ip="1.2.3.4",parent_span_id="6a8e6a0edcc1c966",peer.service="tracegen-client",service.name="tracegen",span.kind="server",span_id="d68f7f3b41eb8075",status_code="STATUS_CODE_OK",trace_id="651dadde186b7834c52b13a28fc27bea" 1613767825689480300
+```
+
+### Metrics
+
+#### `prometheus-v1`
+
+```text
+cpu_temp,foo=bar gauge=87.332
+http_requests_total,method=post,code=200 counter=1027
+http_requests_total,method=post,code=400 counter=3
+http_request_duration_seconds 0.05=24054,0.1=33444,0.2=100392,0.5=129389,1=133988,sum=53423,count=144320
+rpc_duration_seconds 0.01=3102,0.05=3272,0.5=4773,0.9=9001,0.99=76656,sum=1.7560473e+07,count=2693
+```
+
+#### `prometheus-v2`
+
+```text
+prometheus,foo=bar cpu_temp=87.332
+prometheus,method=post,code=200 http_requests_total=1027
+prometheus,method=post,code=400 http_requests_total=3
+prometheus,le=0.05 http_request_duration_seconds_bucket=24054
+prometheus,le=0.1  http_request_duration_seconds_bucket=33444
+prometheus,le=0.2  http_request_duration_seconds_bucket=100392
+prometheus,le=0.5  http_request_duration_seconds_bucket=129389
+prometheus,le=1    http_request_duration_seconds_bucket=133988
+prometheus         http_request_duration_seconds_count=144320,http_request_duration_seconds_sum=53423
+prometheus,quantile=0.01 rpc_duration_seconds=3102
+prometheus,quantile=0.05 rpc_duration_seconds=3272
+prometheus,quantile=0.5  rpc_duration_seconds=4773
+prometheus,quantile=0.9  rpc_duration_seconds=9001
+prometheus,quantile=0.99 rpc_duration_seconds=76656
+prometheus               rpc_duration_seconds_count=1.7560473e+07,rpc_duration_seconds_sum=2693
+```
+
+### Logs
+
+```text
+logs fluent.tag="fluent.info",pid=18i,ppid=9i,worker=0i 1613769568895331700
+logs fluent.tag="fluent.debug",instance=1720i,queue_size=0i,stage_size=0i 1613769568895697200
+logs fluent.tag="fluent.info",worker=0i 1613769568896515100
+```
+
+### Profiles
+
+```text
+profiles,address=95210353,host.name=testbox,profile_id=618098d29a6cefd6a4c0ea806880c2a8,sample=0,sample_name=cpu,sample_type=samples,sample_type_unit=count,sample_unit=nanoseconds build_id="fab9b8c848218405738c11a7ec4982e9",build_id_type="BUILD_ID_BINARY_HASH",end_time_unix_nano=1721306050081621681u,file_offset=18694144u,filename="chromium",frame_type="native",location="",memory_limit=250413056u,memory_start=18698240u,stack_trace_id="hYmAzQVF8vy8MWbzsKpQNw",start_time_unix_nano=1721306050081621681u,value=1i 1721306048731622020
+profiles,address=15945263,host.name=testbox,profile_id=618098d29a6cefd6a4c0ea806880c2a8,sample=1,sample_name=cpu,sample_type=samples,sample_type_unit=count,sample_unit=nanoseconds build_id="7dab4a2e0005d025e75cc72191f8d6bf",build_id_type="BUILD_ID_BINARY_HASH",end_time_unix_nano=1721306050081621681u,file_offset=15638528u,filename="dockerd",frame_type="native",location="",memory_limit=47255552u,memory_start=15638528u,stack_trace_id="4N3KEcGylb5Qoi2905c1ZA",start_time_unix_nano=1721306050081621681u,value=1i 1721306049831718725
+profiles,address=15952400,host.name=testbox,profile_id=618098d29a6cefd6a4c0ea806880c2a8,sample=1,sample_name=cpu,sample_type=samples,sample_type_unit=count,sample_unit=nanoseconds build_id="7dab4a2e0005d025e75cc72191f8d6bf",build_id_type="BUILD_ID_BINARY_HASH",end_time_unix_nano=1721306050081621681u,file_offset=15638528u,filename="dockerd",frame_type="native",location="",memory_limit=47255552u,memory_start=15638528u,stack_trace_id="4N3KEcGylb5Qoi2905c1ZA",start_time_unix_nano=1721306050081621681u,value=1i 1721306049831718725
+profiles,address=15953899,host.name=testbox,profile_id=618098d29a6cefd6a4c0ea806880c2a8,sample=1,sample_name=cpu,sample_type=samples,sample_type_unit=count,sample_unit=nanoseconds build_id="7dab4a2e0005d025e75cc72191f8d6bf",build_id_type="BUILD_ID_BINARY_HASH",end_time_unix_nano=1721306050081621681u,file_offset=15638528u,filename="dockerd",frame_type="native",location="",memory_limit=47255552u,memory_start=15638528u,stack_trace_id="4N3KEcGylb5Qoi2905c1ZA",start_time_unix_nano=1721306050081621681u,value=1i 1721306049831718725
+profiles,address=16148175,host.name=testbox,profile_id=618098d29a6cefd6a4c0ea806880c2a8,sample=1,sample_name=cpu,sample_type=samples,sample_type_unit=count,sample_unit=nanoseconds build_id="7dab4a2e0005d025e75cc72191f8d6bf",build_id_type="BUILD_ID_BINARY_HASH",end_time_unix_nano=1721306050081621681u,file_offset=15638528u,filename="dockerd",frame_type="native",location="",memory_limit=47255552u,memory_start=15638528u,stack_trace_id="4N3KEcGylb5Qoi2905c1ZA",start_time_unix_nano=1721306050081621681u,value=1i 1721306049831718725
+profiles,address=4770577,host.name=testbox,profile_id=618098d29a6cefd6a4c0ea806880c2a8,sample=2,sample_name=cpu,sample_type=samples,sample_type_unit=count,sample_unit=nanoseconds build_id="cfc3dc7d1638c1284a6b62d4b5c0d74e",build_id_type="BUILD_ID_BINARY_HASH",end_time_unix_nano=1721306050081621681u,file_offset=0u,filename="",frame_type="kernel",location="do_epoll_wait",memory_limit=0u,memory_start=0u,stack_trace_id="UaO9bysJnAYXFYobSdHXqg",start_time_unix_nano=1721306050081621681u,value=1i 1721306050081621681
+profiles,address=4773632,host.name=testbox,profile_id=618098d29a6cefd6a4c0ea806880c2a8,sample=2,sample_name=cpu,sample_type=samples,sample_type_unit=count,sample_unit=nanoseconds build_id="cfc3dc7d1638c1284a6b62d4b5c0d74e",build_id_type="BUILD_ID_BINARY_HASH",end_time_unix_nano=1721306050081621681u,file_offset=0u,filename="",frame_type="kernel",location="__x64_sys_epoll_wait",memory_limit=0u,memory_start=0u,stack_trace_id="UaO9bysJnAYXFYobSdHXqg",start_time_unix_nano=1721306050081621681u,value=1i 1721306050081621681
+profiles,address=14783666,host.name=testbox,profile_id=618098d29a6cefd6a4c0ea806880c2a8,sample=2,sample_name=cpu,sample_type=samples,sample_type_unit=count,sample_unit=nanoseconds build_id="cfc3dc7d1638c1284a6b62d4b5c0d74e",build_id_type="BUILD_ID_BINARY_HASH",end_time_unix_nano=1721306050081621681u,file_offset=0u,filename="",frame_type="kernel",location="do_syscall_64",memory_limit=0u,memory_start=0u,stack_trace_id="UaO9bysJnAYXFYobSdHXqg",start_time_unix_nano=1721306050081621681u,value=1i 1721306050081621681
+profiles,address=16777518,host.name=testbox,profile_id=618098d29a6cefd6a4c0ea806880c2a8,sample=2,sample_name=cpu,sample_type=samples,sample_type_unit=count,sample_unit=nanoseconds build_id="cfc3dc7d1638c1284a6b62d4b5c0d74e",build_id_type="BUILD_ID_BINARY_HASH",end_time_unix_nano=1721306050081621681u,file_offset=0u,filename="",frame_type="kernel",location="entry_SYSCALL_64_after_hwframe",memory_limit=0u,memory_start=0u,stack_trace_id="UaO9bysJnAYXFYobSdHXqg",start_time_unix_nano=1721306050081621681u,value=1i 1721306050081621681
+profiles,address=1139937,host.name=testbox,profile_id=618098d29a6cefd6a4c0ea806880c2a8,sample=2,sample_name=cpu,sample_type=samples,sample_type_unit=count,sample_unit=nanoseconds build_id="982ed6c7a77f99f0ae746be0187953bf",build_id_type="BUILD_ID_BINARY_HASH",end_time_unix_nano=1721306050081621681u,file_offset=147456u,filename="libc.so.6",frame_type="native",location="",memory_limit=1638400u,memory_start=147456u,stack_trace_id="UaO9bysJnAYXFYobSdHXqg",start_time_unix_nano=1721306050081621681u,value=1i 1721306050081621681
+profiles,address=117834912,host.name=testbox,profile_id=618098d29a6cefd6a4c0ea806880c2a8,sample=2,sample_name=cpu,sample_type=samples,sample_type_unit=count,sample_unit=nanoseconds build_id="fab9b8c848218405738c11a7ec4982e9",build_id_type="BUILD_ID_BINARY_HASH",end_time_unix_nano=1721306050081621681u,file_offset=18694144u,filename="chromium",frame_type="native",location="",memory_limit=250413056u,memory_start=18698240u,stack_trace_id="UaO9bysJnAYXFYobSdHXqg",start_time_unix_nano=1721306050081621681u,value=1i 1721306050081621681
+```
diff --git a/content/telegraf/v1/input-plugins/openweathermap/_index.md b/content/telegraf/v1/input-plugins/openweathermap/_index.md
new file mode 100644
index 000000000..6161206b9
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/openweathermap/_index.md
@@ -0,0 +1,114 @@
+---
+description: "Telegraf plugin for collecting metrics from OpenWeatherMap"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: OpenWeatherMap
+    identifier: input-openweathermap
+tags: [OpenWeatherMap, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# OpenWeatherMap Input Plugin
+
+Collect current weather and forecast data from OpenWeatherMap.
+
+To use this plugin you will need an [api key](https://openweathermap.org/appid) (app_id).
+
+City identifiers can be found in the [city list](http://bulk.openweathermap.org/sample/city.list.json.gz). Alternatively, you
+can [search](https://openweathermap.org/find) by name; the `city_id` can be found as the last digits
+of the URL: <https://openweathermap.org/city/2643743>. Language
+identifiers can be found in the [lang list](https://openweathermap.org/current#multi). Documentation for
+condition ID, icon, and main is at [weather conditions](https://openweathermap.org/weather-conditions).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read current weather and forecasts data from openweathermap.org
+[[inputs.openweathermap]]
+  ## OpenWeatherMap API key.
+  app_id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
+
+  ## City ID's to collect weather data from.
+  city_id = ["5391959"]
+
+  ## Language of the description field. Can be one of "ar", "bg",
+  ## "ca", "cz", "de", "el", "en", "fa", "fi", "fr", "gl", "hr", "hu",
+  ## "it", "ja", "kr", "la", "lt", "mk", "nl", "pl", "pt", "ro", "ru",
+  ## "se", "sk", "sl", "es", "tr", "ua", "vi", "zh_cn", "zh_tw"
+  # lang = "en"
+
+  ## APIs to fetch; can contain "weather" or "forecast".
+  # fetch = ["weather", "forecast"]
+
+  ## OpenWeatherMap base URL
+  # base_url = "https://api.openweathermap.org/"
+
+  ## Timeout for HTTP response.
+  # response_timeout = "5s"
+
+  ## Preferred unit system for temperature and wind speed. Can be one of
+  ## "metric", "imperial", or "standard".
+  # units = "metric"
+
+  ## Style to query the current weather; available options
+  ##   batch      -- query multiple cities at once using the "group" endpoint
+  ##   individual -- query each city individually using the "weather" endpoint
+  ## You should use "individual" here as it is documented and provides more
+  ## frequent updates. The default is "batch" for backward compatibility.
+  # query_style = "batch"
+
+  ## Query interval to fetch data.
+  ## By default the global 'interval' setting is used. You should override the
+  ## interval here if the global setting is shorter than 10 minutes as
+  ## OpenWeatherMap weather data is only updated every 10 minutes.
+  # interval = "10m"
+```
+
+## Metrics
+
+- weather
+  - tags:
+    - city_id
+    - forecast
+    - condition_id
+    - condition_main
+  - fields:
+    - cloudiness (int, percent)
+    - humidity (int, percent)
+    - pressure (float, atmospheric pressure hPa)
+    - rain (float, rain volume for the last 1-3 hours (depending on API response) in mm)
+    - snow (float, snow volume for the last 1-3 hours (depending on API response) in mm)
+    - sunrise (int, nanoseconds since unix epoch)
+    - sunset (int, nanoseconds since unix epoch)
+    - temperature (float, degrees)
+    - feels_like (float, degrees)
+    - visibility (int, meters, not available on forecast data)
+    - wind_degrees (float, wind direction in degrees)
+    - wind_speed (float, wind speed in meters/sec or miles/sec)
+    - condition_description (string, localized long description)
+    - condition_icon
+
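+The `sunrise` and `sunset` fields are nanoseconds since the Unix epoch, so a
+division by 1e9 is needed before the usual timestamp conversion. A minimal
+Python sketch (the field value below is made up for illustration):
+
+```python
+from datetime import datetime, timezone
+
+# sunrise is reported as nanoseconds since the Unix epoch
+sunrise_ns = 1645972800000000000
+sunrise = datetime.fromtimestamp(sunrise_ns / 1_000_000_000, tz=timezone.utc)
+print(sunrise.isoformat())  # 2022-02-27T14:24:00+00:00
+```
+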
+## Example Output
+
+```text
+weather,city=San\ Francisco,city_id=5391959,condition_id=803,condition_main=Clouds,country=US,forecast=114h,host=robot pressure=1027,temperature=10.09,wind_degrees=34,wind_speed=1.24,condition_description="broken clouds",cloudiness=80i,humidity=67i,rain=0,feels_like=8.9,condition_icon="04n" 1645952400000000000
+weather,city=San\ Francisco,city_id=5391959,condition_id=804,condition_main=Clouds,country=US,forecast=117h,host=robot humidity=65i,rain=0,temperature=10.12,wind_degrees=31,cloudiness=90i,pressure=1026,feels_like=8.88,wind_speed=1.31,condition_description="overcast clouds",condition_icon="04n" 1645963200000000000
+weather,city=San\ Francisco,city_id=5391959,condition_id=804,condition_main=Clouds,country=US,forecast=120h,host=robot cloudiness=100i,humidity=61i,rain=0,temperature=10.28,wind_speed=1.94,condition_icon="04d",pressure=1027,feels_like=8.96,wind_degrees=16,condition_description="overcast clouds" 1645974000000000000
+```
+
diff --git a/content/telegraf/v1/input-plugins/p4runtime/_index.md b/content/telegraf/v1/input-plugins/p4runtime/_index.md
new file mode 100644
index 000000000..459837b83
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/p4runtime/_index.md
@@ -0,0 +1,107 @@
+---
+description: "Telegraf plugin for collecting metrics from P4 Runtime"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: P4 Runtime
+    identifier: input-p4runtime
+tags: [P4 Runtime, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# P4 Runtime Input Plugin
+
+P4 is a language for programming the data plane of network devices,
+such as Programmable Switches or Programmable Network Interface Cards.
+The P4Runtime API is a control plane specification to manage
+the data plane elements of those devices dynamically by a P4 program.
+
+The `p4runtime` plugin gathers metrics about `Counter` values
+present in the P4 program loaded onto a networking device.
+Metrics are collected through a gRPC connection with a
+[P4Runtime](https://github.com/p4lang/p4runtime) server.
+
+The plugin uses the `PkgInfo.Name` field to report the program name.
+To make the program name available, follow the
+[6.2.1. Annotating P4 code with PkgInfo] instructions and apply the
+changes to your P4 program.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# P4Runtime telemetry input plugin
+[[inputs.p4runtime]]
+  ## Define the endpoint of P4Runtime gRPC server to collect metrics.
+  # endpoint = "127.0.0.1:9559"
+  ## Set DeviceID required for Client Arbitration.
+  ## https://p4.org/p4-spec/p4runtime/main/P4Runtime-Spec.html#sec-client-arbitration-and-controller-replication
+  # device_id = 1
+  ## Filter the counters to observe by name.
+  ## Example: counter_names_include=["ingressCounter", "egressCounter"]
+  # counter_names_include = []
+
+  ## Optional TLS Config.
+  ## Enable client-side TLS and define CA to authenticate the device.
+  # enable_tls = false
+  # tls_ca = "/etc/telegraf/ca.crt"
+  ## Set the minimum TLS version the client accepts.
+  # tls_min_version = "TLS12"
+  ## Use TLS but skip chain & host verification.
+  # insecure_skip_verify = true
+
+  ## Define client-side TLS certificate & key to authenticate to the device.
+  # tls_cert = "/etc/telegraf/client.crt"
+  # tls_key = "/etc/telegraf/client.key"
+```
+
+## Metrics
+
+The P4Runtime gRPC server communicates using the [p4runtime.proto] Protocol
+Buffer. Static information about the P4 program loaded into the programmable
+switch is collected with the `GetForwardingPipelineConfigRequest` message.
+The plugin gathers dynamic metrics with the `Read` method, using a
+`ReadRequest` containing a single `Entity` of type `CounterEntry`.
+Since a P4 counter is an array, the plugin collects the values of every cell
+of the array with a [wildcard query].
+
+Counters defined in a P4 program have a unique ID and name. Because counters
+are arrays, the `counter_index` field indicates which cell of the array a
+metric describes.
+
+Tags are constructed as follows:
+
+- `p4program_name`: Name of the P4 program, as provided in the program's
+  `PkgInfo` annotation (see [6.2.1. Annotating P4 code with PkgInfo]).
+- `counter_name`: Name of the given counter in the P4 program.
+- `counter_type`: Type of the counter (BYTES, PACKETS, BOTH).
+
+Fields are constructed as follows:
+
+- `bytes`: Number of bytes gathered in the counter.
+- `packets`: Number of packets gathered in the counter.
+- `counter_index`: Index of the counter array cell the values were read from.
+
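+Put together, each cell of a counter array yields one metric. A rough Python
+sketch of the resulting line protocol (illustration only, not the plugin's
+code; tag and field names as listed above):
+
+```python
+def counter_lines(program, counter, counter_type, cells):
+    # cells: list of (bytes, packets) tuples, indexed by counter cell
+    lines = []
+    for index, (nbytes, packets) in enumerate(cells):
+        tags = (f"p4_runtime,p4program_name={program},"
+                f"counter_name={counter},counter_type={counter_type}")
+        fields = f"bytes={nbytes}i,packets={packets}i,counter_index={index}i"
+        lines.append(f"{tags} {fields}")
+    return lines
+```
+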
+## Example Output
+
+Expected output of a p4runtime plugin instance:
+
+```text
+p4_runtime,counter_name=MyIngress.egressTunnelCounter,counter_type=BOTH,host=p4 bytes=408i,packets=4i,counter_index=200i 1675175030000000000
+```
+
+[6.2.1. Annotating P4 code with PkgInfo]: https://p4.org/p4-spec/p4runtime/main/P4Runtime-Spec.html#sec-annotating-p4-code-with-pkginfo
+[p4runtime.proto]: https://github.com/p4lang/p4runtime/blob/main/proto/p4/v1/p4runtime.proto
+[wildcard query]: https://github.com/p4lang/p4runtime/blob/main/proto/p4/v1/p4runtime.proto#L379
diff --git a/content/telegraf/v1/input-plugins/passenger/_index.md b/content/telegraf/v1/input-plugins/passenger/_index.md
new file mode 100644
index 000000000..e17cc302d
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/passenger/_index.md
@@ -0,0 +1,129 @@
+---
+description: "Telegraf plugin for collecting metrics from Passenger"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Passenger
+    identifier: input-passenger
+tags: [Passenger, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Passenger Input Plugin
+
+Gather [Phusion Passenger](https://www.phusionpassenger.com/) metrics using the
+`passenger-status` command line utility.
+
+## Series Cardinality Warning
+
+Depending on your environment, the `passenger_process` measurement of this
+plugin can quickly create a high number of series which, when unchecked, can
+cause high load on your database.  You can use the following techniques to
+manage your series cardinality:
+
+- Use the
+  [measurement filtering](https://docs.influxdata.com/telegraf/latest/administration/configuration/#measurement-filtering)
+  options to exclude unneeded tags.  In some environments, you may wish to use
+  `tagexclude` to remove the `pid` and `process_group_id` tags.
+- Write to a database with an appropriate
+  [retention policy](https://docs.influxdata.com/influxdb/latest/guides/downsampling_and_retention/).
+- Consider using the
+  [Time Series Index](https://docs.influxdata.com/influxdb/latest/concepts/time-series-index/).
+- Monitor your database's
+  [series cardinality](https://docs.influxdata.com/influxdb/latest/query_language/spec/#show-cardinality).
+
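+For example, dropping the two per-process tags could look like this (a sketch
+only; adjust the exclusions to your environment):
+
+```toml
+[[inputs.passenger]]
+  command = "passenger-status -v --show=xml"
+  ## Drop high-cardinality per-process tags (a hypothetical choice; keep
+  ## them if you need per-process drill-down).
+  tagexclude = ["pid", "process_group_id"]
+```
+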
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics of passenger using passenger-status
+[[inputs.passenger]]
+  ## Path of passenger-status.
+  ##
+  ## The plugin gathers metrics by parsing the XML output of passenger-status.
+  ## More information about the tool:
+  ##   https://www.phusionpassenger.com/library/admin/apache/overall_status_report.html
+  ##
+  ## If no path is specified, the plugin simply executes passenger-status,
+  ## which must then be available in your PATH.
+  command = "passenger-status -v --show=xml"
+```
+
+### Permissions
+
+Telegraf must have permission to execute the `passenger-status` command.  On
+most systems, Telegraf runs as the `telegraf` user.
+
+## Metrics
+
+- passenger
+  - tags:
+    - passenger_version
+  - fields:
+    - process_count
+    - max
+    - capacity_used
+    - get_wait_list_size
+
+- passenger_supergroup
+  - tags:
+    - name
+  - fields:
+    - get_wait_list_size
+    - capacity_used
+
+- passenger_group
+  - tags:
+    - name
+    - app_root
+    - app_type
+  - fields:
+    - get_wait_list_size
+    - capacity_used
+    - processes_being_spawned
+
+- passenger_process
+  - tags:
+    - group_name
+    - app_root
+    - supergroup_name
+    - pid
+    - code_revision
+    - life_status
+    - process_group_id
+  - fields:
+    - concurrency
+    - sessions
+    - busyness
+    - processed
+    - spawner_creation_time
+    - spawn_start_time
+    - spawn_end_time
+    - last_used
+    - uptime
+    - cpu
+    - rss
+    - pss
+    - private_dirty
+    - swap
+    - real_memory
+    - vmsize
+
+## Example Output
+
+```text
+passenger,passenger_version=5.0.17 capacity_used=23i,get_wait_list_size=0i,max=23i,process_count=23i 1452984112799414257
+passenger_supergroup,name=/var/app/current/public capacity_used=23i,get_wait_list_size=0i 1452984112799496977
+passenger_group,app_root=/var/app/current,app_type=rack,name=/var/app/current/public capacity_used=23i,get_wait_list_size=0i,processes_being_spawned=0i 1452984112799527021
+passenger_process,app_root=/var/app/current,code_revision=899ac7f,group_name=/var/app/current/public,life_status=ALIVE,pid=11553,process_group_id=13608,supergroup_name=/var/app/current/public busyness=0i,concurrency=1i,cpu=58i,last_used=1452747071764940i,private_dirty=314900i,processed=951i,pss=319391i,real_memory=314900i,rss=418548i,sessions=0i,spawn_end_time=1452746845013365i,spawn_start_time=1452746844946982i,spawner_creation_time=1452746835922747i,swap=0i,uptime=226i,vmsize=1563580i 1452984112799571490
+passenger_process,app_root=/var/app/current,code_revision=899ac7f,group_name=/var/app/current/public,life_status=ALIVE,pid=11563,process_group_id=13608,supergroup_name=/var/app/current/public busyness=2147483647i,concurrency=1i,cpu=47i,last_used=1452747071709179i,private_dirty=309240i,processed=756i,pss=314036i,real_memory=309240i,rss=418296i,sessions=1i,spawn_end_time=1452746845172460i,spawn_start_time=1452746845136882i,spawner_creation_time=1452746835922747i,swap=0i,uptime=226i,vmsize=1563608i 1452984112799638581
+```
diff --git a/content/telegraf/v1/input-plugins/pf/_index.md b/content/telegraf/v1/input-plugins/pf/_index.md
new file mode 100644
index 000000000..c9e597c6a
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/pf/_index.md
@@ -0,0 +1,112 @@
+---
+description: "Telegraf plugin for collecting metrics from PF"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: PF
+    identifier: input-pf
+tags: [PF, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# PF Input Plugin
+
+The pf plugin gathers information from the FreeBSD/OpenBSD pf
+firewall. Currently it can retrieve information about the state table: the
+number of current entries in the table, and counters for the number of searches,
+inserts, and removals to the table.
+
+The pf plugin retrieves this information by invoking the `pfctl` command. The
+`pfctl` command requires read access to the device file `/dev/pf`. You have
+several options to permit telegraf to run `pfctl`:
+
+* Run telegraf as root. This is strongly discouraged.
+* Change the ownership and permissions of `/dev/pf` so that the user telegraf runs as can read the `/dev/pf` device file. This is also discouraged.
+* Configure sudo to allow telegraf to run `pfctl` as root. This is the most restrictive option, but requires sudo setup.
+* Add the `telegraf` user to the `proxy` group, as `/dev/pf` is owned by `root:proxy`.
+
+## Using sudo
+
+Add the following to your sudoers configuration (for example with `visudo`):
+
+```text
+telegraf ALL=(root) NOPASSWD: /sbin/pfctl -s info
+```
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Gather counters from PF
+[[inputs.pf]]
+  ## PF requires root access on most systems.
+  ## Setting 'use_sudo' to true will make use of sudo to run pfctl.
+  ## Users must configure sudo to allow telegraf user to run pfctl with no password.
+  ## pfctl can be restricted to only list command "pfctl -s info".
+  use_sudo = false
+```
+
+## Metrics
+
+* pf
+  * entries (integer, count)
+  * searches (integer, count)
+  * inserts (integer, count)
+  * removals (integer, count)
+  * match (integer, count)
+  * bad-offset (integer, count)
+  * fragment (integer, count)
+  * short (integer, count)
+  * normalize (integer, count)
+  * memory (integer, count)
+  * bad-timestamp (integer, count)
+  * congestion (integer, count)
+  * ip-option (integer, count)
+  * proto-cksum (integer, count)
+  * state-mismatch (integer, count)
+  * state-insert (integer, count)
+  * state-limit (integer, count)
+  * src-limit (integer, count)
+  * synproxy (integer, count)
+
+## Example Output
+
+```shell
+> pfctl -s info
+Status: Enabled for 0 days 00:26:05           Debug: Urgent
+
+State Table                          Total             Rate
+  current entries                        2
+  searches                           11325            7.2/s
+  inserts                                5            0.0/s
+  removals                               3            0.0/s
+Counters
+  match                              11226            7.2/s
+  bad-offset                             0            0.0/s
+  fragment                               0            0.0/s
+  short                                  0            0.0/s
+  normalize                              0            0.0/s
+  memory                                 0            0.0/s
+  bad-timestamp                          0            0.0/s
+  congestion                             0            0.0/s
+  ip-option                              0            0.0/s
+  proto-cksum                            0            0.0/s
+  state-mismatch                         0            0.0/s
+  state-insert                           0            0.0/s
+  state-limit                            0            0.0/s
+  src-limit                              0            0.0/s
+  synproxy                               0            0.0/s
+```
+
+```text
+pf,host=columbia entries=3i,searches=2668i,inserts=12i,removals=9i 1510941775000000000
+```
diff --git a/content/telegraf/v1/input-plugins/pgbouncer/_index.md b/content/telegraf/v1/input-plugins/pgbouncer/_index.md
new file mode 100644
index 000000000..04564b084
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/pgbouncer/_index.md
@@ -0,0 +1,149 @@
+---
+description: "Telegraf plugin for collecting metrics from PgBouncer"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: PgBouncer
+    identifier: input-pgbouncer
+tags: [PgBouncer, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# PgBouncer Input Plugin
+
+The `pgbouncer` plugin provides metrics for your PgBouncer load balancer.
+
+More information about the meaning of these metrics can be found in the
+[PgBouncer Documentation](https://pgbouncer.github.io/usage.html).
+
+- PgBouncer minimum tested version: 1.5
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from one or many pgbouncer servers
+[[inputs.pgbouncer]]
+  ## specify address via a url matching:
+  ##   postgres://[pqgotest[:password]]@host:port[/dbname]\
+  ##       ?sslmode=[disable|verify-ca|verify-full]
+  ## or a simple string:
+  ##   host=localhost port=5432 user=pqgotest password=... sslmode=... dbname=app_production
+  ##
+  ## All connection parameters are optional.
+  ##
+  address = "host=localhost user=pgbouncer sslmode=disable"
+
+  ## Specify which "show" commands to gather metrics for.
+  ## Choose from: "stats", "pools", "lists", "databases"
+  # show_commands = ["stats", "pools"]
+```
+
+### `address`
+
+Specify address via a postgresql connection string:
+
+```text
+host=/run/postgresql port=6432 user=telegraf database=pgbouncer
+```
+
+Or via a URL matching:
+
+```text
+postgres://[pqgotest[:password]]@host:port[/dbname]?sslmode=[disable|verify-ca|verify-full]
+```
+
+All connection parameters are optional.
+
+Without the dbname parameter, the driver will default to a database with the
+same name as the user.  This dbname is just for instantiating a connection with
+the server and doesn't restrict the databases we are trying to grab metrics for.
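+
+As a sketch (the host, port, and user below are illustrative assumptions),
+the same connection can be written in either form:
+
+```toml
+[[inputs.pgbouncer]]
+  ## URL form
+  address = "postgres://telegraf@localhost:6432/pgbouncer?sslmode=disable"
+
+[[inputs.pgbouncer]]
+  ## Key/value form over a Unix socket
+  address = "host=/run/postgresql port=6432 user=telegraf database=pgbouncer"
+```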
+
+## Metrics
+
+- pgbouncer
+  - tags:
+    - db
+    - server
+  - fields:
+    - avg_query_count
+    - avg_query_time
+    - avg_wait_time
+    - avg_xact_count
+    - avg_xact_time
+    - total_query_count
+    - total_query_time
+    - total_received
+    - total_sent
+    - total_wait_time
+    - total_xact_count
+    - total_xact_time
+
+- pgbouncer_pools
+  - tags:
+    - db
+    - pool_mode
+    - server
+    - user
+  - fields:
+    - cl_active
+    - cl_waiting
+    - maxwait
+    - maxwait_us
+    - sv_active
+    - sv_idle
+    - sv_login
+    - sv_tested
+    - sv_used
+
+- pgbouncer_lists
+  - tags:
+    - db
+    - server
+    - user
+  - fields:
+    - databases
+    - users
+    - pools
+    - free_clients
+    - used_clients
+    - login_clients
+    - free_servers
+    - used_servers
+    - dns_names
+    - dns_zones
+    - dns_queries
+
+- pgbouncer_databases
+  - tags:
+    - db
+    - pg_dbname
+    - server
+    - user
+  - fields:
+    - current_connections
+    - pool_size
+    - min_pool_size
+    - reserve_pool
+    - max_connections
+    - paused
+    - disabled
+
+## Example Output
+
+```text
+pgbouncer,db=pgbouncer,server=host\=debian-buster-postgres\ user\=dbn\ port\=6432\ dbname\=pgbouncer\  avg_query_count=0i,avg_query_time=0i,avg_wait_time=0i,avg_xact_count=0i,avg_xact_time=0i,total_query_count=26i,total_query_time=0i,total_received=0i,total_sent=0i,total_wait_time=0i,total_xact_count=26i,total_xact_time=0i 1581569936000000000
+pgbouncer_pools,db=pgbouncer,pool_mode=statement,server=host\=debian-buster-postgres\ user\=dbn\ port\=6432\ dbname\=pgbouncer\ ,user=pgbouncer cl_active=1i,cl_waiting=0i,maxwait=0i,maxwait_us=0i,sv_active=0i,sv_idle=0i,sv_login=0i,sv_tested=0i,sv_used=0i 1581569936000000000
+pgbouncer_lists,db=pgbouncer,server=host\=debian-buster-postgres\ user\=dbn\ port\=6432\ dbname\=pgbouncer\ ,user=pgbouncer databases=1i,dns_names=0i,dns_queries=0i,dns_zones=0i,free_clients=47i,free_servers=0i,login_clients=0i,pools=1i,used_clients=3i,used_servers=0i,users=4i 1581569936000000000
+pgbouncer_databases,db=pgbouncer,pg_dbname=pgbouncer,server=host\=debian-buster-postgres\ user\=dbn\ port\=6432\ dbname\=pgbouncer\ name=pgbouncer disabled=0i,pool_size=2i,current_connections=0i,min_pool_size=0i,reserve_pool=0i,max_connections=0i,paused=0i 1581569936000000000
+pgbouncer_databases,db=postgres,pg_dbname=postgres,server=host\=debian-buster-postgres\ user\=dbn\ port\=6432\ dbname\=pgbouncer\ name=postgres current_connections=0i,disabled=0i,pool_size=20i,min_pool_size=0i,reserve_pool=0i,paused=0i,max_connections=0i 1581569936000000000
+```
diff --git a/content/telegraf/v1/input-plugins/phpfpm/_index.md b/content/telegraf/v1/input-plugins/phpfpm/_index.md
new file mode 100644
index 000000000..2924e9fd1
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/phpfpm/_index.md
@@ -0,0 +1,132 @@
+---
+description: "Telegraf plugin for collecting metrics from PHP-FPM"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: PHP-FPM
+    identifier: input-phpfpm
+tags: [PHP-FPM, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# PHP-FPM Input Plugin
+
+The phpfpm plugin gathers PHP-FPM statistics from either the HTTP status page
+or the FPM socket.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics of phpfpm, via HTTP status page or socket
+[[inputs.phpfpm]]
+  ## An array of addresses to gather stats about. Specify an ip or hostname
+  ## with optional port and path
+  ##
+  ## Plugin can be configured in three modes (either can be used):
+  ##   - http: the URL must start with http:// or https://, ie:
+  ##       "http://localhost/status"
+  ##       "http://192.168.130.1/status?full"
+  ##
+  ##   - unixsocket: path to fpm socket, ie:
+  ##       "/var/run/php5-fpm.sock"
+  ##      or using a custom fpm status path:
+  ##       "/var/run/php5-fpm.sock:fpm-custom-status-path"
+  ##      glob patterns are also supported:
+  ##       "/var/run/php*.sock"
+  ##
+  ##   - fcgi: the URL must start with fcgi:// or cgi://, and port must be present, ie:
+  ##       "fcgi://10.0.0.12:9000/status"
+  ##       "cgi://10.0.10.12:9001/status"
+  ##
+  ## Example of multiple gathering from local socket and remote host
+  ## urls = ["http://192.168.1.20/status", "/tmp/fpm.sock"]
+  urls = ["http://localhost/status"]
+
+  ## Format of stats to parse, set to "status" or "json"
+  ## If the user configures the URL to return JSON (e.g.
+  ## http://localhost/status?json), set to JSON. Otherwise, will attempt to
+  ## parse line-by-line. The JSON mode will produce additional metrics.
+  # format = "status"
+
+  ## Duration allowed to complete HTTP requests.
+  # timeout = "5s"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+When using `unixsocket`, ensure that Telegraf runs on the same host and that
+the socket path is accessible to the telegraf user.
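+
+For illustration, a configuration mixing the three modes might look like this
+(the socket path and fcgi address are assumptions; adjust them to your FPM
+pools):
+
+```toml
+[[inputs.phpfpm]]
+  urls = [
+    "http://localhost/status",      # http mode
+    "/run/php/php-fpm.sock",        # unixsocket mode
+    "fcgi://10.0.0.12:9000/status"  # fcgi mode
+  ]
+```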
+
+## Metrics
+
+- phpfpm
+  - tags:
+    - pool
+    - url
+  - fields:
+    - accepted_conn
+    - listen_queue
+    - max_listen_queue
+    - listen_queue_len
+    - idle_processes
+    - active_processes
+    - total_processes
+    - max_active_processes
+    - max_children_reached
+    - slow_requests
+- phpfpm_process
+  - tags:
+    - pool
+    - request_method
+    - request_uri
+    - script
+    - url
+    - user
+  - fields:
+    - pid
+    - content_length
+    - last_request_cpu
+    - last_request_memory
+    - request_duration
+    - requests
+    - start_time
+    - start_since
+    - state
+
+## Example Output
+
+```text
+phpfpm,pool=www accepted_conn=13i,active_processes=2i,idle_processes=1i,listen_queue=0i,listen_queue_len=0i,max_active_processes=2i,max_children_reached=0i,max_listen_queue=0i,slow_requests=0i,total_processes=3i 1453011293083331187
+phpfpm,pool=www2 accepted_conn=12i,active_processes=1i,idle_processes=2i,listen_queue=0i,listen_queue_len=0i,max_active_processes=2i,max_children_reached=0i,max_listen_queue=0i,slow_requests=0i,total_processes=3i 1453011293083691422
+phpfpm,pool=www3 accepted_conn=11i,active_processes=1i,idle_processes=2i,listen_queue=0i,listen_queue_len=0i,max_active_processes=2i,max_children_reached=0i,max_listen_queue=0i,slow_requests=0i,total_processes=3i 1453011293083691658
+```
+
+With the JSON output, additional metrics around processes are generated:
+
+```text
+phpfpm,pool=www,url=http://127.0.0.1:44637?full&json accepted_conn=3879i,active_processes=1i,idle_processes=9i,listen_queue=0i,listen_queue_len=0i,max_active_processes=3i,max_children_reached=0i,max_listen_queue=0i,slow_requests=0i,start_since=4901i,total_processes=10i
+phpfpm_process,pool=www,request_method=GET,request_uri=/fpm-status?json&full,script=-,url=http://127.0.0.1:44637?full&json,user=- content_length=0i,pid=583i,last_request_cpu=0,last_request_memory=0,request_duration=159i,requests=386i,start_time=1702044927i,state="Running"
+phpfpm_process,pool=www,request_method=GET,request_uri=/fpm-status,script=-,url=http://127.0.0.1:44637?full&json,user=- content_length=0i,pid=584i,last_request_cpu=0,last_request_memory=2097152,request_duration=174i,requests=390i,start_time=1702044927i,state="Idle"
+phpfpm_process,pool=www,request_method=GET,request_uri=/index.php,script=script.php,url=http://127.0.0.1:44637?full&json,user=- content_length=0i,pid=585i,last_request_cpu=104.93,last_request_memory=2097152,request_duration=9530i,requests=389i,start_time=1702044927i,state="Idle"
+phpfpm_process,pool=www,request_method=GET,request_uri=/ping,script=-,url=http://127.0.0.1:44637?full&json,user=- content_length=0i,pid=586i,last_request_cpu=0,last_request_memory=2097152,request_duration=127i,requests=399i,start_time=1702044927i,state="Idle"
+phpfpm_process,pool=www,request_method=GET,request_uri=/index.php,script=script.php,url=http://127.0.0.1:44637?full&json,user=- content_length=0i,pid=587i,last_request_cpu=0,last_request_memory=2097152,request_duration=9713i,requests=382i,start_time=1702044927i,state="Idle"
+phpfpm_process,pool=www,request_method=GET,request_uri=/ping,script=-,url=http://127.0.0.1:44637?full&json,user=- content_length=0i,pid=588i,last_request_cpu=0,last_request_memory=2097152,request_duration=133i,requests=383i,start_time=1702044927i,state="Idle"
+phpfpm_process,pool=www,request_method=GET,request_uri=/fpm-status?json,script=-,url=http://127.0.0.1:44637?full&json,user=- content_length=0i,pid=589i,last_request_cpu=0,last_request_memory=2097152,request_duration=154i,requests=381i,start_time=1702044927i,state="Idle"
+phpfpm_process,pool=www,request_method=GET,request_uri=/ping,script=-,url=http://127.0.0.1:44637?full&json,user=- content_length=0i,pid=590i,last_request_cpu=0,last_request_memory=2097152,request_duration=108i,requests=397i,start_time=1702044927i,state="Idle"
+phpfpm_process,pool=www,request_method=GET,request_uri=/index.php,script=script.php,url=http://127.0.0.1:44637?full&json,user=- content_length=0i,pid=591i,last_request_cpu=110.28,last_request_memory=2097152,request_duration=9068i,requests=381i,start_time=1702044927i,state="Idle"
+phpfpm_process,pool=www,request_method=GET,request_uri=/index.php,script=script.php,url=http://127.0.0.1:44637?full&json,user=- content_length=0i,pid=592i,last_request_cpu=64.27,last_request_memory=2097152,request_duration=15559i,requests=391i,start_time=1702044927i,state="Idle"
+```
diff --git a/content/telegraf/v1/input-plugins/ping/_index.md b/content/telegraf/v1/input-plugins/ping/_index.md
new file mode 100644
index 000000000..5cc52c3c1
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/ping/_index.md
@@ -0,0 +1,211 @@
+---
+description: "Telegraf plugin for collecting metrics from Ping"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Ping
+    identifier: input-ping
+tags: [Ping, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Ping Input Plugin
+
+Sends a ping message by executing the system ping command and reports the
+results.
+
+This plugin has two main methods of operation: `exec` and `native`.  The
+recommended method is `native`, which has greater system compatibility and
+performance.  However, for backwards compatibility the `exec` method is the
+default.
+
+When using `method = "exec"`, the system's ping utility is executed to send the
+ping packets.
+
+Most ping command implementations are supported; one notable exception is GNU
+Inetutils ping, which is currently unsupported.  You may instead use the
+iputils-ping implementation:
+
+```sh
+apt-get install iputils-ping
+```
+
+When using `method = "native"` a ping is sent and the results are reported in
+native Go by the Telegraf process, eliminating the need to execute the system
+`ping` command.
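+
+A minimal native-method configuration might look like the following (the hosts
+and percentile values are illustrative only):
+
+```toml
+[[inputs.ping]]
+  urls = ["example.org", "192.168.1.1"]
+  method = "native"
+  count = 5
+  percentiles = [50, 95, 99]
+```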
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Ping given url(s) and return statistics
+[[inputs.ping]]
+  ## Hosts to send ping packets to.
+  urls = ["example.org"]
+
+  ## Method used for sending pings, can be either "exec" or "native".  When set
+  ## to "exec" the systems ping command will be executed.  When set to "native"
+  ## the plugin will send pings directly.
+  ##
+  ## While the default is "exec" for backwards compatibility, new deployments
+  ## are encouraged to use the "native" method for improved compatibility and
+  ## performance.
+  # method = "exec"
+
+  ## Number of ping packets to send per interval.  Corresponds to the "-c"
+  ## option of the ping command.
+  # count = 1
+
+  ## Time to wait between sending ping packets in seconds.  Operates like the
+  ## "-i" option of the ping command.
+  # ping_interval = 1.0
+
+  ## If set, the time to wait for a ping response in seconds.  Operates like
+  ## the "-W" option of the ping command.
+  # timeout = 1.0
+
+  ## If set, the total ping deadline, in seconds.  Operates like the -w option
+  ## of the ping command.
+  # deadline = 10
+
+  ## Interface or source address to send ping from.  Operates like the -I or -S
+  ## option of the ping command.
+  # interface = ""
+
+  ## Percentiles to calculate. This only works with the native method.
+  # percentiles = [50, 95, 99]
+
+  ## Specify the ping executable binary.
+  # binary = "ping"
+
+  ## Arguments for ping command. When arguments is not empty, the command from
+  ## the binary option will be used and other options (ping_interval, timeout,
+  ## etc) will be ignored.
+  # arguments = ["-c", "3"]
+
+  ## Use only IPv4 addresses when resolving a hostname. By default, both IPv4
+  ## and IPv6 can be used.
+  # ipv4 = false
+
+  ## Use only IPv6 addresses when resolving a hostname. By default, both IPv4
+  ## and IPv6 can be used.
+  # ipv6 = false
+
+  ## Number of data bytes to be sent. Corresponds to the "-s"
+  ## option of the ping command. This only works with the native method.
+  # size = 56
+```
+
+### File Limit
+
+Since this plugin runs the ping command, it may need to open multiple files per
+host.  The `native` method reduces the number of files used, but many are still
+required.  With a large host list you may receive a `too many open files`
+error.
+
+To increase this limit on platforms using systemd the recommended method is to
+use the "drop-in directory", usually located at
+`/etc/systemd/system/telegraf.service.d`.
+
+You can create or edit a drop-in file in the correct location using:
+
+```sh
+systemctl edit telegraf
+```
+
+Increase the number of open files:
+
+```ini
+[Service]
+LimitNOFILE=8192
+```
+
+Restart Telegraf:
+
+```sh
+systemctl restart telegraf
+```
+
+### Linux Permissions
+
+When using `method = "native"`, Telegraf will attempt to use privileged raw ICMP
+sockets.  On most systems, doing so requires `CAP_NET_RAW` capabilities or for
+Telegraf to be run as root.
+
+With systemd:
+
+```sh
+systemctl edit telegraf
+```
+
+```ini
+[Service]
+CapabilityBoundingSet=CAP_NET_RAW
+AmbientCapabilities=CAP_NET_RAW
+```
+
+```sh
+systemctl restart telegraf
+```
+
+Without systemd:
+
+```sh
+setcap cap_net_raw=eip /usr/bin/telegraf
+```
+
+See [`man 7 capabilities`](http://man7.org/linux/man-pages/man7/capabilities.7.html)
+for more information about setting capabilities.
+
+### Other OS Permissions
+
+When using `method = "native"`, you will need permissions similar to the
+executable ping program for your OS.
+
+## Metrics
+
+- ping
+  - tags:
+    - url
+  - fields:
+    - packets_transmitted (integer)
+    - packets_received (integer)
+    - percent_packet_loss (float)
+    - ttl (integer, Not available on Windows)
+    - average_response_ms (float)
+    - minimum_response_ms (float)
+    - maximum_response_ms (float)
+    - standard_deviation_ms (float, Available on Windows only with method = "native")
+    - percentile\<N\>_ms (float, Where `<N>` is the percentile specified in `percentiles`. Available with method = "native" only)
+    - errors (float, Windows only)
+    - reply_received (integer, Windows with method = "exec" only)
+    - percent_reply_loss (float, Windows with method = "exec" only)
+    - result_code (int, success = 0, no such host = 1, ping error = 2)
+
+### reply_received vs packets_received
+
+On Windows systems with `method = "exec"`, a "Destination net unreachable"
+reply will increment `packets_received` but not `reply_received`.
+
+### ttl
+
+There is currently no support for TTL on Windows with `"native"`; track
+progress at <https://github.com/golang/go/issues/7175> and
+<https://github.com/golang/go/issues/7174>.
+
+## Example Output
+
+```text
+ping,url=example.org average_response_ms=23.066,ttl=63,maximum_response_ms=24.64,minimum_response_ms=22.451,packets_received=5i,packets_transmitted=5i,percent_packet_loss=0,result_code=0i,standard_deviation_ms=0.809 1535747258000000000
+```
diff --git a/content/telegraf/v1/input-plugins/postfix/_index.md b/content/telegraf/v1/input-plugins/postfix/_index.md
new file mode 100644
index 000000000..003b56b52
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/postfix/_index.md
@@ -0,0 +1,85 @@
+---
+description: "Telegraf plugin for collecting metrics from Postfix"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Postfix
+    identifier: input-postfix
+tags: [Postfix, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Postfix Input Plugin
+
+The postfix plugin reports metrics on the postfix queues.
+
+For each of the active, hold, incoming, maildrop, and deferred queues
+(<http://www.postfix.org/QSHAPE_README.html#queues>), it will report the queue
+length (number of items), size (bytes used by items), and age (age of oldest
+item in seconds).
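+
+As an illustration only (this is not the plugin's implementation), the length
+and size of a queue directory can be derived from the files it contains:
+
+```shell
+# Rough sketch of what the plugin reports per queue directory:
+# length = number of files, size = total bytes used by them.
+QUEUE=$(mktemp -d)
+printf 'hello' > "$QUEUE/msg1"     # 5 bytes
+printf 'world!!' > "$QUEUE/msg2"   # 7 bytes
+
+length=$(find "$QUEUE" -type f | wc -l)
+size=$(find "$QUEUE" -type f -exec cat {} + | wc -c)
+
+echo "length=$length size=$size"
+```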
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Measure postfix queue statistics
+# This plugin ONLY supports non-Windows
+[[inputs.postfix]]
+  ## Postfix queue directory. If not provided, telegraf will try to use
+  ## 'postconf -h queue_directory' to determine it.
+  # queue_directory = "/var/spool/postfix"
+```
+
+### Permissions
+
+Telegraf will need read access to the files in the queue directory.  You may
+need to alter the permissions of these directories to provide access to the
+telegraf user.
+
+This can be set up using either standard Unix permissions or POSIX ACLs;
+you only need to use one method:
+
+Unix permissions:
+
+```sh
+sudo chgrp -R telegraf /var/spool/postfix/{active,hold,incoming,deferred}
+sudo chmod -R g+rXs /var/spool/postfix/{active,hold,incoming,deferred}
+sudo usermod -a -G postdrop telegraf
+sudo chmod g+r /var/spool/postfix/maildrop
+```
+
+POSIX ACLs:
+
+```sh
+sudo setfacl -Rm g:telegraf:rX /var/spool/postfix/
+sudo setfacl -dm g:telegraf:rX /var/spool/postfix/
+```
+
+## Metrics
+
+- postfix_queue
+  - tags:
+    - queue
+  - fields:
+    - length (integer)
+    - size (integer, bytes)
+    - age (integer, seconds)
+
+## Example Output
+
+```text
+postfix_queue,queue=active length=3,size=12345,age=9
+postfix_queue,queue=hold length=0,size=0,age=0
+postfix_queue,queue=maildrop length=1,size=2000,age=2
+postfix_queue,queue=incoming length=1,size=1020,age=0
+postfix_queue,queue=deferred length=400,size=76543210,age=3600
+```
diff --git a/content/telegraf/v1/input-plugins/postgresql/_index.md b/content/telegraf/v1/input-plugins/postgresql/_index.md
new file mode 100644
index 000000000..06923044c
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/postgresql/_index.md
@@ -0,0 +1,192 @@
+---
+description: "Telegraf plugin for collecting metrics from PostgreSQL"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: PostgreSQL
+    identifier: input-postgresql
+tags: [PostgreSQL, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# PostgreSQL Input Plugin
+
+The `postgresql` plugin provides metrics for your PostgreSQL Server instance.
+Recorded metrics are lightweight and use the statistics views supplied
+by PostgreSQL.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `address` option.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+[SECRETSTORE]: ../../../docs/CONFIGURATION.md#secret-store-secrets
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from one or many postgresql servers
+[[inputs.postgresql]]
+  ## Specify address via a url matching:
+  ##   postgres://[pqgotest[:password]]@localhost[/dbname]?sslmode=[disable|verify-ca|verify-full]&statement_timeout=...
+  ## or a simple string:
+  ##   host=localhost user=pqgotest password=... sslmode=... dbname=app_production
+  ## Users can pass the path to the socket as the host value to use a socket
+  ## connection (e.g. `/var/run/postgresql`).
+  ##
+  ## All connection parameters are optional.
+  ##
+  ## Without the dbname parameter, the driver will default to a database
+  ## with the same name as the user. This dbname is just for instantiating a
+  ## connection with the server and doesn't restrict the databases we are trying
+  ## to grab metrics for.
+  ##
+  address = "host=localhost user=postgres sslmode=disable"
+
+  ## A custom name for the database that will be used as the "server" tag in the
+  ## measurement output. If not specified, a default one generated from
+  ## the connection address is used.
+  # outputaddress = "db01"
+
+  ## connection configuration.
+  ## maxlifetime - specify the maximum lifetime of a connection.
+  ## default is forever (0s)
+  ##
+  ## Note that this does not interrupt queries, the lifetime will not be enforced
+  ## whilst a query is running
+  # max_lifetime = "0s"
+
+  ## A list of databases to explicitly ignore.  If not specified, metrics for all
+  ## databases are gathered.  Do NOT use with the 'databases' option.
+  # ignored_databases = ["postgres", "template0", "template1"]
+
+  ## A list of databases to pull metrics about. If not specified, metrics for all
+  ## databases are gathered.  Do NOT use with the 'ignored_databases' option.
+  # databases = ["app_production", "testing"]
+
+  ## Whether to use prepared statements when connecting to the database.
+  ## This should be set to false when connecting through a PgBouncer instance
+  ## with pool_mode set to transaction.
+  prepared_statements = true
+```
+
+Specify address via a postgresql connection string:
+
+```text
+host=localhost port=5432 user=telegraf database=telegraf
+```
+
+Or via a URL matching:
+
+```text
+postgres://[pqgotest[:password]]@host:port[/dbname]?sslmode=[disable|verify-ca|verify-full]
+```
+
+Users can pass the path to the socket as the host value to use a socket
+connection (e.g. `/var/run/postgresql`).
+
+It is also possible to specify a maximum execution time (in ms) for any
+individual statement passed over the connection, using the
+`statement_timeout` parameter:
+
+```text
+postgres://[pqgotest[:password]]@host:port[/dbname]?sslmode=[disable|verify-ca|verify-full]&statement_timeout=10000
+```
+
+All connection parameters are optional. Without the dbname parameter, the driver
+will default to a database with the same name as the user. This dbname is just
+for instantiating a connection with the server and doesn't restrict the
+databases we are trying to grab metrics for.
+
+A list of databases to explicitly ignore.  If not specified, metrics for all
+databases are gathered.  Do NOT use with the 'databases' option.
+
+```text
+ignored_databases = ["postgres", "template0", "template1"]
+```
+
+A list of databases to pull metrics about. If not specified, metrics for all
+databases are gathered.  Do NOT use with the 'ignored_databases' option.
+
+```toml
+databases = ["app_production", "testing"]
+```
+
+### Permissions
+
+The plugin gathers metrics from the `pg_stat_database` and `pg_stat_bgwriter`
+views. To grant a user access to the views, run:
+
+```sql
+GRANT pg_read_all_stats TO user;
+```
+
+See the [PostgreSQL docs](https://www.postgresql.org/docs/current/predefined-roles.html) for more information on the predefined roles.
+
+
+### TLS Configuration
+
+Add the `sslkey`, `sslcert` and `sslrootcert` options to your DSN:
+
+```shell
+host=localhost user=pgotest dbname=app_production sslmode=require sslkey=/etc/telegraf/key.pem sslcert=/etc/telegraf/cert.pem sslrootcert=/etc/telegraf/ca.pem
+```
+
+## Metrics
+
+This plugin provides metrics for your PostgreSQL database. It currently
+works with PostgreSQL versions 8.1+. It uses data from the built-in
+`pg_stat_database` and `pg_stat_bgwriter` views. The metrics recorded depend on
+your version of PostgreSQL; see the table below:
+
+```text
+pg version      9.2+   9.1   8.3-9.0   8.1-8.2   7.4-8.0(unsupported)
+---             ---    ---   -------   -------   -------
+datid            x      x       x         x
+datname          x      x       x         x
+numbackends      x      x       x         x         x
+xact_commit      x      x       x         x         x
+xact_rollback    x      x       x         x         x
+blks_read        x      x       x         x         x
+blks_hit         x      x       x         x         x
+tup_returned     x      x       x
+tup_fetched      x      x       x
+tup_inserted     x      x       x
+tup_updated      x      x       x
+tup_deleted      x      x       x
+conflicts        x      x
+temp_files       x
+temp_bytes       x
+deadlocks        x
+blk_read_time    x
+blk_write_time   x
+stats_reset*     x      x
+```
+
+_* value ignored and therefore not recorded._
+
+More information about the meaning of these metrics can be found in the
+[PostgreSQL Documentation](http://www.postgresql.org/docs/9.2/static/monitoring-stats.html#PG-STAT-DATABASE-VIEW).
+
+
+## Example Output
+
+```text
+postgresql,db=postgres_global,server=dbname\=postgres\ host\=localhost\ port\=5432\ statement_timeout\=10000\ user\=postgres tup_fetched=1271i,tup_updated=5i,session_time=1451414320768.855,xact_rollback=2i,conflicts=0i,blk_write_time=0,temp_bytes=0i,datid=0i,sessions_fatal=0i,tup_returned=1339i,sessions_abandoned=0i,blk_read_time=0,blks_read=88i,idle_in_transaction_time=0,sessions=0i,active_time=0,tup_inserted=24i,tup_deleted=0i,temp_files=0i,numbackends=0i,xact_commit=4i,sessions_killed=0i,blks_hit=5616i,deadlocks=0i 1672399790000000000
+postgresql,db=postgres,host=oss_cluster_host,server=dbname\=postgres\ host\=localhost\ port\=5432\ statement_timeout\=10000\ user\=postgres conflicts=0i,sessions_abandoned=2i,active_time=460340.823,tup_returned=119382i,tup_deleted=0i,blk_write_time=0,xact_commit=305i,blks_hit=16358i,deadlocks=0i,sessions=12i,numbackends=1i,temp_files=0i,xact_rollback=5i,sessions_fatal=0i,datname="postgres",blk_read_time=0,idle_in_transaction_time=0,temp_bytes=0i,tup_inserted=3i,tup_updated=0i,blks_read=299i,datid=5i,session_time=469056.613,sessions_killed=0i,tup_fetched=5550i 1672399790000000000
+postgresql,db=template1,host=oss_cluster_host,server=dbname\=postgres\ host\=localhost\ port\=5432\ statement_timeout\=10000\ user\=postgres active_time=0,idle_in_transaction_time=0,blks_read=1352i,sessions_abandoned=0i,tup_fetched=28544i,session_time=0,sessions_killed=0i,temp_bytes=0i,tup_returned=188541i,xact_commit=1168i,blk_read_time=0,sessions_fatal=0i,datid=1i,datname="template1",conflicts=0i,xact_rollback=0i,numbackends=0i,deadlocks=0i,sessions=0i,tup_inserted=17520i,temp_files=0i,tup_updated=743i,blk_write_time=0,blks_hit=99487i,tup_deleted=34i 1672399790000000000
+postgresql,db=template0,host=oss_cluster_host,server=dbname\=postgres\ host\=localhost\ port\=5432\ statement_timeout\=10000\ user\=postgres sessions=0i,datid=4i,tup_updated=0i,sessions_abandoned=0i,blk_write_time=0,numbackends=0i,blks_read=0i,blks_hit=0i,sessions_fatal=0i,temp_files=0i,deadlocks=0i,conflicts=0i,xact_commit=0i,xact_rollback=0i,session_time=0,datname="template0",tup_returned=0i,tup_inserted=0i,idle_in_transaction_time=0,tup_fetched=0i,active_time=0,temp_bytes=0i,tup_deleted=0i,blk_read_time=0,sessions_killed=0i 1672399790000000000
+postgresql,db=postgres,host=oss_cluster_host,server=dbname\=postgres\ host\=localhost\ port\=5432\ statement_timeout\=10000\ user\=postgres buffers_clean=0i,buffers_alloc=426i,checkpoints_req=1i,buffers_checkpoint=50i,buffers_backend_fsync=0i,checkpoint_write_time=5053,checkpoints_timed=26i,checkpoint_sync_time=26,maxwritten_clean=0i,buffers_backend=9i 1672399790000000000
+```
diff --git a/content/telegraf/v1/input-plugins/postgresql_extensible/_index.md b/content/telegraf/v1/input-plugins/postgresql_extensible/_index.md
new file mode 100644
index 000000000..a28cb5ca4
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/postgresql_extensible/_index.md
@@ -0,0 +1,329 @@
+---
+description: "Telegraf plugin for collecting metrics from PostgreSQL Extensible"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: PostgreSQL Extensible
+    identifier: input-postgresql_extensible
+tags: [PostgreSQL Extensible, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# PostgreSQL Extensible Input Plugin
+
+This plugin provides metrics for your PostgreSQL database. It has been
+designed to parse SQL queries in the plugin section of your `telegraf.conf`.
+
+In the example below, two queries are specified, each with the following parameters:
+
+* The SQL query itself
+* The minimum PostgreSQL version supported (the numeric display visible in pg_settings)
+* A boolean to define if the query has to be run against some specific database (defined in the `databases` variable of the plugin section)
+* The name of the measurement
+* A list of the columns to be defined as tags
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `address` option.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from one or many postgresql servers
+[[inputs.postgresql_extensible]]
+  # specify address via a url matching:
+  # postgres://[pqgotest[:password]]@host:port[/dbname]?sslmode=...&statement_timeout=...
+  # or a simple string:
+  #   host=localhost port=5432 user=pqgotest password=... sslmode=... dbname=app_production
+  #
+  # All connection parameters are optional.
+  # Without the dbname parameter, the driver will default to a database
+  # with the same name as the user. This dbname is just for instantiating a
+  # connection with the server and doesn't restrict the databases we are trying
+  # to grab metrics for.
+  #
+  address = "host=localhost user=postgres sslmode=disable"
+
+  ## Whether to use prepared statements when connecting to the database.
+  ## This should be set to false when connecting through a PgBouncer instance
+  ## with pool_mode set to transaction.
+  prepared_statements = true
+
+  # Define the toml config where the sql queries are stored
+  # The script option can be used to specify the .sql file path.
+  # If both the script and sqlquery options are specified, sqlquery is used.
+  #
+  # the measurement field defines measurement name for metrics produced
+  # by the query. Default is "postgresql".
+  #
+  # the tagvalue field is used to define custom tags (separated by commas).
+  # the query is expected to return columns which match the names of the
+  # defined tags. The values in these columns must be of a string-type,
+  # a number-type or a blob-type.
+  #
+  # The timestamp field is used to override the data point's timestamp value. By
+  # default, all rows are inserted with the current time. By setting a timestamp
+  # column, the row will be inserted with that column's value.
+  #
+  # The min_version field specifies the minimal database version this query
+  # will run on.
+  #
+  # The max_version field, when set, specifies the maximal database version
+  # this query will NOT run on.
+  #
+  # The database version in `min_version` and `max_version` is represented as
+  # a single integer without the last component, for example:
+  # 9.6.2 -> 906
+  # 15.2 -> 1500
+  #
+  # Structure :
+  # [[inputs.postgresql_extensible.query]]
+  #   measurement string
+  #   sqlquery string
+  #   min_version int
+  #   max_version int
+  #   withdbname boolean
+  #   tagvalue string (comma-separated)
+  #   timestamp string
+  [[inputs.postgresql_extensible.query]]
+    measurement="pg_stat_database"
+    sqlquery="SELECT * FROM pg_stat_database WHERE datname"
+    min_version=901
+    tagvalue=""
+  [[inputs.postgresql_extensible.query]]
+    script="your_sql-filepath.sql"
+    min_version=901
+    max_version=1300
+    tagvalue=""
+```
+
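+The version-to-integer rule described in the comments above can be sketched as
+follows (an illustrative helper, not part of the plugin):
+
+```python
+def pg_version_int(version: str) -> int:
+    """Convert a PostgreSQL version string to the integer form used by
+    min_version/max_version (last component dropped)."""
+    parts = [int(p) for p in version.split(".")]
+    major = parts[0]
+    # Pre-10 releases carry a meaningful minor number (e.g. 9.6.2 -> 906);
+    # from 10 on, only the major number matters (e.g. 15.2 -> 1500).
+    minor = parts[1] if major < 10 and len(parts) > 1 else 0
+    return major * 100 + minor
+
+print(pg_version_int("9.6.2"))  # 906
+print(pg_version_int("15.2"))   # 1500
+```
+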
+The system can be easily extended using homemade metrics collection tools or
+using PostgreSQL extensions such as
+[pg_stat_statements](http://www.postgresql.org/docs/current/static/pgstatstatements.html),
+[pg_proctab](https://github.com/markwkm/pg_proctab) or
+[powa](http://dalibo.github.io/powa/).
+
+## Sample Queries
+
+* `telegraf.conf` postgresql_extensible queries (assuming that you have
+  configured your connection correctly)
+
+```toml
+[[inputs.postgresql_extensible.query]]
+  sqlquery="SELECT * FROM pg_stat_database"
+  version=901
+  withdbname=false
+  tagvalue=""
+[[inputs.postgresql_extensible.query]]
+  sqlquery="SELECT * FROM pg_stat_bgwriter"
+  version=901
+  withdbname=false
+  tagvalue=""
+[[inputs.postgresql_extensible.query]]
+  sqlquery="select * from sessions"
+  version=901
+  withdbname=false
+  tagvalue="db,username,state"
+[[inputs.postgresql_extensible.query]]
+  sqlquery="select setting as max_connections from pg_settings where \
+  name='max_connections'"
+  version=801
+  withdbname=false
+  tagvalue=""
+[[inputs.postgresql_extensible.query]]
+  sqlquery="select * from pg_stat_kcache"
+  version=901
+  withdbname=false
+  tagvalue=""
+[[inputs.postgresql_extensible.query]]
+  sqlquery="select setting as shared_buffers from pg_settings where \
+  name='shared_buffers'"
+  version=801
+  withdbname=false
+  tagvalue=""
+[[inputs.postgresql_extensible.query]]
+  sqlquery="SELECT db, count( distinct blocking_pid ) AS num_blocking_sessions,\
+  count( distinct blocked_pid) AS num_blocked_sessions FROM \
+  public.blocking_procs group by db"
+  version=901
+  withdbname=false
+  tagvalue="db"
+[[inputs.postgresql_extensible.query]]
+  sqlquery="""
+    SELECT type, (enabled || '') AS enabled, COUNT(*)
+      FROM application_users
+      GROUP BY type, enabled
+  """
+  version=901
+  withdbname=false
+  tagvalue="type,enabled"
+```
+
+## PostgreSQL Side
+
+In `postgresql.conf`:
+
+```text
+shared_preload_libraries = 'pg_stat_statements,pg_stat_kcache'
+```
+
+Please follow the requirements to set up those extensions.
+
+In the database (this can be a specific monitoring database):
+
+```sql
+create extension pg_stat_statements;
+create extension pg_stat_kcache;
+create extension pg_proctab;
+```
+
+(assuming that the extensions are installed at the OS layer)
+
+* pg_stat_kcache is available on the postgresql.org yum repo
+* pg_proctab is available at : <https://github.com/markwkm/pg_proctab>
+
+## Views
+
+* Blocking sessions
+
+```sql
+CREATE OR REPLACE VIEW public.blocking_procs AS
+ SELECT a.datname AS db,
+    kl.pid AS blocking_pid,
+    ka.usename AS blocking_user,
+    ka.query AS blocking_query,
+    bl.pid AS blocked_pid,
+    a.usename AS blocked_user,
+    a.query AS blocked_query,
+    to_char(age(now(), a.query_start), 'HH24h:MIm:SSs'::text) AS age
+   FROM pg_locks bl
+     JOIN pg_stat_activity a ON bl.pid = a.pid
+     JOIN pg_locks kl ON bl.locktype = kl.locktype AND NOT bl.database IS
+     DISTINCT FROM kl.database AND NOT bl.relation IS DISTINCT FROM kl.relation
+     AND NOT bl.page IS DISTINCT FROM kl.page AND NOT bl.tuple IS DISTINCT FROM
+     kl.tuple AND NOT bl.virtualxid IS DISTINCT FROM kl.virtualxid AND NOT
+     bl.transactionid IS DISTINCT FROM kl.transactionid AND NOT bl.classid IS
+     DISTINCT FROM kl.classid AND NOT bl.objid IS DISTINCT FROM kl.objid AND
+      NOT bl.objsubid IS DISTINCT FROM kl.objsubid AND bl.pid <> kl.pid
+     JOIN pg_stat_activity ka ON kl.pid = ka.pid
+  WHERE kl.granted AND NOT bl.granted
+  ORDER BY a.query_start;
+```
+
+* Sessions Statistics
+
+```sql
+CREATE OR REPLACE VIEW public.sessions AS
+ WITH proctab AS (
+         SELECT pg_proctab.pid,
+                CASE
+                    WHEN pg_proctab.state::text = 'R'::bpchar::text
+                      THEN 'running'::text
+                    WHEN pg_proctab.state::text = 'D'::bpchar::text
+                      THEN 'sleep-io'::text
+                    WHEN pg_proctab.state::text = 'S'::bpchar::text
+                      THEN 'sleep-waiting'::text
+                    WHEN pg_proctab.state::text = 'Z'::bpchar::text
+                      THEN 'zombie'::text
+                    WHEN pg_proctab.state::text = 'T'::bpchar::text
+                      THEN 'stopped'::text
+                    ELSE NULL::text
+                END AS proc_state,
+            pg_proctab.ppid,
+            pg_proctab.utime,
+            pg_proctab.stime,
+            pg_proctab.vsize,
+            pg_proctab.rss,
+            pg_proctab.processor,
+            pg_proctab.rchar,
+            pg_proctab.wchar,
+            pg_proctab.syscr,
+            pg_proctab.syscw,
+            pg_proctab.reads,
+            pg_proctab.writes,
+            pg_proctab.cwrites
+           FROM pg_proctab() pg_proctab(pid, comm, fullcomm, state, ppid, pgrp,
+             session, tty_nr, tpgid, flags, minflt, cminflt, majflt, cmajflt,
+             utime, stime, cutime, cstime, priority, nice, num_threads,
+             itrealvalue, starttime, vsize, rss, exit_signal, processor,
+             rt_priority, policy, delayacct_blkio_ticks, uid, username, rchar,
+             wchar, syscr, syscw, reads, writes, cwrites)
+        ), stat_activity AS (
+         SELECT pg_stat_activity.datname,
+            pg_stat_activity.pid,
+            pg_stat_activity.usename,
+                CASE
+                    WHEN pg_stat_activity.query IS NULL THEN 'no query'::text
+                    WHEN pg_stat_activity.query IS NOT NULL AND
+                    pg_stat_activity.state = 'idle'::text THEN 'no query'::text
+                    ELSE regexp_replace(pg_stat_activity.query, '[\n\r]+'::text,
+                       ' '::text, 'g'::text)
+                END AS query
+           FROM pg_stat_activity
+        )
+ SELECT stat.datname::name AS db,
+    stat.usename::name AS username,
+    stat.pid,
+    proc.proc_state::text AS state,
+('"'::text || stat.query) || '"'::text AS query,
+    (proc.utime/1000)::bigint AS session_usertime,
+    (proc.stime/1000)::bigint AS session_systemtime,
+    proc.vsize AS session_virtual_memory_size,
+    proc.rss AS session_resident_memory_size,
+    proc.processor AS session_processor_number,
+    proc.rchar AS session_bytes_read,
+    proc.rchar-proc.reads AS session_logical_bytes_read,
+    proc.wchar AS session_bytes_written,
+    proc.wchar-proc.writes AS session_logical_bytes_writes,
+    proc.syscr AS session_read_io,
+    proc.syscw AS session_write_io,
+    proc.reads AS session_physical_reads,
+    proc.writes AS session_physical_writes,
+    proc.cwrites AS session_cancel_writes
+   FROM proctab proc,
+    stat_activity stat
+  WHERE proc.pid = stat.pid;
+```
+
+## Example Output
+
+The example output below was generated by running the query:
+
+```sql
+select count(*)*100 / (select cast(nullif(setting, '') AS integer) from pg_settings where name='max_connections') as percentage_of_used_cons from pg_stat_activity
+```
+
+This generates the following output:
+
+```text
+postgresql,db=postgres,server=dbname\=postgres\ host\=localhost\ port\=5432\ statement_timeout\=10000\ user\=postgres percentage_of_used_cons=6i 1672400531000000000
+```
+
+## Metrics
+
+The metrics collected by this input plugin will depend on the configured query.
+
+By default, the following format is used:
+
+* postgresql
+  * tags:
+    * db
+    * server
diff --git a/content/telegraf/v1/input-plugins/powerdns/_index.md b/content/telegraf/v1/input-plugins/powerdns/_index.md
new file mode 100644
index 000000000..24fcf8e66
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/powerdns/_index.md
@@ -0,0 +1,100 @@
+---
+description: "Telegraf plugin for collecting metrics from PowerDNS"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: PowerDNS
+    identifier: input-powerdns
+tags: [PowerDNS, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# PowerDNS Input Plugin
+
+The powerdns plugin gathers metrics about PowerDNS using its UNIX control socket.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from one or many PowerDNS servers
+[[inputs.powerdns]]
+  # An array of sockets to gather stats about.
+  # Specify a path to unix socket.
+  #
+  # If no servers are specified, then '/var/run/pdns.controlsocket' is used as the path.
+  unix_sockets = ["/var/run/pdns.controlsocket"]
+```
+
+### Permissions
+
+Telegraf will need access to the powerdns control socket. On many systems this
+can be accomplished by adding the `telegraf` user to the `pdns` group:
+
+```sh
+usermod telegraf -a -G pdns
+```
+
+Telegraf may also need further permissions; see the `socket-mode` PowerDNS
+option to set permissions on the socket.
+
+## Metrics
+
+- powerdns
+  - corrupt-packets
+  - deferred-cache-inserts
+  - deferred-cache-lookup
+  - dnsupdate-answers
+  - dnsupdate-changes
+  - dnsupdate-queries
+  - dnsupdate-refused
+  - packetcache-hit
+  - packetcache-miss
+  - packetcache-size
+  - query-cache-hit
+  - query-cache-miss
+  - rd-queries
+  - recursing-answers
+  - recursing-questions
+  - recursion-unanswered
+  - security-status
+  - servfail-packets
+  - signatures
+  - tcp-answers
+  - tcp-queries
+  - timedout-packets
+  - udp-answers
+  - udp-answers-bytes
+  - udp-do-queries
+  - udp-queries
+  - udp4-answers
+  - udp4-queries
+  - udp6-answers
+  - udp6-queries
+  - key-cache-size
+  - latency
+  - meta-cache-size
+  - qsize-q
+  - signature-cache-size
+  - sys-msec
+  - uptime
+  - user-msec
+
+## Tags
+
+- `server`: the path of the control socket queried
+
+## Example Output
+
+```text
+powerdns,server=/var/run/pdns.controlsocket corrupt-packets=0i,deferred-cache-inserts=0i,deferred-cache-lookup=0i,dnsupdate-answers=0i,dnsupdate-changes=0i,dnsupdate-queries=0i,dnsupdate-refused=0i,key-cache-size=0i,latency=26i,meta-cache-size=0i,packetcache-hit=0i,packetcache-miss=1i,packetcache-size=0i,qsize-q=0i,query-cache-hit=0i,query-cache-miss=6i,rd-queries=1i,recursing-answers=0i,recursing-questions=0i,recursion-unanswered=0i,security-status=3i,servfail-packets=0i,signature-cache-size=0i,signatures=0i,sys-msec=4349i,tcp-answers=0i,tcp-queries=0i,timedout-packets=0i,udp-answers=1i,udp-answers-bytes=50i,udp-do-queries=0i,udp-queries=0i,udp4-answers=1i,udp4-queries=1i,udp6-answers=0i,udp6-queries=0i,uptime=166738i,user-msec=3036i 1454078624932715706
+```
diff --git a/content/telegraf/v1/input-plugins/powerdns_recursor/_index.md b/content/telegraf/v1/input-plugins/powerdns_recursor/_index.md
new file mode 100644
index 000000000..50eb4e15d
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/powerdns_recursor/_index.md
@@ -0,0 +1,206 @@
+---
+description: "Telegraf plugin for collecting metrics from PowerDNS Recursor"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: PowerDNS Recursor
+    identifier: input-powerdns_recursor
+tags: [PowerDNS Recursor, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# PowerDNS Recursor Input Plugin
+
+The `powerdns_recursor` plugin gathers metrics about PowerDNS Recursor using
+the UNIX control socket.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from one or many PowerDNS Recursor servers
+[[inputs.powerdns_recursor]]
+  ## Path to the Recursor control socket.
+  unix_sockets = ["/var/run/pdns_recursor.controlsocket"]
+
+  ## Directory to create receive socket.  This default is likely not writable,
+  ## please reference the full plugin documentation for a recommended setup.
+  # socket_dir = "/var/run/"
+  ## Socket permissions for the receive socket.
+  # socket_mode = "0666"
+
+  ## The version of the PowerDNS control protocol to use. You will have to
+  ## change this based on your PowerDNS Recursor version, see below:
+  ## Version 1: PowerDNS <4.5.0
+  ## Version 2: PowerDNS 4.5.0 - 4.5.11
+  ## Version 3: PowerDNS >=4.6.0
+  ## By default this is set to 1.
+  # control_protocol_version = 1
+
+```
+
+### Newer PowerDNS Recursor versions
+
+By default, this plugin is compatible with PowerDNS Recursor versions older
+than `4.5.0`. If you are using a newer version then you'll need to adjust the
+`control_protocol_version` configuration option based on your version. For
+versions between `4.5.0` and `4.5.11` set it to `2` and for versions `4.6.0`
+and newer set it to `3`. If you don't, you will get an `i/o timeout` or a
+`protocol wrong type for socket` error.
+
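+For example, to monitor a Recursor 4.6 or newer instance (socket path as in
+the sample configuration above):
+
+```toml
+[[inputs.powerdns_recursor]]
+  unix_sockets = ["/var/run/pdns_recursor.controlsocket"]
+  ## Recursor >= 4.6.0 speaks control protocol version 3
+  control_protocol_version = 3
+```
+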
+### Permissions
+
+Telegraf will need read/write access to the control socket and to the
+`socket_dir`.  PowerDNS will need to be able to write to the `socket_dir`.
+
+The setup described below was tested on a Debian Stretch system and may need
+to be adapted for other systems.
+
+First change permissions on the controlsocket in the PowerDNS recursor
+configuration, usually in `/etc/powerdns/recursor.conf`:
+
+```text
+socket-mode = 660
+```
+
+Then place the `telegraf` user into the `pdns` group:
+
+```sh
+usermod telegraf -a -G pdns
+```
+
+Since `telegraf` cannot write to the default `/var/run` socket directory,
+create a subdirectory and adjust permissions for this directory so that both
+users can access it.
+
+```sh
+mkdir /var/run/pdns
+chown root:pdns /var/run/pdns
+chmod 770 /var/run/pdns
+```
+
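+With the directory in place, point the plugin's receive socket at it (this
+pairs with the `socket_dir` option shown in the sample configuration):
+
+```toml
+[[inputs.powerdns_recursor]]
+  unix_sockets = ["/var/run/pdns_recursor.controlsocket"]
+  socket_dir = "/var/run/pdns"
+```
+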
+## Metrics
+
+- powerdns_recursor
+  - tags:
+    - server
+  - fields:
+    - all-outqueries
+    - answers-slow
+    - answers0-1
+    - answers1-10
+    - answers10-100
+    - answers100-1000
+    - auth-zone-queries
+    - auth4-answers-slow
+    - auth4-answers0-1
+    - auth4-answers1-10
+    - auth4-answers10-100
+    - auth4-answers100-1000
+    - auth6-answers-slow
+    - auth6-answers0-1
+    - auth6-answers1-10
+    - auth6-answers10-100
+    - auth6-answers100-1000
+    - cache-entries
+    - cache-hits
+    - cache-misses
+    - case-mismatches
+    - chain-resends
+    - client-parse-errors
+    - concurrent-queries
+    - dlg-only-drops
+    - dnssec-queries
+    - dnssec-result-bogus
+    - dnssec-result-indeterminate
+    - dnssec-result-insecure
+    - dnssec-result-nta
+    - dnssec-result-secure
+    - dnssec-validations
+    - dont-outqueries
+    - ecs-queries
+    - ecs-responses
+    - edns-ping-matches
+    - edns-ping-mismatches
+    - failed-host-entries
+    - fd-usage
+    - ignored-packets
+    - ipv6-outqueries
+    - ipv6-questions
+    - malloc-bytes
+    - max-cache-entries
+    - max-mthread-stack
+    - max-packetcache-entries
+    - negcache-entries
+    - no-packet-error
+    - noedns-outqueries
+    - noerror-answers
+    - noping-outqueries
+    - nsset-invalidations
+    - nsspeeds-entries
+    - nxdomain-answers
+    - outgoing-timeouts
+    - outgoing4-timeouts
+    - outgoing6-timeouts
+    - over-capacity-drops
+    - packetcache-entries
+    - packetcache-hits
+    - packetcache-misses
+    - policy-drops
+    - policy-result-custom
+    - policy-result-drop
+    - policy-result-noaction
+    - policy-result-nodata
+    - policy-result-nxdomain
+    - policy-result-truncate
+    - qa-latency
+    - query-pipe-full-drops
+    - questions
+    - real-memory-usage
+    - resource-limits
+    - security-status
+    - server-parse-errors
+    - servfail-answers
+    - spoof-prevents
+    - sys-msec
+    - tcp-client-overflow
+    - tcp-clients
+    - tcp-outqueries
+    - tcp-questions
+    - throttle-entries
+    - throttled-out
+    - throttled-outqueries
+    - too-old-drops
+    - udp-in-errors
+    - udp-noport-errors
+    - udp-recvbuf-errors
+    - udp-sndbuf-errors
+    - unauthorized-tcp
+    - unauthorized-udp
+    - unexpected-packets
+    - unreachables
+    - uptime
+    - user-msec
+    - x-our-latency
+    - x-ourtime-slow
+    - x-ourtime0-1
+    - x-ourtime1-2
+    - x-ourtime16-32
+    - x-ourtime2-4
+    - x-ourtime4-8
+    - x-ourtime8-16
+
+## Example Output
+
+```text
+powerdns_recursor,server=/var/run/pdns_recursor.controlsocket all-outqueries=3631810i,answers-slow=36863i,answers0-1=179612i,answers1-10=1223305i,answers10-100=1252199i,answers100-1000=408357i,auth-zone-queries=4i,auth4-answers-slow=44758i,auth4-answers0-1=59721i,auth4-answers1-10=1766787i,auth4-answers10-100=1329638i,auth4-answers100-1000=430372i,auth6-answers-slow=0i,auth6-answers0-1=0i,auth6-answers1-10=0i,auth6-answers10-100=0i,auth6-answers100-1000=0i,cache-entries=296689i,cache-hits=150654i,cache-misses=2949682i,case-mismatches=0i,chain-resends=420004i,client-parse-errors=0i,concurrent-queries=0i,dlg-only-drops=0i,dnssec-queries=152970i,dnssec-result-bogus=0i,dnssec-result-indeterminate=0i,dnssec-result-insecure=0i,dnssec-result-nta=0i,dnssec-result-secure=47i,dnssec-validations=47i,dont-outqueries=62i,ecs-queries=0i,ecs-responses=0i,edns-ping-matches=0i,edns-ping-mismatches=0i,failed-host-entries=21i,fd-usage=32i,ignored-packets=0i,ipv6-outqueries=0i,ipv6-questions=0i,malloc-bytes=0i,max-cache-entries=1000000i,max-mthread-stack=33747i,max-packetcache-entries=500000i,negcache-entries=100019i,no-packet-error=0i,noedns-outqueries=73341i,noerror-answers=25453808i,noping-outqueries=0i,nsset-invalidations=2398i,nsspeeds-entries=3966i,nxdomain-answers=3341302i,outgoing-timeouts=44384i,outgoing4-timeouts=44384i,outgoing6-timeouts=0i,over-capacity-drops=0i,packetcache-entries=78258i,packetcache-hits=25999027i,packetcache-misses=3100179i,policy-drops=0i,policy-result-custom=0i,policy-result-drop=0i,policy-result-noaction=3100336i,policy-result-nodata=0i,policy-result-nxdomain=0i,policy-result-truncate=0i,qa-latency=6553i,query-pipe-full-drops=0i,questions=29099363i,real-memory-usage=280494080i,resource-limits=0i,security-status=1i,server-parse-errors=0i,servfail-answers=304253i,spoof-prevents=0i,sys-msec=1312600i,tcp-client-overflow=0i,tcp-clients=0i,tcp-outqueries=116i,tcp-questions=133i,throttle-entries=21i,throttled-out=13296i,throttled-outqueries=13296i,too-old-drops=2i,udp-in-errors=4i,udp-noport-errors=2918i,udp-recvbuf-errors=0i,udp-sndbuf-errors=0i,unauthorized-tcp=0i,unauthorized-udp=0i,unexpected-packets=0i,unreachables=1708i,uptime=167482i,user-msec=1282640i,x-our-latency=19i,x-ourtime-slow=642i,x-ourtime0-1=3095566i,x-ourtime1-2=3401i,x-ourtime16-32=201i,x-ourtime2-4=304i,x-ourtime4-8=198i,x-ourtime8-16=24i 1533903879000000000
+```
diff --git a/content/telegraf/v1/input-plugins/processes/_index.md b/content/telegraf/v1/input-plugins/processes/_index.md
new file mode 100644
index 000000000..dfb12754c
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/processes/_index.md
@@ -0,0 +1,110 @@
+---
+description: "Telegraf plugin for collecting metrics from Processes"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Processes
+    identifier: input-processes
+tags: [Processes, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Processes Input Plugin
+
+This plugin gathers info about the total number of processes and groups
+them by status (zombie, sleeping, running, etc.).
+
+On Linux this plugin requires access to procfs (`/proc`); on other OSes
+it requires permission to execute `ps`.
+
+**Supported Platforms**: Linux, FreeBSD, Darwin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Get the number of processes and group them by status
+# This plugin ONLY supports non-Windows
+[[inputs.processes]]
+  ## Use sudo to run ps command on *BSD systems. Linux systems will read
+  ## /proc, so this does not apply there.
+  # use_sudo = false
+```
+
+Another possible configuration is to define an alternative path for resolving
+the /proc location. Using the environment variable `HOST_PROC`, the plugin
+retrieves process information from the specified location.
+
+`docker run -v /proc:/rootfs/proc:ro -e HOST_PROC=/rootfs/proc`
+
+### Using sudo
+
+Linux systems will read from `/proc`, while BSD systems will use the `ps`
+command. The `ps` command generally does not require elevated permissions.
+However, if a user wants to collect system-wide stats, elevated permissions are
+required. If the user has configured sudo with the ability to run this
+command, then set `use_sudo` to true.
+
+If your account does not already have the ability to run commands with
+passwordless sudo then updates to the sudoers file are required. Below is an
+example that allows the required `ps` commands:
+
+First, use the `visudo` command to start editing the sudoers file. Then add
+the following content, where `<username>` is the username of the user that
+needs this access:
+
+```text
+Cmnd_Alias PS = /bin/ps
+<username> ALL=(root) NOPASSWD: PS
+Defaults!PS !logfile, !syslog, !pam_session
+```
+
+## Metrics
+
+- processes
+  - fields:
+    - blocked (aka disk sleep or uninterruptible sleep)
+    - running
+    - sleeping
+    - stopped
+    - total
+    - zombie
+    - dead
+    - wait (freebsd only)
+    - idle (bsd and Linux 4+ only)
+    - paging (linux only)
+    - parked (linux only)
+    - total_threads (linux only)
+
+## Process State Mappings
+
+Different operating systems use slightly different state codes for their
+processes. These state codes are documented in `man ps`. The following table
+shows how major OS state codes map to Telegraf metrics:
+
+```text
+Linux  FreeBSD  Darwin  meaning
+  R       R       R     running
+  S       S       S     sleeping
+  Z       Z       Z     zombie
+  X      none    none   dead
+  T       T       T     stopped
+  I       I       I     idle (sleeping for longer than about 20 seconds)
+  D      D,L      U     blocked (waiting in uninterruptible sleep, or locked)
+  W       W      none   paging (linux kernel < 2.6 only), wait (freebsd)
+```
+
+## Example Output
+
+```text
+processes blocked=8i,running=1i,sleeping=265i,stopped=0i,total=274i,zombie=0i,dead=0i,paging=0i,total_threads=687i 1457478636980905042
+```
diff --git a/content/telegraf/v1/input-plugins/procstat/_index.md b/content/telegraf/v1/input-plugins/procstat/_index.md
new file mode 100644
index 000000000..d709db4ba
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/procstat/_index.md
@@ -0,0 +1,308 @@
+---
+description: "Telegraf plugin for collecting metrics from Procstat"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Procstat
+    identifier: input-procstat
+tags: [Procstat, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Procstat Input Plugin
+
+The procstat plugin can be used to monitor the system resource usage of one or
+more processes. The `procstat_lookup` metric displays the query information,
+specifically the number of PIDs returned by a search.
+
+Processes can be selected for monitoring using one of several methods:
+
+- pidfile
+- exe
+- pattern
+- user
+- systemd_unit
+- cgroup
+- supervisor_unit
+- win_service
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Monitor process cpu and memory usage
+[[inputs.procstat]]
+  ## PID file to monitor process
+  pid_file = "/var/run/nginx.pid"
+  ## executable name (ie, pgrep <exe>)
+  # exe = "nginx"
+  ## pattern as argument for pgrep (ie, pgrep -f <pattern>)
+  # pattern = "nginx"
+  ## user as argument for pgrep (ie, pgrep -u <user>)
+  # user = "nginx"
+  ## Systemd unit name, supports globs when include_systemd_children is set to true
+  # systemd_unit = "nginx.service"
+  # include_systemd_children = false
+  ## CGroup name or path, supports globs
+  # cgroup = "systemd/system.slice/nginx.service"
+  ## Supervisor service names managed by supervisorctl
+  # supervisor_units = ["webserver", "proxy"]
+
+  ## Windows service name
+  # win_service = ""
+
+  ## override for process_name
+  ## This is optional; default is sourced from /proc/<pid>/status
+  # process_name = "bar"
+
+  ## Field name prefix
+  # prefix = ""
+
+  ## Mode to use when calculating CPU usage. Can be one of 'solaris' or 'irix'.
+  # mode = "irix"
+
+  ## Add the given information tag instead of a field
+  ## This allows to create unique metrics/series when collecting processes with
+  ## otherwise identical tags. However, please be careful as this can easily
+  ## result in a large number of series, especially with short-lived processes,
+  ## creating high cardinality at the output.
+  ## Available options are:
+  ##   cmdline   -- full commandline
+  ##   pid       -- ID of the process
+  ##   ppid      -- ID of the process' parent
+  ##   status    -- state of the process
+  ##   user      -- username owning the process
+  ## socket only options:
+  ##   protocol  -- protocol type of the process socket
+  ##   state     -- state of the process socket
+  ##   src       -- source address of the process socket (non-unix sockets)
+  ##   src_port  -- source port of the process socket (non-unix sockets)
+  ##   dest      -- destination address of the process socket (non-unix sockets)
+  ##   dest_port -- destination port of the process socket (non-unix sockets)
+  ##   name      -- name of the process socket (unix sockets only)
+  # tag_with = []
+
+  ## Properties to collect
+  ## Available options are
+  ##   cpu     -- CPU usage statistics
+  ##   limits  -- set resource limits
+  ##   memory  -- memory usage statistics
+  ##   mmap    -- mapped memory usage statistics (caution: can cause high load)
+  ##   sockets -- socket statistics for protocols in 'socket_protocols'
+  # properties = ["cpu", "limits", "memory", "mmap"]
+
+  ## Protocol filter for the sockets property
+  ## Available options are
+  ##   all  -- all of the protocols below
+  ##   tcp4 -- TCP socket statistics for IPv4
+  ##   tcp6 -- TCP socket statistics for IPv6
+  ##   udp4 -- UDP socket statistics for IPv4
+  ##   udp6 -- UDP socket statistics for IPv6
+  ##   unix -- Unix socket statistics
+  # socket_protocols = ["all"]
+
+  ## Method to use when finding process IDs.  Can be one of 'pgrep', or
+  ## 'native'.  The pgrep finder calls the pgrep executable in the PATH while
+  ## the native finder performs the search directly in a manner dependent on the
+  ## platform.  Default is 'pgrep'
+  # pid_finder = "pgrep"
+
+  ## New-style filtering configuration (multiple filter sections are allowed)
+  # [[inputs.procstat.filter]]
+  #    ## Name of the filter added as 'filter' tag
+  #    name = "shell"
+  #
+  #    ## Service filters, only one is allowed
+  #    ## Systemd unit names (wildcards are supported)
+  #    # systemd_units = []
+  #    ## CGroup name or path (wildcards are supported)
+  #    # cgroups = []
+  #    ## Supervisor service names managed by supervisorctl
+  #    # supervisor_units = []
+  #    ## Windows service names
+  #    # win_service = []
+  #
+  #    ## Process filters, multiple are allowed
+  #    ## Regular expressions to use for matching against the full command
+  #    # patterns = ['.*']
+  #    ## List of users owning the process (wildcards are supported)
+  #    # users = ['*']
+  #    ## List of executable paths of the process (wildcards are supported)
+  #    # executables = ['*']
+  #    ## List of process names (wildcards are supported)
+  #    # process_names = ['*']
+  #    ## Recursion depth for determining children of the matched processes
+  #    ## A negative value means all children with infinite depth
+  #    # recursion_depth = 0
+```
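+
+For example, a minimal configuration that monitors a single process by
+executable name might look like the following sketch (the `nginx` name is
+purely illustrative):
+
+```toml
+[[inputs.procstat]]
+  ## Hypothetical example: match processes whose executable is "nginx"
+  exe = "nginx"
+  ## Collect only CPU and memory statistics
+  properties = ["cpu", "memory"]
+```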
+
+### Windows support
+
+Preliminary support for Windows has been added; however, you may prefer using
+the `win_perf_counters` input plugin as a more mature alternative.
+
+### Darwin specifics
+
+If you use this plugin with `supervisor_units` *and* `pattern` on Darwin, you
+**have to** use the `pgrep` finder as the underlying library relies on `pgrep`.
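+
+A sketch of such a configuration (the unit and pattern values are illustrative):
+
+```toml
+[[inputs.procstat]]
+  supervisor_units = ["webserver"]
+  pattern = "nginx"
+  ## On Darwin this combination requires the pgrep finder
+  pid_finder = "pgrep"
+```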
+
+### Permissions
+
+Some files or directories may require elevated permissions. As such, a user may
+need to run Telegraf with higher levels of permissions to access these files and
+produce metrics.
+
+## Metrics
+
+For descriptions of these tags and fields, consider reading one of the
+following:
+
+- [Linux Kernel /proc Filesystem](https://www.kernel.org/doc/html/latest/filesystems/proc.html)
+- [proc manpage](https://man7.org/linux/man-pages/man5/proc.5.html)
+
+
+Below is an example set of tags and fields:
+
+- procstat
+  - tags:
+    - pid (if requested)
+    - cmdline (if requested)
+    - process_name
+    - pidfile (when defined)
+    - exe (when defined)
+    - pattern (when defined)
+    - user (when selected)
+    - systemd_unit (when defined)
+    - cgroup (when defined)
+    - cgroup_full (when cgroup or systemd_unit is used with glob)
+    - supervisor_unit (when defined)
+    - win_service (when defined)
+  - fields:
+    - child_major_faults (int)
+    - child_minor_faults (int)
+    - created_at (int) [epoch in nanoseconds]
+    - cpu_time (int)
+    - cpu_time_iowait (float) (zero for all OSes except Linux)
+    - cpu_time_system (float)
+    - cpu_time_user (float)
+    - cpu_usage (float)
+    - disk_read_bytes (int, Linux only, *telegraf* may need to be run as **root**)
+    - disk_write_bytes (int, Linux only, *telegraf* may need to be run as **root**)
+    - involuntary_context_switches (int)
+    - major_faults (int)
+    - memory_anonymous (int)
+    - memory_private_clean (int)
+    - memory_private_dirty (int)
+    - memory_pss (int)
+    - memory_referenced (int)
+    - memory_rss (int)
+    - memory_shared_clean (int)
+    - memory_shared_dirty (int)
+    - memory_size (int)
+    - memory_swap (int)
+    - memory_usage (float)
+    - memory_vms (int)
+    - minor_faults (int)
+    - nice_priority (int)
+    - num_fds (int, *telegraf* may need to be run as **root**)
+    - num_threads (int)
+    - pid (int)
+    - ppid (int)
+    - status (string)
+    - read_bytes (int, *telegraf* may need to be run as **root**)
+    - read_count (int, *telegraf* may need to be run as **root**)
+    - realtime_priority (int)
+    - rlimit_cpu_time_hard (int)
+    - rlimit_cpu_time_soft (int)
+    - rlimit_file_locks_hard (int)
+    - rlimit_file_locks_soft (int)
+    - rlimit_memory_data_hard (int)
+    - rlimit_memory_data_soft (int)
+    - rlimit_memory_locked_hard (int)
+    - rlimit_memory_locked_soft (int)
+    - rlimit_memory_rss_hard (int)
+    - rlimit_memory_rss_soft (int)
+    - rlimit_memory_stack_hard (int)
+    - rlimit_memory_stack_soft (int)
+    - rlimit_memory_vms_hard (int)
+    - rlimit_memory_vms_soft (int)
+    - rlimit_nice_priority_hard (int)
+    - rlimit_nice_priority_soft (int)
+    - rlimit_num_fds_hard (int)
+    - rlimit_num_fds_soft (int)
+    - rlimit_realtime_priority_hard (int)
+    - rlimit_realtime_priority_soft (int)
+    - rlimit_signals_pending_hard (int)
+    - rlimit_signals_pending_soft (int)
+    - signals_pending (int)
+    - voluntary_context_switches (int)
+    - write_bytes (int, *telegraf* may need to be run as **root**)
+    - write_count (int, *telegraf* may need to be run as **root**)
+- procstat_lookup
+  - tags:
+    - exe
+    - pid_finder
+    - pid_file
+    - pattern
+    - prefix
+    - user
+    - systemd_unit
+    - cgroup
+    - supervisor_unit
+    - win_service
+    - result
+  - fields:
+    - pid_count (int)
+    - running (int)
+    - result_code (int, success = 0, lookup_error = 1)
+- procstat_socket (if configured, Linux only)
+  - tags:
+    - pid (if requested)
+    - protocol (if requested)
+    - cmdline (if requested)
+    - process_name
+    - pidfile (when defined)
+    - exe (when defined)
+    - pattern (when defined)
+    - user (when selected)
+    - systemd_unit (when defined)
+    - cgroup (when defined)
+    - cgroup_full (when cgroup or systemd_unit is used with glob)
+    - supervisor_unit (when defined)
+    - win_service (when defined)
+  - fields:
+    - protocol
+    - state
+    - pid
+    - src
+    - src_port (tcp and udp sockets only)
+    - dest (tcp and udp sockets only)
+    - dest_port (tcp and udp sockets only)
+    - bytes_received (tcp sockets only)
+    - bytes_sent (tcp sockets only)
+    - lost (tcp sockets only)
+    - retransmits (tcp sockets only)
+    - rx_queue
+    - tx_queue
+    - inode (unix sockets only)
+
+*NOTE: Resource limits greater than 2147483647 will be reported as 2147483647.*
+
+## Example Output
+
+```text
+procstat_lookup,host=prash-laptop,pattern=influxd,pid_finder=pgrep,result=success pid_count=1i,running=1i,result_code=0i 1582089700000000000
+procstat,host=prash-laptop,pattern=influxd,process_name=influxd,user=root involuntary_context_switches=151496i,child_minor_faults=1061i,child_major_faults=8i,cpu_time_user=2564.81,pid=32025i,major_faults=8609i,created_at=1580107536000000000i,voluntary_context_switches=1058996i,cpu_time_system=616.98,memory_swap=0i,memory_locked=0i,memory_usage=1.7797634601593018,num_threads=18i,cpu_time_iowait=0,memory_rss=148643840i,memory_vms=1435688960i,memory_data=0i,memory_stack=0i,minor_faults=1856550i 1582089700000000000
+procstat_socket,host=prash-laptop,process_name=browser,protocol=tcp4 bytes_received=826987i,bytes_sent=32869i,dest="192.168.0.2",dest_port=443i,lost=0i,pid=32025i,retransmits=0i,rx_queue=0i,src="192.168.0.1",src_port=52106i,state="established",tx_queue=0i 1582089700000000000
+```
diff --git a/content/telegraf/v1/input-plugins/prometheus/_index.md b/content/telegraf/v1/input-plugins/prometheus/_index.md
new file mode 100644
index 000000000..3b36f739e
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/prometheus/_index.md
@@ -0,0 +1,471 @@
+---
+description: "Telegraf plugin for collecting metrics from Prometheus"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Prometheus
+    identifier: input-prometheus
+tags: [Prometheus, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Prometheus Input Plugin
+
+The prometheus input plugin gathers metrics from HTTP servers exposing metrics
+in Prometheus format.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from one or many prometheus clients
+[[inputs.prometheus]]
+  ## An array of urls to scrape metrics from.
+  urls = ["http://localhost:9100/metrics"]
+
+  ## Metric version controls the mapping from Prometheus metrics into Telegraf metrics.
+  ## See "Metric Format Configuration" in plugins/inputs/prometheus/README.md for details.
+  ## Valid options: 1, 2
+  # metric_version = 1
+
+  ## URL tag name (tag containing the scraped URL; optional, default is "url")
+  # url_tag = "url"
+
+  ## Whether the timestamp of the scraped metrics will be ignored.
+  ## If set to true, the gather time will be used.
+  # ignore_timestamp = false
+
+  ## Override content-type of the returned message
+  ## Available options are for prometheus:
+  ##   text, protobuf-delimiter, protobuf-compact, protobuf-text,
+  ## and for openmetrics:
+  ##   openmetrics-text, openmetrics-protobuf
+  ## By default the content-type of the response is used.
+  # content_type_override = ""
+
+  ## An array of Kubernetes services to scrape metrics from.
+  # kubernetes_services = ["http://my-service-dns.my-namespace:9100/metrics"]
+
+  ## Kubernetes config file to create client from.
+  # kube_config = "/path/to/kubernetes.config"
+
+  ## Scrape Pods
+  ## Enable scraping of k8s pods. Further settings as to which pods to scrape
+  ## are determined by the 'method' option below. When enabled, the default is
+  ## to use annotations to determine whether to scrape or not.
+  # monitor_kubernetes_pods = false
+
+  ## Scrape Pods Method
+  ## annotations: default, looks for specific pod annotations documented below
+  ## settings: only look for pods matching the settings provided, not
+  ##   annotations
+  ## settings+annotations: looks at pods that match annotations using the user
+  ##   defined settings
+  # monitor_kubernetes_pods_method = "annotations"
+
+  ## Scrape Pods 'annotations' method options
+  ## If the method is set to 'annotations' or 'settings+annotations', these
+  ## annotation flags are looked for:
+  ## - prometheus.io/scrape: Required to enable scraping for this pod. Can also
+  ##     use 'prometheus.io/scrape=false' annotation to opt-out entirely.
+  ## - prometheus.io/scheme: If the metrics endpoint is secured then you will
+  ##     need to set this to 'https' & most likely set the tls config
+  ## - prometheus.io/path: If the metrics path is not /metrics, define it with
+  ##     this annotation
+  ## - prometheus.io/port: If port is not 9102 use this annotation
+
+  ## Scrape Pods 'settings' method options
+  ## When using 'settings' or 'settings+annotations', the default values for
+  ## annotations can be modified with the following options:
+  # monitor_kubernetes_pods_scheme = "http"
+  # monitor_kubernetes_pods_port = "9102"
+  # monitor_kubernetes_pods_path = "/metrics"
+
+  ## Get the list of pods to scrape with either the scope of
+  ## - cluster: the kubernetes watch api (default, no need to specify)
+  ## - node: the local cadvisor api; for scalability. Note that the config node_ip or the environment variable NODE_IP must be set to the host IP.
+  # pod_scrape_scope = "cluster"
+
+  ## Only for node scrape scope: node IP of the node that telegraf is running on.
+  ## Either this config or the environment variable NODE_IP must be set.
+  # node_ip = "10.180.1.1"
+
+  ## Only for node scrape scope: interval in seconds for how often to get updated pod list for scraping.
+  ## Default is 60 seconds.
+  # pod_scrape_interval = 60
+
+  ## Content length limit
+  ## When set, telegraf will drop responses with length larger than the configured value.
+  ## Default is "0KB" which means unlimited.
+  # content_length_limit = "0KB"
+
+  ## Restricts Kubernetes monitoring to a single namespace
+  ##   ex: monitor_kubernetes_pods_namespace = "default"
+  # monitor_kubernetes_pods_namespace = ""
+  ## The name of the label for the pod that is being scraped.
+  ## Default is 'namespace' but this can conflict with metrics that have the label 'namespace'
+  # pod_namespace_label_name = "namespace"
+  # label selector to target pods which have the label
+  # kubernetes_label_selector = "env=dev,app=nginx"
+  # field selector to target pods
+  # eg. To scrape pods on a specific node
+  # kubernetes_field_selector = "spec.nodeName=$HOSTNAME"
+
+  ## Filter which pod annotations and labels will be added to metric tags
+  #
+  # pod_annotation_include = ["annotation-key-1"]
+  # pod_annotation_exclude = ["exclude-me"]
+  # pod_label_include = ["label-key-1"]
+  # pod_label_exclude = ["exclude-me"]
+
+  # cache refresh interval to set the interval for re-sync of pods list.
+  # Default is 60 minutes.
+  # cache_refresh_interval = 60
+
+  ## Scrape Services available in Consul Catalog
+  # [inputs.prometheus.consul]
+  #   enabled = true
+  #   agent = "http://localhost:8500"
+  #   query_interval = "5m"
+
+  #   [[inputs.prometheus.consul.query]]
+  #     name = "a service name"
+  #     tag = "a service tag"
+  #     url = 'http://{{if ne .ServiceAddress ""}}{{.ServiceAddress}}{{else}}{{.Address}}{{end}}:{{.ServicePort}}/{{with .ServiceMeta.metrics_path}}{{.}}{{else}}metrics{{end}}'
+  #     [inputs.prometheus.consul.query.tags]
+  #       host = "{{.Node}}"
+
+  ## Use bearer token for authorization. ('bearer_token' takes priority)
+  # bearer_token = "/path/to/bearer/token"
+  ## OR
+  # bearer_token_string = "abc_123"
+
+  ## HTTP Basic Authentication username and password. ('bearer_token' and
+  ## 'bearer_token_string' take priority)
+  # username = ""
+  # password = ""
+
+  ## Optional custom HTTP headers
+  # http_headers = {"X-Special-Header" = "Special-Value"}
+
+  ## Specify timeout duration for slower prometheus clients (default is 5s)
+  # timeout = "5s"
+
+  ## This option is now used by the HTTP client to set the header response
+  ## timeout, not the overall HTTP timeout.
+  # response_timeout = "5s"
+
+  ## HTTP Proxy support
+  # use_system_proxy = false
+  # http_proxy_url = ""
+
+  ## Optional TLS Config
+  # tls_ca = /path/to/cafile
+  # tls_cert = /path/to/certfile
+  # tls_key = /path/to/keyfile
+
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## Use the given name as the SNI server name on each URL
+  # tls_server_name = "myhost.example.org"
+
+  ## TLS renegotiation method, choose from "never", "once", "freely"
+  # tls_renegotiation_method = "never"
+
+  ## Enable/disable TLS
+  ## Set to true/false to enforce TLS being enabled/disabled. If not set,
+  ## enable TLS only if any of the other options are specified.
+  # tls_enable = true
+
+  ## This option allows you to report the status of prometheus requests.
+  # enable_request_metrics = false
+
+  ## Control pod scraping based on pod namespace annotations
+  ## Pass and drop here act like tagpass and tagdrop, but instead
+  ## of filtering metrics they filter pod candidates for scraping
+  #[inputs.prometheus.namespace_annotation_pass]
+  # annotation_key = ["value1", "value2"]
+  #[inputs.prometheus.namespace_annotation_drop]
+  # some_annotation_key = ["dont-scrape"]
+```
+
+`urls` can contain a unix socket as well. If a different path is required for a
+unix socket (the default is `/metrics` for both http[s] and unix), add `path`
+as a query parameter as follows:
+`unix:///var/run/prometheus.sock?path=/custom/metrics`
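+
+For example, a configuration that scrapes both an HTTP endpoint and a unix
+socket with a custom path might look like this sketch (the socket path is
+illustrative):
+
+```toml
+[[inputs.prometheus]]
+  urls = [
+    "http://localhost:9100/metrics",
+    "unix:///var/run/prometheus.sock?path=/custom/metrics"
+  ]
+```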
+
+### Metric Format Configuration
+
+The `metric_version` setting controls how telegraf translates prometheus format
+metrics to telegraf metrics. There are two options.
+
+With `metric_version = 1`, the prometheus metric name becomes the telegraf
+metric name. Prometheus labels become telegraf tags. Prometheus values become
+telegraf field values. The fields have generic keys based on the type of the
+prometheus metric. This option produces metrics that are dense (not
+sparse). Denseness is a useful property for some outputs, including those that
+are more efficient with row-oriented data.
+
+`metric_version = 2` differs in a few ways. The prometheus metric name becomes a
+telegraf field key. Metrics hold more than one value and the field keys aren't
+generic. The resulting metrics are sparse, but for some outputs they may be
+easier to process or query, including those that are more efficient with
+column-oriented data. The telegraf metric name is the same for all metrics in
+the input instance. It can be set with the `name_override` setting and defaults
+to "prometheus". To have multiple metric names, you can use multiple instances
+of the plugin, each with its own `name_override`.
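+
+As a sketch, two instances with distinct metric names might be configured like
+this (the names and URLs are illustrative):
+
+```toml
+[[inputs.prometheus]]
+  urls = ["http://localhost:9100/metrics"]
+  metric_version = 2
+  name_override = "node_metrics"
+
+[[inputs.prometheus]]
+  urls = ["http://localhost:9273/metrics"]
+  metric_version = 2
+  name_override = "telegraf_metrics"
+```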
+
+`metric_version = 2` uses the same histogram format as the histogram
+aggregator plugin.
+
+The Example Output sections below show examples for both options.
+
+When using this plugin along with the prometheus_client output, use the same
+option in both to ensure metrics are round-tripped without modification.
+
+### Kubernetes Service Discovery
+
+URLs listed in the `kubernetes_services` parameter will be expanded by looking
+up all A records assigned to the hostname, as described in
+[Kubernetes DNS service discovery](https://kubernetes.io/docs/concepts/services-networking/service/#dns).
+
+This method can be used to locate all [Kubernetes headless services](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services).
+
+### Kubernetes scraping
+
+Enabling this option allows the plugin to scrape for Prometheus annotations on
+Kubernetes pods. The plugin can run inside your Kubernetes cluster, or it can
+use a kubeconfig file to determine where to monitor. Currently the following
+annotations are supported:
+
+* `prometheus.io/scrape` Enable scraping for this pod.
+* `prometheus.io/scheme` If the metrics endpoint is secured then you will need to set this to `https` & most likely set the tls config. (default 'http')
+* `prometheus.io/path` Override the path for the metrics endpoint on the service. (default '/metrics')
+* `prometheus.io/port` Used to override the port. (default 9102)
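+
+As an illustration, a pod exposing metrics on a non-default port and path might
+carry annotations like these (the pod name and values are hypothetical):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: my-app
+  annotations:
+    prometheus.io/scrape: "true"
+    prometheus.io/scheme: "http"
+    prometheus.io/path: "/custom/metrics"
+    prometheus.io/port: "8080"
+```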
+
+Using the `monitor_kubernetes_pods_namespace` option allows you to limit which
+pods you are scraping.
+
+The setting `pod_namespace_label_name` allows you to change the label name for
+the namespace of the pod you are scraping. The default is `namespace`, but this
+will overwrite a `namespace` label already present on a scraped metric.
+
+Using `pod_scrape_scope = "node"` allows more scalable scraping, as Telegraf
+will scrape only the pods on the node it is running on, fetching the pod list
+locally from the node's kubelet. This requires running Telegraf on every node
+of the cluster. Note that either `node_ip` must be specified in the config or
+the environment variable `NODE_IP` must be set to the host IP. The latter can
+be done in the YAML of the pod running Telegraf:
+
+```yaml
+env:
+  - name: NODE_IP
+    valueFrom:
+      fieldRef:
+        fieldPath: status.hostIP
+```
+
+If using node level scrape scope, `pod_scrape_interval` specifies how often (in
+seconds) the pod list for scraping should be updated. If not specified, the
+is 60 seconds.
+
+The pod running telegraf will need to have the proper rbac configuration in
+order to be allowed to call the k8s api to discover and watch pods in the
+cluster.  A typical configuration will create a service account, a cluster role
+with the appropriate rules and a cluster role binding to tie the cluster role to
+the service account.  Example of configuration for cluster level discovery:
+
+```yaml
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: telegraf-k8s-role-{{.Release.Name}}
+rules:
+- apiGroups: [""]
+  resources:
+  - nodes
+  - nodes/proxy
+  - services
+  - endpoints
+  - pods
+  verbs: ["get", "list", "watch"]
+---
+# Rolebinding for namespace to cluster-admin
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: telegraf-k8s-role-{{.Release.Name}}
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: telegraf-k8s-role-{{.Release.Name}}
+subjects:
+- kind: ServiceAccount
+  name: telegraf-k8s-{{ .Release.Name }}
+  namespace: {{ .Release.Namespace }}
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: telegraf-k8s-{{ .Release.Name }}
+```
+
+### Consul Service Discovery
+
+Enabling this option and configuring consul `agent` url will allow the plugin to
+query consul catalog for available services. Using `query_interval` the plugin
+will periodically query the consul catalog for services with `name` and `tag`
+and refresh the list of scraped urls.  It can use the information from the
+catalog to build the scraped url and additional tags from a template.
+
+Multiple consul queries can be configured, each for a different service.
+The following example fields can be used in url or tag templates:
+
+* Node
+* Address
+* NodeMeta
+* ServicePort
+* ServiceAddress
+* ServiceTags
+* ServiceMeta
+
+For a full list of available fields and their types, see the `CatalogService` struct in
+<https://github.com/hashicorp/consul/blob/master/api/catalog.go>
+
+### Bearer Token
+
+If set, the file specified by the `bearer_token` parameter will be read on
+each interval and its contents will be appended to the Bearer string in the
+Authorization header.
+
+## Usage for Caddy HTTP server
+
+Steps to monitor Caddy with Telegraf's Prometheus input plugin:
+
+* Download [Caddy](https://caddyserver.com/download)
+* Download Prometheus and set up [monitoring Caddy with Prometheus metrics](https://caddyserver.com/docs/metrics#monitoring-caddy-with-prometheus-metrics)
+* Restart Caddy
+* Configure Telegraf to fetch metrics on it:
+
+```toml
+[[inputs.prometheus]]
+  ## An array of urls to scrape metrics from.
+  urls = ["http://localhost:2019/metrics"]
+```
+
+> This is the default URL where Caddy will send data.
+> For more details, please read the [Caddy Prometheus documentation](https://github.com/miekg/caddy-prometheus/blob/master/README.md).
+
+## Metrics
+
+Measurement names are based on the metric family, and tags are created for each
+label. The value is added to a field named based on the metric type.
+
+All metrics receive the `url` tag indicating the related URL specified in the
+Telegraf configuration. If using Kubernetes service discovery, the `address`
+tag is also added, indicating the discovered IP address.
+
+* prometheus_request
+  * tags:
+    * url
+    * address
+  * fields:
+    * response_time (float, seconds)
+    * content_length (int, response body length)
+
+## Example Output
+
+### Source
+
+```text
+# HELP go_gc_duration_seconds A summary of the GC invocation durations.
+# TYPE go_gc_duration_seconds summary
+go_gc_duration_seconds{quantile="0"} 7.4545e-05
+go_gc_duration_seconds{quantile="0.25"} 7.6999e-05
+go_gc_duration_seconds{quantile="0.5"} 0.000277935
+go_gc_duration_seconds{quantile="0.75"} 0.000706591
+go_gc_duration_seconds{quantile="1"} 0.000706591
+go_gc_duration_seconds_sum 0.00113607
+go_gc_duration_seconds_count 4
+# HELP go_goroutines Number of goroutines that currently exist.
+# TYPE go_goroutines gauge
+go_goroutines 15
+# HELP cpu_usage_user Telegraf collected metric
+# TYPE cpu_usage_user gauge
+cpu_usage_user{cpu="cpu0"} 1.4112903225816156
+cpu_usage_user{cpu="cpu1"} 0.702106318955865
+cpu_usage_user{cpu="cpu2"} 2.0161290322588776
+cpu_usage_user{cpu="cpu3"} 1.5045135406226022
+```
+
+### Output
+
+```text
+go_gc_duration_seconds,url=http://example.org:9273/metrics 1=0.001336611,count=14,sum=0.004527551,0=0.000057965,0.25=0.000083812,0.5=0.000286537,0.75=0.000365303 1505776733000000000
+go_goroutines,url=http://example.org:9273/metrics gauge=21 1505776695000000000
+cpu_usage_user,cpu=cpu0,url=http://example.org:9273/metrics gauge=1.513622603430151 1505776751000000000
+cpu_usage_user,cpu=cpu1,url=http://example.org:9273/metrics gauge=5.829145728641773 1505776751000000000
+cpu_usage_user,cpu=cpu2,url=http://example.org:9273/metrics gauge=2.119071644805144 1505776751000000000
+cpu_usage_user,cpu=cpu3,url=http://example.org:9273/metrics gauge=1.5228426395944945 1505776751000000000
+prometheus_request,result=success,url=http://example.org:9273/metrics content_length=179013i,http_response_code=200i,response_time=0.051521601 1505776751000000000
+```
+
+### Output (when metric_version = 2)
+
+```text
+prometheus,quantile=1,url=http://example.org:9273/metrics go_gc_duration_seconds=0.005574303 1556075100000000000
+prometheus,quantile=0.75,url=http://example.org:9273/metrics go_gc_duration_seconds=0.0001046 1556075100000000000
+prometheus,quantile=0.5,url=http://example.org:9273/metrics go_gc_duration_seconds=0.0000719 1556075100000000000
+prometheus,quantile=0.25,url=http://example.org:9273/metrics go_gc_duration_seconds=0.0000579 1556075100000000000
+prometheus,quantile=0,url=http://example.org:9273/metrics go_gc_duration_seconds=0.0000349 1556075100000000000
+prometheus,url=http://example.org:9273/metrics go_gc_duration_seconds_count=324,go_gc_duration_seconds_sum=0.091340353 1556075100000000000
+prometheus,url=http://example.org:9273/metrics go_goroutines=15 1556075100000000000
+prometheus,cpu=cpu0,url=http://example.org:9273/metrics cpu_usage_user=1.513622603430151 1505776751000000000
+prometheus,cpu=cpu1,url=http://example.org:9273/metrics cpu_usage_user=5.829145728641773 1505776751000000000
+prometheus,cpu=cpu2,url=http://example.org:9273/metrics cpu_usage_user=2.119071644805144 1505776751000000000
+prometheus,cpu=cpu3,url=http://example.org:9273/metrics cpu_usage_user=1.5228426395944945 1505776751000000000
+prometheus_request,result=success,url=http://example.org:9273/metrics content_length=179013i,http_response_code=200i,response_time=0.051521601 1505776751000000000
+```
+
+### Output with timestamp included
+
+Below is an example of a Prometheus metric which includes a timestamp:
+
+```text
+# TYPE test_counter counter
+test_counter{label="test"} 1 1685443805885
+```
+
+Telegraf will generate the following metric:
+
+```text
+test_counter,address=127.0.0.1,label=test counter=1 1685443805885000000
+```
+
+when scraped using the standard configuration:
+
+```toml
+[[inputs.prometheus]]
+  ## An array of urls to scrape metrics from.
+  urls = ["http://localhost:2019/metrics"]
+```
+
+**Please note:** Prometheus endpoints emit timestamps with *millisecond
+precision*, while the default Telegraf agent-level precision setting reduces
+this to seconds. Change the `precision` setting at the agent or plugin level
+to milliseconds or smaller to report metric timestamps at full precision.
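+
+For example, a minimal agent-level setting (a sketch; merge this into your
+existing `[agent]` section) that preserves the millisecond timestamps:
+
+```toml
+[agent]
+  ## Report timestamps at millisecond precision so Prometheus-supplied
+  ## timestamps are not truncated to seconds.
+  precision = "1ms"
+```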
diff --git a/content/telegraf/v1/input-plugins/proxmox/_index.md b/content/telegraf/v1/input-plugins/proxmox/_index.md
new file mode 100644
index 000000000..8689419eb
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/proxmox/_index.md
@@ -0,0 +1,108 @@
+---
+description: "Telegraf plugin for collecting metrics from Proxmox"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Proxmox
+    identifier: input-proxmox
+tags: [Proxmox, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Proxmox Input Plugin
+
+The proxmox plugin gathers metrics about containers and VMs using the Proxmox
+API.
+
+Telegraf minimum version: Telegraf 1.16.0
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Provides metrics from Proxmox nodes (Proxmox Virtual Environment > 6.2).
+[[inputs.proxmox]]
+  ## API connection configuration. The API token was introduced in Proxmox v6.2. Required permissions for user and token: PVEAuditor role on /.
+  base_url = "https://localhost:8006/api2/json"
+  api_token = "USER@REALM!TOKENID=UUID"
+
+  ## Node name, defaults to OS hostname
+  ## Unless Telegraf is on the same host as Proxmox, setting this is required
+  ## for Telegraf to successfully connect to Proxmox. If not on the same host,
+  ## leaving this empty will often lead to a "search domain is not set" error.
+  # node_name = ""
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  insecure_skip_verify = false
+
+  # HTTP response timeout (default: 5s)
+  response_timeout = "5s"
+```
+
+### Permissions
+
+The plugin needs access to the Proxmox API. In Proxmox, API tokens hold a
+subset of the permissions of the corresponding user, so an API token cannot
+execute commands that its user cannot.
+
+For Telegraf, an API token and user must be provided with at least the
+PVEAuditor role on /. Below is an example of creating a telegraf user and token
+and then ensuring the user and token have the correct role:
+
+```sh
+## Create an influx user with the PVEAuditor role
+pveum user add influx@pve
+pveum acl modify / -role PVEAuditor -user influx@pve
+## Create a token with the PVEAuditor role
+pveum user token add influx@pve monitoring -privsep 1
+pveum acl modify / -role PVEAuditor -token 'influx@pve!monitoring'
+```
+
+See this [Proxmox docs example](https://pve.proxmox.com/wiki/User_Management#_limited_api_token_for_monitoring) for further details.
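+
+The token created above can then be wired into the plugin configuration; a
+sketch, where the host name and the UUID (printed by `pveum user token add`)
+are placeholders for your own values:
+
+```toml
+[[inputs.proxmox]]
+  base_url = "https://pve.example.com:8006/api2/json"
+  api_token = "influx@pve!monitoring=00000000-0000-0000-0000-000000000000"
+```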
+
+## Metrics
+
+- proxmox
+  - status
+  - uptime
+  - cpuload
+  - mem_used
+  - mem_total
+  - mem_free
+  - mem_used_percentage
+  - swap_used
+  - swap_total
+  - swap_free
+  - swap_used_percentage
+  - disk_used
+  - disk_total
+  - disk_free
+  - disk_used_percentage
+
+### Tags
+
+- node_fqdn - FQDN of the node telegraf is running on
+- vm_name - Name of the VM/container
+- vm_fqdn - FQDN of the VM/container
+- vm_type - Type of the VM/container (lxc, qemu)
+
+## Example Output
+
+```text
+proxmox,host=pxnode,node_fqdn=pxnode.example.com,vm_fqdn=vm1.example.com,vm_name=vm1,vm_type=lxc cpuload=0.147998116735236,disk_free=4461129728i,disk_total=5217320960i,disk_used=756191232i,disk_used_percentage=14,mem_free=1046827008i,mem_total=1073741824i,mem_used=26914816i,mem_used_percentage=2,status="running",swap_free=536698880i,swap_total=536870912i,swap_used=172032i,swap_used_percentage=0,uptime=1643793i 1595457277000000000
+```
diff --git a/content/telegraf/v1/input-plugins/puppetagent/_index.md b/content/telegraf/v1/input-plugins/puppetagent/_index.md
new file mode 100644
index 000000000..64c61c124
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/puppetagent/_index.md
@@ -0,0 +1,180 @@
+---
+description: "Telegraf plugin for collecting metrics from PuppetAgent"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: PuppetAgent
+    identifier: input-puppetagent
+tags: [PuppetAgent, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# PuppetAgent Input Plugin
+
+The puppetagent plugin collects the variables written to the
+`last_run_summary.yaml` file, usually located in `/var/lib/puppet/state/`.
+See [PuppetAgent Runs](https://puppet.com/blog/puppet-monitoring-how-to-monitor-success-or-failure-of-puppet-runs/)
+for background on monitoring Puppet runs.
+
+```sh
+cat /var/lib/puppet/state/last_run_summary.yaml
+
+---
+  events:
+    failure: 0
+    total: 0
+    success: 0
+  resources:
+    failed: 0
+    scheduled: 0
+    changed: 0
+    skipped: 0
+    total: 109
+    failed_to_restart: 0
+    restarted: 0
+    out_of_sync: 0
+  changes:
+    total: 0
+  time:
+    user: 0.004331
+    schedule: 0.001123
+    filebucket: 0.000353
+    file: 0.441472
+    exec: 0.508123
+    anchor: 0.000555
+    yumrepo: 0.006989
+    ssh_authorized_key: 0.000764
+    service: 1.807795
+    package: 1.325788
+    total: 8.85354707064819
+    config_retrieval: 4.75567007064819
+    last_run: 1444936531
+    cron: 0.000584
+  version:
+    config: 1444936521
+    puppet: "3.7.5"
+```
+
+```sh
+jcross@pit-devops-02 ~ >sudo ./telegraf_linux_amd64 --input-filter puppetagent --config tele.conf --test
+* Plugin: puppetagent, Collection 1
+> [] puppetagent_events_failure value=0
+> [] puppetagent_events_total value=0
+> [] puppetagent_events_success value=0
+> [] puppetagent_resources_failed value=0
+> [] puppetagent_resources_scheduled value=0
+> [] puppetagent_resources_changed value=0
+> [] puppetagent_resources_skipped value=0
+> [] puppetagent_resources_total value=109
+> [] puppetagent_resources_failedtorestart value=0
+> [] puppetagent_resources_restarted value=0
+> [] puppetagent_resources_outofsync value=0
+> [] puppetagent_changes_total value=0
+> [] puppetagent_time_user value=0.00393
+> [] puppetagent_time_schedule value=0.001234
+> [] puppetagent_time_filebucket value=0.000244
+> [] puppetagent_time_file value=0.587734
+> [] puppetagent_time_exec value=0.389584
+> [] puppetagent_time_anchor value=0.000399
+> [] puppetagent_time_sshauthorizedkey value=0.000655
+> [] puppetagent_time_service value=0
+> [] puppetagent_time_package value=1.297537
+> [] puppetagent_time_total value=9.45297606225586
+> [] puppetagent_time_configretrieval value=5.89822006225586
+> [] puppetagent_time_lastrun value=1444940131
+> [] puppetagent_time_cron value=0.000646
+> [] puppetagent_version_config value=1444940121
+> [] puppetagent_version_puppet value=3.7.5
+```
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Reads last_run_summary.yaml file and converts to measurements
+[[inputs.puppetagent]]
+  ## Location of puppet last run summary file
+  location = "/var/lib/puppet/state/last_run_summary.yaml"
+```
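+
+On Puppet 4 and later (AIO packaging), the summary file usually lives under
+`/opt/puppetlabs/puppet/cache/state/`; a sketch, assuming that layout:
+
+```toml
+[[inputs.puppetagent]]
+  location = "/opt/puppetlabs/puppet/cache/state/last_run_summary.yaml"
+```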
+
+## Metrics
+
+### PuppetAgent int64 measurements
+
+Meta:
+
+- units: int64
+- tags: ``
+
+Measurement names:
+
+- puppetagent_changes_total
+- puppetagent_events_failure
+- puppetagent_events_total
+- puppetagent_events_success
+- puppetagent_resources_changed
+- puppetagent_resources_corrective_change
+- puppetagent_resources_failed
+- puppetagent_resources_failedtorestart
+- puppetagent_resources_outofsync
+- puppetagent_resources_restarted
+- puppetagent_resources_scheduled
+- puppetagent_resources_skipped
+- puppetagent_resources_total
+- puppetagent_time_service
+- puppetagent_time_lastrun
+- puppetagent_version_config
+
+### PuppetAgent float64 measurements
+
+Meta:
+
+- units: float64
+- tags: ``
+
+Measurement names:
+
+- puppetagent_time_anchor
+- puppetagent_time_catalogapplication
+- puppetagent_time_configretrieval
+- puppetagent_time_convertcatalog
+- puppetagent_time_cron
+- puppetagent_time_exec
+- puppetagent_time_factgeneration
+- puppetagent_time_file
+- puppetagent_time_filebucket
+- puppetagent_time_group
+- puppetagent_time_lastrun
+- puppetagent_time_noderetrieval
+- puppetagent_time_notify
+- puppetagent_time_package
+- puppetagent_time_pluginsync
+- puppetagent_time_schedule
+- puppetagent_time_sshauthorizedkey
+- puppetagent_time_total
+- puppetagent_time_transactionevaluation
+- puppetagent_time_user
+- puppetagent_version_config
+
+### PuppetAgent string measurements
+
+Meta:
+
+- units: string
+- tags: ``
+
+Measurement names:
+
+- puppetagent_version_puppet
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/rabbitmq/_index.md b/content/telegraf/v1/input-plugins/rabbitmq/_index.md
new file mode 100644
index 000000000..617e2f49c
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/rabbitmq/_index.md
@@ -0,0 +1,265 @@
+---
+description: "Telegraf plugin for collecting metrics from RabbitMQ"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: RabbitMQ
+    identifier: input-rabbitmq
+tags: [RabbitMQ, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# RabbitMQ Input Plugin
+
+Reads metrics from RabbitMQ servers via the [Management Plugin](https://www.rabbitmq.com/management.html).
+
+For additional details, reference the
+[RabbitMQ Management HTTP Stats](https://raw.githack.com/rabbitmq/rabbitmq-management/rabbitmq_v3_6_9/priv/www/api/index.html)
+documentation.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `username` and
+`password` option.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
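+
+For example, referencing both credentials from a secret-store (a sketch; the
+store id `mystore` and the key names are placeholders for your own setup):
+
+```toml
+[[inputs.rabbitmq]]
+  url = "http://localhost:15672"
+  username = "@{mystore:rabbitmq_username}"
+  password = "@{mystore:rabbitmq_password}"
+```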
+
+[SECRETSTORE]: ../../../docs/CONFIGURATION.md#secret-store-secrets
+
+## Configuration
+
+```toml @sample.conf
+# Reads metrics from RabbitMQ servers via the Management Plugin
+[[inputs.rabbitmq]]
+  ## Management Plugin url. (default: http://localhost:15672)
+  # url = "http://localhost:15672"
+
+  ## Credentials
+  # username = "guest"
+  # password = "guest"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## Optional request timeouts
+  ##
+  ## ResponseHeaderTimeout, if non-zero, specifies the amount of time to wait
+  ## for a server's response headers after fully writing the request.
+  # header_timeout = "3s"
+  ##
+  ## client_timeout specifies a time limit for requests made by this client.
+  ## Includes connection time, any redirects, and reading the response body.
+  # client_timeout = "4s"
+
+  ## A list of nodes to gather as the rabbitmq_node measurement. If not
+  ## specified, metrics for all nodes are gathered.
+  # nodes = ["rabbit@node1", "rabbit@node2"]
+
+  ## A list of exchanges to gather as the rabbitmq_exchange measurement. If not
+  ## specified, metrics for all exchanges are gathered.
+  # exchanges = ["telegraf"]
+
+  ## Metrics to include and exclude. Globs accepted.
+  ## Note that an empty array for both will include all metrics
+  ## Currently the following metrics are supported: "exchange", "federation", "node", "overview", "queue"
+  # metric_include = []
+  # metric_exclude = []
+
+  ## Queues to include and exclude. Globs accepted.
+  ## Note that an empty array for both will include all queues
+  # queue_name_include = []
+  # queue_name_exclude = []
+
+  ## Federation upstreams to include and exclude specified as an array of glob
+  ## pattern strings.  Federation links can also be limited by the queue and
+  ## exchange filters.
+  # federation_upstream_include = []
+  # federation_upstream_exclude = []
+```
+
+## Metrics
+
+- rabbitmq_overview
+  - tags:
+    - url
+    - name
+  - fields:
+    - channels (int, channels)
+    - connections (int, connections)
+    - consumers (int, consumers)
+    - exchanges (int, exchanges)
+    - messages (int, messages)
+    - messages_acked (int, messages)
+    - messages_delivered (int, messages)
+    - messages_delivered_get (int, messages)
+    - messages_published (int, messages)
+    - messages_ready (int, messages)
+    - messages_unacked (int, messages)
+    - queues (int, queues)
+    - clustering_listeners (int, cluster nodes)
+    - amqp_listeners (int, amqp nodes up)
+    - return_unroutable (int, number of unroutable messages)
+    - return_unroutable_rate (float, number of unroutable messages per second)
+
+- rabbitmq_node
+  - tags:
+    - url
+    - node
+  - fields:
+    - disk_free (int, bytes)
+    - disk_free_limit (int, bytes)
+    - disk_free_alarm (int, disk alarm)
+    - fd_total (int, file descriptors)
+    - fd_used (int, file descriptors)
+    - mem_limit (int, bytes)
+    - mem_used (int, bytes)
+    - mem_alarm (int, memory alarm)
+    - proc_total (int, erlang processes)
+    - proc_used (int, erlang processes)
+    - run_queue (int, erlang processes)
+    - sockets_total (int, sockets)
+    - sockets_used (int, sockets)
+    - running (int, node up)
+    - uptime (int, milliseconds)
+    - mnesia_disk_tx_count (int, number of disk transactions)
+    - mnesia_ram_tx_count (int, number of ram transactions)
+    - mnesia_disk_tx_count_rate (float, disk transactions per second)
+    - mnesia_ram_tx_count_rate (float, ram transactions per second)
+    - gc_num (int, number of garbage collections)
+    - gc_bytes_reclaimed (int, bytes)
+    - gc_num_rate (float, garbage collections per second)
+    - gc_bytes_reclaimed_rate (float, bytes per second)
+    - io_read_avg_time (float, number of read operations)
+    - io_read_avg_time_rate (int, number of read operations per second)
+    - io_read_bytes (int, bytes)
+    - io_read_bytes_rate (float, bytes per second)
+    - io_write_avg_time (int, milliseconds)
+    - io_write_avg_time_rate (float, milliseconds per second)
+    - io_write_bytes (int, bytes)
+    - io_write_bytes_rate (float, bytes per second)
+    - mem_connection_readers (int, bytes)
+    - mem_connection_writers (int, bytes)
+    - mem_connection_channels (int, bytes)
+    - mem_connection_other (int, bytes)
+    - mem_queue_procs (int, bytes)
+    - mem_queue_slave_procs (int, bytes)
+    - mem_plugins (int, bytes)
+    - mem_other_proc (int, bytes)
+    - mem_metrics (int, bytes)
+    - mem_mgmt_db (int, bytes)
+    - mem_mnesia (int, bytes)
+    - mem_other_ets (int, bytes)
+    - mem_binary (int, bytes)
+    - mem_msg_index (int, bytes)
+    - mem_code (int, bytes)
+    - mem_atom (int, bytes)
+    - mem_other_system (int, bytes)
+    - mem_allocated_unused (int, bytes)
+    - mem_reserved_unallocated (int, bytes)
+    - mem_total (int, bytes)
+
+- rabbitmq_queue
+  - tags:
+    - url
+    - queue
+    - vhost
+    - node
+    - durable
+    - auto_delete
+  - fields:
+    - consumer_utilisation (float, percent)
+    - consumers (int, int)
+    - idle_since (string, time - e.g., "2006-01-02 15:04:05")
+    - head_message_timestamp (int, unix timestamp - only emitted if available from API)
+    - memory (int, bytes)
+    - message_bytes (int, bytes)
+    - message_bytes_persist (int, bytes)
+    - message_bytes_ram (int, bytes)
+    - message_bytes_ready (int, bytes)
+    - message_bytes_unacked (int, bytes)
+    - messages (int, count)
+    - messages_ack (int, count)
+    - messages_ack_rate (float, messages per second)
+    - messages_deliver (int, count)
+    - messages_deliver_rate (float, messages per second)
+    - messages_deliver_get (int, count)
+    - messages_deliver_get_rate (float, messages per second)
+    - messages_publish (int, count)
+    - messages_publish_rate (float, messages per second)
+    - messages_ready (int, count)
+    - messages_redeliver (int, count)
+    - messages_redeliver_rate (float, messages per second)
+    - messages_unack (int, count)
+    - slave_nodes (int, count)
+    - synchronised_slave_nodes (int, count)
+
+- rabbitmq_exchange
+  - tags:
+    - url
+    - exchange
+    - type
+    - vhost
+    - internal
+    - durable
+    - auto_delete
+  - fields:
+    - messages_publish_in (int, count)
+    - messages_publish_in_rate (int, messages per second)
+    - messages_publish_out (int, count)
+    - messages_publish_out_rate (int, messages per second)
+
+- rabbitmq_federation
+  - tags:
+    - url
+    - vhost
+    - type
+    - upstream
+    - exchange
+    - upstream_exchange
+    - queue
+    - upstream_queue
+  - fields:
+    - acks_uncommitted (int, count)
+    - consumers (int, count)
+    - messages_unacknowledged (int, count)
+    - messages_uncommitted (int, count)
+    - messages_unconfirmed (int, count)
+    - messages_confirm (int, count)
+    - messages_publish (int, count)
+    - messages_return_unroutable (int, count)
+
+## Sample Queries
+
+Message rates for the entire node can be calculated from total message
+counts. For instance, to get the rate of messages published per minute, use this
+query:
+
+```sql
+SELECT NON_NEGATIVE_DERIVATIVE(LAST("messages_published"), 1m) AS messages_published_rate FROM rabbitmq_overview WHERE time > now() - 10m GROUP BY time(1m)
+```
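+
+Per-queue rates can be derived the same way; a sketch using the
+`rabbitmq_queue` fields listed above:
+
+```sql
+SELECT NON_NEGATIVE_DERIVATIVE(LAST("messages_ack"), 1m) AS messages_acked_rate FROM rabbitmq_queue WHERE time > now() - 10m GROUP BY time(1m), "queue"
+```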
+
+## Example Output
+
+```text
+rabbitmq_queue,url=http://amqp.example.org:15672,queue=telegraf,vhost=influxdb,node=rabbit@amqp.example.org,durable=true,auto_delete=false,host=amqp.example.org head_message_timestamp=1493684017,messages_deliver_get=0i,messages_publish=329i,messages_publish_rate=0.2,messages_redeliver_rate=0,message_bytes_ready=0i,message_bytes_unacked=0i,messages_deliver=329i,messages_unack=0i,consumers=1i,idle_since="",messages=0i,messages_deliver_rate=0.2,messages_deliver_get_rate=0.2,messages_redeliver=0i,memory=43032i,message_bytes_ram=0i,messages_ack=329i,messages_ready=0i,messages_ack_rate=0.2,consumer_utilisation=1,message_bytes=0i,message_bytes_persist=0i 1493684035000000000
+rabbitmq_overview,url=http://amqp.example.org:15672,host=amqp.example.org channels=2i,consumers=1i,exchanges=17i,messages_acked=329i,messages=0i,messages_ready=0i,messages_unacked=0i,connections=2i,queues=1i,messages_delivered=329i,messages_published=329i,clustering_listeners=2i,amqp_listeners=1i 1493684035000000000
+rabbitmq_node,url=http://amqp.example.org:15672,node=rabbit@amqp.example.org,host=amqp.example.org fd_total=1024i,fd_used=32i,mem_limit=8363329126i,sockets_total=829i,disk_free=8175935488i,disk_free_limit=50000000i,mem_used=58771080i,proc_total=1048576i,proc_used=267i,run_queue=0i,sockets_used=2i,running=1i 149368403500000000
+rabbitmq_exchange,url=http://amqp.example.org:15672,exchange=telegraf,type=fanout,vhost=influxdb,internal=false,durable=true,auto_delete=false,host=amqp.example.org messages_publish_in=2i,messages_publish_out=1i 149368403500000000
+```
diff --git a/content/telegraf/v1/input-plugins/radius/_index.md b/content/telegraf/v1/input-plugins/radius/_index.md
new file mode 100644
index 000000000..884809662
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/radius/_index.md
@@ -0,0 +1,60 @@
+---
+description: "Telegraf plugin for collecting metrics from Radius"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Radius
+    identifier: input-radius
+tags: [Radius, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Radius Input Plugin
+
+The Radius plugin collects RADIUS authentication response times.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+[[inputs.radius]]
+  ## An array of Server IPs and ports to gather from. If none specified, defaults to localhost.
+  servers = ["127.0.0.1:1812","hostname.domain.com:1812"]
+
+  ## Credentials for radius authentication.
+  username = "myuser"
+  password = "mypassword"
+  secret = "mysecret"
+
+  ## Request source server IP, normally the server running telegraf.
+  ## This corresponds to Radius' NAS-IP-Address.
+  # request_ip = "127.0.0.1"
+
+  ## Maximum time to receive response.
+  # response_timeout = "5s"
+```
+
+## Metrics
+
+- radius
+  - tags:
+    - response_code
+    - source
+    - source_port
+  - fields:
+    - responsetime_ms (int64)
+
+## Example Output
+
+```text
+radius,response_code=Access-Accept,source=hostname.com,source_port=1812 responsetime_ms=311i 1677526200000000000
+```
diff --git a/content/telegraf/v1/input-plugins/raindrops/_index.md b/content/telegraf/v1/input-plugins/raindrops/_index.md
new file mode 100644
index 000000000..4a92d201d
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/raindrops/_index.md
@@ -0,0 +1,70 @@
+---
+description: "Telegraf plugin for collecting metrics from Raindrops"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Raindrops
+    identifier: input-raindrops
+tags: [Raindrops, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Raindrops Input Plugin
+
+The [raindrops](http://raindrops.bogomips.org/) plugin reads from the specified
+raindrops [middleware](http://raindrops.bogomips.org/Raindrops/Middleware.html)
+URIs and gathers the stats they expose.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Read raindrops stats (raindrops - real-time stats for preforking Rack servers)
+[[inputs.raindrops]]
+  ## An array of raindrops middleware URI to gather stats.
+  urls = ["http://localhost:8080/_raindrops"]
+```
+
+## Metrics
+
+- raindrops
+  - calling (integer, count)
+  - writing (integer, count)
+- raindrops_listen
+  - active (integer, bytes)
+  - queued (integer, bytes)
+
+### Tags
+
+- Raindrops calling/writing of all the workers:
+  - server
+  - port
+
+- raindrops_listen (ip:port):
+  - ip
+  - port
+
+- raindrops_listen (Unix Socket):
+  - socket
+
+## Example Output
+
+```text
+raindrops,port=8080,server=localhost calling=0i,writing=0i 1455479896806238204
+raindrops_listen,ip=0.0.0.0,port=8080 active=0i,queued=0i 1455479896806561938
+raindrops_listen,ip=0.0.0.0,port=8081 active=1i,queued=0i 1455479896806605749
+raindrops_listen,ip=127.0.0.1,port=8082 active=0i,queued=0i 1455479896806646315
+raindrops_listen,ip=0.0.0.0,port=8083 active=0i,queued=0i 1455479896806683252
+raindrops_listen,ip=0.0.0.0,port=8084 active=0i,queued=0i 1455479896806712025
+raindrops_listen,ip=0.0.0.0,port=3000 active=0i,queued=0i 1455479896806779197
+raindrops_listen,socket=/tmp/listen.me active=0i,queued=0i 1455479896806813907
+```
diff --git a/content/telegraf/v1/input-plugins/ras/_index.md b/content/telegraf/v1/input-plugins/ras/_index.md
new file mode 100644
index 000000000..59ec3f89b
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/ras/_index.md
@@ -0,0 +1,89 @@
+---
+description: "Telegraf plugin for collecting metrics from RAS Daemon"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: RAS Daemon
+    identifier: input-ras
+tags: [RAS Daemon, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# RAS Daemon Input Plugin
+
+This plugin is only available on Linux (only for `386`, `amd64`, `arm` and
+`arm64` architectures).
+
+The `RAS` plugin gathers and counts errors provided by
+[RASDaemon](https://github.com/mchehab/rasdaemon).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# RAS plugin exposes counter metrics for Machine Check Errors provided by RASDaemon (sqlite3 output is required).
+# This plugin ONLY supports Linux on 386, amd64, arm, and arm64
+[[inputs.ras]]
+  ## Optional path to RASDaemon sqlite3 database.
+  ## Default: /var/lib/rasdaemon/ras-mc_event.db
+  # db_path = ""
+```
+
+By default, `RASDaemon` runs with the `--enable-sqlite3` flag. If you run into
+problems with the SQLite3 database, verify that this option is still enabled.
+
+## Metrics
+
+- ras
+  - tags:
+    - socket_id
+  - fields:
+    - memory_read_corrected_errors
+    - memory_read_uncorrectable_errors
+    - memory_write_corrected_errors
+    - memory_write_uncorrectable_errors
+    - cache_l0_l1_errors
+    - tlb_instruction_errors
+    - cache_l2_errors
+    - upi_errors
+    - processor_base_errors
+    - processor_bus_errors
+    - internal_timer_errors
+    - smm_handler_code_access_violation_errors
+    - internal_parity_errors
+    - frc_errors
+    - external_mce_errors
+    - microcode_rom_parity_errors
+    - unclassified_mce_errors
+
+Please note that `processor_base_errors` is an aggregate counter measuring the
+following MCE events:
+
+- internal_timer_errors
+- smm_handler_code_access_violation_errors
+- internal_parity_errors
+- frc_errors
+- external_mce_errors
+- microcode_rom_parity_errors
+- unclassified_mce_errors
+
+## Permissions
+
+This plugin requires access to the SQLite3 database created by `RASDaemon`.
+Make sure the user running Telegraf has the required permissions to read this
+database.
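+
+A quick way to check access (a sketch, assuming the default database path and
+a `telegraf` service user):
+
+```sh
+sudo -u telegraf sqlite3 /var/lib/rasdaemon/ras-mc_event.db '.tables'
+```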
+
+## Example Output
+
+```text
+ras,host=ubuntu,socket_id=0 external_mce_base_errors=1i,frc_errors=1i,instruction_tlb_errors=5i,internal_parity_errors=1i,internal_timer_errors=1i,l0_and_l1_cache_errors=7i,memory_read_corrected_errors=25i,memory_read_uncorrectable_errors=0i,memory_write_corrected_errors=5i,memory_write_uncorrectable_errors=0i,microcode_rom_parity_errors=1i,processor_base_errors=7i,processor_bus_errors=1i,smm_handler_code_access_violation_errors=1i,unclassified_mce_base_errors=1i 1598867393000000000
+ras,host=ubuntu level_2_cache_errors=0i,upi_errors=0i 1598867393000000000
+```
diff --git a/content/telegraf/v1/input-plugins/ravendb/_index.md b/content/telegraf/v1/input-plugins/ravendb/_index.md
new file mode 100644
index 000000000..65757d11b
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/ravendb/_index.md
@@ -0,0 +1,247 @@
+---
+description: "Telegraf plugin for collecting metrics from RavenDB"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: RavenDB
+    identifier: input-ravendb
+tags: [RavenDB, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# RavenDB Input Plugin
+
+Reads metrics from RavenDB servers via the monitoring endpoints API.
+
+Requires RavenDB Server 5.2+.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Reads metrics from RavenDB servers via the Monitoring Endpoints
+[[inputs.ravendb]]
+  ## Node URL and port that RavenDB is listening on. By default,
+  ## attempts to connect securely over HTTPS, however, if the user
+  ## is running a local unsecure development cluster users can use
+  ## HTTP via a URL like "http://localhost:8080"
+  url = "https://localhost:4433"
+
+  ## RavenDB X509 client certificate setup
+  # tls_cert = "/etc/telegraf/raven.crt"
+  # tls_key = "/etc/telegraf/raven.key"
+
+  ## Optional request timeout
+  ##
+  ## Timeout, specifies the amount of time to wait
+  ## for a server's response headers after fully writing the request and
+  ## time limit for requests made by this client
+  # timeout = "5s"
+
+  ## List of statistics which are collected
+  # At least one is required
+  # Allowed values: server, databases, indexes, collections
+  #
+  # stats_include = ["server", "databases", "indexes", "collections"]
+
+  ## List of db where database stats are collected
+  ## If empty, all db are concerned
+  # db_stats_dbs = []
+
+  ## List of db where index status are collected
+  ## If empty, all indexes from all db are concerned
+  # index_stats_dbs = []
+
+  ## List of db where collection status are collected
+  ## If empty, all collections from all db are concerned
+  # collection_stats_dbs = []
+```
+
+**Note:** The client certificate used should have `Operator` permissions on the
+cluster.
+
+## Metrics
+
+- ravendb_server
+  - tags:
+    - url
+    - node_tag
+    - cluster_id
+    - public_server_url (optional)
+  - fields:
+    - backup_current_number_of_running_backups
+    - backup_max_number_of_concurrent_backups
+    - certificate_server_certificate_expiration_left_in_sec (optional)
+    - certificate_well_known_admin_certificates (optional, separated by ';')
+    - cluster_current_term
+    - cluster_index
+    - cluster_node_state
+      - 0 -> Passive
+      - 1 -> Candidate
+      - 2 -> Follower
+      - 3 -> LeaderElect
+      - 4 -> Leader
+    - config_public_tcp_server_urls (optional, separated by ';')
+    - config_server_urls
+    - config_tcp_server_urls (optional, separated by ';')
+    - cpu_assigned_processor_count
+    - cpu_machine_usage
+    - cpu_machine_io_wait (optional)
+    - cpu_process_usage
+    - cpu_processor_count
+    - cpu_thread_pool_available_worker_threads
+    - cpu_thread_pool_available_completion_port_threads
+    - databases_loaded_count
+    - databases_total_count
+    - disk_remaining_storage_space_percentage
+    - disk_system_store_used_data_file_size_in_mb
+    - disk_system_store_total_data_file_size_in_mb
+    - disk_total_free_space_in_mb
+    - license_expiration_left_in_sec (optional)
+    - license_max_cores
+    - license_type
+    - license_utilized_cpu_cores
+    - memory_allocated_in_mb
+    - memory_installed_in_mb
+    - memory_low_memory_severity
+      - 0 -> None
+      - 1 -> Low
+      - 2 -> Extremely Low
+    - memory_physical_in_mb
+    - memory_total_dirty_in_mb
+    - memory_total_swap_size_in_mb
+    - memory_total_swap_usage_in_mb
+    - memory_working_set_swap_usage_in_mb
+    - network_concurrent_requests_count
+    - network_last_authorized_non_cluster_admin_request_time_in_sec (optional)
+    - network_last_request_time_in_sec (optional)
+    - network_requests_per_sec
+    - network_tcp_active_connections
+    - network_total_requests
+    - server_full_version
+    - server_process_id
+    - server_version
+    - uptime_in_sec
+
+- ravendb_databases
+  - tags:
+    - url
+    - database_name
+    - database_id
+    - node_tag
+    - public_server_url (optional)
+  - fields:
+    - counts_alerts
+    - counts_attachments
+    - counts_documents
+    - counts_performance_hints
+    - counts_rehabs
+    - counts_replication_factor
+    - counts_revisions
+    - counts_unique_attachments
+    - statistics_doc_puts_per_sec
+    - statistics_map_index_indexes_per_sec
+    - statistics_map_reduce_index_mapped_per_sec
+    - statistics_map_reduce_index_reduced_per_sec
+    - statistics_request_average_duration_in_ms
+    - statistics_requests_count
+    - statistics_requests_per_sec
+    - indexes_auto_count
+    - indexes_count
+    - indexes_disabled_count
+    - indexes_errors_count
+    - indexes_errored_count
+    - indexes_idle_count
+    - indexes_stale_count
+    - indexes_static_count
+    - storage_documents_allocated_data_file_in_mb
+    - storage_documents_used_data_file_in_mb
+    - storage_indexes_allocated_data_file_in_mb
+    - storage_indexes_used_data_file_in_mb
+    - storage_total_allocated_storage_file_in_mb
+    - storage_total_free_space_in_mb
+    - storage_io_read_operations (optional, Linux only)
+    - storage_io_write_operations (optional, Linux only)
+    - storage_read_throughput_in_kb (optional, Linux only)
+    - storage_write_throughput_in_kb (optional, Linux only)
+    - storage_queue_length (optional, Linux only)
+    - time_since_last_backup_in_sec (optional)
+    - uptime_in_sec
+
+- ravendb_indexes
+  - tags:
+    - database_name
+    - index_name
+    - node_tag
+    - public_server_url (optional)
+    - url
+  - fields:
+    - errors
+    - is_invalid
+    - lock_mode
+      - Unlock
+      - LockedIgnore
+      - LockedError
+    - mapped_per_sec
+    - priority
+      - Low
+      - Normal
+      - High
+    - reduced_per_sec
+    - state
+      - Normal
+      - Disabled
+      - Idle
+      - Error
+    - status
+      - Running
+      - Paused
+      - Disabled
+    - time_since_last_indexing_in_sec (optional)
+    - time_since_last_query_in_sec (optional)
+    - type
+      - None
+      - AutoMap
+      - AutoMapReduce
+      - Map
+      - MapReduce
+      - Faulty
+      - JavaScriptMap
+      - JavaScriptMapReduce
+
+- ravendb_collections
+  - tags:
+    - collection_name
+    - database_name
+    - node_tag
+    - public_server_url (optional)
+    - url
+  - fields:
+    - documents_count
+    - documents_size_in_bytes
+    - revisions_size_in_bytes
+    - tombstones_size_in_bytes
+    - total_size_in_bytes
+
+## Example Output
+
+```text
+ravendb_server,cluster_id=07aecc42-9194-4181-999c-1c42450692c9,host=DESKTOP-2OISR6D,node_tag=A,url=http://localhost:8080 backup_current_number_of_running_backups=0i,backup_max_number_of_concurrent_backups=4i,certificate_server_certificate_expiration_left_in_sec=-1,cluster_current_term=2i,cluster_index=10i,cluster_node_state=4i,config_server_urls="http://127.0.0.1:8080",cpu_assigned_processor_count=8i,cpu_machine_usage=19.09944089456869,cpu_process_usage=0.16977205323024872,cpu_processor_count=8i,cpu_thread_pool_available_completion_port_threads=1000i,cpu_thread_pool_available_worker_threads=32763i,databases_loaded_count=1i,databases_total_count=1i,disk_remaining_storage_space_percentage=18i,disk_system_store_total_data_file_size_in_mb=35184372088832i,disk_system_store_used_data_file_size_in_mb=31379031064576i,disk_total_free_space_in_mb=42931i,license_expiration_left_in_sec=24079222.8772186,license_max_cores=256i,license_type="Enterprise",license_utilized_cpu_cores=8i,memory_allocated_in_mb=205i,memory_installed_in_mb=16384i,memory_low_memory_severity=0i,memory_physical_in_mb=16250i,memory_total_dirty_in_mb=0i,memory_total_swap_size_in_mb=0i,memory_total_swap_usage_in_mb=0i,memory_working_set_swap_usage_in_mb=0i,network_concurrent_requests_count=1i,network_last_request_time_in_sec=0.0058717,network_requests_per_sec=0.09916543455308825,network_tcp_active_connections=128i,network_total_requests=10i,server_full_version="5.2.0-custom-52",server_process_id=31044i,server_version="5.2",uptime_in_sec=56i 1613027977000000000
+ravendb_databases,database_id=ced0edba-8f80-48b8-8e81-c3d2c6748ec3,database_name=db1,host=DESKTOP-2OISR6D,node_tag=A,url=http://localhost:8080 counts_alerts=0i,counts_attachments=17i,counts_documents=1059i,counts_performance_hints=0i,counts_rehabs=0i,counts_replication_factor=1i,counts_revisions=5475i,counts_unique_attachments=17i,indexes_auto_count=0i,indexes_count=7i,indexes_disabled_count=0i,indexes_errored_count=0i,indexes_errors_count=0i,indexes_idle_count=0i,indexes_stale_count=0i,indexes_static_count=7i,statistics_doc_puts_per_sec=0,statistics_map_index_indexes_per_sec=0,statistics_map_reduce_index_mapped_per_sec=0,statistics_map_reduce_index_reduced_per_sec=0,statistics_request_average_duration_in_ms=0,statistics_requests_count=0i,statistics_requests_per_sec=0,storage_documents_allocated_data_file_in_mb=140737488355328i,storage_documents_used_data_file_in_mb=74741020884992i,storage_indexes_allocated_data_file_in_mb=175921860444160i,storage_indexes_used_data_file_in_mb=120722940755968i,storage_total_allocated_storage_file_in_mb=325455441821696i,storage_total_free_space_in_mb=42931i,uptime_in_sec=54 1613027977000000000
+ravendb_indexes,database_name=db1,host=DESKTOP-2OISR6D,index_name=Orders/Totals,node_tag=A,url=http://localhost:8080 errors=0i,is_invalid=false,lock_mode="Unlock",mapped_per_sec=0,priority="Normal",reduced_per_sec=0,state="Normal",status="Running",time_since_last_indexing_in_sec=45.4256655,time_since_last_query_in_sec=45.4304202,type="Map" 1613027977000000000
+ravendb_collections,collection_name=@hilo,database_name=db1,host=DESKTOP-2OISR6D,node_tag=A,url=http://localhost:8080 documents_count=8i,documents_size_in_bytes=122880i,revisions_size_in_bytes=0i,tombstones_size_in_bytes=122880i,total_size_in_bytes=245760i 1613027977000000000
+```
+
+## Contributors
+
+- Marcin Lewandowski (<https://github.com/ml054/>)
+- Casey Barton (<https://github.com/bartoncasey>)
diff --git a/content/telegraf/v1/input-plugins/redfish/_index.md b/content/telegraf/v1/input-plugins/redfish/_index.md
new file mode 100644
index 000000000..75ee0be65
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/redfish/_index.md
@@ -0,0 +1,178 @@
+---
+description: "Telegraf plugin for collecting metrics from Redfish"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Redfish
+    identifier: input-redfish
+tags: [Redfish, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Redfish Input Plugin
+
+The `redfish` plugin gathers metrics and status information about CPU
+temperature, fan speed, power supply, voltage, hostname, and location details
+(datacenter, placement, rack, and room) of hardware servers for which [DMTF's
+Redfish](https://redfish.dmtf.org/) is enabled.
+
+Telegraf minimum version: Telegraf 1.15.0
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `username` and
+`password` options. See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more
+details on how to use them.
+
+
+## Configuration
+
+```toml @sample.conf
+# Read CPU, Fans, Powersupply and Voltage metrics of hardware server through redfish APIs
+[[inputs.redfish]]
+  ## Redfish API Base URL.
+  address = "https://127.0.0.1:5000"
+
+  ## Credentials for the Redfish API. Can also use secrets.
+  username = "root"
+  password = "password123456"
+
+  ## System Id to collect data for in Redfish APIs.
+  computer_system_id="System.Embedded.1"
+
+  ## Metrics to collect
+  ## Choose from "power" and "thermal".
+  # include_metrics = ["power", "thermal"]
+
+  ## Tag sets allow you to include Redfish OData link parent data.
+  ## For example, thermal data is an OData link whose parent Chassis has a
+  ## Location link.
+  ## For more info see the Redfish Resource and Schema Guide on DMTF's website.
+  ## Available sets are: "chassis.location" and "chassis"
+  # include_tag_sets = ["chassis.location"]
+
+  ## Workarounds
+  ## Defines workarounds for certain hardware vendors. Choose from:
+  ## * ilo4-thermal - Do not pass OData-Version header to Thermal endpoint
+  # workarounds = []
+
+  ## Amount of time allowed to complete the HTTP request
+  # timeout = "5s"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+## Metrics
+
+- redfish_thermal_temperatures
+  - tags:
+    - source
+    - member_id
+    - address
+    - name
+    - state
+    - health
+  - fields:
+    - reading_celsius
+    - upper_threshold_critical
+    - upper_threshold_fatal
+    - lower_threshold_critical
+    - lower_threshold_fatal
+
+- redfish_thermal_fans
+  - tags:
+    - source
+    - member_id
+    - address
+    - name
+    - state
+    - health
+  - fields:
+    - reading_rpm or reading_percent
+    - upper_threshold_critical
+    - upper_threshold_fatal
+    - lower_threshold_critical
+    - lower_threshold_fatal
+
+- redfish_power_powersupplies
+  - tags:
+    - source
+    - member_id
+    - address
+    - name
+    - state
+    - health
+  - fields:
+    - last_power_output_watts
+    - line_input_voltage
+    - power_capacity_watts
+    - power_input_watts
+    - power_output_watts
+
+- redfish_power_voltages (available only if voltage data is found)
+  - tags:
+    - source
+    - member_id
+    - address
+    - name
+    - state
+    - health
+  - fields:
+    - reading_volts
+    - upper_threshold_critical
+    - upper_threshold_fatal
+    - lower_threshold_critical
+    - lower_threshold_fatal
+
+## Tag Sets
+
+- chassis.location
+  - tags:
+    - datacenter (available only if location data is found)
+    - rack (available only if location data is found)
+    - room (available only if location data is found)
+    - row (available only if location data is found)
+
+- chassis
+  - tags:
+    - chassis_chassistype
+    - chassis_manufacturer
+    - chassis_model
+    - chassis_partnumber
+    - chassis_powerstate
+    - chassis_sku
+    - chassis_serialnumber
+    - chassis_state
+    - chassis_health
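
The thermal fields listed above correspond to properties of the chassis `Thermal` resource in the standard Redfish schema (`ReadingCelsius`, `UpperThresholdCritical`, `Status`, and so on). A sketch of how one `Temperatures` entry could translate into the tags and fields of `redfish_thermal_temperatures` (illustrative only, not the plugin's actual code):

```python
def temperature_metric(entry: dict, address: str):
    """Translate one Redfish Thermal 'Temperatures' entry into the tags
    and fields listed under redfish_thermal_temperatures (sketch)."""
    status = entry.get("Status", {})
    tags = {
        "address": address,
        "member_id": entry.get("MemberId"),
        "name": entry.get("Name"),
        "state": status.get("State"),
        "health": status.get("Health"),
    }
    # Only keep fields the endpoint actually reported.
    field_map = {
        "reading_celsius": "ReadingCelsius",
        "upper_threshold_critical": "UpperThresholdCritical",
        "upper_threshold_fatal": "UpperThresholdFatal",
        "lower_threshold_critical": "LowerThresholdCritical",
        "lower_threshold_fatal": "LowerThresholdFatal",
    }
    fields = {k: entry[v] for k, v in field_map.items() if entry.get(v) is not None}
    return tags, fields

entry = {
    "MemberId": "0",
    "Name": "CPU1 Temp",
    "Status": {"State": "Enabled", "Health": "OK"},
    "ReadingCelsius": 41,
    "UpperThresholdCritical": 45,
    "UpperThresholdFatal": 48,
}
tags, fields = temperature_metric(entry, "127.0.0.1")
```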
+
+## Example Output
+
+```text
+redfish_thermal_temperatures,address=127.0.0.1,chassis_chassistype=RackMount,chassis_health=OK,chassis_manufacturer=Contoso,chassis_model=3500RX,chassis_partnumber=224071-J23,chassis_powerstate=On,chassis_serialnumber=437XR1138R2,chassis_sku=8675309,chassis_state=Enabled,health=OK,member_id=0,name=CPU1\ Temp,rack=WEB43,row=North,source=web483,state=Enabled reading_celsius=41,upper_threshold_critical=45,upper_threshold_fatal=48 1691270160000000000
+redfish_thermal_temperatures,address=127.0.0.1,chassis_chassistype=RackMount,chassis_health=OK,chassis_manufacturer=Contoso,chassis_model=3500RX,chassis_partnumber=224071-J23,chassis_powerstate=On,chassis_serialnumber=437XR1138R2,chassis_sku=8675309,chassis_state=Enabled,member_id=1,name=CPU2\ Temp,rack=WEB43,row=North,source=web483,state=Disabled upper_threshold_critical=45,upper_threshold_fatal=48 1691270160000000000
+redfish_thermal_temperatures,address=127.0.0.1,chassis_chassistype=RackMount,chassis_health=OK,chassis_manufacturer=Contoso,chassis_model=3500RX,chassis_partnumber=224071-J23,chassis_powerstate=On,chassis_serialnumber=437XR1138R2,chassis_sku=8675309,chassis_state=Enabled,health=OK,member_id=2,name=Chassis\ Intake\ Temp,rack=WEB43,row=North,source=web483,state=Enabled upper_threshold_critical=40,upper_threshold_fatal=50,lower_threshold_critical=5,lower_threshold_fatal=0,reading_celsius=25 1691270160000000000
+redfish_thermal_fans,address=127.0.0.1,chassis_chassistype=RackMount,chassis_health=OK,chassis_manufacturer=Contoso,chassis_model=3500RX,chassis_partnumber=224071-J23,chassis_powerstate=On,chassis_serialnumber=437XR1138R2,chassis_sku=8675309,chassis_state=Enabled,health=OK,member_id=0,name=BaseBoard\ System\ Fan,rack=WEB43,row=North,source=web483,state=Enabled lower_threshold_fatal=0i,reading_rpm=2100i 1691270160000000000
+redfish_thermal_fans,address=127.0.0.1,chassis_chassistype=RackMount,chassis_health=OK,chassis_manufacturer=Contoso,chassis_model=3500RX,chassis_partnumber=224071-J23,chassis_powerstate=On,chassis_serialnumber=437XR1138R2,chassis_sku=8675309,chassis_state=Enabled,health=OK,member_id=1,name=BaseBoard\ System\ Fan\ Backup,rack=WEB43,row=North,source=web483,state=Enabled lower_threshold_fatal=0i,reading_rpm=2050i 1691270160000000000
+redfish_power_powersupplies,address=127.0.0.1,chassis_chassistype=RackMount,chassis_health=OK,chassis_manufacturer=Contoso,chassis_model=3500RX,chassis_partnumber=224071-J23,chassis_powerstate=On,chassis_serialnumber=437XR1138R2,chassis_sku=8675309,chassis_state=Enabled,health=Warning,member_id=0,name=Power\ Supply\ Bay,rack=WEB43,row=North,source=web483,state=Enabled line_input_voltage=120,last_power_output_watts=325,power_capacity_watts=800 1691270160000000000
+redfish_power_voltages,address=127.0.0.1,chassis_chassistype=RackMount,chassis_health=OK,chassis_manufacturer=Contoso,chassis_model=3500RX,chassis_partnumber=224071-J23,chassis_powerstate=On,chassis_serialnumber=437XR1138R2,chassis_sku=8675309,chassis_state=Enabled,health=OK,member_id=0,name=VRM1\ Voltage,rack=WEB43,row=North,source=web483,state=Enabled upper_threshold_fatal=15,lower_threshold_critical=11,lower_threshold_fatal=10,reading_volts=12,upper_threshold_critical=13 1691270160000000000
+redfish_power_voltages,address=127.0.0.1,chassis_chassistype=RackMount,chassis_health=OK,chassis_manufacturer=Contoso,chassis_model=3500RX,chassis_partnumber=224071-J23,chassis_powerstate=On,chassis_serialnumber=437XR1138R2,chassis_sku=8675309,chassis_state=Enabled,health=OK,member_id=1,name=VRM2\ Voltage,rack=WEB43,row=North,source=web483,state=Enabled reading_volts=5,upper_threshold_critical=7,lower_threshold_critical=4.5 1691270160000000000
+redfish_thermal_temperatures,address=127.0.0.1,chassis_chassistype=RackMount,chassis_health=OK,chassis_manufacturer=Contoso,chassis_model=3500RX,chassis_partnumber=224071-J23,chassis_powerstate=On,chassis_serialnumber=437XR1138R2,chassis_sku=8675309,chassis_state=Enabled,health=OK,member_id=0,name=CPU1\ Temp,rack=WEB43,row=North,source=web483,state=Enabled upper_threshold_critical=45,upper_threshold_fatal=48,reading_celsius=41 1691270170000000000
+redfish_thermal_temperatures,address=127.0.0.1,chassis_chassistype=RackMount,chassis_health=OK,chassis_manufacturer=Contoso,chassis_model=3500RX,chassis_partnumber=224071-J23,chassis_powerstate=On,chassis_serialnumber=437XR1138R2,chassis_sku=8675309,chassis_state=Enabled,member_id=1,name=CPU2\ Temp,rack=WEB43,row=North,source=web483,state=Disabled upper_threshold_critical=45,upper_threshold_fatal=48 1691270170000000000
+redfish_thermal_temperatures,address=127.0.0.1,chassis_chassistype=RackMount,chassis_health=OK,chassis_manufacturer=Contoso,chassis_model=3500RX,chassis_partnumber=224071-J23,chassis_powerstate=On,chassis_serialnumber=437XR1138R2,chassis_sku=8675309,chassis_state=Enabled,health=OK,member_id=2,name=Chassis\ Intake\ Temp,rack=WEB43,row=North,source=web483,state=Enabled lower_threshold_critical=5,lower_threshold_fatal=0,reading_celsius=25,upper_threshold_critical=40,upper_threshold_fatal=50 1691270170000000000
+```
diff --git a/content/telegraf/v1/input-plugins/redis/_index.md b/content/telegraf/v1/input-plugins/redis/_index.md
new file mode 100644
index 000000000..e5288482d
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/redis/_index.md
@@ -0,0 +1,262 @@
+---
+description: "Telegraf plugin for collecting metrics from Redis"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Redis
+    identifier: input-redis
+tags: [Redis, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Redis Input Plugin
+
+The Redis input plugin gathers metrics from one or many Redis servers.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from one or many redis servers
+[[inputs.redis]]
+  ## specify servers via a url matching:
+  ##  [protocol://][username:password]@address[:port]
+  ##  e.g.
+  ##    tcp://localhost:6379
+  ##    tcp://username:password@192.168.99.100
+  ##    unix:///var/run/redis.sock
+  ##
+  ## If no servers are specified, then localhost is used as the host.
+  ## If no port is specified, 6379 is used
+  servers = ["tcp://localhost:6379"]
+
+  ## Optional. Specify redis commands to retrieve values
+  # [[inputs.redis.commands]]
+  #   # The command to run where each argument is a separate element
+  #   command = ["get", "sample-key"]
+  #   # The field to store the result in
+  #   field = "sample-key-value"
+  #   # The type of the result
+  #   # Can be "string", "integer", or "float"
+  #   type = "string"
+
+  ## Specify username and password for ACL auth (Redis 6.0+). You can add this
+  ## to the server URI above or specify it here. The values here take
+  ## precedence.
+  # username = ""
+  # password = ""
+
+  ## Optional TLS Config
+  ## Check tls/config.go ClientConfig for more options
+  # tls_enable = true
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = true
+```
+
+## Metrics
+
+The plugin gathers the results of the Redis [INFO](https://redis.io/commands/info)
+command. There are two separate measurements: _redis_ and
+_redis\_keyspace_; the latter gathers database-related statistics.
+
+Additionally, the plugin calculates the hit/miss ratio (keyspace\_hitrate)
+and the elapsed time since the last RDB save (rdb\_last\_save\_time\_elapsed).
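
Both derived fields are simple arithmetic over values reported by `INFO`; a minimal sketch (the zero-lookup edge case handling here is an assumption, not taken from the plugin source):

```python
import time

def keyspace_hitrate(keyspace_hits: int, keyspace_misses: int) -> float:
    """Hit/miss ratio; returns 0.0 before any lookups (assumed behavior)."""
    total = keyspace_hits + keyspace_misses
    return keyspace_hits / total if total else 0.0

def rdb_last_save_time_elapsed(rdb_last_save_time: int, now: float = None) -> int:
    """Seconds elapsed since the last RDB save (a Unix timestamp)."""
    return int((now if now is not None else time.time()) - rdb_last_save_time)

print(keyspace_hitrate(2, 0))  # 1.0, matching the example output later on this page
```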
+
+- redis
+  - keyspace_hitrate(float, number)
+  - rdb_last_save_time_elapsed(int, seconds)
+
+    **Server**
+  - uptime(int, seconds)
+  - lru_clock(int, number)
+  - redis_version(string)
+
+    **Clients**
+  - clients(int, number)
+  - client_longest_output_list(int, number)
+  - client_biggest_input_buf(int, number)
+  - blocked_clients(int, number)
+
+    **Memory**
+  - used_memory(int, bytes)
+  - used_memory_rss(int, bytes)
+  - used_memory_peak(int, bytes)
+  - total_system_memory(int, bytes)
+  - used_memory_lua(int, bytes)
+  - maxmemory(int, bytes)
+  - maxmemory_policy(string)
+  - mem_fragmentation_ratio(float, number)
+
+    **Persistence**
+  - loading(int,flag)
+  - rdb_changes_since_last_save(int, number)
+  - rdb_bgsave_in_progress(int, flag)
+  - rdb_last_save_time(int, seconds)
+  - rdb_last_bgsave_status(string)
+  - rdb_last_bgsave_time_sec(int, seconds)
+  - rdb_current_bgsave_time_sec(int, seconds)
+  - aof_enabled(int, flag)
+  - aof_rewrite_in_progress(int, flag)
+  - aof_rewrite_scheduled(int, flag)
+  - aof_last_rewrite_time_sec(int, seconds)
+  - aof_current_rewrite_time_sec(int, seconds)
+  - aof_last_bgrewrite_status(string)
+  - aof_last_write_status(string)
+
+    **Stats**
+  - total_connections_received(int, number)
+  - total_commands_processed(int, number)
+  - instantaneous_ops_per_sec(int, number)
+  - total_net_input_bytes(int, bytes)
+  - total_net_output_bytes(int, bytes)
+  - instantaneous_input_kbps(float, KB/sec)
+  - instantaneous_output_kbps(float, KB/sec)
+  - rejected_connections(int, number)
+  - sync_full(int, number)
+  - sync_partial_ok(int, number)
+  - sync_partial_err(int, number)
+  - expired_keys(int, number)
+  - evicted_keys(int, number)
+  - keyspace_hits(int, number)
+  - keyspace_misses(int, number)
+  - pubsub_channels(int, number)
+  - pubsub_patterns(int, number)
+  - latest_fork_usec(int, microseconds)
+  - migrate_cached_sockets(int, number)
+
+    **Replication**
+  - connected_slaves(int, number)
+  - master_link_down_since_seconds(int, number)
+  - master_link_status(string)
+  - master_repl_offset(int, number)
+  - second_repl_offset(int, number)
+  - repl_backlog_active(int, number)
+  - repl_backlog_size(int, bytes)
+  - repl_backlog_first_byte_offset(int, number)
+  - repl_backlog_histlen(int, bytes)
+
+    **CPU**
+  - used_cpu_sys(float, number)
+  - used_cpu_user(float, number)
+  - used_cpu_sys_children(float, number)
+  - used_cpu_user_children(float, number)
+
+    **Cluster**
+  - cluster_enabled(int, flag)
+
+- redis_keyspace
+  - keys(int, number)
+  - expires(int, number)
+  - avg_ttl(int, number)
+
+- redis_cmdstat
+    Every Redis command used may have the following fields:
+  - calls(int, number)
+  - failed_calls(int, number)
+  - rejected_calls(int, number)
+  - usec(int, microseconds)
+  - usec_per_call(float, microseconds)
+
+- redis_latency_percentiles_usec
+  - fields:
+    - p50(float, microseconds)
+    - p99(float, microseconds)
+    - p99.9(float, microseconds)
+
+- redis_replication
+  - tags:
+    - replication_role
+    - replica_ip
+    - replica_port
+    - state (either "online", "wait_bgsave", or "send_bulk")
+
+  - fields:
+    - lag(int, number)
+    - offset(int, number)
+
+- redis_errorstat
+  - tags:
+    - err
+  - fields:
+    - total (int, number)
+
+### Tags
+
+- All measurements have the following tags:
+  - port
+  - server
+  - replication_role
+
+- The redis_keyspace measurement has an additional database tag:
+  - database
+
+- The redis_cmdstat measurement has an additional command tag:
+  - command
+
+- The redis_latency_percentiles_usec measurement has an additional command tag:
+  - command
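
The fields above are parsed out of the `INFO` reply, which is a block of `key:value` lines. As a rough illustration of how such lines become typed fields (a sketch, not the plugin's actual parser):

```python
def parse_info(info_text: str) -> dict:
    """Parse 'key:value' lines from Redis INFO output, coercing numbers."""
    fields = {}
    for line in info_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and section headers
            continue
        key, _, value = line.partition(":")
        for cast in (int, float):
            try:
                fields[key] = cast(value)
                break
            except ValueError:
                continue
        else:
            fields[key] = value  # leave non-numeric values as strings
    return fields

info = ("# Server\r\n"
        "redis_version:6.2.6\r\n"
        "uptime_in_seconds:2822\r\n"
        "mem_fragmentation_ratio:11.35\r\n")
parsed = parse_info(info)
```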
+
+## Example Output
+
+Using this configuration:
+
+```toml
+[[inputs.redis]]
+  ## specify servers via a url matching:
+  ##  [protocol://][:password]@address[:port]
+  ##  e.g.
+  ##    tcp://localhost:6379
+  ##    tcp://:password@192.168.99.100
+  ##
+  ## If no servers are specified, then localhost is used as the host.
+  ## If no port is specified, 6379 is used
+  servers = ["tcp://localhost:6379"]
+```
+
+When run with:
+
+```sh
+./telegraf --config telegraf.conf --input-filter redis --test
+```
+
+It produces:
+
+```text
+redis,server=localhost,port=6379,replication_role=master,host=host keyspace_hitrate=1,clients=2i,blocked_clients=0i,instantaneous_input_kbps=0,sync_full=0i,pubsub_channels=0i,pubsub_patterns=0i,total_net_output_bytes=6659253i,used_memory=842448i,total_system_memory=8351916032i,aof_current_rewrite_time_sec=-1i,rdb_changes_since_last_save=0i,sync_partial_err=0i,latest_fork_usec=508i,instantaneous_output_kbps=0,expired_keys=0i,used_memory_peak=843416i,aof_rewrite_in_progress=0i,aof_last_bgrewrite_status="ok",migrate_cached_sockets=0i,connected_slaves=0i,maxmemory_policy="noeviction",aof_rewrite_scheduled=0i,total_net_input_bytes=3125i,used_memory_rss=9564160i,repl_backlog_histlen=0i,rdb_last_bgsave_status="ok",aof_last_rewrite_time_sec=-1i,keyspace_misses=0i,client_biggest_input_buf=5i,used_cpu_user=1.33,maxmemory=0i,rdb_current_bgsave_time_sec=-1i,total_commands_processed=271i,repl_backlog_size=1048576i,used_cpu_sys=3,uptime=2822i,lru_clock=16706281i,used_memory_lua=37888i,rejected_connections=0i,sync_partial_ok=0i,evicted_keys=0i,rdb_last_save_time_elapsed=1922i,rdb_last_save_time=1493099368i,instantaneous_ops_per_sec=0i,used_cpu_user_children=0,client_longest_output_list=0i,master_repl_offset=0i,repl_backlog_active=0i,keyspace_hits=2i,used_cpu_sys_children=0,cluster_enabled=0i,rdb_last_bgsave_time_sec=0i,aof_last_write_status="ok",total_connections_received=263i,aof_enabled=0i,repl_backlog_first_byte_offset=0i,mem_fragmentation_ratio=11.35,loading=0i,rdb_bgsave_in_progress=0i 1493101290000000000
+```
+
+redis_keyspace:
+
+```text
+redis_keyspace,database=db1,host=host,server=localhost,port=6379,replication_role=master keys=1i,expires=0i,avg_ttl=0i 1493101350000000000
+```
+
+redis_cmdstat:
+
+```text
+redis_cmdstat,command=publish,host=host,port=6379,replication_role=master,server=localhost calls=569514i,failed_calls=0i,rejected_calls=0i,usec=9916334i,usec_per_call=17.41 1559227136000000000
+```
+
+redis_latency_percentiles_usec:
+
+```text
+redis_latency_percentiles_usec,command=zadd,host=host,port=6379,replication_role=master,server=localhost p50=9.023,p99=28.031,p99.9=43.007 1559227136000000000
+```
+
+redis_errorstat:
+
+```text
+redis_errorstat,err=MOVED,host=host,port=6379,replication_role=master,server=localhost total=4284 1691119309000000000
+```
diff --git a/content/telegraf/v1/input-plugins/redis_sentinel/_index.md b/content/telegraf/v1/input-plugins/redis_sentinel/_index.md
new file mode 100644
index 000000000..3f3a1a91a
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/redis_sentinel/_index.md
@@ -0,0 +1,226 @@
+---
+description: "Telegraf plugin for collecting metrics from Redis Sentinel"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Redis Sentinel
+    identifier: input-redis_sentinel
+tags: [Redis Sentinel, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Redis Sentinel Input Plugin
+
+The Redis Sentinel input plugin monitors one or more Redis Sentinel instances,
+each of which may be monitoring multiple Redis servers and replicas.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from one or many redis-sentinel servers
+[[inputs.redis_sentinel]]
+  ## specify servers via a url matching:
+  ##  [protocol://][username:password]@address[:port]
+  ##  e.g.
+  ##    tcp://localhost:26379
+  ##    tcp://username:password@192.168.99.100
+  ##    unix:///var/run/redis-sentinel.sock
+  ##
+  ## If no servers are specified, then localhost is used as the host.
+  ## If no port is specified, 26379 is used
+  # servers = ["tcp://localhost:26379"]
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = true
+```
+
+## Measurements & Fields
+
+The plugin gathers the results of these commands and measurements:
+
+* `sentinel masters` - `redis_sentinel_masters`
+* `sentinel sentinels` - `redis_sentinels`
+* `sentinel replicas` - `redis_replicas`
+* `info all` - `redis_sentinel`
+
+The `has_quorum` field in `redis_sentinel_masters` comes from calling the
+`sentinel ckquorum` command.
+
+There are 5 remote network requests made for each server listed in the config.
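
A successful `SENTINEL CKQUORUM <master>` call returns a status line beginning with `OK`, while an unreachable quorum produces an error reply. A sketch of deriving a boolean `has_quorum` from that reply (the reply wording shown is an assumption based on Redis Sentinel's documented behavior, not this plugin's source):

```python
def has_quorum(ckquorum_reply: str) -> bool:
    """True when 'SENTINEL CKQUORUM <master>' reports the quorum is
    reachable; the reply wording below is assumed for illustration."""
    return ckquorum_reply.strip().upper().startswith("OK")

ok = has_quorum("OK 3 usable Sentinels. Quorum and failover authorization can be reached")
bad = has_quorum("NOQUORUM 1 usable Sentinels. Not enough Sentinels to reach quorum")
```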
+
+## Metrics
+
+* redis_sentinel_masters
+  * tags:
+    * host
+    * master
+    * port
+    * source
+
+  * fields:
+    * config_epoch (int)
+    * down_after_milliseconds (int)
+    * failover_timeout (int)
+    * flags (string)
+    * has_quorum (bool)
+    * info_refresh (int)
+    * ip (string)
+    * last_ok_ping_reply (int)
+    * last_ping_reply (int)
+    * last_ping_sent (int)
+    * link_pending_commands (int)
+    * link_refcount (int)
+    * num_other_sentinels (int)
+    * num_slaves (int)
+    * parallel_syncs (int)
+    * port (int)
+    * quorum (int)
+    * role_reported (string)
+    * role_reported_time (int)
+
+* redis_sentinel_sentinels
+  * tags:
+    * host
+    * master
+    * port
+    * sentinel_ip
+    * sentinel_port
+    * source
+
+  * fields:
+    * down_after_milliseconds (int)
+    * flags (string)
+    * last_hello_message (int)
+    * last_ok_ping_reply (int)
+    * last_ping_reply (int)
+    * last_ping_sent (int)
+    * link_pending_commands (int)
+    * link_refcount (int)
+    * name (string)
+    * voted_leader (string)
+    * voted_leader_epoch (int)
+
+* redis_sentinel_replicas
+  * tags:
+    * host
+    * master
+    * port
+    * replica_ip
+    * replica_port
+    * source
+
+  * fields:
+    * down_after_milliseconds (int)
+    * flags (string)
+    * info_refresh (int)
+    * last_ok_ping_reply (int)
+    * last_ping_reply (int)
+    * last_ping_sent (int)
+    * link_pending_commands (int)
+    * link_refcount (int)
+    * master_host (string)
+    * master_link_down_time (int)
+    * master_link_status (string)
+    * master_port (int)
+    * name (string)
+    * role_reported (string)
+    * role_reported_time (int)
+    * slave_priority (int)
+    * slave_repl_offset (int)
+
+* redis_sentinel
+  * tags:
+    * host
+    * port
+    * source
+
+  * fields:
+    * active_defrag_hits (int)
+    * active_defrag_key_hits (int)
+    * active_defrag_key_misses (int)
+    * active_defrag_misses (int)
+    * blocked_clients (int)
+    * client_recent_max_input_buffer (int)
+    * client_recent_max_output_buffer (int)
+    * clients (int)
+    * evicted_keys (int)
+    * expired_keys (int)
+    * expired_stale_perc (float)
+    * expired_time_cap_reached_count (int)
+    * instantaneous_input_kbps (float)
+    * instantaneous_ops_per_sec (int)
+    * instantaneous_output_kbps (float)
+    * keyspace_hits (int)
+    * keyspace_misses (int)
+    * latest_fork_usec (int)
+    * lru_clock (int)
+    * migrate_cached_sockets (int)
+    * pubsub_channels (int)
+    * pubsub_patterns (int)
+    * redis_version (string)
+    * rejected_connections (int)
+    * sentinel_masters (int)
+    * sentinel_running_scripts (int)
+    * sentinel_scripts_queue_length (int)
+    * sentinel_simulate_failure_flags (int)
+    * sentinel_tilt (int)
+    * slave_expires_tracked_keys (int)
+    * sync_full (int)
+    * sync_partial_err (int)
+    * sync_partial_ok (int)
+    * total_commands_processed (int)
+    * total_connections_received (int)
+    * total_net_input_bytes (int)
+    * total_net_output_bytes (int)
+    * uptime_ns (int, nanoseconds)
+    * used_cpu_sys (float)
+    * used_cpu_sys_children (float)
+    * used_cpu_user (float)
+    * used_cpu_user_children (float)
+
+## Example Output
+
+The following example shows the output of two Redis Sentinel instances
+monitoring a single master and replica:
+
+### redis_sentinel_masters
+
+```text
+redis_sentinel_masters,host=somehostname,master=mymaster,port=26380,source=localhost config_epoch=0i,down_after_milliseconds=30000i,failover_timeout=180000i,flags="master",has_quorum=1i,info_refresh=110i,ip="127.0.0.1",last_ok_ping_reply=819i,last_ping_reply=819i,last_ping_sent=0i,link_pending_commands=0i,link_refcount=1i,num_other_sentinels=1i,num_slaves=1i,parallel_syncs=1i,port=6379i,quorum=2i,role_reported="master",role_reported_time=311248i 1570207377000000000
+redis_sentinel_masters,host=somehostname,master=mymaster,port=26379,source=localhost config_epoch=0i,down_after_milliseconds=30000i,failover_timeout=180000i,flags="master",has_quorum=1i,info_refresh=1650i,ip="127.0.0.1",last_ok_ping_reply=1003i,last_ping_reply=1003i,last_ping_sent=0i,link_pending_commands=0i,link_refcount=1i,num_other_sentinels=1i,num_slaves=1i,parallel_syncs=1i,port=6379i,quorum=2i,role_reported="master",role_reported_time=302990i 1570207377000000000
+```
+
+### redis_sentinel_sentinels
+
+```text
+redis_sentinel_sentinels,host=somehostname,master=mymaster,port=26380,sentinel_ip=127.0.0.1,sentinel_port=26379,source=localhost down_after_milliseconds=30000i,flags="sentinel",last_hello_message=1337i,last_ok_ping_reply=566i,last_ping_reply=566i,last_ping_sent=0i,link_pending_commands=0i,link_refcount=1i,name="fd7444de58ecc00f2685cd89fc11ff96c72f0569",voted_leader="?",voted_leader_epoch=0i 1570207377000000000
+redis_sentinel_sentinels,host=somehostname,master=mymaster,port=26379,sentinel_ip=127.0.0.1,sentinel_port=26380,source=localhost down_after_milliseconds=30000i,flags="sentinel",last_hello_message=1510i,last_ok_ping_reply=1004i,last_ping_reply=1004i,last_ping_sent=0i,link_pending_commands=0i,link_refcount=1i,name="d06519438fe1b35692cb2ea06d57833c959f9114",voted_leader="?",voted_leader_epoch=0i 1570207377000000000
+```
+
+### redis_sentinel_replicas
+
+```text
+redis_sentinel_replicas,host=somehostname,master=mymaster,port=26379,replica_ip=127.0.0.1,replica_port=6380,source=localhost down_after_milliseconds=30000i,flags="slave",info_refresh=1651i,last_ok_ping_reply=1005i,last_ping_reply=1005i,last_ping_sent=0i,link_pending_commands=0i,link_refcount=1i,master_host="127.0.0.1",master_link_down_time=0i,master_link_status="ok",master_port=6379i,name="127.0.0.1:6380",role_reported="slave",role_reported_time=302983i,slave_priority=100i,slave_repl_offset=40175i 1570207377000000000
+redis_sentinel_replicas,host=somehostname,master=mymaster,port=26380,replica_ip=127.0.0.1,replica_port=6380,source=localhost down_after_milliseconds=30000i,flags="slave",info_refresh=111i,last_ok_ping_reply=821i,last_ping_reply=821i,last_ping_sent=0i,link_pending_commands=0i,link_refcount=1i,master_host="127.0.0.1",master_link_down_time=0i,master_link_status="ok",master_port=6379i,name="127.0.0.1:6380",role_reported="slave",role_reported_time=311243i,slave_priority=100i,slave_repl_offset=40441i 1570207377000000000
+```
+
+### redis_sentinel
+
+```text
+redis_sentinel,host=somehostname,port=26379,source=localhost active_defrag_hits=0i,active_defrag_key_hits=0i,active_defrag_key_misses=0i,active_defrag_misses=0i,blocked_clients=0i,client_recent_max_input_buffer=2i,client_recent_max_output_buffer=0i,clients=3i,evicted_keys=0i,expired_keys=0i,expired_stale_perc=0,expired_time_cap_reached_count=0i,instantaneous_input_kbps=0.01,instantaneous_ops_per_sec=0i,instantaneous_output_kbps=0,keyspace_hits=0i,keyspace_misses=0i,latest_fork_usec=0i,lru_clock=9926289i,migrate_cached_sockets=0i,pubsub_channels=0i,pubsub_patterns=0i,redis_version="5.0.5",rejected_connections=0i,sentinel_masters=1i,sentinel_running_scripts=0i,sentinel_scripts_queue_length=0i,sentinel_simulate_failure_flags=0i,sentinel_tilt=0i,slave_expires_tracked_keys=0i,sync_full=0i,sync_partial_err=0i,sync_partial_ok=0i,total_commands_processed=459i,total_connections_received=6i,total_net_input_bytes=24517i,total_net_output_bytes=14864i,uptime_ns=303000000000i,used_cpu_sys=0.404,used_cpu_sys_children=0,used_cpu_user=0.436,used_cpu_user_children=0 1570207377000000000
+redis_sentinel,host=somehostname,port=26380,source=localhost active_defrag_hits=0i,active_defrag_key_hits=0i,active_defrag_key_misses=0i,active_defrag_misses=0i,blocked_clients=0i,client_recent_max_input_buffer=2i,client_recent_max_output_buffer=0i,clients=2i,evicted_keys=0i,expired_keys=0i,expired_stale_perc=0,expired_time_cap_reached_count=0i,instantaneous_input_kbps=0.01,instantaneous_ops_per_sec=0i,instantaneous_output_kbps=0,keyspace_hits=0i,keyspace_misses=0i,latest_fork_usec=0i,lru_clock=9926289i,migrate_cached_sockets=0i,pubsub_channels=0i,pubsub_patterns=0i,redis_version="5.0.5",rejected_connections=0i,sentinel_masters=1i,sentinel_running_scripts=0i,sentinel_scripts_queue_length=0i,sentinel_simulate_failure_flags=0i,sentinel_tilt=0i,slave_expires_tracked_keys=0i,sync_full=0i,sync_partial_err=0i,sync_partial_ok=0i,total_commands_processed=442i,total_connections_received=2i,total_net_input_bytes=23861i,total_net_output_bytes=4443i,uptime_ns=312000000000i,used_cpu_sys=0.46,used_cpu_sys_children=0,used_cpu_user=0.416,used_cpu_user_children=0 1570207377000000000
+```
diff --git a/content/telegraf/v1/input-plugins/rethinkdb/_index.md b/content/telegraf/v1/input-plugins/rethinkdb/_index.md
new file mode 100644
index 000000000..87a35a889
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/rethinkdb/_index.md
@@ -0,0 +1,82 @@
+---
+description: "Telegraf plugin for collecting metrics from RethinkDB"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: RethinkDB
+    identifier: input-rethinkdb
+tags: [RethinkDB, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# RethinkDB Input Plugin
+
+Collect metrics from [RethinkDB](https://www.rethinkdb.com/).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from one or many RethinkDB servers
+[[inputs.rethinkdb]]
+  ## An array of URIs to gather stats about. Specify an IP or hostname
+  ## with optional port and password, e.g.
+  ##   rethinkdb://user:auth_key@10.10.3.30:28105,
+  ##   rethinkdb://10.10.3.33:18832,
+  ##   10.0.0.1:10000, etc.
+  servers = ["127.0.0.1:28015"]
+
+  ## If you use RethinkDB 2.3.0 or later with username/password authorization,
+  ## the protocol must be named "rethinkdb2"; it uses the 1_0 handshake.
+  # servers = ["rethinkdb2://username:password@127.0.0.1:28015"]
+
+  ## If you use older versions of RethinkDB (<2.2) with an auth_key, the
+  ## protocol must be named "rethinkdb".
+  # servers = ["rethinkdb://username:auth_key@127.0.0.1:28015"]
+```
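+
+As a rough sketch of how the `servers` entries above resolve, a bare
+`host:port` entry can be normalized into a full URI by prepending a scheme.
+The helper below is illustrative only and not part of the plugin:
+
+```python
+# Hypothetical normalization of config entries; the scheme choice follows
+# the comments in the sample config above.
+def normalize_server(entry, default_scheme="rethinkdb"):
+    # Entries that already carry a scheme are used as-is.
+    if "://" in entry:
+        return entry
+    return f"{default_scheme}://{entry}"
+
+print(normalize_server("127.0.0.1:28015"))  # rethinkdb://127.0.0.1:28015
+print(normalize_server("rethinkdb2://user:pw@127.0.0.1:28015"))
+```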
+
+## Metrics
+
+- rethinkdb
+  - tags:
+    - type
+    - ns
+    - rethinkdb_host
+    - rethinkdb_hostname
+  - fields:
+    - cache_bytes_in_use (integer, bytes)
+    - disk_read_bytes_per_sec (integer, reads)
+    - disk_read_bytes_total (integer, bytes)
+    - disk_written_bytes_per_sec (integer, bytes)
+    - disk_written_bytes_total (integer, bytes)
+    - disk_usage_data_bytes (integer, bytes)
+    - disk_usage_garbage_bytes (integer, bytes)
+    - disk_usage_metadata_bytes (integer, bytes)
+    - disk_usage_preallocated_bytes (integer, bytes)
+
+- rethinkdb_engine
+  - tags:
+    - type
+    - ns
+    - rethinkdb_host
+    - rethinkdb_hostname
+  - fields:
+    - active_clients (integer, clients)
+    - clients (integer, clients)
+    - queries_per_sec (integer, queries)
+    - total_queries (integer, queries)
+    - read_docs_per_sec (integer, reads)
+    - total_reads (integer, reads)
+    - written_docs_per_sec (integer, writes)
+    - total_writes (integer, writes)
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/riak/_index.md b/content/telegraf/v1/input-plugins/riak/_index.md
new file mode 100644
index 000000000..43d4bc9e2
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/riak/_index.md
@@ -0,0 +1,99 @@
+---
+description: "Telegraf plugin for collecting metrics from Riak"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Riak
+    identifier: input-riak
+tags: [Riak, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Riak Input Plugin
+
+The Riak plugin gathers metrics from one or more Riak instances.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from one or many Riak servers
+[[inputs.riak]]
+  # Specify a list of one or more Riak HTTP servers
+  servers = ["http://localhost:8098"]
+```
+
+## Metrics
+
+Riak provides one measurement named "riak", with the following fields:
+
+- cpu_avg1
+- cpu_avg15
+- cpu_avg5
+- memory_code
+- memory_ets
+- memory_processes
+- memory_system
+- memory_total
+- node_get_fsm_objsize_100
+- node_get_fsm_objsize_95
+- node_get_fsm_objsize_99
+- node_get_fsm_objsize_mean
+- node_get_fsm_objsize_median
+- node_get_fsm_siblings_100
+- node_get_fsm_siblings_95
+- node_get_fsm_siblings_99
+- node_get_fsm_siblings_mean
+- node_get_fsm_siblings_median
+- node_get_fsm_time_100
+- node_get_fsm_time_95
+- node_get_fsm_time_99
+- node_get_fsm_time_mean
+- node_get_fsm_time_median
+- node_gets
+- node_gets_total
+- node_put_fsm_time_100
+- node_put_fsm_time_95
+- node_put_fsm_time_99
+- node_put_fsm_time_mean
+- node_put_fsm_time_median
+- node_puts
+- node_puts_total
+- pbc_active
+- pbc_connects
+- pbc_connects_total
+- vnode_gets
+- vnode_gets_total
+- vnode_index_reads
+- vnode_index_reads_total
+- vnode_index_writes
+- vnode_index_writes_total
+- vnode_puts
+- vnode_puts_total
+- read_repairs
+- read_repairs_total
+
+Measurements of time (such as node_get_fsm_time_mean) are measured in
+nanoseconds.
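+
+These fields are read from Riak's `/stats` HTTP endpoint, which returns a flat
+JSON object. A minimal sketch of the mapping, using a hypothetical subset of a
+stats response:
+
+```python
+# Hypothetical subset of a GET /stats response; keys match the field list above.
+stats = {
+    "nodename": "riak@127.0.0.1",
+    "cpu_avg1": 31,
+    "node_gets_total": 19,
+    "ring_members": ["riak@127.0.0.1"],  # non-numeric entries are not fields
+}
+
+TRACKED = {"cpu_avg1", "node_gets_total"}
+
+tags = {"nodename": stats["nodename"], "server": "localhost:8098"}
+fields = {k: v for k, v in stats.items() if k in TRACKED}
+print(tags, fields)
+```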
+
+### Tags
+
+All measurements have the following tags:
+
+- server (the host:port of the given server address, ex. `127.0.0.1:8087`)
+- nodename (the internal node name received, ex. `riak@127.0.0.1`)
+
+## Example Output
+
+```text
+riak,nodename=riak@127.0.0.1,server=localhost:8098 cpu_avg1=31i,cpu_avg15=69i,cpu_avg5=51i,memory_code=11563738i,memory_ets=5925872i,memory_processes=30236069i,memory_system=93074971i,memory_total=123311040i,node_get_fsm_objsize_100=0i,node_get_fsm_objsize_95=0i,node_get_fsm_objsize_99=0i,node_get_fsm_objsize_mean=0i,node_get_fsm_objsize_median=0i,node_get_fsm_siblings_100=0i,node_get_fsm_siblings_95=0i,node_get_fsm_siblings_99=0i,node_get_fsm_siblings_mean=0i,node_get_fsm_siblings_median=0i,node_get_fsm_time_100=0i,node_get_fsm_time_95=0i,node_get_fsm_time_99=0i,node_get_fsm_time_mean=0i,node_get_fsm_time_median=0i,node_gets=0i,node_gets_total=19i,node_put_fsm_time_100=0i,node_put_fsm_time_95=0i,node_put_fsm_time_99=0i,node_put_fsm_time_mean=0i,node_put_fsm_time_median=0i,node_puts=0i,node_puts_total=0i,pbc_active=0i,pbc_connects=0i,pbc_connects_total=20i,vnode_gets=0i,vnode_gets_total=57i,vnode_index_reads=0i,vnode_index_reads_total=0i,vnode_index_writes=0i,vnode_index_writes_total=0i,vnode_puts=0i,vnode_puts_total=0i,read_repair=0i,read_repairs_total=0i 1455913392622482332
+```
diff --git a/content/telegraf/v1/input-plugins/riemann_listener/_index.md b/content/telegraf/v1/input-plugins/riemann_listener/_index.md
new file mode 100644
index 000000000..09f548efe
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/riemann_listener/_index.md
@@ -0,0 +1,79 @@
+---
+description: "Telegraf plugin for collecting metrics from Riemann Listener"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Riemann Listener
+    identifier: input-riemann_listener
+tags: [Riemann Listener, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Riemann Listener Input Plugin
+
+The Riemann Listener is a simple input plugin that listens for messages from
+clients using the Riemann protobuf format.
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Riemann protobuff listener
+[[inputs.riemann_listener]]
+  ## URL to listen on
+  ## Default is "tcp://:5555"
+  #  service_address = "tcp://:8094"
+  #  service_address = "tcp://127.0.0.1:http"
+  #  service_address = "tcp4://:8094"
+  #  service_address = "tcp6://:8094"
+  #  service_address = "tcp6://[2001:db8::1]:8094"
+
+  ## Maximum number of concurrent connections.
+  ## 0 (default) is unlimited.
+  #  max_connections = 1024
+  ## Read timeout.
+  ## 0 (default) is unlimited.
+  #  read_timeout = "30s"
+  ## Optional TLS configuration.
+  #  tls_cert = "/etc/telegraf/cert.pem"
+  #  tls_key  = "/etc/telegraf/key.pem"
+  ## Enables client authentication if set.
+  #  tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]
+  ## Maximum socket buffer size (in bytes when no unit specified).
+  #  read_buffer_size = "64KiB"
+  ## Period between keep alive probes.
+  ## 0 disables keep alive probes.
+  ## Defaults to the OS configuration.
+  #  keep_alive_period = "5m"
+```
+
+Just like Riemann, the default port is 5555. This can be changed with the
+`service_address` option in the configuration above.
+
+The Riemann `Service` is mapped to the `measurement` name. `metric` and `TTL`
+are converted into field values. Since Riemann tags are simply an array, they
+are converted into `influx_line` format key-value pairs where each tag is used
+as both the key and the value.
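+
+The tag conversion can be sketched as follows (a toy illustration, not the
+plugin's actual Go implementation):
+
+```python
+def riemann_tags_to_influx(tags):
+    # Each Riemann tag becomes a key-value pair with identical key and value.
+    return {t: t for t in tags}
+
+print(riemann_tags_to_influx(["production", "webserver"]))
+# {'production': 'production', 'webserver': 'webserver'}
+```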
+
+## Metrics
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/s7comm/_index.md b/content/telegraf/v1/input-plugins/s7comm/_index.md
new file mode 100644
index 000000000..f2b27b899
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/s7comm/_index.md
@@ -0,0 +1,113 @@
+---
+description: "Telegraf plugin for collecting metrics from Siemens S7"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Siemens S7
+    identifier: input-s7comm
+tags: [Siemens S7, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Siemens S7 Input Plugin
+
+This plugin reads metrics from Siemens PLCs via the S7 protocol.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Startup error behavior options <!-- @/docs/includes/startup_error_behavior.md -->
+
+In addition to the plugin-specific and global configuration settings the plugin
+supports options for specifying the behavior when experiencing startup errors
+using the `startup_error_behavior` setting. Available values are:
+
+- `error`:  Telegraf will stop and exit in case of startup errors. This is the
+            default behavior.
+- `ignore`: Telegraf will ignore startup errors for this plugin and disable it,
+            but continues processing all other plugins.
+- `retry`:  Telegraf will try to start the plugin on every gather or write
+            cycle in case of startup errors. The plugin is disabled until
+            the startup succeeds.
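+
+For example, to keep Telegraf running and retry the PLC connection on every
+cycle instead of exiting on a startup failure (values below are illustrative):
+
+```toml
+[[inputs.s7comm]]
+  server = "127.0.0.1:102"
+  rack = 0
+  slot = 0
+  ## Retry the connection each cycle instead of exiting at startup
+  startup_error_behavior = "retry"
+```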
+
+## Configuration
+
+```toml @sample.conf
+# Plugin for retrieving data from Siemens PLCs via the S7 protocol (RFC1006)
+[[inputs.s7comm]]
+  ## Parameters to contact the PLC (mandatory)
+  ## The server is in the <host>[:port] format where the port defaults to 102
+  ## if not explicitly specified.
+  server = "127.0.0.1:102"
+  rack = 0
+  slot = 0
+
+  ## Connection or drive type of S7 protocol
+  ## Available options are "PD" (programming device), "OP" (operator panel) or "basic" (S7 basic communication).
+  # connection_type = "PD"
+
+  ## Max count of fields to be bundled in one batch-request. (PDU size)
+  # pdu_size = 20
+
+  ## Timeout for requests
+  # timeout = "10s"
+
+  ## Log detailed connection messages for tracing issues
+  # log_level = "trace"
+
+  ## Metric definition(s)
+  [[inputs.s7comm.metric]]
+    ## Name of the measurement
+    # name = "s7comm"
+
+    ## Field definitions
+    ## name    - field name
+    ## address - indirect address "<area>.<type><address>[.extra]"
+    ##           area    - e.g. "DB1" for data-block one
+    ##           type    - supported types are (uppercase)
+    ##                     X  -- bit, requires the bit-number as 'extra'
+    ##                           parameter
+    ##                     B  -- byte (8 bit)
+    ##                     C  -- character (8 bit)
+    ##                     W  -- word (16 bit)
+    ##                     DW -- double word (32 bit)
+    ##                     I  -- integer (16 bit)
+    ##                     DI -- double integer (32 bit)
+    ##                     R  -- IEEE 754 real floating point number (32 bit)
+    ##                     DT -- date-time, always converted to unix timestamp
+    ##                           with nano-second precision
+    ##                     S  -- string, requires the maximum length of the
+    ##                           string as 'extra' parameter
+    ##           address - start address to read if not specified otherwise
+    ##                     in the type field
+    ##           extra   - extra parameter e.g. for the bit and string type
+    fields = [
+      { name="rpm",             address="DB1.R4"    },
+      { name="status_ok",       address="DB1.X2.1"  },
+      { name="last_error",      address="DB2.S1.32" },
+      { name="last_error_time", address="DB2.DT2"   }
+    ]
+
+    ## Tags assigned to the metric
+    # [inputs.s7comm.metric.tags]
+    #   device = "compressor"
+    #   location = "main building"
+```
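+
+The indirect address syntax described in the comments above can be parsed as
+sketched below. This mirrors only the documented
+`<area>.<type><address>[.extra]` format; it is not the plugin's actual Go
+implementation:
+
+```python
+import re
+
+# Multi-letter types (DT, DW, DI) must be matched before single-letter ones.
+ADDR_RE = re.compile(
+    r"^(?P<area>[A-Z]+\d*)\.(?P<type>DT|DW|DI|[XBCWIRS])(?P<address>\d+)"
+    r"(?:\.(?P<extra>\d+))?$"
+)
+
+def parse_address(addr):
+    m = ADDR_RE.match(addr)
+    if m is None:
+        raise ValueError(f"invalid S7 address: {addr}")
+    return m.groupdict()
+
+print(parse_address("DB1.X2.1"))   # bit 1 of byte 2 in data-block one
+print(parse_address("DB2.S1.32"))  # string of max length 32 at address 1
+```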
+
+## Example Output
+
+```text
+s7comm,host=Hugin rpm=712i,status_ok=true,last_error="empty slot",last_error_time=1611319681000000000i 1611332164000000000
+```
+
+## Metrics
+
+The format of metrics produced by this plugin depends on the metric
+configuration(s).
diff --git a/content/telegraf/v1/input-plugins/salesforce/_index.md b/content/telegraf/v1/input-plugins/salesforce/_index.md
new file mode 100644
index 000000000..467b12e07
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/salesforce/_index.md
@@ -0,0 +1,80 @@
+---
+description: "Telegraf plugin for collecting metrics from Salesforce"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Salesforce
+    identifier: input-salesforce
+tags: [Salesforce, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Salesforce Input Plugin
+
+The Salesforce plugin gathers metrics about the limits in your Salesforce
+organization and the remaining usage. It fetches its data from the
+[limits endpoint](https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_limits.htm)
+of Salesforce's REST API.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read API usage and limits for a Salesforce organisation
+[[inputs.salesforce]]
+  ## specify your credentials
+  ##
+  username = "your_username"
+  password = "your_password"
+  ##
+  ## (optional) security token
+  # security_token = "your_security_token"
+  ##
+  ## (optional) environment type (sandbox or production)
+  ## default is: production
+  ##
+  # environment = "production"
+  ##
+  ## (optional) API version (default: "39.0")
+  ##
+  # version = "39.0"
+```
+
+## Metrics
+
+Salesforce provides one measurement named "salesforce".
+Each limit entry is converted to snake\_case and two fields are created:
+
+- \<key\>_max represents the limit threshold
+- \<key\>_remaining represents the usage remaining before hitting the limit threshold
+
+- salesforce
+  - \<key\>_max (int)
+  - \<key\>_remaining (int)
+  - (...)
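+
+As a sketch of that conversion, one entry of the REST limits response (e.g.
+`DailyApiRequests` with its `Max` and `Remaining` values) yields two
+snake_case fields:
+
+```python
+import re
+
+def limit_fields(name, limit):
+    # CamelCase -> snake_case, e.g. "DailyApiRequests" -> "daily_api_requests"
+    snake = re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()
+    return {
+        f"{snake}_max": limit["Max"],
+        f"{snake}_remaining": limit["Remaining"],
+    }
+
+print(limit_fields("DailyApiRequests", {"Max": 5000000, "Remaining": 4999998}))
+```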
+
+### Tags
+
+- All measurements have the following tags:
+  - host
+  - organization_id (the 18-character organization ID)
+
+## Example Output
+
+```sh
+$ ./telegraf --config telegraf.conf --input-filter salesforce --test
+```
+
+```text
+salesforce,organization_id=XXXXXXXXXXXXXXXXXX,host=xxxxx.salesforce.com daily_workflow_emails_max=546000i,hourly_time_based_workflow_max=50i,daily_async_apex_executions_remaining=250000i,daily_durable_streaming_api_events_remaining=1000000i,streaming_api_concurrent_clients_remaining=2000i,daily_bulk_api_requests_remaining=10000i,hourly_sync_report_runs_remaining=500i,daily_api_requests_max=5000000i,data_storage_mb_remaining=1073i,file_storage_mb_remaining=1069i,daily_generic_streaming_api_events_remaining=10000i,hourly_async_report_runs_remaining=1200i,hourly_time_based_workflow_remaining=50i,daily_streaming_api_events_remaining=1000000i,single_email_max=5000i,hourly_dashboard_refreshes_remaining=200i,streaming_api_concurrent_clients_max=2000i,daily_durable_generic_streaming_api_events_remaining=1000000i,daily_api_requests_remaining=4999998i,hourly_dashboard_results_max=5000i,hourly_async_report_runs_max=1200i,daily_durable_generic_streaming_api_events_max=1000000i,hourly_dashboard_results_remaining=5000i,concurrent_sync_report_runs_max=20i,durable_streaming_api_concurrent_clients_remaining=2000i,daily_workflow_emails_remaining=546000i,hourly_dashboard_refreshes_max=200i,daily_streaming_api_events_max=1000000i,hourly_sync_report_runs_max=500i,hourly_o_data_callout_max=10000i,mass_email_max=5000i,mass_email_remaining=5000i,single_email_remaining=5000i,hourly_dashboard_statuses_max=999999999i,concurrent_async_get_report_instances_max=200i,daily_durable_streaming_api_events_max=1000000i,daily_generic_streaming_api_events_max=10000i,hourly_o_data_callout_remaining=10000i,concurrent_sync_report_runs_remaining=20i,daily_bulk_api_requests_max=10000i,data_storage_mb_max=1073i,hourly_dashboard_statuses_remaining=999999999i,concurrent_async_get_report_instances_remaining=200i,daily_async_apex_executions_max=250000i,durable_streaming_api_concurrent_clients_max=2000i,file_storage_mb_max=1073i 1501565661000000000
+```
diff --git a/content/telegraf/v1/input-plugins/sensors/_index.md b/content/telegraf/v1/input-plugins/sensors/_index.md
new file mode 100644
index 000000000..c3eafc3d1
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/sensors/_index.md
@@ -0,0 +1,74 @@
+---
+description: "Telegraf plugin for collecting metrics from LM Sensors"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: LM Sensors
+    identifier: input-sensors
+tags: [LM Sensors, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# LM Sensors Input Plugin
+
+This plugin collects sensor metrics with the `sensors` executable from the
+[lm-sensors](https://en.wikipedia.org/wiki/Lm_sensors) package, which must be
+installed on the host.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Monitor sensors, requires lm-sensors package
+# This plugin ONLY supports Linux
+[[inputs.sensors]]
+  ## Remove numbers from field names.
+  ## If true, a field name like 'temp1_input' will be changed to 'temp_input'.
+  # remove_numbers = true
+
+  ## Timeout is the maximum amount of time that the sensors command can run.
+  # timeout = "5s"
+```
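+
+The `remove_numbers` behavior can be approximated as stripping all digits from
+a field name (a rough sketch of the documented renaming, not the plugin's
+exact code):
+
+```python
+import re
+
+def strip_numbers(field):
+    # "temp1_input" -> "temp_input", "power1_average" -> "power_average"
+    return re.sub(r"\d+", "", field)
+
+print(strip_numbers("temp1_crit_hyst"))  # temp_crit_hyst
+```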
+
+## Metrics
+
+Fields are created dynamically depending on the sensors. All fields are float.
+
+### Tags
+
+- All measurements have the following tags:
+  - chip
+  - feature
+
+## Example Output
+
+### Default
+
+```text
+sensors,chip=power_meter-acpi-0,feature=power1 power_average=0,power_average_interval=300 1466751326000000000
+sensors,chip=k10temp-pci-00c3,feature=temp1 temp_crit=70,temp_crit_hyst=65,temp_input=29,temp_max=70 1466751326000000000
+sensors,chip=k10temp-pci-00cb,feature=temp1 temp_input=29,temp_max=70 1466751326000000000
+sensors,chip=k10temp-pci-00d3,feature=temp1 temp_input=27.5,temp_max=70 1466751326000000000
+sensors,chip=k10temp-pci-00db,feature=temp1 temp_crit=70,temp_crit_hyst=65,temp_input=29.5,temp_max=70 1466751326000000000
+```
+
+### With remove_numbers=false
+
+```text
+sensors,chip=power_meter-acpi-0,feature=power1 power1_average=0,power1_average_interval=300 1466753424000000000
+sensors,chip=k10temp-pci-00c3,feature=temp1 temp1_crit=70,temp1_crit_hyst=65,temp1_input=29.125,temp1_max=70 1466753424000000000
+sensors,chip=k10temp-pci-00cb,feature=temp1 temp1_input=29,temp1_max=70 1466753424000000000
+sensors,chip=k10temp-pci-00d3,feature=temp1 temp1_input=29.5,temp1_max=70 1466753424000000000
+sensors,chip=k10temp-pci-00db,feature=temp1 temp1_crit=70,temp1_crit_hyst=65,temp1_input=30,temp1_max=70 1466753424000000000
+```
diff --git a/content/telegraf/v1/input-plugins/sflow/_index.md b/content/telegraf/v1/input-plugins/sflow/_index.md
new file mode 100644
index 000000000..efb1cc63d
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/sflow/_index.md
@@ -0,0 +1,152 @@
+---
+description: "Telegraf plugin for collecting metrics from SFlow"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: SFlow
+    identifier: input-sflow
+tags: [SFlow, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# SFlow Input Plugin
+
+The SFlow Input Plugin provides support for acting as an SFlow V5 collector in
+accordance with the specification from [sflow.org](https://sflow.org/).
+
+Currently only Flow Samples of Ethernet / IPv4 & IPv4 TCP & UDP headers are
+turned into metrics.  Counters and other header samples are ignored.
+
+## Series Cardinality Warning
+
+This plugin may produce a high number of series which, when not controlled
+for, will cause high load on your database. Use the following techniques to
+avoid cardinality issues:
+
+- Use [metric filtering](https://github.com/influxdata/telegraf/blob/master/docs/CONFIGURATION.md#metric-filtering) options to exclude unneeded measurements and tags.
+- Write to a database with an appropriate [retention policy](https://docs.influxdata.com/influxdb/latest/guides/downsampling_and_retention/).
+- Consider using the [Time Series Index](https://docs.influxdata.com/influxdb/latest/concepts/time-series-index/).
+- Monitor your database's [series cardinality](https://docs.influxdata.com/influxdb/latest/query_language/spec/#show-cardinality).
+- Consult the [InfluxDB documentation](https://docs.influxdata.com/influxdb/latest/) for the most up-to-date techniques.
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# SFlow V5 Protocol Listener
+[[inputs.sflow]]
+  ## Address to listen for sFlow packets.
+  ##   example: service_address = "udp://:6343"
+  ##            service_address = "udp4://:6343"
+  ##            service_address = "udp6://:6343"
+  service_address = "udp://:6343"
+
+  ## Set the size of the operating system's receive buffer.
+  ##   example: read_buffer_size = "64KiB"
+  # read_buffer_size = ""
+```
+
+## Metrics
+
+- sflow
+  - tags:
+    - agent_address (IP address of the agent that obtained the sflow sample and sent it to this collector)
+    - source_id_type (source_id_type field of flow_sample or flow_sample_expanded structures)
+    - source_id_index (source_id_index field of flow_sample or flow_sample_expanded structures)
+    - input_ifindex (value (input) field of flow_sample or flow_sample_expanded structures)
+    - output_ifindex (value (output) field of flow_sample or flow_sample_expanded structures)
+    - sample_direction (source_id_index, netif_index_in and netif_index_out)
+    - header_protocol (header_protocol field of sampled_header structures)
+    - ether_type (eth_type field of an ETHERNET-ISO88023 header)
+    - src_ip (source_ipaddr field of IPv4 or IPv6 structures)
+    - src_port (src_port field of TCP or UDP structures)
+    - src_port_name (src_port)
+    - src_mac (source_mac_addr field of an ETHERNET-ISO88023 header)
+    - src_vlan (src_vlan field of extended_switch structure)
+    - src_priority (src_priority field of extended_switch structure)
+    - src_mask_len (src_mask_len field of extended_router structure)
+    - dst_ip (destination_ipaddr field of IPv4 or IPv6 structures)
+    - dst_port (dst_port field of TCP or UDP structures)
+    - dst_port_name (dst_port)
+    - dst_mac (destination_mac_addr field of an ETHERNET-ISO88023 header)
+    - dst_vlan (dst_vlan field of extended_switch structure)
+    - dst_priority (dst_priority field of extended_switch structure)
+    - dst_mask_len (dst_mask_len field of extended_router structure)
+    - next_hop (next_hop field of extended_router structure)
+    - ip_version (ip_ver field of IPv4 or IPv6 structures)
+    - ip_protocol (ip_protocol field of IPv4 or IPv6 structures)
+    - ip_dscp (ip_dscp field of IPv4 or IPv6 structures)
+    - ip_ecn (ecn field of IPv4 or IPv6 structures)
+    - tcp_urgent_pointer (urgent_pointer field of TCP structure)
+  - fields:
+    - bytes (integer, the product of frame_length and packets)
+    - drops (integer, drops field of flow_sample or flow_sample_expanded structures)
+    - packets (integer, sampling_rate field of flow_sample or flow_sample_expanded structures)
+    - frame_length (integer, frame_length field of sampled_header structures)
+    - header_size (integer, header_size field of sampled_header structures)
+    - ip_fragment_offset (integer, ip_ver field of IPv4 structures)
+    - ip_header_length (integer, ip_ver field of IPv4 structures)
+    - ip_total_length (integer, ip_total_len field of IPv4 structures)
+    - ip_ttl (integer, ip_ttl field of IPv4 structures or ip_hop_limit field IPv6 structures)
+    - tcp_header_length (integer, size field of TCP structure. This value is specified in 32-bit words. It must be multiplied by 4 to produce a value in bytes.)
+    - tcp_window_size (integer, window_size field of TCP structure)
+    - udp_length (integer, length field of UDP structures)
+    - ip_flags (integer, ip_ver field of IPv4 structures)
+    - tcp_flags (integer, TCP flags of TCP IP header (IPv4 or IPv6))
+
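+The `bytes` field is not read directly from the packet: it is the sampled
+`frame_length` scaled by the `packets` value (itself taken from the
+`sampling_rate` field). As a quick sanity check, the values from the example
+output below reproduce the reported figure:
+
+```sh
+# bytes = frame_length * packets; with frame_length=157 and a
+# sampling rate of 10, the plugin reports bytes=1570.
+echo $((157 * 10))
+```
+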
+## Troubleshooting
+
+The [sflowtool](https://github.com/sflow/sflowtool) utility can be used to print sFlow packets and compare
+them against the metrics produced by Telegraf.
+
+```sh
+sflowtool -p 6343
+```
+
+When opening an issue, it is helpful to include a packet capture in addition
+to the output of sflowtool. Adjust the interface, host, and port as needed:
+
+```sh
+sudo tcpdump -s 0 -i eth0 -w telegraf-sflow.pcap host 127.0.0.1 and port 6343
+```
+
+## Example Output
+
+```text
+sflow,agent_address=0.0.0.0,dst_ip=10.0.0.2,dst_mac=ff:ff:ff:ff:ff:ff,dst_port=40042,ether_type=IPv4,header_protocol=ETHERNET-ISO88023,input_ifindex=6,ip_dscp=27,ip_ecn=0,output_ifindex=1073741823,source_id_index=3,source_id_type=0,src_ip=10.0.0.1,src_mac=ff:ff:ff:ff:ff:ff,src_port=443 bytes=1570i,drops=0i,frame_length=157i,header_length=128i,ip_flags=2i,ip_fragment_offset=0i,ip_total_length=139i,ip_ttl=42i,sampling_rate=10i,tcp_header_length=0i,tcp_urgent_pointer=0i,tcp_window_size=14i 1584473704793580447
+```
+
+## Reference Documentation
+
+This sFlow implementation was built from the reference document
+[sflow.org/sflow_version_5.txt](https://sflow.org/sflow_version_5.txt).
+
diff --git a/content/telegraf/v1/input-plugins/slab/_index.md b/content/telegraf/v1/input-plugins/slab/_index.md
new file mode 100644
index 000000000..deeb1ebe5
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/slab/_index.md
@@ -0,0 +1,81 @@
+---
+description: "Telegraf plugin for collecting metrics from Slab"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Slab
+    identifier: input-slab
+tags: [Slab, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Slab Input Plugin
+
+This plugin collects details on how much memory each entry in the Slab cache
+is consuming. For example, it collects the consumption of `kmalloc-1024` and
+`xfs_inode`. Since this information is obtained by parsing the `/proc/slabinfo`
+file, only Linux is supported. The format of `/proc/slabinfo` has not
+changed since [Linux v2.6.12 (April 2005)](https://github.com/torvalds/linux/blob/1da177e4/mm/slab.c#L2848-L2861), so it can be regarded as
+sufficiently stable. The memory usage is equivalent to the `CACHE_SIZE` column
+of the `slabtop` command. If the `HOST_PROC` environment variable is set,
+Telegraf will use its value instead of `/proc`.
+
+**Note: `/proc/slabinfo` is usually readable only by the root user. Make sure
+Telegraf can execute `sudo` without a password.**
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Get slab statistics from procfs
+# This plugin ONLY supports Linux
+[[inputs.slab]]
+  # no configuration - please see the plugin's README for steps to configure
+  # sudo properly
+```
+
+## Sudo configuration
+
+Since the slabinfo file is only readable by root, the plugin runs `sudo
+/bin/cat` to read the file.
+
+Sudo can be configured to allow Telegraf to run just the command needed to
+read the slabinfo file. For example, if Telegraf is running as the user
+'telegraf' and HOST_PROC is not used, add this to the sudoers file: `telegraf
+ALL = (root) NOPASSWD: /bin/cat /proc/slabinfo`
+
+## Metrics
+
+Metrics include generic ones such as `kmalloc_*` as well as those of kernel
+subsystems and drivers used by the system, such as `xfs_inode`.
+Each field with the `_size` suffix indicates memory consumption in bytes.
+
+- mem
+  - fields:
+    - kmalloc_8_size (integer)
+    - kmalloc_16_size (integer)
+    - kmalloc_32_size (integer)
+    - kmalloc_64_size (integer)
+    - kmalloc_96_size (integer)
+    - kmalloc_128_size (integer)
+    - kmalloc_256_size (integer)
+    - kmalloc_512_size (integer)
+    - xfs_ili_size (integer)
+    - xfs_inode_size (integer)
+
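+The size calculation can be sketched with a one-liner (an illustration, not
+the plugin's actual implementation): multiplying the `num_objs` and `objsize`
+columns of a `/proc/slabinfo` line reproduces `slabtop`'s `CACHE_SIZE`, with
+`-` in the slab name replaced by `_`:
+
+```sh
+# Hypothetical slabinfo line: name active_objs num_objs objsize ...
+echo "kmalloc-1024  234304 234304   1024   32    8" |
+  awk '{gsub("-", "_", $1); print $1 "_size=" $3 * $4}'
+```
+
+This prints `kmalloc_1024_size=239927296`, matching the example output below.
+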
+## Example Output
+
+```text
+slab kmalloc_1024_size=239927296i,kmalloc_512_size=5582848i 1651049129000000000
+```
diff --git a/content/telegraf/v1/input-plugins/slurm/_index.md b/content/telegraf/v1/input-plugins/slurm/_index.md
new file mode 100644
index 000000000..07781e863
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/slurm/_index.md
@@ -0,0 +1,209 @@
+---
+description: "Telegraf plugin for collecting metrics from SLURM"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: SLURM
+    identifier: input-slurm
+tags: [SLURM, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# SLURM Input Plugin
+
+This plugin gathers diag, jobs, nodes, partitions and reservations metrics by
+leveraging SLURM's REST API as provided by the `slurmrestd` daemon.
+
+This plugin targets the `openapi/v0.0.38` OpenAPI plugin as defined in SLURM's
+documentation. That particular plugin must be enabled when starting the
+`slurmrestd` daemon. For more information, see SLURM's REST API
+documentation [here](https://slurm.schedmd.com/rest.html).
+
+A great deal of additional information can be found in the repository of the
+Go module implementing the API client, [pcolladosoto/goslurm](https://github.com/pcolladosoto/goslurm).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Gather SLURM metrics
+[[inputs.slurm]]
+  ## Slurmrestd URL. Both http and https can be used as schemas.
+  url = "http://127.0.0.1:6820"
+
+  ## Credentials for JWT-based authentication.
+  # username = "foo"
+  # token = "topSecret"
+
+  ## Enabled endpoints
+  ## List of endpoints a user can acquire data from.
+  ## Available values are: diag, jobs, nodes, partitions, reservations.
+  # enabled_endpoints = ["diag", "jobs", "nodes", "partitions", "reservations"]
+
+  ## Maximum time to receive a response. If set to 0s, the
+  ## request will not time out.
+  # response_timeout = "5s"
+
+  ## Optional TLS configuration. Note these options will only
+  ## be taken into account when the scheme specified in
+  ## the URL parameter is https. They will be silently
+  ## ignored otherwise.
+  ## Set to true/false to enforce TLS being enabled/disabled. If not set,
+  ## enable TLS only if any of the other options are specified.
+  # tls_enable =
+  ## Trusted root certificates for server
+  # tls_ca = "/path/to/cafile"
+  ## Used for TLS client certificate authentication
+  # tls_cert = "/path/to/certfile"
+  ## Used for TLS client certificate authentication
+  # tls_key = "/path/to/keyfile"
+  ## Password for the key file if it is encrypted
+  # tls_key_pwd = ""
+  ## Send the specified TLS server name via SNI
+  # tls_server_name = "kubernetes.example.com"
+  ## Minimal TLS version to accept by the client
+  # tls_min_version = "TLS12"
+  ## List of ciphers to accept, by default all secure ciphers will be accepted
+  ## See https://pkg.go.dev/crypto/tls#pkg-constants for supported values.
+  ## Use "all", "secure" and "insecure" to add all support ciphers, secure
+  ## suites or insecure suites respectively.
+  # tls_cipher_suites = ["secure"]
+  ## Renegotiation method, "never", "once" or "freely"
+  # tls_renegotiation_method = "never"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
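+To verify that `slurmrestd` is reachable outside of Telegraf, the `diag`
+endpoint can be queried directly. `slurmrestd` expects the user name and JWT
+in the `X-SLURM-USER-NAME` and `X-SLURM-USER-TOKEN` headers; the URL and
+credentials below are the placeholders from the sample configuration:
+
+```sh
+curl -s \
+  -H "X-SLURM-USER-NAME: foo" \
+  -H "X-SLURM-USER-TOKEN: topSecret" \
+  http://127.0.0.1:6820/slurm/v0.0.38/diag
+```
+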
+## Metrics
+
+Given the wealth of metrics offered by SLURM's API, this plugin attempts to
+strike a balance between verbosity and usefulness in terms of the gathered
+information.
+
+- slurm_diag
+  - tags:
+    - source
+  - fields:
+    - server_thread_count
+    - jobs_canceled
+    - jobs_submitted
+    - jobs_started
+    - jobs_completed
+    - jobs_failed
+    - jobs_pending
+    - jobs_running
+    - schedule_cycle_last
+    - schedule_cycle_mean
+    - bf_queue_len
+    - bf_queue_len_mean
+    - bf_active
+- slurm_jobs
+  - tags:
+    - source
+    - name
+    - job_id
+  - fields:
+    - state
+    - state_reason
+    - partition
+    - nodes
+    - node_count
+    - priority
+    - nice
+    - group_id
+    - command
+    - standard_output
+    - standard_error
+    - standard_input
+    - current_working_directory
+    - submit_time
+    - start_time
+    - cpus
+    - tasks
+    - time_limit
+    - tres_cpu
+    - tres_mem
+    - tres_node
+    - tres_billing
+- slurm_nodes
+  - tags:
+    - source
+    - name
+  - fields:
+    - state
+    - cores
+    - cpus
+    - cpu_load
+    - alloc_cpu
+    - real_memory
+    - free_memory
+    - alloc_memory
+    - tres_cpu
+    - tres_mem
+    - tres_billing
+    - tres_used_cpu
+    - tres_used_mem
+    - weight
+    - slurmd_version
+    - architecture
+- slurm_partitions
+  - tags:
+    - source
+    - name
+  - fields:
+    - state
+    - total_cpu
+    - total_nodes
+    - nodes
+    - tres_cpu
+    - tres_mem
+    - tres_node
+    - tres_billing
+- slurm_reservations
+  - tags:
+    - source
+    - name
+  - fields:
+    - core_count
+    - core_spec_count
+    - groups
+    - users
+    - start_time
+    - partition
+    - accounts
+    - node_count
+    - node_list
+
+## Example Output
+
+```text
+slurm_diag,host=hoth,source=slurm_primary.example.net bf_active=false,bf_queue_len=1i,bf_queue_len_mean=1i,jobs_canceled=0i,jobs_completed=137i,jobs_failed=0i,jobs_pending=0i,jobs_running=100i,jobs_started=137i,jobs_submitted=137i,schedule_cycle_last=27i,schedule_cycle_mean=86i,server_thread_count=3i 1723466497000000000
+slurm_jobs,host=hoth,job_id=23160,name=gridjob,source=slurm_primary.example.net command="/tmp/SLURM_job_script.11BCgQ",cpus=2i,current_working_directory="/home/sessiondir/7CQODmQ3uw5nKG01gq4B3BRpm7wtQmABFKDmbnHPDmG9JKDmILUkln",group_id=2005i,nice=50i,node_count=1i,nodes="naboo225",partition="atlas",priority=4294878569i,standard_error="/home/sessiondir/7CQODmQ3uw5nKG01gq4B3BRpm7wtQmABFKDmbnHPDmG9JKDmILUkln.comment",standard_input="/dev/null",standard_output="/home/sessiondir/7CQODmQ3uw5nKG01gq4B3BRpm7wtQmABFKDmbnHPDmG9JKDmILUkln.comment",start_time=1723354525i,state="RUNNING",state_reason="None",submit_time=1723354525i,tasks=1i,time_limit=3600i,tres_billing=1,tres_cpu=1,tres_mem=2000,tres_node=1 1723466497000000000
+slurm_jobs,host=hoth,job_id=23365,name=gridjob,source=slurm_primary.example.net command="/tmp/SLURM_job_script.yRcFYL",cpus=2i,current_working_directory="/home/sessiondir/LgwNDmTLAx5nKG01gq4B3BRpm7wtQmABFKDmbnHPDm2BKKDm8bFZsm",group_id=2005i,nice=50i,node_count=1i,nodes="naboo224",partition="atlas",priority=4294878364i,standard_error="/home/sessiondir/LgwNDmTLAx5nKG01gq4B3BRpm7wtQmABFKDmbnHPDm2BKKDm8bFZsm.comment",standard_input="/dev/null",standard_output="/home/sessiondir/LgwNDmTLAx5nKG01gq4B3BRpm7wtQmABFKDmbnHPDm2BKKDm8bFZsm.comment",start_time=1723376763i,state="RUNNING",state_reason="None",submit_time=1723376761i,tasks=1i,time_limit=3600i,tres_billing=1,tres_cpu=1,tres_mem=1000,tres_node=1 1723466497000000000
+slurm_jobs,host=hoth,job_id=23366,name=gridjob,source=slurm_primary.example.net command="/tmp/SLURM_job_script.5Y9Ngb",cpus=2i,current_working_directory="/home/sessiondir/HFYKDmULAx5nKG01gq4B3BRpm7wtQmABFKDmbnHPDm3BKKDmiyK3em",group_id=2005i,nice=50i,node_count=1i,nodes="naboo225",partition="atlas",priority=4294878363i,standard_error="/home/sessiondir/HFYKDmULAx5nKG01gq4B3BRpm7wtQmABFKDmbnHPDm3BKKDmiyK3em.comment",standard_input="/dev/null",standard_output="/home/sessiondir/HFYKDmULAx5nKG01gq4B3BRpm7wtQmABFKDmbnHPDm3BKKDmiyK3em.comment",start_time=1723376883i,state="RUNNING",state_reason="None",submit_time=1723376882i,tasks=1i,time_limit=3600i,tres_billing=1,tres_cpu=1,tres_mem=1000,tres_node=1 1723466497000000000
+slurm_jobs,host=hoth,job_id=23367,name=gridjob,source=slurm_primary.example.net command="/tmp/SLURM_job_script.NmOqMU",cpus=2i,current_working_directory="/home/sessiondir/nnLLDmULAx5nKG01gq4B3BRpm7wtQmABFKDmbnHPDm4BKKDmfhjFPn",group_id=2005i,nice=50i,node_count=1i,nodes="naboo225",partition="atlas",priority=4294878362i,standard_error="/home/sessiondir/nnLLDmULAx5nKG01gq4B3BRpm7wtQmABFKDmbnHPDm4BKKDmfhjFPn.comment",standard_input="/dev/null",standard_output="/home/sessiondir/nnLLDmULAx5nKG01gq4B3BRpm7wtQmABFKDmbnHPDm4BKKDmfhjFPn.comment",start_time=1723376883i,state="RUNNING",state_reason="None",submit_time=1723376882i,tasks=1i,time_limit=3600i,tres_billing=1,tres_cpu=1,tres_mem=1000,tres_node=1 1723466497000000000
+slurm_jobs,host=hoth,job_id=23385,name=gridjob,source=slurm_primary.example.net command="/tmp/SLURM_job_script.NNsI08",cpus=2i,current_working_directory="/home/sessiondir/PWvNDmH7tw5nKG01gq4B3BRpm7wtQmABFKDmbnHPDmz7JKDmqgKyRo",group_id=2005i,nice=50i,node_count=1i,nodes="naboo225",partition="atlas",priority=4294878344i,standard_error="/home/sessiondir/PWvNDmH7tw5nKG01gq4B3BRpm7wtQmABFKDmbnHPDmz7JKDmqgKyRo.comment",standard_input="/dev/null",standard_output="/home/sessiondir/PWvNDmH7tw5nKG01gq4B3BRpm7wtQmABFKDmbnHPDmz7JKDmqgKyRo.comment",start_time=1723378725i,state="RUNNING",state_reason="None",submit_time=1723378725i,tasks=1i,time_limit=3600i,tres_billing=1,tres_cpu=1,tres_mem=1000,tres_node=1 1723466497000000000
+slurm_jobs,host=hoth,job_id=23386,name=gridjob,source=slurm_primary.example.net command="/tmp/SLURM_job_script.bcmS4h",cpus=2i,current_working_directory="/home/sessiondir/ZNHMDmI7tw5nKG01gq4B3BRpm7wtQmABFKDmbnHPDm27JKDm3Ve66n",group_id=2005i,nice=50i,node_count=1i,nodes="naboo224",partition="atlas",priority=4294878343i,standard_error="/home/sessiondir/ZNHMDmI7tw5nKG01gq4B3BRpm7wtQmABFKDmbnHPDm27JKDm3Ve66n.comment",standard_input="/dev/null",standard_output="/home/sessiondir/ZNHMDmI7tw5nKG01gq4B3BRpm7wtQmABFKDmbnHPDm27JKDm3Ve66n.comment",start_time=1723379206i,state="RUNNING",state_reason="None",submit_time=1723379205i,tasks=1i,time_limit=3600i,tres_billing=1,tres_cpu=1,tres_mem=1000,tres_node=1 1723466497000000000
+slurm_jobs,host=hoth,job_id=23387,name=gridjob,source=slurm_primary.example.net command="/tmp/SLURM_job_script.OgpoQZ",cpus=2i,current_working_directory="/home/sessiondir/qohNDmUqBx5nKG01gq4B3BRpm7wtQmABFKDmbnHPDmMCKKDmzM4Yhn",group_id=2005i,nice=50i,node_count=1i,nodes="naboo222",partition="atlas",priority=4294878342i,standard_error="/home/sessiondir/qohNDmUqBx5nKG01gq4B3BRpm7wtQmABFKDmbnHPDmMCKKDmzM4Yhn.comment",standard_input="/dev/null",standard_output="/home/sessiondir/qohNDmUqBx5nKG01gq4B3BRpm7wtQmABFKDmbnHPDmMCKKDmzM4Yhn.comment",start_time=1723379246i,state="RUNNING",state_reason="None",submit_time=1723379245i,tasks=1i,time_limit=3600i,tres_billing=1,tres_cpu=1,tres_mem=1000,tres_node=1 1723466497000000000
+slurm_jobs,host=hoth,job_id=23388,name=gridjob,source=slurm_primary.example.net command="/tmp/SLURM_job_script.xYbxSe",cpus=2i,current_working_directory="/home/sessiondir/u9HODmXqBx5nKG01gq4B3BRpm7wtQmABFKDmbnHPDmWCKKDmRlccYn",group_id=2005i,nice=50i,node_count=1i,nodes="naboo224",partition="atlas",priority=4294878341i,standard_error="/home/sessiondir/u9HODmXqBx5nKG01gq4B3BRpm7wtQmABFKDmbnHPDmWCKKDmRlccYn.comment",standard_input="/dev/null",standard_output="/home/sessiondir/u9HODmXqBx5nKG01gq4B3BRpm7wtQmABFKDmbnHPDmWCKKDmRlccYn.comment",start_time=1723379326i,state="RUNNING",state_reason="None",submit_time=1723379326i,tasks=1i,time_limit=3600i,tres_billing=1,tres_cpu=1,tres_mem=1000,tres_node=1 1723466497000000000
+slurm_jobs,host=hoth,job_id=23389,name=gridjob,source=slurm_primary.example.net command="/tmp/SLURM_job_script.QHtIIm",cpus=2i,current_working_directory="/home/sessiondir/ZLvKDmYqBx5nKG01gq4B3BRpm7wtQmABFKDmbnHPDmXCKKDmjp19km",group_id=2005i,nice=50i,node_count=1i,nodes="naboo227",partition="atlas",priority=4294878340i,standard_error="/home/sessiondir/ZLvKDmYqBx5nKG01gq4B3BRpm7wtQmABFKDmbnHPDmXCKKDmjp19km.comment",standard_input="/dev/null",standard_output="/home/sessiondir/ZLvKDmYqBx5nKG01gq4B3BRpm7wtQmABFKDmbnHPDmXCKKDmjp19km.comment",start_time=1723379326i,state="RUNNING",state_reason="None",submit_time=1723379326i,tasks=1i,time_limit=3600i,tres_billing=1,tres_cpu=1,tres_mem=1000,tres_node=1 1723466497000000000
+slurm_jobs,host=hoth,job_id=23393,name=gridjob,source=slurm_primary.example.net command="/tmp/SLURM_job_script.IH19bN",cpus=2i,current_working_directory="/home/sessiondir/YdPODmVqBx5nKG01gq4B3BRpm7wtQmABFKDmbnHPDmSCKKDmrYDOwm",group_id=2005i,nice=50i,node_count=1i,nodes="naboo224",partition="atlas",priority=4294878336i,standard_error="/home/sessiondir/YdPODmVqBx5nKG01gq4B3BRpm7wtQmABFKDmbnHPDmSCKKDmrYDOwm.comment",standard_input="/dev/null",standard_output="/home/sessiondir/YdPODmVqBx5nKG01gq4B3BRpm7wtQmABFKDmbnHPDmSCKKDmrYDOwm.comment",start_time=1723379767i,state="RUNNING",state_reason="None",submit_time=1723379766i,tasks=1i,time_limit=3600i,tres_billing=1,tres_cpu=1,tres_mem=1000,tres_node=1 1723466497000000000
+slurm_nodes,host=hoth,name=naboo145,source=slurm_primary.example.net alloc_cpu=0i,alloc_memory=0i,architecture="x86_64",cores=18i,cpu_load=0i,cpus=36i,free_memory=86450i,real_memory=94791i,slurmd_version="22.05.9",state="idle",tres_billing=36,tres_cpu=36,tres_mem=94791,weight=1i 1723466497000000000
+slurm_nodes,host=hoth,name=naboo146,source=slurm_primary.example.net alloc_cpu=0i,alloc_memory=0i,architecture="x86_64",cores=18i,cpu_load=0i,cpus=36i,free_memory=92148i,real_memory=94791i,slurmd_version="22.05.9",state="idle",tres_billing=36,tres_cpu=36,tres_mem=94791,weight=1i 1723466497000000000
+slurm_nodes,host=hoth,name=naboo147,source=slurm_primary.example.net alloc_cpu=36i,alloc_memory=45000i,architecture="x86_64",cores=18i,cpu_load=3826i,cpus=36i,free_memory=1607i,real_memory=94793i,slurmd_version="22.05.9",state="allocated",tres_billing=36,tres_cpu=36,tres_mem=94793,tres_used_cpu=36,tres_used_mem=45000,weight=1i 1723466497000000000
+slurm_nodes,host=hoth,name=naboo216,source=slurm_primary.example.net alloc_cpu=8i,alloc_memory=8000i,architecture="x86_64",cores=4i,cpu_load=891i,cpus=8i,free_memory=17972i,real_memory=31877i,slurmd_version="22.05.9",state="allocated",tres_billing=8,tres_cpu=8,tres_mem=31877,tres_used_cpu=8,tres_used_mem=8000,weight=1i 1723466497000000000
+slurm_nodes,host=hoth,name=naboo219,source=slurm_primary.example.net alloc_cpu=16i,alloc_memory=16000i,architecture="x86_64",cores=4i,cpu_load=1382i,cpus=16i,free_memory=15645i,real_memory=31875i,slurmd_version="22.05.9",state="allocated",tres_billing=16,tres_cpu=16,tres_mem=31875,tres_used_cpu=16,tres_used_mem=16000,weight=1i 1723466497000000000
+slurm_partitions,host=hoth,name=atlas,source=slurm_primary.example.net nodes="naboo145,naboo146,naboo147,naboo216,naboo219,naboo222,naboo224,naboo225,naboo227,naboo228,naboo229,naboo234,naboo235,naboo236,naboo237,naboo238,naboo239,naboo240,naboo241,naboo242,naboo243",state="UP",total_cpu=632i,total_nodes=21i,tres_billing=632,tres_cpu=632,tres_mem=1415207,tres_node=21 1723466497000000000
+```
diff --git a/content/telegraf/v1/input-plugins/smart/_index.md b/content/telegraf/v1/input-plugins/smart/_index.md
new file mode 100644
index 000000000..f0f74c7ec
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/smart/_index.md
@@ -0,0 +1,322 @@
+---
+description: "Telegraf plugin for collecting metrics from S.M.A.R.T."
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: S.M.A.R.T.
+    identifier: input-smart
+tags: [S.M.A.R.T., "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# S.M.A.R.T. Input Plugin
+
+Get metrics using the command line utility `smartctl` for
+S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) storage
+devices. SMART is a monitoring system included in computer hard disk drives
+(HDDs) and solid-state drives (SSDs) that detects and reports on various
+indicators of drive reliability, with the intent of enabling the anticipation of
+hardware failures.  See smartmontools (<https://www.smartmontools.org/>).
+
+SMART information is separated between different measurements: `smart_device` is
+used for general information, while `smart_attribute` stores the detailed
+attribute information if `attributes = true` is enabled in the plugin
+configuration.
+
+If no devices are specified, the plugin will scan for SMART devices via the
+following command:
+
+```sh
+smartctl --scan
+```
+
+Metrics will be reported from the following `smartctl` command:
+
+```sh
+smartctl --info --attributes --health -n <nocheck> --format=brief <device>
+```
+
+This plugin supports _smartmontools_ version 5.41 and above, but v. 5.41 and
+v. 5.42 might require setting `nocheck`, see the comment in the sample
+configuration.  Also, NVMe capabilities were introduced in version 6.5.
+
+To enable SMART on a storage device run:
+
+```sh
+smartctl -s on <device>
+```
+
+## NVMe vendor specific attributes
+
+For NVMe disks, the plugin can use the command line utility `nvme-cli`, which
+provides easy access to vendor-specific attributes. This plugin supports
+nvme-cli version 1.5 and above (<https://github.com/linux-nvme/nvme-cli>). If
+`nvme-cli` is absent, NVMe vendor-specific metrics will not be obtained.
+
+Vendor specific SMART metrics for NVMe disks may be reported from the following
+`nvme` command:
+
+```sh
+nvme <vendor> smart-log-add <device>
+```
+
+Note that vendor plugins for `nvme-cli` may use different naming
+conventions and report formats.
+
+To see the installed plugin extensions, which depend on the nvme-cli version,
+look at the bottom of the output of:
+
+```sh
+nvme help
+```
+
+To gather the disk vendor ID (vid), `id-ctrl` can be used:
+
+```sh
+nvme id-ctrl <device>
+```
+
+The association between a vid and a company can be found here:
+<https://pcisig.com/membership/member-companies>.
+
+Whether a device is NVMe or non-NVMe is determined using:
+
+```sh
+smartctl --scan
+```
+
+and:
+
+```sh
+smartctl --scan -d nvme
+```
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from storage devices supporting S.M.A.R.T.
+[[inputs.smart]]
+    ## Optionally specify the path to the smartctl executable
+    # path_smartctl = "/usr/bin/smartctl"
+
+    ## Optionally specify the path to the nvme-cli executable
+    # path_nvme = "/usr/bin/nvme"
+
+    ## Optionally specify if vendor specific attributes should be propagated for NVMe disk case
+    ## ["auto-on"] - automatically find and enable additional vendor specific disk info
+    ## ["vendor1", "vendor2", ...] - e.g. "Intel" enable additional Intel specific disk info
+    # enable_extensions = ["auto-on"]
+
+    ## On most platforms, the cli utilities used require root access.
+    ## Setting 'use_sudo' to true will make use of sudo to run smartctl or nvme-cli.
+    ## Sudo must be configured to allow the telegraf user to run smartctl or nvme-cli
+    ## without a password.
+    # use_sudo = false
+
+    ## Adds an extra tag "device_type", which can be used to differentiate
+    ## multiple disks behind the same controller (e.g., MegaRAID).
+    # tag_with_device_type = false
+
+    ## Skip checking disks in this power mode. Defaults to
+    ## "standby" to not wake up disks that have stopped rotating.
+    ## See --nocheck in the man pages for smartctl.
+    ## smartctl version 5.41 and 5.42 have faulty detection of
+    ## power mode and might require changing this value to
+    ## "never" depending on your disks.
+    # nocheck = "standby"
+
+    ## Gather all returned S.M.A.R.T. attribute metrics and the detailed
+    ## information from each drive into the 'smart_attribute' measurement.
+    # attributes = false
+
+    ## Optionally specify devices to exclude from reporting if disks auto-discovery is performed.
+    # excludes = [ "/dev/pass6" ]
+
+    ## Optionally specify devices and device type, if unset
+    ## a scan (smartctl --scan and smartctl --scan -d nvme) for S.M.A.R.T. devices will be done
+    ## and all found will be included except for the excluded in excludes.
+    # devices = [ "/dev/ada0 -d atacam", "/dev/nvme0"]
+
+    ## Timeout for the cli command to complete.
+    # timeout = "30s"
+
+    ## Optionally call smartctl and nvme-cli with a specific concurrency policy.
+    ## By default, smartctl and nvme-cli are called in separate threads (goroutines) to gather disk attributes.
+    ## Some devices (e.g. disks in RAID arrays) may have access limitations that require sequential reading of
+    ## SMART data - one individual array drive at a time. In such a case, set this configuration option
+    ## to "sequential" to get readings for all drives.
+    ## valid options: concurrent, sequential
+    # read_method = "concurrent"
+```
+
+## Permissions
+
+It's important to note that this plugin references smartctl and nvme-cli, which
+may require additional permissions to execute successfully.  Depending on the
+user/group permissions of the telegraf user executing this plugin, you may need
+to use sudo.
+
+You will need the following in your telegraf config:
+
+```toml
+[[inputs.smart]]
+  use_sudo = true
+```
+
+You will also need to update your sudoers file:
+
+```bash
+$ visudo
+# For smartctl add the following lines:
+Cmnd_Alias SMARTCTL = /usr/bin/smartctl
+telegraf  ALL=(ALL) NOPASSWD: SMARTCTL
+Defaults!SMARTCTL !logfile, !syslog, !pam_session
+
+# For nvme-cli add the following lines:
+Cmnd_Alias NVME = /path/to/nvme
+telegraf  ALL=(ALL) NOPASSWD: NVME
+Defaults!NVME !logfile, !syslog, !pam_session
+```
+
+To run smartctl or nvme with `sudo`, a wrapper script can be created. Set
+`path_smartctl` or `path_nvme` in the configuration to the path of this
+script.
+
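+A minimal wrapper could look like this (the install path and the path to
+smartctl are only examples; point `path_smartctl` at wherever you save the
+script):
+
+```sh
+#!/bin/sh
+# e.g. /usr/local/bin/telegraf_smartctl
+exec sudo /usr/bin/smartctl "$@"
+```
+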
+## Metrics
+
+- smart_device:
+  - tags:
+    - capacity
+    - device
+    - device_type (only emitted if `tag_with_device_type` is set to `true`)
+    - enabled
+    - model
+    - serial_no
+    - wwn
+  - fields:
+    - exit_status
+    - health_ok
+    - media_wearout_indicator
+    - percent_lifetime_remain
+    - read_error_rate
+    - seek_error
+    - temp_c
+    - udma_crc_errors
+    - wear_leveling_count
+
+- smart_attribute:
+  - tags:
+    - capacity
+    - device
+    - device_type (only emitted if `tag_with_device_type` is set to `true`)
+    - enabled
+    - fail
+    - flags
+    - id
+    - model
+    - name
+    - serial_no
+    - wwn
+  - fields:
+    - exit_status
+    - raw_value
+    - threshold
+    - value
+    - worst
+
+### Flags
+
+The interpretation of the tag `flags` is:
+
+- `K` auto-keep
+- `C` event count
+- `R` error rate
+- `S` speed/performance
+- `O` updated online
+- `P` prefailure warning
+
+### Exit Status
+
+The `exit_status` field captures the exit status of the cli utility used,
+which is defined as a bitmask. For the interpretation of the bitmask, see
+the man page for smartctl or nvme-cli.
+
+## Device Names
+
+Device names, e.g., `/dev/sda`, are _not persistent_, and may be
+subject to change across reboots or system changes. Instead, you can use the
+_World Wide Name_ (WWN) or serial number to identify devices. On Linux, block
+devices can be referenced by the WWN in the following location:
+`/dev/disk/by-id/`.
+
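+For example, to list the persistent identifiers available on a system:
+
+```sh
+ls -l /dev/disk/by-id/
+```
+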
+## Troubleshooting
+
+If you expect to see more SMART metrics than this plugin shows, be sure to use
+a version of smartctl or nvme-cli recent enough to gather the desired data.
+Also, check your device's capabilities, because not all SMART metrics are
+mandatory. For example, the number of temperature sensors depends on the
+device specification.
+
+If this plugin is not working as expected for your SMART enabled device,
+please run these commands and include the output in a bug report:
+
+For non NVMe devices (from smartctl version >= 7.0 this will also return NVMe
+devices by default):
+
+```sh
+smartctl --scan
+```
+
+For NVMe devices:
+
+```sh
+smartctl --scan -d nvme
+```
+
+Run the following command, replacing NOCHECK with your configuration setting
+and DEVICE with a device name (which can be taken from the previous command):
+
+```sh
+smartctl --info --health --attributes --tolerance=verypermissive --nocheck NOCHECK --format=brief -d DEVICE
+```
+
+If you are trying to gather vendor-specific metrics, also provide the output
+of this command, replacing VENDOR and DEVICE to match your case:
+
+```sh
+nvme VENDOR smart-log-add DEVICE
+```
+
+If you have specified a devices array in the configuration file and Telegraf
+only shows data from one device, change the plugin configuration to gather
+disk attributes sequentially instead of in separate threads (goroutines). To
+do this, set `read_method` to `"sequential"` in the plugin configuration:
+
+```toml
+    ## Optionally call smartctl and nvme-cli with a specific concurrency policy.
+    ## By default, smartctl and nvme-cli are called in separate threads (goroutines) to gather disk attributes.
+    ## Some devices (e.g. disks in RAID arrays) may have access limitations that require sequential reading of
+    ## SMART data - one individual array drive at the time. In such case please set this configuration option
+    ## to "sequential" to get readings for all drives.
+    ## valid options: concurrent, sequential
+    read_method = "sequential"
+```
+
+## Example Output
+
+```text
+smart_device,enabled=Enabled,host=mbpro.local,device=rdisk0,model=APPLE\ SSD\ SM0512F,serial_no=S1K5NYCD964433,wwn=5002538655584d30,capacity=500277790720 udma_crc_errors=0i,exit_status=0i,health_ok=true,read_error_rate=0i,temp_c=40i 1502536854000000000
+smart_attribute,capacity=500277790720,device=rdisk0,enabled=Enabled,fail=-,flags=-O-RC-,host=mbpro.local,id=199,model=APPLE\ SSD\ SM0512F,name=UDMA_CRC_Error_Count,serial_no=S1K5NYCD964433,wwn=5002538655584d30 exit_status=0i,raw_value=0i,threshold=0i,value=200i,worst=200i 1502536854000000000
+smart_attribute,capacity=500277790720,device=rdisk0,enabled=Enabled,fail=-,flags=-O---K,host=mbpro.local,id=199,model=APPLE\ SSD\ SM0512F,name=Unknown_SSD_Attribute,serial_no=S1K5NYCD964433,wwn=5002538655584d30 exit_status=0i,raw_value=0i,threshold=0i,value=100i,worst=100i 1502536854000000000
+```
diff --git a/content/telegraf/v1/input-plugins/smartctl/_index.md b/content/telegraf/v1/input-plugins/smartctl/_index.md
new file mode 100644
index 000000000..fb5b17b30
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/smartctl/_index.md
@@ -0,0 +1,122 @@
+---
+description: "Telegraf plugin for collecting metrics from smartctl JSON"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: smartctl JSON
+    identifier: input-smartctl
+tags: [smartctl JSON, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# smartctl JSON Input Plugin
+
+Get metrics using the command line utility `smartctl` for S.M.A.R.T.
+(Self-Monitoring, Analysis and Reporting Technology) storage devices. SMART is
+a monitoring system included in computer hard disk drives (HDDs), solid-state
+drives (SSDs), and NVMe drives that detects and reports on various indicators
+of drive reliability, with the intent of enabling the anticipation of hardware
+failures.
+
+This version of the plugin requires support of the JSON flag from the `smartctl`
+command. This flag was added in 7.0 (2019) and further enhanced in subsequent
+releases.
+
+See smartmontools (<https://www.smartmontools.org/>) for more information.
+
+## smart vs smartctl
+
+The smartctl plugin is an alternative to the smart plugin. The biggest
+difference is that the smart plugin can also call `nvme-cli` to collect
+additional details about NVMe devices as well as some vendor-specific device
+information.
+
+This plugin also requires a version of the `smartctl` command that supports
+JSON output, whereas the smart plugin parses the raw text output.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from SMART storage devices using smartctl's JSON output
+[[inputs.smartctl]]
+    ## Optionally specify the path to the smartctl executable
+    # path = "/usr/sbin/smartctl"
+
+    ## Use sudo
+    ## On most platforms, smartctl requires root access. Setting 'use_sudo'
+    ## to true will make use of sudo to run smartctl. Sudo must be configured to
+    ## allow the telegraf user to run smartctl without a password.
+    # use_sudo = false
+
+    ## Devices to include or exclude
+    ## By default, the plugin will use all devices found in the output of
+    ## `smartctl --scan-open`. Only one option is allowed at a time. If set, include
+    ## sets the specific devices to scan, while exclude omits specific devices.
+    # devices_include = []
+    # devices_exclude = []
+
+    ## Skip checking disks in specified power mode
+    ## Defaults to "standby" to not wake up disks that have stopped rotating.
+    ## For full details on the options here, see the --nocheck section in the
+    ## smartctl man page. Choose from:
+    ##   * never: always check the device
+    ##   * sleep: check the device unless it is in sleep mode
+    ##   * standby: check the device unless it is in sleep or standby mode
+    ##   * idle: check the device unless it is in sleep, standby, or idle mode
+    # nocheck = "standby"
+
+    ## Timeout for the cli command to complete
+    # timeout = "30s"
+```
+
+## Permissions
+
+This plugin executes `smartctl`, which may require additional permissions to
+run successfully. Depending on the user/group permissions of the telegraf user
+executing this plugin, you may need to use sudo.
+
+Users need the following in the Telegraf config:
+
+```toml
+[[inputs.smartctl]]
+  use_sudo = true
+```
+
+And to update the `/etc/sudoers` file to allow running smartctl:
+
+```bash
+$ visudo
+# Add the following lines:
+Cmnd_Alias SMARTCTL = /usr/sbin/smartctl
+telegraf  ALL=(ALL) NOPASSWD: SMARTCTL
+Defaults!SMARTCTL !logfile, !syslog, !pam_session
+```
+
+## Debugging Issues
+
+This plugin uses the following commands to determine devices and collect
+metrics:
+
+* `smartctl --json --scan-open`
+* `smartctl --json --all $DEVICE --device $TYPE --nocheck=$NOCHECK`
+
+Please include the output of the above two commands for all devices that are
+having issues.
+
+## Metrics
+
+## Example Output
+
+```text
+```
diff --git a/content/telegraf/v1/input-plugins/snmp/_index.md b/content/telegraf/v1/input-plugins/snmp/_index.md
new file mode 100644
index 000000000..91233b5c7
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/snmp/_index.md
@@ -0,0 +1,431 @@
+---
+description: "Telegraf plugin for collecting metrics from SNMP"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: SNMP
+    identifier: input-snmp
+tags: [SNMP, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# SNMP Input Plugin
+
+The `snmp` input plugin uses polling to gather metrics from SNMP agents.
+Support for gathering individual OIDs as well as complete SNMP tables is
+included.
+
+## Note about Paths
+
+`path` is a global variable; separate snmp instances will append the specified
+path onto the global path variable.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `auth_password` and
+`priv_password` options.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
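+
+For example, a minimal SNMPv3 sketch (assuming a secret store registered with
+the hypothetical ID `mystore` holding the keys `snmp_auth` and `snmp_priv`):
+
+```toml
+[[inputs.snmp]]
+  agents = ["udp://127.0.0.1:161"]
+  version = 3
+  sec_name = "myuser"
+  sec_level = "authPriv"
+  auth_protocol = "SHA"
+  ## Reference secrets with the @{store_id:secret_key} syntax
+  auth_password = "@{mystore:snmp_auth}"
+  priv_protocol = "AES"
+  priv_password = "@{mystore:snmp_priv}"
+```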
+
+
+## SNMP backend: gosmi and netsnmp
+
+Telegraf has two backends to translate SNMP objects. By default, Telegraf uses
+`netsnmp`; however, this backend is deprecated and users are encouraged to
+migrate to `gosmi`. If you find issues with `gosmi` that do not occur with
+`netsnmp`, please open a project issue on GitHub.
+
+The SNMP backend setting is a global-level setting that applies to all use of
+SNMP in Telegraf. Users can set this option in the `[agent]` configuration via
+the `snmp_translator` option. See the [agent configuration](/telegraf/v1/configuration/#agent) for more
+details.
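+
+For example, to switch to the `gosmi` translator:
+
+```toml
+[agent]
+  snmp_translator = "gosmi"
+```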
+
+
+## Configuration
+
+```toml @sample.conf
+# Retrieves SNMP values from remote agents
+[[inputs.snmp]]
+  ## Agent addresses to retrieve values from.
+  ##   format:  agents = ["<scheme://><hostname>:<port>"]
+  ##   scheme:  optional, either udp, udp4, udp6, tcp, tcp4, tcp6.
+  ##            default is udp
+  ##   port:    optional
+  ##   example: agents = ["udp://127.0.0.1:161"]
+  ##            agents = ["tcp://127.0.0.1:161"]
+  ##            agents = ["udp4://v4only-snmp-agent"]
+  agents = ["udp://127.0.0.1:161"]
+
+  ## Timeout for each request.
+  # timeout = "5s"
+
+  ## SNMP version; can be 1, 2, or 3.
+  # version = 2
+
+  ## Unconnected UDP socket
+  ## When true, SNMP responses are accepted from any address not just
+  ## the requested address. This can be useful when gathering from
+  ## redundant/failover systems.
+  # unconnected_udp_socket = false
+
+  ## Path to mib files
+  ## Used by the gosmi translator.
+  ## To add paths when translating with netsnmp, use the MIBDIRS environment variable
+  # path = ["/usr/share/snmp/mibs"]
+
+  ## SNMP community string.
+  # community = "public"
+
+  ## Agent host tag; should be set to "source" for consistent usage across plugins
+  ##   example: agent_host_tag = "source"
+  ## The default value is inconsistent with other plugins. Users will get a
+  ## warning that can be ignored if this is not changed. However, to have a
+  ## consistent experience, set this to "source" in your config to align with
+  ## other plugins.
+  # agent_host_tag = "agent_host"
+
+  ## Number of retries to attempt.
+  # retries = 3
+
+  ## The GETBULK max-repetitions parameter.
+  # max_repetitions = 10
+
+  ## SNMPv3 authentication and encryption options.
+  ##
+  ## Security Name.
+  # sec_name = "myuser"
+  ## Authentication protocol; one of "MD5", "SHA", "SHA224", "SHA256", "SHA384", "SHA512" or "".
+  # auth_protocol = "MD5"
+  ## Authentication password.
+  # auth_password = "pass"
+  ## Security Level; one of "noAuthNoPriv", "authNoPriv", or "authPriv".
+  # sec_level = "authNoPriv"
+  ## Context Name.
+  # context_name = ""
+  ## Privacy protocol used for encrypted messages; one of "DES", "AES", "AES192", "AES192C", "AES256", "AES256C", or "".
+  ### Protocols "AES192", "AES192C", "AES256", and "AES256C" require the underlying net-snmp tools
+  ### to be compiled with --enable-blumenthal-aes (http://www.net-snmp.org/docs/INSTALL.html)
+  # priv_protocol = ""
+  ## Privacy password used for encrypted messages.
+  # priv_password = ""
+
+  ## Add fields and tables defining the variables you wish to collect.  This
+  ## example collects the system uptime and interface variables.  Reference the
+  ## full plugin documentation for configuration details.
+  [[inputs.snmp.field]]
+    oid = "RFC1213-MIB::sysUpTime.0"
+    name = "sysUptime"
+    conversion = "float(2)"
+
+  [[inputs.snmp.field]]
+    oid = "RFC1213-MIB::sysName.0"
+    name = "sysName"
+    is_tag = true
+
+  [[inputs.snmp.table]]
+    oid = "IF-MIB::ifTable"
+    name = "interface"
+    inherit_tags = ["sysName"]
+
+    [[inputs.snmp.table.field]]
+      oid = "IF-MIB::ifDescr"
+      name = "ifDescr"
+      is_tag = true
+```
+
+### Configure SNMP Requests
+
+This plugin provides two methods for configuring the SNMP requests: `fields`
+and `tables`.  Use the `field` option to gather single ad-hoc variables.
+To collect SNMP tables, use the `table` option.
+
+#### Field
+
+Use a `field` to collect a variable by OID.  Requests specified with this
+option operate similarly to the `snmpget` utility.
+
+```toml
+[[inputs.snmp]]
+  # ... snip ...
+
+  [[inputs.snmp.field]]
+    ## Object identifier of the variable as a numeric or textual OID.
+    oid = "RFC1213-MIB::sysName.0"
+
+    ## Name of the field or tag to create.  If not specified, it defaults to
+    ## the value of 'oid'. If 'oid' is numeric, an attempt to translate the
+    ## numeric OID into a textual OID will be made.
+    # name = ""
+
+    ## If true the variable will be added as a tag, otherwise a field will be
+    ## created.
+    # is_tag = false
+
+    ## Apply one of the following conversions to the variable value:
+    ##   float(X):    Convert the input value into a float and divides by the
+    ##                Xth power of 10. Effectively just moves the decimal left
+    ##                X places. For example a value of `123` with `float(2)`
+    ##                will result in `1.23`.
+    ##   float:       Convert the value into a float with no adjustment. Same
+    ##                as `float(0)`.
+    ##   int:         Convert the value into an integer.
+    ##   hwaddr:      Convert the value to a MAC address.
+    ##   ipaddr:      Convert the value to an IP address.
+    ##   hex:         Convert bytes to a hex string.
+    ##   hextoint:X:Y Convert bytes to integer, where X is the endian and Y the
+    ##                bit size. For example: hextoint:LittleEndian:uint64 or
+    ##                hextoint:BigEndian:uint32. Valid options for the endian
+    ##                are: BigEndian and LittleEndian. For the bit size: 
+    ##                uint16, uint32 and uint64.
+    ##   enum(1):     Convert the value according to its syntax in the MIB (full).
+    ##                (Only supported with gosmi translator)
+    ##   enum:        Convert the value according to its syntax in the MIB.
+    ##                (Only supported with gosmi translator)
+    ##
+    # conversion = ""
+```
+
+#### Table
+
+Use a `table` to configure the collection of an SNMP table.  SNMP requests
+formed with this option operate similarly to the `snmptable` command.
+
+Control the handling of specific table columns using a nested `field`.  These
+nested fields are specified similarly to a top-level `field`.
+
+By default, all columns of the SNMP table will be collected; it is not
+necessary to add a nested field for each column, only for those you wish to
+modify. To *only* collect certain columns, omit the `oid` from the `table`
+section and only include `oid` settings in `field` sections. For more complex
+include/exclude cases for columns, use [metric filtering](/telegraf/v1/configuration/#metric-filtering).
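+
+For example, to collect only selected columns from `IF-MIB::ifTable` (the
+column OIDs below are standard IF-MIB objects; adjust them to your device),
+omit the table-level `oid` and list only the desired columns:
+
+```toml
+[[inputs.snmp.table]]
+  ## No table-level 'oid': only the columns listed below are collected.
+  name = "interface"
+
+  [[inputs.snmp.table.field]]
+    oid = "IF-MIB::ifDescr"
+    is_tag = true
+
+  [[inputs.snmp.table.field]]
+    oid = "IF-MIB::ifInOctets"
+
+  [[inputs.snmp.table.field]]
+    oid = "IF-MIB::ifOutOctets"
+```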
+
+One metric is created for each row of the SNMP table.
+
+```toml
+[[inputs.snmp]]
+  # ... snip ...
+
+  [[inputs.snmp.table]]
+    ## Object identifier of the SNMP table as a numeric or textual OID.
+    oid = "IF-MIB::ifTable"
+
+    ## Name of the field or tag to create.  If not specified, it defaults to
+    ## the value of 'oid'.  If 'oid' is numeric an attempt to translate the
+    ## numeric OID into a textual OID will be made.
+    # name = ""
+
+    ## Which tags to inherit from the top-level config and to use in the output
+    ## of this table's measurement.
+    ## example: inherit_tags = ["source"]
+    # inherit_tags = []
+
+    ## Add an 'index' tag with the table row number.  Use this if the table has
+    ## no indexes or if you are excluding them.  This option is normally not
+    ## required as any index columns are automatically added as tags.
+    # index_as_tag = false
+
+    [[inputs.snmp.table.field]]
+      ## OID to get. May be a numeric or textual module-qualified OID.
+      oid = "IF-MIB::ifDescr"
+
+      ## Name of the field or tag to create.  If not specified, it defaults to
+      ## the value of 'oid'. If 'oid' is numeric an attempt to translate the
+      ## numeric OID into a textual OID will be made.
+      # name = ""
+
+      ## Output this field as a tag.
+      # is_tag = false
+
+      ## The OID sub-identifier to strip off so that the index can be matched
+      ## against other fields in the table.
+      # oid_index_suffix = ""
+
+      ## Specifies the length of the index after the supplied table OID (in OID
+      ## path segments). Truncates the index after this point to remove non-fixed
+      ## value or length index suffixes.
+      # oid_index_length = 0
+
+      ## Specifies if the value of the given field should be translated
+      ## (snmptranslate). By default, no field values are translated.
+      # translate = true
+
+      ## Secondary index table allows merging data from two tables with a
+      ## different index; this field will be used to join them. There can
+      ## be only one secondary index table.
+      # secondary_index_table = false
+
+      ## This field uses the secondary index, and will later be merged with
+      ## the primary index using SecondaryIndexTable. SecondaryIndexTable and
+      ## SecondaryIndexUse are mutually exclusive.
+      # secondary_index_use = false
+
+      ## Controls whether entries from the secondary table should be added
+      ## when the joining index is missing. If set to true, the join is an
+      ## outer join, and the index is prepended with "Secondary." for missing
+      ## values to avoid overlapping indexes from both tables. Can be set per
+      ## field or globally with SecondaryIndexTable; a global true overrides a
+      ## per-field false.
+      # secondary_outer_join = false
+```
+
+#### Two Table Join
+
+The SNMP plugin can join two SNMP tables that have different indexes. For this
+to work, one table must have a translation field that returns the index of the
+second table as its value. Examples of such fields are:
+
+* Cisco portTable with the translation field `CISCO-STACK-MIB::portIfIndex`,
+whose value is the ifIndex from ifTable
+* Adva entityFacilityTable with the translation field `ADVA-FSPR7-MIB::entityFacilityOneIndex`,
+whose value is the ifIndex from ifTable
+* Cisco cpeExtPsePortTable with the translation field `CISCO-POWER-ETHERNET-EXT-MIB::cpeExtPsePortEntPhyIndex`,
+whose value is the index from entPhysicalTable
+
+Such a field can be used to translate the index to the secondary table by
+setting `secondary_index_table = true`, and all fields from the secondary
+table (indexed by the translation field) should have the option
+`secondary_index_use = true`. Telegraf cannot duplicate entries during a join,
+so the translation must be 1-to-1 (not 1-to-many). To add fields from the
+secondary table whose index is not present in the translation table (an outer
+join), there is a second option for the translation field:
+`secondary_outer_join = true`.
+
+##### Example configuration for table joins
+
+CISCO-POWER-ETHERNET-EXT-MIB table before join:
+
+```toml
+[[inputs.snmp.table]]
+name = "ciscoPower"
+index_as_tag = true
+
+[[inputs.snmp.table.field]]
+name = "PortPwrConsumption"
+oid = "CISCO-POWER-ETHERNET-EXT-MIB::cpeExtPsePortPwrConsumption"
+
+[[inputs.snmp.table.field]]
+name = "EntPhyIndex"
+oid = "CISCO-POWER-ETHERNET-EXT-MIB::cpeExtPsePortEntPhyIndex"
+```
+
+Partial result (removed agent and host tags from all following outputs
+in this section):
+
+```text
+> ciscoPower,index=1.2 EntPhyIndex=1002i,PortPwrConsumption=6643i 1621460628000000000
+> ciscoPower,index=1.6 EntPhyIndex=1006i,PortPwrConsumption=10287i 1621460628000000000
+> ciscoPower,index=1.5 EntPhyIndex=1005i,PortPwrConsumption=8358i 1621460628000000000
+```
+
+Note that the EntPhyIndex column carries the index from the ENTITY-MIB table,
+which is configured as follows:
+
+```toml
+[[inputs.snmp.table]]
+name = "entityTable"
+index_as_tag = true
+
+[[inputs.snmp.table.field]]
+name = "EntPhysicalName"
+oid = "ENTITY-MIB::entPhysicalName"
+```
+
+Partial result:
+
+```text
+> entityTable,index=1006 EntPhysicalName="GigabitEthernet1/6" 1621460809000000000
+> entityTable,index=1002 EntPhysicalName="GigabitEthernet1/2" 1621460809000000000
+> entityTable,index=1005 EntPhysicalName="GigabitEthernet1/5" 1621460809000000000
+```
+
+Now, let's join these results into one table. EntPhyIndex matches the index of
+the second table, and EntPhysicalName is converted into a tag, so the second
+table only contributes tags to the result. Configuration:
+
+```toml
+[[inputs.snmp.table]]
+name = "ciscoPowerEntity"
+index_as_tag = true
+
+[[inputs.snmp.table.field]]
+name = "PortPwrConsumption"
+oid = "CISCO-POWER-ETHERNET-EXT-MIB::cpeExtPsePortPwrConsumption"
+
+[[inputs.snmp.table.field]]
+name = "EntPhyIndex"
+oid = "CISCO-POWER-ETHERNET-EXT-MIB::cpeExtPsePortEntPhyIndex"
+secondary_index_table = true    # enables joining
+
+[[inputs.snmp.table.field]]
+name = "EntPhysicalName"
+oid = "ENTITY-MIB::entPhysicalName"
+secondary_index_use = true      # this tag is indexed from secondary table
+is_tag = true
+```
+
+Result:
+
+```text
+> ciscoPowerEntity,EntPhysicalName=GigabitEthernet1/2,index=1.2 EntPhyIndex=1002i,PortPwrConsumption=6643i 1621461148000000000
+> ciscoPowerEntity,EntPhysicalName=GigabitEthernet1/6,index=1.6 EntPhyIndex=1006i,PortPwrConsumption=10287i 1621461148000000000
+> ciscoPowerEntity,EntPhysicalName=GigabitEthernet1/5,index=1.5 EntPhyIndex=1005i,PortPwrConsumption=8358i 1621461148000000000
+```
+
+## Troubleshooting
+
+Check that a numeric field can be translated to a textual field:
+
+```sh
+$ snmptranslate .1.3.6.1.2.1.1.3.0
+DISMAN-EVENT-MIB::sysUpTimeInstance
+```
+
+Request a top-level field:
+
+```sh
+snmpget -v2c -c public 127.0.0.1 sysUpTime.0
+```
+
+Request a table:
+
+```sh
+snmptable -v2c -c public 127.0.0.1 ifTable
+```
+
+To collect a packet capture, run this command in the background while running
+Telegraf or one of the above commands.  Adjust the interface, host and port as
+needed:
+
+```sh
+sudo tcpdump -s 0 -i eth0 -w telegraf-snmp.pcap host 127.0.0.1 and port 161
+```
+
+## Metrics
+
+The fields and tags will depend on the tables and fields configured.
+
+* snmp
+  * tags:
+    * agent_host (deprecated in 1.29: use `source` instead)
+
+## Example Output
+
+```text
+snmp,agent_host=127.0.0.1,sysName=example.org uptime=113319.74 1575509815000000000
+interface,agent_host=127.0.0.1,ifDescr=wlan0,ifIndex=3,sysName=example.org ifAdminStatus=1i,ifInDiscards=0i,ifInErrors=0i,ifInNUcastPkts=0i,ifInOctets=3436617431i,ifInUcastPkts=2717778i,ifInUnknownProtos=0i,ifLastChange=0i,ifMtu=1500i,ifOperStatus=1i,ifOutDiscards=0i,ifOutErrors=0i,ifOutNUcastPkts=0i,ifOutOctets=581368041i,ifOutQLen=0i,ifOutUcastPkts=1354338i,ifPhysAddress="c8:5b:76:c9:e6:8c",ifSpecific=".0.0",ifSpeed=0i,ifType=6i 1575509815000000000
+interface,agent_host=127.0.0.1,ifDescr=eth0,ifIndex=2,sysName=example.org ifAdminStatus=1i,ifInDiscards=0i,ifInErrors=0i,ifInNUcastPkts=21i,ifInOctets=3852386380i,ifInUcastPkts=3634004i,ifInUnknownProtos=0i,ifLastChange=9088763i,ifMtu=1500i,ifOperStatus=1i,ifOutDiscards=0i,ifOutErrors=0i,ifOutNUcastPkts=0i,ifOutOctets=434865441i,ifOutQLen=0i,ifOutUcastPkts=2110394i,ifPhysAddress="c8:5b:76:c9:e6:8c",ifSpecific=".0.0",ifSpeed=1000000000i,ifType=6i 1575509815000000000
+interface,agent_host=127.0.0.1,ifDescr=lo,ifIndex=1,sysName=example.org ifAdminStatus=1i,ifInDiscards=0i,ifInErrors=0i,ifInNUcastPkts=0i,ifInOctets=51555569i,ifInUcastPkts=339097i,ifInUnknownProtos=0i,ifLastChange=0i,ifMtu=65536i,ifOperStatus=1i,ifOutDiscards=0i,ifOutErrors=0i,ifOutNUcastPkts=0i,ifOutOctets=51555569i,ifOutQLen=0i,ifOutUcastPkts=339097i,ifSpecific=".0.0",ifSpeed=10000000i,ifType=24i 1575509815000000000
+```
+
diff --git a/content/telegraf/v1/input-plugins/snmp_trap/_index.md b/content/telegraf/v1/input-plugins/snmp_trap/_index.md
new file mode 100644
index 000000000..93cb8e2d9
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/snmp_trap/_index.md
@@ -0,0 +1,150 @@
+---
+description: "Telegraf plugin for collecting metrics from SNMP Trap"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: SNMP Trap
+    identifier: input-snmp_trap
+tags: [SNMP Trap, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# SNMP Trap Input Plugin
+
+The SNMP Trap plugin is a service input plugin that receives SNMP
+notifications (traps and inform requests).
+
+Notifications are received on plain UDP. The port to listen on is
+configurable.
+
+## Note about Paths
+
+`path` is a global variable; separate snmp instances will append the specified
+path onto the global path variable.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `sec_name`,
+`auth_password` and `priv_password` options.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+
+## SNMP backend: gosmi and netsnmp
+
+Telegraf has two backends to translate SNMP objects. By default, Telegraf uses
+`netsnmp`; however, this backend is deprecated and users are encouraged to
+migrate to `gosmi`. If you find issues with `gosmi` that do not occur with
+`netsnmp`, please open a project issue on GitHub.
+
+The SNMP backend setting is a global-level setting that applies to all use of
+SNMP in Telegraf. Users can set this option in the `[agent]` configuration via
+the `snmp_translator` option. See the [agent configuration](/telegraf/v1/configuration/#agent) for more
+details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Receive SNMP traps
+[[inputs.snmp_trap]]
+  ## Transport, local address, and port to listen on.  Transport must
+  ## be "udp://".  Omit local address to listen on all interfaces.
+  ##   example: "udp://127.0.0.1:1234"
+  ##
+  ## Special permissions may be required to listen on a port less than
+  ## 1024.  See README.md for details
+  ##
+  # service_address = "udp://:162"
+  ##
+  ## Path to mib files
+  ## Used by the gosmi translator.
+  ## To add paths when translating with netsnmp, use the MIBDIRS environment variable
+  # path = ["/usr/share/snmp/mibs"]
+  ##
+  ## Deprecated in 1.20.0; no longer running snmptranslate
+  ## Timeout running snmptranslate command
+  # timeout = "5s"
+  ## Snmp version; one of "1", "2c" or "3".
+  # version = "2c"
+  ## SNMPv3 authentication and encryption options.
+  ##
+  ## Security Name.
+  # sec_name = "myuser"
+  ## Authentication protocol; one of "MD5", "SHA", "SHA224", "SHA256", "SHA384", "SHA512" or "".
+  # auth_protocol = "MD5"
+  ## Authentication password.
+  # auth_password = "pass"
+  ## Security Level; one of "noAuthNoPriv", "authNoPriv", or "authPriv".
+  # sec_level = "authNoPriv"
+  ## Privacy protocol used for encrypted messages; one of "DES", "AES", "AES192", "AES192C", "AES256", "AES256C" or "".
+  # priv_protocol = ""
+  ## Privacy password used for encrypted messages.
+  # priv_password = ""
+```
+
+### Using a Privileged Port
+
+On many operating systems, listening on a privileged port (a port
+number less than 1024) requires extra permission.  Since the default
+SNMP trap port 162 is in this category, using telegraf to receive SNMP
+traps may need extra permission.
+
+Instructions for listening on a privileged port vary by operating
+system. It is not recommended to run telegraf as superuser in order to
+use a privileged port. Instead follow the principle of least privilege
+and use a more specific operating system mechanism to allow telegraf to
+use the port.  You may also be able to have telegraf use an
+unprivileged port and then configure a firewall port forward rule from
+the privileged port.
+
+To use a privileged port on Linux, you can use setcap to enable the
+CAP_NET_BIND_SERVICE capability on the telegraf binary:
+
+```shell
+setcap cap_net_bind_service=+ep /usr/bin/telegraf
+```
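+
+Alternatively, when Telegraf runs as a systemd service, the capability can be
+granted through a unit drop-in instead of modifying the binary (a sketch,
+assuming a systemd-based system; note that setcap changes are lost when the
+binary is replaced on upgrade):
+
+```text
+# /etc/systemd/system/telegraf.service.d/caps.conf
+[Service]
+AmbientCapabilities=CAP_NET_BIND_SERVICE
+```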
+
+On macOS, listening on privileged ports is unrestricted on versions
+10.14 and later.
+
+## Metrics
+
+- snmp_trap
+  - tags:
+    - source (string, IP address of trap source)
+    - name (string, value from SNMPv2-MIB::snmpTrapOID.0 PDU)
+    - mib (string, MIB from SNMPv2-MIB::snmpTrapOID.0 PDU)
+    - oid (string, OID string from SNMPv2-MIB::snmpTrapOID.0 PDU)
+    - version (string, "1" or "2c" or "3")
+    - context_name (string, value from v3 trap)
+    - engine_id (string, value from v3 trap)
+    - community (string, value from 1 or 2c trap)
+  - fields:
+    - Fields are mapped from variables in the trap. Field names are
+      the trap variable names after MIB lookup. Field values are trap
+      variable values.
+
+## Example Output
+
+```text
+snmp_trap,mib=SNMPv2-MIB,name=coldStart,oid=.1.3.6.1.6.3.1.1.5.1,source=192.168.122.102,version=2c,community=public snmpTrapEnterprise.0="linux",sysUpTimeInstance=1i 1574109187723429814
+snmp_trap,mib=NET-SNMP-AGENT-MIB,name=nsNotifyShutdown,oid=.1.3.6.1.4.1.8072.4.0.2,source=192.168.122.102,version=2c,community=public sysUpTimeInstance=5803i,snmpTrapEnterprise.0="netSnmpNotificationPrefix" 1574109186555115459
+```
+
+## References
+
+- [net-snmp project home](http://www.net-snmp.org)
+- [`snmpcmd` man-page](http://net-snmp.sourceforge.net/docs/man/snmpcmd.html)
diff --git a/content/telegraf/v1/input-plugins/socket_listener/_index.md b/content/telegraf/v1/input-plugins/socket_listener/_index.md
new file mode 100644
index 000000000..69b1b0007
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/socket_listener/_index.md
@@ -0,0 +1,219 @@
+---
+description: "Telegraf plugin for collecting metrics from Socket Listener"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Socket Listener
+    identifier: input-socket_listener
+tags: [Socket Listener, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Socket Listener Input Plugin
+
+The Socket Listener is a service input plugin that listens for messages from
+streaming (tcp, unix) or datagram (udp, unixgram) protocols.
+
+The plugin expects messages in the Telegraf Input Data
+Formats.
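+
+For example, a minimal sketch that listens on TCP port 8094 and parses
+incoming messages as InfluxDB line protocol (the `influx` parser; see the
+input data formats documentation for other parsers):
+
+```toml
+[[inputs.socket_listener]]
+  service_address = "tcp://:8094"
+
+  ## Data format of the incoming messages
+  data_format = "influx"
+```
+
+A message such as `cpu,host=a usage=0.5` sent to that port is then parsed into
+a metric.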
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Generic socket listener capable of handling multiple socket types.
+[[inputs.socket_listener]]
+  ## URL to listen on
+  # service_address = "tcp://:8094"
+  # service_address = "tcp://127.0.0.1:http"
+  # service_address = "tcp4://:8094"
+  # service_address = "tcp6://:8094"
+  # service_address = "tcp6://[2001:db8::1]:8094"
+  # service_address = "udp://:8094"
+  # service_address = "udp4://:8094"
+  # service_address = "udp6://:8094"
+  # service_address = "unix:///tmp/telegraf.sock"
+  # service_address = "unixgram:///tmp/telegraf.sock"
+  # service_address = "vsock://cid:port"
+
+  ## Permission for unix sockets (only available on unix sockets)
+  ## This setting may not be respected by some platforms. To safely restrict
+  ## permissions it is recommended to place the socket into a previously
+  ## created directory with the desired permissions.
+  ##   ex: socket_mode = "777"
+  # socket_mode = ""
+
+  ## Maximum number of concurrent connections (only available on stream sockets like TCP)
+  ## Zero means unlimited.
+  # max_connections = 0
+
+  ## Read timeout (only available on stream sockets like TCP)
+  ## Zero means unlimited.
+  # read_timeout = "0s"
+
+  ## Optional TLS configuration (only available on stream sockets like TCP)
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key  = "/etc/telegraf/key.pem"
+  ## Enables client authentication if set.
+  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]
+
+  ## Maximum socket buffer size (in bytes when no unit specified)
+  ## For stream sockets, once the buffer fills up, the sender will start
+  ## backing up. For datagram sockets, once the buffer fills up, metrics will
+  ## start dropping. Defaults to the OS default.
+  # read_buffer_size = "64KiB"
+
+  ## Period between keep alive probes (only applies to TCP sockets)
+  ## Zero disables keep alive probes. Defaults to the OS configuration.
+  # keep_alive_period = "5m"
+
+  ## Content encoding for message payloads
+  ## Can be set to "gzip" for compressed payloads or "identity" for no encoding.
+  # content_encoding = "identity"
+
+  ## Maximum size of decoded packet (in bytes when no unit specified)
+  # max_decompression_size = "500MB"
+
+  ## Message splitting strategy and corresponding settings for stream sockets
+  ## (tcp, tcp4, tcp6, unix or unixpacket). The setting is ignored for packet
+  ## listeners such as udp.
+  ## Available strategies are:
+  ##   newline         -- split at newlines (default)
+  ##   null            -- split at null bytes
+  ##   delimiter       -- split at delimiter byte-sequence in hex-format
+  ##                      given in `splitting_delimiter`
+  ##   fixed length    -- split after number of bytes given in `splitting_length`
+  ##   variable length -- split depending on length information received in the
+  ##                      data. The length field information is specified in
+  ##                      `splitting_length_field`.
+  # splitting_strategy = "newline"
+
+  ## Delimiter used to split received data to messages consumed by the parser.
+  ## The delimiter is a hex byte-sequence marking the end of a message
+  ## e.g. "0x0D0A", "x0d0a" or "0d0a" marks a Windows line-break (CR LF).
+  ## The value is case-insensitive and can be specified with "0x" or "x" prefix
+  ## or without.
+  ## Note: This setting is only used for splitting_strategy = "delimiter".
+  # splitting_delimiter = ""
+
+  ## Fixed length of a message in bytes.
+  ## Note: This setting is only used for splitting_strategy = "fixed length".
+  # splitting_length = 0
+
+  ## Specification of the length field contained in the data to split messages
+  ## with variable length. The specification contains the following fields:
+  ##  offset        -- start of length field in bytes from begin of data
+  ##  bytes         -- length of length field in bytes
+  ##  endianness    -- endianness of the value, either "be" for big endian or
+  ##                   "le" for little endian
+  ##  header_length -- total length of header to be skipped when passing
+  ##                   data on to the parser. If zero (default), the header
+  ##                   is passed on to the parser together with the message.
+  ## Note: This setting is only used for splitting_strategy = "variable length".
+  # splitting_length_field = {offset = 0, bytes = 0, endianness = "be", header_length = 0}
+
+  ## Data format to consume.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  # data_format = "influx"
+```
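+
+As a hedged illustration (the address and delimiter below are assumptions, not
+defaults you must use), a minimal listener that accepts influx line-protocol
+metrics over TCP and splits messages at a CR LF delimiter could look like this:
+
+```toml
+# Hypothetical minimal socket_listener configuration.
+[[inputs.socket_listener]]
+  service_address = "tcp://:8094"
+  ## Split incoming messages at the CR LF byte sequence instead of newlines.
+  splitting_strategy = "delimiter"
+  splitting_delimiter = "0d0a"
+  data_format = "influx"
+```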
+
+## A Note on UDP OS Buffer Sizes
+
+The `read_buffer_size` config option can be used to adjust the size of the
+socket buffer, but this number is limited by OS settings. On Linux,
+`read_buffer_size` will default to `rmem_default` and will be capped by
+`rmem_max`. On BSD systems, `read_buffer_size` is capped by `maxsockbuf`, and
+there is no OS default setting.
+
+Instructions on how to adjust these OS settings are available below.
+
+Some operating systems (most notably Linux) place very restrictive limits on
+the performance of UDP protocols. It is _highly_ recommended that you increase
+these OS limits to at least 8MB before trying to run large amounts of UDP
+traffic to your instance. 8MB is just a recommendation, and can be adjusted
+higher.
+
+### Linux
+
+Check the current UDP/IP receive buffer limit & default by typing the following
+commands:
+
+```sh
+sysctl net.core.rmem_max
+sysctl net.core.rmem_default
+```
+
+If the values are less than 8388608 bytes you should add the following lines to
+the /etc/sysctl.conf file:
+
+```text
+net.core.rmem_max=8388608
+net.core.rmem_default=8388608
+```
+
+Changes to /etc/sysctl.conf do not take effect until reboot.
+To update the values immediately, type the following commands as root:
+
+```sh
+sysctl -w net.core.rmem_max=8388608
+sysctl -w net.core.rmem_default=8388608
+```
+
+### BSD/Darwin
+
+On BSD/Darwin systems you need to add about a 15% padding to the kernel limit
+socket buffer. Meaning if you want an 8MB buffer (8388608 bytes) you need to set
+the kernel limit to `8388608*1.15 = 9646900`. This is not documented anywhere
+but can be seen [in the kernel source code](https://github.com/freebsd/freebsd/blob/master/sys/kern/uipc_sockbuf.c#L63-L64).
+
+Check the current UDP/IP buffer limit by typing the following command:
+
+```sh
+sysctl kern.ipc.maxsockbuf
+```
+
+If the value is less than 9646900 bytes you should add the following lines
+to the /etc/sysctl.conf file (create it if necessary):
+
+```text
+kern.ipc.maxsockbuf=9646900
+```
+
+Changes to /etc/sysctl.conf do not take effect until reboot.
+To update the values immediately, type the following command as root:
+
+```sh
+sysctl -w kern.ipc.maxsockbuf=9646900
+```
+
+## Metrics
+
+The plugin accepts arbitrary input and parses it according to the `data_format`
+setting. There is no predefined metric format.
+
+## Example Output
+
+There is no predefined metric format, so output depends on plugin input.
diff --git a/content/telegraf/v1/input-plugins/socketstat/_index.md b/content/telegraf/v1/input-plugins/socketstat/_index.md
new file mode 100644
index 000000000..ea1c85b3b
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/socketstat/_index.md
@@ -0,0 +1,91 @@
+---
+description: "Telegraf plugin for collecting metrics from SocketStat"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: SocketStat
+    identifier: input-socketstat
+tags: [SocketStat, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# SocketStat Input Plugin
+
+The socketstat plugin gathers indicators from established connections, using
+iproute2's `ss` command.
+
+The `ss` command does not require specific privileges.
+
+**WARNING: The output format will produce series with very high cardinality.**
+You should either store these metrics in an engine that handles high
+cardinality well, use a short retention policy, or apply appropriate filtering.
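+
+As one hedged sketch of such filtering (which tags you can afford to drop
+depends on your use case), Telegraf's global `tagexclude` modifier can remove
+the per-connection tags that drive the cardinality:
+
+```toml
+[[inputs.socketstat]]
+  protocols = ["tcp"]
+  ## Drop the per-connection port tags to reduce series cardinality.
+  tagexclude = ["local_port", "remote_port"]
+```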
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Gather indicators from established connections, using iproute2's ss command.
+# This plugin ONLY supports non-Windows
+[[inputs.socketstat]]
+  ## ss can display information about tcp, udp, raw, unix, packet, dccp and sctp sockets
+  ## Specify here the types you want to gather
+  protocols = [ "tcp", "udp" ]
+
+  ## The default timeout of 1s for ss execution can be overridden here:
+  # timeout = "1s"
+```
+
+## Metrics
+
+The measurement `socketstat` contains the following field:
+
+- state (string) (for tcp, dccp and sctp protocols)
+
+If `ss` provides them (depending on the protocol and the `ss` version), the
+following additional fields are present:
+
+- bytes_acked (integer, bytes)
+- bytes_received (integer, bytes)
+- segs_out (integer, count)
+- segs_in (integer, count)
+- data_segs_out (integer, count)
+- data_segs_in (integer, count)
+
+All measurements have the following tags:
+
+- proto
+- local_addr
+- local_port
+- remote_addr
+- remote_port
+
+## Example Output
+
+### Recent `ss` version (iproute2 4.3.0)
+
+```sh
+./telegraf --config telegraf.conf --input-filter socketstat --test
+```
+
+```text
+socketstat,host=ubuntu-xenial,local_addr=10.6.231.226,local_port=42716,proto=tcp,remote_addr=192.168.2.21,remote_port=80 bytes_acked=184i,bytes_received=2624519595i,recv_q=4344i,segs_in=1812580i,segs_out=661642i,send_q=0i,state="ESTAB" 1606457205000000000
+```
+
+### Older `ss` version (iproute2 3.12.0)
+
+```sh
+./telegraf --config telegraf.conf --input-filter socketstat --test
+```
+
+```text
+socketstat,host=ubuntu-trusty,local_addr=10.6.231.163,local_port=35890,proto=tcp,remote_addr=192.168.2.21,remote_port=80 recv_q=0i,send_q=0i,state="ESTAB" 1606456977000000000
+```
diff --git a/content/telegraf/v1/input-plugins/solr/_index.md b/content/telegraf/v1/input-plugins/solr/_index.md
new file mode 100644
index 000000000..99875c65d
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/solr/_index.md
@@ -0,0 +1,65 @@
+---
+description: "Telegraf plugin for collecting metrics from Solr"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Solr
+    identifier: input-solr
+tags: [Solr, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Solr Input Plugin
+
+The [solr](http://lucene.apache.org/solr/) plugin collects stats via the
+[MBean Request Handler](https://cwiki.apache.org/confluence/display/solr/MBean+Request+Handler).
+
+More about [performance statistics](https://cwiki.apache.org/confluence/display/solr/Performance+Statistics+Reference).
+
+Tested with Solr versions 3.5 to 9.3.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read stats from one or more Solr servers or cores
+[[inputs.solr]]
+  ## specify a list of one or more Solr servers
+  servers = ["http://localhost:8983"]
+
+  ## specify a list of one or more Solr cores (default - all)
+  # cores = ["*"]
+  
+  ## Optional HTTP Basic Auth Credentials
+  # username = "username"
+  # password = "pa$$word"
+
+  ## Timeout for HTTP requests
+  # timeout = "5s"
+```
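+
+For instance, a hypothetical setup that monitors two Solr servers but restricts
+collection to two named cores (the server URLs and core names are placeholders):
+
+```toml
+[[inputs.solr]]
+  servers = ["http://solr1:8983", "http://solr2:8983"]
+  ## Collect only these cores instead of the default "*".
+  cores = ["main", "products"]
+  timeout = "5s"
+```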
+
+## Metrics
+
+## Example Output
+
+```text
+solr_core,core=main,handler=searcher,host=testhost deleted_docs=17616645i,max_docs=261848363i,num_docs=244231718i 1478214949000000000
+solr_core,core=main,handler=core,host=testhost deleted_docs=0i,max_docs=0i,num_docs=0i 1478214949000000000
+solr_queryhandler,core=main,handler=/replication,host=testhost 15min_rate_reqs_per_second=0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000444659081257,5min_rate_reqs_per_second=0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000014821969375,75th_pc_request_time=16.484211,95th_pc_request_time=16.484211,999th_pc_request_time=16.484211,99th_pc_request_time=16.484211,avg_requests_per_second=0.0000008443809966322143,avg_time_per_request=12.984811,errors=0i,handler_start=1474662050865i,median_request_time=11.352427,requests=3i,timeouts=0i,total_time=38.954433 1478214949000000000
+solr_queryhandler,core=main,handler=/update/extract,host=testhost 15min_rate_reqs_per_second=0,5min_rate_reqs_per_second=0,75th_pc_request_time=0,95th_pc_request_time=0,999th_pc_request_time=0,99th_pc_request_time=0,avg_requests_per_second=0,avg_time_per_request=0,errors=0i,handler_start=0i,median_request_time=0,requests=0i,timeouts=0i,total_time=0 1478214949000000000
+solr_queryhandler,core=main,handler=org.apache.solr.handler.component.SearchHandler,host=testhost 15min_rate_reqs_per_second=0,5min_rate_reqs_per_second=0,75th_pc_request_time=0,95th_pc_request_time=0,999th_pc_request_time=0,99th_pc_request_time=0,avg_requests_per_second=0,avg_time_per_request=0,errors=0i,handler_start=1474662050861i,median_request_time=0,requests=0i,timeouts=0i,total_time=0 1478214949000000000
+solr_queryhandler,core=main,handler=/tvrh,host=testhost 15min_rate_reqs_per_second=0,5min_rate_reqs_per_second=0,75th_pc_request_time=0,95th_pc_request_time=0,999th_pc_request_time=0,99th_pc_request_time=0,avg_requests_per_second=0,avg_time_per_request=0,errors=0i,handler_start=0i,median_request_time=0,requests=0i,timeouts=0i,total_time=0 1478214949000000000
+```
diff --git a/content/telegraf/v1/input-plugins/sql/_index.md b/content/telegraf/v1/input-plugins/sql/_index.md
new file mode 100644
index 000000000..3274f33d7
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/sql/_index.md
@@ -0,0 +1,217 @@
+---
+description: "Telegraf plugin for collecting metrics from SQL"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: SQL
+    identifier: input-sql
+tags: [SQL, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# SQL Input Plugin
+
+This plugin reads metrics by performing SQL queries against a SQL
+server. Different server types are supported and their settings might differ
+(especially the connection parameters). Please check the list of supported SQL
+drivers for the driver-specific options.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `dsn` option.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from SQL queries
+[[inputs.sql]]
+  ## Database Driver
+  ## See https://github.com/influxdata/telegraf/blob/master/docs/SQL_DRIVERS_INPUT.md for
+  ## a list of supported drivers.
+  driver = "mysql"
+
+  ## Data source name for connecting
+  ## The syntax and supported options depends on selected driver.
+  dsn = "username:password@mysqlserver:3307/dbname?param=value"
+
+  ## Timeout for any operation
+  ## Note that the timeout for queries is per query not per gather.
+  # timeout = "5s"
+
+  ## Connection time limits
+  ## By default the maximum idle time and maximum lifetime of a connection is unlimited, i.e. the connections
+  ## will not be closed automatically. If you specify a positive time, the connections will be closed after
+  ## idling or existing for at least that amount of time, respectively.
+  # connection_max_idle_time = "0s"
+  # connection_max_life_time = "0s"
+
+  ## Connection count limits
+  ## By default the number of open connections is not limited and the number of maximum idle connections
+  ## will be inferred from the number of queries specified. If you specify a positive number for any of the
+  ## two options, connections will be closed when reaching the specified limit. The number of idle connections
+  ## will be clipped to the maximum number of connections limit if any.
+  # connection_max_open = 0
+  # connection_max_idle = auto
+
+  ## Specifies plugin behavior regarding disconnected servers
+  ## Available choices :
+  ##   - error: telegraf will return an error on startup if one of the servers is unreachable
+  ##   - ignore: telegraf will ignore unreachable servers on both startup and gather
+  # disconnected_servers_behavior = "error"
+
+  [[inputs.sql.query]]
+    ## Query to perform on the server
+    query="SELECT user,state,latency,score FROM Scoreboard WHERE application > 0"
+    ## Alternatively to specifying the query directly you can select a file here containing the SQL query.
+    ## Only one of 'query' and 'query_script' can be specified!
+    # query_script = "/path/to/sql/script.sql"
+
+    ## Name of the measurement
+    ## In case both measurement and 'measurement_col' are given, the latter takes precedence.
+    # measurement = "sql"
+
+    ## Column name containing the name of the measurement
+    ## If given, this will take precedence over the 'measurement' setting. In case a query result
+    ## does not contain the specified column, we fall-back to the 'measurement' setting.
+    # measurement_column = ""
+
+    ## Column name containing the time of the measurement
+    ## If omitted, the time of the query will be used.
+    # time_column = ""
+
+    ## Format of the time contained in 'time_col'
+    ## The time must be 'unix', 'unix_ms', 'unix_us', 'unix_ns', or a golang time format.
+    ## See https://golang.org/pkg/time/#Time.Format for details.
+    # time_format = "unix"
+
+    ## Column names containing tags
+    ## An empty include list will reject all columns and an empty exclude list will not exclude any column.
+    ## I.e. by default no columns will be returned as tag and the tags are empty.
+    # tag_columns_include = []
+    # tag_columns_exclude = []
+
+    ## Column names containing fields (explicit types)
+    ## Convert the given columns to the corresponding type. Explicit type conversions take precedence over
+    ## the automatic (driver-based) conversion below.
+    ## NOTE: Columns should not be specified for multiple types or the resulting type is undefined.
+    # field_columns_float = []
+    # field_columns_int = []
+    # field_columns_uint = []
+    # field_columns_bool = []
+    # field_columns_string = []
+
+    ## Column names containing fields (automatic types)
+    ## An empty include list is equivalent to '[*]' and all returned columns will be accepted. An empty
+    ## exclude list will not exclude any column. I.e. by default all columns will be returned as fields.
+    ## NOTE: We rely on the database driver to perform automatic datatype conversion.
+    # field_columns_include = []
+    # field_columns_exclude = []
+```
+
+## Options
+
+### Driver
+
+The `driver` and `dsn` options specify how to connect to the database. Since
+the `dsn` format and values vary with the `driver`, refer to the list of
+supported SQL drivers for possible values and more details.
+
+### Connection limits
+
+With these options you can limit the number of connections kept open by this
+plugin. Details about the exact workings can be found in the [golang sql
+documentation](https://golang.org/pkg/database/sql/#DB.SetConnMaxIdleTime).
+
+### Query sections
+
+Multiple `query` sections can be specified for this plugin. Each specified query
+will first be prepared on the server and then executed in every interval using
+the column mappings specified. Please note that `tag` and `field` columns are
+not exclusive, i.e. a column can be added to both. When using both `include` and
+`exclude` lists, the `exclude` list takes precedence over the `include`
+list, i.e. if you specify `foo` in both lists, `foo` will _never_ pass the
+filter. In case any of the columns specified in `measurement_column` or
+`time_column` are _not_ returned by the query, the plugin falls back to the
+documented defaults. Fields or tags specified in the include lists of the
+options but missing in the returned query are silently ignored.
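+
+To illustrate the precedence rules above with a hypothetical `guests` table, a
+query section might tag by `name` while excluding it from the fields, so the
+column appears exactly once per metric:
+
+```toml
+[[inputs.sql.query]]
+  query = "SELECT name, guest_id, updated_at FROM guests"
+  measurement = "guests"
+  ## Use the column value as the metric time instead of the query time.
+  time_column = "updated_at"
+  time_format = "unix"
+  ## "name" becomes a tag; excluding it from the fields avoids duplication.
+  tag_columns_include = ["name"]
+  field_columns_exclude = ["name", "updated_at"]
+```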
+
+## Types
+
+This plugin relies on the driver to do the type conversion. For the different
+properties of the metric the following types are accepted.
+
+### Measurement
+
+Only columns of type `string` are accepted.
+
+### Time
+
+For the metric time, columns of type `time` are accepted directly. For numeric
+columns, `time_format` should be set to `unix`, `unix_ms`, `unix_us` or
+`unix_ns` accordingly. By default, a timestamp in `unix` format is expected.
+For string columns, please specify the `time_format` accordingly. See the
+[golang time documentation](https://golang.org/pkg/time/#Time.Format) for
+details.
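+
+For example, if a hypothetical `created` column holds strings like
+`2021-01-22 15:36:04`, the matching Go reference layout would be:
+
+```toml
+[[inputs.sql.query]]
+  query = "SELECT created, value FROM events"
+  time_column = "created"
+  ## Go time layouts are written relative to the reference time
+  ## "2006-01-02 15:04:05".
+  time_format = "2006-01-02 15:04:05"
+```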
+
+### Tags
+
+For tags, columns with textual values (`string` and `bytes`), signed and unsigned
+integers (8, 16, 32 and 64 bit), floating-point (32 and 64 bit), `boolean` and
+`time` values are accepted. Those values will be converted to string.
+
+### Fields
+
+For fields, columns with textual values (`string` and `bytes`), signed and
+unsigned integers (8, 16, 32 and 64 bit), floating-point (32 and 64 bit),
+`boolean` and `time` values are accepted. Here `bytes` will be converted to
+`string`, signed and unsigned integer values will be converted to `int64` or
+`uint64` respectively. Floating-point values are converted to `float64` and
+`time` is converted to a nanosecond timestamp of type `int64`.
+
+## Example Output
+
+Using the [MariaDB sample database](https://www.mariadbtutorial.com/getting-started/mariadb-sample-database) and the configuration
+
+```toml
+[[inputs.sql]]
+  driver = "mysql"
+  dsn = "root:password@/nation"
+
+  [[inputs.sql.query]]
+    query="SELECT * FROM guests"
+    measurement = "nation"
+    tag_columns_include = ["name"]
+    field_columns_exclude = ["name"]
+```
+
+Telegraf will output the following metrics:
+
+```text
+nation,host=Hugin,name=John guest_id=1i 1611332164000000000
+nation,host=Hugin,name=Jane guest_id=2i 1611332164000000000
+nation,host=Hugin,name=Jean guest_id=3i 1611332164000000000
+nation,host=Hugin,name=Storm guest_id=4i 1611332164000000000
+nation,host=Hugin,name=Beast guest_id=5i 1611332164000000000
+```
+
+## Metrics
+
+The format of metrics produced by this plugin depends on the queries
+configured and the data returned by the server.
diff --git a/content/telegraf/v1/input-plugins/sqlserver/_index.md b/content/telegraf/v1/input-plugins/sqlserver/_index.md
new file mode 100644
index 000000000..ef20f6b27
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/sqlserver/_index.md
@@ -0,0 +1,544 @@
+---
+description: "Telegraf plugin for collecting metrics from SQL Server"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: SQL Server
+    identifier: input-sqlserver
+tags: [SQL Server, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# SQL Server Input Plugin
+
+The `sqlserver` plugin provides metrics for your SQL Server instance.
+Recorded metrics are lightweight and use Dynamic Management Views
+supplied by SQL Server.
+
+## The SQL Server plugin supports the following editions/versions of SQL Server
+
+- SQL Server
+  - 2012 or newer (Plugin support aligned with the [official Microsoft SQL Server support](https://docs.microsoft.com/en-us/sql/sql-server/end-of-support/sql-server-end-of-life-overview?view=sql-server-ver15#lifecycle-dates))
+  - End-of-life SQL Server versions are not guaranteed to be supported by Telegraf. Any issues with the SQL Server plugin for these EOL versions will need to be addressed by the community.
+- Azure SQL Database (Single)
+- Azure SQL Managed Instance
+- Azure SQL Elastic Pool
+- Azure Arc-enabled SQL Managed Instance
+
+## Additional Setup
+
+You have to create a login on every SQL Server instance or Azure SQL
+Managed Instance you want to monitor, using the following script:
+
+```sql
+USE master;
+GO
+CREATE LOGIN [telegraf] WITH PASSWORD = N'mystrongpassword';
+GO
+GRANT VIEW SERVER STATE TO [telegraf];
+GO
+GRANT VIEW ANY DEFINITION TO [telegraf];
+GO
+```
+
+For Azure SQL Database, you require the View Database State permission
+and can create a user with a password directly in the database.
+
+```sql
+CREATE USER [telegraf] WITH PASSWORD = N'mystrongpassword';
+GO
+GRANT VIEW DATABASE STATE TO [telegraf];
+GO
+```
+
+For Azure SQL Elastic Pool, follow these instructions to collect metrics.
+
+On the master logical database, create a SQL login `telegraf` and assign it
+to the server-level role `##MS_ServerStateReader##`.
+
+```sql
+CREATE LOGIN [telegraf] WITH PASSWORD = N'mystrongpassword';
+GO
+ALTER SERVER ROLE ##MS_ServerStateReader##
+  ADD MEMBER [telegraf];
+GO
+```
+
+Elastic pool metrics can be collected from any database in the pool if a user
+for the `telegraf` login is created in that database. For collection to work,
+this database must remain in the pool, and must not be renamed. If you plan
+to add/remove databases from this pool, create a separate database for
+monitoring purposes that will remain in the pool.
+
+> Note: To avoid duplicate monitoring data, do not collect elastic pool metrics
+from more than one database in the same pool.
+
+```sql
+CREATE USER [telegraf] FOR LOGIN telegraf;
+GO
+```
+
+Service SID authentication to SQL Server is available for Windows service
+installations only.
+
+- [More information about using service SIDs to grant permissions in SQL Server](https://docs.microsoft.com/en-us/sql/relational-databases/security/using-service-sids-to-grant-permissions-to-services-in-sql-server)
+
+In an administrative command prompt, configure the telegraf service for use
+with a service SID:
+
+```Batchfile
+sc.exe sidtype "telegraf" unrestricted
+```
+
+To create the login for the telegraf service run the following script:
+
+```sql
+USE master;
+GO
+CREATE LOGIN [NT SERVICE\telegraf] FROM WINDOWS;
+GO
+GRANT VIEW SERVER STATE TO [NT SERVICE\telegraf];
+GO
+GRANT VIEW ANY DEFINITION TO [NT SERVICE\telegraf];
+GO
+```
+
+Remove the User Id and Password keywords from the connection string in your
+config file to use Windows authentication.
+
+```toml
+[[inputs.sqlserver]]
+  servers = ["Server=192.168.1.10;Port=1433;app name=telegraf;log=1;",]
+```
+
+To set up a configurable timeout, add a timeout to the connection string
+in your config file.
+
+```toml
+servers = [
+  "Server=192.168.1.10;Port=1433;User Id=<user>;Password=<pw>;app name=telegraf;log=1;dial timeout=30",
+]
+```
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `servers` option.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from Microsoft SQL Server
+[[inputs.sqlserver]]
+  ## Specify instances to monitor with a list of connection strings.
+  ## All connection parameters are optional.
+  ## By default, the host is localhost, listening on default port, TCP 1433.
+  ##   for Windows, the user is the currently running AD user (SSO).
+  ##   See https://github.com/microsoft/go-mssqldb for detailed connection
+  ##   parameters, in particular, tls connections can be created like so:
+  ##   "encrypt=true;certificate=<cert>;hostNameInCertificate=<SqlServer host fqdn>"
+  servers = [
+    "Server=192.168.1.10;Port=1433;User Id=<user>;Password=<pw>;app name=telegraf;log=1;",
+  ]
+
+  ## Timeout for query execution operation
+  ## Note that the timeout for queries is per query not per gather.
+  ## 0 value means no timeout
+  # query_timeout = "0s"
+
+  ## Authentication method
+  ## valid methods: "connection_string", "AAD"
+  # auth_method = "connection_string"
+
+  ## ClientID is the client ID of the user-assigned identity of the VM
+  ## that should be used to authenticate to the Azure SQL server.
+  # client_id = ""
+
+  ## "database_type" enables a specific set of queries depending on the database type. If specified, it replaces azuredb = true/false and query_version = 2
+  ## In the config file, the sql server plugin section should be repeated each with a set of servers for a specific database_type.
+  ## Possible values for database_type are - "SQLServer" or "AzureSQLDB" or "AzureSQLManagedInstance" or "AzureSQLPool"
+  database_type = "SQLServer"
+
+  ## A list of queries to include. If not specified, all the below listed queries are used.
+  include_query = []
+
+  ## A list of queries to explicitly ignore.
+  exclude_query = ["SQLServerAvailabilityReplicaStates", "SQLServerDatabaseReplicaStates"]
+
+  ## Queries enabled by default for database_type = "SQLServer" are -
+  ## SQLServerPerformanceCounters, SQLServerWaitStatsCategorized, SQLServerDatabaseIO, SQLServerProperties, SQLServerMemoryClerks,
+  ## SQLServerSchedulers, SQLServerRequests, SQLServerVolumeSpace, SQLServerCpu, SQLServerAvailabilityReplicaStates, SQLServerDatabaseReplicaStates,
+  ## SQLServerRecentBackups
+
+  ## Queries enabled by default for database_type = "AzureSQLDB" are -
+  ## AzureSQLDBResourceStats, AzureSQLDBResourceGovernance, AzureSQLDBWaitStats, AzureSQLDBDatabaseIO, AzureSQLDBServerProperties,
+  ## AzureSQLDBOsWaitstats, AzureSQLDBMemoryClerks, AzureSQLDBPerformanceCounters, AzureSQLDBRequests, AzureSQLDBSchedulers
+
+  ## Queries enabled by default for database_type = "AzureSQLManagedInstance" are -
+  ## AzureSQLMIResourceStats, AzureSQLMIResourceGovernance, AzureSQLMIDatabaseIO, AzureSQLMIServerProperties, AzureSQLMIOsWaitstats,
+  ## AzureSQLMIMemoryClerks, AzureSQLMIPerformanceCounters, AzureSQLMIRequests, AzureSQLMISchedulers
+
+  ## Queries enabled by default for database_type = "AzureSQLPool" are -
+  ## AzureSQLPoolResourceStats, AzureSQLPoolResourceGovernance, AzureSQLPoolDatabaseIO, AzureSQLPoolWaitStats,
+  ## AzureSQLPoolMemoryClerks, AzureSQLPoolPerformanceCounters, AzureSQLPoolSchedulers
+
+  ## Queries enabled by default for database_type = "AzureArcSQLManagedInstance" are -
+  ## AzureSQLMIDatabaseIO, AzureSQLMIServerProperties, AzureSQLMIOsWaitstats,
+  ## AzureSQLMIMemoryClerks, AzureSQLMIPerformanceCounters, AzureSQLMIRequests, AzureSQLMISchedulers
+
+  ## The following are old config settings
+  ## You may use them only if you are using the earlier flavor of queries. However, it is recommended
+  ## to use the new mechanism of identifying the database_type and its corresponding queries
+
+  ## Optional parameter, setting this to 2 will use a new version
+  ## of the collection queries that break compatibility with the original
+  ## dashboards.
+  ## Version 2 is compatible with SQL Server 2012 and later, and also with Azure SQL DB
+  # query_version = 2
+
+  ## If you are using AzureDB, setting this to true will gather resource utilization metrics
+  # azuredb = false
+
+  ## Toggling this to true will emit an additional metric called "sqlserver_telegraf_health".
+  ## This metric tracks the count of attempted queries and successful queries for each SQL instance specified in "servers".
+  ## The purpose of this metric is to assist with identifying and diagnosing any connectivity or query issues.
+  ## This setting/metric is optional and is disabled by default.
+  # health_metric = false
+
+  ## Possible queries across different versions of the collectors
+  ## Queries enabled by default for specific Database Type
+
+  ## database_type =  AzureSQLDB  by default collects the following queries
+  ## - AzureSQLDBWaitStats
+  ## - AzureSQLDBResourceStats
+  ## - AzureSQLDBResourceGovernance
+  ## - AzureSQLDBDatabaseIO
+  ## - AzureSQLDBServerProperties
+  ## - AzureSQLDBOsWaitstats
+  ## - AzureSQLDBMemoryClerks
+  ## - AzureSQLDBPerformanceCounters
+  ## - AzureSQLDBRequests
+  ## - AzureSQLDBSchedulers
+
+  ## database_type =  AzureSQLManagedInstance by default collects the following queries
+  ## - AzureSQLMIResourceStats
+  ## - AzureSQLMIResourceGovernance
+  ## - AzureSQLMIDatabaseIO
+  ## - AzureSQLMIServerProperties
+  ## - AzureSQLMIOsWaitstats
+  ## - AzureSQLMIMemoryClerks
+  ## - AzureSQLMIPerformanceCounters
+  ## - AzureSQLMIRequests
+  ## - AzureSQLMISchedulers
+
+  ## database_type =  AzureSQLPool by default collects the following queries
+  ## - AzureSQLPoolResourceStats
+  ## - AzureSQLPoolResourceGovernance
+  ## - AzureSQLPoolDatabaseIO
+  ## - AzureSQLPoolOsWaitStats
+  ## - AzureSQLPoolMemoryClerks
+  ## - AzureSQLPoolPerformanceCounters
+  ## - AzureSQLPoolSchedulers
+
+  ## database_type =  SQLServer by default collects the following queries
+  ## - SQLServerPerformanceCounters
+  ## - SQLServerWaitStatsCategorized
+  ## - SQLServerDatabaseIO
+  ## - SQLServerProperties
+  ## - SQLServerMemoryClerks
+  ## - SQLServerSchedulers
+  ## - SQLServerRequests
+  ## - SQLServerVolumeSpace
+  ## - SQLServerCpu
+  ## - SQLServerRecentBackups
+  ## and the following as optional (if mentioned in the include_query list)
+  ## - SQLServerAvailabilityReplicaStates
+  ## - SQLServerDatabaseReplicaStates
+```
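+
+The optional queries noted in the sample above can be enabled by naming them
+in the `include_query` list. A minimal sketch (see the sample configuration
+for how `include_query` interacts with the default query set):
+
+```toml
+[[inputs.sqlserver]]
+  database_type = "SQLServer"
+  ## Enable the optional availability-group queries.
+  include_query = [
+    "SQLServerAvailabilityReplicaStates",
+    "SQLServerDatabaseReplicaStates",
+  ]
+```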
+
+## Support for Azure Active Directory (AAD) authentication using [Managed Identity](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview)
+
+- Azure SQL Database supports 2 main methods of authentication: [SQL authentication and AAD authentication](https://docs.microsoft.com/en-us/azure/azure-sql/database/security-overview#authentication).
+- The recommended practice is to [use AAD authentication when possible](https://docs.microsoft.com/en-us/azure/azure-sql/database/authentication-aad-overview).
+
+AAD is a more modern authentication protocol that allows for easier
+credential/role management and can eliminate the need to include passwords
+in a connection string.
+
+AAD authentication support in this plugin is provided by the underlying
+SQL Server driver for Go.
+
+If more than one managed identity is assigned to the VM, you need to specify
+the client_id of the identity you wish to use to authenticate with the SQL
+Server. If only one is assigned, you don't need to specify this value.
+
+- Please see [SQL Server driver for Go](https://github.com/microsoft/go-mssqldb#azure-active-directory-authentication)
+
+### How to use AAD Auth with MSI
+
+- Please note AAD based auth is currently only supported for Azure SQL Database and Azure SQL Managed Instance (but not for SQL Server), as described [here](https://docs.microsoft.com/en-us/azure/azure-sql/database/security-overview#authentication).
+
+- Configure "system-assigned managed identity" for Azure resources on the Monitoring VM (the VM that'd connect to the SQL server/database) [using the Azure portal](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm).
+- On the database being monitored, create/update a USER with the name of the Monitoring VM as the principal using the below script. This might require allow-listing the client machine's IP address (from where the below SQL script is being run) on the SQL Server resource.
+
+If multiple identities are assigned to the VM, you can use the
+user_assigned_id parameter to specify the client_id.
+
+```sql
+EXECUTE ('IF EXISTS(SELECT * FROM sys.database_principals WHERE name = ''<Monitoring_VM_Name>'')
+    BEGIN
+        DROP USER [<Monitoring_VM_Name>]
+    END')
+EXECUTE ('CREATE USER [<Monitoring_VM_Name>] FROM EXTERNAL PROVIDER')
+EXECUTE ('GRANT VIEW DATABASE STATE TO [<Monitoring_VM_Name>]')
+```
+
+- On the SQL Server resource of the database(s) being monitored, go to "Firewalls and Virtual Networks" tab and allowlist the monitoring VM IP address.
+- On the Monitoring VM, update the Telegraf config file with the database connection string in the following format. The connection string only provides the server and database name, but no password (since the VM's system-assigned managed identity would be used for authentication). The auth method must be set to "AAD".
+
+```toml
+  servers = [
+    "Server=<Azure_SQL_Server_Name>.database.windows.net;Port=1433;Database=<Azure_SQL_Database_Name>;app name=telegraf;log=1;",
+  ]
+  auth_method = "AAD"
+```
+
+## Metrics
+
+To provide backwards compatibility, this plugin supports two versions of
+metrics queries.
+
+**Note**: Version 2 queries are not backwards compatible with the old queries.
+Any dashboards or queries based on the old query format will not work with
+the new format. The version 2 queries only report raw metrics, no math has
+been done to calculate deltas. To graph this data you must calculate deltas
+in your dashboarding software.
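+
+For example, with InfluxDB as the output, a per-second rate can be derived at
+query time with InfluxQL's `non_negative_derivative()`. A sketch, assuming the
+version 2 `sqlserver_performance` measurement with a `value` field and a
+`counter` tag:
+
+```sql
+SELECT non_negative_derivative(max("value"), 1s)
+FROM "sqlserver_performance"
+WHERE "counter" = 'Batch Requests/sec' AND time > now() - 1h
+GROUP BY time(1m), "sql_instance"
+```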
+
+### Version 1 (query_version=1): Deprecated in 1.16; all future development is under the database_type configuration option
+
+The original metrics queries provide:
+
+- *Performance counters*: 1000+ metrics from `sys.dm_os_performance_counters`
+- *Performance metrics*: special performance and ratio metrics
+- *Wait stats*: wait tasks categorized from `sys.dm_os_wait_stats`
+- *Memory clerk*: memory breakdown from `sys.dm_os_memory_clerks`
+- *Database size*: databases size trend from `sys.dm_io_virtual_file_stats`
+- *Database IO*: databases I/O from `sys.dm_io_virtual_file_stats`
+- *Database latency*: databases latency from `sys.dm_io_virtual_file_stats`
+- *Database properties*: databases properties, state and recovery model, from `sys.databases`
+- *OS Volume*: available, used and total space from `sys.dm_os_volume_stats`
+- *CPU*: cpu usage from `sys.dm_os_ring_buffers`
+
+If you are using the original queries, all stats have the following tags:
+
+- `servername`: hostname:instance
+- `type`: type of stats to easily filter measurements
+
+### Version 2 (query_version=2): Deprecated in 1.16; all future development is under the database_type configuration option
+
+The new (version 2) metrics provide:
+
+- *Database IO*: IO stats from `sys.dm_io_virtual_file_stats`.
+- *Memory Clerk*: Memory clerk breakdown from `sys.dm_os_memory_clerks`, most clerks have been given a friendly name.
+- *Performance Counters*: A select list of performance counters from `sys.dm_os_performance_counters`. Some of the important metrics included:
+  - *Activity*: Transactions/sec/database, Batch requests/sec, blocked processes, + more
+  - *Availability Groups*: Bytes sent to replica, Bytes received from replica, Log bytes received, Log send queue, transaction delay, + more
+  - *Log activity*: Log bytes flushed/sec, Log flushes/sec, Log Flush Wait Time
+  - *Memory*: PLE, Page reads/sec, Page writes/sec, + more
+  - *TempDB*: Free space, Version store usage, Active temp tables, temp table creation rate, + more
+  - *Resource Governor*: CPU Usage, Requests/sec, Queued Requests, and Blocked tasks per workload group + more
+- *Server properties*: Number of databases in all possible states (online, offline, suspect, etc.), cpu count, total physical memory, available physical memory, SQL Server service uptime, SQL Server SPID, and SQL Server version. In the case of Azure SQL relevant properties such as Tier, #Vcores, Memory etc.
+- *Wait stats*: Wait time in ms, number of waiting tasks, resource wait time, signal wait time, max wait time in ms, wait type, and wait category. The waits are categorized using the same categories used in Query Store.
+- *Schedulers*: This captures `sys.dm_os_schedulers`.
+- *SqlRequests*: This captures a snapshot of `sys.dm_exec_requests` and `sys.dm_exec_sessions` that gives you running requests as well as wait types and
+  blocking sessions. Telegraf's monitoring request is omitted unless it is a heading blocker. Also includes sleeping sessions with open transactions.
+- *VolumeSpace*: Uses `sys.dm_os_volume_stats` to get total, used and occupied space on every disk that contains a data or log file. (Note that even if enabled it won't get any data from Azure SQL Database or SQL Managed Instance.) It is pointless to run this with high frequency (i.e. every 10s), but it won't cause any problem.
+- *Cpu*: Uses the buffer ring (`sys.dm_os_ring_buffers`) to get CPU data; the table is updated once per minute. (Note that even if enabled it won't get any data from Azure SQL Database or SQL Managed Instance.)
+
+  In order to allow tracking on a per-statement basis, this query produces
+  a unique tag for each query. Depending on the database workload, this may
+  result in a high-cardinality series. Reference the FAQ for tips on
+  [managing series cardinality](/docs/FAQ.md#user-content-q-how-can-i-manage-series-cardinality).
+
+- *Azure Managed Instances*
+  - Stats from `sys.server_resource_stats`
+  - Resource governance stats from `sys.dm_instance_resource_governance`
+- *Azure SQL Database*, in addition to other stats:
+  - Stats from `sys.dm_db_wait_stats`
+  - Resource governance stats from `sys.dm_user_db_resource_governance`
+  - Stats from `sys.dm_db_resource_stats`
+
+### database_type = "AzureSQLDB"
+
+These are metrics for Azure SQL Database (single database) and are very
+similar to version 2, but split out for maintenance reasons, better
+testability, and differences in DMVs:
+
+- *AzureSQLDBDatabaseIO*: IO stats from `sys.dm_io_virtual_file_stats` including resource governance time, RBPEX, IO for Hyperscale.
+- *AzureSQLDBMemoryClerks*: Memory clerk breakdown from `sys.dm_os_memory_clerks`.
+- *AzureSQLDBResourceGovernance*: Relevant properties indicating resource limits from `sys.dm_user_db_resource_governance`
+- *AzureSQLDBPerformanceCounters*: A select list of performance counters from `sys.dm_os_performance_counters` including cloud specific counters for SQL Hyperscale.
+- *AzureSQLDBServerProperties*: Relevant Azure SQL properties such as Tier, #Vcores, Memory, storage, etc.
+- *AzureSQLDBWaitstats*: Wait time in ms from `sys.dm_db_wait_stats`, number of waiting tasks, resource wait time, signal wait time, max wait time in ms, wait type, and wait category. The waits are categorized using the same categories used in Query Store. These waits are collected only at the end of a statement, and for a specific database only.
+- *AzureSQLOsWaitstats*: Wait time in ms from `sys.dm_os_wait_stats`, number of waiting tasks, resource wait time, signal wait time, max wait time in ms, wait type, and wait category. The waits are categorized using the same categories used in Query Store. These waits are collected as they occur, instance-wide.
+- *AzureSQLDBRequests*: Requests which are blocked or have a wait type from `sys.dm_exec_sessions` and `sys.dm_exec_requests`. Telegraf's monitoring request is omitted unless it is a heading blocker.
+- *AzureSQLDBSchedulers*: This captures `sys.dm_os_schedulers` snapshots.
+
+### database_type = "AzureSQLManagedInstance"
+
+These are metrics for Azure SQL Managed Instance and are very similar to
+version 2, but split out for maintenance reasons, better testability, and
+differences in DMVs:
+
+- *AzureSQLMIDatabaseIO*: IO stats from `sys.dm_io_virtual_file_stats` including resource governance time, RBPEX, IO for Hyperscale.
+- *AzureSQLMIMemoryClerks*: Memory clerk breakdown from `sys.dm_os_memory_clerks`.
+- *AzureSQLMIResourceGovernance*: Relevant properties indicating resource limits from `sys.dm_instance_resource_governance`
+- *AzureSQLMIPerformanceCounters*: A select list of performance counters from `sys.dm_os_performance_counters` including cloud specific counters for SQL Hyperscale.
+- *AzureSQLMIServerProperties*: Relevant Azure SQL properties such as Tier, #Vcores, Memory, storage, etc.
+- *AzureSQLMIOsWaitstats*: Wait time in ms from `sys.dm_os_wait_stats`, number of waiting tasks, resource wait time, signal wait time, max wait time in ms, wait type, and wait category. The waits are categorized using the same categories used in Query Store. These waits are collected as they occur, instance-wide.
+- *AzureSQLMIRequests*: Requests which are blocked or have a wait type from `sys.dm_exec_sessions` and `sys.dm_exec_requests`. Telegraf's monitoring request is omitted unless it is a heading blocker.
+- *AzureSQLMISchedulers*: This captures `sys.dm_os_schedulers` snapshots.
+
+### database_type = "AzureSQLPool"
+
+These are metrics for Azure SQL, monitoring resource usage at the elastic
+pool level. Collecting these metrics requires additional permissions; see
+the additional setup section in this documentation.
+
+- *AzureSQLPoolResourceStats*: Returns resource usage statistics for the current elastic pool in a SQL Database server. Queried from `sys.dm_resource_governor_resource_pools_history_ex`.
+- *AzureSQLPoolResourceGovernance*: Returns actual configuration and capacity settings used by resource governance mechanisms in the current elastic pool. Queried from `sys.dm_user_db_resource_governance`.
+- *AzureSQLPoolDatabaseIO*: Returns I/O statistics for data and log files for each database in the pool. Queried from `sys.dm_io_virtual_file_stats`.
+- *AzureSQLPoolOsWaitStats*: Returns information about all the waits encountered by threads that executed. Queried from `sys.dm_os_wait_stats`.
+- *AzureSQLPoolMemoryClerks*: Memory clerk breakdown from `sys.dm_os_memory_clerks`.
+- *AzureSQLPoolPerformanceCounters*: A selected list of performance counters from `sys.dm_os_performance_counters`. Note: Performance counters where the cntr_type column value is 537003264 are already returned with a percentage format between 0 and 100. For other counters, please check [sys.dm_os_performance_counters](https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-os-performance-counters-transact-sql?view=azuresqldb-current) documentation.
+- *AzureSQLPoolSchedulers*: This captures `sys.dm_os_schedulers` snapshots.
+
+### database_type = "SQLServer"
+
+- *SQLServerDatabaseIO*: IO stats from `sys.dm_io_virtual_file_stats`
+- *SQLServerMemoryClerks*: Memory clerk breakdown from `sys.dm_os_memory_clerks`, most clerks have been given a friendly name.
+- *SQLServerPerformanceCounters*: A select list of performance counters from `sys.dm_os_performance_counters`. Some of the important metrics included:
+  - *Activity*: Transactions/sec/database, Batch requests/sec, blocked processes, + more
+  - *Availability Groups*: Bytes sent to replica, Bytes received from replica, Log bytes received, Log send queue, transaction delay, + more
+  - *Log activity*: Log bytes flushed/sec, Log flushes/sec, Log Flush Wait Time
+  - *Memory*: PLE, Page reads/sec, Page writes/sec, + more
+  - *TempDB*: Free space, Version store usage, Active temp tables, temp table creation rate, + more
+  - *Resource Governor*: CPU Usage, Requests/sec, Queued Requests, and Blocked tasks per workload group + more
+- *SQLServerProperties*: Number of databases in all possible states (online, offline, suspect, etc.), cpu count, total physical memory, available physical memory, SQL Server service uptime, SQL Server SPID and SQL Server version. In the case of Azure SQL relevant properties such as Tier, #Vcores, Memory etc.
+- *SQLServerWaitStatsCategorized*: Wait time in ms, number of waiting tasks, resource wait time, signal wait time, max wait time in ms, wait type, and wait category. The waits are categorized using the same categories used in Query Store.
+- *SQLServerSchedulers*: This captures `sys.dm_os_schedulers`.
+- *SQLServerRequests*: This captures a snapshot of `sys.dm_exec_requests` and `sys.dm_exec_sessions` that gives you running requests as well as wait types and
+  blocking sessions.
+- *SQLServerVolumeSpace*: Uses `sys.dm_os_volume_stats` to get total, used and occupied space on every disk that contains a data or log file. (Note that even if enabled it won't get any data from Azure SQL Database or SQL Managed Instance.) It is pointless to run this with high frequency (i.e. every 10s), but it won't cause any problem.
+- *SQLServerCpu*: Uses the buffer ring (`sys.dm_os_ring_buffers`) to get CPU data; the table is updated once per minute. (Note that even if enabled it won't get any data from Azure SQL Database or SQL Managed Instance.)
+- *SQLServerAvailabilityReplicaStates*: Collects availability replica state information from `sys.dm_hadr_availability_replica_states` for a High Availability / Disaster Recovery (HADR) setup
+- *SQLServerDatabaseReplicaStates*: Collects database replica state information from `sys.dm_hadr_database_replica_states` for a High Availability / Disaster Recovery (HADR) setup
+- *SQLServerRecentBackups*: Collects latest full, differential and transaction log backup date and size from `msdb.dbo.backupset`
+- *SQLServerPersistentVersionStore*: Collects persistent version store information from `sys.dm_tran_persistent_version_store_stats` for databases with Accelerated Database Recovery enabled
+
+### Output Measures
+
+The guiding principle is that all data collected from the same primary DMV
+ends up in the same measure irrespective of database_type.
+
+- `sqlserver_database_io` - Used by AzureSQLDBDatabaseIO, AzureSQLMIDatabaseIO, SQLServerDatabaseIO, DatabaseIO, given the data is from `sys.dm_io_virtual_file_stats`
+- `sqlserver_waitstats` - Used by WaitStatsCategorized, AzureSQLDBOsWaitstats, AzureSQLMIOsWaitstats
+- `sqlserver_server_properties` - Used by SQLServerProperties, AzureSQLDBServerProperties, AzureSQLMIServerProperties, ServerProperties
+- `sqlserver_memory_clerks` - Used by SQLServerMemoryClerks, AzureSQLDBMemoryClerks, AzureSQLMIMemoryClerks, MemoryClerk
+- `sqlserver_performance` - Used by SQLServerPerformanceCounters, AzureSQLDBPerformanceCounters, AzureSQLMIPerformanceCounters, PerformanceCounters
+- `sqlserver_schedulers` - Used by SQLServerSchedulers, AzureSQLDBServerSchedulers, AzureSQLMIServerSchedulers
+
+The following Performance counter metrics can be used directly, with no delta
+calculations:
+
+- SQLServer:Buffer Manager\Buffer cache hit ratio
+- SQLServer:Buffer Manager\Page life expectancy
+- SQLServer:Buffer Node\Page life expectancy
+- SQLServer:Database Replica\Log Apply Pending Queue
+- SQLServer:Database Replica\Log Apply Ready Queue
+- SQLServer:Database Replica\Log Send Queue
+- SQLServer:Database Replica\Recovery Queue
+- SQLServer:Databases\Data File(s) Size (KB)
+- SQLServer:Databases\Log File(s) Size (KB)
+- SQLServer:Databases\Log File(s) Used Size (KB)
+- SQLServer:Databases\XTP Memory Used (KB)
+- SQLServer:General Statistics\Active Temp Tables
+- SQLServer:General Statistics\Processes blocked
+- SQLServer:General Statistics\Temp Tables For Destruction
+- SQLServer:General Statistics\User Connections
+- SQLServer:Memory Broker Clerks\Memory broker clerk size
+- SQLServer:Memory Manager\Memory Grants Pending
+- SQLServer:Memory Manager\Target Server Memory (KB)
+- SQLServer:Memory Manager\Total Server Memory (KB)
+- SQLServer:Resource Pool Stats\Active memory grant amount (KB)
+- SQLServer:Resource Pool Stats\Disk Read Bytes/sec
+- SQLServer:Resource Pool Stats\Disk Read IO Throttled/sec
+- SQLServer:Resource Pool Stats\Disk Read IO/sec
+- SQLServer:Resource Pool Stats\Disk Write Bytes/sec
+- SQLServer:Resource Pool Stats\Disk Write IO Throttled/sec
+- SQLServer:Resource Pool Stats\Disk Write IO/sec
+- SQLServer:Resource Pool Stats\Used memory (KB)
+- SQLServer:Transactions\Free Space in tempdb (KB)
+- SQLServer:Transactions\Version Store Size (KB)
+- SQLServer:User Settable\Query
+- SQLServer:Workload Group Stats\Blocked tasks
+- SQLServer:Workload Group Stats\CPU usage %
+- SQLServer:Workload Group Stats\Queued requests
+- SQLServer:Workload Group Stats\Requests completed/sec
+
+Version 2 queries have the following tags:
+
+- `sql_instance`: Physical host and instance name (hostname:instance)
+- `database_name`: For Azure SQL DB, `database_name` denotes the name of the Azure SQL Database, since the server name is a logical construct.
+
+### Health Metric
+
+All collection versions (version 1, version 2, and database_type) support an
+optional plugin health metric called `sqlserver_telegraf_health`. This metric
+tracks if connections to SQL Server are succeeding or failing. Users can
+leverage this metric to detect if their SQL Server monitoring is not working
+as intended.
+
+In the configuration file, toggling `health_metric` to `true` will enable
+collection of this metric. By default, this value is set to `false` and
+the metric is not collected. The health metric emits one record for each
+connection specified by `servers` in the configuration file.
+
+The health metric emits the following tags:
+
+- `sql_instance` - Name of the server specified in the connection string. This value is emitted as-is from the connection string. If the server could not be parsed from the connection string, a constant placeholder value is emitted.
+- `database_name` - Name of the database (or initial catalog) specified in the connection string. This value is emitted as-is from the connection string. If the database could not be parsed from the connection string, a constant placeholder value is emitted.
+
+The health metric emits the following fields:
+
+- `attempted_queries` - Number of queries that were attempted for this connection
+- `successful_queries` - Number of queries that completed successfully for this connection
+- `database_type` - Type of database as specified by `database_type`. If `database_type` is empty, the `QueryVersion` and `AzureDB` fields are concatenated instead
+
+If `attempted_queries` and `successful_queries` are not equal for
+a given connection, some metrics were not successfully gathered for
+that connection. If `successful_queries` is 0, no metrics were successfully
+gathered.
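+
+For example, failed queries per connection can be surfaced with a simple
+InfluxQL field calculation (a sketch, assuming InfluxDB as the output):
+
+```sql
+SELECT ("attempted_queries" - "successful_queries") AS failed_queries
+FROM "sqlserver_telegraf_health"
+WHERE time > now() - 5m
+GROUP BY "sql_instance", "database_name"
+```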
+
+## Example Output
+
+```text
+sqlserver_cpu_other_process_cpu{host="servername",measurement_db_type="SQLServer",sql_instance="SERVERNAME:INST"} 9
+sqlserver_performance{counter="Log File(s) Size (KB)",counter_type="65792",host="servername",instance="instance_name",measurement_db_type="SQLServer",object="MSSQL$INSTANCE_NAME:Databases",sql_instance="SERVERNAME:INSTANCE_NAME"} 1.048568e+06
+```
diff --git a/content/telegraf/v1/input-plugins/stackdriver/_index.md b/content/telegraf/v1/input-plugins/stackdriver/_index.md
new file mode 100644
index 000000000..46dd2527e
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/stackdriver/_index.md
@@ -0,0 +1,197 @@
+---
+description: "Telegraf plugin for collecting metrics from Stackdriver Google Cloud Monitoring"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Stackdriver Google Cloud Monitoring
+    identifier: input-stackdriver
+tags: [Stackdriver Google Cloud Monitoring, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Stackdriver Google Cloud Monitoring Input Plugin
+
+Query data from Google Cloud Monitoring (formerly Stackdriver) using the
+[Cloud Monitoring API v3](https://cloud.google.com/monitoring/api/v3/).
+
+This plugin accesses APIs which are [chargeable](https://cloud.google.com/stackdriver/pricing#stackdriver_monitoring_services); you might incur
+costs.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering,
+etc. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Gather timeseries from Google Cloud Platform v3 monitoring API
+[[inputs.stackdriver]]
+  ## GCP Project
+  project = "erudite-bloom-151019"
+
+  ## Include timeseries that start with the given metric type.
+  metric_type_prefix_include = [
+    "compute.googleapis.com/",
+  ]
+
+  ## Exclude timeseries that start with the given metric type.
+  # metric_type_prefix_exclude = []
+
+  ## Most metrics are updated no more than once per minute; it is recommended
+  ## to override the agent level interval with a value of 1m or greater.
+  interval = "1m"
+
+  ## Maximum number of API calls to make per second.  The quota for accounts
+  ## varies, it can be viewed on the API dashboard:
+  ##   https://cloud.google.com/monitoring/quotas#quotas_and_limits
+  # rate_limit = 14
+
+  ## The delay and window options control the number of points selected on
+  ## each gather.  When set, metrics are gathered between:
+  ##   start: now() - delay - window
+  ##   end:   now() - delay
+  #
+  ## Collection delay; if set too low metrics may not yet be available.
+  # delay = "5m"
+  #
+  ## If unset, the window will start at 1m and be updated dynamically to span
+  ## the time between calls (approximately the length of the plugin interval).
+  # window = "1m"
+
+  ## TTL for cached list of metric types.  This is the maximum amount of time
+  ## it may take to discover new metrics.
+  # cache_ttl = "1h"
+
+  ## If true, raw bucket counts are collected for distribution value types.
+  ## For a more lightweight collection, you may wish to disable and use
+  ## distribution_aggregation_aligners instead.
+  # gather_raw_distribution_buckets = true
+
+  ## Aggregate functions to be used for metrics whose value type is
+  ## distribution.  These aggregate values are recorded in addition to the
+  ## raw bucket counts, if those are enabled.
+  ##
+  ## For a list of aligner strings see:
+  ##   https://cloud.google.com/monitoring/api/ref_v3/rpc/google.monitoring.v3#aligner
+  # distribution_aggregation_aligners = [
+  #  "ALIGN_PERCENTILE_99",
+  #  "ALIGN_PERCENTILE_95",
+  #  "ALIGN_PERCENTILE_50",
+  # ]
+
+  ## Filters can be added to reduce the number of time series matched.  All
+  ## functions are supported: starts_with, ends_with, has_substring, and
+  ## one_of.  Only the '=' operator is supported.
+  ##
+  ## The logical operators when combining filters are defined statically using
+  ## the following values:
+  ##   filter ::= <resource_labels> {AND <metric_labels> AND <user_labels> AND <system_labels>}
+  ##   resource_labels ::= <resource_labels> {OR <resource_label>}
+  ##   metric_labels ::= <metric_labels> {OR <metric_label>}
+  ##   user_labels ::= <user_labels> {OR <user_label>}
+  ##   system_labels ::= <system_labels> {OR <system_label>}
+  ##
+  ## For more details, see https://cloud.google.com/monitoring/api/v3/filters
+  #
+  ## Resource labels refine the time series selection with the following expression:
+  ##   resource.labels.<key> = <value>
+  # [[inputs.stackdriver.filter.resource_labels]]
+  #   key = "instance_name"
+  #   value = 'starts_with("localhost")'
+  #
+  ## Metric labels refine the time series selection with the following expression:
+  ##   metric.labels.<key> = <value>
+  #  [[inputs.stackdriver.filter.metric_labels]]
+  #    key = "device_name"
+  #    value = 'one_of("sda", "sdb")'
+  #
+  ## User labels refine the time series selection with the following expression:
+  ##   metadata.user_labels."<key>" = <value>
+  #  [[inputs.stackdriver.filter.user_labels]]
+  #    key = "environment"
+  #    value = 'one_of("prod", "staging")'
+  #
+  ## System labels refine the time series selection with the following expression:
+  ##   metadata.system_labels."<key>" = <value>
+  #  [[inputs.stackdriver.filter.system_labels]]
+  #    key = "machine_type"
+  #    value = 'starts_with("e2-")'
+```
+
+### Authentication
+
+It is recommended to use a service account to authenticate with the
+Stackdriver Monitoring API; see [Getting Started with Authentication](https://cloud.google.com/docs/authentication/getting-started).
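+
+The underlying Google client libraries typically discover credentials via
+Application Default Credentials; a common setup is to point the
+`GOOGLE_APPLICATION_CREDENTIALS` environment variable at a service account
+key file (the path below is illustrative):
+
+```sh
+export GOOGLE_APPLICATION_CREDENTIALS=/etc/telegraf/gcp-service-account.json
+```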
+
+## Metrics
+
+Metrics are created using one of three patterns, depending on whether the
+value type is a scalar value, raw distribution buckets, or aligned bucket
+values.
+
+In all cases, the Stackdriver metric type is split on the last component into
+the measurement and field:
+
+```sh
+compute.googleapis.com/instance/disk/read_bytes_count
+└──────────  measurement  ─────────┘ └──  field  ───┘
+```
+
+**Scalar Values:**
+
+- measurement
+  - tags:
+    - resource_labels
+    - metric_labels
+  - fields:
+    - field
+
+**Distributions:**
+
+Distributions are represented by a set of fields along with the bucket values
+tagged with the bucket boundary.  Buckets are cumulative: each bucket
+represents the total number of items less than the `lt` tag.
+
+- measurement
+  - tags:
+    - resource_labels
+    - metric_labels
+  - fields:
+    - field_count
+    - field_mean
+    - field_sum_of_squared_deviation
+    - field_range_min
+    - field_range_max
+
+- measurement
+  - tags:
+    - resource_labels
+    - metric_labels
+    - lt (less than)
+  - fields:
+    - field_bucket
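+
+Because buckets are cumulative, per-bucket counts can be recovered by
+subtracting adjacent `field_bucket` values. A small worked example with
+hypothetical values:
+
+```text
+lt=10  field_bucket=5    -> 5 items below 10
+lt=20  field_bucket=8    -> 8 - 5 = 3 items in [10, 20)
+lt=30  field_bucket=12   -> 12 - 8 = 4 items in [20, 30)
+```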
+
+**Aligned Aggregations:**
+
+- measurement
+  - tags:
+    - resource_labels
+    - metric_labels
+  - fields:
+    - field_alignment_function
+
+## Troubleshooting
+
+When Telegraf is run with `--debug`, detailed information about the
+performed queries will be logged.
+
+## Example Output
+
diff --git a/content/telegraf/v1/input-plugins/statsd/_index.md b/content/telegraf/v1/input-plugins/statsd/_index.md
new file mode 100644
index 000000000..831917b6e
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/statsd/_index.md
@@ -0,0 +1,321 @@
+---
+description: "Telegraf plugin for collecting metrics from StatsD"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: StatsD
+    identifier: input-statsd
+tags: [StatsD, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# StatsD Input Plugin
+
+The StatsD input plugin runs a StatsD-compatible listener service and
+gathers the metrics sent to it.
+
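+Metrics arrive on the configured socket in the standard StatsD line format,
+`<bucket>:<value>|<type>`, optionally followed by an `@`-prefixed sample
+rate. For example (metric names are illustrative):
+
+```text
+deploys.prod.myservice:1|c        # counter
+users.current.myservice:32|g      # gauge
+api.request_time:320|ms|@0.1      # timing, sampled at 10%
+```
+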
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by
+the interval setting. Service plugins start a service that listens and waits
+for metrics or events to occur. Service plugins have two key differences
+from normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering,
+etc. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Statsd Server
+[[inputs.statsd]]
+  ## Protocol, must be "tcp", "udp4", "udp6" or "udp" (default=udp)
+  protocol = "udp"
+
+  ## MaxTCPConnection - applicable when protocol is set to tcp (default=250)
+  max_tcp_connections = 250
+
+  ## Enable TCP keep alive probes (default=false)
+  tcp_keep_alive = false
+
+  ## Specifies the keep-alive period for an active network connection.
+  ## Only applies to TCP sockets and will be ignored if tcp_keep_alive is false.
+  ## Defaults to the OS configuration.
+  # tcp_keep_alive_period = "2h"
+
+  ## Address and port to host UDP listener on
+  service_address = ":8125"
+
+  ## The following configuration options control when telegraf clears its
+  ## cache of previous values. If set to false, telegraf will only clear its
+  ## cache when the daemon is restarted.
+  ## Reset gauges every interval (default=true)
+  delete_gauges = true
+  ## Reset counters every interval (default=true)
+  delete_counters = true
+  ## Reset sets every interval (default=true)
+  delete_sets = true
+  ## Reset timings & histograms every interval (default=true)
+  delete_timings = true
+
+  ## Enabling aggregation temporality adds a temporality=delta or
+  ## temporality=cumulative tag and a start_time field, which records the
+  ## start time of the metric accumulation. Enable this when using the
+  ## OpenTelemetry output.
+  # enable_aggregation_temporality = false
+
+  ## Percentiles to calculate for timing & histogram stats.
+  percentiles = [50.0, 90.0, 99.0, 99.9, 99.95, 100.0]
+
+  ## separator to use between elements of a statsd metric
+  metric_separator = "_"
+
+  ## Parses extensions to statsd in the datadog statsd format
+  ## currently supports metrics and datadog tags.
+  ## http://docs.datadoghq.com/guides/dogstatsd/
+  datadog_extensions = false
+
+  ## Parses distributions metric as specified in the datadog statsd format
+  ## https://docs.datadoghq.com/developers/metrics/types/?tab=distribution#definition
+  datadog_distributions = false
+
+  ## Keep or drop the container id as tag. Included as optional field
+  ## in DogStatsD protocol v1.2 if source is running in Kubernetes
+  ## https://docs.datadoghq.com/developers/dogstatsd/datagram_shell/?tab=metrics#dogstatsd-protocol-v12
+  datadog_keep_container_tag = false
+
+  ## Statsd data translation templates, more info can be read here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/TEMPLATE_PATTERN.md
+  # templates = [
+  #     "cpu.* measurement*"
+  # ]
+
+  ## Number of UDP messages allowed to queue up. Once the queue is full,
+  ## the statsd server will start dropping packets.
+  allowed_pending_messages = 10000
+
+  ## Number of worker threads used to parse the incoming messages.
+  # number_workers_threads = 5
+
+  ## Number of timing/histogram values to track per-measurement in the
+  ## calculation of percentiles. Raising this limit increases the accuracy
+  ## of percentiles but also increases the memory usage and cpu time.
+  percentile_limit = 1000
+
+  ## Maximum socket buffer size in bytes. Once the buffer fills up, metrics
+  ## will start dropping. Defaults to the OS default.
+  # read_buffer_size = 65535
+
+  ## Max duration (TTL) for each metric to stay cached/reported without being updated.
+  # max_ttl = "10h"
+
+  ## Sanitize name method
+  ## By default, telegraf will pass names directly as they are received.
+  ## However, upstream statsd now sanitizes names; this can be enabled with
+  ## the "upstream" method option. This method replaces whitespace with '_',
+  ## replaces '/' with '-', and removes characters not matching
+  ## 'a-zA-Z_\-0-9\.;='.
+  # sanitize_name_method = ""
+
+  ## Replace dots (.) with underscore (_) and dashes (-) with
+  ## double underscore (__) in metric names.
+  # convert_names = false
+
+  ## Convert all numeric counters to float
+  ## Enabling this ensures that both counters and gauges are emitted
+  ## as floats.
+  # float_counters = false
+```
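+
+As an illustrative sketch (not the plugin's actual implementation), the
+"upstream" sanitization described above could look like this in Python, where
+`sanitize_upstream` is a hypothetical helper name:
+
+```python
+import re
+
+def sanitize_upstream(name: str) -> str:
+    # Replace whitespace with '_' and '/' with '-', then drop
+    # characters outside the allowed set.
+    name = re.sub(r"\s+", "_", name)
+    name = name.replace("/", "-")
+    return re.sub(r"[^a-zA-Z_\-0-9.;=]", "", name)
+
+print(sanitize_upstream("web server/load avg"))  # web_server-load_avg
+```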
+
+## Description
+
+The statsd plugin is a service plugin that runs a background statsd listener
+while Telegraf is running.
+
+The format of the statsd messages is based on the format described in the
+original [etsy
+statsd](https://github.com/etsy/statsd/blob/master/docs/metric_types.md)
+implementation. In short, the telegraf statsd listener will accept:
+
+- Gauges
+  - `users.current.den001.myapp:32|g` <- standard
+  - `users.current.den001.myapp:+10|g` <- additive
+  - `users.current.den001.myapp:-10|g`
+- Counters
+  - `deploys.test.myservice:1|c` <- increments by 1
+  - `deploys.test.myservice:101|c` <- increments by 101
+  - `deploys.test.myservice:1|c|@0.1` <- with sample rate, increments by 10
+- Sets
+  - `users.unique:101|s`
+  - `users.unique:101|s`
+  - `users.unique:102|s` <- would result in a count of 2 for `users.unique`
+- Timings & Histograms
+  - `load.time:320|ms`
+  - `load.time.nanoseconds:1|h`
+  - `load.time:200|ms|@0.1` <- sampled 1/10 of the time
+- Distributions
+  - `load.time:320|d`
+  - `load.time.nanoseconds:1|d`
+  - `load.time:200|d|@0.1` <- sampled 1/10 of the time
+
+It is possible to omit repetitive names and merge individual stats into a
+single line by separating them with additional colons:
+
+- `users.current.den001.myapp:32|g:+10|g:-10|g`
+- `deploys.test.myservice:1|c:101|c:1|c|@0.1`
+- `users.unique:101|s:101|s:102|s`
+- `load.time:320|ms:200|ms|@0.1`
+
+This also allows for mixed types in a single line:
+
+- `foo:1|c:200|ms`
+
+The string `foo:1|c:200|ms` is internally split into two individual metrics
+`foo:1|c` and `foo:200|ms` which are added to the aggregator separately.
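+
+The splitting and sample-rate behavior described above can be sketched in
+Python (a simplified illustration; `parse_statsd_line` is a hypothetical
+helper, not the plugin's actual parser):
+
+```python
+def parse_statsd_line(line):
+    """Split a statsd line into (name, value, type, sample_rate) tuples."""
+    name, _, rest = line.partition(":")
+    metrics = []
+    for stat in rest.split(":"):
+        parts = stat.split("|")
+        value, mtype = float(parts[0]), parts[1]
+        rate = float(parts[2].lstrip("@")) if len(parts) > 2 else 1.0
+        # Counters are scaled by the inverse of the sample rate,
+        # e.g. 1|c|@0.1 counts as 10.
+        if mtype == "c":
+            value = value / rate
+        metrics.append((name, value, mtype, rate))
+    return metrics
+
+print(parse_statsd_line("foo:1|c:200|ms"))
+```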
+
+## Influx Statsd
+
+In order to take advantage of InfluxDB's tagging system, we have made a
+couple of additions to the standard statsd protocol. First, you can specify
+tags in a manner similar to the line protocol, like this:
+
+```shell
+users.current,service=payroll,region=us-west:32|g
+```
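+
+A minimal sketch of separating such a bucket into its name, tags, and value
+(illustrative only; `split_bucket` is a hypothetical helper):
+
+```python
+def split_bucket(bucket):
+    """Separate an influx-style statsd bucket into name, tags, and value."""
+    name_part, _, value_part = bucket.partition(":")
+    name, *tag_pairs = name_part.split(",")
+    tags = dict(pair.split("=", 1) for pair in tag_pairs)
+    return name, tags, value_part
+
+print(split_bucket("users.current,service=payroll,region=us-west:32|g"))
+```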
+
+<!-- TODO Second, you can specify multiple fields within a measurement:
+
+```
+current.users,service=payroll,server=host01:west=10,east=10,central=2,south=10|g
+```
+
+-->
+
+## Metrics
+
+Meta:
+
+- tags: `metric_type=<gauge|set|counter|timing|histogram>`
+
+The measurements output will depend entirely on the measurements that the
+user sends, but here is a brief rundown of what you can expect to find from
+each metric type:
+
+- Gauges
+  - Gauges are a constant data type. They are not subject to averaging, and they
+    don’t change unless you change them. That is, once you set a gauge value, it
+    will be a flat line on the graph until you change it again.
+- Counters
+  - Counters are the most basic type. They are treated as a count of a type of
+    event. They will continually increase unless you set `delete_counters=true`.
+- Sets
+  - Sets count the number of unique values passed to a key. For example, you
+    could count the number of users accessing your system using `users:<user_id>|s`.
+    No matter how many times the same user_id is sent, the count will only increase
+    by 1.
+- Timings & Histograms
+  - Timers are meant to track how long something took. They are an invaluable
+    tool for tracking application performance.
+  - The following aggregate measurements are made for timers:
+    - `statsd_<name>_lower`: The lower bound is the lowest value statsd saw
+        for that stat during that interval.
+    - `statsd_<name>_upper`: The upper bound is the highest value statsd saw
+        for that stat during that interval.
+    - `statsd_<name>_mean`: The mean is the average of all values statsd saw
+        for that stat during that interval.
+    - `statsd_<name>_median`: The median is the middle of all values statsd saw
+        for that stat during that interval.
+    - `statsd_<name>_stddev`: The stddev is the sample standard deviation
+        of all values statsd saw for that stat during that interval.
+    - `statsd_<name>_sum`: The sum is the sample sum of all values statsd saw
+        for that stat during that interval.
+    - `statsd_<name>_count`: The count is the number of timings statsd saw
+        for that stat during that interval. It is not averaged.
+    - `statsd_<name>_percentile_<P>`: The `Pth` percentile is a value x such
+        that `P%` of all the values statsd saw for that stat during that time
+        period are below x. The most common value used for `P` is `90`; it is
+        a great number to try to optimize.
+- Distributions
+  - The Distribution metric represents the global statistical distribution
+    of a set of values calculated across your entire distributed
+    infrastructure in one time interval. A Distribution can be used to
+    instrument logical objects, like services, independently from the
+    underlying hosts.
+  - Unlike the Histogram metric type, which aggregates on the Agent during a
+    given time interval, a Distribution metric sends all the raw data during
+    a time interval.
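+
+The percentile aggregates described above can be approximated with a simple
+nearest-rank calculation (an illustrative sketch; the plugin's exact method
+may differ):
+
+```python
+import math
+
+def percentile(values, p):
+    """Return the value below which roughly p% of samples fall."""
+    ordered = sorted(values)
+    if p >= 100:
+        return ordered[-1]
+    rank = int(math.ceil(p / 100.0 * len(ordered)))
+    return ordered[max(rank - 1, 0)]
+
+timings = [320, 200, 150, 410, 380, 250, 300, 275, 290, 310]
+print(percentile(timings, 90.0))  # 380
+```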
+
+## Plugin arguments
+
+- **protocol** string: Protocol used in listener - tcp or udp options
+- **max_tcp_connections** []int: Maximum number of concurrent TCP connections
+to allow. Used when protocol is set to tcp.
+- **tcp_keep_alive** boolean: Enable TCP keep alive probes
+- **tcp_keep_alive_period** duration: Specifies the keep-alive period for an active network connection
+- **service_address** string: Address to listen for statsd UDP packets on
+- **delete_gauges** boolean: Delete gauges on every collection interval
+- **delete_counters** boolean: Delete counters on every collection interval
+- **delete_sets** boolean: Delete set counters on every collection interval
+- **delete_timings** boolean: Delete timings on every collection interval
+- **percentiles** []float: Percentiles to calculate for timing & histogram stats
+- **allowed_pending_messages** integer: Number of messages allowed to queue up
+waiting to be processed. When this fills, messages will be dropped and logged.
+- **percentile_limit** integer: Number of timing/histogram values to track
+per-measurement in the calculation of percentiles. Raising this limit increases
+the accuracy of percentiles but also increases the memory usage and cpu time.
+- **templates** []string: Templates for transforming statsd buckets into influx
+measurements and tags.
+- **parse_data_dog_tags** boolean: Enable parsing of tags in DataDog's dogstatsd format (<http://docs.datadoghq.com/guides/dogstatsd/>)
+- **datadog_extensions** boolean: Enable parsing of DataDog's extensions to dogstatsd format (<http://docs.datadoghq.com/guides/dogstatsd/>)
+- **datadog_distributions** boolean: Enable parsing of the Distribution metric in DataDog's dogstatsd format (<https://docs.datadoghq.com/developers/metrics/types/?tab=distribution#definition>)
+- **datadog_keep_container_tag** boolean: Keep or drop the container id as tag. Included as optional field in DogStatsD protocol v1.2 if source is running in Kubernetes.
+- **max_ttl** config.Duration: Max duration (TTL) for each metric to stay cached/reported without being updated.
+
+## Statsd bucket -> InfluxDB line-protocol Templates
+
+The plugin supports specifying templates for transforming statsd buckets into
+InfluxDB measurement names and tags. The templates have a _measurement_ keyword,
+which can be used to specify parts of the bucket that are to be used in the
+measurement name. Other words in the template are used as tag names. For
+example, the following template:
+
+```toml
+templates = [
+    "measurement.measurement.region"
+]
+```
+
+would result in the following transformation:
+
+```shell
+cpu.load.us-west:100|g
+=> cpu_load,region=us-west 100
+```
+
+Users can also filter the template to use based on the name of the bucket,
+using glob matching, like so:
+
+```toml
+templates = [
+    "cpu.* measurement.measurement.region",
+    "mem.* measurement.measurement.host"
+]
+```
+
+which would result in the following transformation:
+
+```shell
+cpu.load.us-west:100|g
+=> cpu_load,region=us-west 100
+
+mem.cached.localhost:256|g
+=> mem_cached,host=localhost 256
+```
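+
+A simplified sketch of the transformation shown above (`apply_templates` is a
+hypothetical helper; the real template engine supports more keywords, see the
+Template Patterns documentation):
+
+```python
+import fnmatch
+
+def apply_templates(bucket, templates):
+    """Apply the first matching template to a statsd bucket name."""
+    for tmpl in templates:
+        filt, _, pattern = tmpl.rpartition(" ")
+        # An optional glob filter precedes the template pattern.
+        if filt and not fnmatch.fnmatch(bucket, filt):
+            continue
+        parts = bucket.split(".")
+        fields = (pattern or tmpl).split(".")
+        measurement, tags = [], {}
+        for part, field in zip(parts, fields):
+            if field == "measurement":
+                measurement.append(part)
+            else:
+                tags[field] = part
+        return "_".join(measurement), tags
+    return bucket, {}
+
+print(apply_templates("cpu.load.us-west",
+                      ["cpu.* measurement.measurement.region"]))
+```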
+
+Consult the Template Patterns documentation for
+additional details.
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/supervisor/_index.md b/content/telegraf/v1/input-plugins/supervisor/_index.md
new file mode 100644
index 000000000..c4e7ba5c2
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/supervisor/_index.md
@@ -0,0 +1,123 @@
+---
+description: "Telegraf plugin for collecting metrics from Supervisor"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Supervisor
+    identifier: input-supervisor
+tags: [Supervisor, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Supervisor Input Plugin
+
+This plugin gathers information about processes running under supervisor
+using the XML-RPC API.
+
+Minimum tested version of supervisor: 3.3.2
+
+## Supervisor configuration
+
+This plugin requires an HTTP server to be enabled in supervisor. It is also
+recommended to enable basic authentication on the HTTP server. When using
+basic authentication, make sure to include the username and password in the
+plugin's url setting. Here is an example of the `inet_http_server` section in
+supervisor's config that will work with the default plugin configuration:
+
+```ini
+[inet_http_server]
+port = 127.0.0.1:9001
+username = user
+password = pass
+```
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Gathers information about processes running under supervisor using the XML-RPC API
+[[inputs.supervisor]]
+  ## Url of supervisor's XML-RPC endpoint. If basic auth is enabled in the
+  ## supervisor http server, add the credentials to the url (e.g. http://login:pass@localhost:9001/RPC2)
+  # url="http://localhost:9001/RPC2"
+  ## The settings below control gathering of additional information about processes.
+  ## If both of them are empty, all additional information will be collected.
+  ## Currently supported additional metrics are: pid, rc
+  # metrics_include = []
+  # metrics_exclude = ["pid", "rc"]
+```
+
+### Optional metrics
+
+You can control the gathering of some of supervisor's metrics (process PIDs
+and exit codes) by setting the metrics_include and metrics_exclude parameters
+in the configuration file.
+
+### Server tag
+
+The server tag is used to identify the metrics' source server. By default
+this is the host:port pair of supervisor's http endpoint, but you can instead
+use supervisor's identification string, which is set in supervisor's
+configuration file.
+
+## Metrics
+
+- supervisor_processes
+  - Tags:
+    - source (Hostname or IP address of supervisor's instance)
+    - port (Port number of supervisor's HTTP server)
+    - id (Supervisor's identification string)
+    - name (Process name)
+    - group (Process group)
+  - Fields:
+    - state (int, see reference)
+    - uptime (int, seconds)
+    - pid (int, optional)
+    - exitCode (int, optional)
+
+- supervisor_instance
+  - Tags:
+    - source (Hostname or IP address of supervisor's instance)
+    - port (Port number of supervisor's HTTP server)
+    - id (Supervisor's identification string)
+  - Fields:
+    - state (int, see reference)
+
+### Supervisor process state field reference table
+
+|Statecode|Statename|                                            Description                                                 |
+|--------|----------|--------------------------------------------------------------------------------------------------------|
+|    0   |  STOPPED |             The process has been stopped due to a stop request or has never been started.              |
+|   10   | STARTING |                             The process is starting due to a start request.                            |
+|   20   |  RUNNING |                                       The process is running.                                          |
+|   30   |  BACKOFF |The process entered the STARTING state but subsequently exited too quickly to move to the RUNNING state.|
+|   40   | STOPPING |                           The process is stopping due to a stop request.                               |
+|   100  |  EXITED  |                 The process exited from the RUNNING state (expectedly or unexpectedly).                |
+|   200  |   FATAL  |                            The process could not be started successfully.                              |
+|  1000  |  UNKNOWN |                  The process is in an unknown state (supervisord programming error).                   |
+
+### Supervisor instance state field reference
+
+|Statecode| Statename  |                  Description                 |
+|---------|------------|----------------------------------------------|
+|    2    |    FATAL   |  Supervisor has experienced a serious error. |
+|    1    |   RUNNING  |         Supervisor is working normally.      |
+|    0    | RESTARTING |  Supervisor is in the process of restarting. |
+|   -1    |  SHUTDOWN  |Supervisor is in the process of shutting down.|
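+
+For downstream processing, the state codes from the tables above can be
+decoded with a small lookup; for example (a hypothetical helper, not part of
+the plugin):
+
+```python
+# State codes taken from the process state reference table above.
+PROCESS_STATES = {
+    0: "STOPPED", 10: "STARTING", 20: "RUNNING", 30: "BACKOFF",
+    40: "STOPPING", 100: "EXITED", 200: "FATAL", 1000: "UNKNOWN",
+}
+
+def is_healthy(state_code):
+    # Treat RUNNING (and the transitional STARTING) as healthy.
+    return PROCESS_STATES.get(state_code) in ("RUNNING", "STARTING")
+
+print(is_healthy(20))  # True
+```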
+
+## Example Output
+
+```text
+supervisor_processes,group=ExampleGroup,id=supervisor,port=9001,process=ExampleProcess,source=localhost state=20i,uptime=75958i 1659786637000000000
+supervisor_instance,id=supervisor,port=9001,source=localhost state=1i 1659786637000000000
+```
diff --git a/content/telegraf/v1/input-plugins/suricata/_index.md b/content/telegraf/v1/input-plugins/suricata/_index.md
new file mode 100644
index 000000000..a5e78cb3c
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/suricata/_index.md
@@ -0,0 +1,206 @@
+---
+description: "Telegraf plugin for collecting metrics from Suricata"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Suricata
+    identifier: input-suricata
+tags: [Suricata, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Suricata Input Plugin
+
+This plugin reports internal performance counters of the Suricata IDS/IPS
+engine, such as captured traffic volume, memory usage, uptime, flow counters,
+and much more. It provides a socket for the Suricata log output to write JSON
+stats output to, and processes the incoming data to fit Telegraf's format.
+It can also report triggered Suricata IDS/IPS alerts.
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Suricata stats and alerts plugin
+[[inputs.suricata]]
+  ## Source
+  ## Data sink for Suricata stats log. This is expected to be a filename of a
+  ## unix socket to be created for listening.
+  # source = "/var/run/suricata-stats.sock"
+
+  ## Delimiter
+  ## Used for flattening field keys, e.g. subitem "alert" of "detect" becomes
+  ## "detect_alert" when delimiter is "_".
+  # delimiter = "_"
+
+  ## Metric version
+  ## Version 1 only collects stats and optionally will look for alerts if
+  ## the configuration setting alerts is set to true.
+  ## Version 2 parses any event type message by default and produces metrics
+  ## under a single metric name using a tag to differentiate between event
+  ## types. The timestamp for the message is applied to the generated metric.
+  ## Additional tags and fields are included as well.
+  # version = "1"
+
+  ## Alerts
+  ## In metric version 1, only stats are captured by default; alerts must be
+  ## turned on with this configuration option. This option does not apply to
+  ## metric version 2.
+  # alerts = false
+```
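+
+The key flattening described by the `delimiter` option above can be sketched
+as follows (illustrative only; `flatten` is a hypothetical helper):
+
+```python
+def flatten(stats, delimiter="_", prefix=""):
+    """Flatten nested stats keys, e.g. subitem "alert" of "detect"
+    becomes "detect_alert"."""
+    fields = {}
+    for key, value in stats.items():
+        name = prefix + delimiter + key if prefix else key
+        if isinstance(value, dict):
+            fields.update(flatten(value, delimiter, name))
+        else:
+            fields[name] = value
+    return fields
+
+print(flatten({"detect": {"alert": 0}, "uptime": 42}))
+# {'detect_alert': 0, 'uptime': 42}
+```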
+
+## Metrics
+
+Fields in the 'suricata' measurement follow the JSON format used by Suricata's
+stats output.
+See <http://suricata.readthedocs.io/en/latest/performance/statistics.html> for
+more information.
+
+All fields for Suricata stats are numeric.
+
+- suricata
+  - tags:
+    - thread: `Global` for global statistics (if enabled), thread IDs (e.g. `W#03-enp0s31f6`) for thread-specific statistics
+  - fields:
+    - app_layer_flow_dcerpc_udp
+    - app_layer_flow_dns_tcp
+    - app_layer_flow_dns_udp
+    - app_layer_flow_enip_udp
+    - app_layer_flow_failed_tcp
+    - app_layer_flow_failed_udp
+    - app_layer_flow_http
+    - app_layer_flow_ssh
+    - app_layer_flow_tls
+    - app_layer_tx_dns_tcp
+    - app_layer_tx_dns_udp
+    - app_layer_tx_enip_udp
+    - app_layer_tx_http
+    - app_layer_tx_smtp
+    - capture_kernel_drops
+    - capture_kernel_packets
+    - decoder_avg_pkt_size
+    - decoder_bytes
+    - decoder_ethernet
+    - decoder_gre
+    - decoder_icmpv4
+    - decoder_icmpv4_ipv4_unknown_ver
+    - decoder_icmpv6
+    - decoder_invalid
+    - decoder_ipv4
+    - decoder_ipv6
+    - decoder_max_pkt_size
+    - decoder_pkts
+    - decoder_tcp
+    - decoder_tcp_hlen_too_small
+    - decoder_tcp_invalid_optlen
+    - decoder_teredo
+    - decoder_udp
+    - decoder_vlan
+    - detect_alert
+    - dns_memcap_global
+    - dns_memuse
+    - flow_memuse
+    - flow_mgr_closed_pruned
+    - flow_mgr_est_pruned
+    - flow_mgr_flows_checked
+    - flow_mgr_flows_notimeout
+    - flow_mgr_flows_removed
+    - flow_mgr_flows_timeout
+    - flow_mgr_flows_timeout_inuse
+    - flow_mgr_new_pruned
+    - flow_mgr_rows_checked
+    - flow_mgr_rows_empty
+    - flow_mgr_rows_maxlen
+    - flow_mgr_rows_skipped
+    - flow_spare
+    - flow_tcp_reuse
+    - http_memuse
+    - tcp_memuse
+    - tcp_pseudo
+    - tcp_reassembly_gap
+    - tcp_reassembly_memuse
+    - tcp_rst
+    - tcp_sessions
+    - tcp_syn
+    - tcp_synack
+    - ...
+
+Some fields of the Suricata alerts are strings, for example the signatures. See
+the Suricata [event docs](https://suricata.readthedocs.io/en/suricata-6.0.0/output/eve/eve-json-format.html?highlight=priority#event-type-alert) for more information.
+
+- suricata_alert
+  - fields:
+    - action
+    - gid
+    - severity
+    - signature
+    - source_ip
+    - source_port
+    - target_ip
+    - target_port
+    - ...
+
+### Suricata configuration
+
+Suricata needs to deliver the 'stats' event type to a given unix socket for
+this plugin to pick up. This can be done, for example, by creating an additional
+output in the Suricata configuration file:
+
+```yaml
+- eve-log:
+    enabled: yes
+    filetype: unix_stream
+    filename: /tmp/suricata-stats.sock
+    types:
+      - stats:
+         threads: yes
+```
+
+### FreeBSD tuning
+
+Under FreeBSD it is necessary to increase the localhost buffer space to at
+least 16384; the default is 8192. Otherwise, messages from Suricata are
+truncated as they exceed the default available buffer space, and consequently
+no statistics are processed by the plugin.
+
+```text
+sysctl -w net.local.stream.recvspace=16384
+sysctl -w net.local.stream.sendspace=16384
+```
+
+## Example Output
+
+```text
+suricata,host=myhost,thread=FM#01 flow_mgr_rows_empty=0,flow_mgr_rows_checked=65536,flow_mgr_closed_pruned=0,flow_emerg_mode_over=0,flow_mgr_flows_timeout_inuse=0,flow_mgr_rows_skipped=65535,flow_mgr_bypassed_pruned=0,flow_mgr_flows_removed=0,flow_mgr_est_pruned=0,flow_mgr_flows_notimeout=1,flow_mgr_flows_checked=1,flow_mgr_rows_busy=0,flow_spare=10000,flow_mgr_rows_maxlen=1,flow_mgr_new_pruned=0,flow_emerg_mode_entered=0,flow_tcp_reuse=0,flow_mgr_flows_timeout=0 1568368562545197545
+suricata,host=myhost,thread=W#04-wlp4s0 decoder_ltnull_pkt_too_small=0,decoder_ipraw_invalid_ip_version=0,defrag_ipv4_reassembled=0,tcp_no_flow=0,app_layer_flow_tls=1,decoder_udp=25,defrag_ipv6_fragments=0,defrag_ipv4_fragments=0,decoder_tcp=59,decoder_vlan=0,decoder_pkts=84,decoder_vlan_qinq=0,decoder_avg_pkt_size=574,flow_memcap=0,defrag_max_frag_hits=0,tcp_ssn_memcap_drop=0,capture_kernel_packets=84,app_layer_flow_dcerpc_udp=0,app_layer_tx_dns_tcp=0,tcp_rst=0,decoder_icmpv4=0,app_layer_tx_tls=0,decoder_ipv4=84,decoder_erspan=0,decoder_ltnull_unsupported_type=0,decoder_invalid=0,app_layer_flow_ssh=0,capture_kernel_drops=0,app_layer_flow_ftp=0,app_layer_tx_http=0,tcp_pseudo_failed=0,defrag_ipv6_reassembled=0,defrag_ipv6_timeouts=0,tcp_pseudo=0,tcp_sessions=1,decoder_ethernet=84,decoder_raw=0,decoder_sctp=0,app_layer_flow_dns_udp=1,decoder_gre=0,app_layer_flow_http=0,app_layer_flow_imap=0,tcp_segment_memcap_drop=0,detect_alert=0,app_layer_flow_failed_tcp=0,decoder_teredo=0,decoder_mpls=0,decoder_ppp=0,decoder_max_pkt_size=1422,decoder_ipv6=0,tcp_reassembly_gap=0,app_layer_flow_dcerpc_tcp=0,decoder_ipv4_in_ipv6=0,tcp_stream_depth_reached=0,app_layer_flow_dns_tcp=0,app_layer_flow_smtp=0,tcp_syn=1,decoder_sll=0,tcp_invalid_checksum=0,app_layer_tx_dns_udp=1,decoder_bytes=48258,defrag_ipv4_timeouts=0,app_layer_flow_msn=0,decoder_pppoe=0,decoder_null=0,app_layer_flow_failed_udp=3,app_layer_tx_smtp=0,decoder_icmpv6=0,decoder_ipv6_in_ipv6=0,tcp_synack=1,app_layer_flow_smb=0,decoder_dce_pkt_too_small=0 1568368562545174807
+suricata,host=myhost,thread=W#01-wlp4s0 tcp_synack=0,app_layer_flow_imap=0,decoder_ipv4_in_ipv6=0,decoder_max_pkt_size=684,decoder_gre=0,defrag_ipv4_timeouts=0,tcp_invalid_checksum=0,decoder_ipv4=53,flow_memcap=0,app_layer_tx_http=0,app_layer_tx_smtp=0,decoder_null=0,tcp_no_flow=0,app_layer_tx_tls=0,app_layer_flow_ssh=0,app_layer_flow_smtp=0,decoder_pppoe=0,decoder_teredo=0,decoder_ipraw_invalid_ip_version=0,decoder_ltnull_pkt_too_small=0,tcp_rst=0,decoder_ppp=0,decoder_ipv6=29,app_layer_flow_dns_udp=3,decoder_vlan=0,app_layer_flow_dcerpc_tcp=0,tcp_syn=0,defrag_ipv4_fragments=0,defrag_ipv6_timeouts=0,decoder_raw=0,defrag_ipv6_reassembled=0,tcp_reassembly_gap=0,tcp_sessions=0,decoder_udp=44,tcp_segment_memcap_drop=0,app_layer_tx_dns_udp=3,app_layer_flow_tls=0,decoder_tcp=37,defrag_ipv4_reassembled=0,app_layer_flow_failed_udp=6,app_layer_flow_ftp=0,decoder_icmpv6=1,tcp_stream_depth_reached=0,capture_kernel_drops=0,decoder_sll=0,decoder_bytes=15883,decoder_ethernet=91,tcp_pseudo=0,app_layer_flow_http=0,decoder_sctp=0,decoder_pkts=91,decoder_avg_pkt_size=174,decoder_erspan=0,app_layer_flow_msn=0,app_layer_flow_smb=0,capture_kernel_packets=91,decoder_icmpv4=0,decoder_ipv6_in_ipv6=0,tcp_ssn_memcap_drop=0,decoder_vlan_qinq=0,decoder_ltnull_unsupported_type=0,decoder_invalid=0,defrag_max_frag_hits=0,tcp_pseudo_failed=0,detect_alert=0,app_layer_tx_dns_tcp=0,app_layer_flow_failed_tcp=0,app_layer_flow_dcerpc_udp=0,app_layer_flow_dns_tcp=0,defrag_ipv6_fragments=0,decoder_mpls=0,decoder_dce_pkt_too_small=0 1568368562545148438
+suricata,host=myhost flow_memuse=7094464,tcp_memuse=3276800,tcp_reassembly_memuse=12332832,dns_memuse=0,dns_memcap_state=0,dns_memcap_global=0,http_memuse=0,http_memcap=0 1568368562545144569
+suricata,host=myhost,thread=W#07-wlp4s0 app_layer_tx_http=0,app_layer_tx_dns_tcp=0,decoder_vlan=0,decoder_pppoe=0,decoder_sll=0,decoder_tcp=0,flow_memcap=0,app_layer_flow_msn=0,tcp_no_flow=0,tcp_rst=0,tcp_segment_memcap_drop=0,tcp_sessions=0,detect_alert=0,defrag_ipv6_reassembled=0,decoder_ipraw_invalid_ip_version=0,decoder_erspan=0,decoder_icmpv4=0,app_layer_tx_dns_udp=2,decoder_ltnull_pkt_too_small=0,decoder_bytes=1998,decoder_ipv6=1,defrag_ipv4_fragments=0,defrag_ipv6_fragments=0,app_layer_tx_smtp=0,decoder_ltnull_unsupported_type=0,decoder_max_pkt_size=342,app_layer_flow_ftp=0,decoder_ipv6_in_ipv6=0,defrag_ipv4_reassembled=0,defrag_ipv6_timeouts=0,app_layer_flow_dns_tcp=0,decoder_avg_pkt_size=181,defrag_ipv4_timeouts=0,tcp_stream_depth_reached=0,decoder_mpls=0,app_layer_flow_dns_udp=2,tcp_ssn_memcap_drop=0,app_layer_flow_dcerpc_tcp=0,app_layer_flow_failed_udp=2,app_layer_flow_smb=0,app_layer_flow_failed_tcp=0,decoder_invalid=0,decoder_null=0,decoder_gre=0,decoder_ethernet=11,app_layer_flow_ssh=0,defrag_max_frag_hits=0,capture_kernel_drops=0,tcp_pseudo_failed=0,app_layer_flow_smtp=0,decoder_udp=10,decoder_sctp=0,decoder_teredo=0,decoder_icmpv6=1,tcp_pseudo=0,tcp_synack=0,app_layer_tx_tls=0,app_layer_flow_imap=0,capture_kernel_packets=11,decoder_pkts=11,decoder_raw=0,decoder_ppp=0,tcp_syn=0,tcp_invalid_checksum=0,app_layer_flow_tls=0,decoder_ipv4_in_ipv6=0,app_layer_flow_http=0,decoder_dce_pkt_too_small=0,decoder_ipv4=10,decoder_vlan_qinq=0,tcp_reassembly_gap=0,app_layer_flow_dcerpc_udp=0 1568368562545110847
+suricata,host=myhost,thread=W#06-wlp4s0 app_layer_tx_smtp=0,decoder_ipv6_in_ipv6=0,decoder_dce_pkt_too_small=0,tcp_segment_memcap_drop=0,tcp_sessions=1,decoder_ppp=0,tcp_pseudo_failed=0,app_layer_tx_dns_tcp=0,decoder_invalid=0,defrag_ipv4_timeouts=0,app_layer_flow_smb=0,app_layer_flow_ssh=0,decoder_bytes=19407,decoder_null=0,app_layer_flow_tls=1,decoder_avg_pkt_size=473,decoder_pkts=41,decoder_pppoe=0,decoder_tcp=32,defrag_ipv4_reassembled=0,tcp_reassembly_gap=0,decoder_raw=0,flow_memcap=0,defrag_ipv6_timeouts=0,app_layer_flow_smtp=0,app_layer_tx_http=0,decoder_sll=0,decoder_udp=8,decoder_ltnull_pkt_too_small=0,decoder_ltnull_unsupported_type=0,decoder_ipv4_in_ipv6=0,decoder_vlan=0,decoder_max_pkt_size=1422,tcp_no_flow=0,app_layer_flow_failed_tcp=0,app_layer_flow_dns_tcp=0,app_layer_flow_ftp=0,decoder_icmpv4=0,defrag_max_frag_hits=0,tcp_rst=0,app_layer_flow_msn=0,app_layer_flow_failed_udp=2,app_layer_flow_dns_udp=0,app_layer_flow_dcerpc_udp=0,decoder_ipv4=39,decoder_ethernet=41,defrag_ipv6_reassembled=0,tcp_ssn_memcap_drop=0,app_layer_tx_tls=0,decoder_gre=0,decoder_vlan_qinq=0,tcp_pseudo=0,app_layer_flow_imap=0,app_layer_flow_dcerpc_tcp=0,defrag_ipv4_fragments=0,defrag_ipv6_fragments=0,tcp_synack=1,app_layer_flow_http=0,app_layer_tx_dns_udp=0,capture_kernel_packets=41,decoder_ipv6=2,tcp_invalid_checksum=0,tcp_stream_depth_reached=0,decoder_ipraw_invalid_ip_version=0,decoder_icmpv6=1,tcp_syn=1,detect_alert=0,capture_kernel_drops=0,decoder_teredo=0,decoder_erspan=0,decoder_sctp=0,decoder_mpls=0 1568368562545084670
+suricata,host=myhost,thread=W#02-wlp4s0 decoder_tcp=53,tcp_rst=3,tcp_reassembly_gap=0,defrag_ipv6_timeouts=0,tcp_ssn_memcap_drop=0,app_layer_flow_dcerpc_tcp=0,decoder_max_pkt_size=1422,decoder_ipv6_in_ipv6=0,tcp_no_flow=0,app_layer_flow_ftp=0,app_layer_flow_ssh=0,decoder_pkts=82,decoder_sctp=0,tcp_invalid_checksum=0,app_layer_flow_dns_tcp=0,decoder_ipraw_invalid_ip_version=0,decoder_bytes=26441,decoder_erspan=0,tcp_pseudo_failed=0,tcp_syn=1,app_layer_tx_http=0,app_layer_tx_smtp=0,decoder_teredo=0,decoder_ipv4=80,defrag_ipv4_fragments=0,tcp_stream_depth_reached=0,app_layer_flow_smb=0,capture_kernel_packets=82,decoder_null=0,decoder_ltnull_pkt_too_small=0,decoder_ppp=0,decoder_icmpv6=1,app_layer_flow_dns_udp=2,app_layer_flow_http=0,app_layer_tx_dns_udp=3,decoder_mpls=0,decoder_sll=0,defrag_ipv4_reassembled=0,tcp_segment_memcap_drop=0,app_layer_flow_imap=0,decoder_ltnull_unsupported_type=0,decoder_icmpv4=0,decoder_raw=0,defrag_ipv4_timeouts=0,app_layer_flow_failed_udp=8,decoder_gre=0,capture_kernel_drops=0,defrag_ipv6_reassembled=0,tcp_pseudo=0,app_layer_flow_tls=1,decoder_avg_pkt_size=322,decoder_dce_pkt_too_small=0,decoder_ethernet=82,defrag_ipv6_fragments=0,tcp_sessions=1,tcp_synack=1,app_layer_tx_dns_tcp=0,decoder_vlan=0,flow_memcap=0,decoder_vlan_qinq=0,decoder_udp=28,decoder_invalid=0,detect_alert=0,app_layer_flow_failed_tcp=0,app_layer_tx_tls=0,decoder_pppoe=0,decoder_ipv6=2,decoder_ipv4_in_ipv6=0,defrag_max_frag_hits=0,app_layer_flow_dcerpc_udp=0,app_layer_flow_smtp=0,app_layer_flow_msn=0 1568368562545061864
+suricata,host=myhost,thread=W#08-wlp4s0 decoder_dce_pkt_too_small=0,app_layer_tx_dns_tcp=0,decoder_pkts=58,decoder_ppp=0,decoder_raw=0,decoder_ipv4_in_ipv6=0,decoder_max_pkt_size=1392,tcp_invalid_checksum=0,tcp_syn=0,decoder_ipv4=51,decoder_ipv6_in_ipv6=0,decoder_tcp=0,decoder_ltnull_pkt_too_small=0,flow_memcap=0,decoder_udp=58,tcp_ssn_memcap_drop=0,tcp_pseudo=0,app_layer_flow_dcerpc_udp=0,app_layer_flow_dns_udp=5,app_layer_tx_http=0,capture_kernel_drops=0,decoder_vlan=0,tcp_segment_memcap_drop=0,app_layer_flow_ftp=0,app_layer_flow_imap=0,app_layer_flow_http=0,app_layer_flow_tls=0,decoder_icmpv4=0,decoder_sctp=0,defrag_ipv4_timeouts=0,tcp_reassembly_gap=0,detect_alert=0,decoder_ethernet=58,tcp_pseudo_failed=0,decoder_teredo=0,defrag_ipv4_reassembled=0,tcp_sessions=0,app_layer_flow_msn=0,decoder_ipraw_invalid_ip_version=0,tcp_no_flow=0,app_layer_flow_dns_tcp=0,decoder_null=0,defrag_ipv4_fragments=0,app_layer_flow_dcerpc_tcp=0,app_layer_flow_failed_udp=8,app_layer_tx_tls=0,decoder_bytes=15800,decoder_ipv6=7,tcp_stream_depth_reached=0,decoder_invalid=0,decoder_ltnull_unsupported_type=0,app_layer_tx_dns_udp=6,decoder_pppoe=0,decoder_avg_pkt_size=272,decoder_erspan=0,defrag_ipv6_timeouts=0,app_layer_flow_failed_tcp=0,decoder_gre=0,decoder_sll=0,defrag_max_frag_hits=0,app_layer_flow_ssh=0,capture_kernel_packets=58,decoder_mpls=0,decoder_vlan_qinq=0,tcp_rst=0,app_layer_flow_smb=0,app_layer_tx_smtp=0,decoder_icmpv6=0,defrag_ipv6_fragments=0,defrag_ipv6_reassembled=0,tcp_synack=0,app_layer_flow_smtp=0 1568368562545035575
+suricata,host=myhost,thread=W#05-wlp4s0 tcp_reassembly_gap=0,capture_kernel_drops=0,decoder_ltnull_unsupported_type=0,tcp_sessions=0,tcp_stream_depth_reached=0,tcp_pseudo_failed=0,app_layer_flow_failed_tcp=0,app_layer_tx_dns_tcp=0,decoder_null=0,decoder_dce_pkt_too_small=0,decoder_udp=7,tcp_rst=3,app_layer_flow_dns_tcp=0,decoder_invalid=0,defrag_ipv4_reassembled=0,tcp_synack=0,app_layer_flow_ftp=0,decoder_bytes=3117,decoder_pppoe=0,app_layer_flow_dcerpc_tcp=0,app_layer_flow_smb=0,decoder_ipv6_in_ipv6=0,decoder_ipraw_invalid_ip_version=0,app_layer_flow_imap=0,app_layer_tx_dns_udp=2,decoder_ppp=0,decoder_ipv4=21,decoder_tcp=14,flow_memcap=0,tcp_syn=0,tcp_invalid_checksum=0,decoder_teredo=0,decoder_ltnull_pkt_too_small=0,defrag_max_frag_hits=0,app_layer_tx_tls=0,decoder_pkts=24,decoder_sll=0,defrag_ipv6_fragments=0,app_layer_flow_dcerpc_udp=0,app_layer_flow_smtp=0,decoder_icmpv6=3,defrag_ipv6_timeouts=0,decoder_ipv6=3,decoder_raw=0,defrag_ipv6_reassembled=0,tcp_no_flow=0,detect_alert=0,app_layer_flow_tls=0,decoder_ethernet=24,decoder_vlan=0,decoder_icmpv4=0,decoder_ipv4_in_ipv6=0,app_layer_flow_failed_udp=1,decoder_mpls=0,decoder_max_pkt_size=653,decoder_sctp=0,defrag_ipv4_timeouts=0,tcp_ssn_memcap_drop=0,app_layer_flow_dns_udp=1,app_layer_tx_smtp=0,capture_kernel_packets=24,decoder_vlan_qinq=0,decoder_gre=0,app_layer_flow_ssh=0,app_layer_flow_msn=0,defrag_ipv4_fragments=0,app_layer_flow_http=0,tcp_segment_memcap_drop=0,tcp_pseudo=0,app_layer_tx_http=0,decoder_erspan=0,decoder_avg_pkt_size=129 1568368562545009684
+suricata,host=myhost,thread=W#03-wlp4s0 app_layer_flow_failed_tcp=0,decoder_teredo=0,decoder_ipv6_in_ipv6=0,tcp_pseudo_failed=0,tcp_stream_depth_reached=0,tcp_syn=0,decoder_gre=0,tcp_segment_memcap_drop=0,tcp_ssn_memcap_drop=0,app_layer_tx_smtp=0,decoder_raw=0,decoder_ltnull_pkt_too_small=0,tcp_sessions=0,tcp_reassembly_gap=0,app_layer_flow_ssh=0,app_layer_flow_imap=0,decoder_ipv4=463,decoder_ethernet=463,capture_kernel_packets=463,decoder_pppoe=0,defrag_ipv4_reassembled=0,app_layer_flow_tls=0,app_layer_flow_dcerpc_udp=0,app_layer_flow_dns_udp=0,decoder_vlan=0,decoder_ipraw_invalid_ip_version=0,decoder_mpls=0,tcp_no_flow=0,decoder_avg_pkt_size=445,decoder_udp=432,flow_memcap=0,app_layer_tx_dns_udp=0,app_layer_flow_msn=0,app_layer_flow_http=0,app_layer_flow_dcerpc_tcp=0,decoder_ipv6=0,decoder_ipv4_in_ipv6=0,defrag_ipv4_timeouts=0,defrag_ipv4_fragments=0,defrag_ipv6_timeouts=0,decoder_sctp=0,defrag_ipv6_fragments=0,app_layer_flow_dns_tcp=0,app_layer_tx_tls=0,defrag_max_frag_hits=0,decoder_bytes=206345,decoder_vlan_qinq=0,decoder_invalid=0,decoder_ppp=0,tcp_rst=0,detect_alert=0,capture_kernel_drops=0,app_layer_flow_failed_udp=4,decoder_null=0,decoder_icmpv4=0,decoder_icmpv6=0,decoder_ltnull_unsupported_type=0,defrag_ipv6_reassembled=0,tcp_invalid_checksum=0,tcp_synack=0,decoder_tcp=31,tcp_pseudo=0,app_layer_flow_smb=0,app_layer_flow_smtp=0,decoder_max_pkt_size=1463,decoder_dce_pkt_too_small=0,app_layer_tx_http=0,decoder_pkts=463,decoder_sll=0,app_layer_flow_ftp=0,app_layer_tx_dns_tcp=0,decoder_erspan=0 1568368562544966078
+```
diff --git a/content/telegraf/v1/input-plugins/swap/_index.md b/content/telegraf/v1/input-plugins/swap/_index.md
new file mode 100644
index 000000000..fb69cc695
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/swap/_index.md
@@ -0,0 +1,52 @@
+---
+description: "Telegraf plugin for collecting metrics from Swap"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Swap
+    identifier: input-swap
+tags: [Swap, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Swap Input Plugin
+
+The swap plugin collects system swap metrics. This plugin ONLY supports Linux.
+
+For more information on what swap memory is, read [All about Linux swap
+space](https://www.linux.com/news/all-about-linux-swap-space).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, and more.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics about swap memory usage
+[[inputs.swap]]
+  # no configuration
+```
+
+## Metrics
+
+- swap
+  - fields:
+    - free (int, bytes): free swap memory
+    - total (int, bytes): total swap memory
+    - used (int, bytes): used swap memory
+    - used_percent (float, percent): percentage of swap memory used
+    - in (int, bytes): data swapped in since last boot, calculated from page counts
+    - out (int, bytes): data swapped out since last boot, calculated from page counts
+
+## Example Output
+
+```text
+swap total=20855394304i,used_percent=45.43883523785713,used=9476448256i,free=1715331072i 1511894782000000000
+```
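+
+The `total`, `free`, `used`, and `used_percent` fields are derived from the
+kernel's swap accounting in `/proc/meminfo` (the `in`/`out` counters come from
+page counts elsewhere in procfs). A minimal Python sketch of that derivation;
+the `swap_stats` helper is illustrative and not part of the plugin:
+
+```python
+def swap_stats(meminfo_text):
+    # Parse SwapTotal/SwapFree (reported by the kernel in kB) and derive
+    # the used and used_percent fields listed above.
+    kb = {}
+    for line in meminfo_text.splitlines():
+        key, _, rest = line.partition(":")
+        if key in ("SwapTotal", "SwapFree"):
+            kb[key] = int(rest.split()[0]) * 1024  # kB -> bytes
+    total, free = kb["SwapTotal"], kb["SwapFree"]
+    used = total - free
+    used_percent = 100.0 * used / total if total else 0.0
+    return {"total": total, "free": free,
+            "used": used, "used_percent": used_percent}
+```
+
+On a Linux host, `swap_stats(open("/proc/meminfo").read())` yields the same
+four fields.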
diff --git a/content/telegraf/v1/input-plugins/synproxy/_index.md b/content/telegraf/v1/input-plugins/synproxy/_index.md
new file mode 100644
index 000000000..7ba224fbb
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/synproxy/_index.md
@@ -0,0 +1,75 @@
+---
+description: "Telegraf plugin for collecting metrics from Synproxy"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Synproxy
+    identifier: input-synproxy
+tags: [Synproxy, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Synproxy Input Plugin
+
+The synproxy plugin gathers the synproxy counters. Synproxy is a Linux netfilter
+module used for SYN attack mitigation. The use of synproxy is documented in
+`man iptables-extensions` under the SYNPROXY section.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, and more.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Get synproxy counter statistics from procfs
+# This plugin ONLY supports Linux
+[[inputs.synproxy]]
+  # no configuration
+```
+
+The synproxy plugin does not need any configuration.
+
+## Metrics
+
+The following synproxy counters are gathered:
+
+- synproxy
+  - fields:
+    - cookie_invalid (uint32, packets, counter) - Invalid cookies
+    - cookie_retrans (uint32, packets, counter) - Cookies retransmitted
+    - cookie_valid (uint32, packets, counter) - Valid cookies
+    - entries (uint32, packets, counter) - Entries
+    - syn_received (uint32, packets, counter) - SYN received
+    - conn_reopened (uint32, packets, counter) - Connections reopened
+
+## Sample Queries
+
+Get the number of packets per 5 minutes for the measurement in the last hour
+from InfluxDB:
+
+```sql
+SELECT difference(last("cookie_invalid")) AS "cookie_invalid", difference(last("cookie_retrans")) AS "cookie_retrans", difference(last("cookie_valid")) AS "cookie_valid", difference(last("entries")) AS "entries", difference(last("syn_received")) AS "syn_received", difference(last("conn_reopened")) AS "conn_reopened" FROM synproxy WHERE time > NOW() - 1h GROUP BY time(5m) FILL(null);
+```
+
+## Troubleshooting
+
+Execute the following CLI command in Linux to test the synproxy counters:
+
+```sh
+cat /proc/net/stat/synproxy
+```
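+
+The file consists of a header line of column names followed by one row of
+hexadecimal counters per CPU; the totals are the per-CPU sums. A minimal
+Python sketch of that parsing (the `parse_synproxy` helper is illustrative,
+not part of the plugin):
+
+```python
+def parse_synproxy(text):
+    # The header line holds the counter names; each following row holds one
+    # hex value per column for a single CPU. Sum the rows per column.
+    lines = text.strip().splitlines()
+    names = lines[0].split()
+    totals = dict.fromkeys(names, 0)
+    for row in lines[1:]:
+        for name, value in zip(names, row.split()):
+            totals[name] += int(value, 16)
+    return totals
+```
+
+Feeding it the contents of `/proc/net/stat/synproxy` returns one summed
+integer per counter listed under Metrics.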
+
+## Example Output
+
+This section shows example output in Line Protocol format.
+
+```text
+synproxy,host=Filter-GW01,rack=filter-node1 conn_reopened=0i,cookie_invalid=235i,cookie_retrans=0i,cookie_valid=8814i,entries=0i,syn_received=8742i 1549550634000000000
+```
diff --git a/content/telegraf/v1/input-plugins/syslog/_index.md b/content/telegraf/v1/input-plugins/syslog/_index.md
new file mode 100644
index 000000000..74d392305
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/syslog/_index.md
@@ -0,0 +1,280 @@
+---
+description: "Telegraf plugin for collecting metrics from Syslog"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Syslog
+    identifier: input-syslog
+tags: [Syslog, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Syslog Input Plugin
+
+The syslog plugin listens for syslog messages transmitted over a Unix Domain
+socket, [UDP](https://tools.ietf.org/html/rfc5426),
+[TCP](https://tools.ietf.org/html/rfc6587), or
+[TLS](https://tools.ietf.org/html/rfc5425); with or without the octet counting
+framing.
+
+Syslog messages should be formatted according to
+[RFC 5424](https://tools.ietf.org/html/rfc5424) (syslog protocol) or
+[RFC 3164](https://tools.ietf.org/html/rfc3164) (BSD syslog protocol).
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, and more.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+[[inputs.syslog]]
+  ## Protocol, address and port to host the syslog receiver.
+  ## If no host is specified, then localhost is used.
+  ## If no port is specified, 6514 is used (RFC5425#section-4.1).
+  ##   ex: server = "tcp://localhost:6514"
+  ##       server = "udp://:6514"
+  ##       server = "unix:///var/run/telegraf-syslog.sock"
+  ## When using tcp, consider using 'tcp4' or 'tcp6' to force the usage of IPv4
+  ## or IPv6 respectively. In some cases, when not specified, a system
+  ## may force an IPv4-mapped IPv6 address.
+  server = "tcp://127.0.0.1:6514"
+
+  ## Permission for unix sockets (only available on unix sockets)
+  ## This setting may not be respected by some platforms. To safely restrict
+  ## permissions it is recommended to place the socket into a previously
+  ## created directory with the desired permissions.
+  ##   ex: socket_mode = "777"
+  # socket_mode = ""
+
+  ## Maximum number of concurrent connections (only available on stream sockets like TCP)
+  ## Zero means unlimited.
+  # max_connections = 0
+
+  ## Read timeout (only available on stream sockets like TCP)
+  ## Zero means unlimited.
+  # read_timeout = "0s"
+
+  ## Optional TLS configuration (only available on stream sockets like TCP)
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key  = "/etc/telegraf/key.pem"
+  ## Enables client authentication if set.
+  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]
+
+  ## Maximum socket buffer size (in bytes when no unit specified)
+  ## For stream sockets, once the buffer fills up, the sender will start
+  ## backing up. For datagram sockets, once the buffer fills up, metrics will
+  ## start dropping. Defaults to the OS default.
+  # read_buffer_size = "64KiB"
+
+  ## Period between keep alive probes (only applies to TCP sockets)
+  ## Zero disables keep alive probes. Defaults to the OS configuration.
+  # keep_alive_period = "5m"
+
+  ## Content encoding for message payloads
+  ## Can be set to "gzip" for compressed payloads or "identity" for no encoding.
+  # content_encoding = "identity"
+
+  ## Maximum size of decoded packet (in bytes when no unit specified)
+  # max_decompression_size = "500MB"
+
+  ## Framing technique used for messages transport
+  ## Available settings are:
+  ##   octet-counting  -- see RFC5425#section-4.3.1 and RFC6587#section-3.4.1
+  ##   non-transparent -- see RFC6587#section-3.4.2
+  # framing = "octet-counting"
+
+  ## The trailer to be expected in case of non-transparent framing (default = "LF").
+  ## Must be one of "LF", or "NUL".
+  # trailer = "LF"
+
+  ## Whether to parse in best effort mode or not (default = false).
+  ## By default best effort parsing is off.
+  # best_effort = false
+
+  ## The RFC standard to use for message parsing
+  ## By default RFC5424 is used. RFC3164 only supports UDP transport (no streaming support)
+  ## Must be one of "RFC5424", or "RFC3164".
+  # syslog_standard = "RFC5424"
+
+  ## Character to prepend to SD-PARAMs (default = "_").
+  ## A syslog message can contain multiple parameters and multiple identifiers
+  ## within the structured data section.
+  ##   E.g., [id1 name1="val1" name2="val2"]
+  ## For each combination a field is created.
+  ## Its name is created concatenating identifier, sdparam_separator, and parameter name.
+  # sdparam_separator = "_"
+```
+
+### Message transport
+
+The `framing` option only applies to streams. It governs the way messages are
+expected within the stream: either the
+[octet counting](https://tools.ietf.org/html/rfc5425#section-4.3) technique
+(default) or
+[non-transparent](https://tools.ietf.org/html/rfc6587#section-3.4.2) framing.
+
+The `trailer` option only applies when `framing` option is
+`"non-transparent"`. It must have one of the following values: `"LF"` (default),
+or `"NUL"`.
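+
+The two framings can be sketched as follows (hypothetical helper names, not
+plugin API):
+
+```python
+def octet_counting_frame(msg):
+    # Octet counting (RFC 6587): prefix each message with its byte length
+    # and a space, so the receiver knows exactly how much to read.
+    return f"{len(msg.encode())} {msg}"
+
+def non_transparent_frame(msg, trailer="\n"):
+    # Non-transparent framing (RFC 6587): messages are simply separated by
+    # a trailer character, LF by default (or NUL, per the `trailer` option).
+    return msg + trailer
+```
+
+The octet-counted form matches the netcat example in the Troubleshooting
+section, where the message is preceded by its length.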
+
+### Best effort
+
+The [`best_effort`](https://github.com/influxdata/go-syslog#best-effort-mode)
+option instructs the parser to extract partial but valid info from syslog
+messages. If unset, only fully valid messages are collected.
+
+### Rsyslog Integration
+
+Rsyslog can be configured to forward logging messages to Telegraf by configuring
+[remote logging](https://www.rsyslog.com/doc/v8-stable/configuration/actions.html#remote-machine).
+
+Most systems are set up with the configuration split between `/etc/rsyslog.conf`
+and the files in the `/etc/rsyslog.d/` directory. It is recommended to add the
+new configuration to the config directory to simplify updates to the main
+config file.
+
+Add the following lines to `/etc/rsyslog.d/50-telegraf.conf`, making
+adjustments to the target address as needed:
+
+```shell
+$ActionQueueType LinkedList # use asynchronous processing
+$ActionQueueFileName srvrfwd # set file name, also enables disk mode
+$ActionResumeRetryCount -1 # infinite retries on insert failure
+$ActionQueueSaveOnShutdown on # save in-memory data if rsyslog shuts down
+
+# forward over tcp with octet framing according to RFC 5425
+*.* @@(o)127.0.0.1:6514;RSYSLOG_SyslogProtocol23Format
+
+# uncomment to use udp according to RFC 5424
+#*.* @127.0.0.1:6514;RSYSLOG_SyslogProtocol23Format
+```
+
+You can alternatively use the `advanced` format (aka RainerScript):
+
+```bash
+# forward over tcp with octet framing according to RFC 5425
+action(type="omfwd" Protocol="tcp" TCP_Framing="octet-counted" Target="127.0.0.1" Port="6514" Template="RSYSLOG_SyslogProtocol23Format")
+
+# uncomment to use udp according to RFC 5424
+#action(type="omfwd" Protocol="udp" Target="127.0.0.1" Port="6514" Template="RSYSLOG_SyslogProtocol23Format")
+```
+
+To complete the TLS setup, refer to the [rsyslog docs](https://www.rsyslog.com/doc/v8-stable/tutorials/tls.html).
+
+## Metrics
+
+- syslog
+  - tags
+    - severity (string)
+    - facility (string)
+    - hostname (string)
+    - appname (string)
+    - source (string)
+  - fields
+    - version (integer)
+    - severity_code (integer)
+    - facility_code (integer)
+    - timestamp (integer): the time recorded in the syslog message
+    - procid (string)
+    - msgid (string)
+    - sdid (bool)
+    - *Structured Data* (string)
+  - timestamp: the time the message was received
+
+### Structured Data
+
+Structured data produces field keys by combining the `SD_ID` with the
+`PARAM_NAME` using the `sdparam_separator`, as in the following example:
+
+```text
+170 <165>1 2018-10-01:14:15.000Z mymachine.example.com evntslog - ID47 [exampleSDID@32473 iut="3" eventSource="Application" eventID="1011"] An application event log entry...
+```
+
+```text
+syslog,appname=evntslog,facility=local4,hostname=mymachine.example.com,severity=notice exampleSDID@32473_eventID="1011",exampleSDID@32473_eventSource="Application",exampleSDID@32473_iut="3",facility_code=20i,message="An application event log entry...",msgid="ID47",severity_code=5i,timestamp=1065910455003000000i,version=1i 1538421339749472344
+```
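+
+The key construction can be sketched as follows (the `sd_field_name` helper is
+illustrative, not plugin API):
+
+```python
+def sd_field_name(sd_id, param_name, sdparam_separator="_"):
+    # Field key = SD-ID + separator + PARAM-NAME, one field per combination.
+    return f"{sd_id}{sdparam_separator}{param_name}"
+```
+
+For the message above, `sd_field_name("exampleSDID@32473", "eventID")` produces
+the `exampleSDID@32473_eventID` field key shown in the resulting metric.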
+
+## Troubleshooting
+
+You can send debugging messages directly to the input plugin using netcat:
+
+```sh
+# TCP with octet framing
+echo "57 <13>1 2018-10-01T12:00:00.0Z example.org root - - - test" | nc 127.0.0.1 6514
+
+# UDP
+echo "<13>1 2018-10-01T12:00:00.0Z example.org root - - - test" | nc -u 127.0.0.1 6514
+```
+
+### Resolving Source IPs
+
+The `source` tag stores the remote IP address of the syslog sender. To resolve
+these IPs to DNS names, use the `reverse_dns` processor.
+
+### RFC3164
+
+RFC3164-encoded messages are supported over UDP only, but not all vendors
+output valid RFC3164 messages by default (e.g. Cisco IOS).
+
+If you see the following error, it is due to a message encoded in this format:
+
+```text
+E! Error in plugin [inputs.syslog]: expecting a version value in the range 1-999 [col 5]
+```
+
+You can use rsyslog to translate RFC3164 syslog messages into RFC5424 format.
+Add the following lines to the rsyslog configuration file
+(e.g. `/etc/rsyslog.d/50-telegraf.conf`):
+
+```shell
+# This makes rsyslog listen on 127.0.0.1:514 to receive RFC3164 udp
+# messages which can then be forwarded to telegraf as RFC5424
+$ModLoad imudp #loads the udp module
+$UDPServerAddress 127.0.0.1
+$UDPServerRun 514
+```
+
+Adjust the target address as needed and send your RFC3164 messages to
+port 514.
+
+## Example Output
+
+Here is example output of this plugin:
+
+```text
+syslog,appname=docker-compose,facility=daemon,host=bb8,hostname=droplet,location=home,severity=info,source=10.0.0.12 facility_code=3i,message="<redacted>",severity_code=6i,timestamp=1624643706396113000i,version=1i 1624643706400667198
+syslog,appname=tailscaled,facility=daemon,host=bb8,hostname=dev,location=home,severity=info,source=10.0.0.15 facility_code=3i,message="<redacted>",severity_code=6i,timestamp=1624643706403394000i,version=1i 1624643706407850408
+syslog,appname=docker-compose,facility=daemon,host=bb8,hostname=droplet,location=home,severity=info,source=10.0.0.12 facility_code=3i,message="<redacted>",severity_code=6i,timestamp=1624643706675853000i,version=1i 1624643706679251683
+syslog,appname=telegraf,facility=daemon,host=bb8,hostname=droplet,location=home,severity=info,source=10.0.0.12 facility_code=3i,message="<redacted>",severity_code=6i,timestamp=1624643710005006000i,version=1i 1624643710008285426
+syslog,appname=telegraf,facility=daemon,host=bb8,hostname=droplet,location=home,severity=info,source=10.0.0.12 facility_code=3i,message="<redacted>",severity_code=6i,timestamp=1624643710005696000i,version=1i 1624643710010754050
+syslog,appname=docker-compose,facility=daemon,host=bb8,hostname=droplet,location=home,severity=info,source=10.0.0.12 facility_code=3i,message="<redacted>",severity_code=6i,timestamp=1624643715777813000i,version=1i 1624643715782158154
+syslog,appname=docker-compose,facility=daemon,host=bb8,hostname=droplet,location=home,severity=info,source=10.0.0.12 facility_code=3i,message="<redacted>",severity_code=6i,timestamp=1624643716396547000i,version=1i 1624643716400395788
+syslog,appname=tailscaled,facility=daemon,host=bb8,hostname=dev,location=home,severity=info,source=10.0.0.15 facility_code=3i,message="<redacted>",severity_code=6i,timestamp=1624643716404931000i,version=1i 1624643716416947058
+syslog,appname=docker-compose,facility=daemon,host=bb8,hostname=droplet,location=home,severity=info,source=10.0.0.12 facility_code=3i,message="<redacted>",severity_code=6i,timestamp=1624643716676633000i,version=1i 1624643716680157558
+```
diff --git a/content/telegraf/v1/input-plugins/sysstat/_index.md b/content/telegraf/v1/input-plugins/sysstat/_index.md
new file mode 100644
index 000000000..3a660ac67
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/sysstat/_index.md
@@ -0,0 +1,470 @@
+---
+description: "Telegraf plugin for collecting metrics from sysstat"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: sysstat
+    identifier: input-sysstat
+tags: [sysstat, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# sysstat Input Plugin
+
+Collect [sysstat](https://github.com/sysstat/sysstat) metrics. This plugin
+requires the sysstat package to be installed.
+
+This plugin collects system metrics with the sysstat collector utility `sadc`
+and parses the created binary data file with the `sadf` utility.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, and more.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Sysstat metrics collector
+# This plugin ONLY supports Linux
+[[inputs.sysstat]]
+  ## Path to the sadc command.
+  #
+  ## Common Defaults:
+  ##   Debian/Ubuntu: /usr/lib/sysstat/sadc
+  ##   Arch:          /usr/lib/sa/sadc
+  ##   RHEL/CentOS:   /usr/lib64/sa/sadc
+  sadc_path = "/usr/lib/sa/sadc" # required
+
+  ## Path to the sadf command, if it is not in PATH
+  # sadf_path = "/usr/bin/sadf"
+
+  ## Activities is a list of activities that are passed as arguments to the
+  ## sadc collector utility (e.g. DISK, SNMP etc...)
+  ## The more activities that are added, the more data is collected.
+  # activities = ["DISK"]
+
+  ## Group metrics to measurements.
+  ##
+  ## If group is false, each metric is prefixed with a description and is
+  ## itself a measurement.
+  ##
+  ## If group is true, corresponding metrics are grouped into a single measurement.
+  # group = true
+
+  ## Options for the sadf command. The values on the left represent the sadf options and
+  ## the values on the right their description (which are used for grouping and prefixing metrics).
+  ##
+  ## Run 'sar -h' or 'man sar' to find out the supported options for your sysstat version.
+  [inputs.sysstat.options]
+    -C = "cpu"
+    -B = "paging"
+    -b = "io"
+    -d = "disk"             # requires DISK activity
+    "-n ALL" = "network"
+    "-P ALL" = "per_cpu"
+    -q = "queue"
+    -R = "mem"
+    -r = "mem_util"
+    -S = "swap_util"
+    -u = "cpu_util"
+    -v = "inode"
+    -W = "swap"
+    -w = "task"
+  # -H = "hugepages"        # only available for newer linux distributions
+  # "-I ALL" = "interrupts" # requires INT activity
+
+  ## Device tags can be used to add additional tags for devices. For example the configuration below
+  ## adds a tag vg with value rootvg for all metrics with sda devices.
+  # [[inputs.sysstat.device_tags.sda]]
+  #  vg = "rootvg"
+```
+
+## Metrics
+
+### If group=true
+
+- cpu
+  - pct_idle (float)
+  - pct_iowait (float)
+  - pct_nice (float)
+  - pct_steal (float)
+  - pct_system (float)
+  - pct_user (float)
+
+- disk
+  - avgqu-sz (float)
+  - avgrq-sz (float)
+  - await (float)
+  - pct_util (float)
+  - rd_sec_pers (float)
+  - svctm (float)
+  - tps (float)
+
+And much more, depending on the options you configure.
+
+### If group=false
+
+- cpu_pct_idle
+  - value (float)
+- cpu_pct_iowait
+  - value (float)
+- cpu_pct_nice
+  - value (float)
+- cpu_pct_steal
+  - value (float)
+- cpu_pct_system
+  - value (float)
+- cpu_pct_user
+  - value (float)
+- disk_avgqu-sz
+  - value (float)
+- disk_avgrq-sz
+  - value (float)
+- disk_await
+  - value (float)
+- disk_pct_util
+  - value (float)
+- disk_rd_sec_per_s
+  - value (float)
+- disk_svctm
+  - value (float)
+- disk_tps
+  - value (float)
+
+And much more, depending on the options you configure.
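+
+The effect of the `group` option on measurement shape can be sketched as
+follows (the `measurements` helper is illustrative, not part of the plugin):
+
+```python
+def measurements(description, fields, group):
+    # group=True: one measurement named after the sadf option description,
+    # carrying all parsed fields.
+    # group=False: one measurement per field, named "<description>_<field>",
+    # each holding a single "value" field.
+    if group:
+        return {description: fields}
+    return {f"{description}_{name}": {"value": value}
+            for name, value in fields.items()}
+```
+
+For example, `measurements("cpu", {"pct_idle": 98.85}, False)` yields a
+`cpu_pct_idle` measurement with a single `value` field, matching the
+group=false output shown in the second example below.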
+
+### Tags
+
+- All measurements have the following tags:
+  - device
+
+And more if you define some `device_tags`.
+
+## Example Output
+
+With the configuration below:
+
+```toml
+[[inputs.sysstat]]
+  sadc_path = "/usr/lib/sa/sadc" # required
+  activities = ["DISK", "SNMP", "INT"]
+  group = true
+  [inputs.sysstat.options]
+    -C = "cpu"
+    -B = "paging"
+    -b = "io"
+    -d = "disk"             # requires DISK activity
+    -H = "hugepages"
+    "-I ALL" = "interrupts" # requires INT activity
+    "-n ALL" = "network"
+    "-P ALL" = "per_cpu"
+    -q = "queue"
+    -R = "mem"
+    "-r ALL" = "mem_util"
+    -S = "swap_util"
+    -u = "cpu_util"
+    -v = "inode"
+    -W = "swap"
+    -w = "task"
+  [[inputs.sysstat.device_tags.sda]]
+    vg = "rootvg"
+```
+
+you get the following output:
+
+```text
+cpu_util,device=all pct_idle=98.85,pct_iowait=0,pct_nice=0.38,pct_steal=0,pct_system=0.64,pct_user=0.13 1459255626657883725
+swap pswpin_per_s=0,pswpout_per_s=0 1459255626658387650
+per_cpu,device=cpu1 pct_idle=98.98,pct_iowait=0,pct_nice=0.26,pct_steal=0,pct_system=0.51,pct_user=0.26 1459255626659630437
+per_cpu,device=all pct_idle=98.85,pct_iowait=0,pct_nice=0.38,pct_steal=0,pct_system=0.64,pct_user=0.13 1459255626659670744
+per_cpu,device=cpu0 pct_idle=98.73,pct_iowait=0,pct_nice=0.76,pct_steal=0,pct_system=0.51,pct_user=0 1459255626659697515
+hugepages kbhugfree=0,kbhugused=0,pct_hugused=0 1459255626660057517
+network,device=lo coll_per_s=0,pct_ifutil=0,rxcmp_per_s=0,rxdrop_per_s=0,rxerr_per_s=0,rxfifo_per_s=0,rxfram_per_s=0,rxkB_per_s=0.81,rxmcst_per_s=0,rxpck_per_s=16,txcarr_per_s=0,txcmp_per_s=0,txdrop_per_s=0,txerr_per_s=0,txfifo_per_s=0,txkB_per_s=0.81,txpck_per_s=16 1459255626661197666
+network access_per_s=0,active_per_s=0,asmf_per_s=0,asmok_per_s=0,asmrq_per_s=0,atmptf_per_s=0,badcall_per_s=0,call_per_s=0,estres_per_s=0,fragcrt_per_s=0,fragf_per_s=0,fragok_per_s=0,fwddgm_per_s=0,getatt_per_s=0,hit_per_s=0,iadrerr_per_s=0,iadrmk_per_s=0,iadrmkr_per_s=0,idel_per_s=16,idgm_per_s=0,idgmerr_per_s=0,idisc_per_s=0,idstunr_per_s=0,iech_per_s=0,iechr_per_s=0,ierr_per_s=0,ihdrerr_per_s=0,imsg_per_s=0,ip-frag=0,iparmpb_per_s=0,irec_per_s=16,iredir_per_s=0,iseg_per_s=16,isegerr_per_s=0,isrcq_per_s=0,itm_per_s=0,itmex_per_s=0,itmr_per_s=0,iukwnpr_per_s=0,miss_per_s=0,noport_per_s=0,oadrmk_per_s=0,oadrmkr_per_s=0,odgm_per_s=0,odisc_per_s=0,odstunr_per_s=0,oech_per_s=0,oechr_per_s=0,oerr_per_s=0,omsg_per_s=0,onort_per_s=0,oparmpb_per_s=0,oredir_per_s=0,orq_per_s=16,orsts_per_s=0,oseg_per_s=16,osrcq_per_s=0,otm_per_s=0,otmex_per_s=0,otmr_per_s=0,packet_per_s=0,passive_per_s=0,rawsck=0,read_per_s=0,retrans_per_s=0,saccess_per_s=0,scall_per_s=0,sgetatt_per_s=0,sread_per_s=0,swrite_per_s=0,tcp-tw=7,tcp_per_s=0,tcpsck=1543,totsck=4052,udp_per_s=0,udpsck=2,write_per_s=0 1459255626661381788
+network,device=ens33 coll_per_s=0,pct_ifutil=0,rxcmp_per_s=0,rxdrop_per_s=0,rxerr_per_s=0,rxfifo_per_s=0,rxfram_per_s=0,rxkB_per_s=0,rxmcst_per_s=0,rxpck_per_s=0,txcarr_per_s=0,txcmp_per_s=0,txdrop_per_s=0,txerr_per_s=0,txfifo_per_s=0,txkB_per_s=0,txpck_per_s=0 1459255626661533072
+disk,device=sda,vg=rootvg avgqu-sz=0.01,avgrq-sz=8.5,await=3.31,pct_util=0.1,rd_sec_per_s=0,svctm=0.25,tps=4,wr_sec_per_s=34 1459255626663974389
+queue blocked=0,ldavg-1=1.61,ldavg-15=1.34,ldavg-5=1.67,plist-sz=1415,runq-sz=0 1459255626664159054
+paging fault_per_s=0.25,majflt_per_s=0,pct_vmeff=0,pgfree_per_s=19,pgpgin_per_s=0,pgpgout_per_s=17,pgscand_per_s=0,pgscank_per_s=0,pgsteal_per_s=0 1459255626664304249
+mem_util kbactive=2206568,kbanonpg=1472208,kbbuffers=118020,kbcached=1035252,kbcommit=8717200,kbdirty=156,kbinact=418912,kbkstack=24672,kbmemfree=1744868,kbmemused=3610272,kbpgtbl=87116,kbslab=233804,kbvmused=0,pct_commit=136.13,pct_memused=67.42 1459255626664554981
+io bread_per_s=0,bwrtn_per_s=34,rtps=0,tps=4,wtps=4 1459255626664596198
+inode dentunusd=235039,file-nr=17120,inode-nr=94505,pty-nr=14 1459255626664663693
+interrupts,device=i000 intr_per_s=0 1459255626664800109
+interrupts,device=i003 intr_per_s=0 1459255626665255145
+interrupts,device=i004 intr_per_s=0 1459255626665281776
+interrupts,device=i006 intr_per_s=0 1459255626665297416
+interrupts,device=i007 intr_per_s=0 1459255626665321008
+interrupts,device=i010 intr_per_s=0 1459255626665339413
+interrupts,device=i012 intr_per_s=0 1459255626665361510
+interrupts,device=i013 intr_per_s=0 1459255626665381327
+interrupts,device=i015 intr_per_s=1 1459255626665397313
+interrupts,device=i001 intr_per_s=0.25 1459255626665412985
+interrupts,device=i002 intr_per_s=0 1459255626665430475
+interrupts,device=i005 intr_per_s=0 1459255626665453944
+interrupts,device=i008 intr_per_s=0 1459255626665470650
+interrupts,device=i011 intr_per_s=0 1459255626665486069
+interrupts,device=i009 intr_per_s=0 1459255626665502913
+interrupts,device=i014 intr_per_s=0 1459255626665518152
+task cswch_per_s=722.25,proc_per_s=0 1459255626665849646
+cpu,device=all pct_idle=98.85,pct_iowait=0,pct_nice=0.38,pct_steal=0,pct_system=0.64,pct_user=0.13 1459255626666639715
+mem bufpg_per_s=0,campg_per_s=1.75,frmpg_per_s=-8.25 1459255626666770205
+swap_util kbswpcad=0,kbswpfree=1048572,kbswpused=0,pct_swpcad=0,pct_swpused=0 1459255626667313276
+```
+
+If you change the group value to false like below:
+
+```toml
+[[inputs.sysstat]]
+  sadc_path = "/usr/lib/sa/sadc" # required
+  activities = ["DISK", "SNMP", "INT"]
+  group = false
+  [inputs.sysstat.options]
+    -C = "cpu"
+    -B = "paging"
+    -b = "io"
+    -d = "disk"             # requires DISK activity
+    -H = "hugepages"
+    "-I ALL" = "interrupts" # requires INT activity
+    "-n ALL" = "network"
+    "-P ALL" = "per_cpu"
+    -q = "queue"
+    -R = "mem"
+    "-r ALL" = "mem_util"
+    -S = "swap_util"
+    -u = "cpu_util"
+    -v = "inode"
+    -W = "swap"
+    -w = "task"
+  [[inputs.sysstat.device_tags.sda]]
+    vg = "rootvg"
+```
+
+you get the following output:
+
+```text
+io_tps value=0.5 1459255780126025822
+io_rtps value=0 1459255780126025822
+io_wtps value=0.5 1459255780126025822
+io_bread_per_s value=0 1459255780126025822
+io_bwrtn_per_s value=38 1459255780126025822
+cpu_util_pct_user,device=all value=39.07 1459255780126025822
+cpu_util_pct_nice,device=all value=0 1459255780126025822
+cpu_util_pct_system,device=all value=47.94 1459255780126025822
+cpu_util_pct_iowait,device=all value=0 1459255780126025822
+cpu_util_pct_steal,device=all value=0 1459255780126025822
+cpu_util_pct_idle,device=all value=12.98 1459255780126025822
+swap_pswpin_per_s value=0 1459255780126025822
+cpu_pct_user,device=all value=39.07 1459255780126025822
+cpu_pct_nice,device=all value=0 1459255780126025822
+cpu_pct_system,device=all value=47.94 1459255780126025822
+cpu_pct_iowait,device=all value=0 1459255780126025822
+cpu_pct_steal,device=all value=0 1459255780126025822
+cpu_pct_idle,device=all value=12.98 1459255780126025822
+per_cpu_pct_user,device=all value=39.07 1459255780126025822
+per_cpu_pct_nice,device=all value=0 1459255780126025822
+per_cpu_pct_system,device=all value=47.94 1459255780126025822
+per_cpu_pct_iowait,device=all value=0 1459255780126025822
+per_cpu_pct_steal,device=all value=0 1459255780126025822
+per_cpu_pct_idle,device=all value=12.98 1459255780126025822
+per_cpu_pct_user,device=cpu0 value=33.5 1459255780126025822
+per_cpu_pct_nice,device=cpu0 value=0 1459255780126025822
+per_cpu_pct_system,device=cpu0 value=65.25 1459255780126025822
+per_cpu_pct_iowait,device=cpu0 value=0 1459255780126025822
+per_cpu_pct_steal,device=cpu0 value=0 1459255780126025822
+per_cpu_pct_idle,device=cpu0 value=1.25 1459255780126025822
+per_cpu_pct_user,device=cpu1 value=44.85 1459255780126025822
+per_cpu_pct_nice,device=cpu1 value=0 1459255780126025822
+per_cpu_pct_system,device=cpu1 value=29.55 1459255780126025822
+per_cpu_pct_iowait,device=cpu1 value=0 1459255780126025822
+per_cpu_pct_steal,device=cpu1 value=0 1459255780126025822
+per_cpu_pct_idle,device=cpu1 value=25.59 1459255780126025822
+hugepages_kbhugfree value=0 1459255780126025822
+hugepages_kbhugused value=0 1459255780126025822
+hugepages_pct_hugused value=0 1459255780126025822
+interrupts_intr_per_s,device=i000 value=0 1459255780126025822
+inode_dentunusd value=252876 1459255780126025822
+mem_util_kbmemfree value=1613612 1459255780126025822
+disk_tps,device=sda,vg=rootvg value=0.5 1459255780126025822
+swap_pswpout_per_s value=0 1459255780126025822
+network_rxpck_per_s,device=ens33 value=0 1459255780126025822
+queue_runq-sz value=4 1459255780126025822
+task_proc_per_s value=0 1459255780126025822
+task_cswch_per_s value=2019 1459255780126025822
+mem_frmpg_per_s value=0 1459255780126025822
+mem_bufpg_per_s value=0.5 1459255780126025822
+mem_campg_per_s value=1.25 1459255780126025822
+interrupts_intr_per_s,device=i001 value=0 1459255780126025822
+inode_file-nr value=19104 1459255780126025822
+mem_util_kbmemused value=3741528 1459255780126025822
+disk_rd_sec_per_s,device=sda,vg=rootvg value=0 1459255780126025822
+network_txpck_per_s,device=ens33 value=0 1459255780126025822
+queue_plist-sz value=1512 1459255780126025822
+paging_pgpgin_per_s value=0 1459255780126025822
+paging_pgpgout_per_s value=19 1459255780126025822
+paging_fault_per_s value=0.25 1459255780126025822
+paging_majflt_per_s value=0 1459255780126025822
+paging_pgfree_per_s value=34.25 1459255780126025822
+paging_pgscank_per_s value=0 1459255780126025822
+paging_pgscand_per_s value=0 1459255780126025822
+paging_pgsteal_per_s value=0 1459255780126025822
+paging_pct_vmeff value=0 1459255780126025822
+interrupts_intr_per_s,device=i002 value=0 1459255780126025822
+interrupts_intr_per_s,device=i003 value=0 1459255780126025822
+interrupts_intr_per_s,device=i004 value=0 1459255780126025822
+interrupts_intr_per_s,device=i005 value=0 1459255780126025822
+interrupts_intr_per_s,device=i006 value=0 1459255780126025822
+interrupts_intr_per_s,device=i007 value=0 1459255780126025822
+interrupts_intr_per_s,device=i008 value=0 1459255780126025822
+interrupts_intr_per_s,device=i009 value=0 1459255780126025822
+interrupts_intr_per_s,device=i010 value=0 1459255780126025822
+interrupts_intr_per_s,device=i011 value=0 1459255780126025822
+interrupts_intr_per_s,device=i012 value=0 1459255780126025822
+interrupts_intr_per_s,device=i013 value=0 1459255780126025822
+interrupts_intr_per_s,device=i014 value=0 1459255780126025822
+interrupts_intr_per_s,device=i015 value=1 1459255780126025822
+inode_inode-nr value=94709 1459255780126025822
+inode_pty-nr value=14 1459255780126025822
+mem_util_pct_memused value=69.87 1459255780126025822
+mem_util_kbbuffers value=118252 1459255780126025822
+mem_util_kbcached value=1045240 1459255780126025822
+mem_util_kbcommit value=9628152 1459255780126025822
+mem_util_pct_commit value=150.35 1459255780126025822
+mem_util_kbactive value=2303752 1459255780126025822
+mem_util_kbinact value=428340 1459255780126025822
+mem_util_kbdirty value=104 1459255780126025822
+mem_util_kbanonpg value=1568676 1459255780126025822
+mem_util_kbslab value=240032 1459255780126025822
+mem_util_kbkstack value=26224 1459255780126025822
+mem_util_kbpgtbl value=98056 1459255780126025822
+mem_util_kbvmused value=0 1459255780126025822
+disk_wr_sec_per_s,device=sda,vg=rootvg value=38 1459255780126025822
+disk_avgrq-sz,device=sda,vg=rootvg value=76 1459255780126025822
+disk_avgqu-sz,device=sda,vg=rootvg value=0 1459255780126025822
+disk_await,device=sda,vg=rootvg value=2 1459255780126025822
+disk_svctm,device=sda,vg=rootvg value=2 1459255780126025822
+disk_pct_util,device=sda,vg=rootvg value=0.1 1459255780126025822
+network_rxkB_per_s,device=ens33 value=0 1459255780126025822
+network_txkB_per_s,device=ens33 value=0 1459255780126025822
+network_rxcmp_per_s,device=ens33 value=0 1459255780126025822
+network_txcmp_per_s,device=ens33 value=0 1459255780126025822
+network_rxmcst_per_s,device=ens33 value=0 1459255780126025822
+network_pct_ifutil,device=ens33 value=0 1459255780126025822
+network_rxpck_per_s,device=lo value=10.75 1459255780126025822
+network_txpck_per_s,device=lo value=10.75 1459255780126025822
+network_rxkB_per_s,device=lo value=0.77 1459255780126025822
+network_txkB_per_s,device=lo value=0.77 1459255780126025822
+network_rxcmp_per_s,device=lo value=0 1459255780126025822
+network_txcmp_per_s,device=lo value=0 1459255780126025822
+network_rxmcst_per_s,device=lo value=0 1459255780126025822
+network_pct_ifutil,device=lo value=0 1459255780126025822
+network_rxerr_per_s,device=ens33 value=0 1459255780126025822
+network_txerr_per_s,device=ens33 value=0 1459255780126025822
+network_coll_per_s,device=ens33 value=0 1459255780126025822
+network_rxdrop_per_s,device=ens33 value=0 1459255780126025822
+network_txdrop_per_s,device=ens33 value=0 1459255780126025822
+network_txcarr_per_s,device=ens33 value=0 1459255780126025822
+network_rxfram_per_s,device=ens33 value=0 1459255780126025822
+network_rxfifo_per_s,device=ens33 value=0 1459255780126025822
+network_txfifo_per_s,device=ens33 value=0 1459255780126025822
+network_rxerr_per_s,device=lo value=0 1459255780126025822
+network_txerr_per_s,device=lo value=0 1459255780126025822
+network_coll_per_s,device=lo value=0 1459255780126025822
+network_rxdrop_per_s,device=lo value=0 1459255780126025822
+network_txdrop_per_s,device=lo value=0 1459255780126025822
+network_txcarr_per_s,device=lo value=0 1459255780126025822
+network_rxfram_per_s,device=lo value=0 1459255780126025822
+network_rxfifo_per_s,device=lo value=0 1459255780126025822
+network_txfifo_per_s,device=lo value=0 1459255780126025822
+network_call_per_s value=0 1459255780126025822
+network_retrans_per_s value=0 1459255780126025822
+network_read_per_s value=0 1459255780126025822
+network_write_per_s value=0 1459255780126025822
+network_access_per_s value=0 1459255780126025822
+network_getatt_per_s value=0 1459255780126025822
+network_scall_per_s value=0 1459255780126025822
+network_badcall_per_s value=0 1459255780126025822
+network_packet_per_s value=0 1459255780126025822
+network_udp_per_s value=0 1459255780126025822
+network_tcp_per_s value=0 1459255780126025822
+network_hit_per_s value=0 1459255780126025822
+network_miss_per_s value=0 1459255780126025822
+network_sread_per_s value=0 1459255780126025822
+network_swrite_per_s value=0 1459255780126025822
+network_saccess_per_s value=0 1459255780126025822
+network_sgetatt_per_s value=0 1459255780126025822
+network_totsck value=4234 1459255780126025822
+network_tcpsck value=1637 1459255780126025822
+network_udpsck value=2 1459255780126025822
+network_rawsck value=0 1459255780126025822
+network_ip-frag value=0 1459255780126025822
+network_tcp-tw value=4 1459255780126025822
+network_irec_per_s value=10.75 1459255780126025822
+network_fwddgm_per_s value=0 1459255780126025822
+network_idel_per_s value=10.75 1459255780126025822
+network_orq_per_s value=10.75 1459255780126025822
+network_asmrq_per_s value=0 1459255780126025822
+network_asmok_per_s value=0 1459255780126025822
+network_fragok_per_s value=0 1459255780126025822
+network_fragcrt_per_s value=0 1459255780126025822
+network_ihdrerr_per_s value=0 1459255780126025822
+network_iadrerr_per_s value=0 1459255780126025822
+network_iukwnpr_per_s value=0 1459255780126025822
+network_idisc_per_s value=0 1459255780126025822
+network_odisc_per_s value=0 1459255780126025822
+network_onort_per_s value=0 1459255780126025822
+network_asmf_per_s value=0 1459255780126025822
+network_fragf_per_s value=0 1459255780126025822
+network_imsg_per_s value=0 1459255780126025822
+network_omsg_per_s value=0 1459255780126025822
+network_iech_per_s value=0 1459255780126025822
+network_iechr_per_s value=0 1459255780126025822
+network_oech_per_s value=0 1459255780126025822
+network_oechr_per_s value=0 1459255780126025822
+network_itm_per_s value=0 1459255780126025822
+network_itmr_per_s value=0 1459255780126025822
+network_otm_per_s value=0 1459255780126025822
+network_otmr_per_s value=0 1459255780126025822
+network_iadrmk_per_s value=0 1459255780126025822
+network_iadrmkr_per_s value=0 1459255780126025822
+network_oadrmk_per_s value=0 1459255780126025822
+network_oadrmkr_per_s value=0 1459255780126025822
+network_ierr_per_s value=0 1459255780126025822
+network_oerr_per_s value=0 1459255780126025822
+network_idstunr_per_s value=0 1459255780126025822
+network_odstunr_per_s value=0 1459255780126025822
+network_itmex_per_s value=0 1459255780126025822
+network_otmex_per_s value=0 1459255780126025822
+network_iparmpb_per_s value=0 1459255780126025822
+network_oparmpb_per_s value=0 1459255780126025822
+network_isrcq_per_s value=0 1459255780126025822
+network_osrcq_per_s value=0 1459255780126025822
+network_iredir_per_s value=0 1459255780126025822
+network_oredir_per_s value=0 1459255780126025822
+network_active_per_s value=0 1459255780126025822
+network_passive_per_s value=0 1459255780126025822
+network_iseg_per_s value=10.75 1459255780126025822
+network_oseg_per_s value=9.5 1459255780126025822
+network_atmptf_per_s value=0 1459255780126025822
+network_estres_per_s value=0 1459255780126025822
+network_retrans_per_s value=1.5 1459255780126025822
+network_isegerr_per_s value=0.25 1459255780126025822
+network_orsts_per_s value=0 1459255780126025822
+network_idgm_per_s value=0 1459255780126025822
+network_odgm_per_s value=0 1459255780126025822
+network_noport_per_s value=0 1459255780126025822
+network_idgmerr_per_s value=0 1459255780126025822
+queue_ldavg-1 value=2.1 1459255780126025822
+queue_ldavg-5 value=1.82 1459255780126025822
+queue_ldavg-15 value=1.44 1459255780126025822
+queue_blocked value=0 1459255780126025822
+swap_util_kbswpfree value=1048572 1459255780126025822
+swap_util_kbswpused value=0 1459255780126025822
+swap_util_pct_swpused value=0 1459255780126025822
+swap_util_kbswpcad value=0 1459255780126025822
+swap_util_pct_swpcad value=0 1459255780126025822
+```
diff --git a/content/telegraf/v1/input-plugins/system/_index.md b/content/telegraf/v1/input-plugins/system/_index.md
new file mode 100644
index 000000000..aea004ca4
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/system/_index.md
@@ -0,0 +1,66 @@
+---
+description: "Telegraf plugin for collecting metrics from System"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: System
+    identifier: input-system
+tags: [System, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# System Input Plugin
+
+The system plugin gathers general stats on system load, uptime,
+and number of users logged in. It is similar to the unix `uptime` command.
+
+The number of CPUs is obtained from the `/proc/cpuinfo` file.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics about system load & uptime
+[[inputs.system]]
+  # no configuration
+```
+
+### Permissions
+
+The `n_users` field requires read access to `/var/run/utmp`, and on some
+systems the `telegraf` user may need to be added to the `utmp` group. If this
+file does not exist, `n_users` is skipped.
+
+The `n_unique_users` field shows the count of unique usernames logged in, so a
+user with multiple open sessions is counted only once. The same requirements
+as for `n_users` apply.
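The difference between the two fields can be sketched as follows (the session
list is made up for illustration):

```python
# One entry per login session; n_users counts sessions, while
# n_unique_users counts distinct usernames (illustrative data only).
sessions = ["alice", "bob", "alice", "carol"]

n_users = len(sessions)              # every session counts
n_unique_users = len(set(sessions))  # each username counts once

print(n_users, n_unique_users)  # 4 3
```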
+
+## Metrics
+
+- system
+  - fields:
+    - load1 (float)
+    - load15 (float)
+    - load5 (float)
+    - n_users (integer)
+    - n_unique_users (integer)
+    - n_cpus (integer)
+    - uptime (integer, seconds)
+    - uptime_format (string, deprecated in 1.10, use `uptime` field)
+
+## Example Output
+
+```text
+system,host=tyrion load1=3.72,load5=2.4,load15=2.1,n_users=3i,n_cpus=4i 1483964144000000000
+system,host=tyrion uptime=1249632i 1483964144000000000
+system,host=tyrion uptime_format="14 days, 11:07" 1483964144000000000
+```
diff --git a/content/telegraf/v1/input-plugins/systemd_units/_index.md b/content/telegraf/v1/input-plugins/systemd_units/_index.md
new file mode 100644
index 000000000..b6a600a14
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/systemd_units/_index.md
@@ -0,0 +1,196 @@
+---
+description: "Telegraf plugin for collecting metrics from Systemd-Units"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Systemd-Units
+    identifier: input-systemd_units
+tags: [Systemd-Units, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Systemd-Units Input Plugin
+
+This plugin gathers the status of systemd units on Linux, using systemd's
+D-Bus interface.
+
+Note: systemd v230 or later is required.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Gather information about systemd-unit states
+# This plugin ONLY supports Linux
+[[inputs.systemd_units]]
+  ## Pattern of units to collect
+  ## A space-separated list of unit-patterns including wildcards determining
+  ## the units to collect.
+  ##  ex: pattern = "telegraf* influxdb* user@*"
+  # pattern = "*"
+
+  ## Filter for a specific unit type
+  ## Available settings are: service, socket, target, device, mount,
+  ## automount, swap, timer, path, slice and scope
+  # unittype = "service"
+
+  ## Collect system or user scoped units
+  ##  ex: scope = "user"
+  # scope = "system"
+
+  ## Collect also units not loaded by systemd, i.e. disabled or static units
+  ## Enabling this feature might introduce significant load when used with
+  ## unspecific patterns (such as '*') as systemd will need to load all
+  ## matching unit files.
+  # collect_disabled_units = false
+
+  ## Collect detailed information for the units
+  # details = false
+
+  ## Timeout for state-collection
+  # timeout = "5s"
+```
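For example, to gather detailed information about a few specific services only
(the unit patterns below are placeholders), a configuration like the following
could be used:

```toml
[[inputs.systemd_units]]
  ## Placeholder unit patterns; adjust to the services you care about.
  pattern = "telegraf* ssh*"
  unittype = "service"
  details = true
```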
+
+This plugin supports two modes of operation:
+
+### Non-detailed mode
+
+This is the default mode, collecting data on the unit's status only without
+further details on the unit.
+
+### Detailed mode
+
+This mode can be enabled by setting the configuration option `details` to
+`true`. In this mode the plugin collects all information of the non-detailed
+mode but provides additional unit information such as memory usage,
+restart counts, PID, etc. See the Metrics section below for details.
+
+### Load
+
+Enumeration of the [unit_load_state_table][1].
+
+| Value | Meaning     | Description                     |
+| ----- | -------     | -----------                     |
+| 0     | loaded      | unit is ~                       |
+| 1     | stub        | unit is ~                       |
+| 2     | not-found   | unit is ~                       |
+| 3     | bad-setting | unit is ~                       |
+| 4     | error       | unit is ~                       |
+| 5     | merged      | unit is ~                       |
+| 6     | masked      | unit is ~                       |
+
+[1]: https://github.com/systemd/systemd/blob/c87700a1335f489be31cd3549927da68b5638819/src/basic/unit-def.c#L87
+
+### Active
+
+Enumeration of the [unit_active_state_table][2].
+
+| Value | Meaning   | Description                        |
+| ----- | -------   | -----------                        |
+| 0     | active       | unit is ~                       |
+| 1     | reloading    | unit is ~                       |
+| 2     | inactive     | unit is ~                       |
+| 3     | failed       | unit is ~                       |
+| 4     | activating   | unit is ~                       |
+| 5     | deactivating | unit is ~                       |
+
+[2]: https://github.com/systemd/systemd/blob/c87700a1335f489be31cd3549927da68b5638819/src/basic/unit-def.c#L99
+
+### Sub
+
+Enumeration of the sub-states; see the various [unittype_state_tables][3].
+Duplicates were removed, and the tables are hex-aligned to leave space for
+future values.
+
+| Value  | Meaning               | Description                         |
+| -----  | -------               | -----------                         |
+|        |                       | service_state_table start at 0x0000 |
+| 0x0000 | running               | unit is ~                           |
+| 0x0001 | dead                  | unit is ~                           |
+| 0x0002 | start-pre             | unit is ~                           |
+| 0x0003 | start                 | unit is ~                           |
+| 0x0004 | exited                | unit is ~                           |
+| 0x0005 | reload                | unit is ~                           |
+| 0x0006 | stop                  | unit is ~                           |
+| 0x0007 | stop-watchdog         | unit is ~                           |
+| 0x0008 | stop-sigterm          | unit is ~                           |
+| 0x0009 | stop-sigkill          | unit is ~                           |
+| 0x000a | stop-post             | unit is ~                           |
+| 0x000b | final-sigterm         | unit is ~                           |
+| 0x000c | failed                | unit is ~                           |
+| 0x000d | auto-restart          | unit is ~                           |
+| 0x000e | condition             | unit is ~                           |
+| 0x000f | cleaning              | unit is ~                           |
+|        |                       | service_state_table start at 0x0010 |
+| 0x0010 | waiting               | unit is ~                           |
+| 0x0011 | reload-signal         | unit is ~                           |
+| 0x0012 | reload-notify         | unit is ~                           |
+| 0x0013 | final-watchdog        | unit is ~                           |
+| 0x0014 | dead-before-auto-restart    | unit is ~                     |
+| 0x0015 | failed-before-auto-restart  | unit is ~                     |
+| 0x0016 | dead-resources-pinned | unit is ~                           |
+| 0x0017 | auto-restart-queued   | unit is ~                           |
+|        |                       | service_state_table start at 0x0020 |
+| 0x0020 | tentative             | unit is ~                           |
+| 0x0021 | plugged               | unit is ~                           |
+|        |                       | service_state_table start at 0x0030 |
+| 0x0030 | mounting              | unit is ~                           |
+| 0x0031 | mounting-done         | unit is ~                           |
+| 0x0032 | mounted               | unit is ~                           |
+| 0x0033 | remounting            | unit is ~                           |
+| 0x0034 | unmounting            | unit is ~                           |
+| 0x0035 | remounting-sigterm    | unit is ~                           |
+| 0x0036 | remounting-sigkill    | unit is ~                           |
+| 0x0037 | unmounting-sigterm    | unit is ~                           |
+| 0x0038 | unmounting-sigkill    | unit is ~                           |
+|        |                       | service_state_table start at 0x0040 |
+|        |                       | service_state_table start at 0x0050 |
+| 0x0050 | abandoned             | unit is ~                           |
+|        |                       | service_state_table start at 0x0060 |
+| 0x0060 | active                | unit is ~                           |
+|        |                       | service_state_table start at 0x0070 |
+| 0x0070 | start-chown           | unit is ~                           |
+| 0x0071 | start-post            | unit is ~                           |
+| 0x0072 | listening             | unit is ~                           |
+| 0x0073 | stop-pre              | unit is ~                           |
+| 0x0074 | stop-pre-sigterm      | unit is ~                           |
+| 0x0075 | stop-pre-sigkill      | unit is ~                           |
+| 0x0076 | final-sigkill         | unit is ~                           |
+|        |                       | service_state_table start at 0x0080 |
+| 0x0080 | activating            | unit is ~                           |
+| 0x0081 | activating-done       | unit is ~                           |
+| 0x0082 | deactivating          | unit is ~                           |
+| 0x0083 | deactivating-sigterm  | unit is ~                           |
+| 0x0084 | deactivating-sigkill  | unit is ~                           |
+|        |                       | service_state_table start at 0x0090 |
+|        |                       | service_state_table start at 0x00a0 |
+| 0x00a0 | elapsed               | unit is ~                           |
+|        |                       |                                     |
+
+[3]: https://github.com/systemd/systemd/blob/c87700a1335f489be31cd3549927da68b5638819/src/basic/unit-def.c#L163
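Taken together, the enumerations above let a consumer map the numeric
`load_code`, `active_code`, and `sub_code` fields back to state names. A
minimal sketch (only a few sub-states shown):

```python
# Partial lookup tables derived from the enumerations above.
LOAD_STATES = {0: "loaded", 1: "stub", 2: "not-found", 3: "bad-setting",
               4: "error", 5: "merged", 6: "masked"}
ACTIVE_STATES = {0: "active", 1: "reloading", 2: "inactive", 3: "failed",
                 4: "activating", 5: "deactivating"}
SUB_STATES = {0x0000: "running", 0x0001: "dead", 0x000c: "failed",
              0x0010: "waiting", 0x0021: "plugged", 0x0032: "mounted"}

def decode(load_code, active_code, sub_code):
    """Map the numeric codes emitted by the plugin back to state names."""
    return (LOAD_STATES.get(load_code, "unknown"),
            ACTIVE_STATES.get(active_code, "unknown"),
            SUB_STATES.get(sub_code, "unknown"))

# networking.service from the example output below:
# load_code=0, active_code=3, sub_code=12
print(decode(0, 3, 12))  # ('loaded', 'failed', 'failed')
```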
+
+## Example Output
+
+### Output in non-detailed mode
+
+```text
+systemd_units,host=host1.example.com,name=dbus.service,load=loaded,active=active,sub=running,user=telegraf load_code=0i,active_code=0i,sub_code=0i 1533730725000000000
+systemd_units,host=host1.example.com,name=networking.service,load=loaded,active=failed,sub=failed,user=telegraf load_code=0i,active_code=3i,sub_code=12i 1533730725000000000
+systemd_units,host=host1.example.com,name=ssh.service,load=loaded,active=active,sub=running,user=telegraf load_code=0i,active_code=0i,sub_code=0i 1533730725000000000
+```
+
+### Output in detailed mode
+
+```text
+systemd_units,active=active,host=host1.example.com,load=loaded,name=dbus.service,sub=running,preset=disabled,state=static,user=telegraf active_code=0i,load_code=0i,mem_avail=6470856704i,mem_current=2691072i,mem_peak=3895296i,pid=481i,restarts=0i,status_errno=0i,sub_code=0i,swap_current=794624i,swap_peak=884736i 1533730725000000000
+systemd_units,active=inactive,host=host1.example.com,load=not-found,name=networking.service,sub=dead,user=telegraf active_code=2i,load_code=2i,pid=0i,restarts=0i,status_errno=0i,sub_code=1i 1533730725000000000
+systemd_units,active=active,host=host1.example.com,load=loaded,name=pcscd.service,sub=running,preset=disabled,state=indirect,user=telegraf active_code=0i,load_code=0i,mem_avail=6370541568i,mem_current=512000i,mem_peak=4399104i,pid=1673i,restarts=0i,status_errno=0i,sub_code=0i,swap_current=3149824i,swap_peak=3149824i 1533730725000000000
+```
diff --git a/content/telegraf/v1/input-plugins/tacacs/_index.md b/content/telegraf/v1/input-plugins/tacacs/_index.md
new file mode 100644
index 000000000..99a094131
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/tacacs/_index.md
@@ -0,0 +1,71 @@
+---
+description: "Telegraf plugin for collecting metrics from Tacacs"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Tacacs
+    identifier: input-tacacs
+tags: [Tacacs, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Tacacs Input Plugin
+
+The Tacacs plugin collects successful TACACS+ authentication response times
+from tacacs servers such as Aruba ClearPass, FreeRADIUS or tac_plus.
+It is primarily meant to monitor how long it takes for the server to fully
+handle an auth request, including all potentially dependent calls (for
+example to AD servers, or other sources of truth for authentication that the
+tacacs server uses).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Tacacs plugin collects successful tacacs authentication response times.
+[[inputs.tacacs]]
+  ## An array of Server IPs (or hostnames) and ports to gather from. If none specified, defaults to localhost.
+  # servers = ["127.0.0.1:49"]
+
+  ## Request source server IP, normally the server running telegraf.
+  # request_ip = "127.0.0.1"
+
+  ## Credentials for tacacs authentication.
+  username = "myuser"
+  password = "mypassword"
+  secret = "mysecret"
+
+  ## Maximum time to receive response.
+  # response_timeout = "5s"
+```
+
+## Metrics
+
+- tacacs
+  - tags:
+    - source
+  - fields:
+    - response_status (string, see below)
+    - responsetime_ms (int64, see below)
+
+### field `response_status`
+
+For replies from the tacacs server, `response_status` contains the real status
+returned by the server. In case of a timeout, telegraf sets it to `Timeout`.
+
+### field `responsetime_ms`
+
+The field `responsetime_ms` is the response time of the tacacs server in
+milliseconds, measured up to the furthest stage of authentication reached.
+In case of a timeout, telegraf fills it with the value of the configured
+response_timeout.
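A consequence of this behavior is that downstream consumers can detect
timed-out requests either by the status string or by comparing the response
time against the configured timeout. A sketch (field values are made up):

```python
# Distinguish real measurements from timeout fill-ins in tacacs metrics.
RESPONSE_TIMEOUT_MS = 5000  # assuming response_timeout = "5s"

def is_timeout(fields):
    # Telegraf fills response_status with "Timeout" when no reply arrived
    # within response_timeout; responsetime_ms then equals that timeout.
    return fields["response_status"] == "Timeout"

ok = {"response_status": "AuthenStatusPass", "responsetime_ms": 311}
timed_out = {"response_status": "Timeout",
             "responsetime_ms": RESPONSE_TIMEOUT_MS}
```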
+
+## Example Output
+
+```text
+tacacs,source=127.0.0.1:49 responsetime_ms=311i,response_status="AuthenStatusPass" 1677526200000000000
+```
diff --git a/content/telegraf/v1/input-plugins/tail/_index.md b/content/telegraf/v1/input-plugins/tail/_index.md
new file mode 100644
index 000000000..93febed6b
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/tail/_index.md
@@ -0,0 +1,148 @@
+---
+description: "Telegraf plugin for collecting metrics from Tail"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Tail
+    identifier: input-tail
+tags: [Tail, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Tail Input Plugin
+
+The tail plugin "tails" a logfile and parses each log message.
+
+By default, the tail plugin acts like the following unix tail command:
+
+```shell
+tail -F --lines=0 myfile.log
+```
+
+- `-F` means that it will follow the _name_ of the given file, so
+that it will be compatible with log-rotated files, and that it will retry on
+inaccessible files.
+- `--lines=0` means that it will start at the end of the file (unless
+the `from_beginning` option is set).
+
+See <http://man7.org/linux/man-pages/man1/tail.1.html> for more details.
+
+The plugin expects messages in one of the Telegraf Input Data
+Formats.
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Parse the new lines appended to a file
+[[inputs.tail]]
+  ## File names or a pattern to tail.
+  ## These accept standard unix glob matching rules, but with the addition of
+  ## ** as a "super asterisk". ie:
+  ##   "/var/log/**.log"  -> recursively find all .log files in /var/log
+  ##   "/var/log/*/*.log" -> find all .log files with a parent dir in /var/log
+  ##   "/var/log/apache.log" -> just tail the apache log file
+  ##   "/var/log/log[!1-2]*" -> tail files without 1-2
+  ##   "/var/log/log[^1-2]*" -> identical behavior as above
+  ## See https://github.com/gobwas/glob for more examples
+  ##
+  files = ["/var/mymetrics.out"]
+
+  ## Read file from beginning.
+  # from_beginning = false
+
+  ## Whether file is a named pipe
+  # pipe = false
+
+  ## Method used to watch for file updates.  Can be either "inotify" or "poll".
+  ## inotify is supported on linux, *bsd, and macOS, while Windows requires
+  ## using poll. Poll checks for changes every 250ms.
+  # watch_method = "inotify"
+
+  ## Maximum lines of the file to process that have not yet be written by the
+  ## output.  For best throughput set based on the number of metrics on each
+  ## line and the size of the output's metric_batch_size.
+  # max_undelivered_lines = 1000
+
+  ## Character encoding to use when interpreting the file contents.  Invalid
+  ## characters are replaced using the unicode replacement character.  When set
+  ## to the empty string the data is not decoded to text.
+  ##   ex: character_encoding = "utf-8"
+  ##       character_encoding = "utf-16le"
+  ##       character_encoding = "utf-16be"
+  ##       character_encoding = ""
+  # character_encoding = ""
+
+  ## Data format to consume.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  data_format = "influx"
+
+  ## Set the tag that will contain the path of the tailed file. If you don't want this tag, set it to an empty string.
+  # path_tag = "path"
+
+  ## Filters to apply to files before generating metrics
+  ## "ansi_color" removes ANSI colors
+  # filters = []
+
+  ## multiline parser/codec
+  ## https://www.elastic.co/guide/en/logstash/2.4/plugins-filters-multiline.html
+  #[inputs.tail.multiline]
+    ## The pattern should be a regexp which matches what you believe to be an indicator that the field is part of an event consisting of multiple lines of log data.
+    #pattern = "^\s"
+
+    ## The field's value must be previous or next and indicates the relation to the
+    ## multi-line event.
+    #match_which_line = "previous"
+
+    ## The invert_match option can be true or false (defaults to false).
+    ## If true, a message not matching the pattern constitutes a match for the
+    ## multiline filter and match_which_line is applied (and vice versa).
+    #invert_match = false
+
+    ## The handling method for quoted text (defaults to 'ignore').
+    ## The following methods are available:
+    ##   ignore  -- do not consider quotation (default)
+    ##   single-quotes -- consider text quoted by single quotes (')
+    ##   double-quotes -- consider text quoted by double quotes (")
+    ##   backticks     -- consider text quoted by backticks (`)
+    ## When handling quotes, escaped quotes (e.g. \") are handled correctly.
+    #quotation = "ignore"
+
+    ## The preserve_newline option can be true or false (defaults to false).
+    ## If true, the newline character is preserved for multiline elements,
+    ## this is useful to preserve message-structure e.g. for logging outputs.
+    #preserve_newline = false
+
+    ## After the specified timeout, this plugin sends the multiline event even
+    ## if no new pattern is found to start a new event. The default is 5s.
+    #timeout = "5s"
+```
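+
+As an illustration, here is a minimal multiline configuration that folds
+indented continuation lines (for example Java stack traces) into the preceding
+event. The file path and pattern are illustrative only, not defaults:
+
+```toml
+[[inputs.tail]]
+  files = ["/var/log/app/app.log"]
+  data_format = "value"
+  data_type = "string"
+
+  [inputs.tail.multiline]
+    ## Lines starting with whitespace belong to the previous event.
+    pattern = '^\s'
+    match_which_line = "previous"
+    timeout = "5s"
+```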
+
+## Metrics
+
+Metrics are produced according to the `data_format` option. Additionally, a
+tag labeled `path` is added to the metric containing the path of the file
+being tailed.
+
+## Example Output
+
+There is no predefined metric format, so output depends on plugin input.
diff --git a/content/telegraf/v1/input-plugins/teamspeak/_index.md b/content/telegraf/v1/input-plugins/teamspeak/_index.md
new file mode 100644
index 000000000..cf7506ef0
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/teamspeak/_index.md
@@ -0,0 +1,73 @@
+---
+description: "Telegraf plugin for collecting metrics from Teamspeak 3"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Teamspeak 3
+    identifier: input-teamspeak
+tags: [Teamspeak 3, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Teamspeak 3 Input Plugin
+
+This plugin uses the Teamspeak 3 ServerQuery interface of the Teamspeak server
+to collect statistics of one or more virtual servers. If you are querying an
+external Teamspeak server, make sure to add the host which is running Telegraf
+to `query_ip_allowlist.txt` in the Teamspeak server directory. For information
+about how to configure the server, take a look at the
+[Teamspeak 3 ServerQuery Manual][1].
+
+[1]: http://media.teamspeak.com/ts3_literature/TeamSpeak%203%20Server%20Query%20Manual.pdf
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Reads metrics from a Teamspeak 3 Server via ServerQuery
+[[inputs.teamspeak]]
+  ## Server address for Teamspeak 3 ServerQuery
+  # server = "127.0.0.1:10011"
+  ## Username for ServerQuery
+  username = "serverqueryuser"
+  ## Password for ServerQuery
+  password = "secret"
+  ## Nickname of the ServerQuery client
+  nickname = "telegraf"
+  ## Array of virtual servers
+  # virtual_servers = [1]
+```
+
+## Metrics
+
+- teamspeak
+  - uptime
+  - clients_online
+  - total_ping
+  - total_packet_loss
+  - packets_sent_total
+  - packets_received_total
+  - bytes_sent_total
+  - bytes_received_total
+  - query_clients_online
+
+### Tags
+
+The following tags are used:
+
+- virtual_server
+- name
+
+## Example Output
+
+```text
+teamspeak,virtual_server=1,name=LeopoldsServer,host=vm01 bytes_received_total=29638202639i,uptime=13567846i,total_ping=26.89,total_packet_loss=0,packets_sent_total=415821252i,packets_received_total=237069900i,bytes_sent_total=55309568252i,clients_online=11i,query_clients_online=1i 1507406561000000000
+```
diff --git a/content/telegraf/v1/input-plugins/temp/_index.md b/content/telegraf/v1/input-plugins/temp/_index.md
new file mode 100644
index 000000000..8a3993760
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/temp/_index.md
@@ -0,0 +1,73 @@
+---
+description: "Telegraf plugin for collecting metrics from Temperature"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Temperature
+    identifier: input-temp
+tags: [Temperature, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Temperature Input Plugin
+
+The temp input plugin gathers metrics on system temperature. This plugin is
+meant to be multi-platform and uses platform-specific collection methods.
+
+Currently supports Linux and Windows.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics about temperature
+[[inputs.temp]]
+  ## Desired output format (Linux only)
+  ## Available values are
+  ##   v1 -- use pre-v1.22.4 sensor naming, e.g. coretemp_core0_input
+  ##   v2 -- use v1.22.4+ sensor naming, e.g. coretemp_core_0_input
+  # metric_format = "v2"
+
+  ## Add device tag to distinguish devices with the same name (Linux only)
+  # add_device_tag = false
+```
+
+## Metrics
+
+- temp
+  - tags:
+    - sensor
+  - fields:
+    - temp (float, Celsius)
+
+## Troubleshooting
+
+On **Windows**, the plugin uses a WMI call that can be replicated with the
+following command:
+
+```shell
+wmic /namespace:\\root\wmi PATH MSAcpi_ThermalZoneTemperature
+```
+
+If the result is "Not Supported", you may be running in a virtualized
+environment rather than on a physical machine. If you still get this result,
+your motherboard or system may not support querying these values. Finally, you
+may need to run as admin to get the values.
+
+## Example Output
+
+```text
+temp,sensor=coretemp_physicalid0_crit temp=100 1531298763000000000
+temp,sensor=coretemp_physicalid0_critalarm temp=0 1531298763000000000
+temp,sensor=coretemp_physicalid0_input temp=100 1531298763000000000
+temp,sensor=coretemp_physicalid0_max temp=100 1531298763000000000
+```
diff --git a/content/telegraf/v1/input-plugins/tengine/_index.md b/content/telegraf/v1/input-plugins/tengine/_index.md
new file mode 100644
index 000000000..aaf3ad356
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/tengine/_index.md
@@ -0,0 +1,92 @@
+---
+description: "Telegraf plugin for collecting metrics from Tengine"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Tengine
+    identifier: input-tengine
+tags: [Tengine, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Tengine Input Plugin
+
+The tengine plugin gathers metrics from the
+[Tengine Web Server](http://tengine.taobao.org/) via the
+[reqstat](http://tengine.taobao.org/document/http_reqstat.html) module.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read Tengine's basic status information (ngx_http_reqstat_module)
+[[inputs.tengine]]
+  ## An array of Tengine reqstat module URI to gather stats.
+  urls = ["http://127.0.0.1/us"]
+
+  ## HTTP response timeout (default: 5s)
+  # response_timeout = "5s"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+## Metrics
+
+- tengine
+  - tags:
+    - port
+    - server
+    - server_name
+  - fields:
+    - bytes_in (integer, total number of bytes received from client)
+    - bytes_out (integer, total number of bytes sent to client)
+    - conn_total (integer, total number of accepted connections)
+    - req_total (integer, total number of processed requests)
+    - http_2xx (integer, total number of 2xx requests)
+    - http_3xx (integer, total number of 3xx requests)
+    - http_4xx (integer, total number of 4xx requests)
+    - http_5xx (integer, total number of 5xx requests)
+    - http_other_status (integer, total number of other requests)
+    - rt (integer, accumulation of rt)
+    - ups_req (integer, total number of requests calling for upstream)
+    - ups_rt (integer, accumulation of upstream rt)
+    - ups_tries (integer, total number of times calling for upstream)
+    - http_200 (integer, total number of 200 requests)
+    - http_206 (integer, total number of 206 requests)
+    - http_302 (integer, total number of 302 requests)
+    - http_304 (integer, total number of 304 requests)
+    - http_403 (integer, total number of 403 requests)
+    - http_404 (integer, total number of 404 requests)
+    - http_416 (integer, total number of 416 requests)
+    - http_499 (integer, total number of 499 requests)
+    - http_500 (integer, total number of 500 requests)
+    - http_502 (integer, total number of 502 requests)
+    - http_503 (integer, total number of 503 requests)
+    - http_504 (integer, total number of 504 requests)
+    - http_508 (integer, total number of 508 requests)
+    - http_other_detail_status (integer, total number of requests of other status codes)
+    - http_ups_4xx (integer, total number of requests of upstream 4xx)
+    - http_ups_5xx (integer, total number of requests of upstream 5xx)
+
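+As a downstream illustration (not part of the plugin), a short Python sketch
+computing a 5xx error rate from a single collected data point, using field
+names from the list above:
+
+```python
+# Derive a 5xx error rate from one tengine data point.
+# The dict mimics fields gathered by the plugin for one server_name.
+point = {"server_name": "localhost", "req_total": 3199209, "http_5xx": 21747}
+
+def error_rate_5xx(fields):
+    """Fraction of processed requests answered with a 5xx status."""
+    total = fields["req_total"]
+    return fields["http_5xx"] / total if total else 0.0
+
+print(f"{point['server_name']}: {error_rate_5xx(point):.4%} 5xx")
+```
+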
+## Example Output
+
+```text
+tengine,host=gcp-thz-api-5,port=80,server=localhost,server_name=localhost bytes_in=9129i,bytes_out=56334i,conn_total=14i,http_200=90i,http_206=0i,http_2xx=90i,http_302=0i,http_304=0i,http_3xx=0i,http_403=0i,http_404=0i,http_416=0i,http_499=0i,http_4xx=0i,http_500=0i,http_502=0i,http_503=0i,http_504=0i,http_508=0i,http_5xx=0i,http_other_detail_status=0i,http_other_status=0i,http_ups_4xx=0i,http_ups_5xx=0i,req_total=90i,rt=0i,ups_req=0i,ups_rt=0i,ups_tries=0i 1526546308000000000
+tengine,host=gcp-thz-api-5,port=80,server=localhost,server_name=28.79.190.35.bc.googleusercontent.com bytes_in=1500i,bytes_out=3009i,conn_total=4i,http_200=1i,http_206=0i,http_2xx=1i,http_302=0i,http_304=0i,http_3xx=0i,http_403=0i,http_404=1i,http_416=0i,http_499=0i,http_4xx=3i,http_500=0i,http_502=0i,http_503=0i,http_504=0i,http_508=0i,http_5xx=0i,http_other_detail_status=0i,http_other_status=0i,http_ups_4xx=0i,http_ups_5xx=0i,req_total=4i,rt=0i,ups_req=0i,ups_rt=0i,ups_tries=0i 1526546308000000000
+tengine,host=gcp-thz-api-5,port=80,server=localhost,server_name=www.google.com bytes_in=372i,bytes_out=786i,conn_total=1i,http_200=1i,http_206=0i,http_2xx=1i,http_302=0i,http_304=0i,http_3xx=0i,http_403=0i,http_404=0i,http_416=0i,http_499=0i,http_4xx=0i,http_500=0i,http_502=0i,http_503=0i,http_504=0i,http_508=0i,http_5xx=0i,http_other_detail_status=0i,http_other_status=0i,http_ups_4xx=0i,http_ups_5xx=0i,req_total=1i,rt=0i,ups_req=0i,ups_rt=0i,ups_tries=0i 1526546308000000000
+tengine,host=gcp-thz-api-5,port=80,server=localhost,server_name=35.190.79.28 bytes_in=4433i,bytes_out=10259i,conn_total=5i,http_200=3i,http_206=0i,http_2xx=3i,http_302=0i,http_304=0i,http_3xx=0i,http_403=0i,http_404=11i,http_416=0i,http_499=0i,http_4xx=11i,http_500=0i,http_502=0i,http_503=0i,http_504=0i,http_508=0i,http_5xx=0i,http_other_detail_status=0i,http_other_status=0i,http_ups_4xx=0i,http_ups_5xx=0i,req_total=14i,rt=0i,ups_req=0i,ups_rt=0i,ups_tries=0i 1526546308000000000
+tengine,host=gcp-thz-api-5,port=80,server=localhost,server_name=tenka-prod-api.txwy.tw bytes_in=3014397400i,bytes_out=14279992835i,conn_total=36844i,http_200=3177339i,http_206=0i,http_2xx=3177339i,http_302=0i,http_304=0i,http_3xx=0i,http_403=0i,http_404=123i,http_416=0i,http_499=0i,http_4xx=123i,http_500=17214i,http_502=4453i,http_503=80i,http_504=0i,http_508=0i,http_5xx=21747i,http_other_detail_status=0i,http_other_status=0i,http_ups_4xx=123i,http_ups_5xx=21747i,req_total=3199209i,rt=245874536i,ups_req=2685076i,ups_rt=245858217i,ups_tries=2685076i 1526546308000000000
+```
diff --git a/content/telegraf/v1/input-plugins/tomcat/_index.md b/content/telegraf/v1/input-plugins/tomcat/_index.md
new file mode 100644
index 000000000..3dcaa09d9
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/tomcat/_index.md
@@ -0,0 +1,101 @@
+---
+description: "Telegraf plugin for collecting metrics from Tomcat"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Tomcat
+    identifier: input-tomcat
+tags: [Tomcat, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Tomcat Input Plugin
+
+The Tomcat plugin collects statistics available from the Tomcat manager status
+page at the `http://<host>/manager/status/all?XML=true` URL (`XML=true` returns
+XML data only).
+
+See the [Tomcat documentation](https://tomcat.apache.org/tomcat-9.0-doc/manager-howto.html#Server_Status) for details of these statistics.
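+
+To illustrate what the plugin consumes, a small Python sketch parsing the JVM
+memory attributes out of a trimmed status document. The XML layout shown is an
+assumption for illustration; the real manager output contains more elements:
+
+```python
+import xml.etree.ElementTree as ET
+
+# Trimmed sample of the manager status XML (illustrative structure only).
+SAMPLE = """<status>
+  <jvm>
+    <memory free="20014352" total="41459712" max="127729664"/>
+  </jvm>
+</status>"""
+
+root = ET.fromstring(SAMPLE)
+mem = root.find("jvm/memory")
+fields = {k: int(mem.attrib[k]) for k in ("free", "total", "max")}
+print(fields)
+```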
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Gather metrics from the Tomcat server status page.
+[[inputs.tomcat]]
+  ## URL of the Tomcat server status
+  # url = "http://127.0.0.1:8080/manager/status/all?XML=true"
+
+  ## HTTP Basic Auth Credentials
+  # username = "tomcat"
+  # password = "s3cret"
+
+  ## Request timeout
+  # timeout = "5s"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+## Metrics
+
+- tomcat_jvm_memory
+  - free
+  - max
+  - total
+- tomcat_jvm_memorypool
+  - committed
+  - init
+  - max
+  - used
+- tomcat_connector
+  - bytes_received
+  - bytes_sent
+  - current_threads_busy
+  - current_thread_count
+  - error_count
+  - max_threads
+  - max_time
+  - processing_time
+  - request_count
+
+### Tags
+
+- tomcat_jvm_memory
+  - source
+- tomcat_jvm_memorypool
+  - name
+  - type
+  - source
+- tomcat_connector
+  - name
+  - source
+
+## Example Output
+
+```text
+tomcat_jvm_memory,host=N8-MBP free=20014352i,max=127729664i,total=41459712i 1474663361000000000
+tomcat_jvm_memorypool,host=N8-MBP,name=Eden\ Space,type=Heap\ memory committed=11534336i,init=2228224i,max=35258368i,used=1941200i 1474663361000000000
+tomcat_jvm_memorypool,host=N8-MBP,name=Survivor\ Space,type=Heap\ memory committed=1376256i,init=262144i,max=4390912i,used=1376248i 1474663361000000000
+tomcat_jvm_memorypool,host=N8-MBP,name=Tenured\ Gen,type=Heap\ memory committed=28549120i,init=5636096i,max=88080384i,used=18127912i 1474663361000000000
+tomcat_jvm_memorypool,host=N8-MBP,name=Code\ Cache,type=Non-heap\ memory committed=6946816i,init=2555904i,max=251658240i,used=6406528i 1474663361000000000
+tomcat_jvm_memorypool,host=N8-MBP,name=Compressed\ Class\ Space,type=Non-heap\ memory committed=1966080i,init=0i,max=1073741824i,used=1816120i 1474663361000000000
+tomcat_jvm_memorypool,host=N8-MBP,name=Metaspace,type=Non-heap\ memory committed=18219008i,init=0i,max=-1i,used=17559376i 1474663361000000000
+tomcat_connector,host=N8-MBP,name=ajp-bio-8009 bytes_received=0i,bytes_sent=0i,current_thread_count=0i,current_threads_busy=0i,error_count=0i,max_threads=200i,max_time=0i,processing_time=0i,request_count=0i 1474663361000000000
+tomcat_connector,host=N8-MBP,name=http-bio-8080 bytes_received=0i,bytes_sent=86435i,current_thread_count=10i,current_threads_busy=1i,error_count=2i,max_threads=200i,max_time=167i,processing_time=245i,request_count=15i 1474663361000000000
+```
diff --git a/content/telegraf/v1/input-plugins/trig/_index.md b/content/telegraf/v1/input-plugins/trig/_index.md
new file mode 100644
index 000000000..40a0481d8
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/trig/_index.md
@@ -0,0 +1,48 @@
+---
+description: "Telegraf plugin for collecting metrics from Trig"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Trig
+    identifier: input-trig
+tags: [Trig, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Trig Input Plugin
+
+The `trig` plugin inserts sine and cosine waves for demonstration purposes.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Inserts sine and cosine waves for demonstration purposes
+[[inputs.trig]]
+  ## Set the amplitude
+  amplitude = 10.0
+```
+
+## Metrics
+
+- trig
+  - fields:
+    - cosine (float)
+    - sine (float)
+
+## Example Output
+
+```text
+trig,host=MBP15-SWANG.local cosine=10,sine=0 1632338680000000000
+trig,host=MBP15-SWANG.local sine=5.877852522924732,cosine=8.090169943749473 1632338690000000000
+trig,host=MBP15-SWANG.local sine=9.510565162951535,cosine=3.0901699437494745 1632338700000000000
+```
diff --git a/content/telegraf/v1/input-plugins/twemproxy/_index.md b/content/telegraf/v1/input-plugins/twemproxy/_index.md
new file mode 100644
index 000000000..13cf7a52f
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/twemproxy/_index.md
@@ -0,0 +1,40 @@
+---
+description: "Telegraf plugin for collecting metrics from Twemproxy"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Twemproxy
+    identifier: input-twemproxy
+tags: [Twemproxy, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Twemproxy Input Plugin
+
+The `twemproxy` plugin gathers statistics from
+[Twemproxy](https://github.com/twitter/twemproxy) servers.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read Twemproxy stats data
+[[inputs.twemproxy]]
+  ## Twemproxy stats address and port (no scheme)
+  addr = "localhost:22222"
+  ## Monitor pool name
+  pools = ["redis_pool", "mc_pool"]
+```
+
+## Metrics
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/unbound/_index.md b/content/telegraf/v1/input-plugins/unbound/_index.md
new file mode 100644
index 000000000..a5c1acc6d
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/unbound/_index.md
@@ -0,0 +1,183 @@
+---
+description: "Telegraf plugin for collecting metrics from Unbound"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Unbound
+    identifier: input-unbound
+tags: [Unbound, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Unbound Input Plugin
+
+This plugin gathers stats from [Unbound](https://www.unbound.net/) -
+a validating, recursive, and caching DNS resolver.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# A plugin to collect stats from the Unbound DNS resolver
+[[inputs.unbound]]
+  ## Address of server to connect to, read from unbound conf default, optionally ':port'
+  ## Will lookup IP if given a hostname
+  server = "127.0.0.1:8953"
+
+  ## If running as a restricted user you can prepend sudo for additional access:
+  # use_sudo = false
+
+  ## The default location of the unbound-control binary can be overridden with:
+  # binary = "/usr/sbin/unbound-control"
+
+  ## The default location of the unbound config file can be overridden with:
+  # config_file = "/etc/unbound/unbound.conf"
+
+  ## The default timeout of 1s can be overridden with:
+  # timeout = "1s"
+
+  ## When set to true, thread metrics are tagged with the thread id.
+  ##
+  ## The default is false for backwards compatibility, and will be changed to
+  ## true in a future version.  It is recommended to set to true on new
+  ## deployments.
+  thread_as_tag = false
+```
+
+### Permissions
+
+It's important to note that this plugin references `unbound-control`, which may
+require additional permissions to execute successfully. Depending on the
+user/group permissions of the telegraf user executing this plugin, you may need
+to alter the group membership, set facls, or use sudo.
+
+**Group membership (Recommended)**:
+
+```bash
+$ groups telegraf
+telegraf : telegraf
+
+$ usermod -a -G unbound telegraf
+
+$ groups telegraf
+telegraf : telegraf unbound
+```
+
+**Sudo privileges**:
+If you use this method, you will need the following in your telegraf config:
+
+```toml
+[[inputs.unbound]]
+  use_sudo = true
+```
+
+You will also need to update your sudoers file:
+
+```bash
+$ visudo
+# Add the following line:
+Cmnd_Alias UNBOUNDCTL = /usr/sbin/unbound-control
+telegraf  ALL=(ALL) NOPASSWD: UNBOUNDCTL
+Defaults!UNBOUNDCTL !logfile, !syslog, !pam_session
+```
+
+Please use the solution you see as most appropriate.
+
+## Metrics
+
+This is the full list of stats provided by unbound-control and potentially
+collected depending on your unbound configuration. Histogram-related statistics
+are never collected; extended statistics can also be imported
+("extended-statistics: yes" in the unbound configuration). In the output, the
+dots in the unbound-control stat names are replaced by underscores (see
+<https://www.unbound.net/documentation/unbound-control.html> for details).
+
+Shown metrics are with `thread_as_tag` enabled.
+
+- unbound
+  - fields:
+    - total_num_queries
+    - total_num_cachehits
+    - total_num_cachemiss
+    - total_num_prefetch
+    - total_num_recursivereplies
+    - total_requestlist_avg
+    - total_requestlist_max
+    - total_requestlist_overwritten
+    - total_requestlist_exceeded
+    - total_requestlist_current_all
+    - total_requestlist_current_user
+    - total_recursion_time_avg
+    - total_recursion_time_median
+    - time_now
+    - time_up
+    - time_elapsed
+    - mem_total_sbrk
+    - mem_cache_rrset
+    - mem_cache_message
+    - mem_mod_iterator
+    - mem_mod_validator
+    - num_query_type_A
+    - num_query_type_PTR
+    - num_query_type_TXT
+    - num_query_type_AAAA
+    - num_query_type_SRV
+    - num_query_type_ANY
+    - num_query_class_IN
+    - num_query_opcode_QUERY
+    - num_query_tcp
+    - num_query_ipv6
+    - num_query_flags_QR
+    - num_query_flags_AA
+    - num_query_flags_TC
+    - num_query_flags_RD
+    - num_query_flags_RA
+    - num_query_flags_Z
+    - num_query_flags_AD
+    - num_query_flags_CD
+    - num_query_edns_present
+    - num_query_edns_DO
+    - num_answer_rcode_NOERROR
+    - num_answer_rcode_SERVFAIL
+    - num_answer_rcode_NXDOMAIN
+    - num_answer_rcode_nodata
+    - num_answer_secure
+    - num_answer_bogus
+    - num_rrset_bogus
+    - unwanted_queries
+    - unwanted_replies
+
+- unbound_threads
+  - tags:
+    - thread
+  - fields:
+    - num_queries
+    - num_cachehits
+    - num_cachemiss
+    - num_prefetch
+    - num_recursivereplies
+    - requestlist_avg
+    - requestlist_max
+    - requestlist_overwritten
+    - requestlist_exceeded
+    - requestlist_current_all
+    - requestlist_current_user
+    - recursion_time_avg
+    - recursion_time_median
+
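+The renaming described above (dots replaced by underscores and, with
+`thread_as_tag` enabled, per-thread counters split into their own measurement)
+can be sketched as follows. This is an approximation of the behavior, not the
+plugin's actual code:
+
+```python
+import re
+
+def map_stat(name):
+    """Map an unbound-control stat name to (measurement, tags, field)."""
+    # Per-thread counters such as "thread0.num.queries" get a thread tag.
+    m = re.match(r"thread(\d+)\.(.+)", name)
+    if m:
+        return ("unbound_threads", {"thread": m.group(1)},
+                m.group(2).replace(".", "_"))
+    return ("unbound", {}, name.replace(".", "_"))
+
+print(map_stat("total.num.queries"))
+print(map_stat("thread0.num.cachehits"))
+```
+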
+## Example Output
+
+```text
+unbound,host=localhost total_requestlist_avg=0,total_requestlist_exceeded=0,total_requestlist_overwritten=0,total_requestlist_current_user=0,total_recursion_time_avg=0.029186,total_tcpusage=0,total_num_queries=51,total_num_queries_ip_ratelimited=0,total_num_recursivereplies=6,total_requestlist_max=0,time_now=1522804978.784814,time_elapsed=310.435217,total_num_cachemiss=6,total_num_zero_ttl=0,time_up=310.435217,total_num_cachehits=45,total_num_prefetch=0,total_requestlist_current_all=0,total_recursion_time_median=0.016384 1522804979000000000
+unbound_threads,host=localhost,thread=0 num_queries_ip_ratelimited=0,requestlist_current_user=0,recursion_time_avg=0.029186,num_prefetch=0,requestlist_overwritten=0,requestlist_exceeded=0,requestlist_current_all=0,tcpusage=0,num_cachehits=37,num_cachemiss=6,num_recursivereplies=6,requestlist_avg=0,num_queries=43,num_zero_ttl=0,requestlist_max=0,recursion_time_median=0.032768 1522804979000000000
+unbound_threads,host=localhost,thread=1 num_zero_ttl=0,recursion_time_avg=0,num_queries_ip_ratelimited=0,num_cachehits=8,num_prefetch=0,requestlist_exceeded=0,recursion_time_median=0,tcpusage=0,num_cachemiss=0,num_recursivereplies=0,requestlist_max=0,requestlist_overwritten=0,requestlist_current_user=0,num_queries=8,requestlist_avg=0,requestlist_current_all=0 1522804979000000000
+```
diff --git a/content/telegraf/v1/input-plugins/upsd/_index.md b/content/telegraf/v1/input-plugins/upsd/_index.md
new file mode 100644
index 000000000..4cd62ac15
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/upsd/_index.md
@@ -0,0 +1,123 @@
+---
+description: "Telegraf plugin for collecting metrics from UPSD"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: UPSD
+    identifier: input-upsd
+tags: [UPSD, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# UPSD Input Plugin
+
+This plugin reads data of one or more Uninterruptible Power Supplies
+from an `upsd` daemon using its NUT network protocol.
+
+## Requirements
+
+`upsd` should be installed and its daemon should be running.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Monitor UPSes connected via Network UPS Tools
+[[inputs.upsd]]
+  ## A running NUT server to connect to.
+  ## IPv6 addresses must be enclosed in brackets (e.g. "[::1]")
+  # server = "127.0.0.1"
+  # port = 3493
+  # username = "user"
+  # password = "password"
+
+  ## Force parsing numbers as floats
+  ## It is highly recommended to enable this setting to parse numbers
+  ## consistently as floats to avoid database conflicts where some numbers are
+  ## parsed as integers and others as floats.
+  # force_float = false
+
+  ## Collect additional fields if they are available for the UPS
+  ## The fields need to be specified as NUT variable names, see
+  ## https://networkupstools.org/docs/developer-guide.chunked/apas02.html
+  ## Wildcards are accepted.
+  # additional_fields = []
+
+  ## Dump information for debugging
+  ## Allows printing the raw variables (and corresponding types) as received
+  ## from the NUT server ONCE for each UPS.
+  ## Please attach this information when reporting issues!
+  # log_level = "trace"
+```
+
+## Pitfalls
+
+Please note that field types are automatically determined based on the values.
+Especially the strings `enabled` and `disabled` are automatically converted to
+`boolean` values. This might lead to trouble for fields that can contain
+non-binary values like `enabled`, `disabled` and `muted` as the output field
+will be `boolean` for the first two values but `string` for the latter. To
+convert `enabled` and `disabled` values back to `string` for those fields, use
+the [enum processor](/telegraf/v1/processor-plugins/enum/) with
+
+```toml
+[[processors.enum]]
+  [[processors.enum.mapping]]
+    field = "ups_beeper_status"
+    [processors.enum.mapping.value_mappings]
+      true = "enabled"
+      false = "disabled"
+```
+
+Alternatively, you can also map the non-binary value to a `boolean`.
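+
+For instance, a hypothetical mapping that folds the extra value into a
+`boolean` instead (field name and value chosen for illustration):
+
+```toml
+[[processors.enum]]
+  [[processors.enum.mapping]]
+    field = "ups_beeper_status"
+    [processors.enum.mapping.value_mappings]
+      muted = false
+```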
+
+## Metrics
+
+This implementation tries to maintain compatibility with the apcupsd metrics:
+
+- upsd
+  - tags:
+    - serial
+    - ups_name
+    - model
+  - fields:
+    - status_flags ([status-bits](https://www.rfc-editor.org/rfc/rfc9271.html#section-5.1))
+    - input_voltage
+    - load_percent
+    - battery_charge_percent
+    - time_left_ns
+    - output_voltage
+    - internal_temp
+    - battery_voltage
+    - input_frequency
+    - battery_date
+    - nominal_input_voltage
+    - nominal_battery_voltage
+    - nominal_power
+    - firmware
+
+With the exception of:
+
+- tags:
+  - status (string representing the set status_flags)
+- fields:
+  - time_on_battery_ns
+
+## Example Output
+
+```text
+upsd,serial=AS1231515,ups_name=name1 load_percent=9.7,time_left_ns=9800000,output_voltage=230.4,internal_temp=32.4,battery_voltage=27.4,input_frequency=50.2,input_voltage=230.4,battery_charge_percent=100,status_flags=8i 1490035922000000000
+```
diff --git a/content/telegraf/v1/input-plugins/uwsgi/_index.md b/content/telegraf/v1/input-plugins/uwsgi/_index.md
new file mode 100644
index 000000000..073516d68
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/uwsgi/_index.md
@@ -0,0 +1,112 @@
+---
+description: "Telegraf plugin for collecting metrics from uWSGI"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: uWSGI
+    identifier: input-uwsgi
+tags: [uWSGI, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# uWSGI Input Plugin
+
+The uWSGI input plugin gathers metrics about uWSGI using its [Stats
+Server](https://uwsgi-docs.readthedocs.io/en/latest/StatsServer.html).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read uWSGI metrics.
+[[inputs.uwsgi]]
+  ## List of URLs of uWSGI Stats servers. Each URL must match the pattern:
+  ## scheme://address[:port]
+  ##
+  ## For example:
+  ## servers = ["tcp://localhost:5050", "http://localhost:1717", "unix:///tmp/statsock"]
+  servers = ["tcp://127.0.0.1:1717"]
+
+  ## General connection timeout
+  # timeout = "5s"
+```
+
+## Metrics
+
+- uwsgi_overview
+  - tags:
+    - source
+    - uid
+    - gid
+    - version
+  - fields:
+    - listen_queue
+    - listen_queue_errors
+    - signal_queue
+    - load
+    - pid
+
+- uwsgi_workers
+  - tags:
+    - worker_id
+    - source
+  - fields:
+    - requests
+    - accepting
+    - delta_request
+    - exceptions
+    - harakiri_count
+    - pid
+    - signals
+    - signal_queue
+    - status
+    - rss
+    - vsz
+    - running_time
+    - last_spawn
+    - respawn_count
+    - tx
+    - avg_rt
+
+- uwsgi_apps
+  - tags:
+    - app_id
+    - worker_id
+    - source
+  - fields:
+    - modifier1
+    - requests
+    - startup_time
+    - exceptions
+
+- uwsgi_cores
+  - tags:
+    - core_id
+    - worker_id
+    - source
+  - fields:
+    - requests
+    - static_requests
+    - routed_requests
+    - offloaded_requests
+    - write_errors
+    - read_errors
+    - in_request
+
+## Example Output
+
+```text
+uwsgi_overview,gid=0,uid=0,source=172.17.0.2,version=2.0.18 listen_queue=0i,listen_queue_errors=0i,load=0i,pid=1i,signal_queue=0i 1564441407000000000
+uwsgi_workers,source=172.17.0.2,worker_id=1 accepting=1i,avg_rt=0i,delta_request=0i,exceptions=0i,harakiri_count=0i,last_spawn=1564441202i,pid=6i,requests=0i,respawn_count=1i,rss=0i,running_time=0i,signal_queue=0i,signals=0i,status="idle",tx=0i,vsz=0i 1564441407000000000
+uwsgi_apps,app_id=0,worker_id=1,source=172.17.0.2 exceptions=0i,modifier1=0i,requests=0i,startup_time=0i 1564441407000000000
+uwsgi_cores,core_id=0,worker_id=1,source=172.17.0.2 in_request=0i,offloaded_requests=0i,read_errors=0i,requests=0i,routed_requests=0i,static_requests=0i,write_errors=0i 1564441407000000000
+```
diff --git a/content/telegraf/v1/input-plugins/varnish/_index.md b/content/telegraf/v1/input-plugins/varnish/_index.md
new file mode 100644
index 000000000..c1a8983f1
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/varnish/_index.md
@@ -0,0 +1,597 @@
+---
+description: "Telegraf plugin for collecting metrics from Varnish"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Varnish
+    identifier: input-varnish
+tags: [Varnish, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Varnish Input Plugin
+
+This plugin gathers stats from the [Varnish HTTP Cache](https://varnish-cache.org/).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# A plugin to collect stats from Varnish HTTP Cache
+# This plugin ONLY supports non-Windows
+[[inputs.varnish]]
+  ## If running as a restricted user you can prepend sudo for additional access:
+  #use_sudo = false
+
+  ## The default location of the varnishstat binary can be overridden with:
+  binary = "/usr/bin/varnishstat"
+
+  ## Additional custom arguments for the varnishstat command
+  # binary_args = ["-f", "MAIN.*"]
+
+  ## The default location of the varnishadm binary can be overridden with:
+  adm_binary = "/usr/bin/varnishadm"
+
+  ## Custom arguments for the varnishadm command
+  # adm_binary_args = [""]
+
+  ## Metric version defaults to metric_version=1, use metric_version=2 for removal of nonactive vcls
+  ## Varnish 6.0.2 and newer is required for metric_version=2.
+  metric_version = 1
+
+  ## Additional regexps to override builtin conversion of varnish metrics into telegraf metrics.
+  ## Regexp group "_vcl" is used for extracting the VCL name. Metrics that contain nonactive VCL's are skipped.
+  ## Regexp group "_field" overrides the field name. Other named regexp groups are used as tags.
+  # regexps = ['^XCNT\.(?P<_vcl>[\w\-]*)(\.)*(?P<group>[\w\-.+]*)\.(?P<_field>[\w\-.+]*)\.val']
+
+  ## By default, telegraf gathers stats for 3 metric points.
+  ## Setting stats will override the defaults shown below.
+  ## Glob matching can be used, ie, stats = ["MAIN.*"]
+  ## stats may also be set to ["*"], which will collect all stats
+  stats = ["MAIN.cache_hit", "MAIN.cache_miss", "MAIN.uptime"]
+
+  ## Optional name for the varnish instance (or working directory) to query
+  ## Usually append after -n in varnish cli
+  # instance_name = instanceName
+
+  ## Timeout for varnishstat command
+  # timeout = "1s"
+```
+
+## Metrics
+
+### metric_version=1
+
+This is the full list of stats provided by Varnish. Stats are grouped by
+their capitalized prefix (e.g. MAIN, MEMPOOL). In the output, the prefix is
+used as a tag and removed from field names.
+
+- varnish
+  - MAIN.uptime                                    (uint64, count, Child process uptime)
+  - MAIN.sess_conn                                 (uint64, count, Sessions accepted)
+  - MAIN.sess_drop                                 (uint64, count, Sessions dropped)
+  - MAIN.sess_fail                                 (uint64, count, Session accept failures)
+  - MAIN.sess_pipe_overflow                        (uint64, count, Session pipe overflow)
+  - MAIN.client_req_400                            (uint64, count, Client requests received,)
+  - MAIN.client_req_411                            (uint64, count, Client requests received,)
+  - MAIN.client_req_413                            (uint64, count, Client requests received,)
+  - MAIN.client_req_417                            (uint64, count, Client requests received,)
+  - MAIN.client_req                                (uint64, count, Good client requests)
+  - MAIN.cache_hit                                 (uint64, count, Cache hits)
+  - MAIN.cache_hitpass                             (uint64, count, Cache hits for)
+  - MAIN.cache_miss                                (uint64, count, Cache misses)
+  - MAIN.backend_conn                              (uint64, count, Backend conn. success)
+  - MAIN.backend_unhealthy                         (uint64, count, Backend conn. not)
+  - MAIN.backend_busy                              (uint64, count, Backend conn. too)
+  - MAIN.backend_fail                              (uint64, count, Backend conn. failures)
+  - MAIN.backend_reuse                             (uint64, count, Backend conn. reuses)
+  - MAIN.backend_toolate                           (uint64, count, Backend conn. was)
+  - MAIN.backend_recycle                           (uint64, count, Backend conn. recycles)
+  - MAIN.backend_retry                             (uint64, count, Backend conn. retry)
+  - MAIN.fetch_head                                (uint64, count, Fetch no body)
+  - MAIN.fetch_length                              (uint64, count, Fetch with Length)
+  - MAIN.fetch_chunked                             (uint64, count, Fetch chunked)
+  - MAIN.fetch_eof                                 (uint64, count, Fetch EOF)
+  - MAIN.fetch_bad                                 (uint64, count, Fetch bad T- E)
+  - MAIN.fetch_close                               (uint64, count, Fetch wanted close)
+  - MAIN.fetch_oldhttp                             (uint64, count, Fetch pre HTTP/1.1)
+  - MAIN.fetch_zero                                (uint64, count, Fetch zero len)
+  - MAIN.fetch_1xx                                 (uint64, count, Fetch no body)
+  - MAIN.fetch_204                                 (uint64, count, Fetch no body)
+  - MAIN.fetch_304                                 (uint64, count, Fetch no body)
+  - MAIN.fetch_failed                              (uint64, count, Fetch failed (all)
+  - MAIN.fetch_no_thread                           (uint64, count, Fetch failed (no)
+  - MAIN.pools                                     (uint64, count, Number of thread)
+  - MAIN.threads                                   (uint64, count, Total number of)
+  - MAIN.threads_limited                           (uint64, count, Threads hit max)
+  - MAIN.threads_created                           (uint64, count, Threads created)
+  - MAIN.threads_destroyed                         (uint64, count, Threads destroyed)
+  - MAIN.threads_failed                            (uint64, count, Thread creation failed)
+  - MAIN.thread_queue_len                          (uint64, count, Length of session)
+  - MAIN.busy_sleep                                (uint64, count, Number of requests)
+  - MAIN.busy_wakeup                               (uint64, count, Number of requests)
+  - MAIN.sess_queued                               (uint64, count, Sessions queued for)
+  - MAIN.sess_dropped                              (uint64, count, Sessions dropped for)
+  - MAIN.n_object                                  (uint64, count, object structs made)
+  - MAIN.n_vampireobject                           (uint64, count, unresurrected objects)
+  - MAIN.n_objectcore                              (uint64, count, objectcore structs made)
+  - MAIN.n_objecthead                              (uint64, count, objecthead structs made)
+  - MAIN.n_waitinglist                             (uint64, count, waitinglist structs made)
+  - MAIN.n_backend                                 (uint64, count, Number of backends)
+  - MAIN.n_expired                                 (uint64, count, Number of expired)
+  - MAIN.n_lru_nuked                               (uint64, count, Number of LRU)
+  - MAIN.n_lru_moved                               (uint64, count, Number of LRU)
+  - MAIN.losthdr                                   (uint64, count, HTTP header overflows)
+  - MAIN.s_sess                                    (uint64, count, Total sessions seen)
+  - MAIN.s_req                                     (uint64, count, Total requests seen)
+  - MAIN.s_pipe                                    (uint64, count, Total pipe sessions)
+  - MAIN.s_pass                                    (uint64, count, Total pass- ed requests)
+  - MAIN.s_fetch                                   (uint64, count, Total backend fetches)
+  - MAIN.s_synth                                   (uint64, count, Total synthetic responses)
+  - MAIN.s_req_hdrbytes                            (uint64, count, Request header bytes)
+  - MAIN.s_req_bodybytes                           (uint64, count, Request body bytes)
+  - MAIN.s_resp_hdrbytes                           (uint64, count, Response header bytes)
+  - MAIN.s_resp_bodybytes                          (uint64, count, Response body bytes)
+  - MAIN.s_pipe_hdrbytes                           (uint64, count, Pipe request header)
+  - MAIN.s_pipe_in                                 (uint64, count, Piped bytes from)
+  - MAIN.s_pipe_out                                (uint64, count, Piped bytes to)
+  - MAIN.sess_closed                               (uint64, count, Session Closed)
+  - MAIN.sess_pipeline                             (uint64, count, Session Pipeline)
+  - MAIN.sess_readahead                            (uint64, count, Session Read Ahead)
+  - MAIN.sess_herd                                 (uint64, count, Session herd)
+  - MAIN.shm_records                               (uint64, count, SHM records)
+  - MAIN.shm_writes                                (uint64, count, SHM writes)
+  - MAIN.shm_flushes                               (uint64, count, SHM flushes due)
+  - MAIN.shm_cont                                  (uint64, count, SHM MTX contention)
+  - MAIN.shm_cycles                                (uint64, count, SHM cycles through)
+  - MAIN.sms_nreq                                  (uint64, count, SMS allocator requests)
+  - MAIN.sms_nobj                                  (uint64, count, SMS outstanding allocations)
+  - MAIN.sms_nbytes                                (uint64, count, SMS outstanding bytes)
+  - MAIN.sms_balloc                                (uint64, count, SMS bytes allocated)
+  - MAIN.sms_bfree                                 (uint64, count, SMS bytes freed)
+  - MAIN.backend_req                               (uint64, count, Backend requests made)
+  - MAIN.n_vcl                                     (uint64, count, Number of loaded)
+  - MAIN.n_vcl_avail                               (uint64, count, Number of VCLs)
+  - MAIN.n_vcl_discard                             (uint64, count, Number of discarded)
+  - MAIN.bans                                      (uint64, count, Count of bans)
+  - MAIN.bans_completed                            (uint64, count, Number of bans)
+  - MAIN.bans_obj                                  (uint64, count, Number of bans)
+  - MAIN.bans_req                                  (uint64, count, Number of bans)
+  - MAIN.bans_added                                (uint64, count, Bans added)
+  - MAIN.bans_deleted                              (uint64, count, Bans deleted)
+  - MAIN.bans_tested                               (uint64, count, Bans tested against)
+  - MAIN.bans_obj_killed                           (uint64, count, Objects killed by)
+  - MAIN.bans_lurker_tested                        (uint64, count, Bans tested against)
+  - MAIN.bans_tests_tested                         (uint64, count, Ban tests tested)
+  - MAIN.bans_lurker_tests_tested                  (uint64, count, Ban tests tested)
+  - MAIN.bans_lurker_obj_killed                    (uint64, count, Objects killed by)
+  - MAIN.bans_dups                                 (uint64, count, Bans superseded by)
+  - MAIN.bans_lurker_contention                    (uint64, count, Lurker gave way)
+  - MAIN.bans_persisted_bytes                      (uint64, count, Bytes used by)
+  - MAIN.bans_persisted_fragmentation              (uint64, count, Extra bytes in)
+  - MAIN.n_purges                                  (uint64, count, Number of purge)
+  - MAIN.n_obj_purged                              (uint64, count, Number of purged)
+  - MAIN.exp_mailed                                (uint64, count, Number of objects)
+  - MAIN.exp_received                              (uint64, count, Number of objects)
+  - MAIN.hcb_nolock                                (uint64, count, HCB Lookups without)
+  - MAIN.hcb_lock                                  (uint64, count, HCB Lookups with)
+  - MAIN.hcb_insert                                (uint64, count, HCB Inserts)
+  - MAIN.esi_errors                                (uint64, count, ESI parse errors)
+  - MAIN.esi_warnings                              (uint64, count, ESI parse warnings)
+  - MAIN.vmods                                     (uint64, count, Loaded VMODs)
+  - MAIN.n_gzip                                    (uint64, count, Gzip operations)
+  - MAIN.n_gunzip                                  (uint64, count, Gunzip operations)
+  - MAIN.vsm_free                                  (uint64, count, Free VSM space)
+  - MAIN.vsm_used                                  (uint64, count, Used VSM space)
+  - MAIN.vsm_cooling                               (uint64, count, Cooling VSM space)
+  - MAIN.vsm_overflow                              (uint64, count, Overflow VSM space)
+  - MAIN.vsm_overflowed                            (uint64, count, Overflowed VSM space)
+  - MGT.uptime                                     (uint64, count, Management process uptime)
+  - MGT.child_start                                (uint64, count, Child process started)
+  - MGT.child_exit                                 (uint64, count, Child process normal)
+  - MGT.child_stop                                 (uint64, count, Child process unexpected)
+  - MGT.child_died                                 (uint64, count, Child process died)
+  - MGT.child_dump                                 (uint64, count, Child process core)
+  - MGT.child_panic                                (uint64, count, Child process panic)
+  - MEMPOOL.vbc.live                               (uint64, count, In use)
+  - MEMPOOL.vbc.pool                               (uint64, count, In Pool)
+  - MEMPOOL.vbc.sz_wanted                          (uint64, count, Size requested)
+  - MEMPOOL.vbc.sz_needed                          (uint64, count, Size allocated)
+  - MEMPOOL.vbc.allocs                             (uint64, count, Allocations )
+  - MEMPOOL.vbc.frees                              (uint64, count, Frees )
+  - MEMPOOL.vbc.recycle                            (uint64, count, Recycled from pool)
+  - MEMPOOL.vbc.timeout                            (uint64, count, Timed out from)
+  - MEMPOOL.vbc.toosmall                           (uint64, count, Too small to)
+  - MEMPOOL.vbc.surplus                            (uint64, count, Too many for)
+  - MEMPOOL.vbc.randry                             (uint64, count, Pool ran dry)
+  - MEMPOOL.busyobj.live                           (uint64, count, In use)
+  - MEMPOOL.busyobj.pool                           (uint64, count, In Pool)
+  - MEMPOOL.busyobj.sz_wanted                      (uint64, count, Size requested)
+  - MEMPOOL.busyobj.sz_needed                      (uint64, count, Size allocated)
+  - MEMPOOL.busyobj.allocs                         (uint64, count, Allocations )
+  - MEMPOOL.busyobj.frees                          (uint64, count, Frees )
+  - MEMPOOL.busyobj.recycle                        (uint64, count, Recycled from pool)
+  - MEMPOOL.busyobj.timeout                        (uint64, count, Timed out from)
+  - MEMPOOL.busyobj.toosmall                       (uint64, count, Too small to)
+  - MEMPOOL.busyobj.surplus                        (uint64, count, Too many for)
+  - MEMPOOL.busyobj.randry                         (uint64, count, Pool ran dry)
+  - MEMPOOL.req0.live                              (uint64, count, In use)
+  - MEMPOOL.req0.pool                              (uint64, count, In Pool)
+  - MEMPOOL.req0.sz_wanted                         (uint64, count, Size requested)
+  - MEMPOOL.req0.sz_needed                         (uint64, count, Size allocated)
+  - MEMPOOL.req0.allocs                            (uint64, count, Allocations )
+  - MEMPOOL.req0.frees                             (uint64, count, Frees )
+  - MEMPOOL.req0.recycle                           (uint64, count, Recycled from pool)
+  - MEMPOOL.req0.timeout                           (uint64, count, Timed out from)
+  - MEMPOOL.req0.toosmall                          (uint64, count, Too small to)
+  - MEMPOOL.req0.surplus                           (uint64, count, Too many for)
+  - MEMPOOL.req0.randry                            (uint64, count, Pool ran dry)
+  - MEMPOOL.sess0.live                             (uint64, count, In use)
+  - MEMPOOL.sess0.pool                             (uint64, count, In Pool)
+  - MEMPOOL.sess0.sz_wanted                        (uint64, count, Size requested)
+  - MEMPOOL.sess0.sz_needed                        (uint64, count, Size allocated)
+  - MEMPOOL.sess0.allocs                           (uint64, count, Allocations )
+  - MEMPOOL.sess0.frees                            (uint64, count, Frees )
+  - MEMPOOL.sess0.recycle                          (uint64, count, Recycled from pool)
+  - MEMPOOL.sess0.timeout                          (uint64, count, Timed out from)
+  - MEMPOOL.sess0.toosmall                         (uint64, count, Too small to)
+  - MEMPOOL.sess0.surplus                          (uint64, count, Too many for)
+  - MEMPOOL.sess0.randry                           (uint64, count, Pool ran dry)
+  - MEMPOOL.req1.live                              (uint64, count, In use)
+  - MEMPOOL.req1.pool                              (uint64, count, In Pool)
+  - MEMPOOL.req1.sz_wanted                         (uint64, count, Size requested)
+  - MEMPOOL.req1.sz_needed                         (uint64, count, Size allocated)
+  - MEMPOOL.req1.allocs                            (uint64, count, Allocations )
+  - MEMPOOL.req1.frees                             (uint64, count, Frees )
+  - MEMPOOL.req1.recycle                           (uint64, count, Recycled from pool)
+  - MEMPOOL.req1.timeout                           (uint64, count, Timed out from)
+  - MEMPOOL.req1.toosmall                          (uint64, count, Too small to)
+  - MEMPOOL.req1.surplus                           (uint64, count, Too many for)
+  - MEMPOOL.req1.randry                            (uint64, count, Pool ran dry)
+  - MEMPOOL.sess1.live                             (uint64, count, In use)
+  - MEMPOOL.sess1.pool                             (uint64, count, In Pool)
+  - MEMPOOL.sess1.sz_wanted                        (uint64, count, Size requested)
+  - MEMPOOL.sess1.sz_needed                        (uint64, count, Size allocated)
+  - MEMPOOL.sess1.allocs                           (uint64, count, Allocations )
+  - MEMPOOL.sess1.frees                            (uint64, count, Frees )
+  - MEMPOOL.sess1.recycle                          (uint64, count, Recycled from pool)
+  - MEMPOOL.sess1.timeout                          (uint64, count, Timed out from)
+  - MEMPOOL.sess1.toosmall                         (uint64, count, Too small to)
+  - MEMPOOL.sess1.surplus                          (uint64, count, Too many for)
+  - MEMPOOL.sess1.randry                           (uint64, count, Pool ran dry)
+  - SMA.s0.c_req                                   (uint64, count, Allocator requests)
+  - SMA.s0.c_fail                                  (uint64, count, Allocator failures)
+  - SMA.s0.c_bytes                                 (uint64, count, Bytes allocated)
+  - SMA.s0.c_freed                                 (uint64, count, Bytes freed)
+  - SMA.s0.g_alloc                                 (uint64, count, Allocations outstanding)
+  - SMA.s0.g_bytes                                 (uint64, count, Bytes outstanding)
+  - SMA.s0.g_space                                 (uint64, count, Bytes available)
+  - SMA.Transient.c_req                            (uint64, count, Allocator requests)
+  - SMA.Transient.c_fail                           (uint64, count, Allocator failures)
+  - SMA.Transient.c_bytes                          (uint64, count, Bytes allocated)
+  - SMA.Transient.c_freed                          (uint64, count, Bytes freed)
+  - SMA.Transient.g_alloc                          (uint64, count, Allocations outstanding)
+  - SMA.Transient.g_bytes                          (uint64, count, Bytes outstanding)
+  - SMA.Transient.g_space                          (uint64, count, Bytes available)
+  - VBE.default(127.0.0.1,,8080).vcls              (uint64, count, VCL references)
+  - VBE.default(127.0.0.1,,8080).happy             (uint64, count, Happy health probes)
+  - VBE.default(127.0.0.1,,8080).bereq_hdrbytes    (uint64, count, Request header bytes)
+  - VBE.default(127.0.0.1,,8080).bereq_bodybytes   (uint64, count, Request body bytes)
+  - VBE.default(127.0.0.1,,8080).beresp_hdrbytes   (uint64, count, Response header bytes)
+  - VBE.default(127.0.0.1,,8080).beresp_bodybytes  (uint64, count, Response body bytes)
+  - VBE.default(127.0.0.1,,8080).pipe_hdrbytes     (uint64, count, Pipe request header)
+  - VBE.default(127.0.0.1,,8080).pipe_out          (uint64, count, Piped bytes to)
+  - VBE.default(127.0.0.1,,8080).pipe_in           (uint64, count, Piped bytes from)
+  - LCK.sms.creat                                  (uint64, count, Created locks)
+  - LCK.sms.destroy                                (uint64, count, Destroyed locks)
+  - LCK.sms.locks                                  (uint64, count, Lock Operations)
+  - LCK.smp.creat                                  (uint64, count, Created locks)
+  - LCK.smp.destroy                                (uint64, count, Destroyed locks)
+  - LCK.smp.locks                                  (uint64, count, Lock Operations)
+  - LCK.sma.creat                                  (uint64, count, Created locks)
+  - LCK.sma.destroy                                (uint64, count, Destroyed locks)
+  - LCK.sma.locks                                  (uint64, count, Lock Operations)
+  - LCK.smf.creat                                  (uint64, count, Created locks)
+  - LCK.smf.destroy                                (uint64, count, Destroyed locks)
+  - LCK.smf.locks                                  (uint64, count, Lock Operations)
+  - LCK.hsl.creat                                  (uint64, count, Created locks)
+  - LCK.hsl.destroy                                (uint64, count, Destroyed locks)
+  - LCK.hsl.locks                                  (uint64, count, Lock Operations)
+  - LCK.hcb.creat                                  (uint64, count, Created locks)
+  - LCK.hcb.destroy                                (uint64, count, Destroyed locks)
+  - LCK.hcb.locks                                  (uint64, count, Lock Operations)
+  - LCK.hcl.creat                                  (uint64, count, Created locks)
+  - LCK.hcl.destroy                                (uint64, count, Destroyed locks)
+  - LCK.hcl.locks                                  (uint64, count, Lock Operations)
+  - LCK.vcl.creat                                  (uint64, count, Created locks)
+  - LCK.vcl.destroy                                (uint64, count, Destroyed locks)
+  - LCK.vcl.locks                                  (uint64, count, Lock Operations)
+  - LCK.sessmem.creat                              (uint64, count, Created locks)
+  - LCK.sessmem.destroy                            (uint64, count, Destroyed locks)
+  - LCK.sessmem.locks                              (uint64, count, Lock Operations)
+  - LCK.sess.creat                                 (uint64, count, Created locks)
+  - LCK.sess.destroy                               (uint64, count, Destroyed locks)
+  - LCK.sess.locks                                 (uint64, count, Lock Operations)
+  - LCK.wstat.creat                                (uint64, count, Created locks)
+  - LCK.wstat.destroy                              (uint64, count, Destroyed locks)
+  - LCK.wstat.locks                                (uint64, count, Lock Operations)
+  - LCK.herder.creat                               (uint64, count, Created locks)
+  - LCK.herder.destroy                             (uint64, count, Destroyed locks)
+  - LCK.herder.locks                               (uint64, count, Lock Operations)
+  - LCK.wq.creat                                   (uint64, count, Created locks)
+  - LCK.wq.destroy                                 (uint64, count, Destroyed locks)
+  - LCK.wq.locks                                   (uint64, count, Lock Operations)
+  - LCK.objhdr.creat                               (uint64, count, Created locks)
+  - LCK.objhdr.destroy                             (uint64, count, Destroyed locks)
+  - LCK.objhdr.locks                               (uint64, count, Lock Operations)
+  - LCK.exp.creat                                  (uint64, count, Created locks)
+  - LCK.exp.destroy                                (uint64, count, Destroyed locks)
+  - LCK.exp.locks                                  (uint64, count, Lock Operations)
+  - LCK.lru.creat                                  (uint64, count, Created locks)
+  - LCK.lru.destroy                                (uint64, count, Destroyed locks)
+  - LCK.lru.locks                                  (uint64, count, Lock Operations)
+  - LCK.cli.creat                                  (uint64, count, Created locks)
+  - LCK.cli.destroy                                (uint64, count, Destroyed locks)
+  - LCK.cli.locks                                  (uint64, count, Lock Operations)
+  - LCK.ban.creat                                  (uint64, count, Created locks)
+  - LCK.ban.destroy                                (uint64, count, Destroyed locks)
+  - LCK.ban.locks                                  (uint64, count, Lock Operations)
+  - LCK.vbp.creat                                  (uint64, count, Created locks)
+  - LCK.vbp.destroy                                (uint64, count, Destroyed locks)
+  - LCK.vbp.locks                                  (uint64, count, Lock Operations)
+  - LCK.backend.creat                              (uint64, count, Created locks)
+  - LCK.backend.destroy                            (uint64, count, Destroyed locks)
+  - LCK.backend.locks                              (uint64, count, Lock Operations)
+  - LCK.vcapace.creat                              (uint64, count, Created locks)
+  - LCK.vcapace.destroy                            (uint64, count, Destroyed locks)
+  - LCK.vcapace.locks                              (uint64, count, Lock Operations)
+  - LCK.nbusyobj.creat                             (uint64, count, Created locks)
+  - LCK.nbusyobj.destroy                           (uint64, count, Destroyed locks)
+  - LCK.nbusyobj.locks                             (uint64, count, Lock Operations)
+  - LCK.busyobj.creat                              (uint64, count, Created locks)
+  - LCK.busyobj.destroy                            (uint64, count, Destroyed locks)
+  - LCK.busyobj.locks                              (uint64, count, Lock Operations)
+  - LCK.mempool.creat                              (uint64, count, Created locks)
+  - LCK.mempool.destroy                            (uint64, count, Destroyed locks)
+  - LCK.mempool.locks                              (uint64, count, Lock Operations)
+  - LCK.vxid.creat                                 (uint64, count, Created locks)
+  - LCK.vxid.destroy                               (uint64, count, Destroyed locks)
+  - LCK.vxid.locks                                 (uint64, count, Lock Operations)
+  - LCK.pipestat.creat                             (uint64, count, Created locks)
+  - LCK.pipestat.destroy                           (uint64, count, Destroyed locks)
+  - LCK.pipestat.locks                             (uint64, count, Lock Operations)
+
+### Tags
+
+As indicated above, the prefix of a varnish stat is used as its `section`
+tag, so the section tag may have one of the following values:
+
+- section:
+  - MAIN
+  - MGT
+  - MEMPOOL
+  - SMA
+  - VBE
+  - LCK
+
+### metric_version=2
+
+When `metric_version=2` is enabled, the plugin runs the `varnishstat -j`
+command and parses the JSON output into metrics.
+
+The plugin uses the `varnishadm vcl.list -j` command to find the active VCL.
+Metrics related to nonactive VCLs are excluded from monitoring.
+
+## Requirements
+
+- Varnish 6.0.2+ is required (older versions do not support JSON output from CLI tools)
+
+## Examples
+
+Varnish counter:
+
+```json
+{
+  "MAIN.cache_hit": {
+    "description": "Cache hits",
+    "flag": "c",
+    "format": "i",
+    "value": 51
+  }
+}
+```
+
+Influx metric:
+`varnish,section=MAIN cache_hit=51i 1462765437090957980`
+
+## Advanced customizations using regexps
+
+Finding the VCL in a varnish measurement and parsing it into tags can be
+adjusted using Go regular expressions.
+
+Regexps use the special named group `(?P<_vcl>[\w\-]*)(\.)` to extract the VCL
+name. The `(?P<_field>[\w\-.+]*)\.val` regexp group extracts the field name.
+All other named regexp groups, such as `(?P<my_tag>[\w\-.+]*)`, are used as
+tags.
+
+_Tip: It is useful to verify regexps using online tools like
+<https://regoio.herokuapp.com/>._
+
+By default, the plugin has a builtin list of regexps for the following VMODs:
+
+- Dynamic Backends (goto)
+  - regexp: `^VBE\.(?P<_vcl>[\w\-]*)\.goto\.[[:alnum:]]+\.\((?P<backend>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\)\.\((?P<server>.*)\)\.\(ttl:\d*\.\d*.*\)`
+    - `VBE.VCL12323.goto.000007c8.(123.123.123.123).(http://aaa.xxcc:80).(ttl:3600.000000).cache_hit` -> `varnish,section=VBE,backend="123.123.123.123",server="http://aaa.xxcc:80" cache_hit=51i 1462765437090957980`
+
+- Key value storage (kvstore)
+  - regexp `^KVSTORE\.(?P<id>[\w\-]*)\.(?P<_vcl>[\w\-]*)\.([\w\-]*)`
+    - `KVSTORE.object_name.vcl_name.key` -> `varnish,section=KVSTORE,id=object_name key=5i`
+- XCNT (libvmod-xcounter)
+  - regexp `^XCNT\.(?P<_vcl>[\w\-]*)(\.)*(?P<group>[\w\-.+]*)\.(?P<_field>[\w\-.+]*)\.val`
+    - `XCNT.abc1234.XXX+_YYYY.cr.pass.val` -> `varnish,section=XCNT,group="XXX+_YYYY.cr" pass=5i`
+
+- standard VBE metrics
+  - regexp `^VBE\.(?P<_vcl>[\w\-]*)\.(?P<backend>[\w\-]*)\.([\w\-]*)`
+    - `VBE.reload_20210622_153544_23757.default.unhealthy` -> `varnish,section=VBE,backend="default" unhealthy=51i 1462765437090957980`
+- default generic metric
+  - regexp `([\w\-]*)\.(?P<_field>[\w\-.]*)`
+    - `MSE_STORE.store-1-1.g_aio_running_bytes_write` -> `varnish,section=MSE_STORE store-1-1.g_aio_running_bytes_write=5i`
+
+The default regexps list can be extended in the Telegraf config. The following
+example shows a config with a custom regexp for parsing `accounting` VMOD
+metrics in the `ACCG.<namespace>.<key>.<stat_name>` format. The namespace value
+is used as a tag.
+
+```toml
+[[inputs.varnish]]
+    regexps = ['^ACCG.(?P<namespace>[\w-]*).(?P<_field>[\w-.]*)']
+```
+
+## Custom arguments
+
+You can change the default binary location and pass custom arguments to the
+`varnishstat` and `varnishadm` commands. This is useful when running Varnish
+in Docker or executing it over SSH on a different machine.
+
+Note that the `instance_name` parameter is not taken into account when custom
+`binary_args` or `adm_binary_args` are used. You have to add `"-n",
+"/instance_name"` to the configuration manually.
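+
+For example, assuming an instance living under the hypothetical path
+`/var/lib/varnish/myinstance`, the instance has to be passed explicitly:
+
+```toml
+[[inputs.varnish]]
+  ## instance_name is ignored with custom args, so pass -n manually
+  binary_args = ["-n", "/var/lib/varnish/myinstance", "-j"]
+  adm_binary_args = ["-n", "/var/lib/varnish/myinstance", "vcl.list", "-j"]
+  metric_version = 2
+```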
+
+### Example for SSH
+
+```toml
+[[inputs.varnish]]
+  binary = "/usr/bin/ssh"
+  binary_args = ["root@10.100.0.112", "varnishstat", "-n", "/var/lib/varnish/ubuntu", "-j"]
+  adm_binary   =  "/usr/bin/ssh"
+  adm_binary_args = ["root@10.100.0.112", "varnishadm", "-n", "/var/lib/varnish/ubuntu", "vcl.list", "-j"]
+  metric_version = 2
+  stats = ["*"]
+```
+
+### Example for Docker
+
+```toml
+[[inputs.varnish]]
+  binary = "/usr/local/bin/docker"
+  binary_args = ["exec", "-t", "container_name", "varnishstat",  "-j"]
+  adm_binary   =  "/usr/local/bin/docker"
+  adm_binary_args =  ["exec", "-t", "container_name", "varnishadm", "vcl.list", "-j"]
+  metric_version = 2
+  stats = ["*"]
+```
+
+## Permissions
+
+It's important to note that this plugin references `varnishstat` and
+`varnishadm`, which may require additional permissions to execute successfully.
+Depending on the user/group permissions of the telegraf user executing this
+plugin, you may need to alter the group membership, set facls, or use sudo.
+
+### Group membership (Recommended)
+
+```bash
+$ groups telegraf
+telegraf : telegraf
+
+$ usermod -a -G varnish telegraf
+
+$ groups telegraf
+telegraf : telegraf varnish
+```
+
+### Extended filesystem ACLs
+
+```bash
+$ getfacl /var/lib/varnish/<hostname>/_.vsm
+# file: var/lib/varnish/<hostname>/_.vsm
+# owner: root
+# group: root
+user::rw-
+group::r--
+other::---
+
+$ setfacl -m u:telegraf:r /var/lib/varnish/<hostname>/_.vsm
+
+$ getfacl /var/lib/varnish/<hostname>/_.vsm
+# file: var/lib/varnish/<hostname>/_.vsm
+# owner: root
+# group: root
+user::rw-
+user:telegraf:r--
+group::r--
+mask::r--
+other::---
+```
+
+**Sudo privileges**:
+If you use this method, you will need the following in your telegraf config:
+
+```toml
+[[inputs.varnish]]
+  use_sudo = true
+```
+
+You will also need to update your sudoers file:
+
+```bash
+$ visudo
+# Add the following line:
+Cmnd_Alias VARNISHSTAT = /usr/bin/varnishstat
+telegraf  ALL=(ALL) NOPASSWD: VARNISHSTAT
+Defaults!VARNISHSTAT !logfile, !syslog, !pam_session
+```
+
+Please use the solution you see as most appropriate.
+
+## Example Output
+
+### metric_version = 1
+
+```bash
+telegraf --config etc/telegraf.conf --input-filter varnish --test
+* Plugin: varnish, Collection 1
+> varnish,host=rpercy-VirtualBox,section=MAIN cache_hit=0i,cache_miss=0i,uptime=8416i 1462765437090957980
+```
+
+### metric_version = 2
+
+```bash
+telegraf --config etc/telegraf.conf --input-filter varnish --test
+> varnish,host=kozel.local,section=MAIN n_vampireobject=0i 1631121567000000000
+> varnish,backend=server_test1,host=kozel.local,section=VBE fail_eacces=0i 1631121567000000000
+> varnish,backend=default,host=kozel.local,section=VBE req=0i 1631121567000000000
+> varnish,host=kozel.local,section=MAIN client_req_400=0i 1631121567000000000
+> varnish,host=kozel.local,section=MAIN shm_cycles=10i 1631121567000000000
+> varnish,backend=default,host=kozel.local,section=VBE pipe_hdrbytes=0i 1631121567000000000
+```
+
+You can merge metrics together into a single metric with multiple fields,
+which is the most memory- and network-transfer-efficient form, using
+`aggregators.merge`:
+
+```toml
+[[aggregators.merge]]
+  drop_original = true
+```
+
+The output will be:
+
+```shell
+telegraf --config etc/telegraf.conf --input-filter varnish --test
+```
+
+```text
+varnish,host=kozel.local,section=MAIN backend_busy=0i,backend_conn=19i,backend_fail=0i,backend_recycle=8i,backend_req=19i,backend_retry=0i,backend_reuse=0i,backend_unhealthy=0i,bans=1i,bans_added=1i,bans_completed=1i,bans_deleted=0i,bans_dups=0i,bans_lurker_contention=0i,bans_lurker_obj_killed=0i,bans_lurker_obj_killed_cutoff=0i,bans_lurker_tested=0i,bans_lurker_tests_tested=0i,bans_obj=0i,bans_obj_killed=0i,bans_persisted_bytes=16i,bans_persisted_fragmentation=0i,bans_req=0i,bans_tested=0i,bans_tests_tested=0i,busy_killed=0i,busy_sleep=0i,busy_wakeup=0i,cache_hit=643999i,cache_hit_grace=22i,cache_hitmiss=0i,cache_hitpass=0i,cache_miss=1i,client_req=644000i,client_req_400=0i,client_req_417=0i,client_resp_500=0i,esi_errors=0i,esi_warnings=0i,exp_mailed=37i,exp_received=37i,fetch_1xx=0i,fetch_204=0i,fetch_304=2i,fetch_bad=0i,fetch_chunked=6i,fetch_eof=0i,fetch_failed=0i,fetch_head=0i,fetch_length=11i,fetch_no_thread=0i,fetch_none=0i,hcb_insert=1i,hcb_lock=1i,hcb_nolock=644000i,losthdr=0i,n_backend=19i,n_expired=1i,n_gunzip=289204i,n_gzip=0i,n_lru_limited=0i,n_lru_moved=843i,n_lru_nuked=0i,n_obj_purged=0i,n_object=0i,n_objectcore=40i,n_objecthead=40i,n_purges=0i,n_test_gunzip=6i,n_vampireobject=0i,n_vcl=7i,n_vcl_avail=7i,n_vcl_discard=0i,pools=2i,req_dropped=0i,s_fetch=1i,s_pass=0i,s_pipe=0i,s_pipe_hdrbytes=0i,s_pipe_in=0i,s_pipe_out=0i,s_req_bodybytes=0i,s_req_hdrbytes=54740000i,s_resp_bodybytes=341618192i,s_resp_hdrbytes=190035576i,s_sess=651038i,s_synth=0i,sc_overload=0i,sc_pipe_overflow=0i,sc_range_short=0i,sc_rem_close=7038i,sc_req_close=0i,sc_req_http10=644000i,sc_req_http20=0i,sc_resp_close=0i,sc_rx_bad=0i,sc_rx_body=0i,sc_rx_junk=0i,sc_rx_overflow=0i,sc_rx_timeout=0i,sc_tx_eof=0i,sc_tx_error=0i,sc_tx_pipe=0i,sc_vcl_failure=0i,sess_closed=644000i,sess_closed_err=644000i,sess_conn=651038i,sess_drop=0i,sess_dropped=0i,sess_fail=0i,sess_fail_ebadf=0i,sess_fail_econnaborted=0i,sess_fail_eintr=0i,sess_fail_emfile=0i,sess_fail_enomem=0i,sess_fail_other=0i,sess_herd=1
1i,sess_queued=0i,sess_readahead=0i,shm_cont=3572i,shm_cycles=10i,shm_flushes=0i,shm_records=30727866i,shm_writes=4661979i,summs=2225754i,thread_queue_len=0i,threads=200i,threads_created=200i,threads_destroyed=0i,threads_failed=0i,threads_limited=0i,uptime=4416326i,vcl_fail=0i,vmods=2i,ws_backend_overflow=0i,ws_client_overflow=0i,ws_session_overflow=0i,ws_thread_overflow=0i 1631121675000000000
+varnish,backend=default,host=kozel.local,section=VBE bereq_bodybytes=0i,bereq_hdrbytes=0i,beresp_bodybytes=0i,beresp_hdrbytes=0i,busy=0i,conn=0i,fail=0i,fail_eacces=0i,fail_eaddrnotavail=0i,fail_econnrefused=0i,fail_enetunreach=0i,fail_etimedout=0i,fail_other=0i,happy=9223372036854775807i,helddown=0i,pipe_hdrbytes=0i,pipe_in=0i,pipe_out=0i,req=0i,unhealthy=0i 1631121675000000000
+varnish,backend=server1,host=kozel.local,section=VBE bereq_bodybytes=0i,bereq_hdrbytes=0i,beresp_bodybytes=0i,beresp_hdrbytes=0i,busy=0i,conn=0i,fail=0i,fail_eacces=0i,fail_eaddrnotavail=0i,fail_econnrefused=30609i,fail_enetunreach=0i,fail_etimedout=0i,fail_other=0i,happy=0i,helddown=3i,pipe_hdrbytes=0i,pipe_in=0i,pipe_out=0i,req=0i,unhealthy=0i 1631121675000000000
+varnish,backend=server2,host=kozel.local,section=VBE bereq_bodybytes=0i,bereq_hdrbytes=0i,beresp_bodybytes=0i,beresp_hdrbytes=0i,busy=0i,conn=0i,fail=0i,fail_eacces=0i,fail_eaddrnotavail=0i,fail_econnrefused=30609i,fail_enetunreach=0i,fail_etimedout=0i,fail_other=0i,happy=0i,helddown=3i,pipe_hdrbytes=0i,pipe_in=0i,pipe_out=0i,req=0i,unhealthy=0i 1631121675000000000
+varnish,backend=server_test1,host=kozel.local,section=VBE bereq_bodybytes=0i,bereq_hdrbytes=0i,beresp_bodybytes=0i,beresp_hdrbytes=0i,busy=0i,conn=0i,fail=0i,fail_eacces=0i,fail_eaddrnotavail=0i,fail_econnrefused=49345i,fail_enetunreach=0i,fail_etimedout=0i,fail_other=0i,happy=0i,helddown=2i,pipe_hdrbytes=0i,pipe_in=0i,pipe_out=0i,req=0i,unhealthy=0i 1631121675000000000
+```
diff --git a/content/telegraf/v1/input-plugins/vault/_index.md b/content/telegraf/v1/input-plugins/vault/_index.md
new file mode 100644
index 000000000..ebcc32446
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/vault/_index.md
@@ -0,0 +1,62 @@
+---
+description: "Telegraf plugin for collecting metrics from Hashicorp Vault"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Hashicorp Vault
+    identifier: input-vault
+tags: [Hashicorp Vault, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Hashicorp Vault Input Plugin
+
+The Vault plugin gathers metrics from every Vault agent in the cluster.
+Telegraf may run on every node and connect to the agent locally; in that
+case, the URL should be something like `http://127.0.0.1:8200`.
+
+> Tested on vault 1.8.5
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from the Vault API
+[[inputs.vault]]
+  ## URL for the Vault agent
+  # url = "http://127.0.0.1:8200"
+
+  ## Use Vault token for authorization.
+  ## Vault token configuration is mandatory.
+  ## If both are empty or both are set, an error is thrown.
+  # token_file = "/path/to/auth/token"
+  ## OR
+  token = "s.CDDrgg5zPv5ssI0Z2P4qxJj2"
+
+  ## Set response_timeout (default 5 seconds)
+  # response_timeout = "5s"
+
+  ## Optional TLS Config
+  # tls_ca = /path/to/cafile
+  # tls_cert = /path/to/certfile
+  # tls_key = /path/to/keyfile
+```
+
+## Metrics
+
+For a more deep understanding of Vault monitoring, please have a look at the
+following Vault documentation:
+
+- [https://www.vaultproject.io/docs/internals/telemetry](https://www.vaultproject.io/docs/internals/telemetry)
+- [https://learn.hashicorp.com/tutorials/vault/monitor-telemetry-audit-splunk?in=vault/monitoring](https://learn.hashicorp.com/tutorials/vault/monitor-telemetry-audit-splunk?in=vault/monitoring)
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/vsphere/_index.md b/content/telegraf/v1/input-plugins/vsphere/_index.md
new file mode 100644
index 000000000..e2eef3bf1
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/vsphere/_index.md
@@ -0,0 +1,850 @@
+---
+description: "Telegraf plugin for collecting metrics from VMware vSphere"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: VMware vSphere
+    identifier: input-vsphere
+tags: [VMware vSphere, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# VMware vSphere Input Plugin
+
+The VMware vSphere plugin uses the vSphere API to gather metrics from multiple
+vCenter servers.
+
+* Clusters
+* Hosts
+* Resource Pools
+* VMs
+* Datastores
+* vSAN
+
+## Supported versions of vSphere
+
+This plugin supports vSphere versions 6.5, 6.7, 7.0, and 8.0.
+It may work with versions 5.1, 5.5, and 6.0, but none of these are
+officially supported.
+
+Compatibility information is available from the govmomi project
+[here](https://github.com/vmware/govmomi/tree/v0.26.0#compatibility)
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `username` and
+`password` option.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+[SECRETSTORE]: ../../../docs/CONFIGURATION.md#secret-store-secrets
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics from one or many vCenters
+[[inputs.vsphere]]
+  ## List of vCenter URLs to be monitored. These three lines must be uncommented
+  ## and edited for the plugin to work.
+  vcenters = [ "https://vcenter.local/sdk" ]
+  username = "user@corp.local"
+  password = "secret"
+
+  ## VMs
+  ## Typical VM metrics (if omitted or empty, all metrics are collected)
+  # vm_include = [ "/*/vm/**"] # Inventory path to VMs to collect (by default all are collected)
+  # vm_exclude = [] # Inventory paths to exclude
+  vm_metric_include = [
+    "cpu.demand.average",
+    "cpu.idle.summation",
+    "cpu.latency.average",
+    "cpu.readiness.average",
+    "cpu.ready.summation",
+    "cpu.run.summation",
+    "cpu.usagemhz.average",
+    "cpu.used.summation",
+    "cpu.wait.summation",
+    "mem.active.average",
+    "mem.granted.average",
+    "mem.latency.average",
+    "mem.swapin.average",
+    "mem.swapinRate.average",
+    "mem.swapout.average",
+    "mem.swapoutRate.average",
+    "mem.usage.average",
+    "mem.vmmemctl.average",
+    "net.bytesRx.average",
+    "net.bytesTx.average",
+    "net.droppedRx.summation",
+    "net.droppedTx.summation",
+    "net.usage.average",
+    "power.power.average",
+    "virtualDisk.numberReadAveraged.average",
+    "virtualDisk.numberWriteAveraged.average",
+    "virtualDisk.read.average",
+    "virtualDisk.readOIO.latest",
+    "virtualDisk.throughput.usage.average",
+    "virtualDisk.totalReadLatency.average",
+    "virtualDisk.totalWriteLatency.average",
+    "virtualDisk.write.average",
+    "virtualDisk.writeOIO.latest",
+    "sys.uptime.latest",
+  ]
+  # vm_metric_exclude = [] ## Nothing is excluded by default
+  # vm_instances = true ## true by default
+
+  ## Hosts
+  ## Typical host metrics (if omitted or empty, all metrics are collected)
+  # host_include = [ "/*/host/**"] # Inventory path to hosts to collect (by default all are collected)
+  # host_exclude = [] # Inventory paths to exclude
+  host_metric_include = [
+    "cpu.coreUtilization.average",
+    "cpu.costop.summation",
+    "cpu.demand.average",
+    "cpu.idle.summation",
+    "cpu.latency.average",
+    "cpu.readiness.average",
+    "cpu.ready.summation",
+    "cpu.swapwait.summation",
+    "cpu.usage.average",
+    "cpu.usagemhz.average",
+    "cpu.used.summation",
+    "cpu.utilization.average",
+    "cpu.wait.summation",
+    "disk.deviceReadLatency.average",
+    "disk.deviceWriteLatency.average",
+    "disk.kernelReadLatency.average",
+    "disk.kernelWriteLatency.average",
+    "disk.numberReadAveraged.average",
+    "disk.numberWriteAveraged.average",
+    "disk.read.average",
+    "disk.totalReadLatency.average",
+    "disk.totalWriteLatency.average",
+    "disk.write.average",
+    "mem.active.average",
+    "mem.latency.average",
+    "mem.state.latest",
+    "mem.swapin.average",
+    "mem.swapinRate.average",
+    "mem.swapout.average",
+    "mem.swapoutRate.average",
+    "mem.totalCapacity.average",
+    "mem.usage.average",
+    "mem.vmmemctl.average",
+    "net.bytesRx.average",
+    "net.bytesTx.average",
+    "net.droppedRx.summation",
+    "net.droppedTx.summation",
+    "net.errorsRx.summation",
+    "net.errorsTx.summation",
+    "net.usage.average",
+    "power.power.average",
+    "storageAdapter.numberReadAveraged.average",
+    "storageAdapter.numberWriteAveraged.average",
+    "storageAdapter.read.average",
+    "storageAdapter.write.average",
+    "sys.uptime.latest",
+  ]
+    ## Collect IP addresses? Valid values are "ipv4" and "ipv6"
+  # ip_addresses = ["ipv6", "ipv4" ]
+
+  # host_metric_exclude = [] ## Nothing excluded by default
+  # host_instances = true ## true by default
+
+
+  ## Clusters
+  # cluster_include = [ "/*/host/**"] # Inventory path to clusters to collect (by default all are collected)
+  # cluster_exclude = [] # Inventory paths to exclude
+  # cluster_metric_include = [] ## if omitted or empty, all metrics are collected
+  # cluster_metric_exclude = [] ## Nothing excluded by default
+  # cluster_instances = false ## false by default
+
+  ## Resource Pools
+  # resource_pool_include = [ "/*/host/**"] # Inventory path to resource pools to collect (by default all are collected)
+  # resource_pool_exclude = [] # Inventory paths to exclude
+  # resource_pool_metric_include = [] ## if omitted or empty, all metrics are collected
+  # resource_pool_metric_exclude = [] ## Nothing excluded by default
+  # resource_pool_instances = false ## false by default
+
+  ## Datastores
+  # datastore_include = [ "/*/datastore/**"] # Inventory path to datastores to collect (by default all are collected)
+  # datastore_exclude = [] # Inventory paths to exclude
+  # datastore_metric_include = [] ## if omitted or empty, all metrics are collected
+  # datastore_metric_exclude = [] ## Nothing excluded by default
+  # datastore_instances = false ## false by default
+
+  ## Datacenters
+  # datacenter_include = [ "/*/host/**"] # Inventory path to clusters to collect (by default all are collected)
+  # datacenter_exclude = [] # Inventory paths to exclude
+  datacenter_metric_include = [] ## if omitted or empty, all metrics are collected
+  datacenter_metric_exclude = [ "*" ] ## Datacenters are not collected by default.
+  # datacenter_instances = false ## false by default
+
+  ## VSAN
+  # vsan_metric_include = [] ## if omitted or empty, all metrics are collected
+  # vsan_metric_exclude = [ "*" ] ## vSAN metrics are not collected by default.
+  ## Whether to skip verifying vSAN metrics against the ones from GetSupportedEntityTypes API.
+  # vsan_metric_skip_verify = false ## false by default.
+
+  ## Interval for sampling vSAN performance metrics, can be reduced down to
+  ## 30 seconds for vSAN 8 U1.
+  # vsan_interval = "5m"
+
+  ## Plugin Settings
+  ## separator character to use for measurement and field names (default: "_")
+  # separator = "_"
+
+  ## number of objects to retrieve per query for realtime resources (vms and hosts)
+  ## set to 64 for vCenter 5.5 and 6.0 (default: 256)
+  # max_query_objects = 256
+
+  ## number of metrics to retrieve per query for non-realtime resources (clusters and datastores)
+  ## set to 64 for vCenter 5.5 and 6.0 (default: 256)
+  # max_query_metrics = 256
+
+  ## number of go routines to use for collection and discovery of objects and metrics
+  # collect_concurrency = 1
+  # discover_concurrency = 1
+
+  ## the interval before (re)discovering objects subject to metrics collection (default: 300s)
+  # object_discovery_interval = "300s"
+
+  ## timeout applies to any of the API requests made to vCenter
+  # timeout = "60s"
+
+  ## When set to true, all samples are sent as integers. This makes the output
+  ## data types backwards compatible with Telegraf 1.9 or lower. Normally all
+  ## samples from vCenter, with the exception of percentages, are integer
+  ## values, but under some conditions, some averaging takes place internally in
+  ## the plugin. Setting this flag to "false" will send values as floats to
+  ## preserve the full precision when averaging takes place.
+  # use_int_samples = true
+
+  ## Custom attributes from vCenter can be very useful for queries in order to slice the
+  ## metrics along different dimensions and for forming ad-hoc relationships. They are disabled
+  ## by default, since they can add a considerable amount of tags to the resulting metrics. To
+  ## enable, simply set custom_attribute_exclude to [] (empty set) and use custom_attribute_include
+  ## to select the attributes you want to include.
+  # custom_attribute_include = []
+  # custom_attribute_exclude = ["*"]
+
+  ## The number of vSphere 5 minute metric collection cycles to look back for non-realtime metrics. In
+  ## some versions (6.7, 7.0 and possibly more), certain metrics, such as cluster metrics, may be reported
+  ## with a significant delay (>30min). If this happens, try increasing this number. Please note that increasing
+  ## it too much may cause performance issues.
+  # metric_lookback = 3
+
+  ## Optional SSL Config
+  # ssl_ca = "/path/to/cafile"
+  # ssl_cert = "/path/to/certfile"
+  # ssl_key = "/path/to/keyfile"
+  ## Use SSL but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## The Historical Interval value must match EXACTLY the interval in the daily
+  # "Interval Duration" found on the VCenter server under Configure > General > Statistics > Statistic intervals
+  # historical_interval = "5m"
+
+  ## Specifies plugin behavior regarding disconnected servers
+  ## Available choices :
+  ##   - error: telegraf will return an error on startup if one of the servers is unreachable
+  ##   - ignore: telegraf will ignore unreachable servers on both startup and gather
+  # disconnected_servers_behavior = "error"
+
+  ## HTTP Proxy support
+  # use_system_proxy = true
+  # http_proxy_url = ""
+```
+
+NOTE: To disable collection of a specific resource type, simply exclude all
+metrics using the XX_metric_exclude. For example, to disable collection of VMs,
+add this:
+
+```toml
+vm_metric_exclude = [ "*" ]
+```
+
+### Objects and Metrics per Query
+
+By default, the vCenter configuration limits the number of entities that are
+included in a performance chart query. The default setting for vCenter 6.5 and
+later is 256; earlier versions of vCenter have it set to 64.
+A vCenter administrator can change this setting.
+See this [VMware KB article](https://kb.vmware.com/s/article/2107096) for more
+information.
+
+Any modification should be reflected in this plugin by adjusting the
+`max_query_objects` parameter:
+
+```toml
+  ## number of objects to retrieve per query for realtime resources (VMs and hosts)
+  ## set to 64 for vCenter 5.5 and 6.0 (default: 256)
+  # max_query_objects = 256
+```
+
+### Collection and Discovery Concurrency
+
+In large vCenter setups it may be prudent to have multiple goroutines collect
+performance metrics concurrently, to avoid errors caused by the time elapsed
+during a collection cycle. This should never be set greater than 8, though the
+default of 1 (no concurrency) should be sufficient for most configurations.
+
+For setting up concurrency, modify `collect_concurrency` and
+`discover_concurrency` parameters.
+
+```toml
+  ## number of go routines to use for collection and discovery of objects and metrics
+  # collect_concurrency = 1
+  # discover_concurrency = 1
+```
+
+### Inventory Paths
+
+Resources to be monitored can be selected using Inventory Paths. This treats
+the vSphere inventory as a tree structure similar to a file system. A vSphere
+inventory has a structure similar to this:
+
+```bash
+<root>
++-DC0 # Virtual datacenter
+   +-datastore # Datastore folder (created by system)
+   | +-Datastore1
+   +-host # Host folder (created by system)
+   | +-Cluster1
+   | | +-Host1
+   | | | +-VM1
+   | | | +-VM2
+   | | | +-hadoop1
+   | | +-ResourcePool1
+   | | | +-VM3
+   | | | +-VM4
+   | +-Host2 # Dummy cluster created for non-clustered host
+   | | +-Host2
+   | | | +-VM5
+   | | | +-VM6
+   +-vm # VM folder (created by system)
+   | +-VM1
+   | +-VM2
+   | +-Folder1
+   | | +-hadoop1
+   | | +-NestedFolder1
+   | | | +-VM3
+   | | | +-VM4
+```
+
+#### Using Inventory Paths
+
+Using familiar UNIX-style paths, one could select e.g. VM2 with the path
+`/DC0/vm/VM2`.
+
+Often, we want to select a group of resources, such as all the VMs in a
+folder. We could use the path `/DC0/vm/Folder1/*` for that.
+
+Another possibility is to select objects using a partial name, such as
+`/DC0/vm/Folder1/hadoop*` yielding all VMs in Folder1 with a name starting
+with "hadoop".
+
+Finally, due to the arbitrary nesting of the folder structure, we need a
+"recursive wildcard" for traversing multiple folders. We use the "**" symbol
+for that. If we want to look for a VM with a name starting with "hadoop" in
+any folder, we could use the following path: `/DC0/vm/**/hadoop*`
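+
+These paths plug directly into the include and exclude options. A sketch
+using the example tree above:
+
+```toml
+[[inputs.vsphere]]
+  ## Collect only VMs named hadoop* found in any folder under DC0
+  vm_include = [ "/DC0/vm/**/hadoop*" ]
+  ## Skip one specific VM subtree
+  vm_exclude = [ "/DC0/vm/Folder1/NestedFolder1/*" ]
+```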
+
+#### Multiple Paths to VMs
+
+As we can see from the example tree above, VMs appear both in their own
+folder under the datacenter and under the hosts. This is useful when you want
+to select VMs on a specific host. For example,
+`/DC0/host/Cluster1/Host1/hadoop*` selects all VMs with a name starting with
+"hadoop" that are running on Host1.
+
+We can extend this to looking at a cluster level:
+`/DC0/host/Cluster1/*/hadoop*`. This selects any VM matching "hadoop*" on any
+host in Cluster1.
+
+#### Inventory paths and top-level folders
+
+If your datacenter is in a folder and not directly below the inventory root, the
+default inventory paths will not work. This is intentional, since recursive
+wildcards may be slow in very large environments.
+
+If your datacenter is in a folder, you have two options:
+
+1. Explicitly include the folder in the path. For example, if your datacenter
+   is in a folder named `F1`, you could use the following path to get to your
+   hosts: `/F1/MyDatacenter/host/**`
+2. Use a recursive wildcard to search an arbitrarily long chain of nested
+   folders. To get to the hosts, you could use the following path:
+   `/**/host/**`. Note that this may run slowly in a very large environment,
+   since a large number of nodes will be traversed.
+
+## Performance Considerations
+
+### Realtime vs. Historical Metrics
+
+vCenter keeps two different kinds of metrics, known as realtime and historical
+metrics.
+
+* Realtime metrics: Available at a 20 second granularity. These metrics are stored in memory and are very fast and cheap to query. Our tests have shown that a complete set of realtime metrics for 7000 virtual machines can be obtained in less than 20 seconds. Realtime metrics are only available on **ESXi hosts** and **virtual machine** resources. Realtime metrics are only stored for 1 hour in vCenter.
+* Historical metrics: Available at a (default) 5 minute, 30 minutes, 2 hours and 24 hours rollup levels. The vSphere Telegraf plugin only uses the most granular rollup which defaults to 5 minutes but can be changed in vCenter to other interval durations. These metrics are stored in the vCenter database and can be expensive and slow to query. Historical metrics are the only type of metrics available for **clusters**, **datastores**, **resource pools** and **datacenters**.
+
+This distinction has an impact on how Telegraf collects metrics. A single
+instance of an input plugin can have one and only one collection interval,
+which means that you typically set the collection interval based on the most
+frequently collected metric. Let's assume you set the collection interval to 1
+minute. All realtime metrics will be collected every minute. Since the
+historical metrics are only available on a 5 minute interval, the vSphere
+Telegraf plugin automatically skips four out of five collection cycles for
+these metrics. This works fine in many cases. Problems arise when the
+collection of historical metrics takes longer than the collection interval.
+This will cause error messages similar to this to appear in the Telegraf logs:
+
+```text
+2019-01-16T13:41:10Z W! [agent] input "inputs.vsphere" did not complete within its interval
+```
+
+This will disrupt the metric collection and can result in missed samples. The
+best practice workaround is to specify two instances of the vSphere plugin, one
+for the realtime metrics with a short collection interval and one for the
+historical metrics with a longer interval. You can use the `*_metric_exclude`
+to turn off the resources you don't want to collect metrics for in each
+instance. For example:
+
+```toml
+## Realtime instance
+[[inputs.vsphere]]
+  interval = "60s"
+  vcenters = [ "https://someaddress/sdk" ]
+  username = "someuser@vsphere.local"
+  password = "secret"
+
+  insecure_skip_verify = true
+  force_discover_on_init = true
+
+  # Exclude all historical metrics
+  datastore_metric_exclude = ["*"]
+  cluster_metric_exclude = ["*"]
+  datacenter_metric_exclude = ["*"]
+  resource_pool_metric_exclude = ["*"]
+  vsan_metric_exclude = ["*"]
+
+  collect_concurrency = 5
+  discover_concurrency = 5
+
+# Historical instance
+[[inputs.vsphere]]
+
+  interval = "300s"
+
+  vcenters = [ "https://someaddress/sdk" ]
+  username = "someuser@vsphere.local"
+  password = "secret"
+
+  insecure_skip_verify = true
+  force_discover_on_init = true
+  host_metric_exclude = ["*"] # Exclude realtime metrics
+  vm_metric_exclude = ["*"] # Exclude realtime metrics
+
+  max_query_metrics = 256
+  collect_concurrency = 3
+```
+
+### Configuring max_query_metrics Setting
+
+The `max_query_metrics` setting determines the maximum number of metrics to
+retrieve in one call to vCenter. Generally speaking, a higher number means
+faster and more efficient queries. However, the number of allowed metrics in a
+query is typically limited in vCenter by the `config.vpxd.stats.maxQueryMetrics`
+setting in vCenter. The value defaults to 64 on vSphere 5.5 and earlier and to
+256 on more recent versions. The vSphere plugin always checks this setting and
+will automatically reduce the number if the limit configured in vCenter is lower
+than `max_query_metrics` in the plugin. This will result in a log message similar
+to this:
+
+```text
+2019-01-21T03:24:18Z W! [input.vsphere] Configured max_query_metrics is 256, but server limits it to 64. Reducing.
+```
+
+You may ask a vCenter administrator to increase this limit to help boost
+performance.
+
+### Cluster Metrics and the max_query_metrics Setting
+
+Cluster metrics are handled a bit differently by vCenter. They are aggregated
+from ESXi and virtual machine metrics and may not be available when you query
+their most recent values. When this happens, vCenter will attempt to perform
+that aggregation on the fly. Unfortunately, all the subqueries needed
+internally in vCenter to perform this aggregation will count towards
+`config.vpxd.stats.maxQueryMetrics`. This means that even a very small query
+may result in an error message similar to this:
+
+```text
+2018-11-02T13:37:11Z E! Error in plugin [inputs.vsphere]: ServerFaultCode: This operation is restricted by the administrator - 'vpxd.stats.maxQueryMetrics'. Contact your system administrator
+```
+
+There are two ways of addressing this:
+
+* Ask your vCenter administrator to set `config.vpxd.stats.maxQueryMetrics` to a number that's higher than the total number of virtual machines managed by a vCenter instance.
+* Exclude the cluster metrics and use either the basicstats aggregator to calculate sums and averages per cluster or use queries in the visualization tool to obtain the same result.
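+
+The second option can be sketched as follows; the `basicstats` aggregator
+settings shown (`period`, `stats`) are illustrative and should be adapted to
+the rollup you need per cluster:
+
+```toml
+[[inputs.vsphere]]
+  vcenters = [ "https://someaddress/sdk" ]
+  username = "someuser@vsphere.local"
+  password = "secret"
+
+  ## Skip the expensive on-the-fly cluster aggregation in vCenter
+  cluster_metric_exclude = ["*"]
+
+## Recreate per-cluster sums and means from the host metrics instead
+[[aggregators.basicstats]]
+  period = "300s"
+  drop_original = false
+  stats = ["sum", "mean"]
+```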
+
+### Concurrency Settings
+
+The vSphere plugin allows you to specify two concurrency settings:
+
+* `collect_concurrency`: The maximum number of simultaneous queries for performance metrics allowed per resource type.
+* `discover_concurrency`: The maximum number of simultaneous queries for resource discovery allowed.
+
+While a higher level of concurrency typically has a positive impact on
+performance, increasing these numbers too much can cause performance issues at
+the vCenter server. A rule of thumb is to set these parameters to the number of
+virtual machines divided by 1500 and rounded up to the nearest integer.
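+
+For example, for a vCenter managing roughly 4000 virtual machines, 4000 / 1500
+rounds up to 3 (the numbers here are illustrative):
+
+```toml
+[[inputs.vsphere]]
+  vcenters = [ "https://someaddress/sdk" ]
+  username = "someuser@vsphere.local"
+  password = "secret"
+
+  ## ~4000 VMs / 1500, rounded up to the nearest integer
+  collect_concurrency = 3
+  discover_concurrency = 3
+```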
+
+### Configuring historical_interval Setting
+
+When the vSphere plugin queries vCenter for historical statistics, it queries
+for statistics that exist at a specific interval. The default historical
+interval duration is 5 minutes, but if this interval has been changed in
+vCenter, you must override the default query interval in the vSphere plugin.
+
+* `historical_interval`: The interval of the most granular statistics configured in vSphere represented in seconds.
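+
+For example, if the most granular statistics interval configured in vCenter
+has been changed to 60 seconds (an illustrative value), tell the plugin to
+query at that interval:
+
+```toml
+[[inputs.vsphere]]
+  vcenters = [ "https://someaddress/sdk" ]
+  username = "someuser@vsphere.local"
+  password = "secret"
+
+  ## Must match the most granular statistics interval configured in vSphere
+  historical_interval = "60s"
+```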
+
+## Metrics
+
+* Cluster Stats
+  * Cluster services: CPU, memory, failover
+  * CPU: total, usage
+  * Memory: consumed, total, vmmemctl
+  * VM operations: # changes, clone, create, deploy, destroy, power, reboot, reconfigure, register, reset, shutdown, standby, vmotion
+* Host Stats:
+  * CPU: total, usage, cost, mhz
+  * Datastore: iops, latency, read/write bytes, # reads/writes
+  * Disk: commands, latency, kernel reads/writes, # reads/writes, queues
+  * Memory: total, usage, active, latency, swap, shared, vmmemctl
+  * Network: broadcast, bytes, dropped, errors, multicast, packets, usage
+  * Power: energy, usage, capacity
+  * Res CPU: active, max, running
+  * Storage Adapter: commands, latency, # reads/writes
+  * Storage Path: commands, latency, # reads/writes
+  * System Resources: cpu active, cpu max, cpu running, cpu usage, mem allocated, mem consumed, mem shared, swap
+  * System: uptime
+  * Flash Module: active VMDKs
+* VM Stats:
+  * CPU: demand, usage, readiness, cost, mhz
+  * Datastore: latency, # reads/writes
+  * Disk: commands, latency, # reads/writes, provisioned, usage
+  * Memory: granted, usage, active, swap, vmmemctl
+  * Network: broadcast, bytes, dropped, multicast, packets, usage
+  * Power: energy, usage
+  * Res CPU: active, max, running
+  * System: operating system uptime, uptime
+  * Virtual Disk: seeks, # reads/writes, latency, load
+* Resource Pools stats:
+  * Memory: total, usage, active, latency, swap, shared, vmmemctl
+  * CPU: capacity, usage, corecount
+  * Disk: throughput
+  * Network: throughput
+  * Power: energy, usage
+* Datastore stats:
+  * Disk: Capacity, provisioned, used
+
+For a detailed list of commonly available metrics, please refer to
+METRICS.md
+
+## Add a vSAN extension
+
+A vSAN resource is a special type of resource that can be collected by the
+plugin. The configuration of a vSAN resource slightly differs from the
+configuration of hosts, VMs, and other resources.
+
+### Prerequisites for vSAN
+
+* vSphere 6.5 and later
+* Clusters with vSAN enabled
+* [Turn on Virtual SAN performance service](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.virtualsan.doc/GUID-02F67DC3-3D5A-48A4-A445-D2BD6AF2862C.html): When you create a vSAN cluster,
+the performance service is disabled. To monitor the performance metrics,
+you must turn on the vSAN performance service.
+
+### vSAN Configuration
+
+```toml
+[[inputs.vsphere]]
+  interval = "300s"
+  vcenters = ["https://<vcenter-ip>/sdk", "https://<vcenter2-ip>/sdk"]
+  username = "<user>"
+  password = "<pwd>"
+
+  # Exclude all other metrics
+  vm_metric_exclude = ["*"]
+  datastore_metric_exclude = ["*"]
+  datacenter_metric_exclude = ["*"]
+  host_metric_exclude = ["*"]
+  cluster_metric_exclude = ["*"]
+  
+  # By default, all supported entities are included
+  vsan_metric_include = [
+    "summary.disk-usage",
+    "summary.health",
+    "summary.resync",
+    "performance.cluster-domclient",
+    "performance.cluster-domcompmgr",
+    "performance.host-domclient",
+    "performance.host-domcompmgr",
+    "performance.cache-disk",
+    "performance.disk-group",
+    "performance.capacity-disk",
+    "performance.virtual-machine",
+    "performance.vscsi",
+    "performance.virtual-disk",
+    "performance.vsan-host-net",
+    "performance.vsan-vnic-net",
+    "performance.vsan-pnic-net",
+    "performance.vsan-iscsi-host",
+    "performance.vsan-iscsi-target",
+    "performance.vsan-iscsi-lun",
+    "performance.lsom-world-cpu",
+    "performance.nic-world-cpu",
+    "performance.dom-world-cpu",
+    "performance.cmmds-world-cpu",
+    "performance.host-cpu",
+    "performance.host-domowner",
+    "performance.host-memory-slab",
+    "performance.host-memory-heap",
+    "performance.system-mem",
+  ]
+  # by default vsan_metric_skip_verify = false
+  vsan_metric_skip_verify = true
+  vsan_metric_exclude = [ ]
+  # vsan_cluster_include = [ "/*/host/**" ] # Inventory path to clusters to collect (by default all are collected)
+  
+  collect_concurrency = 5
+  discover_concurrency = 5
+  
+  ## Optional SSL Config
+  # ssl_ca = "/path/to/cafile"
+  # ssl_cert = "/path/to/certfile"
+  # ssl_key = "/path/to/keyfile"
+  ## Use SSL but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+* Use `vsan_metric_include = [...]` to define the vSAN metrics that you want to collect.
+For example, `vsan_metric_include = ["summary.*", "performance.host-domclient", "performance.cache-disk", "performance.disk-group", "performance.capacity-disk"]`.
+To include all supported vSAN metrics, use `vsan_metric_include = [ "*" ]`.
+To disable all the vSAN metrics, use `vsan_metric_exclude = [ "*" ]`.
+
+* `vsan_metric_skip_verify` defines whether to skip verifying vSAN metrics against the ones from the [GetSupportedEntityTypes API](https://code.vmware.com/apis/48/vsan#/doc/vim.cluster.VsanPerformanceManager.html#getSupportedEntityTypes).
+This option exists because some performance entities are not returned by the API, but you may still need their stats.
+When set to false, anything not in the supported entity list is filtered out.
+When set to true, the queried metrics are exactly those in `vsan_metric_include`, and `vsan_metric_exclude` is ignored. By default the value is false.
+
+* `vsan_cluster_include` defines a list of inventory paths used to select a subset of vSAN clusters.
+vSAN metrics are collected only at the cluster level, so these paths follow the same conventions as inventory paths for vSphere clusters.
+
+Some vSAN measurements carry additional tags:
+
+* vsan-host-net
+  * hostname
+* vsan-pnic-net
+  * pnic
+* vsan-vnic-net
+  * vnic
+  * stackName
+
+### Realtime vs. Historical Metrics in vSAN
+
+vSAN provides two different kinds of metrics: realtime and historical.
+
+* Realtime metrics are metrics with the prefix `summary`. These metrics are available in realtime.
+* Historical metrics are metrics with the prefix `performance`. These are queried from the vSAN performance API, which is available at a 5-minute rollup level.
+
+For performance reasons, it is better to configure two instances of the
+plugin: one for the realtime metrics with a short collection interval,
+and one for the historical metrics with a longer interval.
+For example:
+
+```toml
+## Realtime instance
+[[inputs.vsphere]]
+  interval = "30s"
+  vcenters = [ "https://someaddress/sdk" ]
+  username = "someuser@vsphere.local"
+  password = "secret"
+
+  insecure_skip_verify = true
+  force_discover_on_init = true
+
+  # Exclude all other metrics
+  vm_metric_exclude = ["*"]
+  datastore_metric_exclude = ["*"]
+  datacenter_metric_exclude = ["*"]
+  host_metric_exclude = ["*"]
+  cluster_metric_exclude = ["*"]
+  
+  vsan_metric_include = [ "summary.*" ]
+  vsan_metric_exclude = [ ]
+  vsan_metric_skip_verify = false
+
+  collect_concurrency = 5
+  discover_concurrency = 5
+
+# Historical instance
+[[inputs.vsphere]]
+
+  interval = "300s"
+  vcenters = [ "https://someaddress/sdk" ]
+  username = "someuser@vsphere.local"
+  password = "secret"
+
+  insecure_skip_verify = true
+  force_discover_on_init = true
+
+  # Exclude all other metrics
+  vm_metric_exclude = ["*"]
+  datastore_metric_exclude = ["*"]
+  datacenter_metric_exclude = ["*"]
+  host_metric_exclude = ["*"]
+  cluster_metric_exclude = ["*"]
+  
+  vsan_metric_include = [ "performance.*" ]
+  vsan_metric_exclude = [ ]
+  vsan_metric_skip_verify = false
+  
+  collect_concurrency = 5
+  discover_concurrency = 5
+```
+
+## Example Output
+
+```text
+vsphere_vm_cpu,esxhostname=DC0_H0,guest=other,host=host.example.com,moid=vm-35,os=Mac,source=DC0_H0_VM0,vcenter=localhost:8989,vmname=DC0_H0_VM0 run_summation=2608i,ready_summation=129i,usage_average=5.01,used_summation=2134i,demand_average=326i 1535660299000000000
+vsphere_vm_net,esxhostname=DC0_H0,guest=other,host=host.example.com,moid=vm-35,os=Mac,source=DC0_H0_VM0,vcenter=localhost:8989,vmname=DC0_H0_VM0 bytesRx_average=321i,bytesTx_average=335i 1535660299000000000
+vsphere_vm_virtualDisk,esxhostname=DC0_H0,guest=other,host=host.example.com,moid=vm-35,os=Mac,source=DC0_H0_VM0,vcenter=localhost:8989,vmname=DC0_H0_VM0 write_average=144i,read_average=4i 1535660299000000000
+vsphere_vm_net,esxhostname=DC0_H0,guest=other,host=host.example.com,moid=vm-38,os=Mac,source=DC0_H0_VM1,vcenter=localhost:8989,vmname=DC0_H0_VM1 bytesRx_average=242i,bytesTx_average=308i 1535660299000000000
+vsphere_vm_virtualDisk,esxhostname=DC0_H0,guest=other,host=host.example.com,moid=vm-38,os=Mac,source=DC0_H0_VM1,vcenter=localhost:8989,vmname=DC0_H0_VM1 write_average=232i,read_average=4i 1535660299000000000
+vsphere_vm_cpu,esxhostname=DC0_H0,guest=other,host=host.example.com,moid=vm-38,os=Mac,source=DC0_H0_VM1,vcenter=localhost:8989,vmname=DC0_H0_VM1 usage_average=5.49,used_summation=1804i,demand_average=308i,run_summation=2001i,ready_summation=120i 1535660299000000000
+vsphere_vm_cpu,clustername=DC0_C0,esxhostname=DC0_C0_H0,guest=other,host=host.example.com,moid=vm-41,os=Mac,source=DC0_C0_RP0_VM0,vcenter=localhost:8989,vmname=DC0_C0_RP0_VM0 usage_average=4.19,used_summation=2108i,demand_average=285i,run_summation=1793i,ready_summation=93i 1535660299000000000
+vsphere_vm_net,clustername=DC0_C0,esxhostname=DC0_C0_H0,guest=other,host=host.example.com,moid=vm-41,os=Mac,source=DC0_C0_RP0_VM0,vcenter=localhost:8989,vmname=DC0_C0_RP0_VM0 bytesRx_average=272i,bytesTx_average=419i 1535660299000000000
+vsphere_vm_virtualDisk,clustername=DC0_C0,esxhostname=DC0_C0_H0,guest=other,host=host.example.com,moid=vm-41,os=Mac,source=DC0_C0_RP0_VM0,vcenter=localhost:8989,vmname=DC0_C0_RP0_VM0 write_average=229i,read_average=4i 1535660299000000000
+vsphere_vm_cpu,clustername=DC0_C0,esxhostname=DC0_C0_H0,guest=other,host=host.example.com,moid=vm-44,os=Mac,source=DC0_C0_RP0_VM1,vcenter=localhost:8989,vmname=DC0_C0_RP0_VM1 run_summation=2277i,ready_summation=118i,usage_average=4.67,used_summation=2546i,demand_average=289i 1535660299000000000
+vsphere_vm_net,clustername=DC0_C0,esxhostname=DC0_C0_H0,guest=other,host=host.example.com,moid=vm-44,os=Mac,source=DC0_C0_RP0_VM1,vcenter=localhost:8989,vmname=DC0_C0_RP0_VM1 bytesRx_average=243i,bytesTx_average=296i 1535660299000000000
+vsphere_vm_virtualDisk,clustername=DC0_C0,esxhostname=DC0_C0_H0,guest=other,host=host.example.com,moid=vm-44,os=Mac,source=DC0_C0_RP0_VM1,vcenter=localhost:8989,vmname=DC0_C0_RP0_VM1 write_average=158i,read_average=4i 1535660299000000000
+vsphere_host_net,esxhostname=DC0_H0,host=host.example.com,interface=vmnic0,moid=host-19,os=Mac,source=DC0_H0,vcenter=localhost:8989 usage_average=1042i,bytesTx_average=753i,bytesRx_average=660i 1535660299000000000
+vsphere_host_cpu,esxhostname=DC0_H0,host=host.example.com,moid=host-19,os=Mac,source=DC0_H0,vcenter=localhost:8989 utilization_average=10.46,usage_average=22.4,readiness_average=0.4,costop_summation=2i,coreUtilization_average=19.61,wait_summation=5148518i,idle_summation=58581i,latency_average=0.6,ready_summation=13370i,used_summation=19219i 1535660299000000000
+vsphere_host_cpu,cpu=0,esxhostname=DC0_H0,host=host.example.com,moid=host-19,os=Mac,source=DC0_H0,vcenter=localhost:8989 coreUtilization_average=25.6,utilization_average=11.58,used_summation=24306i,usage_average=24.26,idle_summation=86688i 1535660299000000000
+vsphere_host_cpu,cpu=1,esxhostname=DC0_H0,host=host.example.com,moid=host-19,os=Mac,source=DC0_H0,vcenter=localhost:8989 coreUtilization_average=12.29,utilization_average=8.32,used_summation=31312i,usage_average=22.47,idle_summation=94934i 1535660299000000000
+vsphere_host_disk,esxhostname=DC0_H0,host=host.example.com,moid=host-19,os=Mac,source=DC0_H0,vcenter=localhost:8989 read_average=331i,write_average=2800i 1535660299000000000
+vsphere_host_disk,disk=/var/folders/rf/txwdm4pj409f70wnkdlp7sz80000gq/T/govcsim-DC0-LocalDS_0-367088371@folder-5,esxhostname=DC0_H0,host=host.example.com,moid=host-19,os=Mac,source=DC0_H0,vcenter=localhost:8989 write_average=2701i,read_average=258i 1535660299000000000
+vsphere_host_mem,esxhostname=DC0_H0,host=host.example.com,moid=host-19,os=Mac,source=DC0_H0,vcenter=localhost:8989 usage_average=93.27 1535660299000000000
+vsphere_host_net,esxhostname=DC0_H0,host=host.example.com,moid=host-19,os=Mac,source=DC0_H0,vcenter=localhost:8989 bytesTx_average=650i,usage_average=1414i,bytesRx_average=569i 1535660299000000000
+vsphere_host_cpu,clustername=DC0_C0,cpu=1,esxhostname=DC0_C0_H0,host=host.example.com,moid=host-30,os=Mac,source=DC0_C0_H0,vcenter=localhost:8989 utilization_average=12.6,used_summation=25775i,usage_average=24.44,idle_summation=68886i,coreUtilization_average=17.59 1535660299000000000
+vsphere_host_disk,clustername=DC0_C0,esxhostname=DC0_C0_H0,host=host.example.com,moid=host-30,os=Mac,source=DC0_C0_H0,vcenter=localhost:8989 read_average=340i,write_average=2340i 1535660299000000000
+vsphere_host_disk,clustername=DC0_C0,disk=/var/folders/rf/txwdm4pj409f70wnkdlp7sz80000gq/T/govcsim-DC0-LocalDS_0-367088371@folder-5,esxhostname=DC0_C0_H0,host=host.example.com,moid=host-30,os=Mac,source=DC0_C0_H0,vcenter=localhost:8989 write_average=2277i,read_average=282i 1535660299000000000
+vsphere_host_mem,clustername=DC0_C0,esxhostname=DC0_C0_H0,host=host.example.com,moid=host-30,os=Mac,source=DC0_C0_H0,vcenter=localhost:8989 usage_average=104.78 1535660299000000000
+vsphere_host_net,clustername=DC0_C0,esxhostname=DC0_C0_H0,host=host.example.com,moid=host-30,os=Mac,source=DC0_C0_H0,vcenter=localhost:8989 bytesTx_average=463i,usage_average=1131i,bytesRx_average=719i 1535660299000000000
+vsphere_host_net,clustername=DC0_C0,esxhostname=DC0_C0_H0,host=host.example.com,interface=vmnic0,moid=host-30,os=Mac,source=DC0_C0_H0,vcenter=localhost:8989 usage_average=1668i,bytesTx_average=838i,bytesRx_average=921i 1535660299000000000
+vsphere_host_cpu,clustername=DC0_C0,esxhostname=DC0_C0_H0,host=host.example.com,moid=host-30,os=Mac,source=DC0_C0_H0,vcenter=localhost:8989 used_summation=28952i,utilization_average=11.36,idle_summation=93261i,latency_average=0.46,ready_summation=12837i,usage_average=21.56,readiness_average=0.39,costop_summation=2i,coreUtilization_average=27.19,wait_summation=3820829i 1535660299000000000
+vsphere_host_cpu,clustername=DC0_C0,cpu=0,esxhostname=DC0_C0_H0,host=host.example.com,moid=host-30,os=Mac,source=DC0_C0_H0,vcenter=localhost:8989 coreUtilization_average=24.12,utilization_average=13.83,used_summation=22462i,usage_average=24.69,idle_summation=96993i 1535660299000000000
+internal_vsphere,host=host.example.com,os=Mac,vcenter=localhost:8989 connect_ns=4727607i,discover_ns=65389011i,discovered_objects=8i 1535660309000000000
+internal_vsphere,host=host.example.com,os=Mac,resourcetype=datastore,vcenter=localhost:8989 gather_duration_ns=296223i,gather_count=0i 1535660309000000000
+internal_vsphere,host=host.example.com,os=Mac,resourcetype=vm,vcenter=192.168.1.151 gather_duration_ns=136050i,gather_count=0i 1535660309000000000
+internal_vsphere,host=host.example.com,os=Mac,resourcetype=host,vcenter=localhost:8989 gather_count=62i,gather_duration_ns=8788033i 1535660309000000000
+internal_vsphere,host=host.example.com,os=Mac,resourcetype=host,vcenter=192.168.1.151 gather_count=0i,gather_duration_ns=162002i 1535660309000000000
+internal_gather,host=host.example.com,input=vsphere,os=Mac gather_time_ns=17483653i,metrics_gathered=28i 1535660309000000000
+internal_vsphere,host=host.example.com,os=Mac,vcenter=192.168.1.151 connect_ns=0i 1535660309000000000
+internal_vsphere,host=host.example.com,os=Mac,resourcetype=vm,vcenter=localhost:8989 gather_duration_ns=7291897i,gather_count=36i 1535660309000000000
+internal_vsphere,host=host.example.com,os=Mac,resourcetype=datastore,vcenter=192.168.1.151 gather_duration_ns=958474i,gather_count=0i 1535660309000000000
+vsphere_vm_cpu,esxhostname=DC0_H0,guest=other,host=host.example.com,moid=vm-38,os=Mac,source=DC0_H0_VM1,vcenter=localhost:8989,vmname=DC0_H0_VM1 usage_average=8.82,used_summation=3192i,demand_average=283i,run_summation=2419i,ready_summation=115i 1535660319000000000
+vsphere_vm_net,esxhostname=DC0_H0,guest=other,host=host.example.com,moid=vm-38,os=Mac,source=DC0_H0_VM1,vcenter=localhost:8989,vmname=DC0_H0_VM1 bytesRx_average=277i,bytesTx_average=343i 1535660319000000000
+vsphere_vm_virtualDisk,esxhostname=DC0_H0,guest=other,host=host.example.com,moid=vm-38,os=Mac,source=DC0_H0_VM1,vcenter=localhost:8989,vmname=DC0_H0_VM1 read_average=1i,write_average=741i 1535660319000000000
+vsphere_vm_net,clustername=DC0_C0,esxhostname=DC0_C0_H0,guest=other,host=host.example.com,moid=vm-41,os=Mac,source=DC0_C0_RP0_VM0,vcenter=localhost:8989,vmname=DC0_C0_RP0_VM0 bytesRx_average=386i,bytesTx_average=369i 1535660319000000000
+vsphere_vm_virtualDisk,clustername=DC0_C0,esxhostname=DC0_C0_H0,guest=other,host=host.example.com,moid=vm-41,os=Mac,source=DC0_C0_RP0_VM0,vcenter=localhost:8989,vmname=DC0_C0_RP0_VM0 write_average=814i,read_average=1i 1535660319000000000
+vsphere_vm_cpu,clustername=DC0_C0,esxhostname=DC0_C0_H0,guest=other,host=host.example.com,moid=vm-41,os=Mac,source=DC0_C0_RP0_VM0,vcenter=localhost:8989,vmname=DC0_C0_RP0_VM0 run_summation=1778i,ready_summation=111i,usage_average=7.54,used_summation=2339i,demand_average=297i 1535660319000000000
+vsphere_vm_cpu,clustername=DC0_C0,esxhostname=DC0_C0_H0,guest=other,host=host.example.com,moid=vm-44,os=Mac,source=DC0_C0_RP0_VM1,vcenter=localhost:8989,vmname=DC0_C0_RP0_VM1 usage_average=6.98,used_summation=2125i,demand_average=211i,run_summation=2990i,ready_summation=141i 1535660319000000000
+vsphere_vm_net,clustername=DC0_C0,esxhostname=DC0_C0_H0,guest=other,host=host.example.com,moid=vm-44,os=Mac,source=DC0_C0_RP0_VM1,vcenter=localhost:8989,vmname=DC0_C0_RP0_VM1 bytesRx_average=357i,bytesTx_average=268i 1535660319000000000
+vsphere_vm_virtualDisk,clustername=DC0_C0,esxhostname=DC0_C0_H0,guest=other,host=host.example.com,moid=vm-44,os=Mac,source=DC0_C0_RP0_VM1,vcenter=localhost:8989,vmname=DC0_C0_RP0_VM1 write_average=528i,read_average=1i 1535660319000000000
+vsphere_vm_cpu,esxhostname=DC0_H0,guest=other,host=host.example.com,moid=vm-35,os=Mac,source=DC0_H0_VM0,vcenter=localhost:8989,vmname=DC0_H0_VM0 used_summation=2374i,demand_average=195i,run_summation=3454i,ready_summation=110i,usage_average=7.34 1535660319000000000
+vsphere_vm_net,esxhostname=DC0_H0,guest=other,host=host.example.com,moid=vm-35,os=Mac,source=DC0_H0_VM0,vcenter=localhost:8989,vmname=DC0_H0_VM0 bytesRx_average=308i,bytesTx_average=246i 1535660319000000000
+vsphere_vm_virtualDisk,esxhostname=DC0_H0,guest=other,host=host.example.com,moid=vm-35,os=Mac,source=DC0_H0_VM0,vcenter=localhost:8989,vmname=DC0_H0_VM0 write_average=1178i,read_average=1i 1535660319000000000
+vsphere_host_net,esxhostname=DC0_H0,host=host.example.com,interface=vmnic0,moid=host-19,os=Mac,source=DC0_H0,vcenter=localhost:8989 bytesRx_average=773i,usage_average=1521i,bytesTx_average=890i 1535660319000000000
+vsphere_host_cpu,esxhostname=DC0_H0,host=host.example.com,moid=host-19,os=Mac,source=DC0_H0,vcenter=localhost:8989 wait_summation=3421258i,idle_summation=67994i,latency_average=0.36,usage_average=29.86,readiness_average=0.37,used_summation=25244i,costop_summation=2i,coreUtilization_average=21.94,utilization_average=17.19,ready_summation=15897i 1535660319000000000
+vsphere_host_cpu,cpu=0,esxhostname=DC0_H0,host=host.example.com,moid=host-19,os=Mac,source=DC0_H0,vcenter=localhost:8989 utilization_average=11.32,used_summation=19333i,usage_average=14.29,idle_summation=92708i,coreUtilization_average=27.68 1535660319000000000
+vsphere_host_cpu,cpu=1,esxhostname=DC0_H0,host=host.example.com,moid=host-19,os=Mac,source=DC0_H0,vcenter=localhost:8989 used_summation=28596i,usage_average=25.32,idle_summation=79553i,coreUtilization_average=28.01,utilization_average=11.33 1535660319000000000
+vsphere_host_disk,esxhostname=DC0_H0,host=host.example.com,moid=host-19,os=Mac,source=DC0_H0,vcenter=localhost:8989 read_average=86i,write_average=1659i 1535660319000000000
+vsphere_host_disk,disk=/var/folders/rf/txwdm4pj409f70wnkdlp7sz80000gq/T/govcsim-DC0-LocalDS_0-367088371@folder-5,esxhostname=DC0_H0,host=host.example.com,moid=host-19,os=Mac,source=DC0_H0,vcenter=localhost:8989 write_average=1997i,read_average=58i 1535660319000000000
+vsphere_host_mem,esxhostname=DC0_H0,host=host.example.com,moid=host-19,os=Mac,source=DC0_H0,vcenter=localhost:8989 usage_average=68.45 1535660319000000000
+vsphere_host_net,esxhostname=DC0_H0,host=host.example.com,moid=host-19,os=Mac,source=DC0_H0,vcenter=localhost:8989 bytesTx_average=679i,usage_average=2286i,bytesRx_average=719i 1535660319000000000
+vsphere_host_cpu,clustername=DC0_C0,cpu=1,esxhostname=DC0_C0_H0,host=host.example.com,moid=host-30,os=Mac,source=DC0_C0_H0,vcenter=localhost:8989 utilization_average=10.52,used_summation=21693i,usage_average=23.09,idle_summation=84590i,coreUtilization_average=29.92 1535660319000000000
+vsphere_host_disk,clustername=DC0_C0,esxhostname=DC0_C0_H0,host=host.example.com,moid=host-30,os=Mac,source=DC0_C0_H0,vcenter=localhost:8989 read_average=113i,write_average=1236i 1535660319000000000
+vsphere_host_disk,clustername=DC0_C0,disk=/var/folders/rf/txwdm4pj409f70wnkdlp7sz80000gq/T/govcsim-DC0-LocalDS_0-367088371@folder-5,esxhostname=DC0_C0_H0,host=host.example.com,moid=host-30,os=Mac,source=DC0_C0_H0,vcenter=localhost:8989 write_average=1708i,read_average=110i 1535660319000000000
+vsphere_host_mem,clustername=DC0_C0,esxhostname=DC0_C0_H0,host=host.example.com,moid=host-30,os=Mac,source=DC0_C0_H0,vcenter=localhost:8989 usage_average=111.46 1535660319000000000
+vsphere_host_net,clustername=DC0_C0,esxhostname=DC0_C0_H0,host=host.example.com,moid=host-30,os=Mac,source=DC0_C0_H0,vcenter=localhost:8989 bytesTx_average=998i,usage_average=2000i,bytesRx_average=881i 1535660319000000000
+vsphere_host_net,clustername=DC0_C0,esxhostname=DC0_C0_H0,host=host.example.com,interface=vmnic0,moid=host-30,os=Mac,source=DC0_C0_H0,vcenter=localhost:8989 usage_average=1683i,bytesTx_average=675i,bytesRx_average=1078i 1535660319000000000
+vsphere_host_cpu,clustername=DC0_C0,esxhostname=DC0_C0_H0,host=host.example.com,moid=host-30,os=Mac,source=DC0_C0_H0,vcenter=localhost:8989 used_summation=28531i,wait_summation=3139129i,utilization_average=9.99,idle_summation=98579i,latency_average=0.51,costop_summation=2i,coreUtilization_average=14.35,ready_summation=16121i,usage_average=34.19,readiness_average=0.4 1535660319000000000
+vsphere_host_cpu,clustername=DC0_C0,cpu=0,esxhostname=DC0_C0_H0,host=host.example.com,moid=host-30,os=Mac,source=DC0_C0_H0,vcenter=localhost:8989 utilization_average=12.2,used_summation=22750i,usage_average=18.84,idle_summation=99539i,coreUtilization_average=23.05 1535660319000000000
+internal_vsphere,host=host.example.com,os=Mac,resourcetype=host,vcenter=localhost:8989 gather_duration_ns=7076543i,gather_count=62i 1535660339000000000
+internal_vsphere,host=host.example.com,os=Mac,resourcetype=host,vcenter=192.168.1.151 gather_duration_ns=4051303i,gather_count=0i 1535660339000000000
+internal_gather,host=host.example.com,input=vsphere,os=Mac metrics_gathered=56i,gather_time_ns=13555029i 1535660339000000000
+internal_vsphere,host=host.example.com,os=Mac,vcenter=192.168.1.151 connect_ns=0i 1535660339000000000
+internal_vsphere,host=host.example.com,os=Mac,resourcetype=vm,vcenter=localhost:8989 gather_duration_ns=6335467i,gather_count=36i 1535660339000000000
+internal_vsphere,host=host.example.com,os=Mac,resourcetype=datastore,vcenter=192.168.1.151 gather_duration_ns=958474i,gather_count=0i 1535660339000000000
+internal_vsphere,host=host.example.com,os=Mac,vcenter=localhost:8989 discover_ns=65389011i,discovered_objects=8i,connect_ns=4727607i 1535660339000000000
+internal_vsphere,host=host.example.com,os=Mac,resourcetype=datastore,vcenter=localhost:8989 gather_duration_ns=296223i,gather_count=0i 1535660339000000000
+internal_vsphere,host=host.example.com,os=Mac,resourcetype=vm,vcenter=192.168.1.151 gather_count=0i,gather_duration_ns=1540920i 1535660339000000000
+vsphere_vm_virtualDisk,esxhostname=DC0_H0,guest=other,host=host.example.com,moid=vm-35,os=Mac,source=DC0_H0_VM0,vcenter=localhost:8989,vmname=DC0_H0_VM0 write_average=302i,read_average=11i 1535660339000000000
+vsphere_vm_cpu,esxhostname=DC0_H0,guest=other,host=host.example.com,moid=vm-35,os=Mac,source=DC0_H0_VM0,vcenter=localhost:8989,vmname=DC0_H0_VM0 usage_average=5.58,used_summation=2941i,demand_average=298i,run_summation=3255i,ready_summation=96i 1535660339000000000
+vsphere_vm_net,esxhostname=DC0_H0,guest=other,host=host.example.com,moid=vm-35,os=Mac,source=DC0_H0_VM0,vcenter=localhost:8989,vmname=DC0_H0_VM0 bytesRx_average=155i,bytesTx_average=241i 1535660339000000000
+vsphere_vm_cpu,esxhostname=DC0_H0,guest=other,host=host.example.com,moid=vm-38,os=Mac,source=DC0_H0_VM1,vcenter=localhost:8989,vmname=DC0_H0_VM1 usage_average=10.3,used_summation=3053i,demand_average=346i,run_summation=3289i,ready_summation=122i 1535660339000000000
+vsphere_vm_net,esxhostname=DC0_H0,guest=other,host=host.example.com,moid=vm-38,os=Mac,source=DC0_H0_VM1,vcenter=localhost:8989,vmname=DC0_H0_VM1 bytesRx_average=215i,bytesTx_average=275i 1535660339000000000
+vsphere_vm_virtualDisk,esxhostname=DC0_H0,guest=other,host=host.example.com,moid=vm-38,os=Mac,source=DC0_H0_VM1,vcenter=localhost:8989,vmname=DC0_H0_VM1 write_average=252i,read_average=14i 1535660339000000000
+vsphere_vm_cpu,clustername=DC0_C0,esxhostname=DC0_C0_H0,guest=other,host=host.example.com,moid=vm-41,os=Mac,source=DC0_C0_RP0_VM0,vcenter=localhost:8989,vmname=DC0_C0_RP0_VM0 usage_average=8,used_summation=2183i,demand_average=354i,run_summation=3542i,ready_summation=128i 1535660339000000000
+vsphere_vm_net,clustername=DC0_C0,esxhostname=DC0_C0_H0,guest=other,host=host.example.com,moid=vm-41,os=Mac,source=DC0_C0_RP0_VM0,vcenter=localhost:8989,vmname=DC0_C0_RP0_VM0 bytesRx_average=178i,bytesTx_average=200i 1535660339000000000
+vsphere_vm_virtualDisk,clustername=DC0_C0,esxhostname=DC0_C0_H0,guest=other,host=host.example.com,moid=vm-41,os=Mac,source=DC0_C0_RP0_VM0,vcenter=localhost:8989,vmname=DC0_C0_RP0_VM0 write_average=283i,read_average=12i 1535660339000000000
+vsphere_vm_cpu,clustername=DC0_C0,esxhostname=DC0_C0_H0,guest=other,host=host.example.com,moid=vm-44,os=Mac,source=DC0_C0_RP0_VM1,vcenter=localhost:8989,vmname=DC0_C0_RP0_VM1 demand_average=328i,run_summation=3481i,ready_summation=122i,usage_average=7.95,used_summation=2167i 1535660339000000000
+vsphere_vm_net,clustername=DC0_C0,esxhostname=DC0_C0_H0,guest=other,host=host.example.com,moid=vm-44,os=Mac,source=DC0_C0_RP0_VM1,vcenter=localhost:8989,vmname=DC0_C0_RP0_VM1 bytesTx_average=282i,bytesRx_average=196i 1535660339000000000
+vsphere_vm_virtualDisk,clustername=DC0_C0,esxhostname=DC0_C0_H0,guest=other,host=host.example.com,moid=vm-44,os=Mac,source=DC0_C0_RP0_VM1,vcenter=localhost:8989,vmname=DC0_C0_RP0_VM1 write_average=321i,read_average=13i 1535660339000000000
+vsphere_host_disk,esxhostname=DC0_H0,host=host.example.com,moid=host-19,os=Mac,source=DC0_H0,vcenter=localhost:8989 read_average=39i,write_average=2635i 1535660339000000000
+vsphere_host_disk,disk=/var/folders/rf/txwdm4pj409f70wnkdlp7sz80000gq/T/govcsim-DC0-LocalDS_0-367088371@folder-5,esxhostname=DC0_H0,host=host.example.com,moid=host-19,os=Mac,source=DC0_H0,vcenter=localhost:8989 write_average=2635i,read_average=30i 1535660339000000000
+vsphere_host_mem,esxhostname=DC0_H0,host=host.example.com,moid=host-19,os=Mac,source=DC0_H0,vcenter=localhost:8989 usage_average=98.5 1535660339000000000
+vsphere_host_net,esxhostname=DC0_H0,host=host.example.com,moid=host-19,os=Mac,source=DC0_H0,vcenter=localhost:8989 usage_average=1887i,bytesRx_average=662i,bytesTx_average=251i 1535660339000000000
+vsphere_host_net,esxhostname=DC0_H0,host=host.example.com,interface=vmnic0,moid=host-19,os=Mac,source=DC0_H0,vcenter=localhost:8989 usage_average=1481i,bytesTx_average=899i,bytesRx_average=992i 1535660339000000000
+vsphere_host_cpu,esxhostname=DC0_H0,host=host.example.com,moid=host-19,os=Mac,source=DC0_H0,vcenter=localhost:8989 used_summation=50405i,costop_summation=2i,utilization_average=17.32,latency_average=0.61,ready_summation=14843i,usage_average=27.94,coreUtilization_average=32.12,wait_summation=3058787i,idle_summation=56600i,readiness_average=0.36 1535660339000000000
+vsphere_host_cpu,cpu=0,esxhostname=DC0_H0,host=host.example.com,moid=host-19,os=Mac,source=DC0_H0,vcenter=localhost:8989 coreUtilization_average=37.61,utilization_average=17.05,used_summation=38013i,usage_average=32.66,idle_summation=89575i 1535660339000000000
+vsphere_host_cpu,cpu=1,esxhostname=DC0_H0,host=host.example.com,moid=host-19,os=Mac,source=DC0_H0,vcenter=localhost:8989 coreUtilization_average=25.92,utilization_average=18.72,used_summation=39790i,usage_average=40.42,idle_summation=69457i 1535660339000000000
+vsphere_host_net,clustername=DC0_C0,esxhostname=DC0_C0_H0,host=host.example.com,interface=vmnic0,moid=host-30,os=Mac,source=DC0_C0_H0,vcenter=localhost:8989 usage_average=1246i,bytesTx_average=673i,bytesRx_average=781i 1535660339000000000
+vsphere_host_cpu,clustername=DC0_C0,esxhostname=DC0_C0_H0,host=host.example.com,moid=host-30,os=Mac,source=DC0_C0_H0,vcenter=localhost:8989 coreUtilization_average=33.8,idle_summation=77121i,ready_summation=15857i,readiness_average=0.39,used_summation=29554i,costop_summation=2i,wait_summation=4338417i,utilization_average=17.87,latency_average=0.44,usage_average=28.78 1535660339000000000
+vsphere_host_cpu,clustername=DC0_C0,cpu=0,esxhostname=DC0_C0_H0,host=host.example.com,moid=host-30,os=Mac,source=DC0_C0_H0,vcenter=localhost:8989 idle_summation=86610i,coreUtilization_average=34.36,utilization_average=19.03,used_summation=28766i,usage_average=23.72 1535660339000000000
+vsphere_host_cpu,clustername=DC0_C0,cpu=1,esxhostname=DC0_C0_H0,host=host.example.com,moid=host-30,os=Mac,source=DC0_C0_H0,vcenter=localhost:8989 coreUtilization_average=33.15,utilization_average=16.8,used_summation=44282i,usage_average=30.08,idle_summation=93490i 1535660339000000000
+vsphere_host_disk,clustername=DC0_C0,esxhostname=DC0_C0_H0,host=host.example.com,moid=host-30,os=Mac,source=DC0_C0_H0,vcenter=localhost:8989 read_average=56i,write_average=1672i 1535660339000000000
+vsphere_host_disk,clustername=DC0_C0,disk=/var/folders/rf/txwdm4pj409f70wnkdlp7sz80000gq/T/govcsim-DC0-LocalDS_0-367088371@folder-5,esxhostname=DC0_C0_H0,host=host.example.com,moid=host-30,os=Mac,source=DC0_C0_H0,vcenter=localhost:8989 write_average=2110i,read_average=48i 1535660339000000000
+vsphere_host_mem,clustername=DC0_C0,esxhostname=DC0_C0_H0,host=host.example.com,moid=host-30,os=Mac,source=DC0_C0_H0,vcenter=localhost:8989 usage_average=116.21 1535660339000000000
+vsphere_host_net,clustername=DC0_C0,esxhostname=DC0_C0_H0,host=host.example.com,moid=host-30,os=Mac,source=DC0_C0_H0,vcenter=localhost:8989 bytesRx_average=726i,bytesTx_average=643i,usage_average=1504i 1535660339000000000
+```
+
+## vSAN Sample Output
+
+```text
+vsphere_vsan_performance_hostdomclient,clustername=Example-VSAN,dcname=Example-DC,host=host.example.com,hostname=DC0_C0_H0,moid=domain-c8,source=Example-VSAN,vcenter=localhost:8898 iops_read=7,write_congestion=0,unmap_congestion=0,read_count=2199,iops=8,latency_max_write=8964,latency_avg_unmap=0,latency_avg_write=1883,write_count=364,num_oio=12623,throughput=564127,client_cache_hits=0,latency_max_read=17821,latency_max_unmap=0,read_congestion=0,latency_avg=1154,congestion=0,throughput_read=554721,latency_avg_read=1033,throughput_write=9406,client_cache_hit_rate=0,iops_unmap=0,throughput_unmap=0,latency_stddev=1315,io_count=2563,oio=4,iops_write=1,unmap_count=0 1578955200000000000
+vsphere_vsan_performance_clusterdomcompmgr,clustername=Example-VSAN,dcname=Example-DC,host=host.example.com,moid=domain-c7,source=Example-VSAN,uuid=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXX,vcenter=localhost:8898 latency_avg_rec_write=0,latency_avg_write=9886,congestion=0,iops_resync_read=0,lat_avg_resync_read=0,iops_read=289,latency_avg_read=1184,throughput_write=50137368,iops_rec_write=0,throughput_rec_write=0,tput_resync_read=0,throughput_read=9043654,iops_write=1272,oio=97 1578954900000000000
+vsphere_vsan_performance_clusterdomclient,clustername=Example-VSAN,dcname=Example-DC,host=host.example.com,moid=domain-c7,source=Example-VSAN,uuid=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXX,vcenter=localhost:8898 latency_avg_write=1011,congestion=0,oio=26,iops_read=6,throughput_read=489093,latency_avg_read=1085,iops_write=43,throughput_write=435142 1578955200000000000
+vsphere_vsan_summary,clustername=Example-VSAN,dcname=Example-DC,host=host.example.com,moid=domain-c7,source=Example-VSAN,vcenter=localhost:8898 total_bytes_to_sync=0i,total_objects_to_sync=0i,total_recovery_eta=0i 1578955489000000000
+vsphere_vsan_summary,clustername=Example-VSAN,dcname=Example-DC,host=host.example.com,moid=domain-c7,source=Example-VSAN,vcenter=localhost:8898 overall_health=1i 1578955489000000000
+vsphere_vsan_summary,clustername=Example-VSAN,dcname=Example-DC,host=host.example.com,moid=domain-c7,source=Example-VSAN,vcenter=localhost:8898 free_capacity_byte=11022535578757i,total_capacity_byte=14102625779712i 1578955488000000000
+```
diff --git a/content/telegraf/v1/input-plugins/webhooks/_index.md b/content/telegraf/v1/input-plugins/webhooks/_index.md
new file mode 100644
index 000000000..144282caf
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/webhooks/_index.md
@@ -0,0 +1,130 @@
+---
+description: "Telegraf plugin for collecting metrics from Webhooks"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Webhooks
+    identifier: input-webhooks
+tags: [Webhooks, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Webhooks Input Plugin
+
+This is a Telegraf service plugin that starts an HTTP server and registers
+multiple webhook listeners. Generate a configuration that enables the plugin:
+
+```sh
+telegraf config -input-filter webhooks -output-filter influxdb > config.conf.new
+```
+
+Change the config file to point to the InfluxDB server you are using and adjust
+the settings to match your environment. Once that is complete:
+
+```sh
+cp config.conf.new /etc/telegraf/telegraf.conf
+sudo service telegraf start
+```
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# A Webhooks Event collector
+[[inputs.webhooks]]
+  ## Address and port to host Webhook listener on
+  service_address = ":1619"
+
+  ## Maximum duration before timing out read of the request
+  # read_timeout = "10s"
+  ## Maximum duration before timing out write of the response
+  # write_timeout = "10s"
+
+  [inputs.webhooks.filestack]
+    path = "/filestack"
+
+    ## HTTP basic auth
+    #username = ""
+    #password = ""
+
+  [inputs.webhooks.github]
+    path = "/github"
+    # secret = ""
+
+    ## HTTP basic auth
+    #username = ""
+    #password = ""
+
+  [inputs.webhooks.mandrill]
+    path = "/mandrill"
+
+    ## HTTP basic auth
+    #username = ""
+    #password = ""
+
+  [inputs.webhooks.rollbar]
+    path = "/rollbar"
+
+    ## HTTP basic auth
+    #username = ""
+    #password = ""
+
+  [inputs.webhooks.papertrail]
+    path = "/papertrail"
+
+    ## HTTP basic auth
+    #username = ""
+    #password = ""
+
+  [inputs.webhooks.particle]
+    path = "/particle"
+
+    ## HTTP basic auth
+    #username = ""
+    #password = ""
+
+  [inputs.webhooks.artifactory]
+    path = "/artifactory"
+```
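+
+For example, a minimal configuration that enables only the GitHub listener
+(the secret value is illustrative) might look like:
+
+```toml
+[[inputs.webhooks]]
+  service_address = ":1619"
+
+  [inputs.webhooks.github]
+    path = "/github"
+    secret = "mysecret"
+```
+
+With this configuration, GitHub webhook deliveries sent to
+`http://<host>:1619/github` are parsed into Telegraf metrics.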
+
+## Available webhooks
+
+- Filestack
+- Github
+- Mandrill
+- Rollbar
+- Papertrail
+- Particle
+- Artifactory
+
+## Adding a new webhook plugin
+
+1. Add your webhook plugin inside the `webhooks` folder
+1. Your plugin must implement the `Webhook` interface
+1. Import your plugin in the `webhooks.go` file and add it to the `Webhooks` struct
+
+Both Github and Rollbar are good examples to follow.
+
+## Metrics
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/win_eventlog/_index.md b/content/telegraf/v1/input-plugins/win_eventlog/_index.md
new file mode 100644
index 000000000..3ed17bd59
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/win_eventlog/_index.md
@@ -0,0 +1,307 @@
+---
+description: "Telegraf plugin for collecting metrics from Windows Eventlog"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Windows Eventlog
+    identifier: input-win_eventlog
+tags: [Windows Eventlog, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Windows Eventlog Input Plugin
+
+Telegraf's win_eventlog input plugin gathers metrics from the Windows Event Log.
+
+## Collect Windows Event Log messages
+
+Supports Windows Vista and higher.
+
+Telegraf requires Administrator permissions to subscribe to some Windows
+event channels, such as the System log.
+
+Telegraf minimum version: Telegraf 1.16.0
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+
+## Configuration
+
+```toml @sample.conf
+# Input plugin to collect Windows Event Log messages
+# This plugin ONLY supports Windows
+[[inputs.win_eventlog]]
+  ## Telegraf should have Administrator permissions to subscribe for some
+  ## Windows Events channels (e.g. System log)
+
+  ## LCID (Locale ID) for event rendering
+  ## 1033 to force English language
+  ## 0 to use default Windows locale
+  # locale = 0
+
+  ## Name of eventlog, used only if xpath_query is empty
+  ## Example: "Application"
+  # eventlog_name = ""
+
+  ## xpath_query can be in defined short form like "Event/System[EventID=999]"
+  ## or you can form a XML Query. Refer to the Consuming Events article:
+  ## https://docs.microsoft.com/en-us/windows/win32/wes/consuming-events
+  ## XML query is the recommended form, because it is most flexible
+  ## You can create or debug XML Query by creating Custom View in Windows Event Viewer
+  ## and then copying resulting XML here
+  xpath_query = '''
+  <QueryList>
+    <Query Id="0" Path="Security">
+      <Select Path="Security">*</Select>
+      <Suppress Path="Security">*[System[( (EventID &gt;= 5152 and EventID &lt;= 5158) or EventID=5379 or EventID=4672)]]</Suppress>
+    </Query>
+    <Query Id="1" Path="Application">
+      <Select Path="Application">*[System[(Level &lt; 4)]]</Select>
+    </Query>
+    <Query Id="2" Path="Windows PowerShell">
+      <Select Path="Windows PowerShell">*[System[(Level &lt; 4)]]</Select>
+    </Query>
+    <Query Id="3" Path="System">
+      <Select Path="System">*</Select>
+    </Query>
+    <Query Id="4" Path="Setup">
+      <Select Path="Setup">*</Select>
+    </Query>
+  </QueryList>
+  '''
+
+  ## When true, event logs are read from the beginning; otherwise only future
+  ## events will be logged.
+  # from_beginning = false
+
+  ## Number of events to fetch in one batch
+  # event_batch_size = 5
+
+  # Process UserData XML to fields, if this node exists in Event XML
+  # process_userdata = true
+
+  # Process EventData XML to fields, if this node exists in Event XML
+  # process_eventdata = true
+
+  ## Separator character to use for unrolled XML Data field names
+  # separator = "_"
+
+  ## Get only first line of Message field. For most events first line is
+  ## usually more than enough
+  # only_first_line_of_message = true
+
+  ## Parse timestamp from TimeCreated.SystemTime event field.
+  ## Will default to current time of telegraf processing on parsing error or if
+  ## set to false
+  # timestamp_from_event = true
+
+  ## System field names:
+  ##   "Source", "EventID", "Version", "Level", "Task", "Opcode", "Keywords",
+  ##   "TimeCreated", "EventRecordID", "ActivityID", "RelatedActivityID",
+  ##   "ProcessID", "ThreadID", "ProcessName", "Channel", "Computer", "UserID",
+  ##   "UserName", "Message", "LevelText", "TaskText", "OpcodeText"
+  ##
+  ## In addition to System, Data fields can be unrolled from additional XML
+  ## nodes in event. Human-readable representation of those nodes is formatted
+  ## into event Message field, but XML is more machine-parsable
+
+  ## Event fields to include as tags
+  ## The values below are included by default.
+  ## Globbing supported (e.g. "Level*" matches both "Level" and "LevelText")
+  # event_tags = ["Source", "EventID", "Level", "LevelText", "Task", "TaskText", "Opcode", "OpcodeText", "Keywords", "Channel", "Computer"]
+
+  ## Event fields to include
+  ## All fields are sent by default.
+  ## Globbing supported (e.g. "Level*" matches both "Level" and "LevelText")
+  # event_fields = ["*"]
+
+  ## Event fields to exclude
+  ## Note that if you exclude all fields then no metrics are produced. A valid
+  ## metric includes at least one field.
+  ## Globbing supported (e.g. "Level*" matches both "Level" and "LevelText")
+  # exclude_fields = []
+
+  ## Event fields to exclude if their value is empty or equals to zero
+  ## The values below are included by default.
+  ## Globbing supported (e.g. "Level*" matches both "Level" and "LevelText")
+  # exclude_empty = ["Task", "Opcode", "*ActivityID", "UserID"]
+```
+
+### Filtering
+
+There are three types of filtering: **Event Log** name, **XPath Query** and
+**XML Query**.
+
+**Event Log** name filtering is simple:
+
+```toml
+  eventlog_name = "Application"
+  xpath_query = ''
+```
+
+For **XPath Query** filtering set the `xpath_query` value, and `eventlog_name`
+will be ignored:
+
+```toml
+  eventlog_name = ""
+  xpath_query = "Event/System[EventID=999]"
+```
+
+**XML Query** is the most flexible form: you can select or suppress any values
+and give ranges for other values. You can create or debug an XML query by
+creating a Custom View in Windows Event Viewer and then copying the resulting
+XML into the config file.
+
+XML Query documentation:
+
+<https://docs.microsoft.com/en-us/windows/win32/wes/consuming-events>
+
+## Troubleshooting
+
+If you see a `Collection took longer than expected` warning, a burst of events
+may have been logged and the API could not deliver them fast enough to complete
+processing within the specified interval. Tweaking the `event_batch_size`
+setting might help mitigate the issue. This warning does not indicate data
+loss, but you should investigate the volume of events you log.
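+
+For example, raising the batch size (the value here is illustrative):
+
+```toml
+[[inputs.win_eventlog]]
+  event_batch_size = 50
+```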
+
+## Metrics
+
+You can send any field, whether *System*, *Computed*, or *XML*, as a tag. The
+list of these fields is set in the `event_tags` config array. Globbing is
+supported in this array, e.g. `Level*` for all fields beginning with `Level`,
+or `L?vel` for fields named `Level`, `L3vel`, `L@vel`, and so on. Tag fields
+are converted to strings automatically.
+
+By default, all other fields are sent, but you can limit them either by listing
+them in the `event_fields` config array (globbing supported), or by adding
+field name masks to the `exclude_fields` config array.
+
+You can suppress fields with empty values by adding masks of their names to the
+`exclude_empty` config array. A value is considered empty if a System field of
+type `int` or `uint32` equals zero, or if any field of type `string` is an
+empty string.
+
+List of System fields:
+
+- Source (string)
+- EventID (int)
+- Version (int)
+- Level (int)
+- LevelText (string)
+- Opcode (int)
+- OpcodeText (string)
+- Task (int)
+- TaskText (string)
+- Keywords (string): comma-separated in case of multiple values
+- TimeCreated (string)
+- EventRecordID (string)
+- ActivityID (string)
+- RelatedActivityID (string)
+- ProcessID (int)
+- ThreadID (int)
+- ProcessName (string): derived from ProcessID
+- Channel (string)
+- Computer (string): useful if consumed from Forwarded Events
+- UserID (string): SID
+- UserName (string): derived from UserID, presented in form of DOMAIN\Username
+- Message (string)
+
+### Computed fields
+
+Fields `Level`, `Opcode` and `Task` are converted to text and saved as computed
+`*Text` fields.
+
+The `Keywords` field is converted from its hex uint64 value by the
+`_EvtFormatMessage` WINAPI function. There can be more than one value, in which
+case they are comma-separated. If the keywords can't be converted (bad device
+driver, or events forwarded from another computer with an unknown event
+channel), the hex uint64 is saved as is.
+
+The `ProcessName` field is found by looking up the ProcessID. It can be empty
+if Telegraf doesn't have enough permissions.
+
+The `UserName` field is found by looking up the SID from the UserID.
+
+The `Message` field is rendered from the event data and can be several
+kilobytes of text with line breaks. For most events the first line of this text
+is more than enough, and additional info is more useful when parsed as XML
+fields. So, for brevity, the plugin takes only the first line. Set the
+`only_first_line_of_message` parameter to `false` to take the full message
+text.
+
+The `TimeCreated` field is a string in RFC3339Nano format. By default, Telegraf
+parses it as the event timestamp. If there is a parse error, or the
+`timestamp_from_event` configuration parameter is set to `false`, the event
+timestamp is set to the time when Telegraf processed the event, rounded to the
+nearest minute.
+
+### Additional Fields
+
+The content of **Event Data** and **User Data** XML Nodes can be added as
+additional fields, and is added by default. You can disable that by setting
+`process_userdata` or `process_eventdata` parameters to `false`.
+
+For the fields from additional XML Nodes the `Name` attribute is taken as the
+name, and inner text is the value. Type of those fields is always string.
+
+The name of the field is formed from the XML path by adding `_` between
+levels. For example, if the UserData XML looks like this:
+
+```xml
+<UserData>
+ <CbsPackageChangeState xmlns="http://manifests.microsoft.com/win/2004/08/windows/setup_provider">
+  <PackageIdentifier>KB4566782</PackageIdentifier>
+  <IntendedPackageState>5112</IntendedPackageState>
+  <IntendedPackageStateTextized>Installed</IntendedPackageStateTextized>
+  <ErrorCode>0x0</ErrorCode>
+  <Client>UpdateAgentLCU</Client>
+ </CbsPackageChangeState>
+</UserData>
+```
+
+It is converted to the following fields:
+
+```text
+CbsPackageChangeState_PackageIdentifier = "KB4566782"
+CbsPackageChangeState_IntendedPackageState = "5112"
+CbsPackageChangeState_IntendedPackageStateTextized = "Installed"
+CbsPackageChangeState_ErrorCode = "0x0"
+CbsPackageChangeState_Client = "UpdateAgentLCU"
+```
+
+If more than one field has the same name, each subsequent field is given a
+numeric suffix: `_1`, `_2`, and so on.
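+
+The unrolling described above can be sketched in Python (a simplified
+illustration of the naming scheme, not the plugin's actual Go code; the
+namespace stripping and suffix logic are assumptions based on this
+description):
+
+```python
+import xml.etree.ElementTree as ET
+
+def flatten_userdata(xml_text, separator="_"):
+    """Flatten child nodes of a UserData-like element into
+    separator-joined field names."""
+    root = ET.fromstring(xml_text)
+    fields = {}
+    for container in root:                     # e.g. CbsPackageChangeState
+        prefix = container.tag.split("}")[-1]  # strip the XML namespace
+        for child in container:
+            name = prefix + separator + child.tag.split("}")[-1]
+            if name in fields:                 # duplicates get _1, _2, ...
+                i = 1
+                while name + "_" + str(i) in fields:
+                    i += 1
+                name = name + "_" + str(i)
+            fields[name] = (child.text or "").strip()
+    return fields
+
+sample = """<UserData>
+ <CbsPackageChangeState xmlns="http://manifests.microsoft.com/win/2004/08/windows/setup_provider">
+  <PackageIdentifier>KB4566782</PackageIdentifier>
+  <ErrorCode>0x0</ErrorCode>
+ </CbsPackageChangeState>
+</UserData>"""
+
+print(flatten_userdata(sample))
+```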
+
+## Localization
+
+The human-readable event description is in the `Message` field. However, it is
+better to skip it in favor of the Event XML values, because they are more
+machine-readable.
+
+Keywords, LevelText, TaskText, OpcodeText and Message are saved with the current
+Windows locale by default. You can override this, for example, to English locale
+by setting `locale` config parameter to `1033`. Unfortunately, **Event Data**
+and **User Data** XML Nodes are in default Windows locale only.
+
+The locale must be present on the computer. The English locale is usually
+available on all localized versions of modern Windows. A list of all locales is
+available from Microsoft's [Open Specifications](https://docs.microsoft.com/en-us/openspecs/office_standards/ms-oe376/6c085406-a698-4e12-9d4d-c3b0ee3dbc4a).
+
+
+## Example Output
+
+Some values are changed for anonymity.
+
+```text
+win_eventlog,Channel=System,Computer=PC,EventID=105,Keywords=0x8000000000000000,Level=4,LevelText=Information,Opcode=10,OpcodeText=General,Source=WudfUsbccidDriver,Task=1,TaskText=Driver,host=PC ProcessName="WUDFHost.exe",UserName="NT AUTHORITY\\LOCAL SERVICE",Data_dwMaxCCIDMessageLength="271",Data_bPINSupport="0x0",Data_bMaxCCIDBusySlots="1",EventRecordID=1914688i,UserID="S-1-5-19",Version=0i,Data_bClassGetEnvelope="0x0",Data_wLcdLayout="0x0",Data_bClassGetResponse="0x0",TimeCreated="2020-08-21T08:43:26.7481077Z",Message="The Smartcard reader reported the following class descriptor (part 2)." 1597999410000000000
+win_eventlog,Channel=Security,Computer=PC,EventID=4798,Keywords=Audit\ Success,Level=0,LevelText=Information,Opcode=0,OpcodeText=Info,Source=Microsoft-Windows-Security-Auditing,Task=13824,TaskText=User\ Account\ Management,host=PC Data_TargetDomainName="PC",Data_SubjectUserName="User",Data_CallerProcessId="0x3d5c",Data_SubjectLogonId="0x46d14f8d",Version=0i,EventRecordID=223157i,Message="A user's local group membership was enumerated.",Data_TargetUserName="User",Data_TargetSid="S-1-5-21-.-.-.-1001",Data_SubjectUserSid="S-1-5-21-.-.-.-1001",Data_CallerProcessName="C:\\Windows\\explorer.exe",ActivityID="{0d4cc11d-7099-0002-4dc1-4c0d9970d601}",UserID="",Data_SubjectDomainName="PC",TimeCreated="2020-08-21T08:43:27.3036771Z",ProcessName="lsass.exe" 1597999410000000000
+win_eventlog,Channel=Microsoft-Windows-Dhcp-Client/Admin,Computer=PC,EventID=1002,Keywords=0x4000000000000001,Level=2,LevelText=Error,Opcode=76,OpcodeText=IpLeaseDenied,Source=Microsoft-Windows-Dhcp-Client,Task=3,TaskText=Address\ Configuration\ State\ Event,host=PC Version=0i,Message="The IP address lease 10.20.30.40 for the Network Card with network address 0xaabbccddeeff has been denied by the DHCP server 10.20.30.1 (The DHCP Server sent a DHCPNACK message).",UserID="S-1-5-19",Data_HWLength="6",Data_HWAddress="545595B7EA01",TimeCreated="2020-08-21T08:43:42.8265853Z",EventRecordID=34i,ProcessName="svchost.exe",UserName="NT AUTHORITY\\LOCAL SERVICE" 1597999430000000000
+win_eventlog,Channel=System,Computer=PC,EventID=10016,Keywords=Classic,Level=3,LevelText=Warning,Opcode=0,OpcodeText=Info,Source=Microsoft-Windows-DistributedCOM,Task=0,host=PC Data_param3="Активация",Data_param6="PC",Data_param8="S-1-5-21-2007059868-50816014-3139024325-1001",Version=0i,UserName="PC\\User",Data_param1="по умолчанию для компьютера",Data_param2="Локально",Data_param7="User",Data_param9="LocalHost (с использованием LRPC)",Data_param10="Microsoft.Windows.ShellExperienceHost_10.0.19041.423_neutral_neutral_cw5n1h2txyewy",ActivityID="{839cac9e-73a1-4559-a847-62f3a5e73e44}",ProcessName="svchost.exe",Message="The по умолчанию для компьютера permission settings do not grant Локально Активация permission for the COM Server application with CLSID ",Data_param5="{316CDED5-E4AE-4B15-9113-7055D84DCC97}",Data_param11="S-1-15-2-.-.-.-.-.-.-2861478708",TimeCreated="2020-08-21T08:43:45.5233759Z",EventRecordID=1914689i,UserID="S-1-5-21-.-.-.-1001",Data_param4="{C2F03A33-21F5-47FA-B4BB-156362A2F239}" 1597999430000000000
+```
diff --git a/content/telegraf/v1/input-plugins/win_perf_counters/_index.md b/content/telegraf/v1/input-plugins/win_perf_counters/_index.md
new file mode 100644
index 000000000..119562641
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/win_perf_counters/_index.md
@@ -0,0 +1,736 @@
+---
+description: "Telegraf plugin for collecting metrics from Windows Performance Counters"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Windows Performance Counters
+    identifier: input-win_perf_counters
+tags: [Windows Performance Counters, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Windows Performance Counters Input Plugin
+
+This input plugin reads Performance Counters on Windows operating
+systems.
+
+The configuration is parsed and tested for validity at Telegraf startup, for
+example whether the Object, Instance, and Counter exist.
+
+Counter paths are refreshed periodically; see the
+`CountersRefreshInterval` configuration parameter for
+more info.
+
+When querying all instances with `["*"]`, the plugin does not return the
+`_Total` instance by default. See `IncludeTotal` for more info.
+
+## Basics
+
+The examples contained in this file have been found on the internet as counters
+used when performance monitoring Active Directory and IIS in particular. There
+are a lot of other good objects to monitor, if you know what to look for. This
+file is likely to be updated in the future with more examples of useful
+configurations for separate scenarios.
+
+For more information on concepts and terminology including object,
+counter, and instance names, see the help in the Windows Performance
+Monitor app.
+
+### Schema
+
+*Measurement name* is specified per performance object
+or `win_perf_counters` by default.
+
+*Tags:*
+
+- source - computer name, as specified in the `Sources` parameter. Name `localhost` is translated into the host name
+- objectname - normalized name of the performance object
+- instance - instance name, if performance object supports multiple instances, otherwise omitted
+
+*Fields* are counters of the performance object.
+The field name is the normalized counter name.
+
+### Plugin wide
+
+Plugin wide entries are underneath `[[inputs.win_perf_counters]]`.
+
+#### PrintValid
+
+Bool; if set to `true`, all matching performance objects are printed out.
+
+Example:
+`PrintValid=true`
+
+#### UseWildcardsExpansion
+
+If `UseWildcardsExpansion` is true, wildcards can be used in the
+instance name and the counter name. Instance indexes will also be
+returned in the instance name.
+
+Partial wildcards (e.g. `chrome*`) are supported only in the instance
+name on Windows Vista and newer.
+
+If disabled, wildcards (not partial) in instance names can still be
+used, but instance indexes will not be returned in the instance names.
+
+Example:
+`UseWildcardsExpansion=true`
+
+#### LocalizeWildcardsExpansion
+
+`LocalizeWildcardsExpansion` selects whether object and counter names
+are localized when `UseWildcardsExpansion` is true and Telegraf is
+running on a localized installation of Windows.
+
+When `LocalizeWildcardsExpansion` is true, Telegraf produces metrics
+with localized tags and fields even when object and counter names are
+in English.
+
+When `LocalizeWildcardsExpansion` is false, Telegraf expects object
+and counter names to be in English and produces metrics with English
+tags and fields.
+
+When `LocalizeWildcardsExpansion` is false, wildcards can only be used
+in instances. Object and counter names must not have wildcards.
+
+Example:
+`LocalizeWildcardsExpansion=true`
+
+#### CountersRefreshInterval
+
+Configured counters are matched against available counters at the interval
+specified by the `CountersRefreshInterval` parameter. The default value is `1m`
+(1 minute).
+
+If wildcards are used in instance or counter names, they are expanded at this
+point, if the `UseWildcardsExpansion` param is set to `true`.
+
+Setting the `CountersRefreshInterval` too low (order of seconds) can cause
+Telegraf to create a high CPU load.
+
+Set it to `0s` to disable periodic refreshing.
+
+Example:
+`CountersRefreshInterval=1m`
+
+#### PreVistaSupport
+
+(Deprecated in 1.7; Necessary features on Windows Vista and newer are checked
+dynamically)
+
+Bool, if set to `true`, the plugin will use the localized PerfCounter interface
+that has been present since before Vista for backwards compatibility.
+
+It is recommended NOT to use this on Vista and newer, because the legacy
+interface requires more configuration than the newer interface available since
+Vista.
+
+Example (for Windows Server 2003 this would be set to true):
+`PreVistaSupport=true`
+
+#### UsePerfCounterTime
+
+Bool; if set to `true`, a timestamp is requested along with the PerfCounter
+data. If set to `false`, the current time is used.
+
+Supported on Windows Vista/Windows Server 2008 and newer
+Example:
+`UsePerfCounterTime=true`
+
+#### IgnoredErrors
+
+`IgnoredErrors` accepts a list of PDH error codes, defined in `pdh.go`; if one
+of these errors is encountered, it is ignored. For example, you can provide
+`"PDH_NO_DATA"` to ignore performance counters with no instances. By default,
+no errors are ignored. See `pdh.go` for the full list of possible errors.
+
+Example:
+`IgnoredErrors=["PDH_NO_DATA"]`
+
+#### Sources
+
+(Optional)
+
+Hostnames or IP addresses of computers to gather all performance counters from.
+The user running Telegraf must be authenticated to the remote computer(s),
+e.g. via Windows sharing: `net use \\SQL-SERVER-01`.
+To gather counters from localhost as well as other computers, include either
+`"localhost"` or the real local computer name in the list. Omit this setting
+if you only gather from localhost.
+
+If a performance counter is present only on specific hosts, set the `Sources`
+param at the specific counter level configuration to override the global
+(plugin-wide) sources.
+
+Example:
+`Sources = ["localhost", "SQL-SERVER-01", "SQL-SERVER-02", "SQL-SERVER-03"]`
+
+Default:
+`Sources = ["localhost"]`
+
+### Object
+
+See Entry below.
+
+### Entry
+
+A new configuration entry consists of the TOML header
+`[[inputs.win_perf_counters.object]]`. It must appear beneath the main
+win_perf_counters entry, `[[inputs.win_perf_counters]]`, and before any other
+plugin configuration.
+
+Following this are three required key/value pairs and several optional
+parameters; their usage is described below.
+
+#### ObjectName
+
+(Required)
+
+ObjectName is the Object to query for, like Processor, DirectoryServices,
+LogicalDisk or similar.
+
+Example: `ObjectName = "LogicalDisk"`
+
+#### Instances
+
+(Required)
+
+The Instances key (an array) declares the instances of a counter you would
+like returned; it can be one or more values.
+
+Example: `Instances = ["C:","D:","E:"]`
+
+This returns data only for the instances C:, D:, and E: where relevant. To get
+all instances of a Counter, use `["*"]` only. By default any results containing
+`_Total` are stripped, unless `_Total` is specified as a wanted instance.
+Alternatively, see the `IncludeTotal` option below.
+
+It is also possible to set partial wildcards, e.g. `["chrome*"]`, if the
+`UseWildcardsExpansion` param is set to `true`.
+
+Some Objects do not have instances to select from at all.
+Here only one option is valid if you want data back,
+and that is to specify `Instances = ["------"]`.
+
+#### Counters
+
+(Required)
+
+The Counters key (an array) declares the counters of the ObjectName you would
+like returned; it can also be one or more values.
+
+Example: `Counters = ["% Idle Time", "% Disk Read Time", "% Disk Write Time"]`
+
+This must be specified for every counter you want the results of, or use
+`["*"]` for all the counters of the object, if the `UseWildcardsExpansion` param
+is set to `true`.
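+
+Putting the three required keys together, a complete object entry might look
+like this (the counter and instance names are examples from this document):
+
+```toml
+[[inputs.win_perf_counters.object]]
+  ObjectName = "LogicalDisk"
+  Instances = ["C:", "D:"]
+  Counters = ["% Idle Time", "% Disk Read Time", "% Disk Write Time"]
+  Measurement = "win_disk"
+```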
+
+#### Sources (Object)
+
+(Optional)
+
+Overrides the `Sources` global parameter for the current performance
+object. See the `Sources` description above for more details.
+
+#### Measurement
+
+(Optional)
+
+This key is optional. If it is not set, it defaults to `win_perf_counters`. In
+InfluxDB this is the key underneath which the returned data is stored. If you
+want your IIS and disk results stored separately from processor results, this
+is a good key to set.
+
+Example: `Measurement = "win_disk"`
+
+#### UseRawValues
+
+(Optional)
+
+This key is optional. It is a simple bool. If set to `true`, counter values
+will be provided in the raw, integer form. This is in contrast with the default
+behavior, where values are returned in a formatted, displayable form, as seen
+in the Windows Performance Monitor.
+
+A field representing a raw counter value has the `_Raw` suffix. Raw values
+should be further used in a calculation, e.g.
+`100-(non_negative_derivative("Percent_Processor_Time_Raw",1s)/100000)`.
+Note: time-based counters (e.g. *% Processor Time*) are reported in 100 ns
+units (hundredths of microseconds).
+
+Example: `UseRawValues = true`
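+
+As a sketch of the calculation above, the busy CPU percentage can be derived
+from two raw samples of an idle-time counter (assuming the raw counter ticks
+in 100 ns units, consistent with the `/100000` divisor in the formula; the
+function name is illustrative):
+
+```python
+def processor_time_percent(raw_prev, raw_curr, elapsed_seconds):
+    """Busy % derived from two raw samples of an idle-time counter."""
+    idle_ticks = raw_curr - raw_prev
+    elapsed_ticks = elapsed_seconds * 10_000_000  # 100 ns ticks per second
+    return 100.0 - (idle_ticks / elapsed_ticks) * 100.0
+
+# 7,500,000 ticks of 100 ns over 1 s = 0.75 s idle -> 25% busy
+print(processor_time_percent(0, 7_500_000, 1))
+```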
+
+#### IncludeTotal
+
+(Optional)
+
+This key is optional. It is a simple bool. If it is not set or not `true`, it
+is treated as `false`. This key only has effect if the Instances key is set to
+`["*"]` and you would also like all instances containing `_Total` to be
+returned, like `_Total`, `0,_Total`, and so on where applicable
+(Processor Information is one example).
+
+#### WarnOnMissing
+
+(Optional)
+
+This key is optional. It is a simple bool. If it is not set or not `true`, it
+is treated as `false`. It only has effect on the first execution of the
+plugin: it prints out any requested ObjectName/Instance/Counter combinations
+that do not match. Useful when debugging new configurations.
+
+#### FailOnMissing
+
+(Internal)
+
+This key should not be used; it is for testing purposes only. It is a simple
+bool. If it is not included or not set to `true`, it is treated as `false`.
+If set to `true`, the plugin aborts and ends prematurely if any of the
+ObjectName/Instances/Counters combinations are invalid.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering,
+among other things. See
+[CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Input plugin to counterPath Performance Counters on Windows operating systems
+# This plugin ONLY supports Windows
+[[inputs.win_perf_counters]]
+  ## By default this plugin returns basic CPU and Disk statistics. See the
+  ## README file for more examples. Uncomment examples below or write your own
+  ## as you see fit. If the system being polled for data does not have the
+  ## Object at startup of the Telegraf agent, it will not be gathered.
+
+  ## Print All matching performance counters
+  # PrintValid = false
+
+  ## Whether request a timestamp along with the PerfCounter data or use current
+  ## time
+  # UsePerfCounterTime = true
+
+  ## If UseWildcardsExpansion params is set to true, wildcards (partial
+  ## wildcards in instance names and wildcards in counters names) in configured
+  ## counter paths will be expanded and in case of localized Windows, counter
+  ## paths will be also localized. It also returns instance indexes in instance
+  ## names. If false, wildcards (not partial) in instance names will still be
+  ## expanded, but instance indexes will not be returned in instance names.
+  # UseWildcardsExpansion = false
+
+  ## When running on a localized version of Windows and with
+  ## UseWildcardsExpansion = true, Windows will localize object and counter
+  ## names. When LocalizeWildcardsExpansion = false, use the names in
+  ## object.Counters instead of the localized names. Only Instances can have
+  ## wildcards in this case. ObjectName and Counters must not have wildcards
+  ## when this setting is false.
+  # LocalizeWildcardsExpansion = true
+
+  ## Period after which counters will be reread from configuration and
+  ## wildcards in counter paths expanded
+  # CountersRefreshInterval="1m"
+
+  ## Accepts a list of PDH error codes which are defined in pdh.go, if this
+  ## error is encountered it will be ignored. For example, you can provide
+  ## "PDH_NO_DATA" to ignore performance counters with no instances. By default
+  ## no errors are ignored You can find the list here:
+  ##   https://github.com/influxdata/telegraf/blob/master/plugins/inputs/win_perf_counters/pdh.go
+  ## e.g. IgnoredErrors = ["PDH_NO_DATA"]
+  # IgnoredErrors = []
+
+  ## Maximum size of the buffer for values returned by the API
+  ## Increase this value if you experience "buffer limit reached" errors.
+  # MaxBufferSize = "4MiB"
+
+  ## NOTE: Due to the way TOML is parsed, tables must be at the END of the
+  ## plugin definition, otherwise additional config options are read as part of
+  ## the table
+
+  # [[inputs.win_perf_counters.object]]
+    # Measurement = ""
+    # ObjectName = ""
+    # Instances = [""]
+    # Counters = []
+    ## Additional Object Settings
+    ##   * IncludeTotal: set to true to include _Total instance when querying
+    ##                   for all metrics via '*'
+    ##   * WarnOnMissing: print out when the performance counter is missing
+    ##                    from object, counter or instance
+    ##   * UseRawValues: gather raw values instead of formatted. Raw values are
+    ##                   stored in the field name with the "_Raw" suffix, e.g.
+    ##                   "Disk_Read_Bytes_sec_Raw".
+    # IncludeTotal = false
+    # WarnOnMissing = false
+    # UseRawValues = false
+
+  ## Processor usage, alternative to native, reports on a per core.
+  # [[inputs.win_perf_counters.object]]
+    # Measurement = "win_cpu"
+    # ObjectName = "Processor"
+    # Instances = ["*"]
+    # UseRawValues = true
+    # Counters = [
+    #   "% Idle Time",
+    #   "% Interrupt Time",
+    #   "% Privileged Time",
+    #   "% User Time",
+    #   "% Processor Time",
+    #   "% DPC Time",
+    # ]
+
+  ## Disk times and queues
+  # [[inputs.win_perf_counters.object]]
+    # Measurement = "win_disk"
+    # ObjectName = "LogicalDisk"
+    # Instances = ["*"]
+    # Counters = [
+    #   "% Idle Time",
+    #   "% Disk Time",
+    #   "% Disk Read Time",
+    #   "% Disk Write Time",
+    #   "% User Time",
+    #   "% Free Space",
+    #   "Current Disk Queue Length",
+    #   "Free Megabytes",
+    # ]
+
+  # [[inputs.win_perf_counters.object]]
+    # Measurement = "win_diskio"
+    # ObjectName = "PhysicalDisk"
+    # Instances = ["*"]
+    # Counters = [
+    #   "Disk Read Bytes/sec",
+    #   "Disk Write Bytes/sec",
+    #   "Current Disk Queue Length",
+    #   "Disk Reads/sec",
+    #   "Disk Writes/sec",
+    #   "% Disk Time",
+    #   "% Disk Read Time",
+    #   "% Disk Write Time",
+    # ]
+
+  # [[inputs.win_perf_counters.object]]
+    # Measurement = "win_net"
+    # ObjectName = "Network Interface"
+    # Instances = ["*"]
+    # Counters = [
+    # "Bytes Received/sec",
+    # "Bytes Sent/sec",
+    # "Packets Received/sec",
+    # "Packets Sent/sec",
+    # "Packets Received Discarded",
+    # "Packets Outbound Discarded",
+    # "Packets Received Errors",
+    # "Packets Outbound Errors",
+    # ]
+
+  # [[inputs.win_perf_counters.object]]
+    # Measurement = "win_system"
+    # ObjectName = "System"
+    # Instances = ["------"]
+    # Counters = [
+    #   "Context Switches/sec",
+    #   "System Calls/sec",
+    #   "Processor Queue Length",
+    #   "System Up Time",
+    # ]
+
+  ## Example counterPath where the Instance portion must be removed to get
+  ## data back, such as from the Memory object.
+  # [[inputs.win_perf_counters.object]]
+    # Measurement = "win_mem"
+    # ObjectName = "Memory"
+    ## Use 6 x - to remove the Instance bit from the counterPath.
+    # Instances = ["------"]
+    # Counters = [
+    #   "Available Bytes",
+    #   "Cache Faults/sec",
+    #   "Demand Zero Faults/sec",
+    #   "Page Faults/sec",
+    #   "Pages/sec",
+    #   "Transition Faults/sec",
+    #   "Pool Nonpaged Bytes",
+    #   "Pool Paged Bytes",
+    #   "Standby Cache Reserve Bytes",
+    #   "Standby Cache Normal Priority Bytes",
+    #   "Standby Cache Core Bytes",
+    # ]
+
+  ## Example query where the Instance portion must be removed to get data back,
+  ## such as from the Paging File object.
+  # [[inputs.win_perf_counters.object]]
+    # Measurement = "win_swap"
+    # ObjectName = "Paging File"
+    # Instances = ["_Total"]
+    # Counters = [
+    #   "% Usage",
+    # ]
+```
+
+### Generic Queries
+
+```toml
+[[inputs.win_perf_counters]]
+  [[inputs.win_perf_counters.object]]
+    # Processor usage, alternative to native, reports on a per core.
+    ObjectName = "Processor"
+    Instances = ["*"]
+    Counters = ["% Idle Time", "% Interrupt Time", "% Privileged Time", "% User Time", "% Processor Time"]
+    Measurement = "win_cpu"
+    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
+
+  [[inputs.win_perf_counters.object]]
+    # Disk times and queues
+    ObjectName = "LogicalDisk"
+    Instances = ["*"]
+    Counters = ["% Idle Time", "% Disk Time","% Disk Read Time", "% Disk Write Time", "% User Time", "Current Disk Queue Length"]
+    Measurement = "win_disk"
+    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
+
+  [[inputs.win_perf_counters.object]]
+    ObjectName = "System"
+    Counters = ["Context Switches/sec","System Calls/sec", "Processor Queue Length"]
+    Instances = ["------"]
+    Measurement = "win_system"
+    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
+
+  [[inputs.win_perf_counters.object]]
+    # Example query where the Instance portion must be removed to get data back, such as from the Memory object.
+    ObjectName = "Memory"
+    Counters = ["Available Bytes","Cache Faults/sec","Demand Zero Faults/sec","Page Faults/sec","Pages/sec","Transition Faults/sec","Pool Nonpaged Bytes","Pool Paged Bytes"]
+    Instances = ["------"] # Use 6 x - to remove the Instance bit from the query.
+    Measurement = "win_mem"
+    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
+
+  [[inputs.win_perf_counters.object]]
+    # more counters for the Network Interface Object can be found at
+    # https://msdn.microsoft.com/en-us/library/ms803962.aspx
+    ObjectName = "Network Interface"
+    Counters = ["Bytes Received/sec","Bytes Sent/sec","Packets Received/sec","Packets Sent/sec"]
+    Instances = ["*"] # Use 6 x - to remove the Instance bit from the query.
+    Measurement = "win_net"
+    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
+```
+
+### Active Directory Domain Controller
+
+```toml
+[[inputs.win_perf_counters]]
+  [inputs.win_perf_counters.tags]
+    monitorgroup = "ActiveDirectory"
+  [[inputs.win_perf_counters.object]]
+    ObjectName = "DirectoryServices"
+    Instances = ["*"]
+    Counters = ["Base Searches/sec","Database adds/sec","Database deletes/sec","Database modifys/sec","Database recycles/sec","LDAP Client Sessions","LDAP Searches/sec","LDAP Writes/sec"]
+    Measurement = "win_ad" # Set an alternative measurement to win_perf_counters if wanted.
+    #Instances = [""] # Gathers all instances by default, specify to only gather these
+    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
+
+  [[inputs.win_perf_counters.object]]
+    ObjectName = "Security System-Wide Statistics"
+    Instances = ["*"]
+    Counters = ["NTLM Authentications","Kerberos Authentications","Digest Authentications"]
+    Measurement = "win_ad"
+    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
+
+  [[inputs.win_perf_counters.object]]
+    ObjectName = "Database"
+    Instances = ["*"]
+    Counters = ["Database Cache % Hit","Database Cache Page Fault Stalls/sec","Database Cache Page Faults/sec","Database Cache Size"]
+    Measurement = "win_db"
+    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
+```
+
+### DFS Namespace + Domain Controllers
+
+```toml
+[[inputs.win_perf_counters]]
+  [[inputs.win_perf_counters.object]]
+    # AD, DFS N, Useful if the server hosts a DFS Namespace or is a Domain Controller
+    ObjectName = "DFS Namespace Service Referrals"
+    Instances = ["*"]
+    Counters = ["Requests Processed","Requests Failed","Avg. Response Time"]
+    Measurement = "win_dfsn"
+    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
+    #WarnOnMissing = false # Print out when the performance counter is missing, either of object, counter or instance.
+```
+
+### DFS Replication + Domain Controllers
+
+```toml
+[[inputs.win_perf_counters]]
+  [[inputs.win_perf_counters.object]]
+    # AD, DFS R, Useful if the server hosts a DFS Replication folder or is a Domain Controller
+    ObjectName = "DFS Replication Service Volumes"
+    Instances = ["*"]
+    Counters = ["Data Lookups","Database Commits"]
+    Measurement = "win_dfsr"
+    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
+    #WarnOnMissing = false # Print out when the performance counter is missing, either of object, counter or instance.
+```
+
+### DNS Server + Domain Controllers
+
+```toml
+[[inputs.win_perf_counters]]
+  [[inputs.win_perf_counters.object]]
+    ObjectName = "DNS"
+    Counters = ["Dynamic Update Received","Dynamic Update Rejected","Recursive Queries","Recursive Queries Failure","Secure Update Failure","Secure Update Received","TCP Query Received","TCP Response Sent","UDP Query Received","UDP Response Sent","Total Query Received","Total Response Sent"]
+    Instances = ["------"]
+    Measurement = "win_dns"
+    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
+```
+
+### IIS / ASP.NET
+
+```toml
+[[inputs.win_perf_counters]]
+  [[inputs.win_perf_counters.object]]
+    # HTTP Service request queues in the Kernel before being handed over to User Mode.
+    ObjectName = "HTTP Service Request Queues"
+    Instances = ["*"]
+    Counters = ["CurrentQueueSize","RejectedRequests"]
+    Measurement = "win_http_queues"
+    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
+
+  [[inputs.win_perf_counters.object]]
+    # IIS, ASP.NET Applications
+    ObjectName = "ASP.NET Applications"
+    Counters = ["Cache Total Entries","Cache Total Hit Ratio","Cache Total Turnover Rate","Output Cache Entries","Output Cache Hits","Output Cache Hit Ratio","Output Cache Turnover Rate","Compilations Total","Errors Total/Sec","Pipeline Instance Count","Requests Executing","Requests in Application Queue","Requests/Sec"]
+    Instances = ["*"]
+    Measurement = "win_aspnet_app"
+    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
+
+  [[inputs.win_perf_counters.object]]
+    # IIS, ASP.NET
+    ObjectName = "ASP.NET"
+    Counters = ["Application Restarts","Request Wait Time","Requests Current","Requests Queued","Requests Rejected"]
+    Instances = ["*"]
+    Measurement = "win_aspnet"
+    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
+
+  [[inputs.win_perf_counters.object]]
+    # IIS, Web Service
+    ObjectName = "Web Service"
+    Counters = ["Get Requests/sec","Post Requests/sec","Connection Attempts/sec","Current Connections","ISAPI Extension Requests/sec"]
+    Instances = ["*"]
+    Measurement = "win_websvc"
+    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
+
+  [[inputs.win_perf_counters.object]]
+    # Web Service Cache / IIS
+    ObjectName = "Web Service Cache"
+    Counters = ["URI Cache Hits %","Kernel: URI Cache Hits %","File Cache Hits %"]
+    Instances = ["*"]
+    Measurement = "win_websvc_cache"
+    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
+```
+
+### Process
+
+```toml
+[[inputs.win_perf_counters]]
+  [[inputs.win_perf_counters.object]]
+    # Process metrics, in this case for IIS only
+    ObjectName = "Process"
+    Counters = ["% Processor Time","Handle Count","Private Bytes","Thread Count","Virtual Bytes","Working Set"]
+    Instances = ["w3wp"]
+    Measurement = "win_proc"
+    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
+```
+
+### .NET Monitoring
+
+```toml
+[[inputs.win_perf_counters]]
+  [[inputs.win_perf_counters.object]]
+    # .NET CLR Exceptions, in this case for IIS only
+    ObjectName = ".NET CLR Exceptions"
+    Counters = ["# of Exceps Thrown / sec"]
+    Instances = ["w3wp"]
+    Measurement = "win_dotnet_exceptions"
+    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
+
+  [[inputs.win_perf_counters.object]]
+    # .NET CLR Jit, in this case for IIS only
+    ObjectName = ".NET CLR Jit"
+    Counters = ["% Time in Jit","IL Bytes Jitted / sec"]
+    Instances = ["w3wp"]
+    Measurement = "win_dotnet_jit"
+    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
+
+  [[inputs.win_perf_counters.object]]
+    # .NET CLR Loading, in this case for IIS only
+    ObjectName = ".NET CLR Loading"
+    Counters = ["% Time Loading"]
+    Instances = ["w3wp"]
+    Measurement = "win_dotnet_loading"
+    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
+
+  [[inputs.win_perf_counters.object]]
+    # .NET CLR LocksAndThreads, in this case for IIS only
+    ObjectName = ".NET CLR LocksAndThreads"
+    Counters = ["# of current logical Threads","# of current physical Threads","# of current recognized threads","# of total recognized threads","Queue Length / sec","Total # of Contentions","Current Queue Length"]
+    Instances = ["w3wp"]
+    Measurement = "win_dotnet_locks"
+    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
+
+  [[inputs.win_perf_counters.object]]
+    # .NET CLR Memory, in this case for IIS only
+    ObjectName = ".NET CLR Memory"
+    Counters = ["% Time in GC","# Bytes in all Heaps","# Gen 0 Collections","# Gen 1 Collections","# Gen 2 Collections","# Induced GC","Allocated Bytes/sec","Finalization Survivors","Gen 0 heap size","Gen 1 heap size","Gen 2 heap size","Large Object Heap size","# of Pinned Objects"]
+    Instances = ["w3wp"]
+    Measurement = "win_dotnet_mem"
+    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
+
+  [[inputs.win_perf_counters.object]]
+    # .NET CLR Security, in this case for IIS only
+    ObjectName = ".NET CLR Security"
+    Counters = ["% Time in RT checks","Stack Walk Depth","Total Runtime Checks"]
+    Instances = ["w3wp"]
+    Measurement = "win_dotnet_security"
+    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
+```
+
+## Troubleshooting
+
+If you are getting an error about an invalid counter, use the `typeperf`
+command to check the counter path on the command line, e.g.
+`typeperf "Process(chrome*)\% Processor Time"`.
+
+If no metrics are emitted even with the default config, you may need to repair
+your performance counters.
+
+1. Launch the Command Prompt as Administrator
+   (right-click, "Run As Administrator").
+1. Change into the `C:\WINDOWS\System32` directory by typing `C:` and then
+   `cd \Windows\System32`.
+1. Rebuild your counter values (this may take a few moments, so please be
+   patient) by running:
+
+```batchfile
+lodctr /r
+```
+
+## Metrics
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/win_services/_index.md b/content/telegraf/v1/input-plugins/win_services/_index.md
new file mode 100644
index 000000000..7a4c0b902
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/win_services/_index.md
@@ -0,0 +1,103 @@
+---
+description: "Telegraf plugin for collecting metrics from Windows Services"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Windows Services
+    identifier: input-win_services
+tags: [Windows Services, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Windows Services Input Plugin
+
+Reports information about Windows service status.
+
+Monitoring some services may require running Telegraf with administrator
+privileges.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering,
+among other things. See
+[CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Input plugin to report Windows services info.
+# This plugin ONLY supports Windows
+[[inputs.win_services]]
+  ## Names of the services to monitor. Leave empty to monitor all the available
+  ## services on the host. Globs accepted. Case insensitive.
+  service_names = [
+    "LanmanServer",
+    "TermService",
+    "Win*",
+  ]
+
+  # optional, list of service names to exclude
+  excluded_service_names = ['WinRM']
+```
+
+## Metrics
+
+- win_services
+  - state : integer
+  - startup_mode : integer
+
+The `state` field can have the following values:
+
+- 1 - stopped
+- 2 - start pending
+- 3 - stop pending
+- 4 - running
+- 5 - continue pending
+- 6 - pause pending
+- 7 - paused
+
+The `startup_mode` field can have the following values:
+
+- 0 - boot start
+- 1 - system start
+- 2 - auto start
+- 3 - demand start
+- 4 - disabled
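+
+When post-processing these metrics outside of Telegraf, the numeric codes can
+be decoded with a simple lookup. The sketch below just restates the two tables
+above (the function name is illustrative):
+
+```python
+STATES = {1: "stopped", 2: "start pending", 3: "stop pending", 4: "running",
+          5: "continue pending", 6: "pause pending", 7: "paused"}
+STARTUP_MODES = {0: "boot start", 1: "system start", 2: "auto start",
+                 3: "demand start", 4: "disabled"}
+
+def decode(state: int, startup_mode: int):
+    # Fall back to "unknown" for values outside the documented ranges.
+    return (STATES.get(state, "unknown"),
+            STARTUP_MODES.get(startup_mode, "unknown"))
+
+print(decode(4, 2))  # ('running', 'auto start')
+```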
+
+### Tags
+
+- All measurements have the following tags:
+  - service_name
+  - display_name
+
+## Example Output
+
+```text
+win_services,host=WIN2008R2H401,display_name=Server,service_name=LanmanServer state=4i,startup_mode=2i 1500040669000000000
+win_services,display_name=Remote\ Desktop\ Services,service_name=TermService,host=WIN2008R2H401 state=1i,startup_mode=3i 1500040669000000000
+```
+
+### TICK Scripts
+
+A sample TICK script for notifying about a service that is not running. It
+sends a notification whenever any service leaves the _running_ state and again
+when it returns to _running_. The notification is sent via an HTTP POST call.
+
+```js
+stream
+    |from()
+        .database('telegraf')
+        .retentionPolicy('autogen')
+        .measurement('win_services')
+        .groupBy('host','service_name')
+    |alert()
+        .crit(lambda: "state" != 4)
+        .stateChangesOnly()
+        .message('Service {{ index .Tags "service_name" }} on Host {{ index .Tags "host" }} is in state {{ index .Fields "state" }} ')
+        .post('http://localhost:666/alert/service')
+```
diff --git a/content/telegraf/v1/input-plugins/win_wmi/_index.md b/content/telegraf/v1/input-plugins/win_wmi/_index.md
new file mode 100644
index 000000000..db26c9fb3
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/win_wmi/_index.md
@@ -0,0 +1,395 @@
+---
+description: "Telegraf plugin for collecting metrics from Windows Management Instrumentation"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Windows Management Instrumentation
+    identifier: input-win_wmi
+tags: [Windows Management Instrumentation, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Windows Management Instrumentation Input Plugin
+
+The win_wmi input plugin reads WMI classes on Windows operating systems. It
+can capture and filter virtually any configuration or metric value exposed
+through the Windows Management Instrumentation ([WMI](https://learn.microsoft.com/en-us/windows/win32/wmisdk/wmi-start-page))
+service. At minimum, the Telegraf service user must have permission
+to [read](https://learn.microsoft.com/en-us/windows/win32/wmisdk/access-to-wmi-namespaces) the WMI namespace that is being queried.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used
+to modify metrics, tags, and fields, create aliases, and configure ordering,
+among other things. See
+[CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `username` and
+`password` options.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Input plugin to query Windows Management Instrumentation
+# This plugin ONLY supports Windows
+[[inputs.win_wmi]]
+  ## Hostname or IP for remote connections, by default the local machine is queried
+  # host = ""
+  ## Credentials for the connection, by default no credentials are used
+  # username = ""
+  # password = ""
+
+  ## WMI query to execute, multiple methods are possible
+  [[inputs.win_wmi.query]]
+    ## Namespace, class and a list of properties to use in the WMI query
+    namespace = "root\\cimv2"
+    class_name = "Win32_Volume"
+    properties = ["Name", "Capacity", "FreeSpace"]
+    ## Optional WHERE clause for the WQL query
+    # filter = 'NOT Name LIKE "\\\\?\\%"'
+    ## Returned properties to use as tags instead of fields
+    # tag_properties = ["Name"]
+
+  # ## WMI method to invoke, multiple methods are possible
+  # [[inputs.win_wmi.method]]
+  #   ## WMI namespace, class and method to use
+  #   namespace = 'root\default'
+  #   class_name = "StdRegProv"
+  #   method = "GetStringValue"
+  #   ## Returned WMI method values to use as tags instead of fields
+  #   # tag_properties = ["ReturnValue"]
+  #   ## Named arguments for the method call
+  #   [inputs.win_wmi.method.arguments]
+  #     hDefKey = '2147483650'
+  #     sSubKeyName = 'Software\Microsoft\windows NT\CurrentVersion'
+  #     sValueName = 'ProductName'
+  #   ## Mapping of the name of the returned property to a field-name
+  #   [inputs.win_wmi.method.fields]
+  #       sValue = "product_name"
+```
+
+### Remote execution
+
+This plugin can execute queries and methods on a remote host. To do so,
+provide the `host` as a hostname or IP address, as well as the credentials to
+execute the query or method as.
+
+Please note, the remote machine must be configured to allow remote execution and
+the user needs to have sufficient permission to execute the query or method!
+Check the [Microsoft guide](https://learn.microsoft.com/en-us/windows/win32/wmisdk/connecting-to-wmi-on-a-remote-computer#configuring-a-computer-for-a-remote-connection) for how to do this and test the
+connection with the `Get-WmiObject` method first.
+
+### Query settings
+
+To issue a query you need to provide the `namespace` (e.g. `root\cimv2`) and
+the `class_name` (e.g. `Win32_Processor`) for the WMI query. Furthermore, you
+need to define which `properties` to output. An asterisk (`*`) outputs all
+values provided by the query.
+
+The `filter` setting specifies a WHERE clause passed to the query in the
+WMI Query Language (WQL). See [WHERE Clause](https://learn.microsoft.com/en-us/windows/win32/wmisdk/where-clause?source=recommendations) for more information.
+
+The `tag_properties` setting takes a list of returned properties that should
+be emitted as tags instead of fields in the metric.
+
+As an example
+
+```toml
+[[inputs.win_wmi]]
+  [[inputs.win_wmi.query]]
+    namespace = "root\\cimv2"
+    class_name = "Win32_Processor"
+    properties = ["Name"]
+```
+
+corresponds to executing
+
+```powershell
+Get-WmiObject -Namespace "root\cimv2" -Class "Win32_Processor" -Property "Name"
+```
+
+### Method settings
+
+To invoke a method you need to provide the `namespace` (e.g. `root\default`),
+the `class_name` (e.g. `StdRegProv`) and the `method` name
+(e.g. `GetStringValue`) for the method to invoke. Furthermore, you may need to
+provide `arguments` as key-value pair(s) to the method. The number and type of
+arguments depend on the method specified above.
+
+Check the [WMI reference](https://learn.microsoft.com/en-us/windows/win32/wmisdk/wmi-reference) for available methods and their
+arguments.
+
+The `tag_properties` setting takes a list of returned properties that should
+be emitted as tags instead of fields in the metric.
+
+As an example
+
+```toml
+[[inputs.win_wmi]]
+  [[inputs.win_wmi.method]]
+    namespace = 'root\default'
+    class_name = "StdRegProv"
+    method = "GetStringValue"
+    [inputs.win_wmi.method.arguments]
+      hDefKey = '2147483650'
+      sSubKeyName = 'Software\Microsoft\windows NT\CurrentVersion'
+      sValueName = 'ProductName'
+```
+
+corresponds to executing
+
+```powershell
+Invoke-WmiMethod -Namespace "root\default" -Class "StdRegProv" -Name "GetStringValue" @(2147483650,"Software\Microsoft\windows NT\CurrentVersion", "ProductName")
+```
+
+## Metrics
+
+By default, a WMI class property's value is used as a metric field. If a class
+property's value is specified in `tag_properties`, then the value is
+instead included with the metric as a tag.
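+
+As a rough sketch of that split (illustrative Python, not the plugin's actual
+implementation), properties listed in `tag_properties` become tags and all
+remaining properties become fields:
+
+```python
+def split_row(row: dict, tag_properties: list):
+    # Tags are always strings in line protocol; fields keep their type.
+    tags = {k: str(v) for k, v in row.items() if k in tag_properties}
+    fields = {k: v for k, v in row.items() if k not in tag_properties}
+    return tags, fields
+
+# A Win32_Volume row with Name used as a tag:
+row = {"Name": "C:\\", "Capacity": 107374182400, "FreeSpace": 53687091200}
+tags, fields = split_row(row, ["Name"])
+print(tags)    # {'Name': 'C:\\'}
+print(fields)  # {'Capacity': 107374182400, 'FreeSpace': 53687091200}
+```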
+
+## Troubleshooting
+
+### Errors
+
+If you are getting an error about an invalid WMI namespace, class, or
+property, use the `Get-WmiObject` or `Get-CimInstance` PowerShell commands to
+verify their validity. For example:
+
+```powershell
+Get-WmiObject -Namespace root\cimv2 -Class Win32_Volume -Property Capacity, FreeSpace, Name -Filter 'NOT Name LIKE "\\\\?\\%"'
+```
+
+```powershell
+Get-CimInstance -Namespace root\cimv2 -ClassName Win32_Volume -Property Capacity, FreeSpace, Name -Filter 'NOT Name LIKE "\\\\?\\%"'
+```
+
+### Data types
+
+Some WMI classes return an incorrect data type for a field. In those cases,
+it is necessary to use a processor to convert the data type. For example, the
+Capacity and FreeSpace properties of the Win32_Volume class must be converted
+to integers:
+
+```toml
+[[processors.converter]]
+  namepass = ["win_wmi_Win32_Volume"]
+  [processors.converter.fields]
+    integer = ["Capacity", "FreeSpace"]
+```
+
+## Example Output
+
+### Physical Memory
+
+This query provides metrics for the speed and capacity of each physical memory
+device, along with tags describing the manufacturer, part number, and device
+locator of each device.
+
+```toml
+[[inputs.win_wmi]]
+  name_prefix = "win_wmi_"
+  [[inputs.win_wmi.query]]
+    namespace = "root\\cimv2"
+    class_name = "Win32_PhysicalMemory"
+    properties = [
+      "Name",
+      "Capacity",
+      "DeviceLocator",
+      "Manufacturer",
+      "PartNumber",
+      "Speed",
+    ]
+    tag_properties = ["Name","DeviceLocator","Manufacturer","PartNumber"]
+```
+
+Example Output:
+
+```text
+win_wmi_Win32_PhysicalMemory,DeviceLocator=DIMM1,Manufacturer=80AD000080AD,Name=Physical\ Memory,PartNumber=HMA82GU6DJR8N-XN\ \ \ \ ,host=foo Capacity=17179869184i,Speed=3200i 1654269272000000000
+```
+
+### Processor
+
+This query provides metrics for the number of cores in each physical processor.
+Since the Name property of the WMI class is included by default, the metrics
+will also contain a tag value describing the model of each CPU.
+
+```toml
+[[inputs.win_wmi]]
+  name_prefix = "win_wmi_"
+  [[inputs.win_wmi.query]]
+    namespace = "root\\cimv2"
+    class_name = "Win32_Processor"
+    properties = ["Name","NumberOfCores"]
+    tag_properties = ["Name"]
+```
+
+Example Output:
+
+```text
+win_wmi_Win32_Processor,Name=Intel(R)\ Core(TM)\ i9-10900\ CPU\ @\ 2.80GHz,host=foo NumberOfCores=10i 1654269272000000000
+```
+
+### Computer System
+
+This query provides metrics for the number of socketed processors, number of
+logical cores on each processor, and the total physical memory in the computer.
+The metrics include tag values for the domain, manufacturer, and model of the
+computer.
+
+```toml
+[[inputs.win_wmi]]
+  name_prefix = "win_wmi_"
+  [[inputs.win_wmi.query]]
+    namespace = "root\\cimv2"
+    class_name = "Win32_ComputerSystem"
+    properties = [
+      "Name",
+      "Domain",
+      "Manufacturer",
+      "Model",
+      "NumberOfLogicalProcessors",
+      "NumberOfProcessors",
+      "TotalPhysicalMemory"
+    ]
+    tag_properties = ["Name","Domain","Manufacturer","Model"]
+```
+
+Example Output:
+
+```text
+win_wmi_Win32_ComputerSystem,Domain=company.com,Manufacturer=Lenovo,Model=X1\ Carbon,Name=FOO,host=foo NumberOfLogicalProcessors=20i,NumberOfProcessors=1i,TotalPhysicalMemory=34083926016i 1654269272000000000
+```
+
+### Operating System
+
+This query provides metrics for the paging file's free space, the operating
+system's free virtual memory, the operating system SKU installed on the
+computer, and the Windows product type. The OS architecture is included as a
+tag value to describe whether the installation is 32-bit or 64-bit.
+
+```toml
+[[inputs.win_wmi]]
+  name_prefix = "win_wmi_"
+  [[inputs.win_wmi.query]]
+    class_name = "Win32_OperatingSystem"
+    namespace = "root\\cimv2"
+    properties = [
+      "Name",
+      "Caption",
+      "FreeSpaceInPagingFiles",
+      "FreeVirtualMemory",
+      "OperatingSystemSKU",
+      "OSArchitecture",
+      "ProductType"
+    ]
+    tag_properties = ["Name","Caption","OSArchitecture"]
+```
+
+Example Output:
+
+```text
+win_wmi_Win32_OperatingSystem,Caption=Microsoft\ Windows\ 10\ Enterprise,InstallationType=Client,Name=Microsoft\ Windows\ 10\ Enterprise|C:\WINDOWS|\Device\Harddisk0\Partition3,OSArchitecture=64-bit,host=foo FreeSpaceInPagingFiles=5203244i,FreeVirtualMemory=16194496i,OperatingSystemSKU=4i,ProductType=1i 1654269272000000000
+```
+
+### Failover Clusters
+
+This query provides a boolean metric describing whether Dynamic Quorum is
+enabled for the cluster. The tag values for the metric also include the name of
+the Windows Server Failover Cluster and the type of Quorum in use.
+
+```toml
+[[inputs.win_wmi]]
+  name_prefix = "win_wmi_"
+  [[inputs.win_wmi.query]]
+    namespace = "root\\mscluster"
+    class_name = "MSCluster_Cluster"
+    properties = [
+      "Name",
+      "QuorumType",
+      "DynamicQuorumEnabled"
+    ]
+    tag_properties = ["Name","QuorumType"]
+```
+
+Example Output:
+
+```text
+win_wmi_MSCluster_Cluster,Name=testcluster1,QuorumType=Node\ and\ File\ Share\ Majority,host=testnode1 DynamicQuorumEnabled=1i 1671553260000000000
+```
+
+### BitLocker
+
+This query provides a list of volumes that are eligible for BitLocker
+encryption, along with their compliance status. Because the MBAM_Volume class
+does not include a Name property, the ExcludeNameKey configuration is included.
+The VolumeName property is included in the metric as a tag value.
+
+```toml
+[[inputs.win_wmi]]
+  name_prefix = "win_wmi_"
+  [[inputs.win_wmi.query]]
+    namespace = "root\\Microsoft\\MBAM"
+    class_name = "MBAM_Volume"
+    properties = [
+      "Compliant",
+      "VolumeName"
+    ]
+    tag_properties = ["VolumeName"]
+```
+
+Example Output:
+
+```text
+win_wmi_MBAM_Volume,VolumeName=C:,host=foo Compliant=1i 1654269272000000000
+```
+
+### SQL Server
+
+This query provides metrics whose tags describe the version and SKU of SQL
+Server. These properties are useful for building a dashboard of your SQL
+Server inventory, including the patch level and edition of SQL Server that is
+installed.
+
+```toml
+[[inputs.win_wmi]]
+  name_prefix = "win_wmi_"
+  [[inputs.win_wmi.query]]
+    namespace = "Root\\Microsoft\\SqlServer\\ComputerManagement15"
+    class_name = "SqlServiceAdvancedProperty"
+    properties = [
+      "PropertyName",
+      "ServiceName",
+      "PropertyStrValue",
+      "SqlServiceType"
+    ]
+    filter = "ServiceName LIKE 'MSSQLSERVER' AND SqlServiceType = 1 AND (PropertyName LIKE 'FILEVERSION' OR PropertyName LIKE 'SKUNAME')"
+    tag_properties = ["PropertyName","ServiceName","PropertyStrValue"]
+```
+
+Example Output:
+
+```text
+win_wmi_SqlServiceAdvancedProperty,PropertyName=FILEVERSION,PropertyStrValue=2019.150.4178.1,ServiceName=MSSQLSERVER,host=foo,sqlinstance=foo SqlServiceType=1i 1654269272000000000
+win_wmi_SqlServiceAdvancedProperty,PropertyName=SKUNAME,PropertyStrValue=Developer\ Edition\ (64-bit),ServiceName=MSSQLSERVER,host=foo,sqlinstance=foo SqlServiceType=1i 1654269272000000000
+```
diff --git a/content/telegraf/v1/input-plugins/wireguard/_index.md b/content/telegraf/v1/input-plugins/wireguard/_index.md
new file mode 100644
index 000000000..eb74e7617
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/wireguard/_index.md
@@ -0,0 +1,95 @@
+---
+description: "Telegraf plugin for collecting metrics from Wireguard"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Wireguard
+    identifier: input-wireguard
+tags: [Wireguard, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Wireguard Input Plugin
+
+The Wireguard input plugin collects statistics on the local Wireguard server
+using the [`wgctrl`](https://github.com/WireGuard/wgctrl-go) library. It
+reports gauge metrics for Wireguard interface devices and their peers.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Collect Wireguard server interface and peer statistics
+[[inputs.wireguard]]
+  ## Optional list of Wireguard device/interface names to query.
+  ## If omitted, all Wireguard interfaces are queried.
+  # devices = ["wg0"]
+```
+
+## Metrics
+
+- `wireguard_device`
+  - tags:
+    - `name` (interface device name, e.g. `wg0`)
+    - `type` (Wireguard tunnel type, e.g. `linux_kernel` or `userspace`)
+  - fields:
+    - `listen_port` (int, UDP port on which the interface is listening)
+    - `firewall_mark` (int, device's current firewall mark)
+    - `peers` (int, number of peers associated with the device)
+
+- `wireguard_peer`
+  - tags:
+    - `device` (associated interface device name, e.g. `wg0`)
+    - `public_key` (peer public key, e.g. `NZTRIrv/ClTcQoNAnChEot+WL7OH7uEGQmx8oAN9rWE=`)
+  - fields:
+    - `persistent_keepalive_interval_ns` (int, keepalive interval in nanoseconds; 0 if unset)
+    - `protocol_version` (int, Wireguard protocol version number)
+    - `allowed_ips` (int, number of allowed IPs for this peer)
+    - `last_handshake_time_ns` (int, Unix timestamp of the last handshake for this peer in nanoseconds)
+    - `rx_bytes` (int, number of bytes received from this peer)
+    - `tx_bytes` (int, number of bytes transmitted to this peer)
+    - `allowed_peer_cidr` (string, comma separated list of allowed peer CIDRs)
+
+## Troubleshooting
+
+### Error: `operation not permitted`
+
+When the kernelspace implementation of Wireguard is in use (as opposed to its
+userspace implementations), Telegraf communicates with the module over netlink.
+This requires Telegraf to either run as root, or for the Telegraf binary to
+have the `CAP_NET_ADMIN` capability.
+
+To add this capability to the Telegraf binary (to allow this communication under
+the default user `telegraf`):
+
+```bash
+sudo setcap CAP_NET_ADMIN+epi $(which telegraf)
+```
+
+N.B.: This capability is a filesystem attribute on the binary itself. The
+attribute needs to be re-applied if the Telegraf binary is rotated (e.g.
+on installation of a new Telegraf version from the system package manager).
+
+### Error: `error enumerating Wireguard devices`
+
+This usually happens when the device names specified in config are invalid.
+Ensure that `sudo wg show` succeeds, and that the device names in config match
+those printed by this command.
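+
+For example, if `sudo wg show` lists `wg0` and `wg1`, configure exactly those
+names (the interface names below are assumptions for illustration):
+
+```toml
+[[inputs.wireguard]]
+  ## Must match the interface names printed by `sudo wg show`
+  devices = ["wg0", "wg1"]
+```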
+
+## Example Output
+
+```text
+wireguard_device,host=WGVPN,name=wg0,type=linux_kernel firewall_mark=51820i,listen_port=58216i 1582513589000000000
+wireguard_device,host=WGVPN,name=wg0,type=linux_kernel peers=1i 1582513589000000000
+wireguard_peer,device=wg0,host=WGVPN,public_key=NZTRIrv/ClTcQoNAnChEot+WL7OH7uEGQmx8oAN9rWE= allowed_ips=2i,persistent_keepalive_interval_ns=60000000000i,protocol_version=1i,allowed_peer_cidr="192.168.1.0/24,10.0.0.0/8" 1582513589000000000
+wireguard_peer,device=wg0,host=WGVPN,public_key=NZTRIrv/ClTcQoNAnChEot+WL7OH7uEGQmx8oAN9rWE= last_handshake_time_ns=1582513584530013376i,rx_bytes=6484i,tx_bytes=13540i 1582513589000000000
+```
diff --git a/content/telegraf/v1/input-plugins/wireless/_index.md b/content/telegraf/v1/input-plugins/wireless/_index.md
new file mode 100644
index 000000000..be99af500
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/wireless/_index.md
@@ -0,0 +1,61 @@
+---
+description: "Telegraf plugin for collecting metrics from Wireless"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Wireless
+    identifier: input-wireless
+tags: [Wireless, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Wireless Input Plugin
+
+The wireless plugin gathers metrics about wireless link quality by reading the
+`/proc/net/wireless` file. This plugin currently supports Linux only.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Monitor wifi signal strength and quality
+# This plugin ONLY supports Linux
+[[inputs.wireless]]
+  ## Sets 'proc' directory path
+  ## If not specified, then default is /proc
+  # host_proc = "/proc"
+```
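+
+When Telegraf runs in a container with the host's `/proc` mounted at a
+different path, point the plugin at that mount. The `/hostfs/proc` path below
+is an assumption for illustration:
+
+```toml
+[[inputs.wireless]]
+  ## Path where the host's proc filesystem is mounted
+  host_proc = "/hostfs/proc"
+```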
+
+## Metrics
+
+- metric
+  - tags:
+    - interface (wireless interface)
+  - fields:
+    - status (int64, gauge) - the device's current state; this information is device dependent
+    - link (int64, percentage, gauge) - general quality of the reception
+    - level (int64, dBm, gauge) - signal strength at the receiver
+    - noise (int64, dBm, gauge) - silence level (no packet) at the receiver
+    - nwid (int64, packets, counter) - number of packets discarded due to an invalid network id
+    - crypt (int64, packets, counter) - number of packets unable to be decrypted
+    - frag (int64, packets, counter) - fragmented packets
+    - retry (int64, packets, counter) - cumulative retry counts
+    - misc (int64, packets, counter) - packets dropped for unspecified reasons
+    - missed_beacon (int64, packets, counter) - missed beacon packets
+
+## Example Output
+
+This section shows example output in Line Protocol format.
+
+```text
+wireless,host=example.localdomain,interface=wlan0 misc=0i,frag=0i,link=60i,level=-50i,noise=-256i,nwid=0i,crypt=0i,retry=1525i,missed_beacon=0i,status=0i 1519843022000000000
+```
diff --git a/content/telegraf/v1/input-plugins/x509_cert/_index.md b/content/telegraf/v1/input-plugins/x509_cert/_index.md
new file mode 100644
index 000000000..25ded6e12
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/x509_cert/_index.md
@@ -0,0 +1,105 @@
+---
+description: "Telegraf plugin for collecting metrics from x509 Certificate"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: x509 Certificate
+    identifier: input-x509_cert
+tags: [x509 Certificate, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# x509 Certificate Input Plugin
+
+This plugin provides information about X509 certificates accessible via a
+local file or the TCP, UDP, HTTPS, or SMTP protocols.
+
+When using a UDP address as a certificate source, the server must support
+[DTLS](https://en.wikipedia.org/wiki/Datagram_Transport_Layer_Security).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Reads metrics from an SSL certificate
+[[inputs.x509_cert]]
+  ## List certificate sources, support wildcard expands for files
+  ## Prefix your entry with 'file://' if you intend to use relative paths
+  sources = ["tcp://example.org:443", "https://influxdata.com:443",
+            "smtp://mail.localhost:25", "udp://127.0.0.1:4433",
+            "/etc/ssl/certs/ssl-cert-snakeoil.pem",
+            "/etc/mycerts/*.mydomain.org.pem", "file:///path/to/*.pem"]
+
+  ## Timeout for SSL connection
+  # timeout = "5s"
+
+  ## Pass a different name into the TLS request (Server Name Indication).
+  ## This is synonymous with tls_server_name, and only one of the two
+  ## options may be specified at one time.
+  ##   example: server_name = "myhost.example.org"
+  # server_name = "myhost.example.org"
+
+  ## Only output the leaf certificates and omit the root ones.
+  # exclude_root_certs = false
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  # tls_server_name = "myhost.example.org"
+
+  ## Set the proxy URL
+  # use_proxy = true
+  # proxy_url = "http://localhost:8888"
+```
+
+## Metrics
+
+- x509_cert
+  - tags:
+    - type   - "leaf", "intermediate" or "root" classification of certificate
+    - source - source of the certificate
+    - organization
+    - organizational_unit
+    - country
+    - province
+    - locality
+    - verification
+    - serial_number
+    - signature_algorithm
+    - public_key_algorithm
+    - issuer_common_name
+    - issuer_serial_number
+    - san
+    - ocsp_stapled
+    - ocsp_status (when ocsp_stapled=yes)
+    - ocsp_verified (when ocsp_stapled=yes)
+  - fields:
+    - verification_code (int)
+    - verification_error (string)
+    - expiry (int, seconds) - Time when the certificate will expire, in seconds since the Unix epoch. `SELECT (expiry / 60 / 60 / 24) as "expiry_in_days"`
+    - age (int, seconds)
+    - startdate (int, seconds)
+    - enddate (int, seconds)
+    - ocsp_status_code (int)
+    - ocsp_next_update (int, seconds)
+    - ocsp_produced_at (int, seconds)
+    - ocsp_this_update (int, seconds)
+
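+Since `expiry` counts down in seconds, a minimal configuration that watches
+only local PEM files is often enough for expiration monitoring. The glob below
+is an assumption for illustration:
+
+```toml
+[[inputs.x509_cert]]
+  ## File sources support wildcard expansion
+  sources = ["/etc/ssl/certs/*.pem"]
+```
+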
+## Example Output
+
+```text
+x509_cert,common_name=ubuntu,ocsp_stapled=no,source=/etc/ssl/certs/ssl-cert-snakeoil.pem,verification=valid age=7693222i,enddate=1871249033i,expiry=307666777i,startdate=1555889033i,verification_code=0i 1563582256000000000
+x509_cert,common_name=www.example.org,country=US,locality=Los\ Angeles,organization=Internet\ Corporation\ for\ Assigned\ Names\ and\ Numbers,organizational_unit=Technology,province=California,ocsp_stapled=no,source=https://example.org:443,verification=invalid age=20219055i,enddate=1606910400i,expiry=43328144i,startdate=1543363200i,verification_code=1i,verification_error="x509: certificate signed by unknown authority" 1563582256000000000
+x509_cert,common_name=DigiCert\ SHA2\ Secure\ Server\ CA,country=US,organization=DigiCert\ Inc,ocsp_stapled=no,source=https://example.org:443,verification=valid age=200838255i,enddate=1678276800i,expiry=114694544i,startdate=1362744000i,verification_code=0i 1563582256000000000
+x509_cert,common_name=DigiCert\ Global\ Root\ CA,country=US,organization=DigiCert\ Inc,organizational_unit=www.digicert.com,ocsp_stapled=yes,ocsp_status=good,ocsp_verified=yes,source=https://example.org:443,verification=valid age=400465455i,enddate=1952035200i,expiry=388452944i,ocsp_next_update=1676714398i,ocsp_produced_at=1676112480i,ocsp_status_code=0i,ocsp_this_update=1676109600i,startdate=1163116800i,verification_code=0i 1563582256000000000
+```
diff --git a/content/telegraf/v1/input-plugins/xtremio/_index.md b/content/telegraf/v1/input-plugins/xtremio/_index.md
new file mode 100644
index 000000000..ce3d9cc10
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/xtremio/_index.md
@@ -0,0 +1,140 @@
+---
+description: "Telegraf plugin for collecting metrics from XtremIO"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: XtremIO
+    identifier: input-xtremio
+tags: [XtremIO, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# XtremIO Input Plugin
+
+The `xtremio` plugin gathers metrics from a Dell EMC XtremIO Storage Array's V3
+REST API. Documentation can be found [here](https://dl.dell.com/content/docu96624_xtremio-storage-array-x1-and-x2-cluster-types-with-xms-6-3-0-to-6-3-3-and-xios-4-0-15-to-4-0-31-and-6-0-0-to-6-3-3-restful-api-3-x-guide.pdf?language=en_us).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Gathers metrics from a Dell EMC XtremIO Storage Array's V3 API
+[[inputs.xtremio]]
+  ## XtremIO User Interface Endpoint
+  url = "https://xtremio.example.com/" # required
+
+  ## Credentials
+  username = "user1"
+  password = "pass123"
+
+  ## Metrics to collect from the XtremIO
+  # collectors = ["bbus","clusters","ssds","volumes","xms"]
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use SSL but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+## Metrics
+
+- bbus
+  - tags:
+    - serial_number
+    - guid
+    - power_feed
+    - name
+    - model_name
+  - fields:
+    - bbus_power
+    - bbus_average_daily_temp
+    - bbus_enabled
+    - bbus_ups_need_battery_replacement
+    - bbus_ups_low_battery_no_input
+
+- clusters
+  - tags:
+    - hardware_platform
+    - license_id
+    - guid
+    - name
+    - sys_psnt_serial_number
+  - fields:
+    - clusters_compression_factor
+    - clusters_percent_memory_in_use
+    - clusters_read_iops
+    - clusters_write_iops
+    - clusters_number_of_volumes
+    - clusters_free_ssd_space_in_percent
+    - clusters_ssd_num
+    - clusters_data_reduction_ratio
+
+- ssds
+  - tags:
+    - model_name
+    - firmware_version
+    - ssd_uid
+    - guid
+    - sys_name
+    - serial_number
+  - fields:
+    - ssds_ssd_size
+    - ssds_ssd_space_in_use
+    - ssds_write_iops
+    - ssds_read_iops
+    - ssds_write_bandwidth
+    - ssds_read_bandwidth
+    - ssds_num_bad_sectors
+
+- volumes
+  - tags:
+    - guid
+    - sys_name
+    - name
+  - fields:
+    - volumes_read_iops
+    - volumes_write_iops
+    - volumes_read_latency
+    - volumes_write_latency
+    - volumes_data_reduction_ratio
+    - volumes_provisioned_space
+    - volumes_used_space
+
+- xms
+  - tags:
+    - guid
+    - name
+    - version
+    - xms_ip
+  - fields:
+    - xms_write_iops
+    - xms_read_iops
+    - xms_overall_efficiency_ratio
+    - xms_ssd_space_in_use
+    - xms_ram_in_use
+    - xms_ram_total
+    - xms_cpu_usage_total
+    - xms_write_latency
+    - xms_read_latency
+    - xms_user_accounts_count
+
+## Example Output
+
+```text
+xio,guid=abcdefghifklmnopqrstuvwxyz111111,host=HOSTNAME,model_name=Eaton\ 5P\ 1550,name=X2-BBU,power_feed=PWR-B,serial_number=SER1234567890 bbus_average_daily_temp=22i,bbus_enabled=1i,bbus_power=286i,bbus_ups_low_battery_no_input=0i,bbus_ups_need_battery_replacement=0i 1638295340000000000
+xio,guid=abcdefghifklmnopqrstuvwxyz222222,host=HOSTNAME,model_name=Eaton\ 5P\ 1550,name=X1-BBU,power_feed=PWR-A,serial_number=SER1234567891 bbus_average_daily_temp=22i,bbus_enabled=1i,bbus_power=246i,bbus_ups_low_battery_no_input=0i,bbus_ups_need_battery_replacement=0i 1638295340000000000
+xio,guid=abcdefghifklmnopqrstuvwxyz333333,hardware_platform=X1,host=HOSTNAME,license_id=LIC123456789,name=SERVER01,sys_psnt_serial_number=FNM01234567890 clusters_compression_factor=1.5160012465000001,clusters_data_reduction_ratio=2.1613617899,clusters_free_ssd_space_in_percent=34i,clusters_number_of_volumes=36i,clusters_percent_memory_in_use=29i,clusters_read_iops=331i,clusters_ssd_num=50i,clusters_write_iops=4649i 1638295341000000000
+```
diff --git a/content/telegraf/v1/input-plugins/zfs/_index.md b/content/telegraf/v1/input-plugins/zfs/_index.md
new file mode 100644
index 000000000..8ec5595bf
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/zfs/_index.md
@@ -0,0 +1,430 @@
+---
+description: "Telegraf plugin for collecting metrics from ZFS"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: ZFS
+    identifier: input-zfs
+tags: [ZFS, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# ZFS Input Plugin
+
+This ZFS plugin provides metrics from your ZFS filesystems. It supports ZFS on
+Linux and FreeBSD. It gathers ZFS statistics from `/proc/spl/kstat/zfs` on
+Linux and from `sysctl`, `zfs`, and `zpool` on FreeBSD.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Read metrics of ZFS from arcstats, zfetchstats, vdev_cache_stats, pools and datasets
+# This plugin ONLY supports Linux & FreeBSD
+[[inputs.zfs]]
+  ## ZFS kstat path. Ignored on FreeBSD
+  ## If not specified, then default is:
+  # kstatPath = "/proc/spl/kstat/zfs"
+
+  ## By default, telegraf gather all zfs stats
+  ## Override the stats list using the kstatMetrics array:
+  ## For FreeBSD, the default is:
+  # kstatMetrics = ["arcstats", "zfetchstats", "vdev_cache_stats"]
+  ## For Linux, the default is:
+  # kstatMetrics = ["abdstats", "arcstats", "dnodestats", "dbufcachestats",
+  #     "dmu_tx", "fm", "vdev_mirror_stats", "zfetchstats", "zil"]
+
+  ## By default, don't gather zpool stats
+  # poolMetrics = false
+
+  ## By default, don't gather dataset stats
+  # datasetMetrics = false
+```
+
+## Metrics
+
+By default, this plugin collects metrics about ZFS internals. These metrics
+are either counters or sizes in bytes, and are reported in the `zfs`
+measurement with the field names listed below.
+
+If `poolMetrics` is enabled then additional metrics will be gathered for
+each pool.
+
+If `datasetMetrics` is enabled then additional metrics will be gathered for
+each dataset.
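+
+For example, a configuration that enables both optional metric groups (a
+sketch using the options shown in the sample configuration above):
+
+```toml
+[[inputs.zfs]]
+  poolMetrics = true
+  datasetMetrics = true
+```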
+
+- zfs, with the fields listed below.
+
+### ARC Stats (FreeBSD and Linux)
+
+- arcstats_allocated (FreeBSD only)
+- arcstats_anon_evict_data (Linux only)
+- arcstats_anon_evict_metadata (Linux only)
+- arcstats_anon_evictable_data (FreeBSD only)
+- arcstats_anon_evictable_metadata (FreeBSD only)
+- arcstats_anon_size
+- arcstats_arc_loaned_bytes (Linux only)
+- arcstats_arc_meta_limit
+- arcstats_arc_meta_max
+- arcstats_arc_meta_min (FreeBSD only)
+- arcstats_arc_meta_used
+- arcstats_arc_no_grow (Linux only)
+- arcstats_arc_prune (Linux only)
+- arcstats_arc_tempreserve (Linux only)
+- arcstats_c
+- arcstats_c_max
+- arcstats_c_min
+- arcstats_data_size
+- arcstats_deleted
+- arcstats_demand_data_hits
+- arcstats_demand_data_misses
+- arcstats_demand_hit_predictive_prefetch (FreeBSD only)
+- arcstats_demand_metadata_hits
+- arcstats_demand_metadata_misses
+- arcstats_duplicate_buffers
+- arcstats_duplicate_buffers_size
+- arcstats_duplicate_reads
+- arcstats_evict_l2_cached
+- arcstats_evict_l2_eligible
+- arcstats_evict_l2_ineligible
+- arcstats_evict_l2_skip (FreeBSD only)
+- arcstats_evict_not_enough (FreeBSD only)
+- arcstats_evict_skip
+- arcstats_hash_chain_max
+- arcstats_hash_chains
+- arcstats_hash_collisions
+- arcstats_hash_elements
+- arcstats_hash_elements_max
+- arcstats_hdr_size
+- arcstats_hits
+- arcstats_l2_abort_lowmem
+- arcstats_l2_asize
+- arcstats_l2_cdata_free_on_write
+- arcstats_l2_cksum_bad
+- arcstats_l2_compress_failures
+- arcstats_l2_compress_successes
+- arcstats_l2_compress_zeros
+- arcstats_l2_evict_l1cached (FreeBSD only)
+- arcstats_l2_evict_lock_retry
+- arcstats_l2_evict_reading
+- arcstats_l2_feeds
+- arcstats_l2_free_on_write
+- arcstats_l2_hdr_size
+- arcstats_l2_hits
+- arcstats_l2_io_error
+- arcstats_l2_misses
+- arcstats_l2_read_bytes
+- arcstats_l2_rw_clash
+- arcstats_l2_size
+- arcstats_l2_write_buffer_bytes_scanned (FreeBSD only)
+- arcstats_l2_write_buffer_iter (FreeBSD only)
+- arcstats_l2_write_buffer_list_iter (FreeBSD only)
+- arcstats_l2_write_buffer_list_null_iter (FreeBSD only)
+- arcstats_l2_write_bytes
+- arcstats_l2_write_full (FreeBSD only)
+- arcstats_l2_write_in_l2 (FreeBSD only)
+- arcstats_l2_write_io_in_progress (FreeBSD only)
+- arcstats_l2_write_not_cacheable (FreeBSD only)
+- arcstats_l2_write_passed_headroom (FreeBSD only)
+- arcstats_l2_write_pios (FreeBSD only)
+- arcstats_l2_write_spa_mismatch (FreeBSD only)
+- arcstats_l2_write_trylock_fail (FreeBSD only)
+- arcstats_l2_writes_done
+- arcstats_l2_writes_error
+- arcstats_l2_writes_hdr_miss (Linux only)
+- arcstats_l2_writes_lock_retry (FreeBSD only)
+- arcstats_l2_writes_sent
+- arcstats_memory_direct_count (Linux only)
+- arcstats_memory_indirect_count (Linux only)
+- arcstats_memory_throttle_count
+- arcstats_meta_size (Linux only)
+- arcstats_mfu_evict_data (Linux only)
+- arcstats_mfu_evict_metadata (Linux only)
+- arcstats_mfu_ghost_evict_data (Linux only)
+- arcstats_mfu_ghost_evict_metadata (Linux only)
+- arcstats_metadata_size (FreeBSD only)
+- arcstats_mfu_evictable_data (FreeBSD only)
+- arcstats_mfu_evictable_metadata (FreeBSD only)
+- arcstats_mfu_ghost_evictable_data (FreeBSD only)
+- arcstats_mfu_ghost_evictable_metadata (FreeBSD only)
+- arcstats_mfu_ghost_hits
+- arcstats_mfu_ghost_size
+- arcstats_mfu_hits
+- arcstats_mfu_size
+- arcstats_misses
+- arcstats_mru_evict_data (Linux only)
+- arcstats_mru_evict_metadata (Linux only)
+- arcstats_mru_ghost_evict_data (Linux only)
+- arcstats_mru_ghost_evict_metadata (Linux only)
+- arcstats_mru_evictable_data (FreeBSD only)
+- arcstats_mru_evictable_metadata (FreeBSD only)
+- arcstats_mru_ghost_evictable_data (FreeBSD only)
+- arcstats_mru_ghost_evictable_metadata (FreeBSD only)
+- arcstats_mru_ghost_hits
+- arcstats_mru_ghost_size
+- arcstats_mru_hits
+- arcstats_mru_size
+- arcstats_mutex_miss
+- arcstats_other_size
+- arcstats_p
+- arcstats_prefetch_data_hits
+- arcstats_prefetch_data_misses
+- arcstats_prefetch_metadata_hits
+- arcstats_prefetch_metadata_misses
+- arcstats_recycle_miss (Linux only)
+- arcstats_size
+- arcstats_sync_wait_for_async (FreeBSD only)
+
+### Zfetch Stats (FreeBSD and Linux)
+
+- zfetchstats_bogus_streams (Linux only)
+- zfetchstats_colinear_hits (Linux only)
+- zfetchstats_colinear_misses (Linux only)
+- zfetchstats_hits
+- zfetchstats_max_streams (FreeBSD only)
+- zfetchstats_misses
+- zfetchstats_reclaim_failures (Linux only)
+- zfetchstats_reclaim_successes (Linux only)
+- zfetchstats_streams_noresets (Linux only)
+- zfetchstats_streams_resets (Linux only)
+- zfetchstats_stride_hits (Linux only)
+- zfetchstats_stride_misses (Linux only)
+
+### Vdev Cache Stats (FreeBSD)
+
+- vdev_cache_stats_delegations
+- vdev_cache_stats_hits
+- vdev_cache_stats_misses
+
+### Pool Metrics (optional)
+
+On Linux (reference: kstat accumulated time and queue length statistics):
+
+- zfs_pool
+  - nread (integer, bytes)
+  - nwritten (integer, bytes)
+  - reads (integer, count)
+  - writes (integer, count)
+  - wtime (integer, nanoseconds)
+  - wlentime (integer, queuelength * nanoseconds)
+  - wupdate (integer, timestamp)
+  - rtime (integer, nanoseconds)
+  - rlentime (integer, queuelength * nanoseconds)
+  - rupdate (integer, timestamp)
+  - wcnt (integer, count)
+  - rcnt (integer, count)
+
+For ZFS >= 2.1.x the format has changed significantly:
+
+- zfs_pool
+  - writes (integer, count)
+  - nwritten (integer, bytes)
+  - reads (integer, count)
+  - nread (integer, bytes)
+  - nunlinks (integer, count)
+  - nunlinked (integer, count)
+
+For ZFS >= 2.2.x the following additional fields are available:
+
+- additional fields for ZFS >= 2.2.x
+  - zil_commit_count (integer, count)
+  - zil_commit_writer_count (integer, count)
+  - zil_itx_count (integer, count)
+  - zil_itx_indirect_count (integer, count)
+  - zil_itx_indirect_bytes (integer, bytes)
+  - zil_itx_copied_count (integer, count)
+  - zil_itx_copied_bytes (integer, bytes)
+  - zil_itx_needcopy_count (integer, count)
+  - zil_itx_needcopy_bytes (integer, bytes)
+  - zil_itx_metaslab_normal_count (integer, count)
+  - zil_itx_metaslab_normal_bytes (integer, bytes)
+  - zil_itx_metaslab_normal_write (integer, bytes)
+  - zil_itx_metaslab_normal_alloc (integer, bytes)
+  - zil_itx_metaslab_slog_count (integer, count)
+  - zil_itx_metaslab_slog_bytes (integer, bytes)
+  - zil_itx_metaslab_slog_write (integer, bytes)
+  - zil_itx_metaslab_slog_alloc (integer, bytes)
+
+On FreeBSD:
+
+- zfs_pool
+  - allocated (integer, bytes)
+  - capacity (integer, bytes)
+  - dedupratio (float, ratio)
+  - free (integer, bytes)
+  - size (integer, bytes)
+  - fragmentation (integer, percent)
+
+### Dataset Metrics (optional, only on FreeBSD)
+
+- zfs_dataset
+  - avail (integer, bytes)
+  - used (integer, bytes)
+  - usedsnap (integer, bytes)
+  - usedds (integer, bytes)
+
+### Tags
+
+- ZFS stats (`zfs`) will have the following tags:
+  - pools - A `::` concatenated list of all ZFS pools on the machine.
+  - datasets - A `::` concatenated list of all ZFS datasets on the machine.
+
+- Pool metrics (`zfs_pool`) will have the following tags:
+  - pool - the name of the pool the metrics are for.
+  - health - the health status of the pool. (FreeBSD only)
+  - dataset - ZFS >= 2.1.x only. (Linux only)
+
+- Dataset metrics (`zfs_dataset`) will have the following tag:
+  - dataset - the name of the dataset the metrics are for.
+
+## Example Output
+
+```text
+zfs_pool,health=ONLINE,pool=zroot allocated=1578590208i,capacity=2i,dedupratio=1,fragmentation=1i,free=64456531968i,size=66035122176i 1464473103625653908
+zfs_dataset,dataset=zata avail=10741741326336,used=8564135526400,usedsnap=0,usedds=90112
+zfs,pools=zroot arcstats_allocated=4167764i,arcstats_anon_evictable_data=0i,arcstats_anon_evictable_metadata=0i,arcstats_anon_size=16896i,arcstats_arc_meta_limit=10485760i,arcstats_arc_meta_max=115269568i,arcstats_arc_meta_min=8388608i,arcstats_arc_meta_used=51977456i,arcstats_c=16777216i,arcstats_c_max=41943040i,arcstats_c_min=16777216i,arcstats_data_size=0i,arcstats_deleted=1699340i,arcstats_demand_data_hits=14836131i,arcstats_demand_data_misses=2842945i,arcstats_demand_hit_predictive_prefetch=0i,arcstats_demand_metadata_hits=1655006i,arcstats_demand_metadata_misses=830074i,arcstats_duplicate_buffers=0i,arcstats_duplicate_buffers_size=0i,arcstats_duplicate_reads=123i,arcstats_evict_l2_cached=0i,arcstats_evict_l2_eligible=332172623872i,arcstats_evict_l2_ineligible=6168576i,arcstats_evict_l2_skip=0i,arcstats_evict_not_enough=12189444i,arcstats_evict_skip=195190764i,arcstats_hash_chain_max=2i,arcstats_hash_chains=10i,arcstats_hash_collisions=43134i,arcstats_hash_elements=2268i,arcstats_hash_elements_max=6136i,arcstats_hdr_size=565632i,arcstats_hits=16515778i,arcstats_l2_abort_lowmem=0i,arcstats_l2_asize=0i,arcstats_l2_cdata_free_on_write=0i,arcstats_l2_cksum_bad=0i,arcstats_l2_compress_failures=0i,arcstats_l2_compress_successes=0i,arcstats_l2_compress_zeros=0i,arcstats_l2_evict_l1cached=0i,arcstats_l2_evict_lock_retry=0i,arcstats_l2_evict_reading=0i,arcstats_l2_feeds=0i,arcstats_l2_free_on_write=0i,arcstats_l2_hdr_size=0i,arcstats_l2_hits=0i,arcstats_l2_io_error=0i,arcstats_l2_misses=0i,arcstats_l2_read_bytes=0i,arcstats_l2_rw_clash=0i,arcstats_l2_size=0i,arcstats_l2_write_buffer_bytes_scanned=0i,arcstats_l2_write_buffer_iter=0i,arcstats_l2_write_buffer_list_iter=0i,arcstats_l2_write_buffer_list_null_iter=0i,arcstats_l2_write_bytes=0i,arcstats_l2_write_full=0i,arcstats_l2_write_in_l2=0i,arcstats_l2_write_io_in_progress=0i,arcstats_l2_write_not_cacheable=380i,arcstats_l2_write_passed_headroom=0i,arcstats_l2_write_pios=0i,arcstats_l2_write_spa_mismatch=0i,arcstats_l2_write_trylock_fail=0i,arcstats_l2_writes_done=0i,arcstats_l2_writes_error=0i,arcstats_l2_writes_lock_retry=0i,arcstats_l2_writes_sent=0i,arcstats_memory_throttle_count=0i,arcstats_metadata_size=17014784i,arcstats_mfu_evictable_data=0i,arcstats_mfu_evictable_metadata=16384i,arcstats_mfu_ghost_evictable_data=5723648i,arcstats_mfu_ghost_evictable_metadata=10709504i,arcstats_mfu_ghost_hits=1315619i,arcstats_mfu_ghost_size=16433152i,arcstats_mfu_hits=7646611i,arcstats_mfu_size=305152i,arcstats_misses=3676993i,arcstats_mru_evictable_data=0i,arcstats_mru_evictable_metadata=0i,arcstats_mru_ghost_evictable_data=0i,arcstats_mru_ghost_evictable_metadata=80896i,arcstats_mru_ghost_hits=324250i,arcstats_mru_ghost_size=80896i,arcstats_mru_hits=8844526i,arcstats_mru_size=16693248i,arcstats_mutex_miss=354023i,arcstats_other_size=34397040i,arcstats_p=4172800i,arcstats_prefetch_data_hits=0i,arcstats_prefetch_data_misses=0i,arcstats_prefetch_metadata_hits=24641i,arcstats_prefetch_metadata_misses=3974i,arcstats_size=51977456i,arcstats_sync_wait_for_async=0i,vdev_cache_stats_delegations=779i,vdev_cache_stats_hits=323123i,vdev_cache_stats_misses=59929i,zfetchstats_hits=0i,zfetchstats_max_streams=0i,zfetchstats_misses=0i 1464473103634124908
+```
+
+## Description
+
+A short description of some of the metrics.
+
+### ARC Stats
+
+`arcstats_hits` Total number of cache hits in the arc.
+
+`arcstats_misses` Total number of cache misses in the arc.
+
+`arcstats_demand_data_hits` Number of cache hits for demand data. This is what
+matters (is good) for your application/share.
+
+`arcstats_demand_data_misses` Number of cache misses for demand data. This is
+what matters (is bad) for your application/share.
+
+`arcstats_demand_metadata_hits` Number of cache hits for demand metadata. This
+matters (is good) for getting filesystem data (ls, find, …).
+
+`arcstats_demand_metadata_misses` Number of cache misses for demand metadata.
+This matters (is bad) for getting filesystem data (ls, find, …).
+
+`arcstats_prefetch_data_hits` The ZFS prefetcher tried to prefetch something,
+but it was already cached (boring).
+
+`arcstats_prefetch_data_misses` The ZFS prefetcher prefetched something that
+was not in the cache (good job; it could become a demand hit in the future).
+
+`arcstats_prefetch_metadata_hits` Same as above, but for metadata
+
+`arcstats_prefetch_metadata_misses` Same as above, but for metadata
+
+`arcstats_mru_hits` Cache hit in the “most recently used cache”; the entry is
+moved to the mfu cache.
+
+`arcstats_mru_ghost_hits` Cache hit in the “most recently used ghost list”. We
+had this item in the cache but evicted it; maybe the mru cache size should be
+increased.
+
+`arcstats_mfu_hits` Cache hit in the “most frequently used cache”; the entry is
+moved to the beginning of the mfu cache.
+
+`arcstats_mfu_ghost_hits` Cache hit in the “most frequently used ghost list”. We
+had this item in the cache but evicted it; maybe the mfu cache size should be
+increased.
+
+`arcstats_allocated` New data is written to the cache.
+
+`arcstats_deleted` Old data is evicted (deleted) from the cache.
+
+`arcstats_evict_l2_cached` We evicted something from the arc, but it is still
+cached in the l2 if we need it.
+
+`arcstats_evict_l2_eligible` We evicted something from the arc that is not in
+the l2, which is unfortunate (maybe there was not enough time to store it
+there).
+
+`arcstats_evict_l2_ineligible` We evicted something that cannot be stored in
+the l2. Reasons could be:
+
+- We have multiple pools and evicted something from a pool without an l2 device.
+- The `secondarycache` ZFS property excludes it.
+
+`arcstats_c` Arc target size; this is the size the system thinks the arc should
+have.
+
+`arcstats_size` Total size of the arc.
+
+`arcstats_l2_hits` Hits to the L2 cache. (It was not in the arc, but in the l2
+cache)
+
+`arcstats_l2_misses` Miss to the L2 cache. (It was not in the arc, and not in
+the l2 cache)
+
+`arcstats_l2_size` Size of the l2 cache.
+
+`arcstats_l2_hdr_size` Size of the metadata in the arc (RAM) used to manage the
+l2 cache (i.e. to look up whether something is in the l2).
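+
+As a usage sketch, the cumulative hit and miss counters above can be combined
+into a lifetime ARC hit ratio. The InfluxQL below assumes the measurement
+(`zfs`), tag (`pools`), and field names shown in the example output at the top
+of this page:
+
+```sql
+SELECT last("arcstats_hits") / (last("arcstats_hits") + last("arcstats_misses")) AS "arc_hit_ratio"
+FROM "zfs" GROUP BY "pools"
+```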
+
+### Zfetch Stats
+
+`zfetchstats_hits` Counts cache hits to items that are in the cache because of
+the prefetcher.
+
+`zfetchstats_misses` Counts prefetch cache misses.
+
+`zfetchstats_colinear_hits` Counts cache hits to items that are in the cache
+because of the prefetcher (prefetched linear reads).
+
+`zfetchstats_stride_hits` Counts cache hits to items that are in the cache
+because of the prefetcher (prefetched stride reads).
+
+### Vdev Cache Stats (FreeBSD only)
+
+Note: the vdev cache is deprecated in some ZFS implementations.
+
+`vdev_cache_stats_hits` Hits to the vdev (device level) cache.
+
+`vdev_cache_stats_misses` Misses to the vdev (device level) cache.
+
+### ABD Stats (Linux Only)
+
+ABD is a linear/scatter dual-typed buffer for the ARC.
+
+`abdstats_linear_cnt` number of linear ABDs which are currently allocated
+
+`abdstats_linear_data_size` amount of data stored in all linear ABDs
+
+`abdstats_scatter_cnt` number of scatter ABDs which are currently allocated
+
+`abdstats_scatter_data_size` amount of data stored in all scatter ABDs
+
+### DMU Stats (Linux Only)
+
+`dmu_tx_dirty_throttle` Counts writes throttled because the amount of dirty
+data grew too large.
+
+`dmu_tx_memory_reclaim` Counts when memory is low and throttling activity.
+
+`dmu_tx_memory_reserve` Counts when the memory footprint of the txg exceeds the
+ARC size.
+
+### Fault Management Ereport errors (Linux Only)
+
+`fm_erpt-dropped` Counts when an error report cannot be created (e.g. available
+memory is too low).
+
+### ZIL (Linux Only)
+
+Note: `zil` measurements in `kstatMetrics` are system-wide; in `poolMetrics`
+they are pool-wide.
+
+`zil_commit_count` Counts when ZFS transactions are committed to a ZIL.
diff --git a/content/telegraf/v1/input-plugins/zipkin/_index.md b/content/telegraf/v1/input-plugins/zipkin/_index.md
new file mode 100644
index 000000000..43678cc59
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/zipkin/_index.md
@@ -0,0 +1,230 @@
+---
+description: "Telegraf plugin for collecting metrics from Zipkin"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Zipkin
+    identifier: input-zipkin
+tags: [Zipkin, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Zipkin Input Plugin
+
+This plugin implements the Zipkin http server to gather trace and timing data
+needed to troubleshoot latency problems in microservice architectures.
+
+__Please note:__ This plugin is experimental; its data schema may be subject to
+change based on its primary use cases and the evolution of the OpenTracing
+standard.
+
+## Service Input <!-- @/docs/includes/service_input.md -->
+
+This plugin is a service input. Normal plugins gather metrics determined by the
+interval setting. Service plugins start a service that listens and waits for
+metrics or events to occur. Service plugins have two key differences from
+normal plugins:
+
+1. The global or plugin specific `interval` setting may not apply
+2. The CLI options of `--test`, `--test-wait`, and `--once` may not produce
+   output for this plugin
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# This plugin implements the Zipkin http server to gather trace and timing data needed to troubleshoot latency problems in microservice architectures.
+[[inputs.zipkin]]
+  ## URL path for span data
+  # path = "/api/v1/spans"
+
+  ## Port on which Telegraf listens
+  # port = 9411
+
+  ## Maximum duration before timing out read of the request
+  # read_timeout = "10s"
+  ## Maximum duration before timing out write of the response
+  # write_timeout = "10s"
+```
+
+The plugin accepts spans in `JSON` or `thrift` if the `Content-Type` is
+`application/json` or `application/x-thrift`, respectively.  If `Content-Type`
+is not set, then the plugin assumes it is `JSON` format.
+
+## Tracing
+
+This plugin uses annotations, tags, and fields to track data from spans.
+
+- __TRACE:__ is a set of spans that share a single root span.
+Traces are built by collecting all Spans that share a traceId.
+
+- __SPAN:__ is a set of Annotations and BinaryAnnotations that correspond to a particular RPC.
+
+- __Annotations:__ for each annotation and binary annotation of a span, a metric is output. _Records an occurrence in time at the beginning and end of a request._
+
+  Annotations may have the following values:
+
+  - __CS (client start):__ beginning of the span; the request is made.
+  - __SR (server receive):__ the server receives the request and starts
+      processing it; network latency and clock jitter separate it from `cs`.
+  - __SS (server send):__ the server is done processing and sends the response
+      back to the client; the time taken to process the request separates it
+      from `sr`.
+  - __CR (client receive):__ end of the span; the client receives the response
+      from the server. The RPC is considered complete with this annotation.
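+
+The annotation timestamps make span timing arithmetic straightforward. A
+minimal sketch using the client-side timestamps from the trace example later
+on this page (microsecond epoch values):
+
+```python
+# Client-observed duration is simply cr - cs; values are taken from the
+# "Trace Example from Zipkin model" JSON on this page.
+cs = 1458702548786000  # client start (request sent)
+cr = 1458702548799000  # client receive (response received)
+client_duration_us = cr - cs
+print(client_duration_us)  # 13000, matching the span's "duration" field
+```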
+
+## Metrics
+
+- __"duration_ns":__ The time in nanoseconds between the end and beginning of a span.
+
+### Tags
+
+- __"id":__               The 64-bit ID of the span.
+- __"parent_id":__        An ID associated with a particular child span.  If there is no child span, the parent ID is set to ID.
+- __"trace_id":__         The 64- or 128-bit ID of a particular trace. Every span in a trace shares this ID. It is the concatenation of the high and low values, converted to hexadecimal.
+- __"name":__             Defines a span.
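+
+For 128-bit traces, the `trace_id` value is the hexadecimal concatenation of
+the high and low halves. A minimal sketch, assuming each half is an unsigned
+64-bit integer (the high value here is hypothetical; the low value is the
+`traceId` from the example trace on this page):
+
+```python
+# Concatenate zero-padded hex renderings of the two 64-bit halves.
+high = 0x1                  # hypothetical high 64 bits
+low = 0xBD7A977555F6B982    # low 64 bits
+trace_id = f"{high:016x}{low:016x}"
+print(trace_id)  # 0000000000000001bd7a977555f6b982
+```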
+
+#### Annotations have these additional tags
+
+- __"service_name":__     Defines a service
+- __"annotation":__       The value of an annotation
+- __"endpoint_host":__    The IPv4 address concatenated with the listening port; if the port is not present, only the address is used
+
+#### Binary Annotations have these additional tags
+
+- __"service_name":__     Defines a service
+- __"annotation":__       The value of an annotation
+- __"endpoint_host":__    The IPv4 address concatenated with the listening port; if the port is not present, only the address is used
+- __"annotation_key":__   A label describing the annotation
+
+## Sample Queries
+
+__Get All Span Names for Service__ `my_web_server`
+
+```sql
+SHOW TAG VALUES FROM "zipkin" WITH KEY = "name" WHERE "service_name" = 'my_web_server'
+```
+
+- __Description:__  returns a list containing the names of the spans which have annotations with the given `service_name` of `my_web_server`.
+
+__Get All Service Names__
+
+```sql
+SHOW TAG VALUES FROM "zipkin" WITH KEY = "service_name"
+```
+
+- __Description:__  returns a list of all `distinct` endpoint service names.
+
+__Find spans with the longest duration__
+
+```sql
+SELECT max("duration_ns") FROM "zipkin" WHERE "service_name" = 'my_service' AND "name" = 'my_span_name' AND time > now() - 20m GROUP BY "trace_id",time(30s) LIMIT 5
+```
+
+- __Description:__  In the last 20 minutes, find the top 5 longest span durations for service `my_service` and span name `my_span_name`
+
+### Recommended InfluxDB setup
+
+This test will create high-cardinality data, so we recommend using the
+[InfluxDB TSI engine](https://www.influxdata.com/path-1-billion-time-series-influxdb-high-cardinality-indexing-ready-testing/).
+
+#### How To Set Up InfluxDB For Work With Zipkin
+
+##### Steps
+
+1. ___Update___ InfluxDB to >= 1.3 in order to use the new TSI engine.
+
+2. ___Generate___ a config file with the following command:
+
+   ```sh
+   influxd config > /path/for/config/file
+   ```
+
+3. ___Add___ the following to your config file, under the `[data]` tab:
+
+   ```toml
+   [data]
+     index-version = "tsi1"
+   ```
+
+4. ___Start___ `influxd` with your new config file:
+
+   ```sh
+   influxd -config=/path/to/your/config/file
+   ```
+
+5. ___Update___ your retention policy:
+
+   ```sql
+   ALTER RETENTION POLICY "autogen" ON "telegraf" DURATION 1d SHARD DURATION 30m
+   ```
+
+### Example Input Trace
+
+- [Cli microservice with two services Test](https://github.com/openzipkin/zipkin-go-opentracing/tree/master/examples/cli_with_2_services)
+- [Test data from distributed trace repo sample json](https://github.com/mattkanwisher/distributedtrace/blob/master/testclient/sample.json)
+
+#### [Trace Example from Zipkin model](http://zipkin.io/pages/data_model.html)
+
+```json
+{
+  "traceId": "bd7a977555f6b982",
+  "name": "query",
+  "id": "be2d01e33cc78d97",
+  "parentId": "ebf33e1a81dc6f71",
+  "timestamp": 1458702548786000,
+  "duration": 13000,
+  "annotations": [
+    {
+      "endpoint": {
+        "serviceName": "zipkin-query",
+        "ipv4": "192.168.1.2",
+        "port": 9411
+      },
+      "timestamp": 1458702548786000,
+      "value": "cs"
+    },
+    {
+      "endpoint": {
+        "serviceName": "zipkin-query",
+        "ipv4": "192.168.1.2",
+        "port": 9411
+      },
+      "timestamp": 1458702548799000,
+      "value": "cr"
+    }
+  ],
+  "binaryAnnotations": [
+    {
+      "key": "jdbc.query",
+      "value": "select distinct `zipkin_spans`.`trace_id` from `zipkin_spans` join `zipkin_annotations` on (`zipkin_spans`.`trace_id` = `zipkin_annotations`.`trace_id` and `zipkin_spans`.`id` = `zipkin_annotations`.`span_id`) where (`zipkin_annotations`.`endpoint_service_name` = ? and `zipkin_spans`.`start_ts` between ? and ?) order by `zipkin_spans`.`start_ts` desc limit ?",
+      "endpoint": {
+        "serviceName": "zipkin-query",
+        "ipv4": "192.168.1.2",
+        "port": 9411
+      }
+    },
+    {
+      "key": "sa",
+      "value": true,
+      "endpoint": {
+        "serviceName": "spanstore-jdbc",
+        "ipv4": "127.0.0.1",
+        "port": 3306
+      }
+    }
+  ]
+}
+```
+
+## Example Output
diff --git a/content/telegraf/v1/input-plugins/zookeeper/_index.md b/content/telegraf/v1/input-plugins/zookeeper/_index.md
new file mode 100644
index 000000000..870d76781
--- /dev/null
+++ b/content/telegraf/v1/input-plugins/zookeeper/_index.md
@@ -0,0 +1,119 @@
+---
+description: "Telegraf plugin for collecting metrics from Zookeeper"
+menu:
+  telegraf_v1_ref:
+    parent: input_plugins_reference
+    name: Zookeeper
+    identifier: input-zookeeper
+tags: [Zookeeper, "input-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Zookeeper Input Plugin
+
+The zookeeper plugin collects variables output by the 'mntr' command; see
+[Zookeeper Admin](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html)
+for details.
+
+If the Prometheus metrics provider is enabled in Zookeeper, use the
+`prometheus` input plugin instead. By default, the Prometheus metrics are
+exposed at the `http://<ip>:7000/metrics` URL. Using the `prometheus` input
+plugin provides a native solution to read and process Prometheus metrics, while
+this plugin is specific to using `mntr` to collect the Java Properties format.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Reads 'mntr' stats from one or many zookeeper servers
+[[inputs.zookeeper]]
+  ## An array of addresses to gather stats about. Specify an IP or hostname
+  ## with port, e.g. localhost:2181, 10.0.0.1:2181, etc.
+
+  ## If no servers are specified, then localhost is used as the host.
+  ## If no port is specified, 2181 is used
+  servers = [":2181"]
+
+  ## Timeout for metric collections from all servers.  Minimum timeout is "1s".
+  # timeout = "5s"
+
+  ## Float Parsing - the initial implementation forced any value unable to be
+  ## parsed as an int to be a string. Setting this to "float" will attempt to
+  ## parse float values as floats and not strings. This would break existing
+  ## metrics and may cause issues if a value switches between a float and int.
+  # parse_floats = "string"
+
+  ## Optional TLS Config
+  # enable_tls = false
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## If false, skip chain & host verification
+  # insecure_skip_verify = true
+```
+
+## Metrics
+
+Exact field names are based on Zookeeper response and may vary between
+configuration, platform, and version.
+
+- zookeeper
+  - tags:
+    - server
+    - port
+    - state
+  - fields:
+    - approximate_data_size (integer)
+    - avg_latency (integer)
+    - ephemerals_count (integer)
+    - max_file_descriptor_count (integer)
+    - max_latency (integer)
+    - min_latency (integer)
+    - num_alive_connections (integer)
+    - open_file_descriptor_count (integer)
+    - outstanding_requests (integer)
+    - packets_received (integer)
+    - packets_sent (integer)
+    - version (string)
+    - watch_count (integer)
+    - znode_count (integer)
+    - followers (integer, leader only)
+    - synced_followers (integer, leader only)
+    - pending_syncs (integer, leader only)
+
+## Debugging
+
+If you have any issues, please check the direct Zookeeper output using netcat:
+
+```sh
+$ echo mntr | nc localhost 2181
+zk_version      3.4.9-3--1, built on Thu, 01 Jun 2017 16:26:44 -0700
+zk_avg_latency  0
+zk_max_latency  0
+zk_min_latency  0
+zk_packets_received     8
+zk_packets_sent 7
+zk_num_alive_connections        1
+zk_outstanding_requests 0
+zk_server_state standalone
+zk_znode_count  129
+zk_watch_count  0
+zk_ephemerals_count     0
+zk_approximate_data_size        10044
+zk_open_file_descriptor_count   44
+zk_max_file_descriptor_count    4096
+```
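+
+To see how the `mntr` lines above map to fields, here is a minimal parsing
+sketch. It mirrors the documented behavior (the `zk_` prefix is dropped and
+numeric values become integer fields) but is not the plugin's actual code:
+
+```python
+# Parse tab-separated "mntr" key/value lines into a fields dict.
+raw = (
+    "zk_avg_latency\t0\n"
+    "zk_znode_count\t129\n"
+    "zk_version\t3.4.9-3--1"
+)
+
+fields = {}
+for line in raw.splitlines():
+    key, _, value = line.partition("\t")
+    key = key.removeprefix("zk_")
+    try:
+        fields[key] = int(value)   # numeric values become integer fields
+    except ValueError:
+        fields[key] = value        # non-numeric values stay strings
+
+print(fields)  # {'avg_latency': 0, 'znode_count': 129, 'version': '3.4.9-3--1'}
+```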
+
+## Example Output
+
+```text
+zookeeper,server=localhost,port=2181,state=standalone ephemerals_count=0i,approximate_data_size=10044i,open_file_descriptor_count=44i,max_latency=0i,packets_received=7i,outstanding_requests=0i,znode_count=129i,max_file_descriptor_count=4096i,version="3.4.9-3--1",avg_latency=0i,packets_sent=6i,num_alive_connections=1i,watch_count=0i,min_latency=0i 1522351112000000000
+```
diff --git a/content/telegraf/v1/output-plugins/_index.md b/content/telegraf/v1/output-plugins/_index.md
new file mode 100644
index 000000000..7ef0a29ae
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/_index.md
@@ -0,0 +1,14 @@
+---
+title: "Telegraf Output Plugins"
+description: "Telegraf output plugins send metrics to various destinations."
+menu:
+  telegraf_v1_ref:
+    name: Output plugins
+    identifier: output_plugins_reference
+    weight: 20
+tags: [output-plugins]
+---
+
+Telegraf output plugins send metrics to various destinations.
+
+{{<children>}}
diff --git a/content/telegraf/v1/output-plugins/amon/_index.md b/content/telegraf/v1/output-plugins/amon/_index.md
new file mode 100644
index 000000000..5c6ed5330
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/amon/_index.md
@@ -0,0 +1,46 @@
+---
+description: "Telegraf plugin for sending metrics to Amon"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Amon
+    identifier: output-amon
+tags: [Amon, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Amon Output Plugin
+
+This plugin writes to [Amon](https://www.amon.cx) and requires a `serverkey`
+and `amoninstance` URL, which can be obtained
+[here](https://www.amon.cx/docs/monitoring/) for your account.
+
+If the point value being sent cannot be converted to a float64, the metric is
+skipped.
+
+Metrics are grouped by converting any `_` characters to `.` in the Point Name.
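+
+A one-line sketch of this conversion (the metric name below is illustrative,
+not taken from a real Amon setup):
+
+```python
+# Amon groups metrics by turning "_" into "." in the point name.
+def amon_point_name(name: str) -> str:
+    return name.replace("_", ".")
+
+print(amon_point_name("system_load1"))  # system.load1
+```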
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for Amon Server to send metrics to.
+[[outputs.amon]]
+  ## Amon Server Key
+  server_key = "my-server-key" # required.
+
+  ## Amon Instance URL
+  amon_instance = "https://youramoninstance" # required
+
+  ## Connection timeout.
+  # timeout = "5s"
+```
diff --git a/content/telegraf/v1/output-plugins/amqp/_index.md b/content/telegraf/v1/output-plugins/amqp/_index.md
new file mode 100644
index 000000000..205804350
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/amqp/_index.md
@@ -0,0 +1,151 @@
+---
+description: "Telegraf plugin for sending metrics to AMQP"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: AMQP
+    identifier: output-amqp
+tags: [AMQP, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# AMQP Output Plugin
+
+This plugin writes to an AMQP 0-9-1 exchange, a prominent implementation of
+which is [RabbitMQ](https://www.rabbitmq.com/).
+
+This plugin does not bind the exchange to a queue.
+
+For an introduction to AMQP see:
+
+- [amqp: concepts](https://www.rabbitmq.com/tutorials/amqp-concepts.html)
+- [rabbitmq: getting started](https://www.rabbitmq.com/getstarted.html)
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `username` and
+`password` options. See the
+[secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets)
+for more details on how to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Publishes metrics to an AMQP broker
+[[outputs.amqp]]
+  ## Brokers to publish to.  If multiple brokers are specified a random broker
+  ## will be selected anytime a connection is established.  This can be
+  ## helpful for load balancing when not using a dedicated load balancer.
+  brokers = ["amqp://localhost:5672/influxdb"]
+
+  ## Maximum messages to send over a connection.  Once this is reached, the
+  ## connection is closed and a new connection is made.  This can be helpful for
+  ## load balancing when not using a dedicated load balancer.
+  # max_messages = 0
+
+  ## Exchange to declare and publish to.
+  exchange = "telegraf"
+
+  ## Exchange type; common types are "direct", "fanout", "topic", "header", "x-consistent-hash".
+  # exchange_type = "topic"
+
+  ## If true, exchange will be passively declared.
+  # exchange_passive = false
+
+  ## Exchange durability can be either "transient" or "durable".
+  # exchange_durability = "durable"
+
+  ## Additional exchange arguments.
+  # exchange_arguments = { }
+  # exchange_arguments = {"hash_property" = "timestamp"}
+
+  ## Authentication credentials for the PLAIN auth_method.
+  # username = ""
+  # password = ""
+
+  ## Auth method. PLAIN and EXTERNAL are supported
+  ## Using EXTERNAL requires enabling the rabbitmq_auth_mechanism_ssl plugin as
+  ## described here: https://www.rabbitmq.com/plugins.html
+  # auth_method = "PLAIN"
+
+  ## Metric tag to use as a routing key.
+  ##   ie, if this tag exists, its value will be used as the routing key
+  # routing_tag = "host"
+
+  ## Static routing key.  Used when no routing_tag is set or as a fallback
+  ## when the tag specified in routing tag is not found.
+  # routing_key = ""
+  # routing_key = "telegraf"
+
+  ## Delivery Mode controls if a published message is persistent.
+  ##   One of "transient" or "persistent".
+  # delivery_mode = "transient"
+
+  ## Static headers added to each published message.
+  # headers = { }
+  # headers = {"database" = "telegraf", "retention_policy" = "default"}
+
+  ## Connection timeout.  If not provided, will default to 5s.  0s means no
+  ## timeout (not recommended).
+  # timeout = "5s"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## Optional Proxy Configuration
+  # use_proxy = false
+  # proxy_url = "localhost:8888"
+
+  ## If true use batch serialization format instead of line based delimiting.
+  ## Only applies to data formats which are not line based such as JSON.
+  ## Recommended to set to true.
+  # use_batch_format = false
+
+  ## Content encoding for message payloads. Can be set to "gzip", or to
+  ## "identity" to apply no encoding.
+  ##
+  ## Please note that when use_batch_format = false each amqp message contains
+  ## only a single metric; it is recommended to use compression with batch
+  ## format for best results.
+  # content_encoding = "identity"
+
+  ## Data format to output.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
+  # data_format = "influx"
+```
+
+### Routing
+
+If `routing_tag` is set, and the tag is defined on the metric, the value of the
+tag is used as the routing key.  Otherwise the value of `routing_key` is used
+directly.  If both are unset the empty string is used.
+
+Exchange types that do not use a routing key, `direct` and `header`, always use
+the empty string as the routing key.
+
+Metrics are published in batches based on the final routing key.
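+
+The selection rules above can be sketched as follows. This is a simplification
+for illustration, not the plugin's code; the tag and key values come from the
+sample configuration:
+
+```python
+# Routing key selection: exchange types without routing keys get "",
+# otherwise the routing_tag value wins, falling back to the static routing_key.
+def select_routing_key(tags, routing_tag="host", routing_key="",
+                       exchange_type="topic"):
+    if exchange_type in ("direct", "header"):
+        return ""  # these exchange types always use the empty string
+    if routing_tag and routing_tag in tags:
+        return tags[routing_tag]
+    return routing_key
+
+print(select_routing_key({"host": "web01"}))  # web01
+print(select_routing_key({}, routing_key="telegraf"))  # telegraf
+```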
+
+### Proxy
+
+If you want to use a proxy, you need to set `use_proxy = true`. This will
+use the system's proxy settings to determine the proxy URL. If you need to
+specify a proxy URL manually, you can do so by using `proxy_url`, overriding
+the system settings.
diff --git a/content/telegraf/v1/output-plugins/application_insights/_index.md b/content/telegraf/v1/output-plugins/application_insights/_index.md
new file mode 100644
index 000000000..d7dc98d17
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/application_insights/_index.md
@@ -0,0 +1,75 @@
+---
+description: "Telegraf plugin for sending metrics to Application Insights"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Application Insights
+    identifier: output-application_insights
+tags: [Application Insights, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Application Insights Output Plugin
+
+This plugin writes Telegraf metrics to [Azure Application
+Insights](https://azure.microsoft.com/en-us/services/application-insights/).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Send metrics to Azure Application Insights
+[[outputs.application_insights]]
+  ## Instrumentation key of the Application Insights resource.
+  instrumentation_key = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx"
+
+  ## Regions that require endpoint modification https://docs.microsoft.com/en-us/azure/azure-monitor/app/custom-endpoints
+  # endpoint_url = "https://dc.services.visualstudio.com/v2/track"
+
+  ## Timeout for closing (default: 5s).
+  # timeout = "5s"
+
+  ## Enable additional diagnostic logging.
+  # enable_diagnostic_logging = false
+
+  ## NOTE: Due to the way TOML is parsed, tables must be at the END of the
+  ## plugin definition, otherwise additional config options are read as part of
+  ## the table
+
+  ## Context Tag Sources add Application Insights context tags to a tag value.
+  ##
+  ## For list of allowed context tag keys see:
+  ## https://github.com/microsoft/ApplicationInsights-Go/blob/master/appinsights/contracts/contexttagkeys.go
+  # [outputs.application_insights.context_tag_sources]
+  #   "ai.cloud.role" = "kubernetes_container_name"
+  #   "ai.cloud.roleInstance" = "kubernetes_pod_name"
+```
+
+## Metric Encoding
+
+For each field, an Application Insights telemetry record is created, named
+based on the measurement name and the field name.
+
+**Example:** The following creates the telemetry records `foo_first` and `foo_second`:
+
+```text
+foo,host=a first=42,second=43 1525293034000000000
+```
+
+In the special case of a single field named `value`, a single telemetry record
+is created, named using only the measurement name.
+
+**Example:** The following creates a telemetry record `bar`:
+
+```text
+bar,host=a value=42 1525293034000000000
+```
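+
+The naming rule can be sketched as follows (an illustration of the documented
+behavior, not the plugin's code):
+
+```python
+# One telemetry record per field, named "<measurement>_<field>"; a sole field
+# named "value" collapses to the bare measurement name.
+def record_names(measurement, fields):
+    if list(fields) == ["value"]:
+        return [measurement]
+    return [f"{measurement}_{name}" for name in fields]
+
+print(record_names("foo", {"first": 42, "second": 43}))  # ['foo_first', 'foo_second']
+print(record_names("bar", {"value": 42}))  # ['bar']
+```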
diff --git a/content/telegraf/v1/output-plugins/azure_data_explorer/_index.md b/content/telegraf/v1/output-plugins/azure_data_explorer/_index.md
new file mode 100644
index 000000000..3b4f4b38f
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/azure_data_explorer/_index.md
@@ -0,0 +1,308 @@
+---
+description: "Telegraf plugin for sending metrics to Azure Data Explorer"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Azure Data Explorer
+    identifier: output-azure_data_explorer
+tags: [Azure Data Explorer, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Azure Data Explorer Output Plugin
+
+Azure Data Explorer is a distributed, columnar store, purpose built for any type
+of logs, metrics and time series data.
+
+This plugin writes data collected by any of the Telegraf input plugins to
+[Azure Data Explorer](https://docs.microsoft.com/en-us/azure/data-explorer), [Azure Synapse Data Explorer](https://docs.microsoft.com/en-us/azure/synapse-analytics/data-explorer/data-explorer-overview),
+and [Real time analytics in Fabric](https://learn.microsoft.com/en-us/fabric/real-time-analytics/overview).
+
+## Pre-requisites
+
+- [Create Azure Data Explorer cluster and
+  database](https://docs.microsoft.com/en-us/azure/data-explorer/create-cluster-database-portal)
+- VM/compute or container to host Telegraf - it could be hosted locally where an
+  app/service to be monitored is deployed or remotely on a dedicated monitoring
+  compute/container.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Sends metrics to Azure Data Explorer
+[[outputs.azure_data_explorer]]
+  ## The URI property of the Azure Data Explorer resource on Azure
+  ## ex: endpoint_url = https://myadxresource.australiasoutheast.kusto.windows.net
+  endpoint_url = ""
+
+  ## The Azure Data Explorer database that the metrics will be ingested into.
+  ## The plugin will NOT generate this database automatically, it's expected that this database already exists before ingestion.
+  ## ex: "exampledatabase"
+  database = ""
+
+  ## Timeout for Azure Data Explorer operations
+  # timeout = "20s"
+
+  ## Type of metrics grouping used when pushing to Azure Data Explorer.
+  ## Default is "TablePerMetric", which uses one table per metric name.
+  ## For more information, please check the plugin README.
+  # metrics_grouping_type = "TablePerMetric"
+
+  ## Name of the single table to store all the metrics (Only needed if metrics_grouping_type is "SingleTable").
+  # table_name = ""
+
+  ## Creates tables and relevant mapping if set to true(default).
+  ## Skips table and mapping creation if set to false, this is useful for running Telegraf with the lowest possible permissions i.e. table ingestor role.
+  # create_tables = true
+
+  ##  Ingestion method to use.
+  ##  Available options are
+  ##    - managed  --  streaming ingestion with fallback to batched ingestion or the "queued" method below
+  ##    - queued   --  queue up metrics data and process sequentially
+  # ingestion_type = "queued"
+```
+
+## Metrics Grouping
+
+Metrics can be grouped in two ways to be sent to Azure Data Explorer. To specify
+which metric grouping type the plugin should use, the respective value should be
+given to the `metrics_grouping_type` in the config file. If no value is given to
+`metrics_grouping_type`, by default, the metrics will be grouped using
+`TablePerMetric`.
+
+### TablePerMetric
+
+The plugin will group the metrics by metric name and send each group of
+metrics to a separate Azure Data Explorer table. If the table doesn't exist,
+the plugin will create it; if the table exists, the plugin will try to merge
+the Telegraf metric schema into the existing table. For more information about
+the merge process, see the [`.create-merge` documentation](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/create-merge-table-command).
+
+The table name will match the `name` property of the metric, so the metric
+name must comply with the Azure Data Explorer table naming constraints,
+particularly if you plan to add a prefix to the metric name.
+
+### SingleTable
+
+The plugin will send all the metrics received to a single Azure Data Explorer
+table. The name of the table must be supplied via `table_name` in the config
+file. If the table doesn't exist, the plugin will create it; if the table
+exists, the plugin will try to merge the Telegraf metric schema into the
+existing table. For more information about the merge process, see the
+[`.create-merge` documentation](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/create-merge-table-command).
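+
+For example, a minimal sketch of a `SingleTable` configuration, assuming a
+table named `telegraf_metrics` (the endpoint and database values below are the
+placeholders used earlier in this document):
+
+```toml
+[[outputs.azure_data_explorer]]
+  endpoint_url = "https://myadxresource.australiasoutheast.kusto.windows.net"
+  database = "exampledatabase"
+  metrics_grouping_type = "SingleTable"
+  table_name = "telegraf_metrics"
+```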
+
+## Tables Schema
+
+The schema of the Azure Data Explorer table will match the structure of the
+Telegraf `Metric` object. The corresponding Azure Data Explorer command
+generated by the plugin would be like the following:
+
+```text
+.create-merge table ['table-name']  (['fields']:dynamic, ['name']:string, ['tags']:dynamic, ['timestamp']:datetime)
+```
+
+The corresponding table mapping would be like the following:
+
+```text
+.create-or-alter table ['table-name'] ingestion json mapping 'table-name_mapping' '[{"column":"fields", "Properties":{"Path":"$[\'fields\']"}},{"column":"name", "Properties":{"Path":"$[\'name\']"}},{"column":"tags", "Properties":{"Path":"$[\'tags\']"}},{"column":"timestamp", "Properties":{"Path":"$[\'timestamp\']"}}]'
+```
+
+**Note**: This plugin automatically creates Azure Data Explorer tables and the
+corresponding table mappings using the commands shown above.
+
+## Ingestion type
+
+**Note**: When using the `managed` option, [streaming
+ingestion](https://aka.ms/AAhlg6s) must be enabled on the ADX cluster. Use the
+query below to check whether streaming ingestion is enabled:
+
+```kql
+.show database <DB-Name> policy streamingingestion
+```
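+
+If streaming ingestion is not enabled, it can usually be turned on at the
+database level with a command like the following sketch; verify against the
+Azure Data Explorer documentation before changing policies on your cluster:
+
+```kql
+.alter database <DB-Name> policy streamingingestion enable
+```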
+
+## Authentication
+
+### Supported Authentication Methods
+
+This plugin provides several types of authentication. The plugin checks for
+specific environment variables and chooses the authentication method
+accordingly.
+
+These methods are:
+
+1. AAD Application Tokens (Service Principals with secrets or certificates).
+
+    For guidance on how to create and register an App in Azure Active
+    Directory, check [this article](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app#register-an-application), and for more information on Service
+    Principals, check [this article](https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals).
+
+2. AAD User Tokens
+
+    - Allows Telegraf to authenticate like a user. This method is mainly used
+      for development purposes only.
+
+3. Managed Service Identity (MSI) token
+
+    - If you are running Telegraf from Azure VM or infrastructure, then this is
+      the preferred authentication method.
+
+Regardless of the method used, the designated principal must be assigned the
+`Database User` role at the database level in Azure Data Explorer. This role
+allows the plugin to create the required tables and ingest data into them. If
+`create_tables=false`, the designated principal only needs the `Database
+Ingestor` role.
+
+### Configurations of the chosen Authentication Method
+
+The plugin will authenticate using the first available of the following
+configurations. **It's important to understand that the assessment, and
+consequently the choice of authentication method, happens in the order listed
+below**:
+
+1. **Client Credentials**: Azure AD Application ID and Secret.
+
+    Set the following environment variables:
+
+    - `AZURE_TENANT_ID`: Specifies the Tenant to which to authenticate.
+    - `AZURE_CLIENT_ID`: Specifies the app client ID to use.
+    - `AZURE_CLIENT_SECRET`: Specifies the app secret to use.
+
+2. **Client Certificate**: Azure AD Application ID and X.509 Certificate.
+
+    - `AZURE_TENANT_ID`: Specifies the Tenant to which to authenticate.
+    - `AZURE_CLIENT_ID`: Specifies the app client ID to use.
+    - `AZURE_CERTIFICATE_PATH`: Specifies the certificate Path to use.
+    - `AZURE_CERTIFICATE_PASSWORD`: Specifies the certificate password to use.
+
+3. **Resource Owner Password**: Azure AD User and Password. This grant type is
+   *not recommended*, use device login instead if you need interactive login.
+
+    - `AZURE_TENANT_ID`: Specifies the Tenant to which to authenticate.
+    - `AZURE_CLIENT_ID`: Specifies the app client ID to use.
+    - `AZURE_USERNAME`: Specifies the username to use.
+    - `AZURE_PASSWORD`: Specifies the password to use.
+
+4. **Azure Managed Service Identity**: Delegate credential management to the
+   platform. Requires that code is running in Azure, e.g. on a VM. All
+   configuration is handled by Azure. See [Azure Managed Service Identity](https://docs.microsoft.com/en-us/azure/active-directory/msi-overview)
+   for more details. Only available when using the [Azure Resource
+   Manager](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview).
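+
+As a sketch, the client-credentials method (option 1) can be configured by
+exporting the environment variables before starting Telegraf; the values below
+are placeholders, not real credentials:
+
+```sh
+# Placeholder service-principal credentials -- substitute your own values.
+export AZURE_TENANT_ID="00000000-0000-0000-0000-000000000000"
+export AZURE_CLIENT_ID="00000000-0000-0000-0000-000000000000"
+export AZURE_CLIENT_SECRET="my-app-secret"
+# Telegraf reads these from the environment at startup.
+```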
+
+## Querying data collected in Azure Data Explorer
+
+The following examples show data transformations and queries that are useful
+for gaining insights.
+
+### Using SQL input plugin
+
+Sample SQL metrics data:
+
+name | tags | timestamp | fields
+-----|------|-----------|-------
+sqlserver_database_io|{"database_name":"azure-sql-db2","file_type":"DATA","host":"adx-vm","logical_filename":"tempdev","measurement_db_type":"AzureSQLDB","physical_filename":"tempdb.mdf","replica_updateability":"READ_WRITE","sql_instance":"adx-sql-server"}|2021-09-09T13:51:20Z|{"current_size_mb":16,"database_id":2,"file_id":1,"read_bytes":2965504,"read_latency_ms":68,"reads":47,"rg_read_stall_ms":42,"rg_write_stall_ms":0,"space_used_mb":0,"write_bytes":1220608,"write_latency_ms":103,"writes":149}
+sqlserver_waitstats|{"database_name":"azure-sql-db2","host":"adx-vm","measurement_db_type":"AzureSQLDB","replica_updateability":"READ_WRITE","sql_instance":"adx-sql-server","wait_category":"Worker Thread","wait_type":"THREADPOOL"}|2021-09-09T13:51:20Z|{"max_wait_time_ms":15,"resource_wait_ms":4469,"signal_wait_time_ms":0,"wait_time_ms":4469,"waiting_tasks_count":1464}
+
+Since the collected metrics object is a complex type, "fields" and "tags" are
+stored as the dynamic data type. There are multiple ways to query this data:
+
+1. Query JSON attributes directly: Azure Data Explorer provides the ability to
+   query JSON data in raw format without parsing it, so JSON attributes can be
+   queried directly in the following way:
+
+  ```text
+  Tablename
+  | where name == "sqlserver_azure_db_resource_stats" and todouble(fields.avg_cpu_percent) > 7
+  ```
+
+  ```text
+  Tablename
+  | distinct tostring(tags.database_name)
+  ```
+
+  **Note**: This approach can impact performance for large volumes of data; in
+  such cases, use the update policy approach described below.
+
+1. Use an [update
+   policy](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/updatepolicy):
+   Transform dynamic data type columns using an update policy. This is the
+   recommended, performant way to query over large volumes of data compared
+   to querying directly over JSON attributes:
+
+  ```kql
+  // Function to transform data
+  .create-or-alter function Transform_TargetTableName() {
+        SourceTableName
+        | mv-apply fields on (extend key = tostring(bag_keys(fields)[0]))
+        | project fieldname=key, value=todouble(fields[key]), name, tags, timestamp
+  }
+
+  // Create destination table with above query's results schema (if it doesn't exist already)
+  .set-or-append TargetTableName <| Transform_TargetTableName() | limit 0
+
+  // Apply update policy on destination table
+  .alter table TargetTableName policy update
+  @'[{"IsEnabled": true, "Source": "SourceTableName", "Query": "Transform_TargetTableName()", "IsTransactional": true, "PropagateIngestionProperties": false}]'
+  ```
+
+### Using syslog input plugin
+
+Sample syslog data:
+
+name | tags | timestamp | fields
+-----|------|-----------|-------
+syslog|{"appname":"azsecmond","facility":"user","host":"adx-linux-vm","hostname":"adx-linux-vm","severity":"info"}|2021-09-20T14:36:44Z|{"facility_code":1,"message":" 2021/09/20 14:36:44.890110 Failed to connect to mdsd: dial unix /var/run/mdsd/default_djson.socket: connect: no such file or directory","procid":"2184","severity_code":6,"timestamp":"1632148604890477000","version":1}
+syslog|{"appname":"CRON","facility":"authpriv","host":"adx-linux-vm","hostname":"adx-linux-vm","severity":"info"}|2021-09-20T14:37:01Z|{"facility_code":10,"message":" pam_unix(cron:session): session opened for user root by (uid=0)","procid":"26446","severity_code":6,"timestamp":"1632148621120781000","version":1}
+
+There are multiple ways to flatten dynamic columns, using the `extend` operator
+or the `bag_unpack` plugin. Either approach can be used in the update policy
+function `Transform_TargetTableName()` mentioned above:
+
+- Use the
+  [extend](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/extendoperator)
+  operator - This is the recommended approach because it is faster and more
+  robust than `bag_unpack`: even if the schema changes, it will not break
+  queries or dashboards.
+
+  ```text
+  Tablename
+  | extend facility_code=toint(fields.facility_code), message=tostring(fields.message), procid= tolong(fields.procid), severity_code=toint(fields.severity_code),
+  SysLogTimestamp=unixtime_nanoseconds_todatetime(tolong(fields.timestamp)), version= todouble(fields.version),
+  appname= tostring(tags.appname), facility= tostring(tags.facility),host= tostring(tags.host), hostname=tostring(tags.hostname), severity=tostring(tags.severity)
+  | project-away fields, tags
+  ```
+
+- Use the [bag_unpack
+  plugin](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/bag-unpackplugin)
+  to unpack dynamic-type columns automatically. This method can lead to issues
+  if the source schema changes, as it dynamically expands columns.
+
+  ```text
+  Tablename
+  | evaluate bag_unpack(tags, columnsConflict='replace_source')
+  | evaluate bag_unpack(fields, columnsConflict='replace_source')
+  ```
diff --git a/content/telegraf/v1/output-plugins/azure_monitor/_index.md b/content/telegraf/v1/output-plugins/azure_monitor/_index.md
new file mode 100644
index 000000000..1bba09e6a
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/azure_monitor/_index.md
@@ -0,0 +1,173 @@
+---
+description: "Telegraf plugin for sending metrics to Azure Monitor"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Azure Monitor
+    identifier: output-azure_monitor
+tags: [Azure Monitor, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Azure Monitor Output Plugin
+
+**The Azure Monitor custom metrics service is currently in preview and not
+available in a subset of Azure regions.**
+
+This plugin will send custom metrics to Azure Monitor. Azure Monitor has a
+metric resolution of one minute. To handle this in Telegraf, the Azure Monitor
+output plugin automatically aggregates metrics into one-minute buckets,
+which are then sent to Azure Monitor on every flush interval.
+
+The metrics from each input plugin will be written to a separate Azure Monitor
+namespace, prefixed with `Telegraf/` by default. The field name for each metric
+is written as the Azure Monitor metric name. All field values are written as a
+summarized set that includes: min, max, sum, count. Tags are written as a
+dimension on each Azure Monitor metric.
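+
+For illustration, a single line-protocol input such as the following
+(hypothetical host and values):
+
+```text
+cpu,host=web01 usage_idle=98.3 1682670910000000000
+```
+
+would be written under the namespace `Telegraf/cpu` as a metric named
+`usage_idle`, with a `host` dimension and aggregated min/max/sum/count values
+for the one-minute bucket containing the timestamp.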
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Send aggregate metrics to Azure Monitor
+[[outputs.azure_monitor]]
+  ## Timeout for HTTP writes.
+  # timeout = "20s"
+
+  ## Set the namespace prefix, defaults to "Telegraf/<input-name>".
+  # namespace_prefix = "Telegraf/"
+
+  ## Azure Monitor doesn't have a string value type, so convert string
+  ## fields to dimensions (a.k.a. tags) if enabled. Azure Monitor allows
+  ## a maximum of 10 dimensions so Telegraf will only send the first 10
+  ## alphanumeric dimensions.
+  # strings_as_dimensions = false
+
+  ## Both region and resource_id must be set or be available via the
+  ## Instance Metadata service on Azure Virtual Machines.
+  #
+  ## Azure Region to publish metrics against.
+  ##   ex: region = "southcentralus"
+  # region = ""
+  #
+  ## The Azure Resource ID against which metric will be logged, e.g.
+  ##   ex: resource_id = "/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.Compute/virtualMachines/<vm_name>"
+  # resource_id = ""
+
+  ## Optionally, if in Azure US Government, China, or other sovereign
+  ## cloud environment, set the appropriate REST endpoint for receiving
+  ## metrics. (Note: region may be unused in this context)
+  # endpoint_url = "https://monitoring.core.usgovcloudapi.net"
+```
+
+## Setup
+
+1. [Register the `microsoft.insights` resource provider in your Azure
+   subscription](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-supported-services).
+1. If using Managed Service Identities to authenticate an Azure VM, [enable
+   system-assigned managed identity](https://docs.microsoft.com/en-us/azure/active-directory/managed-service-identity/qs-configure-portal-windows-vm).
+1. Use a region that supports Azure Monitor Custom Metrics. For regions with
+   Custom Metrics support, an endpoint will be available with the format
+   `https://<region>.monitoring.azure.com`.
+
+### Region and Resource ID
+
+The plugin will attempt to discover the region and resource ID using the Azure
+VM Instance Metadata service. If Telegraf is not running on a virtual machine or
+the VM Instance Metadata service is not available, the following variables are
+required for the output to function.
+
+* region
+* resource_id
+
+### Authentication
+
+This plugin uses one of several types of authentication methods. The order of
+preference for these methods differs from the *order* in which each method is
+checked. The preferred authentication methods are:
+
+1. Managed Service Identity (MSI) token: This is the preferred authentication
+   method. Telegraf will automatically authenticate using this method when
+   running on Azure VMs.
+2. AAD Application Tokens (Service Principals)
+
+    * Primarily useful if Telegraf is writing metrics for other resources.
+      [More information](https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-application-objects).
+    * A Service Principal or User Principal needs to be assigned the `Monitoring
+      Metrics Publisher` role on the resource(s) metrics will be emitted
+      against.
+
+3. AAD User Tokens (User Principals)
+
+    * Allows Telegraf to authenticate like a user. It is best to use this method
+      for development.
+
+The plugin will authenticate using the first available of the following
+configurations:
+
+1. **Client Credentials**: Azure AD Application ID and Secret. Set the following
+   environment variables:
+
+    * `AZURE_TENANT_ID`: Specifies the Tenant to which to authenticate.
+    * `AZURE_CLIENT_ID`: Specifies the app client ID to use.
+    * `AZURE_CLIENT_SECRET`: Specifies the app secret to use.
+
+1. **Client Certificate**: Azure AD Application ID and X.509 Certificate.
+
+    * `AZURE_TENANT_ID`: Specifies the Tenant to which to authenticate.
+    * `AZURE_CLIENT_ID`: Specifies the app client ID to use.
+    * `AZURE_CERTIFICATE_PATH`: Specifies the certificate Path to use.
+    * `AZURE_CERTIFICATE_PASSWORD`: Specifies the certificate password to use.
+
+1. **Resource Owner Password**: Azure AD User and Password. This grant type is
+   *not recommended*, use device login instead if you need interactive login.
+
+    * `AZURE_TENANT_ID`: Specifies the Tenant to which to authenticate.
+    * `AZURE_CLIENT_ID`: Specifies the app client ID to use.
+    * `AZURE_USERNAME`: Specifies the username to use.
+    * `AZURE_PASSWORD`: Specifies the password to use.
+
+1. **Azure Managed Service Identity**: Delegate credential management to the
+   platform. Requires that code is running in Azure, e.g. on a VM. All
+   configuration is handled by Azure. See [Azure Managed Service Identity](https://docs.microsoft.com/en-us/azure/active-directory/msi-overview)
+   for more details. Only available when using the [Azure Resource
+   Manager](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview).
+
+**Note**: As shown above, the last option (#4) is the preferred way to
+authenticate when running Telegraf on Azure VMs.
+
+## Dimensions
+
+Azure Monitor only accepts values with a numeric type. The plugin will drop
+fields with a string type by default. The plugin can set all string type fields
+as extra dimensions in the Azure Monitor custom metric by setting the
+configuration option `strings_as_dimensions` to `true`.
+
+Keep in mind that Azure Monitor allows a maximum of 10 dimensions per metric.
+The plugin will deterministically drop any dimensions that exceed the
+10-dimension limit.
+
+To convert only a subset of string-typed fields to dimensions, enable
+`strings_as_dimensions` and use the [`fieldinclude` or `fieldexclude`
+modifiers](/telegraf/v1/configuration/#modifiers) to limit the string-typed
+fields that are sent to the plugin.
+
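+
+A sketch of such a configuration, assuming the string fields of interest are
+named `status` and `region` (remember that `fieldinclude` also filters numeric
+fields, so any fields you want emitted as values must be listed too):
+
+```toml
+[[outputs.azure_monitor]]
+  strings_as_dimensions = true
+  ## Hypothetical field names; only these fields reach the plugin.
+  fieldinclude = ["status", "region", "usage_*"]
+```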
diff --git a/content/telegraf/v1/output-plugins/bigquery/_index.md b/content/telegraf/v1/output-plugins/bigquery/_index.md
new file mode 100644
index 000000000..6f06a2a72
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/bigquery/_index.md
@@ -0,0 +1,130 @@
+---
+description: "Telegraf plugin for sending metrics to Google BigQuery"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Google BigQuery
+    identifier: output-bigquery
+tags: [Google BigQuery, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Google BigQuery Output Plugin
+
+This plugin writes to [Google Cloud
+BigQuery](https://cloud.google.com/bigquery) and requires
+[authentication](https://cloud.google.com/bigquery/docs/authentication) with
+Google Cloud using either a service account or user credentials.
+
+Be aware that this plugin accesses APIs that are
+[chargeable](https://cloud.google.com/bigquery/pricing) and might incur costs.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for Google Cloud BigQuery to send entries
+[[outputs.bigquery]]
+  ## Credentials File
+  credentials_file = "/path/to/service/account/key.json"
+
+  ## Google Cloud Platform Project
+  # project = ""
+
+  ## The namespace for the metric descriptor
+  dataset = "telegraf"
+
+  ## Timeout for BigQuery operations.
+  # timeout = "5s"
+
+  ## Character to replace hyphens on Metric name
+  # replace_hyphen_to = "_"
+
+  ## Write all metrics in a single compact table
+  # compact_table = ""
+```
+
+If `project` is empty, the plugin will try to retrieve the project from the
+credentials file.
+
+The `dataset` option is required and specifies the BigQuery dataset under
+which the corresponding metric tables reside.
+
+Each metric should have a corresponding table in BigQuery. The schema of the
+table in BigQuery:
+
+* Should contain the field `timestamp`, which is the timestamp of a Telegraf
+  metric.
+* Should contain the metric's tags with the same name and the column type should
+  be set to string.
+* Should contain the metric's fields with the same name and the column type
+  should match the field type.
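+
+For example, a hypothetical `cpu` metric with a `host` tag and a `usage_idle`
+float field would map to a table schema like the following (names chosen for
+illustration only):
+
+```json
+[
+  { "name": "timestamp", "type": "TIMESTAMP" },
+  { "name": "host", "type": "STRING" },
+  { "name": "usage_idle", "type": "FLOAT" }
+]
+```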
+
+## Compact table
+
+When enabling the compact table, all metrics are inserted to the given table
+with the following schema:
+
+```json
+[
+  {
+    "mode": "REQUIRED",
+    "name": "timestamp",
+    "type": "TIMESTAMP"
+  },
+  {
+    "mode": "REQUIRED",
+    "name": "name",
+    "type": "STRING"
+  },
+  {
+    "mode": "REQUIRED",
+    "name": "tags",
+    "type": "JSON"
+  },
+  {
+    "mode": "REQUIRED",
+    "name": "fields",
+    "type": "JSON"
+  }
+]
+```
+
+## Restrictions
+
+Avoid hyphens in BigQuery table names; the underlying SDK cannot handle
+streaming inserts into tables with hyphens.
+
+For metrics whose names contain hyphens, use the [Rename Processor
+Plugin](/telegraf/v1/processor-plugins/rename/).
+
+By default, hyphens in a metric name are replaced with underscores (`_`).
+This can be altered using the `replace_hyphen_to` configuration property.
+
+Available data type options are:
+
+* integer
+* float or long
+* string
+* boolean
+
+All field naming restrictions that apply to BigQuery should apply to the
+measurements to be imported.
+
+Tables in BigQuery should be created beforehand; they are not created during
+persistence.
+
+Pay attention to the column `timestamp`, since it is reserved upfront and
+cannot be changed. If partitioning is required, make sure it is applied
+beforehand.
+
diff --git a/content/telegraf/v1/output-plugins/clarify/_index.md b/content/telegraf/v1/output-plugins/clarify/_index.md
new file mode 100644
index 000000000..1aabdbf04
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/clarify/_index.md
@@ -0,0 +1,98 @@
+---
+description: "Telegraf plugin for sending metrics to Clarify"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Clarify
+    identifier: output-clarify
+tags: [Clarify, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Clarify Output Plugin
+
+This plugin writes to [Clarify](https://clarify.io). To use this plugin you will
+need to obtain a set of [credentials](https://docs.clarify.io/users/admin/integrations/credentials).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+## Configuration to publish Telegraf metrics to Clarify
+[[outputs.clarify]]
+  ## Credentials File (Oauth 2.0 from Clarify integration)
+  credentials_file = "/path/to/clarify/credentials.json"
+
+  ## Clarify username password (Basic Auth from Clarify integration)
+  username = "i-am-bob"
+  password = "secret-password"
+
+  ## Timeout for Clarify operations
+  # timeout = "20s"
+
+  ## Optional tags to be included when generating the unique ID for a signal in Clarify
+  # id_tags = []
+  # clarify_id_tag = 'clarify_input_id'
+```
+
+You can use either a credentials file or a username and password. If both are
+present and valid in the configuration, the credentials file will be used.
+
+## How Telegraf Metrics map to Clarify signals
+
+Clarify signal names are formed by joining the Telegraf metric name and the
+field key with a `.` character. Telegraf tags are added to signal labels.
+
+To use a specific tag as the input ID, set the config option `clarify_id_tag`
+to the name of the tag containing the ID. If this tag is present and the
+metric contains only one field, the tag value will be used as the input ID in
+Clarify. If the metric contains more fields, the tag will be ignored and
+normal ID generation will be used.
+
+If information from one or several tags is needed to uniquely identify a
+metric field, the `id_tags` array can be added to the config with the needed
+tag names. For example:
+
+`id_tags = ['sensor']`
+
+Clarify only supports values that can be converted to floating point numbers.
+Strings and invalid numbers are ignored.
+
+## Example
+
+The following input would be stored in Clarify with the values shown below:
+
+```text
+temperature,host=demo.clarifylocal,sensor=TC0P value=49 1682670910000000000
+```
+
+```json
+"signal" {
+  "id": "temperature.value.TC0P",
+  "name": "temperature.value",
+  "labels": {
+    "host": ["demo.clarifylocal"],
+    "sensor": ["TC0P"]
+  }
+}
+"values" {
+  "times": ["2023-04-28T08:43:16+00:00"],
+  "series": {
+    "temperature.value.TC0P": [49]
+  }
+}
+```
+
diff --git a/content/telegraf/v1/output-plugins/cloud_pubsub/_index.md b/content/telegraf/v1/output-plugins/cloud_pubsub/_index.md
new file mode 100644
index 000000000..2864808a3
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/cloud_pubsub/_index.md
@@ -0,0 +1,90 @@
+---
+description: "Telegraf plugin for sending metrics to Google Cloud PubSub"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Google Cloud PubSub
+    identifier: output-cloud_pubsub
+tags: [Google Cloud PubSub, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Google Cloud PubSub Output Plugin
+
+The GCP PubSub plugin publishes metrics to a [Google Cloud PubSub](https://cloud.google.com/pubsub) topic
+in one of the supported [output data formats](/telegraf/v1/data_formats/output).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Publish Telegraf metrics to a Google Cloud PubSub topic
+[[outputs.cloud_pubsub]]
+  ## Required. Name of Google Cloud Platform (GCP) Project that owns
+  ## the given PubSub topic.
+  project = "my-project"
+
+  ## Required. Name of PubSub topic to publish metrics to.
+  topic = "my-topic"
+
+  ## Content encoding for message payloads, can be set to "gzip" or
+  ## "identity" to apply no encoding.
+  # content_encoding = "identity"
+
+  ## Required. Data format to consume.
+  ## Each data format has its own unique set of configuration options.
+  ## Read more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  data_format = "influx"
+
+  ## Optional. Filepath for GCP credentials JSON file to authorize calls to
+  ## PubSub APIs. If not set explicitly, Telegraf will attempt to use
+  ## Application Default Credentials, which is preferred.
+  # credentials_file = "path/to/my/creds.json"
+
+  ## Optional. If true, will send all metrics per write in one PubSub message.
+  # send_batched = true
+
+  ## The following publish_* parameters specifically configure batching
+  ## requests made to the GCP Cloud PubSub API via the PubSub Golang library. Read
+  ## more here: https://godoc.org/cloud.google.com/go/pubsub#PublishSettings
+
+  ## Optional. Send a request to PubSub (i.e. actually publish a batch)
+  ## when it has this many PubSub messages. If send_batched is true,
+  ## this is ignored and treated as if it were 1.
+  # publish_count_threshold = 1000
+
+  ## Optional. Send a request to PubSub (i.e. actually publish a batch)
+  ## when the outstanding message data reaches this many bytes. If
+  ## send_batched is true, this is ignored and treated as if it were 1.
+  # publish_byte_threshold = 1000000
+
+  ## Optional. Specifically configures requests made to the PubSub API.
+  # publish_num_go_routines = 2
+
+  ## Optional. Specifies a timeout for requests to the PubSub API.
+  # publish_timeout = "30s"
+
+  ## Optional. If true, published PubSub message data will be base64-encoded.
+  # base64_data = false
+
+  ## NOTE: Due to the way TOML is parsed, tables must be at the END of the
+  ## plugin definition, otherwise additional config options are read as part of
+  ## the table
+
+  ## Optional. PubSub attributes to add to metrics.
+  # [outputs.cloud_pubsub.attributes]
+  #   my_attr = "tag_value"
+```
+
diff --git a/content/telegraf/v1/output-plugins/cloudwatch/_index.md b/content/telegraf/v1/output-plugins/cloudwatch/_index.md
new file mode 100644
index 000000000..0dd45943a
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/cloudwatch/_index.md
@@ -0,0 +1,138 @@
+---
+description: "Telegraf plugin for sending metrics to Amazon CloudWatch"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Amazon CloudWatch
+    identifier: output-cloudwatch
+tags: [Amazon CloudWatch, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Amazon CloudWatch Output Plugin
+
+This plugin will send metrics to Amazon CloudWatch.
+
+## Amazon Authentication
+
+This plugin uses a credential chain for authentication with the CloudWatch API
+endpoint. The plugin will attempt to authenticate in the following order:
+
+1. Web identity provider credentials via STS if `role_arn` and
+   `web_identity_token_file` are specified
+1. Assumed credentials via STS if `role_arn` attribute is specified (source
+   credentials are evaluated from subsequent rules)
+1. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
+1. Shared profile from `profile` attribute
+1. [Environment Variables](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables)
+1. [Shared Credentials](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file)
+1. [EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)
+
+If you are using credentials from a web identity provider, you can specify the
+session name using `role_session_name`. If left empty, the current timestamp
+will be used.
+
+The IAM user needs only the `cloudwatch:PutMetricData` permission.
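+
+For reference, a minimal IAM policy sketch granting only this permission might
+look like the following (`PutMetricData` is typically granted on all
+resources, so the resource is a wildcard):
+
+```json
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Action": "cloudwatch:PutMetricData",
+      "Resource": "*"
+    }
+  ]
+}
+```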
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, and more.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for AWS CloudWatch output.
+[[outputs.cloudwatch]]
+  ## Amazon REGION
+  region = "us-east-1"
+
+  ## Amazon Credentials
+  ## Credentials are loaded in the following order
+  ## 1) Web identity provider credentials via STS if role_arn and web_identity_token_file are specified
+  ## 2) Assumed credentials via STS if role_arn is specified
+  ## 3) explicit credentials from 'access_key' and 'secret_key'
+  ## 4) shared profile from 'profile'
+  ## 5) environment variables
+  ## 6) shared credentials file
+  ## 7) EC2 Instance Profile
+  #access_key = ""
+  #secret_key = ""
+  #token = ""
+  #role_arn = ""
+  #web_identity_token_file = ""
+  #role_session_name = ""
+  #profile = ""
+  #shared_credential_file = ""
+
+  ## Endpoint to make requests against. The correct endpoint is determined
+  ## automatically; set this option only if you wish to override the default.
+  ##   ex: endpoint_url = "http://localhost:8000"
+  # endpoint_url = ""
+
+  ## Set http_proxy
+  # use_system_proxy = false
+  # http_proxy_url = "http://localhost:8888"
+
+  ## Namespace for the CloudWatch MetricDatums
+  namespace = "InfluxData/Telegraf"
+
+  ## If you have a large amount of metrics, you should consider sending
+  ## statistic values instead of raw metrics, which can improve performance and
+  ## reduce AWS API cost. If you enable this flag, the plugin parses the required
+  ## CloudWatch statistic fields (count, min, max, and sum) and sends them to
+  ## CloudWatch. You can use the basicstats aggregator to calculate those fields.
+  ## If not all statistic fields are available, all fields are still sent as raw
+  ## metrics.
+  # write_statistics = false
+
+  ## Enable high resolution metrics (1 second precision); if not enabled, standard resolution metrics (60 second precision) are used
+  # high_resolution_metrics = false
+```
+
+For this output plugin to function correctly the following variables must be
+configured.
+
+* region
+* namespace
+
+### region
+
+The region is the Amazon region that you wish to connect to.  Examples include
+but are not limited to:
+
+* us-west-1
+* us-west-2
+* us-east-1
+* ap-southeast-1
+* ap-southeast-2
+
+### namespace
+
+The namespace used for AWS CloudWatch metrics.
+
+### write_statistics
+
+If you have a large amount of metrics, you should consider sending statistic
+values instead of raw metrics, which can improve performance and reduce AWS API
+cost. If you enable this flag, the plugin parses the required
+[CloudWatch statistic fields](https://docs.aws.amazon.com/sdk-for-go/api/service/cloudwatch/#StatisticSet) (count, min, max, and sum) and
+sends them to CloudWatch. You can use the `basicstats` aggregator to calculate
+those fields. If not all statistic fields are available, all fields are still
+sent as raw metrics.
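+
+As a sketch, the following configuration (period and names are illustrative)
+pairs the `basicstats` aggregator with this output so that the required
+statistic fields are available:
+
+```toml
+# Aggregate raw metrics into count/min/max/sum over each period
+[[aggregators.basicstats]]
+  period = "60s"
+  drop_original = true
+  stats = ["count", "min", "max", "sum"]
+
+[[outputs.cloudwatch]]
+  region = "us-east-1"
+  namespace = "InfluxData/Telegraf"
+  ## Send the aggregated statistic fields instead of raw values
+  write_statistics = true
+```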
+
+### high_resolution_metrics
+
+Enable high resolution metrics (1 second precision) instead of standard ones
+(60 seconds precision).
diff --git a/content/telegraf/v1/output-plugins/cloudwatch_logs/_index.md b/content/telegraf/v1/output-plugins/cloudwatch_logs/_index.md
new file mode 100644
index 000000000..63d017bc7
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/cloudwatch_logs/_index.md
@@ -0,0 +1,115 @@
+---
+description: "Telegraf plugin for sending metrics to Amazon CloudWatch Logs"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Amazon CloudWatch Logs
+    identifier: output-cloudwatch_logs
+tags: [Amazon CloudWatch Logs, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Amazon CloudWatch Logs Output Plugin
+
+This plugin will send logs to Amazon CloudWatch.
+
+## Amazon Authentication
+
+This plugin uses a credential chain for authentication with the CloudWatch Logs
+API endpoint. The plugin will attempt to authenticate in the following order:
+
+1. Web identity provider credentials via STS if `role_arn` and `web_identity_token_file` are specified
+1. Assumed credentials via STS if `role_arn` attribute is specified (source credentials are evaluated from subsequent rules).
+The `endpoint_url` attribute is used only for the CloudWatch Logs service; when fetching credentials, the STS global endpoint is used.
+1. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
+1. Shared profile from `profile` attribute
+1. [Environment Variables](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables)
+1. [Shared Credentials](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file)
+1. [EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)
+
+The IAM user needs the following permissions (see the [permissions reference](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/permissions-reference-cwl.html) for more):
+
+- `logs:DescribeLogGroups` - required to check whether the configured log group
+  exists.
+- `logs:DescribeLogStreams` - required to view all log streams associated with a
+  log group.
+- `logs:CreateLogStream` - required to create a new log stream in a log group.
+- `logs:PutLogEvents` - required to upload a batch of log events into a log
+  stream.
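+
+A policy sketch granting these permissions, scoped to a hypothetical log group,
+might look like:
+
+```json
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Action": [
+        "logs:DescribeLogGroups",
+        "logs:DescribeLogStreams",
+        "logs:CreateLogStream",
+        "logs:PutLogEvents"
+      ],
+      "Resource": "arn:aws:logs:*:*:log-group:my-group-name*"
+    }
+  ]
+}
+```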
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, and more.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for AWS CloudWatchLogs output.
+[[outputs.cloudwatch_logs]]
+  ## The region is the Amazon region that you wish to connect to.
+  ## Examples include but are not limited to:
+  ## - us-west-1
+  ## - us-west-2
+  ## - us-east-1
+  ## - ap-southeast-1
+  ## - ap-southeast-2
+  ## ...
+  region = "us-east-1"
+
+  ## Amazon Credentials
+  ## Credentials are loaded in the following order
+  ## 1) Web identity provider credentials via STS if role_arn and web_identity_token_file are specified
+  ## 2) Assumed credentials via STS if role_arn is specified
+  ## 3) explicit credentials from 'access_key' and 'secret_key'
+  ## 4) shared profile from 'profile'
+  ## 5) environment variables
+  ## 6) shared credentials file
+  ## 7) EC2 Instance Profile
+  #access_key = ""
+  #secret_key = ""
+  #token = ""
+  #role_arn = ""
+  #web_identity_token_file = ""
+  #role_session_name = ""
+  #profile = ""
+  #shared_credential_file = ""
+
+  ## Endpoint to make requests against. The correct endpoint is determined
+  ## automatically; set this option only if you wish to override the default.
+  ##   ex: endpoint_url = "http://localhost:8000"
+  # endpoint_url = ""
+
+  ## CloudWatch log group. Must be created in AWS CloudWatch Logs upfront!
+  ## For example, you can specify the name of the k8s cluster here to group logs from all clusters in one place
+  log_group = "my-group-name"
+
+  ## Log stream in log group
+  ## Either a log stream name or a reference to a metric attribute from which it can be parsed:
+  ## tag:<TAG_NAME> or field:<FIELD_NAME>. If the log stream does not exist, it will be created.
+  ## Since AWS does not automatically delete log streams with expired log entries (i.e. empty log streams),
+  ## you need to put appropriate house-keeping in place (https://forums.aws.amazon.com/thread.jspa?threadID=178855)
+  log_stream = "tag:location"
+
+  ## Source of log data - metric name
+  ## specify the name of the metric from which the log data should be retrieved.
+  ## For example, if you are using the docker_log plugin to stream logs from a container,
+  ## specify log_data_metric_name = "docker_log"
+  log_data_metric_name = "docker_log"
+
+  ## Specify from which metric attribute the log data should be retrieved:
+  ## tag:<TAG_NAME> or field:<FIELD_NAME>.
+  ## For example, if you are using the docker_log plugin to stream logs from a container,
+  ## specify log_data_source = "field:message"
+  log_data_source = "field:message"
+```
diff --git a/content/telegraf/v1/output-plugins/cratedb/_index.md b/content/telegraf/v1/output-plugins/cratedb/_index.md
new file mode 100644
index 000000000..01a7455dd
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/cratedb/_index.md
@@ -0,0 +1,81 @@
+---
+description: "Telegraf plugin for sending metrics to CrateDB"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: CrateDB
+    identifier: output-cratedb
+tags: [CrateDB, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# CrateDB Output Plugin
+
+This plugin writes to [CrateDB](https://crate.io/) via its [PostgreSQL
+protocol](https://crate.io/docs/crate/reference/protocols/postgres.html).
+
+## Table Schema
+
+The plugin requires a table with the following schema.
+
+```sql
+CREATE TABLE IF NOT EXISTS my_metrics (
+  "hash_id" LONG INDEX OFF,
+  "timestamp" TIMESTAMP,
+  "name" STRING,
+  "tags" OBJECT(DYNAMIC),
+  "fields" OBJECT(DYNAMIC),
+  "day" TIMESTAMP GENERATED ALWAYS AS date_trunc('day', "timestamp"),
+  PRIMARY KEY ("timestamp", "hash_id","day")
+) PARTITIONED BY("day");
+```
+
+The plugin can create this table for you automatically via the `table_create`
+config option, see below.
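+
+Once metrics are flowing, the dynamic `tags` and `fields` objects can be
+queried with CrateDB's subscript syntax. A sketch, assuming the table above and
+a `cpu` metric carrying a `host` tag:
+
+```sql
+SELECT date_trunc('minute', "timestamp") AS minute,
+       "tags"['host'] AS host,
+       avg("fields"['usage_idle']) AS avg_idle
+FROM my_metrics
+WHERE "name" = 'cpu'
+GROUP BY date_trunc('minute', "timestamp"), "tags"['host']
+ORDER BY minute DESC
+LIMIT 10;
+```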
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, and more.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Startup error behavior options <!-- @/docs/includes/startup_error_behavior.md -->
+
+In addition to the plugin-specific and global configuration settings the plugin
+supports options for specifying the behavior when experiencing startup errors
+using the `startup_error_behavior` setting. Available values are:
+
+- `error`:  Telegraf will stop and exit in case of startup errors. This is the
+            default behavior.
+- `ignore`: Telegraf will ignore startup errors for this plugin, disable it,
+            but continue processing all other plugins.
+- `retry`:  Telegraf will try to start up the plugin in every gather or write
+            cycle in case of startup errors. The plugin is disabled until
+            the startup succeeds.
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for CrateDB to send metrics to.
+[[outputs.cratedb]]
+  ## Connection parameters for accessing the database see
+  ##   https://pkg.go.dev/github.com/jackc/pgx/v4#ParseConfig
+  ## for available options
+  url = "postgres://user:password@localhost/schema?sslmode=disable"
+
+  ## Timeout for all CrateDB queries.
+  # timeout = "5s"
+
+  ## Name of the table to store metrics in.
+  # table = "metrics"
+
+  ## If true, and the metrics table does not exist, create it automatically.
+  # table_create = false
+
+  ## The character(s) to replace any '.' in an object key with
+  # key_separator = "_"
+```
diff --git a/content/telegraf/v1/output-plugins/datadog/_index.md b/content/telegraf/v1/output-plugins/datadog/_index.md
new file mode 100644
index 000000000..f379a7651
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/datadog/_index.md
@@ -0,0 +1,77 @@
+---
+description: "Telegraf plugin for sending metrics to Datadog"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Datadog
+    identifier: output-datadog
+tags: [Datadog, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Datadog Output Plugin
+
+This plugin writes to the [Datadog Metrics API](https://docs.datadoghq.com/api/v1/metrics/#submit-metrics) and requires an
+`apikey` which can be obtained [here](https://app.datadoghq.com/account/settings#api) for the account. This plugin
+supports the v1 API.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, and more.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for DataDog API to send metrics to.
+[[outputs.datadog]]
+  ## Datadog API key
+  apikey = "my-secret-key"
+
+  ## Connection timeout.
+  # timeout = "5s"
+
+  ## Write URL override; useful for debugging.
+  ## This plugin only supports the v1 API currently due to the authentication
+  ## method used.
+  # url = "https://app.datadoghq.com/api/v1/series"
+
+  ## Set http_proxy
+  # use_system_proxy = false
+  # http_proxy_url = "http://localhost:8888"
+
+  ## Override the default (none) compression used to send data.
+  ## Supports: "zlib", "none"
+  # compression = "none"
+
+  ## When non-zero, converts count metrics submitted by inputs.statsd
+  ## into rate, while dividing the metric value by this number.
+  ## Note that in order for metrics to be submitted simultaneously alongside
+  ## a Datadog agent, rate_interval has to match the interval used by the
+  ## agent - which defaults to 10s
+  # rate_interval = 0s
+```
+
+## Metrics
+
+Datadog metric names are formed by joining the Telegraf metric name and the
+field key with a `.` character.
+
+Field values are converted to floating point numbers.  Strings and floats that
+cannot be sent over JSON, namely NaN and Inf, are ignored.
+
+Setting `rate_interval` to non-zero will convert `count` metrics to `rate`
+and divide its value by this interval before submitting to Datadog.
+This allows Telegraf to submit metrics alongside Datadog agents when their rate
+intervals are the same (Datadog defaults to `10s`).
+Note that this only supports metrics ingested via `inputs.statsd` given
+the dependency on the `metric_type` tag it creates. There is only support for
+`counter` metrics, and `count` values from `timing` and `histogram` metrics.
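+
+As a sketch, a configuration pairing `inputs.statsd` with this output so that
+counts are submitted as rates alongside a Datadog agent (values are
+illustrative):
+
+```toml
+[[inputs.statsd]]
+  protocol = "udp"
+  service_address = ":8125"
+
+[[outputs.datadog]]
+  apikey = "my-secret-key"
+  ## Match the Datadog agent's default 10s flush interval
+  rate_interval = "10s"
+```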
+
diff --git a/content/telegraf/v1/output-plugins/discard/_index.md b/content/telegraf/v1/output-plugins/discard/_index.md
new file mode 100644
index 000000000..b4e37aafa
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/discard/_index.md
@@ -0,0 +1,33 @@
+---
+description: "Telegraf plugin for sending metrics to discard"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: discard
+    identifier: output-discard
+tags: [discard, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# discard Output Plugin
+
+This output plugin simply drops all metrics that are sent to it. It is only
+meant to be used for testing purposes.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, and more.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Send metrics to nowhere at all
+[[outputs.discard]]
+  # no configuration
+```
diff --git a/content/telegraf/v1/output-plugins/dynatrace/_index.md b/content/telegraf/v1/output-plugins/dynatrace/_index.md
new file mode 100644
index 000000000..41b6754c0
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/dynatrace/_index.md
@@ -0,0 +1,261 @@
+---
+description: "Telegraf plugin for sending metrics to Dynatrace"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Dynatrace
+    identifier: output-dynatrace
+tags: [Dynatrace, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Dynatrace Output Plugin
+
+This plugin sends Telegraf metrics to [Dynatrace](https://www.dynatrace.com) via
+the [Dynatrace Metrics API V2](https://docs.dynatrace.com/docs/shortlink/api-metrics-v2). It may be run alongside the Dynatrace
+OneAgent for automatic authentication or it may be run standalone on a host
+without a OneAgent by specifying a URL and API Token.  More information on the
+plugin can be found in the [Dynatrace documentation](https://docs.dynatrace.com/docs/shortlink/api-metrics-v2-post-datapoints).  All metrics are
+reported as gauges, unless they are specified to be delta counters using the
+`additional_counters` or `additional_counters_patterns` config option
+(see below).
+See the [Dynatrace Metrics ingestion protocol documentation](https://docs.dynatrace.com/docs/shortlink/metric-ingestion-protocol)
+for details on the types defined there.
+
+## Requirements
+
+You will either need a Dynatrace OneAgent (version 1.201 or higher) installed
+on the same host as Telegraf, or a Dynatrace environment with version 1.202 or
+higher.
+
+- Telegraf minimum version: Telegraf 1.16
+
+## Getting Started
+
+Setting up Telegraf is explained in the [Telegraf
+Documentation](https://docs.influxdata.com/telegraf/latest/introduction/getting-started/).
+The Dynatrace exporter may be enabled by adding an `[[outputs.dynatrace]]`
+section to your `telegraf.conf` config file.  All configurations are optional,
+but if a `url` other than the OneAgent metric ingestion endpoint is specified
+then an `api_token` is required.  To see all available options, see
+Configuration below.
+
+### Running alongside Dynatrace OneAgent (preferred)
+
+If you run the Telegraf agent on a host or VM that is monitored by the Dynatrace
+OneAgent, you only need to enable the plugin; no further configuration is
+required. The Dynatrace Telegraf output plugin will send all metrics to the
+OneAgent which will use its secure and load balanced connection to send the
+metrics to your Dynatrace SaaS or Managed environment.  Depending on your
+environment, you might have to enable metrics ingestion on the OneAgent first as
+described in the [Dynatrace documentation](https://docs.dynatrace.com/docs/shortlink/api-metrics-v2-post-datapoints).
+
+Note: The name and identifier of the host running Telegraf will be added as a
+dimension to every metric. If this is undesirable, then the output plugin may be
+used in standalone mode using the directions below.
+
+```toml
+[[outputs.dynatrace]]
+  ## No options are required. By default, metrics will be exported via the OneAgent on the local host.
+```
+
+### Running standalone
+
+If you run the Telegraf agent on a host or VM without a OneAgent you will need
+to configure the environment API endpoint to send the metrics to and an API
+token for security.
+
+You will also need to configure an API token for secure access. Find out how to
+create a token in the [Dynatrace documentation](https://docs.dynatrace.com/docs/shortlink/api-metrics-v2-post-datapoints) or simply navigate to
+**Settings > Integration > Dynatrace API** in your Dynatrace environment and
+create a new token with the 'Ingest metrics' (`metrics.ingest`) scope enabled.
+It is recommended to limit the token scope to only this permission.
+
+The endpoint for the Dynatrace Metrics API v2 is
+
+- on Dynatrace Managed:
+  `https://{your-domain}/e/{your-environment-id}/api/v2/metrics/ingest`
+- on Dynatrace SaaS:
+  `https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/ingest`
+
+```toml
+[[outputs.dynatrace]]
+  ## If no OneAgent is running on the host, url and api_token need to be set
+
+  ## Dynatrace Metrics Ingest v2 endpoint to receive metrics
+  url = "https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/ingest"
+
+  ## API token is required if a URL is specified and should be restricted to the 'Ingest metrics' scope
+  api_token = "your API token here" # hard-coded for illustration only, should be read from the environment
+```
+
+You can learn more about how to use the Dynatrace API
+[here](https://docs.dynatrace.com/docs/shortlink/section-api).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, and more.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `api_token` option.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Send telegraf metrics to a Dynatrace environment
+[[outputs.dynatrace]]
+  ## For usage with the Dynatrace OneAgent you can omit any configuration,
+  ## the only requirement is that the OneAgent is running on the same host.
+  ## Only setup environment url and token if you want to monitor a Host without the OneAgent present.
+  ##
+  ## Your Dynatrace environment URL.
+  ## For Dynatrace OneAgent you can leave this empty or set it to "http://127.0.0.1:14499/metrics/ingest" (default)
+  ## For Dynatrace SaaS environments the URL scheme is "https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/ingest"
+  ## For Dynatrace Managed environments the URL scheme is "https://{your-domain}/e/{your-environment-id}/api/v2/metrics/ingest"
+  url = ""
+
+  ## Your Dynatrace API token.
+  ## Create an API token within your Dynatrace environment, by navigating to Settings > Integration > Dynatrace API
+  ## The API token needs data ingest scope permission. When using OneAgent, no API token is required.
+  api_token = ""
+
+  ## Optional prefix for metric names (e.g.: "telegraf")
+  prefix = "telegraf"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Optional flag for ignoring tls certificate check
+  # insecure_skip_verify = false
+
+  ## Connection timeout, defaults to "5s" if not set.
+  timeout = "5s"
+
+  ## If you want metrics to be treated and reported as delta counters, add the metric names here
+  additional_counters = [ ]
+
+  ## In addition to or as an alternative to additional_counters, if you want metrics
+  ## to be treated and reported as delta counters using regular expression pattern matching
+  additional_counters_patterns = [ ]
+
+  ## NOTE: Due to the way TOML is parsed, tables must be at the END of the
+  ## plugin definition, otherwise additional config options are read as part of
+  ## the table
+
+  ## Optional dimensions to be added to every metric
+  # [outputs.dynatrace.default_dimensions]
+  # default_key = "default value"
+```
+
+### `url`
+
+*required*: `false`
+
+*default*: Local OneAgent endpoint
+
+Set your Dynatrace environment URL (e.g.:
+`https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/ingest`, see
+the [Dynatrace documentation](https://docs.dynatrace.com/docs/shortlink/api-metrics-v2-post-datapoints) for details) if you do not use a
+OneAgent or wish to export metrics directly to a Dynatrace metrics v2
+endpoint. If a URL is set to anything other than the local OneAgent endpoint,
+then an API token is required.
+
+```toml
+url = "https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/ingest"
+```
+
+### `api_token`
+
+*required*: `false` unless `url` is specified
+
+API token is required if a URL other than the OneAgent endpoint is specified and
+it should be restricted to the 'Ingest metrics' scope.
+
+```toml
+api_token = "your API token here"
+```
+
+### `prefix`
+
+*required*: `false`
+
+Optional prefix to be prepended to all metric names (will be separated with a
+`.`).
+
+```toml
+prefix = "telegraf"
+```
+
+### `insecure_skip_verify`
+
+*required*: `false`
+
+Setting this option to true skips TLS verification for testing or when using
+self-signed certificates.
+
+```toml
+insecure_skip_verify = false
+```
+
+### `additional_counters`
+
+*required*: `false`
+
+If you want a metric to be treated and reported as a delta counter, add its name
+to this list.
+
+```toml
+additional_counters = [ ]
+```
+
+### `additional_counters_patterns`
+
+*required*: `false`
+
+In addition to or as an alternative to `additional_counters`, if you want a
+metric to be treated and reported as a delta counter using regular expression
+pattern matching, add its pattern to this list.
+
+```toml
+additional_counters_patterns = [ ]
+```
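+
+For example, to report every metric whose name begins with a hypothetical
+`requests.` prefix as a delta counter:
+
+```toml
+additional_counters_patterns = [ "^requests\\." ]
+```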
+
+### `default_dimensions`
+
+*required*: `false`
+
+Default dimensions that will be added to every exported metric.
+
+```toml
+[outputs.dynatrace.default_dimensions]
+default_key = "default value"
+```
+
+## Limitations
+
+Telegraf measurements which can't be converted to a number are skipped.
diff --git a/content/telegraf/v1/output-plugins/elasticsearch/_index.md b/content/telegraf/v1/output-plugins/elasticsearch/_index.md
new file mode 100644
index 000000000..f344636d9
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/elasticsearch/_index.md
@@ -0,0 +1,448 @@
+---
+description: "Telegraf plugin for sending metrics to Elasticsearch"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Elasticsearch
+    identifier: output-elasticsearch
+tags: [Elasticsearch, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Elasticsearch Output Plugin
+
+This plugin writes to [Elasticsearch](https://www.elastic.co) via HTTP using
+the [Elastic](http://olivere.github.io/elastic/) client library.
+
+It supports Elasticsearch releases from 5.x up to 7.x.
+
+## Elasticsearch indexes and templates
+
+### Indexes per time-frame
+
+This plugin can manage indexes per time-frame, as commonly done in other tools
+with Elasticsearch.
+
+The timestamp of the metric collected will be used to decide the index
+destination.
+
+For more information about this usage on Elasticsearch, check [the
+docs](https://www.elastic.co/guide/en/elasticsearch/guide/master/time-based.html#index-per-timeframe).
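+
+As a sketch, the plugin's `index_name` option accepts strftime-like date
+patterns, so metrics can be routed to daily indexes based on their timestamp:
+
+```toml
+[[outputs.elasticsearch]]
+  urls = [ "http://localhost:9200" ]
+  ## One index per day, resolved from each metric's timestamp
+  index_name = "telegraf-%Y.%m.%d"
+```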
+
+### Template management
+
+Index templates are used in Elasticsearch to define settings and mappings for
+the indexes and how the fields should be analyzed.  For more information on how
+this works, see [the docs](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html).
+
+This plugin can create a working template for use with telegraf metrics. It uses
+the Elasticsearch dynamic templates feature to set proper types for the tags and
+metrics fields.  If the specified template already exists, it will not be
+overwritten unless you configure this plugin to do so. Thus you can customize
+this template after its creation if necessary.
+
+Example of an index template created by telegraf on Elasticsearch 5.x:
+
+```json
+{
+  "order": 0,
+  "template": "telegraf-*",
+  "settings": {
+    "index": {
+      "mapping": {
+        "total_fields": {
+          "limit": "5000"
+        }
+      },
+      "auto_expand_replicas" : "0-1",
+      "codec" : "best_compression",
+      "refresh_interval": "10s"
+    }
+  },
+  "mappings": {
+    "_default_": {
+      "dynamic_templates": [
+        {
+          "tags": {
+            "path_match": "tag.*",
+            "mapping": {
+              "ignore_above": 512,
+              "type": "keyword"
+            },
+            "match_mapping_type": "string"
+          }
+        },
+        {
+          "metrics_long": {
+            "mapping": {
+              "index": false,
+              "type": "float"
+            },
+            "match_mapping_type": "long"
+          }
+        },
+        {
+          "metrics_double": {
+            "mapping": {
+              "index": false,
+              "type": "float"
+            },
+            "match_mapping_type": "double"
+          }
+        },
+        {
+          "text_fields": {
+            "mapping": {
+              "norms": false
+            },
+            "match": "*"
+          }
+        }
+      ],
+      "_all": {
+        "enabled": false
+      },
+      "properties": {
+        "@timestamp": {
+          "type": "date"
+        },
+        "measurement_name": {
+          "type": "keyword"
+        }
+      }
+    }
+  },
+  "aliases": {}
+}
+```
+
+### Example events
+
+This plugin will format the events in the following way:
+
+```json
+{
+  "@timestamp": "2017-01-01T00:00:00+00:00",
+  "measurement_name": "cpu",
+  "cpu": {
+    "usage_guest": 0,
+    "usage_guest_nice": 0,
+    "usage_idle": 71.85413456197966,
+    "usage_iowait": 0.256805341656516,
+    "usage_irq": 0,
+    "usage_nice": 0,
+    "usage_softirq": 0.2054442732579466,
+    "usage_steal": 0,
+    "usage_system": 15.04879301548127,
+    "usage_user": 12.634822807288275
+  },
+  "tag": {
+    "cpu": "cpu-total",
+    "host": "elastichost",
+    "dc": "datacenter1"
+  }
+}
+```
+
+```json
+{
+  "@timestamp": "2017-01-01T00:00:00+00:00",
+  "measurement_name": "system",
+  "system": {
+    "load1": 0.78,
+    "load15": 0.8,
+    "load5": 0.8,
+    "n_cpus": 2,
+    "n_users": 2
+  },
+  "tag": {
+    "host": "elastichost",
+    "dc": "datacenter1"
+  }
+}
+```
+
+### Timestamp Timezone
+
+Elasticsearch documents use RFC3339 timestamps, which include timezone
+information (for example `2017-01-01T00:00:00-08:00`). By default, the
+timezone configured on the Telegraf host system is used.
+
+However, this may not always be desirable: Elasticsearch preserves timezone
+information and includes it when returning associated documents, which can
+cause issues for pipelines that do not parse retrieved timestamps and instead
+assume the returned timezone is always consistent.
+
+Telegraf honours the timezone configured in the environment variable `TZ`, so
+the timezone sent to Elasticsearch can be amended without needing to change the
+timezone configured in the host system:
+
+```sh
+export TZ="America/Los_Angeles"
+export TZ="UTC"
+```
+
+If Telegraf is being run as a system service, this can be configured in the
+following way on Linux:
+
+```sh
+echo TZ="UTC" | sudo tee -a /etc/default/telegraf
+```
+
+## OpenSearch Support
+
+OpenSearch is a fork of Elasticsearch hosted by AWS. The OpenSearch server
+reports itself to clients with an AWS-specific version (e.g. v1.0), even
+though the actual underlying Elasticsearch version is v7.1. This breaks
+Telegraf and other Elasticsearch clients that need to know which major version
+they are interfacing with.
+
+Amazon has created a [compatibility mode](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/rename.html#rename-upgrade) to allow existing Elasticsearch
+clients to work properly when the version needs to be checked. To enable
+compatibility mode, set the `override_main_response_version` option to
+`true`.
+
+On existing clusters run:
+
+```json
+PUT /_cluster/settings
+{
+  "persistent" : {
+    "compatibility.override_main_response_version" : true
+  }
+}
+```
+
+And on new clusters set the option to true under advanced options:
+
+```json
+POST https://es.us-east-1.amazonaws.com/2021-01-01/opensearch/upgradeDomain
+{
+  "DomainName": "domain-name",
+  "TargetVersion": "OpenSearch_1.0",
+  "AdvancedOptions": {
+    "override_main_response_version": "true"
+   }
+}
+```
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `username`,
+`password`, and `auth_bearer_token` options.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for Elasticsearch to send metrics to.
+[[outputs.elasticsearch]]
+  ## The full HTTP endpoint URL for your Elasticsearch instance
+  ## Multiple urls can be specified as part of the same cluster,
+  ## this means that only ONE of the urls will be written to each interval
+  urls = [ "http://node1.es.example.com:9200" ] # required.
+  ## Elasticsearch client timeout, defaults to "5s" if not set.
+  timeout = "5s"
+  ## Set to true to ask Elasticsearch for a list of all cluster nodes,
+  ## making it unnecessary to list all nodes in the urls config option
+  enable_sniffer = false
+  ## Set to true to enable gzip compression
+  enable_gzip = false
+  ## Set the interval to check if the Elasticsearch nodes are available
+  ## Setting to "0s" will disable the health check (not recommended in production)
+  health_check_interval = "10s"
+  ## Set the timeout for periodic health checks.
+  # health_check_timeout = "1s"
+  ## HTTP basic authentication details
+  # username = "telegraf"
+  # password = "mypassword"
+  ## HTTP bearer token authentication details
+  # auth_bearer_token = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"
+
+  ## Index Config
+  ## The target index for metrics (Elasticsearch will create it if it does not exist).
+  ## You can use the date specifiers below to create indexes per time frame.
+  ## The metric timestamp will be used to decide the destination index name
+  # %Y - year (2016)
+  # %y - last two digits of year (00..99)
+  # %m - month (01..12)
+  # %d - day of month (e.g., 01)
+  # %H - hour (00..23)
+  # %V - week of the year (ISO week) (01..53)
+  ## Additionally, you can specify a tag name using the notation {{tag_name}}
+  ## which will be used as part of the index name. If the tag does not exist,
+  ## the default tag value will be used.
+  # index_name = "telegraf-{{host}}-%Y.%m.%d"
+  # default_tag_value = "none"
+  index_name = "telegraf-%Y.%m.%d" # required.
+
+  ## Optional Index Config
+  ## Set to true if Telegraf should use the "create" OpType while indexing
+  # use_optype_create = false
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## Template Config
+  ## Set to true if you want telegraf to manage its index template.
+  ## If enabled it will create a recommended index template for telegraf indexes
+  manage_template = true
+  ## The template name used for telegraf indexes
+  template_name = "telegraf"
+  ## Set to true if you want telegraf to overwrite an existing template
+  overwrite_template = false
+  ## If set to true a unique ID hash will be sent as sha256(concat(timestamp,measurement,series-hash)) string
+  ## it will enable data resend and update metric points avoiding duplicated metrics with different id's
+  force_document_id = false
+
+  ## Specifies the handling of NaN and Inf values.
+  ## This option can have the following values:
+  ##    none    -- do not modify field-values (default); will produce an error if NaNs or infs are encountered
+  ##    drop    -- drop fields containing NaNs or infs
+  ##    replace -- replace with the value in "float_replacement_value" (default: 0.0)
+  ##               NaNs and inf will be replaced with the given number, -inf with the negative of that number
+  # float_handling = "none"
+  # float_replacement_value = 0.0
+
+  ## Pipeline Config
+  ## To use an ingest pipeline, set this to the name of the pipeline you want to use.
+  # use_pipeline = "my_pipeline"
+  ## Additionally, you can specify a tag name using the notation {{tag_name}}
+  ## which will be used as part of the pipeline name. If the tag does not exist,
+  ## the default pipeline will be used as the pipeline. If no default pipeline is set,
+  ## no pipeline is used for the metric.
+  # use_pipeline = "{{es_pipeline}}"
+  # default_pipeline = "my_pipeline"
+  #
+  # Custom HTTP headers
+  # To pass custom HTTP headers please define it in a given below section
+  # [outputs.elasticsearch.headers]
+  #    "X-Custom-Header" = "custom-value"
+
+  ## Template Index Settings
+  ## Overrides the template settings.index section with any provided options.
+  ## Defaults provided here in the config
+  # template_index_settings = {
+  #   refresh_interval = "10s",
+  #   mapping.total_fields.limit = 5000,
+  #   auto_expand_replicas = "0-1",
+  #   codec = "best_compression"
+  # }
+```
+
+### Permissions
+
+If you are using authentication within your Elasticsearch cluster, you need to
+create an account and a role with at least the `manage` privilege in the
+Cluster Privileges category. Otherwise, your account will not be able to
+connect to your Elasticsearch cluster and send logs to it. After that, you
+need to add the `create_index` and `write` privileges to your specific index
+pattern.
+
+### Required parameters
+
+* `urls`: A list containing the full HTTP URL of one or more nodes from your
+  Elasticsearch instance.
+* `index_name`: The target index for metrics. You can use the date specifiers
+  below to create indexes per time frame.
+
+```text
+%Y - year (2017)
+%y - last two digits of year (00..99)
+%m - month (01..12)
+%d - day of month (e.g., 01)
+%H - hour (00..23)
+%V - week of the year (ISO week) (01..53)
+```
+
+Additionally, you can specify dynamic index names by using tags with the
+notation ```{{tag_name}}```. This will store the metrics with different tag
+values in different indices. If the tag does not exist in a particular metric,
+the `default_tag_value` will be used instead.
+
+### Optional parameters
+
+* `timeout`: Elasticsearch client timeout, defaults to "5s" if not set.
+* `enable_sniffer`: Set to true to ask Elasticsearch for a list of all cluster
+  nodes, making it unnecessary to list all nodes in the urls config option.
+* `health_check_interval`: Set the interval to check if the nodes are available,
+  in seconds. Setting to 0 will disable the health check (not recommended in
+  production).
+* `username`: The username for HTTP basic authentication details (eg. when using
+  Shield).
+* `password`: The password for HTTP basic authentication details (eg. when using
+  Shield).
+* `manage_template`: Set to true if you want telegraf to manage its index
+  template. If enabled it will create a recommended index template for telegraf
+  indexes.
+* `template_name`: The template name used for telegraf indexes.
+* `overwrite_template`: Set to true if you want telegraf to overwrite an
+  existing template.
+* `force_document_id`: When set to true, a unique document ID is computed as
+  sha256(concat(timestamp,measurement,series-hash)), enabling data to be
+  resent or updated without creating duplicate documents in Elasticsearch.
+* `float_handling`: Specifies how to handle `NaN` and infinite field
+  values. `"none"` (default) will do nothing, `"drop"` will drop the field,
+  and `"replace"` will replace the field value with the number in
+  `float_replacement_value`.
+* `float_replacement_value`: Value (defaulting to `0.0`) to replace `NaN`s and
+  `inf`s if `float_handling` is set to `replace`. Negative `inf` will be
+  replaced by the negative value in this number to respect the sign of the
+  field's original value.
+* `use_optype_create`: If set, the "create" operation type will be used when
+   indexing into Elasticsearch, which is needed when using the Elasticsearch
+   data streams feature.
+* `use_pipeline`: If set, the set value will be used as the pipeline to call
+  when sending events to elasticsearch. Additionally, you can specify dynamic
+  pipeline names by using tags with the notation ```{{tag_name}}```.  If the tag
+  does not exist in a particular metric, the `default_pipeline` will be used
+  instead.
+* `default_pipeline`: If dynamic pipeline names are used and the tag does not
+  exist in a particular metric, this value will be used instead.
+* `headers`: Custom HTTP headers, which are sent with every request to
+  Elasticsearch.
+
+## Known issues
+
+Integer values larger than 2^63 and smaller than 1e21 (or within the same
+window for their negative counterparts) are encoded by the Go JSON encoder in
+decimal format, which is not fully supported by Elasticsearch dynamic field
+mapping. Metrics with such values are dropped if a field mapping has not yet
+been created on the telegraf index. In that case you will see an exception on
+the Elasticsearch side like this:
+
+```json
+{"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"failed to parse"}],"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"illegal_state_exception","reason":"No matching token for number_type [BIG_INTEGER]"}},"status":400}
+```
+
+The correct field mapping will be created on the telegraf index as soon as a
+supported JSON value is received by Elasticsearch, and subsequent insertions
+will work because the field mapping will already exist.
+
+This issue is caused by the way Elasticsearch tries to detect integer fields,
+and by how golang encodes numbers in JSON. There is no clear workaround for this
+at the moment.
diff --git a/content/telegraf/v1/output-plugins/event_hubs/_index.md b/content/telegraf/v1/output-plugins/event_hubs/_index.md
new file mode 100644
index 000000000..d095ac305
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/event_hubs/_index.md
@@ -0,0 +1,66 @@
+---
+description: "Telegraf plugin for sending metrics to Azure Event Hubs"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Azure Event Hubs
+    identifier: output-event_hubs
+tags: [Azure Event Hubs, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Azure Event Hubs Output Plugin
+
+This plugin for [Azure Event
+Hubs](https://azure.microsoft.com/en-gb/services/event-hubs/) will send metrics
+to a single Event Hub within an Event Hubs namespace. Metrics are sent as
+message batches, each message payload containing one metric object. The messages
+do not specify a partition key, and will thus be automatically load-balanced
+(round-robin) across all the Event Hub partitions.
+
+## Metrics
+
+The plugin uses the Telegraf serializers to format the metric data sent in the
+message payloads. You can select any of the supported output formats, although
+JSON is probably the easiest to integrate with downstream components.
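
For instance, with `data_format = "json"` a single message payload would resemble the following (illustrative values; the field layout follows the Telegraf JSON serializer):

```json
{
  "fields": {
    "usage_idle": 92.5
  },
  "name": "cpu",
  "tags": {
    "cpu": "cpu-total",
    "host": "server01"
  },
  "timestamp": 1485350400
}
```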
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for Event Hubs output plugin
+[[outputs.event_hubs]]
+  ## The full connection string to the Event Hub (required)
+  ## The shared access key must have "Send" permissions on the target Event Hub.
+  connection_string = "Endpoint=sb://namespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=superSecret1234=;EntityPath=hubName"
+
+  ## Client timeout (defaults to 30s)
+  # timeout = "30s"
+
+  ## Partition key
+  ## Metric tag or field name to use for the event partition key. The value of
+  ## this tag or field is set as the key for events if it exists. If both, tag
+  ## and field, exist the tag is preferred.
+  # partition_key = ""
+
+  ## Set the maximum batch message size in bytes
+  ## The allowable size depends on the Event Hub tier
+  ## See: https://learn.microsoft.com/azure/event-hubs/event-hubs-quotas#basic-vs-standard-vs-premium-vs-dedicated-tiers
+  ## Setting this to 0 means using the default size from the Azure Event Hubs Client library (1000000 bytes)
+  # max_message_size = 1000000
+
+  ## Data format to output.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
+  data_format = "json"
+```
diff --git a/content/telegraf/v1/output-plugins/exec/_index.md b/content/telegraf/v1/output-plugins/exec/_index.md
new file mode 100644
index 000000000..73c03467f
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/exec/_index.md
@@ -0,0 +1,62 @@
+---
+description: "Telegraf plugin for sending metrics to Exec"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Exec
+    identifier: output-exec
+tags: [Exec, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Exec Output Plugin
+
+This plugin sends telegraf metrics to an external application over stdin.
+
+The command should be defined similar to docker's `exec` form:
+
+```text
+["executable", "param1", "param2"]
+```
+
+On a non-zero exit code, stderr will be logged at the error level.
+
+For better performance, consider the `execd` output plugin, which runs the
+command continuously.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Send metrics to command as input over stdin
+[[outputs.exec]]
+  ## Command to ingest metrics via stdin.
+  command = ["tee", "-a", "/dev/null"]
+
+  ## Environment variables
+  ## Array of "key=value" pairs to pass as environment variables
+  ## e.g. "KEY=value", "USERNAME=John Doe",
+  ## "LD_LIBRARY_PATH=/opt/custom/lib64:/usr/local/libs"
+  # environment = []
+
+  ## Timeout for command to complete.
+  # timeout = "5s"
+
+  ## Whether the command gets executed once per metric, or once per metric batch
+  ## The serializer will also run in batch mode when this is true.
+  # use_batch_format = true
+
+  ## Data format to output.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
+  # data_format = "influx"
+```
diff --git a/content/telegraf/v1/output-plugins/execd/_index.md b/content/telegraf/v1/output-plugins/execd/_index.md
new file mode 100644
index 000000000..486434f2e
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/execd/_index.md
@@ -0,0 +1,67 @@
+---
+description: "Telegraf plugin for sending metrics to Execd"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Execd
+    identifier: output-execd
+tags: [Execd, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Execd Output Plugin
+
+The `execd` plugin runs an external program as a daemon.
+
+Telegraf minimum version: Telegraf 1.15.0
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Run executable as long-running output plugin
+[[outputs.execd]]
+  ## One program to run as daemon.
+  ## NOTE: process and each argument should each be their own string
+  command = ["my-telegraf-output", "--some-flag", "value"]
+
+  ## Environment variables
+  ## Array of "key=value" pairs to pass as environment variables
+  ## e.g. "KEY=value", "USERNAME=John Doe",
+  ## "LD_LIBRARY_PATH=/opt/custom/lib64:/usr/local/libs"
+  # environment = []
+
+  ## Delay before the process is restarted after an unexpected termination
+  restart_delay = "10s"
+
+  ## Flag to determine whether execd should throw an error when a metric is unserializable
+  ## Setting this to true will skip unserializable metrics and process the rest
+  ## Setting this to false will throw an error when an unserializable metric is encountered and none will be processed
+  ## This setting does not apply when use_batch_format is set.
+  # ignore_serialization_error = false
+
+  ## Use batch serialization instead of per metric. The batch format allows for the
+  ## production of batch output formats and may more efficiently encode and write metrics.
+  # use_batch_format = false
+
+  ## Data format to export.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
+  data_format = "influx"
+```
+
+## Example
+
+See the [examples](examples/) directory.
diff --git a/content/telegraf/v1/output-plugins/file/_index.md b/content/telegraf/v1/output-plugins/file/_index.md
new file mode 100644
index 000000000..d9d275782
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/file/_index.md
@@ -0,0 +1,69 @@
+---
+description: "Telegraf plugin for sending metrics to File"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: File
+    identifier: output-file
+tags: [File, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# File Output Plugin
+
+This plugin writes telegraf metrics to one or more files.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Send telegraf metrics to file(s)
+[[outputs.file]]
+  ## Files to write to, "stdout" is a specially handled file.
+  files = ["stdout", "/tmp/metrics.out"]
+
+  ## Use batch serialization format instead of line based delimiting.  The
+  ## batch format allows for the production of non line based output formats and
+  ## may more efficiently encode and write metrics.
+  # use_batch_format = false
+
+  ## The file will be rotated after the time interval specified.  When set
+  ## to 0 no time based rotation is performed.
+  # rotation_interval = "0h"
+
+  ## The logfile will be rotated when it becomes larger than the specified
+  ## size.  When set to 0 no size based rotation is performed.
+  # rotation_max_size = "0MB"
+
+  ## Maximum number of rotated archives to keep, any older logs are deleted.
+  ## If set to -1, no archives are removed.
+  # rotation_max_archives = 5
+
+  ## Data format to output.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
+  data_format = "influx"
+
+  ## Compress output data with the specified algorithm.
+  ## If empty, compression will be disabled and files will be plain text.
+  ## Supported algorithms are "zstd", "gzip" and "zlib".
+  # compression_algorithm = ""
+
+  ## Compression level for the algorithm above.
+  ## Please note that different algorithms support different levels:
+  ##   zstd  -- supports levels 1, 3, 7 and 11.
+  ##   gzip -- supports levels 0, 1 and 9.
+  ##   zlib -- supports levels 0, 1, and 9.
+  ## By default the default compression level for each algorithm is used.
+  # compression_level = -1
+```
diff --git a/content/telegraf/v1/output-plugins/graphite/_index.md b/content/telegraf/v1/output-plugins/graphite/_index.md
new file mode 100644
index 000000000..04f68f602
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/graphite/_index.md
@@ -0,0 +1,91 @@
+---
+description: "Telegraf plugin for sending metrics to Graphite"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Graphite
+    identifier: output-graphite
+tags: [Graphite, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Graphite Output Plugin
+
+This plugin writes to [Graphite](http://graphite.readthedocs.org/en/latest/index.html) via raw TCP.
+
+For details on the translation between Telegraf Metrics and Graphite output,
+see the [Graphite Data Format](/telegraf/v1/data_formats/output).
+
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for Graphite server to send metrics to
+[[outputs.graphite]]
+  ## TCP endpoint for your graphite instance.
+  ## If multiple endpoints are configured, the output will be load balanced.
+  ## Only one of the endpoints will be written to with each iteration.
+  servers = ["localhost:2003"]
+
+  ## Local address to bind when connecting to the server
+  ## If empty or not set, the local address is automatically chosen.
+  # local_address = ""
+
+  ## Prefix metrics name
+  prefix = ""
+
+  ## Graphite output template
+  ## see https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
+  template = "host.tags.measurement.field"
+
+  ## Strict sanitization regex
+  ## This is the default sanitization regex that is used on data passed to the
+  ## graphite serializer. Users can add additional characters here if required.
+  ## Be aware that the characters, '/' '@' '*' are always replaced with '_',
+  ## '..' is replaced with '.', and '\' is removed even if added to the
+  ## following regex.
+  # graphite_strict_sanitize_regex = '[^a-zA-Z0-9-:._=\p{L}]'
+
+  ## Enable Graphite tags support
+  # graphite_tag_support = false
+
+  ## Applied sanitization mode when graphite tag support is enabled.
+  ## * strict - uses the regex specified above
+  ## * compatible - allows for greater number of characters
+  # graphite_tag_sanitize_mode = "strict"
+
+  ## Character for separating metric name and field for Graphite tags
+  # graphite_separator = "."
+
+  ## Graphite templates patterns
+  ## 1. Template for cpu
+  ## 2. Template for disk*
+  ## 3. Default template
+  # templates = [
+  #  "cpu tags.measurement.host.field",
+  #  "disk* measurement.field",
+  #  "host.measurement.tags.field"
+  #]
+
+  ## timeout in seconds for the write connection to graphite
+  # timeout = "2s"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
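
To illustrate the default `host.tags.measurement.field` template, this Go sketch (a simplification for illustration, not the actual Graphite serializer) shows how a metric's parts are ordered into a Graphite line: the `host` tag first, then the remaining tag values, then measurement and field:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// graphiteLine is a simplified sketch of the "host.tags.measurement.field"
// template: host tag first, remaining tag values ordered by tag key, then
// measurement and field, joined with dots.
func graphiteLine(measurement, field string, value float64, ts int64, tags map[string]string) string {
	keys := make([]string, 0, len(tags))
	for k := range tags {
		if k != "host" {
			keys = append(keys, k)
		}
	}
	sort.Strings(keys)

	parts := []string{tags["host"]}
	for _, k := range keys {
		parts = append(parts, tags[k])
	}
	parts = append(parts, measurement, field)
	return fmt.Sprintf("%s %g %d", strings.Join(parts, "."), value, ts)
}

func main() {
	tags := map[string]string{"host": "localhost", "cpu": "cpu-total"}
	fmt.Println(graphiteLine("cpu", "usage_idle", 98.7, 1485350400, tags))
	// localhost.cpu-total.cpu.usage_idle 98.7 1485350400
}
```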
diff --git a/content/telegraf/v1/output-plugins/graylog/_index.md b/content/telegraf/v1/output-plugins/graylog/_index.md
new file mode 100644
index 000000000..2230cb8e4
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/graylog/_index.md
@@ -0,0 +1,85 @@
+---
+description: "Telegraf plugin for sending metrics to Graylog"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Graylog
+    identifier: output-graylog
+tags: [Graylog, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Graylog Output Plugin
+
+This plugin writes to a Graylog instance using the [GELF](https://docs.graylog.org/en/3.1/pages/gelf.html#gelf-payload-specification) format.
+
+## GELF Fields
+
+The [GELF spec](https://docs.graylog.org/docs/gelf#gelf-payload-specification) defines a number of specific fields in a GELF payload.
+These fields may have specific requirements set by the spec and users of the
+Graylog plugin need to follow these requirements or metrics may be rejected due
+to invalid data.
+
+For example, the timestamp field defined in the GELF spec is required to be a
+UNIX timestamp. This output plugin will not modify or check the timestamp
+field if one is present, and sends it as-is to Graylog. If the field is
+absent, Telegraf will set the timestamp to the current time.
+
+Any field not defined by the spec will have an underscore (`_`) prefixed to
+the field name.
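
For illustration, a CPU metric rendered as a GELF payload might look like the following (hypothetical example; the exact fields depend on the metric and configuration):

```json
{
  "version": "1.1",
  "host": "server01",
  "short_message": "telegraf",
  "timestamp": 1485350400,
  "_name": "cpu",
  "_cpu": "cpu-total",
  "_usage_idle": 92.5
}
```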
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Send telegraf metrics to graylog
+[[outputs.graylog]]
+  ## Endpoints for your graylog instances.
+  servers = ["udp://127.0.0.1:12201"]
+
+  ## Connection timeout.
+  # timeout = "5s"
+
+  ## The field to use as the GELF short_message, if unset the static string
+  ## "telegraf" will be used.
+  ##   example: short_message_field = "message"
+  # short_message_field = ""
+
+  ## According to GELF payload specification, additional fields names must be prefixed
+  ## with an underscore. Previous versions did not prefix custom field 'name' with underscore.
+  ## Set to true for backward compatibility.
+  # name_field_no_prefix = false
+
+  ## Connection retry options
+  ## Attempt to connect to the endpoints if the initial connection fails.
+  ## If 'false', Telegraf will give up after 3 connection attempts and will
+  ## exit with an error. If set to 'true', the plugin will retry connecting
+  ## to the unconnected endpoints indefinitely.
+  # connection_retry = false
+  ## Time to wait between connection retry attempts.
+  # connection_retry_wait_time = "15s"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+A server endpoint may be specified without a UDP or TCP scheme
+(e.g. "127.0.0.1:12201"), in which case the UDP protocol is assumed. TLS
+configuration is ignored for UDP endpoints.
diff --git a/content/telegraf/v1/output-plugins/groundwork/_index.md b/content/telegraf/v1/output-plugins/groundwork/_index.md
new file mode 100644
index 000000000..d839cd02e
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/groundwork/_index.md
@@ -0,0 +1,98 @@
+---
+description: "Telegraf plugin for sending metrics to GroundWork"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: GroundWork
+    identifier: output-groundwork
+tags: [GroundWork, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# GroundWork Output Plugin
+
+This plugin writes to a [GroundWork Monitor](https://www.gwos.com/product/groundwork-monitor/) instance. The plugin
+only supports GroundWork 8 and later (GW8+).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure metric
+ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `username` and
+`password` options.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Send telegraf metrics to GroundWork Monitor
+[[outputs.groundwork]]
+  ## URL of your groundwork instance.
+  url = "https://groundwork.example.com"
+
+  ## Agent uuid for GroundWork API Server.
+  agent_id = ""
+
+  ## Username and password to access GroundWork API.
+  username = ""
+  password = ""
+
+  ## Default application type to use in GroundWork client
+  # default_app_type = "TELEGRAF"
+
+  ## Default display name for the host with services (metrics).
+  # default_host = "telegraf"
+
+  ## Default service state.
+  # default_service_state = "SERVICE_OK"
+
+  ## The name of the tag that contains the hostname.
+  # resource_tag = "host"
+
+  ## The name of the tag that contains the host group name.
+  # group_tag = "group"
+```
+
+## List of tags used by the plugin
+
+* __group__ - to define the name of the group you want to monitor,
+  can be changed with config.
+* __host__ - to define the name of the host you want to monitor,
+  can be changed with config.
+* __service__ - to define the name of the service you want to monitor.
+* __status__ - to define the status of the service. Supported statuses:
+  "SERVICE_OK", "SERVICE_WARNING", "SERVICE_UNSCHEDULED_CRITICAL",
+  "SERVICE_PENDING", "SERVICE_SCHEDULED_CRITICAL", "SERVICE_UNKNOWN".
+* __message__ - to provide any message you want,
+  it overrides __message__ field value.
+* __unitType__ - to use in monitoring contexts (subset of The Unified Code for
+  Units of Measure standard). Supported types: "1", "%cpu", "KB", "GB", "MB".
+* __critical__ - to define the default critical threshold value,
+  it overrides value_cr field value.
+* __warning__ - to define the default warning threshold value,
+  it overrides value_wn field value.
+* __value_cr__ - to define critical threshold value,
+  it overrides __critical__ tag value and __value_cr__ field value.
+* __value_wn__ - to define warning threshold value,
+  it overrides __warning__ tag value and __value_wn__ field value.
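+
+As an illustrative sketch (the host, group and threshold values are
+hypothetical), a metric carrying some of these tags in InfluxDB line protocol
+could look like:
+
+```text
+disk,host=server01,group=linux-hosts,service=disk_usage,status=SERVICE_WARNING,warning=80,critical=95 value=87
+```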
+
+## NOTE
+
+The current version of GroundWork Monitor does not support metrics whose values
+are strings. Such metrics will be skipped and will not be added to the final
+payload. You can find more context in this pull request: [#10255](https://github.com/influxdata/telegraf/pull/10255).
diff --git a/content/telegraf/v1/output-plugins/health/_index.md b/content/telegraf/v1/output-plugins/health/_index.md
new file mode 100644
index 000000000..3b6950981
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/health/_index.md
@@ -0,0 +1,91 @@
+---
+description: "Telegraf plugin for sending metrics to Health"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Health
+    identifier: output-health
+tags: [Health, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Health Output Plugin
+
+The health plugin provides an HTTP health check resource that can be configured
+to return a failure status code based on the value of a metric.
+
+When the plugin is healthy it returns a 200 response; when unhealthy it
+returns a 503 response. The default state is healthy; one or more checks
+must fail for the resource to enter the failed state.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure metric
+ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Configurable HTTP health check resource based on metrics
+[[outputs.health]]
+  ## Address and port to listen on.
+  ##   ex: service_address = "http://localhost:8080"
+  ##       service_address = "unix:///var/run/telegraf-health.sock"
+  # service_address = "http://:8080"
+
+  ## The maximum duration for reading the entire request.
+  # read_timeout = "5s"
+  ## The maximum duration for writing the entire response.
+  # write_timeout = "5s"
+
+  ## Username and password to accept for HTTP basic authentication.
+  # basic_username = "user1"
+  # basic_password = "secret"
+
+  ## Allowed CA certificates for client certificates.
+  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]
+
+  ## TLS server certificate and private key.
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+
+  ## NOTE: Due to the way TOML is parsed, tables must be at the END of the
+  ## plugin definition, otherwise additional config options are read as part of
+  ## the table
+
+  ## One or more check sub-tables should be defined. It is also recommended to
+  ## use metric filtering to limit the metrics that flow into this output.
+  ##
+  ## When using the default buffer sizes, this example will fail when the
+  ## metric buffer is half full.
+  ##
+  ## namepass = ["internal_write"]
+  ## tagpass = { output = ["influxdb"] }
+  ##
+  ## [[outputs.health.compares]]
+  ##   field = "buffer_size"
+  ##   lt = 5000.0
+  ##
+  ## [[outputs.health.contains]]
+  ##   field = "buffer_size"
+```
+
+### compares
+
+The `compares` check is used to assert basic mathematical relationships.  Use
+it by choosing a field key and one or more comparisons that must hold true.  If
+the field is not found on a metric, no comparison is made.
+
+Comparisons must hold true on all metrics for the check to pass.
+
+### contains
+
+The `contains` check can be used to require a field key to exist on at least
+one metric.
+
+If the field is found on any metric the check passes.
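+
+As a minimal sketch based on the commented example above (the buffer threshold
+is an assumption), a health resource that fails once Telegraf's internal write
+buffer holds 5000 or more metrics could be configured as:
+
+```toml
+[[outputs.health]]
+  service_address = "http://:8080"
+
+  ## Only consider Telegraf's own internal_write metrics.
+  namepass = ["internal_write"]
+
+  ## Fail the check unless buffer_size stays below 5000.
+  [[outputs.health.compares]]
+    field = "buffer_size"
+    lt = 5000.0
+```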
diff --git a/content/telegraf/v1/output-plugins/http/_index.md b/content/telegraf/v1/output-plugins/http/_index.md
new file mode 100644
index 000000000..3d939b435
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/http/_index.md
@@ -0,0 +1,173 @@
+---
+description: "Telegraf plugin for sending metrics to HTTP"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: HTTP
+    identifier: output-http
+tags: [HTTP, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# HTTP Output Plugin
+
+This plugin sends metrics in an HTTP message encoded using one of the output
+data formats. For data formats that support batching, metrics are sent in
+batch format by default.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure metric
+ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `username`, `password`,
+`headers`, and `cookie_auth_headers` options.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# A plugin that can transmit metrics over HTTP
+[[outputs.http]]
+  ## URL is the address to send metrics to
+  url = "http://127.0.0.1:8080/telegraf"
+
+  ## Timeout for HTTP message
+  # timeout = "5s"
+
+  ## HTTP method, one of: "POST" or "PUT" or "PATCH"
+  # method = "POST"
+
+  ## HTTP Basic Auth credentials
+  # username = "username"
+  # password = "pa$$word"
+
+  ## OAuth2 Client Credentials Grant
+  # client_id = "clientid"
+  # client_secret = "secret"
+  # token_url = "https://identityprovider/oauth2/v1/token"
+  # audience = ""
+  # scopes = ["urn:opc:idm:__myscopes__"]
+
+  ## Google API Auth
+  # google_application_credentials = "/etc/telegraf/example_secret.json"
+
+  ## HTTP Proxy support
+  # use_system_proxy = false
+  # http_proxy_url = ""
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## Optional Cookie authentication
+  # cookie_auth_url = "https://localhost/authMe"
+  # cookie_auth_method = "POST"
+  # cookie_auth_username = "username"
+  # cookie_auth_password = "pa$$word"
+  # cookie_auth_headers = '{"Content-Type": "application/json", "X-MY-HEADER":"hello"}'
+  # cookie_auth_body = '{"username": "user", "password": "pa$$word", "authenticate": "me"}'
+  ## cookie_auth_renewal not set or set to "0" will auth once and never renew the cookie
+  # cookie_auth_renewal = "5m"
+
+  ## Data format to output.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
+  # data_format = "influx"
+
+  ## Use batch serialization format (default) instead of line based format.
+  ## Batch format is more efficient and should be used unless line based
+  ## format is really needed.
+  # use_batch_format = true
+
+  ## HTTP Content-Encoding for write request body, can be set to "gzip" to
+  ## compress body or "identity" to apply no encoding.
+  # content_encoding = "identity"
+
+  ## MaxIdleConns controls the maximum number of idle (keep-alive)
+  ## connections across all hosts. Zero means no limit.
+  # max_idle_conn = 0
+
+  ## MaxIdleConnsPerHost, if non-zero, controls the maximum idle
+  ## (keep-alive) connections to keep per-host. If zero,
+  ## DefaultMaxIdleConnsPerHost is used (2).
+  # max_idle_conn_per_host = 2
+
+  ## Idle (keep-alive) connection timeout.
+  ## Maximum amount of time before idle connection is closed.
+  ## Zero means no limit.
+  # idle_conn_timeout = 0
+
+  ## Amazon Region
+  #region = "us-east-1"
+
+  ## Amazon Credentials
+  ## Amazon Credentials are not built unless the following aws_service
+  ## setting is set to a non-empty string. It may also need to match the name
+  ## of the service the requests are sent to.
+  #aws_service = "execute-api"
+
+  ## Credentials are loaded in the following order
+  ## 1) Web identity provider credentials via STS if role_arn and web_identity_token_file are specified
+  ## 2) Assumed credentials via STS if role_arn is specified
+  ## 3) explicit credentials from 'access_key' and 'secret_key'
+  ## 4) shared profile from 'profile'
+  ## 5) environment variables
+  ## 6) shared credentials file
+  ## 7) EC2 Instance Profile
+  #access_key = ""
+  #secret_key = ""
+  #token = ""
+  #role_arn = ""
+  #web_identity_token_file = ""
+  #role_session_name = ""
+  #profile = ""
+  #shared_credential_file = ""
+
+  ## Optional list of statuscodes (<200 or >300) upon which requests should not be retried
+  # non_retryable_statuscodes = [409, 413]
+
+  ## NOTE: Due to the way TOML is parsed, tables must be at the END of the
+  ## plugin definition, otherwise additional config options are read as part of
+  ## the table
+
+  ## Additional HTTP headers
+  # [outputs.http.headers]
+  #   ## Should be set manually to "application/json" for json data_format
+  #   Content-Type = "text/plain; charset=utf-8"
+```
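+
+As a minimal sketch (the endpoint URL is a placeholder), sending
+JSON-serialized metrics with a matching `Content-Type` header could look like:
+
+```toml
+[[outputs.http]]
+  url = "https://metrics.example.com/ingest"
+  method = "POST"
+  data_format = "json"
+
+  ## For the json data format the Content-Type should be set manually.
+  [outputs.http.headers]
+    Content-Type = "application/json"
+```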
+
+### Google API Auth
+
+The `google_application_credentials` setting is used with Google Cloud APIs.
+It specifies the JSON key file. To learn about creating Google service accounts,
+consult Google's [oauth2 service account documentation](https://cloud.google.com/docs/authentication/production#create_service_account).
+An example use case is a metrics proxy deployed to Cloud Run. In this example,
+the service account must have the "run.routes.invoke" permission.
+
+### Optional Cookie Authentication Settings
+
+The optional Cookie Authentication Settings will retrieve a cookie from the
+given authorization endpoint, and use it in subsequent API requests.  This is
+useful for services that do not provide OAuth or Basic Auth authentication,
+e.g. the [Tesla Powerwall API](https://www.tesla.com/support/energy/powerwall/own/monitoring-from-home-network), which uses a Cookie Auth Body to
+retrieve an authorization cookie.  The Cookie Auth Renewal interval will renew
+the authorization by retrieving a new cookie at the given interval.
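+
+As a minimal sketch (the endpoint and credentials are placeholders), cookie
+authentication with hourly renewal could be configured as:
+
+```toml
+[[outputs.http]]
+  url = "https://device.example.com/api/metrics"
+  cookie_auth_url = "https://device.example.com/login"
+  cookie_auth_method = "POST"
+  cookie_auth_body = '{"username": "user", "password": "secret"}'
+  ## Renew the authorization cookie every hour.
+  cookie_auth_renewal = "1h"
+```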
diff --git a/content/telegraf/v1/output-plugins/influxdb/_index.md b/content/telegraf/v1/output-plugins/influxdb/_index.md
new file mode 100644
index 000000000..ec5dd4103
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/influxdb/_index.md
@@ -0,0 +1,137 @@
+---
+description: "Telegraf plugin for sending metrics to InfluxDB v1.x"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: InfluxDB v1.x
+    identifier: output-influxdb
+tags: [InfluxDB v1.x, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# InfluxDB v1.x Output Plugin
+
+The InfluxDB output plugin writes metrics to the [InfluxDB v1.x] HTTP or UDP
+service.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure metric
+ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `username` and
+`password` options.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for sending metrics to InfluxDB
+[[outputs.influxdb]]
+  ## The full HTTP or UDP URL for your InfluxDB instance.
+  ##
+  ## Multiple URLs can be specified for a single cluster; only ONE of the
+  ## urls will be written to in each interval.
+  # urls = ["unix:///var/run/influxdb.sock"]
+  # urls = ["udp://127.0.0.1:8089"]
+  # urls = ["http://127.0.0.1:8086"]
+
+  ## Local address to bind when connecting to the server
+  ## If empty or not set, the local address is automatically chosen.
+  # local_address = ""
+
+  ## The target database for metrics; will be created as needed.
+  ## For UDP url endpoints, the database needs to be configured on the server side.
+  # database = "telegraf"
+
+  ## The value of this tag will be used to determine the database.  If this
+  ## tag is not set the 'database' option is used as the default.
+  # database_tag = ""
+
+  ## If true, the 'database_tag' will not be included in the written metric.
+  # exclude_database_tag = false
+
+  ## If true, no CREATE DATABASE queries will be sent.  Set to true when using
+  ## Telegraf with a user without permissions to create databases or when the
+  ## database already exists.
+  # skip_database_creation = false
+
+  ## Name of existing retention policy to write to.  Empty string writes to
+  ## the default retention policy.  Only takes effect when using HTTP.
+  # retention_policy = ""
+
+  ## The value of this tag will be used to determine the retention policy.  If this
+  ## tag is not set the 'retention_policy' option is used as the default.
+  # retention_policy_tag = ""
+
+  ## If true, the 'retention_policy_tag' will not be included in the written metric.
+  # exclude_retention_policy_tag = false
+
+  ## Write consistency (clusters only), can be: "any", "one", "quorum", "all".
+  ## Only takes effect when using HTTP.
+  # write_consistency = "any"
+
+  ## Timeout for HTTP messages.
+  # timeout = "5s"
+
+  ## HTTP Basic Auth
+  # username = "telegraf"
+  # password = "metricsmetricsmetricsmetrics"
+
+  ## HTTP User-Agent
+  # user_agent = "telegraf"
+
+  ## UDP payload size is the maximum packet size to send.
+  # udp_payload = "512B"
+
+  ## Optional TLS Config for use on HTTP connections.
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## HTTP Proxy override. If unset, the standard proxy environment
+  ## variables are consulted to determine which proxy, if any, should be used.
+  # http_proxy = "http://corporate.proxy:3128"
+
+  ## Additional HTTP headers
+  # http_headers = {"X-Special-Header" = "Special-Value"}
+
+  ## HTTP Content-Encoding for write request body, can be set to "gzip" to
+  ## compress body or "identity" to apply no encoding.
+  # content_encoding = "gzip"
+
+  ## When true, Telegraf will output unsigned integers as unsigned values,
+  ## i.e.: "42u".  You will need a version of InfluxDB supporting unsigned
+  ## integer values.  Enabling this option will result in field type errors if
+  ## existing data has been written.
+  # influx_uint_support = false
+
+  ## When true, Telegraf will omit the timestamp on data to allow InfluxDB
+  ## to set the timestamp of the data during ingestion. This is generally NOT
+  ## what you want as it can lead to data points captured at different times
+  ## getting omitted due to similar data.
+  # influx_omit_timestamp = false
+```
+
+To send metrics to multiple InfluxDB instances, define an additional
+`[[outputs.influxdb]]` section with its own `urls`.
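+
+For example, a hypothetical setup that duplicates every metric to a local and
+a remote instance could look like:
+
+```toml
+[[outputs.influxdb]]
+  urls = ["http://127.0.0.1:8086"]
+  database = "telegraf"
+
+[[outputs.influxdb]]
+  urls = ["https://influxdb.example.com:8086"]
+  database = "telegraf"
+```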
+
+## Metrics
+
+Reference the [influx serializer](/telegraf/v1/plugins/#serializer-influx) for details about metric production.
+
+[InfluxDB v1.x]: https://github.com/influxdata/influxdb
diff --git a/content/telegraf/v1/output-plugins/influxdb_v2/_index.md b/content/telegraf/v1/output-plugins/influxdb_v2/_index.md
new file mode 100644
index 000000000..a5da5e51d
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/influxdb_v2/_index.md
@@ -0,0 +1,117 @@
+---
+description: "Telegraf plugin for sending metrics to InfluxDB v2.x"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: InfluxDB v2.x
+    identifier: output-influxdb_v2
+tags: [InfluxDB v2.x, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# InfluxDB v2.x Output Plugin
+
+The InfluxDB output plugin writes metrics to the [InfluxDB v2.x] HTTP service.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure metric
+ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `token` option.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for sending metrics to InfluxDB 2.0
+[[outputs.influxdb_v2]]
+  ## The URLs of the InfluxDB cluster nodes.
+  ##
+  ## Multiple URLs can be specified for a single cluster; only ONE of the
+  ## urls will be written to in each interval.
+  ##   ex: urls = ["https://us-west-2-1.aws.cloud2.influxdata.com"]
+  urls = ["http://127.0.0.1:8086"]
+
+  ## Local address to bind when connecting to the server
+  ## If empty or not set, the local address is automatically chosen.
+  # local_address = ""
+
+  ## Token for authentication.
+  token = ""
+
+  ## Organization is the name of the organization you wish to write to.
+  organization = ""
+
+  ## Destination bucket to write into.
+  bucket = ""
+
+  ## The value of this tag will be used to determine the bucket.  If this
+  ## tag is not set the 'bucket' option is used as the default.
+  # bucket_tag = ""
+
+  ## If true, the bucket tag will not be added to the metric.
+  # exclude_bucket_tag = false
+
+  ## Timeout for HTTP messages.
+  # timeout = "5s"
+
+  ## Additional HTTP headers
+  # http_headers = {"X-Special-Header" = "Special-Value"}
+
+  ## HTTP Proxy override. If unset, the standard proxy environment
+  ## variables are consulted to determine which proxy, if any, should be used.
+  # http_proxy = "http://corporate.proxy:3128"
+
+  ## HTTP User-Agent
+  # user_agent = "telegraf"
+
+  ## Content-Encoding for write request body, can be set to "gzip" to
+  ## compress body or "identity" to apply no encoding.
+  # content_encoding = "gzip"
+
+  ## Enable or disable uint support for writing uints to InfluxDB 2.0.
+  # influx_uint_support = false
+
+  ## When true, Telegraf will omit the timestamp on data to allow InfluxDB
+  ## to set the timestamp of the data during ingestion. This is generally NOT
+  ## what you want as it can lead to data points captured at different times
+  ## getting omitted due to similar data.
+  # influx_omit_timestamp = false
+
+  ## HTTP/2 Timeouts
+  ## The following values control the HTTP/2 client's timeouts. These settings
+  ## are generally not required unless a user is seeing issues with client
+  ## disconnects. If a user does see issues, then it is suggested to set these
+  ## values to "15s" for ping timeout and "30s" for read idle timeout and
+  ## retry.
+  ##
+  ## Note that the timer for read_idle_timeout begins at the end of the last
+  ## successful write and not at the beginning of the next write.
+  # ping_timeout = "0s"
+  # read_idle_timeout = "0s"
+
+  ## Optional TLS Config for use on HTTP connections.
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
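+
+As a sketch of dynamic bucket routing (the organization and bucket names are
+hypothetical), metrics carrying a `bucket` tag are written to that bucket and
+the tag is dropped from the written metric:
+
+```toml
+[[outputs.influxdb_v2]]
+  urls = ["http://127.0.0.1:8086"]
+  token = "$INFLUX_TOKEN"
+  organization = "example-org"
+  ## Used when a metric has no 'bucket' tag.
+  bucket = "default"
+  bucket_tag = "bucket"
+  exclude_bucket_tag = true
+```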
+
+## Metrics
+
+Reference the [influx serializer](/telegraf/v1/plugins/#serializer-influx) for details about metric production.
+
+[InfluxDB v2.x]: https://github.com/influxdata/influxdb
diff --git a/content/telegraf/v1/output-plugins/instrumental/_index.md b/content/telegraf/v1/output-plugins/instrumental/_index.md
new file mode 100644
index 000000000..0a530af52
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/instrumental/_index.md
@@ -0,0 +1,57 @@
+---
+description: "Telegraf plugin for sending metrics to Instrumental"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Instrumental
+    identifier: output-instrumental
+tags: [Instrumental, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Instrumental Output Plugin
+
+This plugin writes to the [Instrumental Collector
+API](https://instrumentalapp.com/docs/tcp-collector) and requires a
+Project-specific API token.
+
+Instrumental accepts stats in a format very close to Graphite, with the only
+difference being that the type of stat (gauge, increment) is the first token,
+separated from the metric itself by whitespace. The `increment` type is only
+used if the metric comes in as a counter through `[[inputs.statsd]]`.
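+
+As an illustrative sketch (metric names, values and timestamps are
+hypothetical), lines in the collector protocol could look like:
+
+```text
+gauge host.cpu.usage 42.5 1678901234
+increment host.requests.count 1 1678901234
+```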
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure metric
+ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `api_token` option.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for sending metrics to an Instrumental project
+[[outputs.instrumental]]
+  ## Project API Token (required)
+  api_token = "API Token"  # required
+  ## Prefix the metrics with a given name
+  prefix = ""
+  ## Stats output template (Graphite formatting)
+  ## see https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md#graphite
+  template = "host.tags.measurement.field"
+  ## Timeout in seconds to connect
+  timeout = "2s"
+  ## Debug true - Print communication to Instrumental
+  debug = false
+```
diff --git a/content/telegraf/v1/output-plugins/iotdb/_index.md b/content/telegraf/v1/output-plugins/iotdb/_index.md
new file mode 100644
index 000000000..1e96fa35c
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/iotdb/_index.md
@@ -0,0 +1,141 @@
+---
+description: "Telegraf plugin for sending metrics to IoTDB"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: IoTDB
+    identifier: output-iotdb
+tags: [IoTDB, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# IoTDB Output Plugin
+
+This output plugin saves Telegraf metrics to an Apache IoTDB backend,
+supporting session connection and data insertion.
+
+## Apache IoTDB
+
+Apache IoTDB (Database for Internet of Things) is an IoT native database with
+high performance for data management and analysis, deployable on the edge and
+the cloud. Due to its light-weight architecture, high performance and rich
+feature set together with its deep integration with Apache Hadoop, Spark and
+Flink, Apache IoTDB can meet the requirements of massive data storage,
+high-speed data ingestion and complex data analysis in the IoT industrial
+fields.
+
+For more details consult the [Apache IoTDB website](https://iotdb.apache.org)
+or the [Apache IoTDB GitHub page](https://github.com/apache/iotdb).
+
+## Getting started
+
+Before using this plugin, configure the IP address, port number, username,
+password and other connection details of the database server, as well as the
+data type conversions, time unit and other options.
+
+See the Configuration section below for details.
+
+IoTDB uses a tree model for metadata while Telegraf uses a tag model
+(see [InfluxDB-Protocol Adapter](https://iotdb.apache.org/UserGuide/Master/API/InfluxDB-Protocol.html)).
+There are two available options for converting tags, specified by setting
+`convert_tags_to`:
+
+- `fields`. Treat tags as measurements. For each Key:Value pair in the tags,
+  convert it into a Measurement, Value and DataType, as supported by IoTDB.
+- `device_id`, the default option. Treat tags as part of the device ID. The
+  tags constitute a subtree of `Name`.
+
+For example, there is a metric:
+
+```markdown
+Name="root.sg.device", Tags={tag1="private", tag2="working"}, Fields={s1=100, s2="hello"}
+```
+
+- `fields`, result: `root.sg.device, s1=100, s2="hello", tag1="private", tag2="working"`
+- `device_id`, result: `root.sg.device.private.working, s1=100, s2="hello"`
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure metric
+ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `username` and
+`password` options.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Save metrics to an IoTDB Database
+[[outputs.iotdb]]
+  ## Configuration of IoTDB server connection
+  host = "127.0.0.1"
+  # port = "6667"
+
+  ## Configuration of authentication
+  # user = "root"
+  # password = "root"
+
+  ## Timeout to open a new session.
+  ## A value of zero means no timeout.
+  # timeout = "5s"
+
+  ## Configuration of type conversion for 64-bit unsigned int
+  ## IoTDB currently DOES NOT support unsigned integers (version 13.x).
+  ## 32-bit unsigned integers are safely converted into 64-bit signed integers by the plugin,
+  ## however, this is not true for 64-bit values in general as overflows may occur.
+  ## The following setting allows specifying the handling of 64-bit unsigned integers.
+  ## Available values are:
+  ##   - "int64"       --  convert to 64-bit signed integers and accept overflows
+  ##   - "int64_clip"  --  convert to 64-bit signed integers and clip the values on overflow to 9,223,372,036,854,775,807
+  ##   - "text"        --  convert to the string representation of the value
+  # uint64_conversion = "int64_clip"
+
+  ## Configuration of TimeStamp
+  ## TimeStamp is always saved in 64bits int. timestamp_precision specifies the unit of timestamp.
+  ## Available value:
+  ## "second", "millisecond", "microsecond", "nanosecond"(default)
+  # timestamp_precision = "nanosecond"
+
+  ## Handling of tags
+  ## Tags are not fully supported by IoTDB.
+  ## A guide with suggestions on how to handle tags can be found here:
+  ##     https://iotdb.apache.org/UserGuide/Master/API/InfluxDB-Protocol.html
+  ##
+  ## Available values are:
+  ##   - "fields"     --  convert tags to fields in the measurement
+  ##   - "device_id"  --  attach tags to the device ID
+  ##
+  ## For Example, a metric named "root.sg.device" with the tags `tag1: "private"`  and  `tag2: "working"` and
+  ##  fields `s1: 100`  and `s2: "hello"` will result in the following representations in IoTDB
+  ##   - "fields"     --  root.sg.device, s1=100, s2="hello", tag1="private", tag2="working"
+  ##   - "device_id"  --  root.sg.device.private.working, s1=100, s2="hello"
+  # convert_tags_to = "device_id"
+
+  ## Handling of unsupported characters
+  ## Some characters in different versions of IoTDB are not supported in path name
+  ## A guide with suggestions on valid paths can be found here:
+  ## for iotdb 0.13.x           -> https://iotdb.apache.org/UserGuide/V0.13.x/Reference/Syntax-Conventions.html#identifiers
+  ## for iotdb 1.x.x and above  -> https://iotdb.apache.org/UserGuide/V1.3.x/User-Manual/Syntax-Rule.html#identifier
+  ##
+  ## Available values are:
+  ##   - "1.0", "1.1", "1.2", "1.3"  -- enclose in backquotes (``) any identifier containing
+  ##                                    forbidden characters such as @ $ # : [ ] { } ( ) space
+  ##   - "0.13"                      -- enclose in backquotes (``) any identifier containing
+  ##                                    forbidden characters such as space
+  ##
+  ## Keep this section commented if you don't want to sanitize the path
+  # sanitize_tag = "1.3"
+```
diff --git a/content/telegraf/v1/output-plugins/kafka/_index.md b/content/telegraf/v1/output-plugins/kafka/_index.md
new file mode 100644
index 000000000..df8b4be4f
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/kafka/_index.md
@@ -0,0 +1,261 @@
+---
+description: "Telegraf plugin for sending metrics to Kafka"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Kafka
+    identifier: output-kafka
+tags: [Kafka, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Kafka Output Plugin
+
+This plugin writes to a [Kafka
+Broker](http://kafka.apache.org/07/quickstart.html) acting as a Kafka producer.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Startup error behavior options <!-- @/docs/includes/startup_error_behavior.md -->
+
+In addition to the plugin-specific and global configuration settings the plugin
+supports options for specifying the behavior when experiencing startup errors
+using the `startup_error_behavior` setting. Available values are:
+
+- `error`:  Telegraf will stop and exit in case of startup errors. This is the
+            default behavior.
+- `ignore`: Telegraf will ignore startup errors for this plugin and disables it
+            but continues processing for all other plugins.
+- `retry`:  Telegraf will try to start the plugin in every gather or write
+            cycle in case of startup errors. The plugin is disabled until
+            the startup succeeds.
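+
+As a minimal sketch, retrying startup when the broker is not yet reachable
+(broker address and topic are illustrative):
+
+```toml
+[[outputs.kafka]]
+  ## Retry plugin startup on every write cycle instead of exiting
+  startup_error_behavior = "retry"
+  brokers = ["localhost:9092"]
+  topic = "telegraf"
+```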
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `sasl_username`,
+`sasl_password` and `sasl_access_token` options.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
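+
+For example, assuming a secret store configured with the id `mystore` holding
+the keys `kafka_user` and `kafka_pass` (store id and key names are
+illustrative):
+
+```toml
+[[outputs.kafka]]
+  brokers = ["localhost:9092"]
+  topic = "telegraf"
+  sasl_mechanism = "PLAIN"
+  ## Secrets are referenced with the @{store_id:secret_key} syntax
+  sasl_username = "@{mystore:kafka_user}"
+  sasl_password = "@{mystore:kafka_pass}"
+```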
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for the Kafka server to send metrics to
+[[outputs.kafka]]
+  ## URLs of kafka brokers
+  ## The brokers listed here are used to connect to collect metadata about a
+  ## cluster. However, once the initial metadata collection is completed, telegraf
+  ## will communicate solely with the kafka leader and not all defined brokers.
+  brokers = ["localhost:9092"]
+
+  ## Kafka topic for producer messages
+  topic = "telegraf"
+
+  ## The value of this tag will be used as the topic.  If not set the 'topic'
+  ## option is used.
+  # topic_tag = ""
+
+  ## If true, the 'topic_tag' will be removed from the metric.
+  # exclude_topic_tag = false
+
+  ## Optional Client id
+  # client_id = "Telegraf"
+
+  ## Set the minimal supported Kafka version.  Setting this enables the use of new
+  ## Kafka features and APIs.  Of particular interest, lz4 compression
+  ## requires at least version 0.10.0.0.
+  ##   ex: version = "1.1.0"
+  # version = ""
+
+  ## The routing tag specifies a tagkey on the metric whose value is used as
+  ## the message key.  The message key is used to determine which partition to
+  ## send the message to.  This tag is preferred over the routing_key option.
+  routing_tag = "host"
+
+  ## The routing key is set as the message key and used to determine which
+  ## partition to send the message to.  This value is only used when no
+  ## routing_tag is set or as a fallback when the tag specified in routing tag
+  ## is not found.
+  ##
+  ## If set to "random", a random value will be generated for each message.
+  ##
+  ## When unset, no message key is added and each message is routed to a random
+  ## partition.
+  ##
+  ##   ex: routing_key = "random"
+  ##       routing_key = "telegraf"
+  # routing_key = ""
+
+  ## Compression codec represents the various compression codecs recognized by
+  ## Kafka in messages.
+  ##  0 : None
+  ##  1 : Gzip
+  ##  2 : Snappy
+  ##  3 : LZ4
+  ##  4 : ZSTD
+  # compression_codec = 0
+
+  ## Idempotent Writes
+  ## If enabled, exactly one copy of each message is written.
+  # idempotent_writes = false
+
+  ##  RequiredAcks is used in Produce Requests to tell the broker how many
+  ##  replica acknowledgements it must see before responding
+  ##   0 : the producer never waits for an acknowledgement from the broker.
+  ##       This option provides the lowest latency but the weakest durability
+  ##       guarantees (some data will be lost when a server fails).
+  ##   1 : the producer gets an acknowledgement after the leader replica has
+  ##       received the data. This option provides better durability as the
+  ##       client waits until the server acknowledges the request as successful
+  ##       (only messages that were written to the now-dead leader but not yet
+  ##       replicated will be lost).
+  ##   -1: the producer gets an acknowledgement after all in-sync replicas have
+  ##       received the data. This option provides the best durability; no
+  ##       messages will be lost as long as at least one in-sync replica
+  ##       remains.
+  # required_acks = -1
+
+  ## The maximum number of times to retry sending a metric before failing
+  ## until the next flush.
+  # max_retry = 3
+
+  ## The maximum permitted size of a message. Should be set equal to or
+  ## smaller than the broker's 'message.max.bytes'.
+  # max_message_bytes = 1000000
+
+  ## Producer timestamp
+  ## This option sets the timestamp of the kafka producer message, choose from:
+  ##   * metric: Uses the metric's timestamp
+  ##   * now: Uses the time of write
+  # producer_timestamp = metric
+
+  ## Add metric name as specified kafka header if not empty
+  # metric_name_header = ""
+
+  ## Optional TLS Config
+  # enable_tls = false
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## Period between keep alive probes.
+  ## Defaults to the OS configuration if not specified or zero.
+  # keep_alive_period = "15s"
+
+  ## Optional SOCKS5 proxy to use when connecting to brokers
+  # socks5_enabled = true
+  # socks5_address = "127.0.0.1:1080"
+  # socks5_username = "alice"
+  # socks5_password = "pass123"
+
+  ## Optional SASL Config
+  # sasl_username = "kafka"
+  # sasl_password = "secret"
+
+  ## Optional SASL:
+  ## one of: OAUTHBEARER, PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, GSSAPI
+  ## (defaults to PLAIN)
+  # sasl_mechanism = ""
+
+  ## used if sasl_mechanism is GSSAPI
+  # sasl_gssapi_service_name = ""
+  ## One of: KRB5_USER_AUTH and KRB5_KEYTAB_AUTH
+  # sasl_gssapi_auth_type = "KRB5_USER_AUTH"
+  # sasl_gssapi_kerberos_config_path = "/"
+  # sasl_gssapi_realm = "realm"
+  # sasl_gssapi_key_tab_path = ""
+  # sasl_gssapi_disable_pafxfast = false
+
+  ## Access token used if sasl_mechanism is OAUTHBEARER
+  # sasl_access_token = ""
+
+  ## Arbitrary key value string pairs to pass as a TOML table. For example:
+  # {logicalCluster = "cluster-042", poolId = "pool-027"}
+  # sasl_extensions = {}
+
+  ## SASL protocol version.  When connecting to Azure EventHub set to 0.
+  # sasl_version = 1
+
+  # Disable Kafka metadata full fetch
+  # metadata_full = false
+
+  ## Maximum number of retries for metadata operations including
+  ## connecting. Sets Sarama library's Metadata.Retry.Max config value. If 0 or
+  ## unset, use the Sarama default of 3.
+  # metadata_retry_max = 0
+
+  ## Type of retry backoff. Valid options: "constant", "exponential"
+  # metadata_retry_type = "constant"
+
+  ## Amount of time to wait before retrying. When metadata_retry_type is
+  ## "constant", each retry is delayed this amount. When "exponential", the
+  ## first retry is delayed this amount, and subsequent delays are doubled. If 0
+  ## or unset, use the Sarama default of 250 ms
+  # metadata_retry_backoff = 0
+
+  ## Maximum amount of time to wait before retrying when metadata_retry_type is
+  ## "exponential". Ignored for other retry types. If 0, there is no backoff
+  ## limit.
+  # metadata_retry_max_duration = 0
+
+  ## Data format to output.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
+  # data_format = "influx"
+
+  ## NOTE: Due to the way TOML is parsed, tables must be at the END of the
+  ## plugin definition, otherwise additional config options are read as part of
+  ## the table
+
+  ## Optional topic suffix configuration.
+  ## If the section is omitted, no suffix is used.
+  ## The following topic suffix methods are supported:
+  ##   measurement - suffix equals to separator + measurement's name
+  ##   tags        - suffix equals to separator + specified tags' values
+  ##                 interleaved with separator
+
+  ## Suffix equals to "_" + measurement name
+  # [outputs.kafka.topic_suffix]
+  #   method = "measurement"
+  #   separator = "_"
+
+  ## Suffix equals to "__" + measurement's "foo" tag value.
+  ## If there's no such tag, the suffix is an empty string
+  # [outputs.kafka.topic_suffix]
+  #   method = "tags"
+  #   keys = ["foo"]
+  #   separator = "__"
+
+  ## Suffix equals to "_" + measurement's "foo" and "bar"
+  ## tag values, separated by "_". If there are no such tags,
+  ## their values are treated as empty strings.
+  # [outputs.kafka.topic_suffix]
+  #   method = "tags"
+  #   keys = ["foo", "bar"]
+  #   separator = "_"
+```
+
+### `max_retry`
+
+This option controls the number of retries before a failure notification is
+displayed for each message when no acknowledgement is received from the
+broker. When the setting is greater than `0`, message latency can be reduced,
+duplicate messages can occur in cases of transient errors, and broker loads can
+increase during downtime.
+
+The option is similar to the
+[retries](https://kafka.apache.org/documentation/#producerconfigs) Producer
+option in the Java Kafka Producer.
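+
+A hedged sketch of raising the retry count for a flaky broker connection
+(values are illustrative):
+
+```toml
+[[outputs.kafka]]
+  brokers = ["localhost:9092"]
+  topic = "telegraf"
+  ## Retry each message up to 5 times before failing until the next flush
+  max_retry = 5
+```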
diff --git a/content/telegraf/v1/output-plugins/kinesis/_index.md b/content/telegraf/v1/output-plugins/kinesis/_index.md
new file mode 100644
index 000000000..91c837579
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/kinesis/_index.md
@@ -0,0 +1,205 @@
+---
+description: "Telegraf plugin for sending metrics to Amazon Kinesis"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Amazon Kinesis
+    identifier: output-kinesis
+tags: [Amazon Kinesis, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Amazon Kinesis Output Plugin
+
+This is an experimental plugin that is still in the early stages of
+development. It batches all points into one Put request to Kinesis, which
+considerably reduces the number of API requests.
+
+## About Kinesis
+
+This is not the place to document all of the various Kinesis terms, however it
+may be useful for users to review Amazon's official documentation, which is
+available
+[here](http://docs.aws.amazon.com/kinesis/latest/dev/key-concepts.html).
+
+## Amazon Authentication
+
+This plugin uses a credential chain for authentication with the Kinesis API
+endpoint. The plugin attempts to authenticate in the following order:
+
+1. Web identity provider credentials via STS if `role_arn` and
+   `web_identity_token_file` are specified
+1. Assumed credentials via STS if `role_arn` attribute is specified (source
+   credentials are evaluated from subsequent rules)
+1. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
+1. Shared profile from `profile` attribute
+1. [Environment Variables](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables)
+1. [Shared Credentials](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file)
+1. [EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)
+
+If you are using credentials from a web identity provider, you can specify the
+session name using `role_session_name`. If left empty, the current timestamp
+will be used.
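+
+As a minimal sketch, authenticating via an assumed role through a web identity
+provider (the role ARN and token path are hypothetical):
+
+```toml
+[[outputs.kinesis]]
+  region = "us-east-1"
+  streamname = "telegraf"
+  data_format = "influx"
+  role_arn = "arn:aws:iam::123456789012:role/telegraf-writer"
+  web_identity_token_file = "/var/run/secrets/eks.amazonaws.com/serviceaccount/token"
+  role_session_name = "telegraf"
+```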
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for the AWS Kinesis output.
+[[outputs.kinesis]]
+  ## Amazon REGION of kinesis endpoint.
+  region = "ap-southeast-2"
+
+  ## Amazon Credentials
+  ## Credentials are loaded in the following order
+  ## 1) Web identity provider credentials via STS if role_arn and web_identity_token_file are specified
+  ## 2) Assumed credentials via STS if role_arn is specified
+  ## 3) explicit credentials from 'access_key' and 'secret_key'
+  ## 4) shared profile from 'profile'
+  ## 5) environment variables
+  ## 6) shared credentials file
+  ## 7) EC2 Instance Profile
+  #access_key = ""
+  #secret_key = ""
+  #token = ""
+  #role_arn = ""
+  #web_identity_token_file = ""
+  #role_session_name = ""
+  #profile = ""
+  #shared_credential_file = ""
+
+  ## Endpoint to make request against, the correct endpoint is automatically
+  ## determined and this option should only be set if you wish to override the
+  ## default.
+  ##   ex: endpoint_url = "http://localhost:8000"
+  # endpoint_url = ""
+
+  ## Kinesis StreamName must exist prior to starting telegraf.
+  streamname = "StreamName"
+
+  ## Data format to output.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
+  data_format = "influx"
+
+  ## debug will show upstream aws messages.
+  debug = false
+
+  ## NOTE: Due to the way TOML is parsed, tables must be at the END of the
+  ## plugin definition, otherwise additional config options are read as part of
+  ## the table
+
+  ## The partition key can be calculated using one of several methods:
+  ##
+  ## Use a static value for all writes:
+  #  [outputs.kinesis.partition]
+  #    method = "static"
+  #    key = "howdy"
+  #
+  ## Use a random partition key on each write:
+  #  [outputs.kinesis.partition]
+  #    method = "random"
+  #
+  ## Use the measurement name as the partition key:
+  #  [outputs.kinesis.partition]
+  #    method = "measurement"
+  #
+  ## Use the value of a tag for all writes, if the tag is not set the empty
+  ## default option will be used. When no default, defaults to "telegraf"
+  #  [outputs.kinesis.partition]
+  #    method = "tag"
+  #    key = "host"
+  #    default = "mykey"
+```
+
+For this output plugin to function correctly, the following variables must be
+configured.
+
+* region
+* streamname
+
+### region
+
+The region is the Amazon region that you wish to connect to. Examples include
+but are not limited to:
+
+* us-west-1
+* us-west-2
+* us-east-1
+* ap-southeast-1
+* ap-southeast-2
+
+### streamname
+
+The streamname is used by the plugin to ensure that data is sent to the correct
+Kinesis stream. It is important to note that the stream *MUST* be pre-configured
+for this plugin to function correctly. If the stream does not exist, telegraf
+will exit with an exit code of 1.
+
+### partitionkey [DEPRECATED]
+
+This is used to group data within a stream. Currently this plugin only supports
+a single partitionkey.  Manually configuring different hosts, or groups of hosts
+with manually selected partitionkeys might be a workable solution to scale out.
+
+### use_random_partitionkey [DEPRECATED]
+
+When true, a random UUID will be generated and used as the partitionkey when
+sending data to Kinesis. This allows data to evenly spread across multiple
+shards in the stream. Due to using a random partitionKey there can be no
+guarantee of ordering when consuming the data off the shards.  If true then the
+partitionkey option will be ignored.
+
+### partition
+
+This is used to group data within a stream. Currently four methods are
+supported: random, static, tag, or measurement.
+
+#### random
+
+This will generate a UUIDv4 for each metric to spread them across shards.  Any
+guarantee of ordering is lost with this method.
+
+#### static
+
+This uses a static string as a partitionkey.  All metrics will be mapped to the
+same shard which may limit throughput.
+
+#### tag
+
+This will take the value of the specified tag from each metric as the
+partitionKey. If the tag is not found, the `default` value will be used, or
+`telegraf` if unspecified.
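+
+As an illustration, partitioning by the `host` tag with a fallback (values are
+illustrative):
+
+```toml
+[[outputs.kinesis]]
+  region = "us-east-1"
+  streamname = "telegraf"
+  data_format = "influx"
+  ## Metrics without a `host` tag fall back to the default key
+  [outputs.kinesis.partition]
+    method = "tag"
+    key = "host"
+    default = "unknown-host"
+```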
+
+#### measurement
+
+This will use the measurement's name as the partitionKey.
+
+### format
+
+The format configuration value allows changing the format of the point as
+written to Kinesis. Right now there are two supported formats: string and
+custom.
+
+#### string
+
+String is defined using the default Point.String() value and translated to
+[]byte for the Kinesis stream.
+
+#### custom
+
+Custom is a string defined by a number of values in the FormatMetric() function.
diff --git a/content/telegraf/v1/output-plugins/librato/_index.md b/content/telegraf/v1/output-plugins/librato/_index.md
new file mode 100644
index 000000000..a66227212
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/librato/_index.md
@@ -0,0 +1,67 @@
+---
+description: "Telegraf plugin for sending metrics to Librato"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Librato
+    identifier: output-librato
+tags: [Librato, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Librato Output Plugin
+
+This plugin writes to the [Librato Metrics API](http://dev.librato.com/v1/metrics#metrics) and requires an
+`api_user` and `api_token` which can be obtained [here](https://metrics.librato.com/account/api_tokens) for the account.
+
+The `source_tag` option in the Configuration file is used to send contextual
+information from Point Tags to the API.
+
+If the point value being sent cannot be converted to a float64, the metric is
+skipped.
+
+Currently, the plugin does not send any associated Point Tags.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `api_user` and
+`api_token` options.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for Librato API to send metrics to.
+[[outputs.librato]]
+  ## Librato API Docs
+  ## http://dev.librato.com/v1/metrics-authentication
+  ## Librato API user
+  api_user = "telegraf@influxdb.com" # required.
+  ## Librato API token
+  api_token = "my-secret-token" # required.
+  ## Debug
+  # debug = false
+  ## Connection timeout.
+  # timeout = "5s"
+  ## Output source Template (same as graphite buckets)
+  ## see https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md#graphite
+  ## This template is used in librato's source (not metric's name)
+  template = "host"
+```
diff --git a/content/telegraf/v1/output-plugins/logzio/_index.md b/content/telegraf/v1/output-plugins/logzio/_index.md
new file mode 100644
index 000000000..cc38a1e52
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/logzio/_index.md
@@ -0,0 +1,64 @@
+---
+description: "Telegraf plugin for sending metrics to Logz.io"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Logz.io
+    identifier: output-logzio
+tags: [Logz.io, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Logz.io Output Plugin
+
+This plugin sends metrics to Logz.io over HTTPS.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `token` option.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# A plugin that can send metrics over HTTPS to Logz.io
+[[outputs.logzio]]
+  ## Connection timeout, defaults to "5s" if not set.
+  # timeout = "5s"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+
+  ## Logz.io account token
+  token = "your logz.io token" # required
+
+  ## Use your listener URL for your Logz.io account region.
+  # url = "https://listener.logz.io:8071"
+```
+
+### Required parameters
+
+* `token`: Your Logz.io token, which can be found under "settings" in your account.
+
+### Optional parameters
+
+* `check_disk_space`: Set to true if Logz.io sender checks the disk space before adding metrics to the disk queue.
+* `disk_threshold`: If the queue_dir space crosses this threshold (in % of disk usage), the plugin will start dropping logs.
+* `drain_duration`: Time to sleep between sending attempts.
+* `queue_dir`: Metrics disk path. All the unsent metrics are saved to the disk in this location.
+* `url`: Logz.io listener URL.
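+
+A minimal sketch combining the required token with a region-specific listener
+(the EU listener URL is illustrative; `${LOGZIO_TOKEN}` assumes the token is
+exported as an environment variable):
+
+```toml
+[[outputs.logzio]]
+  ## Telegraf substitutes ${VAR} from the environment at load time
+  token = "${LOGZIO_TOKEN}"
+  url = "https://listener-eu.logz.io:8071"
+```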
diff --git a/content/telegraf/v1/output-plugins/loki/_index.md b/content/telegraf/v1/output-plugins/loki/_index.md
new file mode 100644
index 000000000..3490e6f73
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/loki/_index.md
@@ -0,0 +1,78 @@
+---
+description: "Telegraf plugin for sending metrics to Loki"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Loki
+    identifier: output-loki
+tags: [Loki, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Loki Output Plugin
+
+This plugin sends logs to Loki, using the metric name and tags as labels. The
+log line will contain all fields in `key="value"` format, which is easily
+parsed with the `logfmt` parser in Loki.
+
+Logs within each stream are sorted by timestamp before being sent to Loki.
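+
+For illustration, a metric such as `cpu,host=web01 usage_idle=92.5,usage_user=4.1`
+would, assuming default settings, become a stream labeled with the metric name
+and the `host` tag, carrying a logfmt-style line similar to:
+
+```text
+usage_idle="92.5" usage_user="4.1"
+```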
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `username` and
+`password` options.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# A plugin that can transmit logs to Loki
+[[outputs.loki]]
+  ## The domain of Loki
+  domain = "https://loki.domain.tld"
+
+  ## Endpoint to write api
+  # endpoint = "/loki/api/v1/push"
+
+  ## Connection timeout, defaults to "5s" if not set.
+  # timeout = "5s"
+
+  ## Basic auth credential
+  # username = "loki"
+  # password = "pass"
+
+  ## Additional HTTP headers
+  # http_headers = {"X-Scope-OrgID" = "1"}
+
+  ## If the request must be gzip encoded
+  # gzip_request = false
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+
+  ## Sanitize Tag Names
+  ## If true, all tag names will have invalid characters replaced with
+  ## underscores that do not match the regex: ^[a-zA-Z_:][a-zA-Z0-9_:]*
+  # sanitize_label_names = false
+
+  ## Metric Name Label
+  ## Label to use for the metric name when sending metrics. If set to an
+  ## empty string, this will not add the label. This is NOT suggested as there
+  ## is no way to differentiate between multiple metrics.
+  # metric_name_label = "__name"
+```
diff --git a/content/telegraf/v1/output-plugins/mongodb/_index.md b/content/telegraf/v1/output-plugins/mongodb/_index.md
new file mode 100644
index 000000000..d07e4d3b6
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/mongodb/_index.md
@@ -0,0 +1,74 @@
+---
+description: "Telegraf plugin for sending metrics to MongoDB"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: MongoDB
+    identifier: output-mongodb
+tags: [MongoDB, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# MongoDB Output Plugin
+
+This plugin sends metrics to MongoDB and automatically creates the collections
+as time series collections when they don't already exist. **Please note:**
+this requires MongoDB 5.0+ for time series collections.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `username` and
+`password` options.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# A plugin that can transmit logs to mongodb
+[[outputs.mongodb]]
+  # connection string examples for mongodb
+  dsn = "mongodb://localhost:27017"
+  # dsn = "mongodb://mongod1:27017,mongod2:27017,mongod3:27017/admin&replicaSet=myReplSet&w=1"
+
+  # overrides serverSelectionTimeoutMS in dsn if set
+  # timeout = "30s"
+
+  # default authentication, optional
+  # authentication = "NONE"
+
+  # for SCRAM-SHA-256 authentication
+  # authentication = "SCRAM"
+  # username = "root"
+  # password = "***"
+
+  # for x509 certificate authentication
+  # authentication = "X509"
+  # tls_ca = "ca.pem"
+  # tls_key = "client.pem"
+  # # tls_key_pwd = "changeme" # required for encrypted tls_key
+  # insecure_skip_verify = false
+
+  # database to store measurements and time series collections
+  # database = "telegraf"
+
+  # granularity can be seconds, minutes, or hours.
+  # configuring this value will be based on your input collection frequency.
+  # see https://docs.mongodb.com/manual/core/timeseries-collections/#create-a-time-series-collection
+  # granularity = "seconds"
+
+  # optionally set a TTL to automatically expire documents from the measurement collections.
+  # ttl = "360h"
+```
diff --git a/content/telegraf/v1/output-plugins/mqtt/_index.md b/content/telegraf/v1/output-plugins/mqtt/_index.md
new file mode 100644
index 000000000..ff3d4641c
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/mqtt/_index.md
@@ -0,0 +1,355 @@
+---
+description: "Telegraf plugin for sending metrics to MQTT Producer"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: MQTT Producer
+    identifier: output-mqtt
+tags: [MQTT Producer, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# MQTT Producer Output Plugin
+
+This plugin writes to an [MQTT Broker](http://mqtt.org/) acting as an MQTT
+producer. It supports MQTT protocols `3.1.1` and `5`.
+
+## Mosquitto v2.0.12+ and `identifier rejected`
+
+In v2.0.12+ of the mosquitto MQTT server, there is a
+[bug](https://github.com/eclipse/mosquitto/issues/2117) which requires the
+`keep_alive` value to be set non-zero in your telegraf configuration. If not
+set, the server will return with `identifier rejected`.
+
+As a reference, `eclipse/paho.golang` sets the `keep_alive` to 30.
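+
+A minimal sketch of the workaround, setting a non-zero `keep_alive`:
+
+```toml
+[[outputs.mqtt]]
+  servers = ["localhost:1883"]
+  topic = "telegraf/{{ .Hostname }}/{{ .PluginName }}"
+  ## Non-zero to avoid `identifier rejected` on Mosquitto v2.0.12+
+  keep_alive = 30
+```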
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or to create aliases and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `username` and
+`password` options.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for MQTT server to send metrics to
+[[outputs.mqtt]]
+  ## MQTT Brokers
+  ## The list of brokers should only include the hostname or IP address and the
+  ## port to the broker. This should follow the format `[{scheme}://]{host}:{port}`. For
+  ## example, `localhost:1883` or `mqtt://localhost:1883`.
+  ## Scheme can be any of the following: tcp://, mqtt://, tls://, mqtts://
+  ## non-TLS and TLS servers can not be mix-and-matched.
+  servers = ["localhost:1883", ] # or ["mqtts://tls.example.com:1883"]
+
+  ## Protocol can be `3.1.1` or `5`. Default is `3.1.1`
+  # protocol = "3.1.1"
+
+  ## MQTT Topic for Producer Messages
+  ## MQTT outputs send metrics to this topic format:
+  ## {{ .TopicPrefix }}/{{ .Hostname }}/{{ .PluginName }}/{{ .Tag "tag_key" }}
+  ## (e.g. prefix/web01.example.com/mem/some_tag_value)
+  ## Each path segment accepts either a template placeholder, an environment variable, or a tag key
+  ## of the form `{{.Tag "tag_key_name"}}`. Empty path elements as well as special MQTT characters
+  ## (such as `+` or `#`) are invalid to form the topic name and will lead to an error.
+  ## If a tag is missing from the metric, that path segment is omitted from the final topic.
+  topic = "telegraf/{{ .Hostname }}/{{ .PluginName }}"
+
+  ## QoS policy for messages
+  ## The mqtt QoS policy for sending messages.
+  ## See https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.0.0/com.ibm.mq.dev.doc/q029090_.htm
+  ##   0 = at most once
+  ##   1 = at least once
+  ##   2 = exactly once
+  # qos = 2
+
+  ## Keep Alive
+  ## Defines the maximum length of time that the broker and client may not
+  ## communicate. Defaults to 0 which turns the feature off.
+  ##
+  ## For mosquitto v2.0.12 and later there is a bug
+  ## (see https://github.com/eclipse/mosquitto/issues/2117) which requires
+  ## this to be non-zero. As a reference, eclipse/paho.mqtt.golang defaults to 30.
+  # keep_alive = 0
+
+  ## Username and password to connect to the MQTT server.
+  # username = "telegraf"
+  # password = "metricsmetricsmetricsmetrics"
+
+  ## client ID
+  ## The unique client ID to connect to the MQTT server. If this parameter
+  ## is not set, a random ID is generated.
+  # client_id = ""
+
+  ## Timeout for write operations. default: 5s
+  # timeout = "5s"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## When true, metrics will be sent in one MQTT message per flush. Otherwise,
+  ## metrics are written one metric per MQTT message.
+  ## DEPRECATED: Use layout option instead
+  # batch = false
+
+  ## When true, metrics will have the RETAIN flag set, making the broker cache
+  ## entries until someone actually reads them
+  # retain = false
+
+  ## Client trace messages
+  ## When set to true, and debug mode enabled in the agent settings, the MQTT
+  ## client's messages are included in telegraf logs. These messages are very
+  ## noisy, but essential for debugging issues.
+  # client_trace = false
+
+  ## Layout of the topics published.
+  ## The following choices are available:
+  ##   non-batch -- send individual messages, one for each metric
+  ##   batch     -- send all metrics as a single message per MQTT topic
+  ## NOTE: The following options will ignore the 'data_format' option and send single values
+  ##   field     -- send individual messages for each field, appending its name to the metric topic
+  ##   homie-v4  -- send metrics with fields and tags according to the 4.0.0 specs
+  ##                see https://homieiot.github.io/specification/
+  # layout = "non-batch"
+
+  ## HOMIE specific settings
+  ## The following options provide templates for setting the device name
+  ## and the node-ID for the topics. Both options are MANDATORY and can contain
+  ## {{ .PluginName }} (metric name), {{ .Tag "key"}} (tag reference to 'key')
+  ## or constant strings. The templates MAY NOT contain slashes!
+  # homie_device_name = ""
+  # homie_node_id = ""
+
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
+  data_format = "influx"
+
+  ## NOTE: Due to the way TOML is parsed, tables must be at the END of the
+  ## plugin definition, otherwise additional config options are read as part of
+  ## the table
+
+  ## Optional MQTT 5 publish properties
+  ## These settings only apply if the "protocol" property is set to 5. This must
+  ## be defined at the end of the plugin settings, otherwise TOML will assume
+  ## anything else is part of this table. For more details on publish properties
+  ## see the spec:
+  ## https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901109
+  # [outputs.mqtt.v5]
+  #   content_type = ""
+  #   response_topic = ""
+  #   message_expiry = "0s"
+  #   topic_alias = 0
+  # [outputs.mqtt.v5.user_properties]
+  #   "key1" = "value 1"
+  #   "key2" = "value 2"
+```
+
+### `field` layout
+
+This layout publishes one topic per metric __field__, containing only the
+value as a string. This means the `data_format` option is ignored.
+
+For example writing the metrics
+
+```text
+modbus,location=main\ building,source=device\ 1,status=ok,type=Machine\ A temperature=21.4,serial\ number="324nlk234r5u9834t",working\ hours=123i,supplied=true 1676522982000000000
+modbus,location=main\ building,source=device\ 2,status=offline,type=Machine\ B temperature=25.0,supplied=true 1676522982000000000
+```
+
+with configuration
+
+```toml
+[[outputs.mqtt]]
+  topic = 'telegraf/{{ .PluginName }}/{{ .Tag "source" }}'
+  layout = "field"
+  ...
+```
+
+will result in the following topics and values
+
+```text
+telegraf/modbus/device 1/temperature    21.4
+telegraf/modbus/device 1/serial number  324nlk234r5u9834t
+telegraf/modbus/device 1/supplied       true
+telegraf/modbus/device 1/working hours  123
+telegraf/modbus/device 2/temperature    25
+telegraf/modbus/device 2/supplied       true
+```
+
+__NOTE__: Only fields are output; tags and the timestamp are omitted. To
+also output those, convert them to fields first.
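+
+A hedged sketch using the converter processor to turn the `status` tag from
+the example above into a field, so the `field` layout publishes it as its own
+topic:
+
+```toml
+[[processors.converter]]
+  [processors.converter.tags]
+    string = ["status"]
+```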
+
+### `homie-v4` layout
+
+This layout publishes metrics according to the
+[Homie v4.0 specification](https://homieiot.github.io/specification/spec-core-v4_0_0). Here, the `topic` template is
+used to specify the `device-id` path. The __mandatory__ options
+`homie_device_name` and `homie_node_id` specify the content of the device's
+`$name` topic and a template for the `node-id` part of the topic,
+respectively. Both options can contain [Go templates](https://pkg.go.dev/text/template) similar to `topic`,
+with `{{ .PluginName }}` referencing the metric name and `{{ .Tag "key"}}`
+referencing the tag with the name `key`.
+
+For example writing the metrics
+
+```text
+modbus,source=device\ 1,location=main\ building,type=Machine\ A,status=ok temperature=21.4,serial\ number="324nlk234r5u9834t",working\ hours=123i,supplied=true 1676522982000000000
+modbus,source=device\ 2,location=main\ building,type=Machine\ B,status=offline supplied=false 1676522982000000000
+modbus,source=device\ 2,location=main\ building,type=Machine\ B,status=online supplied=true,Throughput=12345i,Load\ [%]=81.2,account\ no="T3L3GrAf",Temperature=25.38,Voltage=24.1,Current=100 1676542982000000000
+```
+
+with configuration
+
+```toml
+[[outputs.mqtt]]
+  topic = 'telegraf/{{ .PluginName }}'
+  layout = "homie-v4"
+
+  homie_device_name = '{{.PluginName}} plugin'
+  homie_node_id = '{{.Tag "source"}}'
+  ...
+```
+
+will result in the following topics and values
+
+```text
+telegraf/modbus/$homie                            4.0
+telegraf/modbus/$name                             modbus plugin
+telegraf/modbus/$state                            ready
+telegraf/modbus/$nodes                            device-1
+
+telegraf/modbus/device-1/$name                    device 1
+telegraf/modbus/device-1/$properties              location,serial-number,source,status,supplied,temperature,type,working-hours
+
+telegraf/modbus/device-1/location                 main building
+telegraf/modbus/device-1/location/$name           location
+telegraf/modbus/device-1/location/$datatype       string
+telegraf/modbus/device-1/status                   ok
+telegraf/modbus/device-1/status/$name             status
+telegraf/modbus/device-1/status/$datatype         string
+telegraf/modbus/device-1/type                     Machine A
+telegraf/modbus/device-1/type/$name               type
+telegraf/modbus/device-1/type/$datatype           string
+telegraf/modbus/device-1/source                   device 1
+telegraf/modbus/device-1/source/$name             source
+telegraf/modbus/device-1/source/$datatype         string
+telegraf/modbus/device-1/temperature              21.4
+telegraf/modbus/device-1/temperature/$name        temperature
+telegraf/modbus/device-1/temperature/$datatype    float
+telegraf/modbus/device-1/serial-number            324nlk234r5u9834t
+telegraf/modbus/device-1/serial-number/$name      serial number
+telegraf/modbus/device-1/serial-number/$datatype  string
+telegraf/modbus/device-1/working-hours            123
+telegraf/modbus/device-1/working-hours/$name      working hours
+telegraf/modbus/device-1/working-hours/$datatype  integer
+telegraf/modbus/device-1/supplied                 true
+telegraf/modbus/device-1/supplied/$name           supplied
+telegraf/modbus/device-1/supplied/$datatype       boolean
+
+telegraf/modbus/$nodes                            device-1,device-2
+
+telegraf/modbus/device-2/$name                    device 2
+telegraf/modbus/device-2/$properties              location,source,status,supplied,type
+
+telegraf/modbus/device-2/location                 main building
+telegraf/modbus/device-2/location/$name           location
+telegraf/modbus/device-2/location/$datatype       string
+telegraf/modbus/device-2/status                   offline
+telegraf/modbus/device-2/status/$name             status
+telegraf/modbus/device-2/status/$datatype         string
+telegraf/modbus/device-2/type                     Machine B
+telegraf/modbus/device-2/type/$name               type
+telegraf/modbus/device-2/type/$datatype           string
+telegraf/modbus/device-2/source                   device 2
+telegraf/modbus/device-2/source/$name             source
+telegraf/modbus/device-2/source/$datatype         string
+telegraf/modbus/device-2/supplied                 false
+telegraf/modbus/device-2/supplied/$name           supplied
+telegraf/modbus/device-2/supplied/$datatype       boolean
+
+telegraf/modbus/device-2/$properties              account-no,current,load,location,source,status,supplied,temperature,throughput,type,voltage
+
+telegraf/modbus/device-2/location                 main building
+telegraf/modbus/device-2/location/$name           location
+telegraf/modbus/device-2/location/$datatype       string
+telegraf/modbus/device-2/status                   online
+telegraf/modbus/device-2/status/$name             status
+telegraf/modbus/device-2/status/$datatype         string
+telegraf/modbus/device-2/type                     Machine B
+telegraf/modbus/device-2/type/$name               type
+telegraf/modbus/device-2/type/$datatype           string
+telegraf/modbus/device-2/source                   device 2
+telegraf/modbus/device-2/source/$name             source
+telegraf/modbus/device-2/source/$datatype         string
+telegraf/modbus/device-2/temperature              25.38
+telegraf/modbus/device-2/temperature/$name        Temperature
+telegraf/modbus/device-2/temperature/$datatype    float
+telegraf/modbus/device-2/voltage                  24.1
+telegraf/modbus/device-2/voltage/$name            Voltage
+telegraf/modbus/device-2/voltage/$datatype        float
+telegraf/modbus/device-2/current                  100
+telegraf/modbus/device-2/current/$name            Current
+telegraf/modbus/device-2/current/$datatype        float
+telegraf/modbus/device-2/throughput               12345
+telegraf/modbus/device-2/throughput/$name         Throughput
+telegraf/modbus/device-2/throughput/$datatype     integer
+telegraf/modbus/device-2/load                     81.2
+telegraf/modbus/device-2/load/$name               Load [%]
+telegraf/modbus/device-2/load/$datatype           float
+telegraf/modbus/device-2/account-no               T3L3GrAf
+telegraf/modbus/device-2/account-no/$name         account no
+telegraf/modbus/device-2/account-no/$datatype     string
+telegraf/modbus/device-2/supplied                 true
+telegraf/modbus/device-2/supplied/$name           supplied
+telegraf/modbus/device-2/supplied/$datatype       boolean
+```
+
+#### Important notes and limitations
+
+Note that the __"devices" and "nodes" change dynamically__ in Telegraf, as the
+metrics and their structure are not known a priori. As a consequence, the
+content of both the `$nodes` and `$properties` topics changes as new
+`device-id`s, `node-id`s and `properties` (i.e. tags and fields) appear. A best
+effort is made to limit the number of changes by keeping a superset of all
+devices and nodes seen; however, especially during startup, those topics will
+change more often. Both `topic` and `homie_node_id` should be chosen so that
+metrics with identical structure are grouped together!
+
+Furthermore, __lifecycle management of devices is very limited__! Due to the
+dynamic nature of Telegraf, devices will only ever be in the `ready` state.
+Due to limitations in the MQTT client library, it is not possible to set a
+"will" dynamically. Consequently, devices are only marked `lost` when Telegraf
+exits normally, and the state might not change on abnormal aborts.
+
+Note that __all field and tag names are automatically converted__ to adhere to
+the [Homie topic ID specification](https://homieiot.github.io/specification/#topic-ids). In that process, the
+names are converted to lower-case, and forbidden character sequences (anything
+that is not a lower-case character, digit or hyphen) are replaced by a hyphen.
+Finally, leading and trailing hyphens are removed.
+This is important, as there is a __risk of name collisions__ between fields and
+tags of the same node, especially after the conversion to IDs. Please __make
+sure to avoid those collisions__, as otherwise property topics will be sent
+multiple times for the colliding items.
+
diff --git a/content/telegraf/v1/output-plugins/nats/_index.md b/content/telegraf/v1/output-plugins/nats/_index.md
new file mode 100644
index 000000000..9320db7e9
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/nats/_index.md
@@ -0,0 +1,99 @@
+---
+description: "Telegraf plugin for sending metrics to NATS"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: NATS
+    identifier: output-nats
+tags: [NATS, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# NATS Output Plugin
+
+This plugin writes metrics to one or more specified NATS server instances.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `username` and
+`password` options.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Send telegraf measurements to NATS
+[[outputs.nats]]
+  ## URLs of NATS servers
+  servers = ["nats://localhost:4222"]
+
+  ## Optional client name
+  # name = ""
+
+  ## Optional credentials
+  # username = ""
+  # password = ""
+
+  ## Optional NATS 2.0 and NATS NGS compatible user credentials
+  # credentials = "/etc/telegraf/nats.creds"
+
+  ## NATS subject for producer messages
+  ## For jetstream this is also the subject where messages will be published
+  subject = "telegraf"
+
+  ## Use Transport Layer Security
+  # secure = false
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## Data format to output.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
+  data_format = "influx"
+
+  ## JetStream specific configuration. If set, the plugin will use a JetStream
+  ## context. Since this is a TOML table, it must be present at the end of the
+  ## plugin section; alternatively, use the inline table format.
+  # [outputs.nats.jetstream]
+    ## Name of the stream, required when using JetStream. Telegraf will
+    ## use the union of the subject above and the subjects array below.
+    # name = ""
+    # subjects = []
+
+    ## Full JetStream stream creation config; see https://docs.nats.io/nats-concepts/jetstream/streams
+    # retention = "limits"
+    # max_consumers = -1
+    # max_msgs_per_subject = -1
+    # max_msgs = -1
+    # max_bytes = -1
+    # max_age = 0
+    # max_msg_size = -1
+    # storage = "file"
+    # discard = "old"
+    # num_replicas = 1
+    # duplicate_window = 120000000000
+    # sealed = false
+    # deny_delete = false
+    # deny_purge = false
+    # allow_rollup_hdrs = false
+    # allow_direct = true
+    # mirror_direct = false
+```
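+
+As an alternative to the table at the end of the section, the JetStream
+options can be written inline; a sketch with an example stream name:
+
+```toml
+[[outputs.nats]]
+  servers = ["nats://localhost:4222"]
+  subject = "telegraf"
+  data_format = "influx"
+  jetstream = {name = "telegraf-stream"}
+```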
diff --git a/content/telegraf/v1/output-plugins/nebius_cloud_monitoring/_index.md b/content/telegraf/v1/output-plugins/nebius_cloud_monitoring/_index.md
new file mode 100644
index 000000000..137b5ebb6
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/nebius_cloud_monitoring/_index.md
@@ -0,0 +1,100 @@
+---
+description: "Telegraf plugin for sending metrics to Nebius Cloud Monitoring"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Nebius Cloud Monitoring
+    identifier: output-nebius_cloud_monitoring
+tags: [Nebius Cloud Monitoring, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Nebius Cloud Monitoring Output Plugin
+
+This plugin will send custom metrics to
+[Nebius Cloud Monitoring](https://nebius.com/il/services/monitoring).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Send aggregated metrics to Nebius.Cloud Monitoring
+[[outputs.nebius_cloud_monitoring]]
+  ## Timeout for HTTP writes.
+  # timeout = "20s"
+
+  ## Nebius.Cloud monitoring API endpoint. Normally should not be changed
+  # endpoint = "https://monitoring.api.il.nebius.cloud/monitoring/v2/data/write"
+```
+
+### Authentication
+
+This plugin currently only supports Compute metadata-based authentication in
+the Nebius Cloud Platform.
+
+When the plugin runs inside a Compute instance, it obtains the IAM token and
+folder ID from the instance metadata service, using the [Google Cloud notation].
+This internal metadata endpoint is only accessible to VMs inside the cloud.
+
+[Google Cloud notation]: https://nebius.com/il/docs/compute/operations/vm-info/get-info#gce-metadata
+
+### Reserved Labels
+
+The Nebius Monitoring backend receives metrics in JSON format:
+
+```json
+{
+  "name": "metric_name",
+  "labels": {
+    "key": "value",
+    "foo": "bar"
+  },
+  "ts": "2023-06-06T11:10:50Z",
+  "value": 0
+}
+```
+
+However, a label key cannot be `name`, because that key is reserved for the
+metric name.
+
+So this payload:
+
+```json
+{
+  "name": "systemd_units_load_code",
+  "labels": {
+    "active": "active",
+    "host": "vm",
+    "load": "loaded",
+    "name": "accounts-daemon.service",
+    "sub": "running"
+  },
+  "ts": "2023-06-06T11:10:50Z",
+  "value": 0
+}
+```
+
+will be replaced with:
+
+```json
+{
+  "name": "systemd_units_load_code",
+  "labels": {
+    "active": "active",
+    "host": "vm",
+    "load": "loaded",
+    "_name": "accounts-daemon.service",
+    "sub": "running"
+  },
+  "ts": "2023-06-06T11:10:50Z",
+  "value": 0
+}
+```
diff --git a/content/telegraf/v1/output-plugins/newrelic/_index.md b/content/telegraf/v1/output-plugins/newrelic/_index.md
new file mode 100644
index 000000000..1c909d45c
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/newrelic/_index.md
@@ -0,0 +1,59 @@
+---
+description: "Telegraf plugin for sending metrics to New Relic"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: New Relic
+    identifier: output-newrelic
+tags: [New Relic, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# New Relic Output Plugin
+
+This plugin writes to New Relic Insights using the [Metrics API](https://docs.newrelic.com/docs/data-ingest-apis/get-data-new-relic/metric-api/introduction-metric-api).
+
+To use this plugin you must first obtain an [Insights API Key](https://docs.newrelic.com/docs/apis/get-started/intro-apis/types-new-relic-api-keys#user-api-key).
+
+Telegraf minimum version: Telegraf 1.15.0
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Send metrics to New Relic metrics endpoint
+[[outputs.newrelic]]
+  ## The 'insights_key' parameter requires a NR license key.
+  ## New Relic recommends you create one
+  ## with a convenient name such as TELEGRAF_INSERT_KEY.
+  ## reference: https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/#ingest-license-key
+  # insights_key = "New Relic License Key Here"
+
+  ## Prefix to add to metric names for easy identification.
+  ## This is very useful if your metric names are ambiguous.
+  # metric_prefix = ""
+
+  ## Timeout for writes to the New Relic API.
+  # timeout = "15s"
+
+  ## HTTP Proxy override. If unset use values from the standard
+  ## proxy environment variables to determine proxy, if any.
+  # http_proxy = "http://corporate.proxy:3128"
+
+  ## Metric URL override to enable geographic location endpoints.
+  ## If not set, the default metric API endpoint is used.
+  # metric_url = "https://metric-api.newrelic.com/metric/v1"
+```
+
diff --git a/content/telegraf/v1/output-plugins/nsq/_index.md b/content/telegraf/v1/output-plugins/nsq/_index.md
new file mode 100644
index 000000000..b945f619f
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/nsq/_index.md
@@ -0,0 +1,42 @@
+---
+description: "Telegraf plugin for sending metrics to NSQ"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: NSQ
+    identifier: output-nsq
+tags: [NSQ, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# NSQ Output Plugin
+
+This plugin writes to a specified NSQD instance, usually local to the
+producer. It requires a `server` name and a `topic` name.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Send telegraf measurements to NSQD
+[[outputs.nsq]]
+  ## Location of nsqd instance listening on TCP
+  server = "localhost:4150"
+  ## NSQ topic for producer messages
+  topic = "telegraf"
+
+  ## Data format to output.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
+  data_format = "influx"
+```
diff --git a/content/telegraf/v1/output-plugins/opensearch/_index.md b/content/telegraf/v1/output-plugins/opensearch/_index.md
new file mode 100644
index 000000000..ca80892e9
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/opensearch/_index.md
@@ -0,0 +1,373 @@
+---
+description: "Telegraf plugin for sending metrics to OpenSearch"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: OpenSearch
+    identifier: output-opensearch
+tags: [OpenSearch, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# OpenSearch Output Plugin
+
+This plugin writes to [OpenSearch](https://opensearch.org/) via HTTP.
+
+It supports OpenSearch releases 1.x and 2.x. Future compatibility with 1.x is
+not guaranteed; development will instead focus on 2.x support. Consider using
+the existing Elasticsearch plugin for 1.x.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for OpenSearch to send metrics to.
+[[outputs.opensearch]]
+  ## URLs
+  ## The full HTTP endpoint URL for your OpenSearch instance. Multiple URLs can
+  ## be specified as part of the same cluster, but only one URL is used to
+  ## write during each interval.
+  urls = ["http://node1.os.example.com:9200"]
+
+  ## Index Name
+  ## Target index name for metrics (OpenSearch will create it if it does not exist).
+  ## This is a Golang template (see https://pkg.go.dev/text/template).
+  ## You can also specify the metric name (`{{.Name}}`), a tag value
+  ## (`{{.Tag "tag_name"}}`), a field value (`{{.Field "field_name"}}`), or
+  ## the timestamp (`{{.Time.Format "xxxxxxxxx"}}`).
+  ## If a tag does not exist, its value defaults to the empty string "".
+  ## For example: "telegraf-{{.Time.Format \"2006-01-02\"}}-{{.Tag \"host\"}}" would set it to telegraf-2023-07-27-HostName
+  index_name = ""
+
+  ## Timeout
+  ## OpenSearch client timeout
+  # timeout = "5s"
+
+  ## Sniffer
+  ## Set to true to ask OpenSearch for a list of all cluster nodes,
+  ## so it is not necessary to list all nodes in the urls config option
+  # enable_sniffer = false
+
+  ## GZIP Compression
+  ## Set to true to enable gzip compression
+  # enable_gzip = false
+
+  ## Health Check Interval
+  ## Set the interval to check if the OpenSearch nodes are available
+  ## Setting to "0s" will disable the health check (not recommended in production)
+  # health_check_interval = "10s"
+
+  ## Set the timeout for periodic health checks.
+  # health_check_timeout = "1s"
+  ## HTTP basic authentication details.
+  # username = ""
+  # password = ""
+  ## HTTP bearer token authentication details
+  # auth_bearer_token = ""
+
+  ## Optional TLS Config
+  ## Set to true/false to enforce TLS being enabled/disabled. If not set,
+  ## enable TLS only if any of the other options are specified.
+  # tls_enable =
+  ## Trusted root certificates for server
+  # tls_ca = "/path/to/cafile"
+  ## Used for TLS client certificate authentication
+  # tls_cert = "/path/to/certfile"
+  ## Used for TLS client certificate authentication
+  # tls_key = "/path/to/keyfile"
+  ## Send the specified TLS server name via SNI
+  # tls_server_name = "kubernetes.example.com"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## Template Config
+  ## Manage templates
+  ## Set to true if you want telegraf to manage its index template.
+  ## If enabled it will create a recommended index template for telegraf indexes
+  # manage_template = true
+
+  ## Template Name
+  ## The template name used for telegraf indexes
+  # template_name = "telegraf"
+
+  ## Overwrite Templates
+  ## Set to true if you want telegraf to overwrite an existing template
+  # overwrite_template = false
+
+  ## Document ID
+  ## If set to true, a unique ID hash will be sent as a
+  ## sha256(concat(timestamp,measurement,series-hash)) string. This enables
+  ## resending and updating metric points, avoiding duplicate metrics with
+  ## different IDs
+  # force_document_id = false
+
+  ## Value Handling
+  ## Specifies the handling of NaN and Inf values.
+  ## This option can have the following values:
+  ##    none    -- do not modify field-values (default); will produce an error
+  ##               if NaNs or infs are encountered
+  ##    drop    -- drop fields containing NaNs or infs
+  ##    replace -- replace with the value in "float_replacement_value" (default: 0.0)
+  ##               NaNs and inf will be replaced with the given number, -inf with the negative of that number
+  # float_handling = "none"
+  # float_replacement_value = 0.0
+
+  ## Pipeline Config
+  ## To use an ingest pipeline, set this to the name of the pipeline you want to use.
+  # use_pipeline = "my_pipeline"
+
+  ## Pipeline Name
+  ## Additionally, you can specify a tag name using the notation (`{{.Tag "tag_name"}}`)
+  ## which will be used as the pipeline name (e.g. "{{.Tag "os_pipeline"}}").
+  ## If the tag does not exist, the default pipeline will be used as the pipeline.
+  ## If no default pipeline is set, no pipeline is used for the metric.
+  # default_pipeline = ""
+```
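+
+For example, a minimal sketch that resolves the ingest pipeline from a tag and
+falls back to a default pipeline (pipeline and tag names are examples):
+
+```toml
+[[outputs.opensearch]]
+  urls = ["http://node1.os.example.com:9200"]
+  index_name = "telegraf"
+  use_pipeline = '{{.Tag "os_pipeline"}}'
+  default_pipeline = "my_pipeline"
+```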
+
+### Required parameters
+
+* `urls`: A list containing the full HTTP URL of one or more nodes from your
+  OpenSearch instance.
+* `index_name`: The target index for metrics, given as a Go template that can
+  include a date format.
+
+For example: "telegraf-{{.Time.Format \"2006-01-02\"}}" would set it to
+"telegraf-2023-07-27". You can also specify the metric name (`{{ .Name }}`), a
+tag value (`{{ .Tag \"tag_name\" }}`), and a field value
+(`{{ .Field \"field_name\" }}`).
+
+If a tag does not exist, its value defaults to the empty string "".
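+
+A sketch of a daily, per-host index name (assumes a `host` tag is present):
+
+```toml
+[[outputs.opensearch]]
+  urls = ["http://node1.os.example.com:9200"]
+  index_name = 'telegraf-{{.Time.Format "2006-01-02"}}-{{.Tag "host"}}'
+```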
+
+## Permissions
+
+If you are using authentication within your OpenSearch cluster, you need to
+create an account and a role with at least the manage role in the Cluster
+Privileges category. Otherwise, your account will not be able to connect to
+your OpenSearch cluster and send metrics to it. After that, you need to add
+the "create_index" and "write" permissions to your specific index pattern.
+
+## OpenSearch indexes and templates
+
+### Indexes per time-frame
+
+This plugin can manage indexes per time-frame, as commonly done in other tools
+with OpenSearch. The timestamp of the metric collected will be used to decide
+the index destination. For more information about this usage on OpenSearch,
+check [the docs](https://opensearch.org/docs/latest/opensearch/index-templates/).
+
+### Template management
+
+Index templates are used in OpenSearch to define settings and mappings for
+the indexes and how the fields should be analyzed. For more information on how
+this works, see [the docs](https://opensearch.org/docs/latest/opensearch/index-templates/).
+
+This plugin can create a working template for use with telegraf metrics. It
+uses the OpenSearch dynamic templates feature to set proper types for the tag
+and metric fields. If the specified template already exists, it will not be
+overwritten unless you configure this plugin to do so. Thus you can customize
+this template after its creation if necessary.
+
+Example of an index template created by telegraf on OpenSearch 2.x:
+
+```json
+{
+  "telegraf-2022.10.02" : {
+    "aliases" : { },
+    "mappings" : {
+      "properties" : {
+        "@timestamp" : {
+          "type" : "date"
+        },
+        "disk" : {
+          "properties" : {
+            "free" : {
+              "type" : "long"
+            },
+            "inodes_free" : {
+              "type" : "long"
+            },
+            "inodes_total" : {
+              "type" : "long"
+            },
+            "inodes_used" : {
+              "type" : "long"
+            },
+            "total" : {
+              "type" : "long"
+            },
+            "used" : {
+              "type" : "long"
+            },
+            "used_percent" : {
+              "type" : "float"
+            }
+          }
+        },
+        "measurement_name" : {
+          "type" : "text",
+          "fields" : {
+            "keyword" : {
+              "type" : "keyword",
+              "ignore_above" : 256
+            }
+          }
+        },
+        "tag" : {
+          "properties" : {
+            "cpu" : {
+              "type" : "text",
+              "fields" : {
+                "keyword" : {
+                  "type" : "keyword",
+                  "ignore_above" : 256
+                }
+              }
+            },
+            "device" : {
+              "type" : "text",
+              "fields" : {
+                "keyword" : {
+                  "type" : "keyword",
+                  "ignore_above" : 256
+                }
+              }
+            },
+            "host" : {
+              "type" : "text",
+              "fields" : {
+                "keyword" : {
+                  "type" : "keyword",
+                  "ignore_above" : 256
+                }
+              }
+            },
+            "mode" : {
+              "type" : "text",
+              "fields" : {
+                "keyword" : {
+                  "type" : "keyword",
+                  "ignore_above" : 256
+                }
+              }
+            },
+            "path" : {
+              "type" : "text",
+              "fields" : {
+                "keyword" : {
+                  "type" : "keyword",
+                  "ignore_above" : 256
+                }
+              }
+            }
+          }
+        }
+      }
+    },
+    "settings" : {
+      "index" : {
+        "creation_date" : "1664693522789",
+        "number_of_shards" : "1",
+        "number_of_replicas" : "1",
+        "uuid" : "TYugdmvsQfmxjzbGRJ8FIw",
+        "version" : {
+          "created" : "136247827"
+        },
+        "provided_name" : "telegraf-2022.10.02"
+      }
+    }
+  }
+}
+```
+
+### Example events
+
+This plugin will format the events in the following way:
+
+```json
+{
+  "@timestamp": "2017-01-01T00:00:00+00:00",
+  "measurement_name": "cpu",
+  "cpu": {
+    "usage_guest": 0,
+    "usage_guest_nice": 0,
+    "usage_idle": 71.85413456197966,
+    "usage_iowait": 0.256805341656516,
+    "usage_irq": 0,
+    "usage_nice": 0,
+    "usage_softirq": 0.2054442732579466,
+    "usage_steal": 0,
+    "usage_system": 15.04879301548127,
+    "usage_user": 12.634822807288275
+  },
+  "tag": {
+    "cpu": "cpu-total",
+    "host": "opensearchhost",
+    "dc": "datacenter1"
+  }
+}
+```
+
+```json
+{
+  "@timestamp": "2017-01-01T00:00:00+00:00",
+  "measurement_name": "system",
+  "system": {
+    "load1": 0.78,
+    "load15": 0.8,
+    "load5": 0.8,
+    "n_cpus": 2,
+    "n_users": 2
+  },
+  "tag": {
+    "host": "opensearchhost",
+    "dc": "datacenter1"
+  }
+}
+```
+
+## Known issues
+
+Integer values larger than 2^63 and smaller than 1e21 (or in the same window
+for their negative counterparts) are encoded by the Go JSON encoder in decimal
+format, which is not fully supported by OpenSearch dynamic field mapping. This
+causes metrics with such values to be dropped if a field mapping has not yet
+been created on the Telegraf index. In that case you will see an exception on
+the OpenSearch side like this:
+
+```json
+{
+  "error": {
+    "root_cause": [
+      {"type": "mapper_parsing_exception", "reason": "failed to parse"}
+    ],
+    "type": "mapper_parsing_exception",
+    "reason": "failed to parse",
+    "caused_by": {
+      "type": "illegal_state_exception",
+      "reason": "No matching token for number_type [BIG_INTEGER]"
+    }
+  },
+  "status": 400
+}
+```
+
+The correct field mapping will be created on the telegraf index as soon as a
+supported JSON value is received by OpenSearch, and subsequent insertions
+will work because the field mapping will already exist.
+
+This issue is caused by the way OpenSearch tries to detect integer fields, and
+by how Go encodes numbers in JSON. There is no clear workaround for this at
+the moment.
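+
+The problematic window can be sketched in Python (an illustration of the range
+described above, not the plugin's actual logic):
+
+```python
+# Integers in this window are emitted by Go's JSON encoder as plain decimal
+# literals, which OpenSearch dynamic mapping rejects as BIG_INTEGER.
+INT64_MAX = 2**63 - 1
+
+def at_risk(value: int) -> bool:
+    """Return True if an integer falls in the unsupported window."""
+    return INT64_MAX < abs(value) < 10**21
+
+print(at_risk(2**64 - 1))   # a typical uint64 counter value -> True
+print(at_risk(INT64_MAX))   # fits the "long" mapping -> False
+print(at_risk(10**22))      # outside the window described above -> False
+```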
diff --git a/content/telegraf/v1/output-plugins/opentelemetry/_index.md b/content/telegraf/v1/output-plugins/opentelemetry/_index.md
new file mode 100644
index 000000000..0c620fcac
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/opentelemetry/_index.md
@@ -0,0 +1,117 @@
+---
+description: "Telegraf plugin for sending metrics to OpenTelemetry"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: OpenTelemetry
+    identifier: output-opentelemetry
+tags: [OpenTelemetry, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# OpenTelemetry Output Plugin
+
+This plugin sends metrics to [OpenTelemetry](https://opentelemetry.io) servers
+and agents via gRPC.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Send OpenTelemetry metrics over gRPC
+[[outputs.opentelemetry]]
+  ## Override the default (localhost:4317) OpenTelemetry gRPC service
+  ## address:port
+  # service_address = "localhost:4317"
+
+  ## Override the default (5s) request timeout
+  # timeout = "5s"
+
+  ## Optional TLS Config.
+  ##
+  ## Root certificates for verifying server certificates encoded in PEM format.
+  # tls_ca = "/etc/telegraf/ca.pem"
+  ## The public and private key pairs for the client encoded in PEM format.
+  ## May contain intermediate certificates.
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS, but skip TLS chain and host verification.
+  # insecure_skip_verify = false
+  ## Send the specified TLS server name via SNI.
+  # tls_server_name = "foo.example.com"
+
+  ## Override the default (gzip) compression used to send data.
+  ## Supports: "gzip", "none"
+  # compression = "gzip"
+
+  ## NOTE: Due to the way TOML is parsed, tables must be at the END of the
+  ## plugin definition, otherwise additional config options are read as part of
+  ## the table
+
+  ## Configuration options for the Coralogix dialect
+  ## Enable the following section if you use this plugin with a Coralogix endpoint
+  # [outputs.opentelemetry.coralogix]
+  #   ## Your Coralogix private key (required).
+  #   ## Please note that this is sensitive data!
+  #   private_key = "your_coralogix_key"
+  #
+  #   ## Application and subsystem names for the metrics (required)
+  #   application = "$NAMESPACE"
+  #   subsystem = "$HOSTNAME"
+
+  ## Additional OpenTelemetry resource attributes
+  # [outputs.opentelemetry.attributes]
+  # "service.name" = "demo"
+
+  ## Additional gRPC request metadata
+  # [outputs.opentelemetry.headers]
+  # key1 = "value1"
+```
+
+## Supported dialects
+
+### Coralogix
+
+This plugin supports sending data to a [Coralogix](https://coralogix.com)
+server by enabling the corresponding dialect, i.e. by uncommenting the
+`[outputs.opentelemetry.coralogix]` section.
+
+There, you can find the required settings to interact with the server:
+
+- `private_key`: your private key, which you can find under Settings > Send Your Data.
+- `application`: your application name, which will be added to your metric attributes.
+- `subsystem`: your subsystem name, which will be added to your metric attributes.
+
+More information is available on the
+[Getting Started page](https://coralogix.com/docs/guide-first-steps-coralogix/).
+
+### Schema
+
+The InfluxDB->OpenTelemetry conversion [schema](https://github.com/influxdata/influxdb-observability/blob/main/docs/index.md) and [implementation](https://github.com/influxdata/influxdb-observability/tree/main/influx2otel) are
+hosted on [GitHub](https://github.com/influxdata/influxdb-observability).
+
+For metrics, two input schemata exist. Line protocol with measurement name
+`prometheus` is assumed to have a schema matching the Prometheus input plugin
+when `metric_version = 2`. Line protocol with other measurement names is
+assumed to have a schema matching the Prometheus input plugin:
+
+- Metric name = `[measurement]_[field key]`
+- Metric value = line protocol field value, cast to float
+- Metric labels = line protocol tags
+
+Also see the OpenTelemetry input plugin.
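+
+As a rough sketch of this naming convention (a simplified illustration, not
+the influx2otel library's API):
+
+```python
+def influx_to_otel(measurement, fields, tags):
+    """Map one line-protocol point to OTel-style metrics per the rules above."""
+    return [
+        {
+            "name": f"{measurement}_{field_key}",  # [measurement]_[field key]
+            "value": float(field_value),           # field value, cast to float
+            "labels": dict(tags),                  # line protocol tags
+        }
+        for field_key, field_value in fields.items()
+    ]
+
+metrics = influx_to_otel("system", {"load1": 0.78, "n_cpus": 2}, {"host": "box1"})
+# metrics[0]["name"] == "system_load1", metrics[1]["value"] == 2.0
+```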
+
diff --git a/content/telegraf/v1/output-plugins/opentsdb/_index.md b/content/telegraf/v1/output-plugins/opentsdb/_index.md
new file mode 100644
index 000000000..6e979e966
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/opentsdb/_index.md
@@ -0,0 +1,136 @@
+---
+description: "Telegraf plugin for sending metrics to OpenTSDB"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: OpenTSDB
+    identifier: output-opentsdb
+tags: [OpenTSDB, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# OpenTSDB Output Plugin
+
+This plugin writes to an OpenTSDB instance using either the telnet or HTTP
+mode.
+
+Using the HTTP API is the recommended way of writing metrics since OpenTSDB
+2.0. To use HTTP mode, prefix the `host` setting with `http://`. You can also
+control how many metrics are sent in each HTTP request via the
+`http_batch_size` setting.
+
+See [the docs](http://opentsdb.net/docs/build/html/api_http/put.html) for
+details.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for OpenTSDB server to send metrics to
+[[outputs.opentsdb]]
+  ## prefix for metrics keys
+  prefix = "my.specific.prefix."
+
+  ## DNS name of the OpenTSDB server
+  ## Using "opentsdb.example.com" or "tcp://opentsdb.example.com" will use the
+  ## telnet API. "http://opentsdb.example.com" will use the Http API.
+  host = "opentsdb.example.com"
+
+  ## Port of the OpenTSDB server
+  port = 4242
+
+  ## Number of data points to send to OpenTSDB in Http requests.
+  ## Not used with telnet API.
+  http_batch_size = 50
+
+  ## URI Path for Http requests to OpenTSDB.
+  ## Used in cases where OpenTSDB is located behind a reverse proxy.
+  http_path = "/api/put"
+
+  ## Debug true - Prints OpenTSDB communication
+  debug = false
+
+  ## Separator separates measurement name from field
+  separator = "_"
+```
+
+## Transfer "Protocol" in the telnet mode
+
+The expected input from OpenTSDB is specified in the following way:
+
+```text
+put <metric> <timestamp> <value> <tagk1=tagv1[ tagk2=tagv2 ...tagkN=tagvN]>
+```
+
+The Telegraf output plugin adds an optional prefix to the metric keys so that
+a subset of metrics can be selected later.
+
+```text
+put <[prefix.]metric> <timestamp> <value> <tagk1=tagv1[ tagk2=tagv2 ...tagkN=tagvN]>
+```
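+
+A minimal sketch of building such a line (illustrative only, not the plugin's
+code; the tag ordering here is an assumption):
+
+```python
+def format_put(metric, timestamp, value, tags, prefix=""):
+    """Build a telnet-mode 'put' line in the format shown above."""
+    tag_str = " ".join(f"{k}={v}" for k, v in sorted(tags.items()))
+    return f"put {prefix}{metric} {timestamp} {value} {tag_str}"
+
+line = format_put("system_load1", 1441910356, 0.43,
+                  {"host": "irimame", "dc": "homeoffice"},
+                  prefix="nine.telegraf.")
+# -> "put nine.telegraf.system_load1 1441910356 0.43 dc=homeoffice host=irimame"
+```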
+
+### Example
+
+```text
+put nine.telegraf.system_load1 1441910356 0.430000 dc=homeoffice host=irimame scope=green
+put nine.telegraf.system_load5 1441910356 0.580000 dc=homeoffice host=irimame scope=green
+put nine.telegraf.system_load15 1441910356 0.730000 dc=homeoffice host=irimame scope=green
+put nine.telegraf.system_uptime 1441910356 3655970.000000 dc=homeoffice host=irimame scope=green
+put nine.telegraf.system_uptime_format 1441910356  dc=homeoffice host=irimame scope=green
+put nine.telegraf.mem_total 1441910356 4145426432 dc=homeoffice host=irimame scope=green
+...
+put nine.telegraf.io_write_bytes 1441910366 0 dc=homeoffice host=irimame name=vda2 scope=green
+put nine.telegraf.io_read_time 1441910366 0 dc=homeoffice host=irimame name=vda2 scope=green
+put nine.telegraf.io_write_time 1441910366 0 dc=homeoffice host=irimame name=vda2 scope=green
+put nine.telegraf.io_io_time 1441910366 0 dc=homeoffice host=irimame name=vda2 scope=green
+put nine.telegraf.ping_packets_transmitted 1441910366  dc=homeoffice host=irimame scope=green url=www.google.com
+put nine.telegraf.ping_packets_received 1441910366  dc=homeoffice host=irimame scope=green url=www.google.com
+put nine.telegraf.ping_percent_packet_loss 1441910366 0.000000 dc=homeoffice host=irimame scope=green url=www.google.com
+put nine.telegraf.ping_average_response_ms 1441910366 24.006000 dc=homeoffice host=irimame scope=green url=www.google.com
+...
+```
+
+The OpenTSDB telnet interface can be simulated with this reader:
+
+```go
+// opentsdb_telnet_mode_mock.go
+package main
+
+import (
+    "io"
+    "log"
+    "net"
+    "os"
+)
+
+func main() {
+    l, err := net.Listen("tcp", "localhost:4242")
+    if err != nil {
+        log.Fatal(err)
+    }
+    defer l.Close()
+    for {
+        conn, err := l.Accept()
+        if err != nil {
+            log.Fatal(err)
+        }
+        go func(c net.Conn) {
+            defer c.Close()
+            io.Copy(os.Stdout, c)
+        }(conn)
+    }
+}
+```
+
+## Allowed values for metrics
+
+OpenTSDB allows `integers` and `floats` as input values.
diff --git a/content/telegraf/v1/output-plugins/parquet/_index.md b/content/telegraf/v1/output-plugins/parquet/_index.md
new file mode 100644
index 000000000..5a037258a
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/parquet/_index.md
@@ -0,0 +1,127 @@
+---
+description: "Telegraf plugin for sending metrics to Parquet"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Parquet
+    identifier: output-parquet
+tags: [Parquet, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Parquet Output Plugin
+
+This plugin writes metrics to parquet files. By default, the parquet output
+groups metrics by metric name and writes them all to the same file. If a
+metric's schema does not match the file's schema, the metric is dropped.
+
+To learn more about Parquet, check out the [Parquet docs](https://parquet.apache.org/docs/) as well as a blog
+post on [Querying Parquet](https://www.influxdata.com/blog/querying-parquet-millisecond-latency/).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# A plugin that writes metrics to parquet files
+[[outputs.parquet]]
+  ## Directory to write parquet files in. If a file already exists the output
+  ## will attempt to continue using the existing file.
+  # directory = "."
+
+  ## Files are rotated after the time interval specified. When set to 0 no time
+  ## based rotation is performed.
+  # rotation_interval = "0h"
+
+  ## Timestamp field name
+  ## Field name to use to store the timestamp. If set to an empty string, then
+  ## the timestamp is omitted.
+  # timestamp_field_name = "timestamp"
+```
+
+## Building Parquet Files
+
+### Schema
+
+Parquet files require a schema when being written. To generate one, Telegraf
+goes through all grouped metrics and generates an Apache Arrow schema based on
+the union of all fields and tags. If a field and a tag have the same name, the
+field takes precedence.
+
+As a consequence, the first flush interval in which a metric is seen takes
+longer, due to the additional pass over the metrics to generate the schema.
+Subsequent flush intervals are significantly faster.
+
+When writing to a file, the schema is used to look up each value; if a value
+is not present, a null is written. As a result, fields that first appear after
+the initial flush are omitted.
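+
+The union-with-precedence rule can be sketched as follows (a simplified model,
+not the plugin's Arrow-based implementation):
+
+```python
+def build_schema(metrics):
+    """Union all tag and field names; a field wins a name clash with a tag."""
+    schema = {}
+    for m in metrics:
+        for tag in m["tags"]:
+            schema.setdefault(tag, "tag")   # don't demote an existing field
+        for field in m["fields"]:
+            schema[field] = "field"         # field takes precedence over a tag
+    return schema
+
+schema = build_schema([
+    {"tags": ["host"], "fields": ["usage_idle"]},
+    {"tags": ["host", "usage_idle"], "fields": ["usage_user"]},  # clash: field wins
+])
+# -> {"host": "tag", "usage_idle": "field", "usage_user": "field"}
+```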
+
+### Write
+
+The plugin makes use of a buffered writer, which may hold some metrics in
+memory before writing them to disk. This approach is used because it can more
+compactly write multiple flushes of metrics into a single Parquet row group.
+
+Additionally, the Parquet format requires a proper footer, so close must be
+called on the file to ensure it is properly formatted.
+
+### Close
+
+Parquet files must be closed properly or they will not be readable. The
+parquet format requires a footer at the end of the file; if that footer is
+missing, the file cannot be read correctly.
+
+This can occur if Telegraf crashes while writing parquet files.
+
+## File Rotation
+
+If a file with the same target name exists at start, the existing file is
+rotated to avoid overwriting it or causing a schema conflict.
+
+File rotation is available via an optional, user-set time interval. Due to the
+use of a buffered writer, size-based rotation is not possible, as the file may
+not actually receive data at each interval.
+
+## Explore Parquet Files
+
+To quickly explore the schema or data in a Parquet file, consider the options
+below:
+
+### CLI
+
+The Arrow repo contains a Go CLI tool to read and parse Parquet files:
+
+```sh
+go install github.com/apache/arrow/go/v18/parquet/cmd/parquet_reader@latest
+parquet_reader <file>
+```
+
+### Python
+
+You can also use the [pyarrow](https://arrow.apache.org/docs/python/generated/pyarrow.parquet.read_table.html) library to quickly open and explore Parquet
+files:
+
+```python
+import pyarrow.parquet as pq
+
+table = pq.read_table('example.parquet')
+```
+
+Once the table is created, you can use the various [pyarrow.Table](https://arrow.apache.org/docs/python/generated/pyarrow.Table.html#pyarrow.Table) functions
+to further explore the data.
diff --git a/content/telegraf/v1/output-plugins/postgresql/_index.md b/content/telegraf/v1/output-plugins/postgresql/_index.md
new file mode 100644
index 000000000..55587a013
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/postgresql/_index.md
@@ -0,0 +1,314 @@
+---
+description: "Telegraf plugin for sending metrics to PostgreSQL"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: PostgreSQL
+    identifier: output-postgresql
+tags: [PostgreSQL, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# PostgreSQL Output Plugin
+
+This output plugin writes metrics to PostgreSQL (or a compatible database).
+The plugin manages the schema, automatically adding missing columns.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Startup error behavior options <!-- @/docs/includes/startup_error_behavior.md -->
+
+In addition to the plugin-specific and global configuration settings the plugin
+supports options for specifying the behavior when experiencing startup errors
+using the `startup_error_behavior` setting. Available values are:
+
+- `error`:  Telegraf will stop and exit in case of startup errors. This is the
+            default behavior.
+- `ignore`: Telegraf will ignore startup errors for this plugin, disable it,
+            but continue processing all other plugins.
+- `retry`:  Telegraf will try to start the plugin on every gather or write
+            cycle in case of startup errors. The plugin is disabled until
+            the startup succeeds.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `connection` option.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Publishes metrics to a postgresql database
+[[outputs.postgresql]]
+  ## Specify connection address via the standard libpq connection string:
+  ##   host=... user=... password=... sslmode=... dbname=...
+  ## Or a URL:
+  ##   postgres://[user[:password]]@localhost[/dbname]?sslmode=[disable|verify-ca|verify-full]
+  ## See https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING
+  ##
+  ## All connection parameters are optional. Environment vars are also supported.
+  ## e.g. PGPASSWORD, PGHOST, PGUSER, PGDATABASE
+  ## All supported vars can be found here:
+  ##  https://www.postgresql.org/docs/current/libpq-envars.html
+  ##
+  ## Non-standard parameters:
+  ##   pool_max_conns (default: 1) - Maximum size of connection pool for parallel (per-batch per-table) inserts.
+  ##   pool_min_conns (default: 0) - Minimum size of connection pool.
+  ##   pool_max_conn_lifetime (default: 0s) - Maximum age of a connection before closing.
+  ##   pool_max_conn_idle_time (default: 0s) - Maximum idle time of a connection before closing.
+  ##   pool_health_check_period (default: 0s) - Duration between health checks on idle connections.
+  # connection = ""
+
+  ## Postgres schema to use.
+  # schema = "public"
+
+  ## Store tags as foreign keys in the metrics table. Default is false.
+  # tags_as_foreign_keys = false
+
+  ## Suffix to append to table name (measurement name) for the foreign tag table.
+  # tag_table_suffix = "_tag"
+
+  ## Deny inserting metrics if the foreign tag can't be inserted.
+  # foreign_tag_constraint = false
+
+  ## Store all tags as a JSONB object in a single 'tags' column.
+  # tags_as_jsonb = false
+
+  ## Store all fields as a JSONB object in a single 'fields' column.
+  # fields_as_jsonb = false
+
+  ## Name of the timestamp column
+  ## NOTE: Some tools (e.g. Grafana) require the default name so be careful!
+  # timestamp_column_name = "time"
+
+  ## Type of the timestamp column
+  ## Currently, "timestamp without time zone" and "timestamp with time zone"
+  ## are supported
+  # timestamp_column_type = "timestamp without time zone"
+
+  ## Templated statements to execute when creating a new table.
+  # create_templates = [
+  #   '''CREATE TABLE {{ .table }} ({{ .columns }})''',
+  # ]
+
+  ## Templated statements to execute when adding columns to a table.
+  ## Set to an empty list to disable. Points containing tags for which there is no column will be skipped. Points
+  ## containing fields for which there is no column will have the field omitted.
+  # add_column_templates = [
+  #   '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
+  # ]
+
+  ## Templated statements to execute when creating a new tag table.
+  # tag_table_create_templates = [
+  #   '''CREATE TABLE {{ .table }} ({{ .columns }}, PRIMARY KEY (tag_id))''',
+  # ]
+
+  ## Templated statements to execute when adding columns to a tag table.
+  ## Set to an empty list to disable. Points containing tags for which there is no column will be skipped.
+  # tag_table_add_column_templates = [
+  #   '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
+  # ]
+
+  ## The postgres data type to use for storing unsigned 64-bit integer values (Postgres does not have a native
+  ## unsigned 64-bit integer type).
+  ## The value can be one of:
+  ##   numeric - Uses the PostgreSQL "numeric" data type.
+  ##   uint8 - Requires pguint extension (https://github.com/petere/pguint)
+  # uint64_type = "numeric"
+
+  ## When using pool_max_conns>1, and a temporary error occurs, the query is retried with an incremental backoff. This
+  ## controls the maximum backoff duration.
+  # retry_max_backoff = "15s"
+
+  ## Approximate number of tag IDs to store in in-memory cache (when using tags_as_foreign_keys).
+  ## This is an optimization to skip inserting known tag IDs.
+  ## Each entry consumes approximately 34 bytes of memory.
+  # tag_cache_size = 100000
+
+  ## Enable & set the log level for the Postgres driver.
+  # log_level = "warn" # trace, debug, info, warn, error, none
+```
+
+### Concurrency
+
+By default the postgresql plugin does not use any concurrency, but it can for
+increased throughput. When concurrency is off, the Telegraf core handles
+retrying on failure, buffering, and so on. When concurrency is used, these
+aspects have to be handled by the plugin.
+
+To enable concurrent writes to the database, set the `pool_max_conns`
+connection parameter to a value >1. When enabled, incoming batches will be
+split by measurement/table name. In addition, if a batch comes in and the
+previous batch has not completed, concurrency will be used for the new batch
+as well.
+
+If all connections are utilized and the pool is exhausted, further incoming
+batches will be buffered within telegraf core.
+
+### Foreign tags
+
+When using `tags_as_foreign_keys`, tags will be written to a separate table
+with a `tag_id` column used for joins. Each series (unique combination of tag
+values) gets its own entry in the tags table, and a unique `tag_id`.
+
+## Data types
+
+By default the postgresql plugin maps Influx data types to the following
+PostgreSQL types:
+
+| Influx                                                                                                       | PostgreSQL                                                                                         |
+|--------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------|
+| [float](https://docs.influxdata.com/influxdb/latest/reference/syntax/line-protocol/#float)                   | [double precision](https://www.postgresql.org/docs/current/datatype-numeric.html#DATATYPE-FLOAT)   |
+| [integer](https://docs.influxdata.com/influxdb/latest/reference/syntax/line-protocol/#integer)               | [bigint](https://www.postgresql.org/docs/current/datatype-numeric.html#DATATYPE-INT)               |
+| [uinteger](https://docs.influxdata.com/influxdb/latest/reference/syntax/line-protocol/#uinteger)             | [numeric](https://www.postgresql.org/docs/current/datatype-numeric.html#DATATYPE-NUMERIC-DECIMAL)* |
+| [string](https://docs.influxdata.com/influxdb/latest/reference/syntax/line-protocol/#string)                 | [text](https://www.postgresql.org/docs/current/datatype-character.html)                            |
+| [boolean](https://docs.influxdata.com/influxdb/latest/reference/syntax/line-protocol/#boolean)               | [boolean](https://www.postgresql.org/docs/current/datatype-boolean.html)                           |
+| [unix timestamp](https://docs.influxdata.com/influxdb/latest/reference/syntax/line-protocol/#unix-timestamp) | [timestamp](https://www.postgresql.org/docs/current/datatype-datetime.html)                        |
+
+It is important to note that `uinteger` (unsigned 64-bit integer) is mapped to
+the `numeric` PostgreSQL data type. The `numeric` data type is an arbitrary
+precision decimal data type that is less efficient than `bigint`. This is
+necessary as the range of values for the Influx `uinteger` data type can
+exceed `bigint`, and thus cause errors when inserting data.
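+
+The range mismatch is easy to verify (illustrative arithmetic only):
+
+```python
+# PostgreSQL bigint is a signed 64-bit integer; the Influx uinteger type is
+# an unsigned 64-bit integer, so its upper half does not fit into bigint.
+BIGINT_MAX = 2**63 - 1   # 9223372036854775807
+UINT64_MAX = 2**64 - 1   # 18446744073709551615
+
+assert UINT64_MAX > BIGINT_MAX
+# Hence the default mapping to the arbitrary-precision "numeric" type.
+```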
+
+### pguint
+
+As a solution to the `uinteger`/`numeric` data type problem, there is a
+PostgreSQL extension that offers unsigned 64-bit integer support:
+[https://github.com/petere/pguint](https://github.com/petere/pguint).
+
+If this extension is installed, you can set the `uint64_type` config option to
+`"uint8"`, which will cause the plugin to use the `uint8` datatype instead of
+`numeric`.
+
+## Templating
+
+The postgresql plugin uses templates for the schema-modification SQL
+statements, giving the user complete control of the schema.
+
+Documentation on how to write templates can be found in the
+[sqltemplate docs](https://pkg.go.dev/github.com/influxdata/telegraf/plugins/outputs/postgresql/sqltemplate).
+
+### Samples
+
+#### TimescaleDB
+
+```toml
+tags_as_foreign_keys = true
+create_templates = [
+    '''CREATE TABLE {{ .table }} ({{ .columns }})''',
+    '''SELECT create_hypertable({{ .table|quoteLiteral }}, 'time', chunk_time_interval => INTERVAL '7d')''',
+    '''ALTER TABLE {{ .table }} SET (timescaledb.compress, timescaledb.compress_segmentby = 'tag_id')''',
+]
+```
+
+##### Multi-node
+
+```toml
+tags_as_foreign_keys = true
+create_templates = [
+    '''CREATE TABLE {{ .table }} ({{ .columns }})''',
+    '''SELECT create_distributed_hypertable({{ .table|quoteLiteral }}, 'time', partitioning_column => 'tag_id', number_partitions => (SELECT count(*) FROM timescaledb_information.data_nodes)::integer, replication_factor => 2, chunk_time_interval => INTERVAL '7d')''',
+    '''ALTER TABLE {{ .table }} SET (timescaledb.compress, timescaledb.compress_segmentby = 'tag_id')''',
+]
+```
+
+#### Tag table with view
+
+This example enables `tags_as_foreign_keys`, but creates a postgres view to
+automatically join the metric & tag tables. The metric & tag tables are stored
+in a "telegraf" schema, with the view in the "public" schema.
+
+```toml
+tags_as_foreign_keys = true
+schema = "telegraf"
+create_templates = [
+    '''CREATE TABLE {{ .table }} ({{ .columns }})''',
+    '''CREATE VIEW {{ .table.WithSchema "public" }} AS SELECT time, {{ (.tagTable.Columns.Tags.Concat .allColumns.Fields).Identifiers | join "," }} FROM {{ .table }} t, {{ .tagTable }} tt WHERE t.tag_id = tt.tag_id''',
+]
+add_column_templates = [
+    '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
+    '''DROP VIEW IF EXISTS {{ .table.WithSchema "public" }}''',
+    '''CREATE VIEW {{ .table.WithSchema "public" }} AS SELECT time, {{ (.tagTable.Columns.Tags.Concat .allColumns.Fields).Identifiers | join "," }} FROM {{ .table }} t, {{ .tagTable }} tt WHERE t.tag_id = tt.tag_id''',
+]
+tag_table_add_column_templates = [
+    '''ALTER TABLE {{.table}} ADD COLUMN IF NOT EXISTS {{.columns|join ", ADD COLUMN IF NOT EXISTS "}}''',
+    '''DROP VIEW IF EXISTS {{ .metricTable.WithSchema "public" }}''',
+    '''CREATE VIEW {{ .metricTable.WithSchema "public" }} AS SELECT time, {{ (.allColumns.Tags.Concat .metricTable.Columns.Fields).Identifiers | join "," }} FROM {{ .metricTable }} t, {{ .tagTable }} tt WHERE t.tag_id = tt.tag_id''',
+]
+```
+
+#### Immutable data table
+
+Some PostgreSQL-compatible databases don't allow modifying a table's schema
+after initial creation. This example works around that limitation by renaming
+the existing table, creating a new table with the updated schema, and using a
+view to union the two together.
+
+```toml
+tags_as_foreign_keys = true
+schema = 'telegraf'
+create_templates = [
+    '''CREATE TABLE {{ .table }} ({{ .allColumns }})''',
+    '''SELECT create_hypertable({{ .table|quoteLiteral }}, 'time', chunk_time_interval => INTERVAL '7d')''',
+    '''ALTER TABLE {{ .table }} SET (timescaledb.compress, timescaledb.compress_segmentby = 'tag_id')''',
+    '''SELECT add_compression_policy({{ .table|quoteLiteral }}, INTERVAL '14d')''',
+    '''CREATE VIEW {{ .table.WithSuffix "_data" }} AS SELECT {{ .allColumns.Selectors | join "," }} FROM {{ .table }}''',
+    '''CREATE VIEW {{ .table.WithSchema "public" }} AS SELECT time, {{ (.tagTable.Columns.Tags.Concat .allColumns.Fields).Identifiers | join "," }} FROM {{ .table.WithSuffix "_data" }} t, {{ .tagTable }} tt WHERE t.tag_id = tt.tag_id''',
+]
+add_column_templates = [
+    '''ALTER TABLE {{ .table }} RENAME TO {{ (.table.WithSuffix "_" .table.Columns.Hash).WithSchema "" }}''',
+    '''ALTER VIEW {{ .table.WithSuffix "_data" }} RENAME TO {{ (.table.WithSuffix "_" .table.Columns.Hash "_data").WithSchema "" }}''',
+    '''DROP VIEW {{ .table.WithSchema "public" }}''',
+
+    '''CREATE TABLE {{ .table }} ({{ .allColumns }})''',
+    '''SELECT create_hypertable({{ .table|quoteLiteral }}, 'time', chunk_time_interval => INTERVAL '7d')''',
+    '''ALTER TABLE {{ .table }} SET (timescaledb.compress, timescaledb.compress_segmentby = 'tag_id')''',
+    '''SELECT add_compression_policy({{ .table|quoteLiteral }}, INTERVAL '14d')''',
+    '''CREATE VIEW {{ .table.WithSuffix "_data" }} AS SELECT {{ .allColumns.Selectors | join "," }} FROM {{ .table }} UNION ALL SELECT {{ (.allColumns.Union .table.Columns).Selectors | join "," }} FROM {{ .table.WithSuffix "_" .table.Columns.Hash "_data" }}''',
+    '''CREATE VIEW {{ .table.WithSchema "public" }} AS SELECT time, {{ (.tagTable.Columns.Tags.Concat .allColumns.Fields).Identifiers | join "," }} FROM {{ .table.WithSuffix "_data" }} t, {{ .tagTable }} tt WHERE t.tag_id = tt.tag_id''',
+]
+tag_table_add_column_templates = [
+    '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
+    '''DROP VIEW {{ .metricTable.WithSchema "public" }}''',
+    '''CREATE VIEW {{ .metricTable.WithSchema "public" }} AS SELECT time, {{ (.allColumns.Tags.Concat .metricTable.Columns.Fields).Identifiers | join "," }} FROM {{ .metricTable.WithSuffix "_data" }} t, {{ .table }} tt WHERE t.tag_id = tt.tag_id''',
+]
+```
+
+#### Index
+
+Create an index on time and tag columns for faster querying of data.
+
+```toml
+create_templates = [
+    '''CREATE TABLE {{ .table }} ({{ .columns }})''',
+    '''CREATE INDEX ON {{ .table }} USING btree({{ .columns.Keys.Identifiers | join "," }})'''
+  ]
+```
+
+## Error handling
+
+When the plugin encounters an error writing to the database, it attempts to
+determine whether the error is temporary or permanent. An error is considered
+temporary if retrying the write might succeed, for example after a connection
+interruption or a deadlock. Permanent errors include problems such as an
+invalid data type or insufficient permissions.
+
+When an error is determined to be temporary, the plugin will retry the write
+with an incremental backoff.
+
+When an error is determined to be permanent, the plugin will discard the
+sub-batch. The "sub-batch" is the portion of the input batch that is being
+written to the same table.
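+
+The retry policy described above can be pictured with a short Python sketch
+(an illustrative model, not the plugin's actual Go implementation; the error
+classes, delays, and function names here are assumptions for demonstration):
+
+```python
+import time
+
+class TemporaryError(Exception):
+    """Retryable failure, e.g. a dropped connection or a deadlock."""
+
+def write_with_backoff(write, step=0.25, max_delay=2.0, sleep=time.sleep):
+    """Retry `write` on temporary errors with incrementally growing delays.
+
+    Any other exception (a "permanent" error) propagates immediately,
+    modeling the plugin discarding the affected sub-batch.
+    """
+    delay = step
+    while True:
+        try:
+            return write()
+        except TemporaryError:
+            if delay > max_delay:
+                raise  # give up once the backoff budget is exhausted
+            sleep(delay)
+            delay += step  # incremental backoff
+
+attempts = 0
+def flaky_write():
+    global attempts
+    attempts += 1
+    if attempts < 3:
+        raise TemporaryError("connection reset")
+    return "ok"
+
+result = write_with_backoff(flaky_write, step=0.01, max_delay=0.1)
+print(result)  # "ok" after two retried temporary errors
+```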
diff --git a/content/telegraf/v1/output-plugins/prometheus_client/_index.md b/content/telegraf/v1/output-plugins/prometheus_client/_index.md
new file mode 100644
index 000000000..a2eed8c04
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/prometheus_client/_index.md
@@ -0,0 +1,101 @@
+---
+description: "Telegraf plugin for sending metrics to Prometheus"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Prometheus
+    identifier: output-prometheus_client
+tags: [Prometheus, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Prometheus Output Plugin
+
+This plugin starts a [Prometheus](https://prometheus.io/) client that exposes
+all metrics on `/metrics` (by default) for a Prometheus server to poll.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `basic_password` option.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for the Prometheus client to spawn
+[[outputs.prometheus_client]]
+  ## Address to listen on.
+  ##   ex:
+  ##     listen = ":9273"
+  ##     listen = "vsock://:9273"
+  listen = ":9273"
+
+  ## Maximum duration before timing out read of the request
+  # read_timeout = "10s"
+  ## Maximum duration before timing out write of the response
+  # write_timeout = "10s"
+
+  ## Metric version controls the mapping from Prometheus metrics into Telegraf metrics.
+  ## See "Metric Format Configuration" in plugins/inputs/prometheus/README.md for details.
+  ## Valid options: 1, 2
+  # metric_version = 1
+
+  ## Use HTTP Basic Authentication.
+  # basic_username = "Foo"
+  # basic_password = "Bar"
+
+  ## If set, the IP Ranges which are allowed to access metrics.
+  ##   ex: ip_range = ["192.168.0.0/24", "192.168.1.0/30"]
+  # ip_range = []
+
+  ## Path to publish the metrics on.
+  # path = "/metrics"
+
+  ## Expiration interval for each metric. 0 == no expiration
+  # expiration_interval = "60s"
+
+  ## Collectors to enable, valid entries are "gocollector" and "process".
+  ## If unset, both are enabled.
+  # collectors_exclude = ["gocollector", "process"]
+
+  ## Send string metrics as Prometheus labels.
+  ## Unless set to false all string metrics will be sent as labels.
+  # string_as_label = true
+
+  ## If set, enable TLS with the given certificate.
+  # tls_cert = "/etc/ssl/telegraf.crt"
+  # tls_key = "/etc/ssl/telegraf.key"
+
+  ## Set one or more allowed client CA certificate file names to
+  ## enable mutually authenticated TLS connections
+  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]
+
+  ## Export metric collection time.
+  # export_timestamp = false
+
+  ## Specify the metric type explicitly.
+  ## This overrides the metric-type of the Telegraf metric. Globbing is allowed.
+  # [outputs.prometheus_client.metric_types]
+  #   counter = []
+  #   gauge = []
+```
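+
+A Prometheus server can poll this endpoint with an ordinary scrape job; a
+minimal sketch, assuming Telegraf runs on the same host with the default
+`listen` address:
+
+```yaml
+scrape_configs:
+  - job_name: "telegraf"
+    static_configs:
+      - targets: ["localhost:9273"]
+```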
+
+## Metrics
+
+Prometheus metrics are produced in the same manner as the
+[Prometheus serializer](/telegraf/v1/data_formats/output/prometheus/).
diff --git a/content/telegraf/v1/output-plugins/redistimeseries/_index.md b/content/telegraf/v1/output-plugins/redistimeseries/_index.md
new file mode 100644
index 000000000..c1f22a151
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/redistimeseries/_index.md
@@ -0,0 +1,61 @@
+---
+description: "Telegraf plugin for sending metrics to RedisTimeSeries Producer"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: RedisTimeSeries Producer
+    identifier: output-redistimeseries
+tags: [RedisTimeSeries Producer, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# RedisTimeSeries Producer Output Plugin
+
+The RedisTimeSeries output plugin writes metrics to a RedisTimeSeries server.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `username` and
+`password` options.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Publishes metrics to a redis timeseries server
+[[outputs.redistimeseries]]
+  ## The address of the RedisTimeSeries server.
+  address = "127.0.0.1:6379"
+
+  ## Redis ACL credentials
+  # username = ""
+  # password = ""
+  # database = 0
+
+  ## Timeout for operations such as ping or sending metrics
+  # timeout = "10s"
+
+  ## Attempt to convert string fields to numeric values.
+  ## If set to "false", or if a string value cannot be converted, the string
+  ## field is dropped.
+  # convert_string_fields = true
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  # insecure_skip_verify = false
+```
diff --git a/content/telegraf/v1/output-plugins/remotefile/_index.md b/content/telegraf/v1/output-plugins/remotefile/_index.md
new file mode 100644
index 000000000..dd699707e
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/remotefile/_index.md
@@ -0,0 +1,89 @@
+---
+description: "Telegraf plugin for sending metrics to Remote File"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Remote File
+    identifier: output-remotefile
+tags: [Remote File, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Remote File Output Plugin
+
+This plugin writes Telegraf metrics to files in remote locations using the
+[rclone library](https://rclone.org). Currently the following backends are
+supported:
+
+- `local`: [Local filesystem](https://rclone.org/local/)
+- `s3`: [Amazon S3 storage providers](https://rclone.org/s3/)
+- `sftp`: [Secure File Transfer Protocol](https://rclone.org/sftp/)
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `remote` option.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Send telegraf metrics to file(s) in a remote filesystem
+[[outputs.remotefile]]
+  ## Remote location according to https://rclone.org/#providers
+  ## Check the backend configuration options and specify them in
+  ##   <backend type>[,<param1>=<value1>[,...,<paramN>=<valueN>]]:[root]
+  ## for example:
+  ##   remote = 's3,provider=AWS,access_key_id=...,secret_access_key=...,session_token=...,region=us-east-1:mybucket'
+  ## By default, remote is the local current directory
+  # remote = "local:"
+
+  ## Files to write in the remote location
+  ## Each file can be a Golang template for generating the filename from metrics.
+  ## See https://pkg.go.dev/text/template for a reference and use the metric
+  ## name (`{{.Name}}`), tag values (`{{.Tag "name"}}`), field values
+  ## (`{{.Field "name"}}`) or the metric time (`{{.Time}}`) to derive the
+  ## filename.
+  ## The 'files' setting may contain directories relative to the root path
+  ## defined in 'remote'.
+  files = ['{{.Name}}-{{.Time.Format "2006-01-02"}}']
+
+  ## Use batch serialization format instead of line based delimiting.
+  ## The batch format allows for the production of non-line-based output formats
+  ## and may more efficiently encode metrics.
+  # use_batch_format = false
+
+  ## Cache settings
+  ## Time to wait for all writes to complete on shutdown of the plugin.
+  # final_write_timeout = "10s"
+
+  ## Time to wait between writing to a file and uploading to the remote location
+  # cache_write_back = "5s"
+
+  ## Maximum size of the cache on disk (infinite by default)
+  # cache_max_size = -1
+
+  ## Data format to output.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
+  data_format = "influx"
+```
+
+## Available custom functions
+
+The following functions can be used in the templates:
+
+- `now`: returns the current time (example: `{{now.Format "2006-01-02"}}`)
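+
+For example, `now` can be combined with the metric-derived values to group
+uploads by the date they were written (a hypothetical layout):
+
+```toml
+files = ['{{now.Format "2006"}}/{{.Name}}-{{.Time.Format "2006-01-02"}}.lp']
+```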
diff --git a/content/telegraf/v1/output-plugins/riemann/_index.md b/content/telegraf/v1/output-plugins/riemann/_index.md
new file mode 100644
index 000000000..c42e0e182
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/riemann/_index.md
@@ -0,0 +1,118 @@
+---
+description: "Telegraf plugin for sending metrics to Riemann"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Riemann
+    identifier: output-riemann
+tags: [Riemann, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Riemann Output Plugin
+
+This plugin writes to [Riemann](http://riemann.io/) via TCP or UDP.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for Riemann to send metrics to
+[[outputs.riemann]]
+  ## The full TCP or UDP URL of the Riemann server
+  url = "tcp://localhost:5555"
+
+  ## Riemann event TTL, floating-point time in seconds.
+  ## Defines how long an event is considered valid in Riemann
+  # ttl = 30.0
+
+  ## Separator to use between measurement and field name in Riemann service name
+  ## This does not have any effect if 'measurement_as_attribute' is set to 'true'
+  separator = "/"
+
+  ## Set measurement name as Riemann attribute 'measurement', instead of prepending it to the Riemann service name
+  # measurement_as_attribute = false
+
+  ## Send string metrics as Riemann event states.
+  ## Unless enabled all string metrics will be ignored
+  # string_as_state = false
+
+  ## A list of tag keys whose values get sent as Riemann tags.
+  ## If empty, all Telegraf tag values will be sent as tags
+  # tag_keys = ["telegraf","custom_tag"]
+
+  ## Additional Riemann tags to send.
+  # tags = ["telegraf-output"]
+
+  ## Description for Riemann event
+  # description_text = "metrics collected from telegraf"
+
+  ## Riemann client write timeout, defaults to "5s" if not set.
+  # timeout = "5s"
+```
+
+### Required parameters
+
+* `url`: The full TCP or UDP URL of the Riemann server to send events to.
+
+### Optional parameters
+
+* `ttl`: Riemann event TTL, floating-point time in seconds. Defines how long
+  an event is considered valid in Riemann.
+* `separator`: Separator to use between measurement and field name in Riemann
+  service name.
+* `measurement_as_attribute`: Set measurement name as a Riemann attribute,
+  instead of prepending it to the Riemann service name.
+* `string_as_state`: Send string metrics as Riemann event states. If this is not
+  enabled then all string metrics will be ignored.
+* `tag_keys`: A list of tag keys whose values get sent as Riemann tags. If
+  empty, all Telegraf tag values will be sent as tags.
+* `tags`: Additional Riemann tags that will be sent.
+* `description_text`: Description text for Riemann event.
+
+## Example Events
+
+Riemann event emitted by Telegraf with default configuration:
+
+```text
+#riemann.codec.Event{
+:host "postgresql-1e612b44-e92f-4d27-9f30-5e2f53947870", :state nil, :description nil, :ttl 30.0,
+:service "disk/used_percent", :metric 73.16736001949994, :path "/boot", :fstype "ext4", :time 1475605021}
+```
+
+Telegraf emitting the same Riemann event with `measurement_as_attribute` set to
+`true`:
+
+```text
+#riemann.codec.Event{ ...
+:measurement "disk", :service "used_percent", :metric 73.16736001949994,
+... :time 1475605021}
+```
+
+Telegraf emitting the same Riemann event with additional Riemann tags defined:
+
+```text
+#riemann.codec.Event{
+:host "postgresql-1e612b44-e92f-4d27-9f30-5e2f53947870", :state nil, :description nil, :ttl 30.0,
+:service "disk/used_percent", :metric 73.16736001949994, :path "/boot", :fstype "ext4", :time 1475605021,
+:tags ["telegraf" "postgres_cluster"]}
+```
+
+Telegraf emitting a Riemann event with a status text and `string_as_state` set
+to `true`, and a `description_text` defined:
+
+```text
+#riemann.codec.Event{
+:host "postgresql-1e612b44-e92f-4d27-9f30-5e2f53947870", :state "Running", :ttl 30.0,
+:description "PostgreSQL master node is up and running",
+:service "status", :time 1475605021}
+```
diff --git a/content/telegraf/v1/output-plugins/sensu/_index.md b/content/telegraf/v1/output-plugins/sensu/_index.md
new file mode 100644
index 000000000..4d30b2178
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/sensu/_index.md
@@ -0,0 +1,123 @@
+---
+description: "Telegraf plugin for sending metrics to Sensu Go"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Sensu Go
+    identifier: output-sensu
+tags: [Sensu Go, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Sensu Go Output Plugin
+
+This plugin writes metric events to [Sensu Go](https://sensu.io) via its
+HTTP events API.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Send aggregate metrics to Sensu Monitor
+[[outputs.sensu]]
+  ## BACKEND API URL is the Sensu Backend API root URL to send metrics to
+  ## (protocol, host, and port only). The output plugin will automatically
+  ## append the corresponding backend API path
+  ## (/api/core/v2/namespaces/:entity_namespace/events/:entity_name/:check_name).
+  ##
+  ## Backend Events API reference:
+  ## https://docs.sensu.io/sensu-go/latest/api/events/
+  ##
+  ## AGENT API URL is the Sensu Agent API root URL to send metrics to
+  ## (protocol, host, and port only). The output plugin will automatically
+  ## append the corresponding agent API path (/events).
+  ##
+  ## Agent API Events API reference:
+  ## https://docs.sensu.io/sensu-go/latest/api/events/
+  ##
+  ## NOTE: if backend_api_url and agent_api_url and api_key are set, the output
+  ## plugin will use backend_api_url. If backend_api_url and agent_api_url are
+  ## not provided, the output plugin will default to use an agent_api_url of
+  ## http://127.0.0.1:3031
+  ##
+  # backend_api_url = "http://127.0.0.1:8080"
+  # agent_api_url = "http://127.0.0.1:3031"
+
+  ## API KEY is the Sensu Backend API token
+  ## Generate a new API token via:
+  ##
+  ## $ sensuctl cluster-role create telegraf --verb create --resource events,entities
+  ## $ sensuctl cluster-role-binding create telegraf --cluster-role telegraf --group telegraf
+  ## $ sensuctl user create telegraf --group telegraf --password REDACTED
+  ## $ sensuctl api-key grant telegraf
+  ##
+  ## For more information on Sensu RBAC profiles & API tokens, please visit:
+  ## - https://docs.sensu.io/sensu-go/latest/reference/rbac/
+  ## - https://docs.sensu.io/sensu-go/latest/reference/apikeys/
+  ##
+  # api_key = "${SENSU_API_KEY}"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## Timeout for HTTP message
+  # timeout = "5s"
+
+  ## HTTP Content-Encoding for write request body, can be set to "gzip" to
+  ## compress body or "identity" to apply no encoding.
+  # content_encoding = "identity"
+
+  ## NOTE: Due to the way TOML is parsed, tables must be at the END of the
+  ## plugin definition, otherwise additional config options are read as part of
+  ## the table
+
+  ## Sensu Event details
+  ##
+  ## Below are the event details to be sent to Sensu.  The main portions of the
+  ## event are the check, entity, and metrics specifications. For more information
+  ## on Sensu events and its components, please visit:
+  ## - Events - https://docs.sensu.io/sensu-go/latest/reference/events
+  ## - Checks -  https://docs.sensu.io/sensu-go/latest/reference/checks
+  ## - Entities - https://docs.sensu.io/sensu-go/latest/reference/entities
+  ## - Metrics - https://docs.sensu.io/sensu-go/latest/reference/events#metrics
+  ##
+  ## Check specification
+  ## The check name is the name to give the Sensu check associated with the event
+  ## created. This maps to check.metadata.name in the event.
+  [outputs.sensu.check]
+    name = "telegraf"
+
+  ## Entity specification
+  ## Configure the entity name and namespace, if necessary. This will be part of
+  ## the entity.metadata in the event.
+  ##
+  ## NOTE: if the output plugin is configured to send events to a
+  ## backend_api_url and entity_name is not set, the value returned by
+  ## os.Hostname() will be used; if the output plugin is configured to send
+  ## events to an agent_api_url, entity_name and entity_namespace are not used.
+  # [outputs.sensu.entity]
+  #   name = "server-01"
+  #   namespace = "default"
+
+  ## Metrics specification
+  ## Configure the tags for the metrics that are sent as part of the Sensu event
+  # [outputs.sensu.tags]
+  #   source = "telegraf"
+
+  ## Configure the handler(s) for processing the provided metrics
+  # [outputs.sensu.metrics]
+  #   handlers = ["influxdb","elasticsearch"]
+```
diff --git a/content/telegraf/v1/output-plugins/signalfx/_index.md b/content/telegraf/v1/output-plugins/signalfx/_index.md
new file mode 100644
index 000000000..1e7c48f4d
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/signalfx/_index.md
@@ -0,0 +1,56 @@
+---
+description: "Telegraf plugin for sending metrics to SignalFx"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: SignalFx
+    identifier: output-signalfx
+tags: [SignalFx, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# SignalFx Output Plugin
+
+The SignalFx output plugin sends metrics to [SignalFx](https://docs.signalfx.com/en/latest/).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `access_token` option.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Send metrics and events to SignalFx
+[[outputs.signalfx]]
+  ## SignalFx Org Access Token
+  access_token = "my-secret-token"
+
+  ## The SignalFx realm that your organization resides in
+  signalfx_realm = "us9"  # Required if ingest_url is not set
+
+  ## You can optionally provide a custom ingest url instead of the
+  ## signalfx_realm option above if you are using a gateway or proxy
+  ## instance.  This option takes precedence over signalfx_realm.
+  ingest_url = "https://my-custom-ingest/"
+
+  ## Event typed metrics are omitted by default,
+  ## If you require an event typed metric you must specify the
+  ## metric name in the following list.
+  included_event_names = ["plugin.metric_name"]
+```
diff --git a/content/telegraf/v1/output-plugins/socket_writer/_index.md b/content/telegraf/v1/output-plugins/socket_writer/_index.md
new file mode 100644
index 000000000..4519d71ae
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/socket_writer/_index.md
@@ -0,0 +1,71 @@
+---
+description: "Telegraf plugin for sending metrics to Socket Writer"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Socket Writer
+    identifier: output-socket_writer
+tags: [Socket Writer, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Socket Writer Output Plugin
+
+The socket writer plugin can write to a UDP, TCP, or Unix domain socket.
+
+It can output data in any of the [supported output formats](/telegraf/v1/data_formats/output).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Generic socket writer capable of handling multiple socket types.
+[[outputs.socket_writer]]
+  ## URL to connect to
+  # address = "tcp://127.0.0.1:8094"
+  # address = "tcp://example.com:http"
+  # address = "tcp4://127.0.0.1:8094"
+  # address = "tcp6://127.0.0.1:8094"
+  # address = "tcp6://[2001:db8::1]:8094"
+  # address = "udp://127.0.0.1:8094"
+  # address = "udp4://127.0.0.1:8094"
+  # address = "udp6://127.0.0.1:8094"
+  # address = "unix:///tmp/telegraf.sock"
+  # address = "unixgram:///tmp/telegraf.sock"
+  # address = "vsock://cid:port"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## Period between keep alive probes.
+  ## Only applies to TCP sockets.
+  ## 0 disables keep alive probes.
+  ## Defaults to the OS configuration.
+  # keep_alive_period = "5m"
+
+  ## Content encoding for message payloads, can be set to "gzip" or to
+  ## "identity" to apply no encoding.
+  ##
+  # content_encoding = "identity"
+
+  ## Data format to generate.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
+  # data_format = "influx"
+```
diff --git a/content/telegraf/v1/output-plugins/sql/_index.md b/content/telegraf/v1/output-plugins/sql/_index.md
new file mode 100644
index 000000000..53c3e23a3
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/sql/_index.md
@@ -0,0 +1,232 @@
+---
+description: "Telegraf plugin for sending metrics to SQL"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: SQL
+    identifier: output-sql
+tags: [SQL, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# SQL Output Plugin
+
+The SQL output plugin saves Telegraf metric data to an SQL database.
+
+The plugin uses a simple, hard-coded database schema. There is a table for each
+metric type and the table name is the metric name. There is a column per field
+and a column per tag. There is an optional column for the metric timestamp.
+
+A row is written for every input metric. This means multiple metrics are never
+merged into a single row, even if they have the same metric name, tags, and
+timestamp.
+
+The plugin uses Golang's generic "database/sql" interface and third party
+drivers. See the driver-specific section below for a list of supported drivers
+and details. Additional drivers may be added in future Telegraf releases.
+
+## Getting started
+
+To use the plugin, set the driver setting to the driver name appropriate for
+your database. Then set the data source name (DSN). The format of the DSN varies
+by driver but often includes a username, password, the database instance to use,
+and the hostname of the database server. The user account must have privileges
+to insert rows and create tables.
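+
+For example, a PostgreSQL connection via the `pgx` driver can be configured in
+Postgres URL form (the credentials, host, and database name below are
+placeholders):
+
+```toml
+[[outputs.sql]]
+  driver = "pgx"
+  data_source_name = "postgres://telegraf:mypassword@localhost:5432/telegraf"
+```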
+
+## Generated SQL
+
+The plugin generates simple ANSI/ISO SQL that is likely to work on any DBMS. It
+doesn't use language features that are specific to a particular DBMS. If you
+want to use a feature that is specific to a particular DBMS, you may be able to
+set it up manually outside of this plugin or through the init_sql setting.
+
+The insert statements generated by the plugin use placeholder parameters. Most
+database drivers use question marks as placeholders, but Postgres uses indexed
+dollar signs. The plugin chooses the placeholder style appropriate to the
+selected driver.
+
+Because input plugins can emit metrics with varying sets of tags and fields,
+the number of columns to insert for a given metric may differ between rows.
+Since a table is created based on the tags and fields present in an input
+metric, it's possible the created table won't contain all the columns later
+metrics require. To avoid this scenario, you may need to initialize the schema
+yourself.
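+
+For a metric named `cpu` with a `host` tag and a `usage_idle` field, the
+generated statements follow roughly this pattern (a sketch using
+Postgres-style `$n` placeholders; exact quoting and column types vary by
+driver):
+
+```sql
+CREATE TABLE "cpu" ("timestamp" TIMESTAMP, "host" TEXT, "usage_idle" FLOAT);
+INSERT INTO "cpu" ("timestamp", "host", "usage_idle") VALUES ($1, $2, $3);
+```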
+
+## Advanced options
+
+When the plugin first connects it runs SQL from the init_sql setting, allowing
+you to perform custom initialization for the connection.
+
+Before inserting a row, the plugin checks whether the table exists. If it
+doesn't exist, the plugin creates the table. The existence check and the table
+creation statements can be changed through template settings. The template
+settings allow you to have the plugin create customized tables or skip table
+creation entirely by setting the check template to any query that executes
+without error, such as "select 1".
+
+The name of the timestamp column is "timestamp" but it can be changed with the
+timestamp\_column setting. The timestamp column can be completely disabled by
+setting it to "".
+
+By changing the table creation template, it's possible with some databases to
+save a row insertion timestamp. You can add an additional column with a default
+value to the template, such as `CREATE TABLE {TABLE}(insertion_timestamp
+TIMESTAMP DEFAULT CURRENT_TIMESTAMP, {COLUMNS})`.
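+
+In configuration form, that template might look like this (a sketch; support
+for `DEFAULT CURRENT_TIMESTAMP` depends on your DBMS):
+
+```toml
+[[outputs.sql]]
+  driver = "pgx"
+  data_source_name = "postgres://username:password@localhost:5432/telegraf"
+  table_template = "CREATE TABLE {TABLE}(insertion_timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP, {COLUMNS})"
+```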
+
+The mapping of metric types to SQL column types can be customized through the
+convert settings.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering, etc.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Save metrics to an SQL Database
+[[outputs.sql]]
+  ## Database driver
+  ## Valid options: mssql (Microsoft SQL Server), mysql (MySQL), pgx (Postgres),
+  ##  sqlite (SQLite3), snowflake (snowflake.com) clickhouse (ClickHouse)
+  # driver = ""
+
+  ## Data source name
+  ## The format of the data source name is different for each database driver.
+  ## See the plugin readme for details.
+  # data_source_name = ""
+
+  ## Timestamp column name
+  # timestamp_column = "timestamp"
+
+  ## Table creation template
+  ## Available template variables:
+  ##  {TABLE} - table name as a quoted identifier
+  ##  {TABLELITERAL} - table name as a quoted string literal
+  ##  {COLUMNS} - column definitions (list of quoted identifiers and types)
+  # table_template = "CREATE TABLE {TABLE}({COLUMNS})"
+
+  ## Table existence check template
+  ## Available template variables:
+  ##  {TABLE} - tablename as a quoted identifier
+  # table_exists_template = "SELECT 1 FROM {TABLE} LIMIT 1"
+
+  ## Initialization SQL
+  # init_sql = ""
+
+  ## Maximum amount of time a connection may be idle. "0s" means connections are
+  ## never closed due to idle time.
+  # connection_max_idle_time = "0s"
+
+  ## Maximum amount of time a connection may be reused. "0s" means connections
+  ## are never closed due to age.
+  # connection_max_lifetime = "0s"
+
+  ## Maximum number of connections in the idle connection pool. 0 means unlimited.
+  # connection_max_idle = 2
+
+  ## Maximum number of open connections to the database. 0 means unlimited.
+  # connection_max_open = 0
+
+  ## NOTE: Due to the way TOML is parsed, tables must be at the END of the
+  ## plugin definition, otherwise additional config options are read as part of
+  ## the table
+
+  ## Metric type to SQL type conversion
+  ## The values on the left are the data types Telegraf has and the values on
+  ## the right are the data types Telegraf will use when sending to a database.
+  ##
+  ## The database values used must be data types the destination database
+  ## understands. It is up to the user to ensure that the selected data type is
+  ## available in the database they are using. Refer to your database
+  ## documentation for what data types are available and supported.
+  #[outputs.sql.convert]
+  #  integer              = "INT"
+  #  real                 = "DOUBLE"
+  #  text                 = "TEXT"
+  #  timestamp            = "TIMESTAMP"
+  #  defaultvalue         = "TEXT"
+  #  unsigned             = "UNSIGNED"
+  #  bool                 = "BOOL"
+  #  ## This setting controls the behavior of the unsigned value. By default the
+  #  ## setting will take the integer value and append the unsigned value to it. The other
+  #  ## option is "literal", which will use the actual value the user provides to
+  #  ## the unsigned option. This is useful for a database like ClickHouse where
+  #  ## the unsigned value should use a value like "uint64".
+  #  # conversion_style = "unsigned_suffix"
+```
+
+## Driver-specific information
+
+### go-sql-driver/mysql
+
+MySQL default quoting differs from standard ANSI/ISO SQL quoting. You must use
+MySQL's ANSI\_QUOTES mode with this plugin. You can enable this mode by using
+the setting `init_sql = "SET sql_mode='ANSI_QUOTES';"` or through a command-line
+option when running MySQL. See MySQL's docs for [details on
+ANSI_QUOTES](https://dev.mysql.com/doc/refman/8.0/en/sql-mode.html#sqlmode_ansi_quotes) and [how to set the SQL mode](https://dev.mysql.com/doc/refman/8.0/en/sql-mode.html#sql-mode-setting).
+
+You can use a DSN of the format "username:password@tcp(host:port)/dbname". See
+the [driver docs](https://github.com/go-sql-driver/mysql) for details.
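+
+Putting the two together, a minimal MySQL configuration might look like this
+(host, credentials, and database name are placeholders):
+
+```toml
+[[outputs.sql]]
+  driver = "mysql"
+  data_source_name = "telegraf:secret@tcp(localhost:3306)/metrics"
+  ## Required so the plugin's ANSI-quoted identifiers are accepted by MySQL.
+  init_sql = "SET sql_mode='ANSI_QUOTES';"
+```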
+
+### jackc/pgx
+
+You can use a DSN of the format
+"postgres://username:password@host:port/dbname". See the [driver
+docs](https://github.com/jackc/pgx) for more details.
+
+### modernc.org/sqlite
+
+This driver is not supported on the windows/386, mips, and mips64 platforms.
+
+The DSN is a filename or a URL with the `file:` scheme. See the [driver
+docs](https://modernc.org/sqlite) for details.
+
+### clickhouse
+
+#### DSN
+
+Currently, Telegraf's sql output plugin depends on
+[clickhouse-go v1.5.4](https://github.com/ClickHouse/clickhouse-go/tree/v1.5.4)
+which uses a [different DSN
+format](https://github.com/ClickHouse/clickhouse-go/tree/v1.5.4#dsn) than its
+newer `v2.*` version.
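+
+A v1-style DSN passes connection options as query parameters. The following is
+an illustrative sketch only; consult the clickhouse-go v1 documentation for the
+authoritative format:
+
+```toml
+[[outputs.sql]]
+  driver = "clickhouse"
+  ## Host, credentials, and database here are placeholders.
+  data_source_name = "tcp://localhost:9000?username=default&database=telegraf"
+```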
+
+#### Metric type to SQL type conversion
+
+The following configuration makes the mapping compatible with ClickHouse:
+
+```toml
+  [outputs.sql.convert]
+    conversion_style     = "literal"
+    integer              = "Int64"
+    text                 = "String"
+    timestamp            = "DateTime"
+    defaultvalue         = "String"
+    unsigned             = "UInt64"
+    bool                 = "UInt8"
+    real                 = "Float64"
+```
+
+See [ClickHouse data
+types](https://clickhouse.com/docs/en/sql-reference/data-types/) for more info.
+
+### microsoft/go-mssqldb
+
+Telegraf doesn't have unit tests for go-mssqldb so it should be treated as
+experimental.
+
+### snowflakedb/gosnowflake
+
+Telegraf doesn't have unit tests for gosnowflake so it should be treated as
+experimental.
diff --git a/content/telegraf/v1/output-plugins/stackdriver/_index.md b/content/telegraf/v1/output-plugins/stackdriver/_index.md
new file mode 100644
index 000000000..557af4463
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/stackdriver/_index.md
@@ -0,0 +1,132 @@
+---
+description: "Telegraf plugin for sending metrics to Stackdriver Google Cloud Monitoring"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Stackdriver Google Cloud Monitoring
+    identifier: output-stackdriver
+tags: [Stackdriver Google Cloud Monitoring, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Stackdriver Google Cloud Monitoring Output Plugin
+
+This plugin writes to the [Google Cloud Monitoring API](https://cloud.google.com/monitoring/api/v3/) (formerly
+Stackdriver) and requires [authentication](https://cloud.google.com/docs/authentication/getting-started) with Google Cloud using either a
+service account or user credentials.
+
+This plugin accesses APIs which are [chargeable](https://cloud.google.com/stackdriver/pricing#google-clouds-operations-suite-pricing); you might incur
+costs.
+
+The `project` setting is required and specifies where Stackdriver metrics are
+delivered.
+
+By default, metrics are grouped by the `namespace` variable and metric key,
+for example `custom.googleapis.com/telegraf/system/load5`. However, this is
+not best practice. Setting `metric_name_format = "official"` produces a more
+easily queried format: `metric_type_prefix/[namespace_]name_key/kind`. If the
+global namespace is not set, it is omitted as well.
+
+[Resource type](https://cloud.google.com/monitoring/api/resources) is configured
+by the `resource_type` variable (default `global`).
+
+Additional resource labels can be configured by `resource_labels`. By default
+the required `project_id` label is always set to the `project` variable.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering, etc.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for Google Cloud Stackdriver to send metrics to
+[[outputs.stackdriver]]
+  ## GCP Project
+  project = "erudite-bloom-151019"
+
+  ## The namespace for the metric descriptor
+  ## This is optional and users are encouraged to set the namespace as a
+  ## resource label instead. If omitted it is not included in the metric name.
+  namespace = "telegraf"
+
+  ## Metric Type Prefix
+  ## The DNS name used with the metric type as a prefix.
+  # metric_type_prefix = "custom.googleapis.com"
+
+  ## Metric Name Format
+  ## Specifies the layout of the metric name, choose from:
+  ##  * path: 'metric_type_prefix_namespace_name_key'
+  ##  * official: 'metric_type_prefix/namespace_name_key/kind'
+  # metric_name_format = "path"
+
+  ## Metric Data Type
+  ## By default, telegraf will use whatever type the metric comes in as.
+  ## However, for some use cases, forcing a specific data type may be preferred
+  ## for values:
+  ##   * source: use whatever was passed in
+  ##   * double: preferred datatype to allow queries by PromQL.
+  # metric_data_type = "source"
+
+  ## Tags as resource labels
+  ## Tags defined in this option, when they exist, are added as a resource
+  ## label and not included as a metric label. The values from tags override
+  ## the values defined under the resource_labels config options.
+  # tags_as_resource_label = []
+
+  ## Custom resource type
+  # resource_type = "generic_node"
+
+  ## Override metric type by metric name
+  ## Metric names matching the values here, globbing supported, will have the
+  ## metric type set to the corresponding type.
+  # metric_counter = []
+  # metric_gauge = []
+  # metric_histogram = []
+
+  ## NOTE: Due to the way TOML is parsed, tables must be at the END of the
+  ## plugin definition, otherwise additional config options are read as part of
+  ## the table
+
+  ## Additional resource labels
+  # [outputs.stackdriver.resource_labels]
+  #   node_id = "$HOSTNAME"
+  #   namespace = "myapp"
+  #   location = "eu-north0"
+```
+
+## Restrictions
+
+Stackdriver does not support string values in custom metrics; any string fields
+will not be written.
+
+The Stackdriver API does not allow writing points that are out of order, older
+than 24 hours, or at a resolution greater than one point per minute. Since
+Telegraf writes the newest points first and moves backwards through the
+metric buffer, it may not be possible to write historical data after an
+interruption.
+
+Points collected more frequently than once per minute may need to be aggregated
+before they can be written. Consider using the [basicstats](/telegraf/v1/plugins/#aggregator-basicstats) aggregator to do
+this.
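+
+A minimal sketch of such an aggregator configuration, using a one-minute period
+to match Stackdriver's one-point-per-minute limit:
+
+```toml
+[[aggregators.basicstats]]
+  ## Aggregate over one-minute windows and drop the raw sub-minute points.
+  period = "1m"
+  drop_original = true
+  stats = ["mean"]
+```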
+
+Histograms are supported only for metrics generated by the Prometheus metric
+version 1 parser. The version 2 parser generates sparse metrics that would need
+to be heavily transformed before sending to Stackdriver.
+
+Note that the plugin keeps an in-memory cache of the start times and last
+observed values of all COUNTER metrics in order to comply with the requirements
+of the Stackdriver API. This cache is never garbage collected: if you remove a
+large number of counters from the input side, you may wish to restart Telegraf
+to clear it.
+
diff --git a/content/telegraf/v1/output-plugins/stomp/_index.md b/content/telegraf/v1/output-plugins/stomp/_index.md
new file mode 100644
index 000000000..103ba07a2
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/stomp/_index.md
@@ -0,0 +1,59 @@
+---
+description: "Telegraf plugin for sending metrics to STOMP Producer"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: STOMP Producer
+    identifier: output-stomp
+tags: [STOMP Producer, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# STOMP Producer Output Plugin
+
+This plugin writes to an [ActiveMQ broker](http://activemq.apache.org/) using
+the [STOMP protocol](http://stomp.github.io).
+
+It also supports [Amazon MQ](https://aws.amazon.com/amazon-mq/).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering, etc.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `username` and
+`password` options.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for active mq with stomp protocol to send metrics to
+[[outputs.stomp]]
+  host = "localhost:61613"
+
+  ## Queue name for producer messages
+  queueName = "telegraf"
+
+  ## Username and password if required by the Active MQ server.
+  # username = ""
+  # password = ""
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+
+  ## Data format to output.
+  data_format = "json"
+```
diff --git a/content/telegraf/v1/output-plugins/sumologic/_index.md b/content/telegraf/v1/output-plugins/sumologic/_index.md
new file mode 100644
index 000000000..e1b64cd4a
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/sumologic/_index.md
@@ -0,0 +1,92 @@
+---
+description: "Telegraf plugin for sending metrics to Sumo Logic"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Sumo Logic
+    identifier: output-sumologic
+tags: [Sumo Logic, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Sumo Logic Output Plugin
+
+This plugin sends metrics to [Sumo Logic HTTP Source](https://help.sumologic.com/03Send-Data/Sources/02Sources-for-Hosted-Collectors/HTTP-Source/Upload-Metrics-to-an-HTTP-Source) in HTTP
+messages, encoded using one of the output data formats.
+
+Telegraf minimum version: Telegraf 1.16.0
+
+Currently, metrics can be sent using one of the following data formats
+supported by the Sumo Logic HTTP Source:
+
+* `graphite` - for Content-Type of `application/vnd.sumologic.graphite`
+* `carbon2` - for Content-Type of `application/vnd.sumologic.carbon2`
+* `prometheus` - for Content-Type of `application/vnd.sumologic.prometheus`
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering, etc.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# A plugin that can send metrics to Sumo Logic HTTP metric collector.
+[[outputs.sumologic]]
+  ## Unique URL generated for your HTTP Metrics Source.
+  ## This is the address to send metrics to.
+  # url = "https://events.sumologic.net/receiver/v1/http/<UniqueHTTPCollectorCode>"
+
+  ## Data format to be used for sending metrics.
+  ## This will set the "Content-Type" header accordingly.
+  ## Currently supported formats:
+  ## * graphite - for Content-Type of application/vnd.sumologic.graphite
+  ## * carbon2 - for Content-Type of application/vnd.sumologic.carbon2
+  ## * prometheus - for Content-Type of application/vnd.sumologic.prometheus
+  ##
+  ## More information can be found at:
+  ## https://help.sumologic.com/03Send-Data/Sources/02Sources-for-Hosted-Collectors/HTTP-Source/Upload-Metrics-to-an-HTTP-Source#content-type-headers-for-metrics
+  ##
+  ## NOTE:
+  ## When unset, telegraf will by default use the influx serializer which is currently unsupported
+  ## in HTTP Source.
+  data_format = "carbon2"
+
+  ## Timeout used for HTTP request
+  # timeout = "5s"
+
+  ## Max HTTP request body size in bytes before compression (if applied).
+  ## The default of 1MB is recommended.
+  ## NOTE:
+  ## Bear in mind that with some serializers a metric, even though serialized
+  ## to multiple lines, cannot be split any further, so setting this very low
+  ## might not work as expected.
+  # max_request_body_size = 1000000
+
+  ## Additional, Sumo specific options.
+  ## Full list can be found here:
+  ## https://help.sumologic.com/03Send-Data/Sources/02Sources-for-Hosted-Collectors/HTTP-Source/Upload-Metrics-to-an-HTTP-Source#supported-http-headers
+
+  ## Desired source name.
+  ## Useful if you want to override the source name configured for the source.
+  # source_name = ""
+
+  ## Desired host name.
+  ## Useful if you want to override the source host configured for the source.
+  # source_host = ""
+
+  ## Desired source category.
+  ## Useful if you want to override the source category configured for the source.
+  # source_category = ""
+
+  ## Comma-separated key=value list of dimensions to apply to every metric.
+  ## Custom dimensions will allow you to query your metrics at a more granular level.
+  # dimensions = ""
+```
diff --git a/content/telegraf/v1/output-plugins/syslog/_index.md b/content/telegraf/v1/output-plugins/syslog/_index.md
new file mode 100644
index 000000000..b8c0861e3
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/syslog/_index.md
@@ -0,0 +1,152 @@
+---
+description: "Telegraf plugin for sending metrics to Syslog"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Syslog
+    identifier: output-syslog
+tags: [Syslog, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Syslog Output Plugin
+
+The syslog output plugin transmits syslog messages over
+[UDP](https://tools.ietf.org/html/rfc5426),
+[TCP](https://tools.ietf.org/html/rfc6587), or
+[TLS](https://tools.ietf.org/html/rfc5425), with or without octet-counting
+framing.
+
+Syslog messages are formatted according to
+[RFC 5424](https://tools.ietf.org/html/rfc5424). Per this RFC there are limits
+on field sizes when sending messages; see the [Syslog Message Format](https://datatracker.ietf.org/doc/html/rfc5424#section-6)
+section of the RFC. Messages exceeding these sizes may be silently dropped by a
+strict receiver.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering, etc.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Startup error behavior options <!-- @/docs/includes/startup_error_behavior.md -->
+
+In addition to the plugin-specific and global configuration settings the plugin
+supports options for specifying the behavior when experiencing startup errors
+using the `startup_error_behavior` setting. Available values are:
+
+- `error`:  Telegraf will stop and exit in case of startup errors. This is the
+            default behavior.
+- `ignore`: Telegraf will ignore startup errors for this plugin, disable it,
+            and continue processing all other plugins.
+- `retry`:  Telegraf will try to start the plugin on every gather or write
+            cycle in case of startup errors. The plugin is disabled until
+            the startup succeeds.
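+
+For example, to keep Telegraf running and retry the connection when the syslog
+endpoint is unavailable at startup (a sketch):
+
+```toml
+[[outputs.syslog]]
+  address = "tcp://127.0.0.1:8094"
+  ## Retry plugin startup on each write cycle instead of exiting.
+  startup_error_behavior = "retry"
+```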
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for Syslog server to send metrics to
+[[outputs.syslog]]
+  ## URL to connect to
+  ## ex: address = "tcp://127.0.0.1:8094"
+  ## ex: address = "tcp4://127.0.0.1:8094"
+  ## ex: address = "tcp6://127.0.0.1:8094"
+  ## ex: address = "tcp6://[2001:db8::1]:8094"
+  ## ex: address = "udp://127.0.0.1:8094"
+  ## ex: address = "udp4://127.0.0.1:8094"
+  ## ex: address = "udp6://127.0.0.1:8094"
+  address = "tcp://127.0.0.1:8094"
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## Period between keep alive probes.
+  ## Only applies to TCP sockets.
+  ## 0 disables keep alive probes.
+  ## Defaults to the OS configuration.
+  # keep_alive_period = "5m"
+
+  ## The framing technique with which it is expected that messages are
+  ## transported (default = "octet-counting").  Whether the messages come
+  ## using the octet-counting (RFC5425#section-4.3.1, RFC6587#section-3.4.1),
+  ## or the non-transparent framing technique (RFC6587#section-3.4.2).  Must
+  ## be one of "octet-counting", "non-transparent".
+  # framing = "octet-counting"
+
+  ## The trailer to be expected in case of non-transparent framing (default = "LF").
+  ## Must be one of "LF", or "NUL".
+  # trailer = "LF"
+
+  ## SD-PARAMs settings
+  ## Syslog messages can contain key/value pairs within zero or more
+  ## structured data sections.  For each unrecognized metric tag/field a
+  ## SD-PARAMS is created.
+  ##
+  ## Example:
+  ##   [[outputs.syslog]]
+  ##     sdparam_separator = "_"
+  ##     default_sdid = "default@32473"
+  ##     sdids = ["foo@123", "bar@456"]
+  ##
+  ##   input => xyzzy,x=y foo@123_value=42,bar@456_value2=84,something_else=1
+  ##   output (structured data only) => [foo@123 value=42][bar@456 value2=84][default@32473 something_else=1 x=y]
+
+  ## SD-PARAMs separator between the sdid and tag/field key (default = "_")
+  # sdparam_separator = "_"
+
+  ## Default sdid used for tags/fields that don't contain a prefix defined in
+  ## the explicit sdids setting below. If no default is specified, no SD-PARAMs
+  ## will be used for unrecognized fields.
+  # default_sdid = "default@32473"
+
+  ## List of explicit prefixes to extract from tag/field keys and use as the
+  ## SDID, if they match (see above example for more details):
+  # sdids = ["foo@123", "bar@456"]
+
+  ## Default severity value. Severity and Facility are used to calculate the
+  ## message PRI value (RFC5424#section-6.2.1).  Used when no metric field
+  ## with key "severity_code" is defined.  If unset, 5 (notice) is the default
+  # default_severity_code = 5
+
+  ## Default facility value. Facility and Severity are used to calculate the
+  ## message PRI value (RFC5424#section-6.2.1).  Used when no metric field with
+  ## key "facility_code" is defined.  If unset, 1 (user-level) is the default
+  # default_facility_code = 1
+
+  ## Default APP-NAME value (RFC5424#section-6.2.5)
+  ## Used when no metric tag with key "appname" is defined.
+  ## If unset, "Telegraf" is the default
+  # default_appname = "Telegraf"
+```
+
+## Metric mapping
+
+The output plugin expects syslog metric tags and fields to match up with the
+ones created by the [syslog input plugin](/telegraf/v1/plugins/#input-syslog).
+
+The following table shows the metric tags, fields, and defaults used to format
+syslog messages.
+
+| Syslog field | Metric Tag | Metric Field | Default value |
+| --- | --- | --- | --- |
+| APP-NAME | appname | - | default_appname = "Telegraf" |
+| TIMESTAMP | - | timestamp | Metric's own timestamp |
+| VERSION | - | version | 1 |
+| PRI | - | severity_code + (8 * facility_code)| default_severity_code=5 (notice), default_facility_code=1 (user-level)|
+| HOSTNAME | hostname OR source OR host | - | os.Hostname() |
+| MSGID | - | msgid | Metric name |
+| PROCID | - | procid | - |
+| MSG | - | msg | - |
+
diff --git a/content/telegraf/v1/output-plugins/timestream/_index.md b/content/telegraf/v1/output-plugins/timestream/_index.md
new file mode 100644
index 000000000..27581aa8b
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/timestream/_index.md
@@ -0,0 +1,295 @@
+---
+description: "Telegraf plugin for sending metrics to Timestream"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Timestream
+    identifier: output-timestream
+tags: [Timestream, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Timestream Output Plugin
+
+The Timestream output plugin writes metrics to the [Amazon Timestream](https://aws.amazon.com/timestream/) service.
+
+## Authentication
+
+This plugin uses a credential chain to authenticate with the Timestream API
+endpoint. The plugin attempts to authenticate in the following order:
+
+1. Web identity provider credentials via STS, if `role_arn` and `web_identity_token_file` are specified
+1. Assumed credentials via STS, if the `role_arn` attribute is specified (source credentials are evaluated from the subsequent rules). The `endpoint_url` attribute is used only for the Timestream service; when fetching credentials, the STS global endpoint is used.
+1. Explicit credentials from the `access_key`, `secret_key`, and `token` attributes
+1. Shared profile from the `profile` attribute
+1. Environment variables
+1. Shared credentials file
+1. EC2 instance profile
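+
+For example, a sketch of assuming an IAM role via STS (the ARN, region, and
+database name are placeholders):
+
+```toml
+[[outputs.timestream]]
+  region = "us-east-1"
+  database_name = "telegraf"
+  ## The plugin assumes this role before writing to Timestream.
+  role_arn = "arn:aws:iam::123456789012:role/telegraf-timestream-writer"
+  role_session_name = "telegraf"
+```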
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure ordering, etc.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Configuration for sending metrics to Amazon Timestream.
+[[outputs.timestream]]
+  ## Amazon Region
+  region = "us-east-1"
+
+  ## Amazon Credentials
+  ## Credentials are loaded in the following order:
+  ## 1) Web identity provider credentials via STS if role_arn and web_identity_token_file are specified
+  ## 2) Assumed credentials via STS if role_arn is specified
+  ## 3) explicit credentials from 'access_key' and 'secret_key'
+  ## 4) shared profile from 'profile'
+  ## 5) environment variables
+  ## 6) shared credentials file
+  ## 7) EC2 Instance Profile
+  #access_key = ""
+  #secret_key = ""
+  #token = ""
+  #role_arn = ""
+  #web_identity_token_file = ""
+  #role_session_name = ""
+  #profile = ""
+  #shared_credential_file = ""
+
+  ## Endpoint to make request against, the correct endpoint is automatically
+  ## determined and this option should only be set if you wish to override the
+  ## default.
+  ##   ex: endpoint_url = "http://localhost:8000"
+  # endpoint_url = ""
+
+  ## Timestream database where the metrics will be inserted.
+  ## The database must exist prior to starting Telegraf.
+  database_name = "yourDatabaseNameHere"
+
+  ## Specifies if the plugin should describe the Timestream database upon starting
+  ## to validate that it has the necessary permissions, connectivity, etc., as a safety check.
+  ## If the describe operation fails, the plugin will not start
+  ## and therefore the Telegraf agent will not start.
+  describe_database_on_start = false
+
+  ## Specifies how the data is organized in Timestream.
+  ## Valid values are: single-table, multi-table.
+  ## When mapping_mode is set to single-table, all of the data is stored in a single table.
+  ## When mapping_mode is set to multi-table, the data is organized and stored in multiple tables.
+  ## The default is multi-table.
+  mapping_mode = "multi-table"
+
+  ## Specifies if the plugin should create the table, if the table does not exist.
+  create_table_if_not_exists = true
+
+  ## Specifies the Timestream table magnetic store retention period in days.
+  ## Check Timestream documentation for more details.
+  ## NOTE: This property is valid when create_table_if_not_exists = true.
+  create_table_magnetic_store_retention_period_in_days = 365
+
+  ## Specifies the Timestream table memory store retention period in hours.
+  ## Check Timestream documentation for more details.
+  ## NOTE: This property is valid when create_table_if_not_exists = true.
+  create_table_memory_store_retention_period_in_hours = 24
+
+  ## Specifies how the data is written into Timestream.
+  ## Valid values are: true, false
+  ## When use_multi_measure_records is set to true, all of the tags and fields are stored
+  ## as a single row in a Timestream table.
+  ## When use_multi_measure_record is set to false, Timestream stores each field in a
+  ## separate table row, thereby storing the tags multiple times (once for each field).
+  ## The recommended setting is true.
+  ## The default is false.
+  use_multi_measure_records = false
+
+  ## Specifies the measure_name to use when sending multi-measure records.
+  ## NOTE: This property is valid when use_multi_measure_records=true and mapping_mode=multi-table
+  measure_name_for_multi_measure_records = "telegraf_measure"
+
+  ## Specifies the name of the table to write data into
+  ## NOTE: This property is valid when mapping_mode=single-table.
+  # single_table_name = ""
+
+  ## Specifies the name of dimension when all of the data is being stored in a single table
+  ## and the measurement name is transformed into the dimension value
+  ## (see Mapping data from Influx to Timestream for details)
+  ## NOTE: This property is valid when mapping_mode=single-table.
+  # single_table_dimension_name_for_telegraf_measurement_name = "namespace"
+
+  ## Only valid and optional if create_table_if_not_exists = true
+  ## Specifies the Timestream table tags.
+  ## Check Timestream documentation for more details
+  # create_table_tags = { "foo" = "bar", "environment" = "dev"}
+
+  ## Specify the maximum number of parallel goroutines to ingest/write data
+  ## If not specified, defaults to 1 goroutine
+  max_write_go_routines = 25
+
+  ## Please see README.md to know how line protocol data is mapped to Timestream
+  ##
+```
+
+### Unsigned Integers
+
+Timestream does **not** support unsigned int64 values. uint64 values that are
+less than the maximum signed int64 are written as expected; any larger value is
+capped at the maximum int64 value.
+
+### Batching
+
+Timestream WriteInputRequest.CommonAttributes are used to efficiently write data
+to Timestream.
+
+### Multithreading
+
+A single thread is used to write the data to Timestream, following the general
+plugin design pattern.
+
+### Errors
+
+If the plugin attempts to write a Telegraf field type that Timestream does not
+support, the field is dropped and an error is emitted to the logs.
+
+When a ThrottlingException or InternalServerException is received from
+Timestream, the error is returned to Telegraf, which keeps the metrics in its
+buffer and retries writing them on the next flush.
+
+In case of receiving ResourceNotFoundException:
+
+- If the `create_table_if_not_exists` configuration is set to `true`, the
+  plugin tries to create the appropriate table and, if the creation succeeds,
+  writes the records again.
+- If `create_table_if_not_exists` configuration is set to `false`, the records
+  are dropped, and an error is emitted to the logs.
+
+In case of receiving any other AWS error from Timestream, the records are
+dropped, and an error is emitted to the logs, as retrying such requests isn't
+likely to succeed.
+
+### Logging
+
+Turn on the debug flag in Telegraf to enable detailed logging (including the
+records being written to Timestream).
+
+### Testing
+
+Execute unit tests with:
+
+```shell
+go test -v ./plugins/outputs/timestream/...
+```
+
+### Mapping data from Influx to Timestream
+
+When writing data from Influx to Timestream,
+data is written by default as follows:
+
+ 1. The timestamp is written as the time field.
+ 2. Tags are written as dimensions.
+ 3. Fields are written as measures.
+ 4. Measurements are written as table names.
+
+ For example, consider the following data in line protocol format:
+
+  > weather,location=us-midwest,season=summer temperature=82,humidity=71 1465839830100400200
+  > airquality,location=us-west no2=5,pm25=16 1465839830100400200
+
+where:
+  `weather` and `airquality` are the measurement names,
+  `location` and `season` are tags,
+  `temperature`, `humidity`, `no2`, `pm25` are fields.
+
+When you choose to create a separate table for each measurement and store
+multiple fields in a single table row, the data will be written into
+Timestream as:
+
+  1. The plugin will create 2 tables, namely, weather and airquality (mapping_mode=multi-table).
+  2. The tables may contain multiple fields in a single table row (use_multi_measure_records=true).
+  3. The table weather will contain the following columns and data:
+
+     | time | location | season | measure_name | temperature | humidity |
+     | :--- | :--- | :--- | :--- | :--- | :--- |
+     | 2016-06-13 17:43:50 | us-midwest | summer | `<measure_name_for_multi_measure_records>` | 82 | 71 |
+
+  4. The table airquality will contain the following columns and data:
+
+     | time | location | measure_name | no2 | pm25 |
+     | :--- | :--- | :--- | :--- | :--- |
+     |2016-06-13 17:43:50 | us-west | `<measure_name_for_multi_measure_records>` | 5 | 16 |
+
+  NOTE:
+  `<measure_name_for_multi_measure_records>` represents the actual
+  value of that property.
+
+You can also choose to create a separate table per measurement and store
+each field in a separate row per table. In that case:
+
+  1. The plugin will create 2 tables, namely, weather and airquality (mapping_mode=multi-table).
+  2. Each table row will contain a single field only (use_multi_measure_records=false).
+  3. The table weather will contain the following columns and data:
+
+     | time | location | season | measure_name | measure_value::bigint |
+     | :--- | :--- | :--- | :--- | :--- |
+     | 2016-06-13 17:43:50 | us-midwest | summer | temperature | 82 |
+     | 2016-06-13 17:43:50 | us-midwest | summer | humidity | 71 |
+
+  4. The table airquality will contain the following columns and data:
+
+     | time | location | measure_name | measure_value::bigint |
+     | :--- | :--- | :--- | :--- |
+     | 2016-06-13 17:43:50 | us-west | no2 | 5 |
+     | 2016-06-13 17:43:50 | us-west | pm25 | 16 |
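The per-field row expansion shown above (multi-table mode with `use_multi_measure_records=false`) can be sketched as follows. This is an illustrative model with a simplified record shape, not the plugin's actual Go implementation:

```python
# Hypothetical sketch: expand one line-protocol point into one Timestream
# record per field, using the measurement as the table name and tags as
# dimensions (names and record shape are illustrative).
def to_records(measurement, tags, fields, timestamp):
    return [
        {"table": measurement,
         "time": timestamp,
         "dimensions": dict(tags),
         "measure_name": name,
         "measure_value": value}
        for name, value in fields.items()
    ]
```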
+
+You can also choose to store all the measurements in a single table and
+store all fields in a single table row. In that case:
+
+ 1. The plugin will create a table with the name `<single_table_name>` (mapping_mode=single-table).
+ 2. The table may contain multiple fields in a single table row (use_multi_measure_records=true).
+ 3. The table will contain the following column and data:
+
+    | time | location | season | `<single_table_dimension_name_for_telegraf_measurement_name>`| measure_name | temperature | humidity | no2 | pm25 |
+    | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
+    | 2016-06-13 17:43:50 | us-midwest | summer | weather | `<measure_name_for_multi_measure_records>` | 82 | 71 | null | null |
+    | 2016-06-13 17:43:50 | us-west | null | airquality | `<measure_name_for_multi_measure_records>` | null | null | 5 | 16 |
+
+  NOTE:
+  `<single_table_name>` represents the actual value of that property.
+  `<single_table_dimension_name_for_telegraf_measurement_name>` represents
+  the actual value of that property.
+  `<measure_name_for_multi_measure_records>` represents the actual value of
+  that property.
+
+Furthermore, you can choose to store all the measurements in a single table
+and store each field in a separate table row. In that case:
+
+   1. The plugin will create a table with the name `<single_table_name>` (mapping_mode=single-table).
+   2. Each table row will contain a single field only (use_multi_measure_records=false).
+   3. The table will contain the following column and data:
+
+      | time | location | season | namespace | measure_name | measure_value::bigint |
+      | :--- | :--- | :--- | :--- | :--- | :--- |
+      | 2016-06-13 17:43:50 | us-midwest | summer | weather | temperature | 82 |
+      | 2016-06-13 17:43:50 | us-midwest | summer | weather | humidity | 71 |
+      | 2016-06-13 17:43:50 | us-west | NULL | airquality | no2 | 5 |
+      | 2016-06-13 17:43:50 | us-west | NULL | airquality | pm25 | 16 |
+
+   NOTE:
+   `<single_table_name>` represents the actual value of that property.
+   The `namespace` column name is the value of the
+   `single_table_dimension_name_for_telegraf_measurement_name` property.
+
+### References
+
+- [Amazon Timestream](https://aws.amazon.com/timestream/)
+- [Assumed credentials via STS](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/credentials/stscreds)
+- [Environment Variables](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables)
+- [Shared Credentials](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file)
+- [EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)
diff --git a/content/telegraf/v1/output-plugins/warp10/_index.md b/content/telegraf/v1/output-plugins/warp10/_index.md
new file mode 100644
index 000000000..b464e11fc
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/warp10/_index.md
@@ -0,0 +1,80 @@
+---
+description: "Telegraf plugin for sending metrics to Warp10"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Warp10
+    identifier: output-warp10
+tags: [Warp10, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Warp10 Output Plugin
+
+The `warp10` output plugin writes metrics to [Warp 10](https://www.warp10.io).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `token` option.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+[SECRETSTORE]: ../../../docs/CONFIGURATION.md#secret-store-secrets
+
+## Configuration
+
+```toml @sample.conf
+# Write metrics to Warp 10
+[[outputs.warp10]]
+  # Prefix to add to the measurement.
+  prefix = "telegraf."
+
+  # URL of the Warp 10 server
+  warp_url = "http://localhost:8080"
+
+  # Write token to access your app on warp 10
+  token = "Token"
+
+  # Warp 10 query timeout
+  # timeout = "15s"
+
+  ## Print Warp 10 error body
+  # print_error_body = false
+
+  ## Max string error size
+  # max_string_error_size = 511
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+```
+
+## Output Format
+
+Metrics are converted and sent using the [Geo Time Series](https://www.warp10.io/content/03_Documentation/03_Interacting_with_Warp_10/03_Ingesting_data/02_GTS_input_format) (GTS) input format.
+
+The class name of the reading is produced by combining the value of the
+`prefix` option, the measurement name, and the field key.  A dot (`.`)
+character is used as the joining character.
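The class-name construction can be sketched as follows (illustrative only; the plugin is written in Go, and the sketch assumes the configured `prefix` carries its own trailing separator, as in the sample config's `"telegraf."`):

```python
# Illustrative sketch of the GTS class name described above:
# prefix + measurement + field key, joined with dots.
def gts_class_name(prefix: str, measurement: str, field: str) -> str:
    # prefix already ends with its separator (e.g. "telegraf.")
    return prefix + measurement + "." + field
```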
+
+The GTS form provides support for the Telegraf integer, float, boolean, and
+string types directly.  Unsigned integer fields will be capped to the largest
+64-bit integer (2^63-1) in case of overflow.
+
+Timestamps are sent in microsecond precision.
+
+[Warp 10]: https://www.warp10.io
+[Geo Time Series]: https://www.warp10.io/content/03_Documentation/03_Interacting_with_Warp_10/03_Ingesting_data/02_GTS_input_format
diff --git a/content/telegraf/v1/output-plugins/wavefront/_index.md b/content/telegraf/v1/output-plugins/wavefront/_index.md
new file mode 100644
index 000000000..4d69e0090
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/wavefront/_index.md
@@ -0,0 +1,184 @@
+---
+description: "Telegraf plugin for sending metrics to Wavefront"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Wavefront
+    identifier: output-wavefront
+tags: [Wavefront, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Wavefront Output Plugin
+
+This plugin writes to a [Wavefront](https://www.wavefront.com) instance or a
+Wavefront Proxy instance over HTTP or HTTPS.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `token` option.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+[SECRETSTORE]: ../../../docs/CONFIGURATION.md#secret-store-secrets
+
+## Configuration
+
+```toml @sample.conf
+[[outputs.wavefront]]
+  ## Url for Wavefront API or Wavefront proxy instance.
+  ## Direct Ingestion via Wavefront API requires authentication. See below.
+  url = "https://metrics.wavefront.com"
+
+  ## Maximum number of metrics to send per HTTP request. This value should be higher than the `metric_batch_size`. Default is 10,000. Values higher than 40,000 are not recommended.
+  # http_maximum_batch_size = 10000
+
+  ## prefix for metrics keys
+  # prefix = "my.specific.prefix."
+
+  ## whether to use "value" for name of simple fields. default is false
+  # simple_fields = false
+
+  ## character to use between metric and field name.  default is . (dot)
+  # metric_separator = "."
+
+  ## Convert metric name paths to use metricSeparator character
+  ## When true will convert all _ (underscore) characters in final metric name. default is true
+  # convert_paths = true
+
+  ## Use Strict rules to sanitize metric and tag names from invalid characters
+  ## When enabled forward slash (/) and comma (,) will be accepted
+  # use_strict = false
+
+  ## Use Regex to sanitize metric and tag names from invalid characters
+  ## Regex is more thorough, but significantly slower. default is false
+  # use_regex = false
+
+  ## point tags to use as the source name for Wavefront (if none found, host will be used)
+  # source_override = ["hostname", "address", "agent_host", "node_host"]
+
+  ## whether to convert boolean values to numeric values, with false -> 0.0 and true -> 1.0. default is true
+  # convert_bool = true
+
+  ## Truncate metric tags to a total of 254 characters for the tag name value. Wavefront will reject any
+  ## data point exceeding this limit if not truncated. Defaults to 'false' to provide backwards compatibility.
+  # truncate_tags = false
+
+  ## Flush the internal buffers after each batch. This effectively bypasses the background sending of metrics
+  ## normally done by the Wavefront SDK. This can be used if you are experiencing buffer overruns. The sending
+  ## of metrics will block for a longer time, but this will be handled gracefully by the internal buffering in
+  ## Telegraf.
+  # immediate_flush = true
+
+  ## Send internal metrics (starting with `~sdk.go`) for valid, invalid, and dropped metrics. default is true.
+  # send_internal_metrics = true
+
+  ## Optional TLS Config
+  ## Set to true/false to enforce TLS being enabled/disabled. If not set,
+  ## enable TLS only if any of the other options are specified.
+  # tls_enable =
+  ## Trusted root certificates for server
+  # tls_ca = "/path/to/cafile"
+  ## Used for TLS client certificate authentication
+  # tls_cert = "/path/to/certfile"
+  ## Used for TLS client certificate authentication
+  # tls_key = "/path/to/keyfile"
+  ## Send the specified TLS server name via SNI
+  # tls_server_name = "kubernetes.example.com"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## HTTP Timeout
+  # timeout="10s"
+
+  ## MaxIdleConns controls the maximum number of idle (keep-alive)
+  ## connections across all hosts. Zero means no limit.
+  # max_idle_conn = 0
+
+  ## MaxIdleConnsPerHost, if non-zero, controls the maximum idle
+  ## (keep-alive) connections to keep per-host. If zero,
+  ## DefaultMaxIdleConnsPerHost is used(2).
+  # max_idle_conn_per_host = 2
+
+  ## Idle (keep-alive) connection timeout.
+  ## Maximum amount of time before idle connection is closed.
+  ## Zero means no limit.
+  # idle_conn_timeout = 0
+
+  ## Authentication for Direct Ingestion.
+  ## Direct Ingestion requires one of: `token`,`auth_csp_api_token`, or `auth_csp_client_credentials`
+  ## See https://docs.wavefront.com/csp_getting_started.html to learn more about using CSP credentials with Wavefront.
+  ## Not required if using a Wavefront proxy.
+
+  ## Wavefront API Token Authentication. Ignored if using a Wavefront proxy.
+  ## 1. Click the gear icon at the top right in the Wavefront UI.
+  ## 2. Click your account name (usually your email)
+  ## 3. Click *API access*.
+  # token = "YOUR_TOKEN"
+
+  ## Optional. defaults to "https://console.cloud.vmware.com/"
+  ## Ignored if using a Wavefront proxy or a Wavefront API token.
+  # auth_csp_base_url=https://console.cloud.vmware.com
+
+  ## CSP API Token Authentication for Wavefront. Ignored if using a Wavefront proxy.
+  # auth_csp_api_token=CSP_API_TOKEN_HERE
+
+  ## CSP Client Credentials Authentication Information for Wavefront. Ignored if using a Wavefront proxy.
+  ## See also: https://docs.wavefront.com/csp_getting_started.html#whats-a-server-to-server-app
+  # [outputs.wavefront.auth_csp_client_credentials]
+  #  app_id=CSP_APP_ID_HERE
+  #  app_secret=CSP_APP_SECRET_HERE
+  #  org_id=CSP_ORG_ID_HERE
+```
+
+### Convert Path & Metric Separator
+
+If the `convert_paths` option is true, any `_` in metric and field names is
+converted to the `metric_separator` value.  By default, to ease metrics
+browsing in the Wavefront UI, the `convert_paths` option is true and
+`metric_separator` is `.` (dot).  Default integrations within Wavefront expect
+these values to be set to their defaults; however, when converting from another
+platform it may be desirable to change these defaults.
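The conversion amounts to a simple substitution, sketched below (illustrative; the function name is hypothetical and the plugin itself is Go):

```python
# Sketch of the path conversion described above: with convert_paths enabled,
# underscores in metric and field names become the metric_separator value.
def convert_path(name: str, separator: str = ".") -> str:
    return name.replace("_", separator)
```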
+
+### Use Regex
+
+Most illegal characters in the metric name are automatically converted to `-`.
+The `use_regex` setting can be used to ensure all illegal characters are
+properly handled, but can lead to performance degradation.
+
+### Source Override
+
+Often when collecting metrics from another system, you want to use the target
+system as the source, not the one running Telegraf.  Many Telegraf plugins will
+identify the target source with a tag. The tag name can vary for different
+plugins. The `source_override` option uses the value of the first listed tag
+that is found on the metric; the tag names are checked in the order listed, and
+once a match is found the remaining tags are not checked. If none of the
+specified tags are found, the default host tag is used to identify the source
+of the metric.
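The lookup order can be sketched as follows (an illustrative model of the behavior described above, not the plugin's Go source):

```python
# Sketch of the source_override lookup: the first listed tag present on the
# metric wins; otherwise fall back to the host tag.
def pick_source(tags: dict, source_override: list) -> str:
    for key in source_override:
        if key in tags:
            return tags[key]
    return tags.get("host", "")
```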
+
+### Wavefront Data format
+
+The expected input for Wavefront is specified in the following way:
+
+```text
+<metric> <value> [<timestamp>] <source|host>=<sourceTagValue> [tagk1=tagv1 ...tagkN=tagvN]
+```
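Assembling a line in this format can be sketched as follows (illustrative; tag quoting and escaping rules are omitted for brevity):

```python
# Sketch assembling one Wavefront data line in the format shown above:
# <metric> <value> [<timestamp>] source=<sourceTagValue> [tagk=tagv ...]
def wavefront_line(metric, value, timestamp, source, tags):
    parts = [metric, str(value), str(timestamp), f"source={source}"]
    parts += [f"{k}={v}" for k, v in sorted(tags.items())]
    return " ".join(parts)
```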
+
+More information about the Wavefront data format is available
+[here](https://community.wavefront.com/docs/DOC-1031).
+
+### Allowed values for metrics
+
+Wavefront allows `integers` and `floats` as input values.  By default it also
+maps `bool` values to numeric, false -> 0.0, true -> 1.0.  To map `strings` use
+the enum processor plugin.
diff --git a/content/telegraf/v1/output-plugins/websocket/_index.md b/content/telegraf/v1/output-plugins/websocket/_index.md
new file mode 100644
index 000000000..bf74fd815
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/websocket/_index.md
@@ -0,0 +1,84 @@
+---
+description: "Telegraf plugin for sending metrics to Websocket"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Websocket
+    identifier: output-websocket
+tags: [Websocket, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Websocket Output Plugin
+
+This plugin can write to a WebSocket endpoint.
+
+It can output data in any of the [supported output formats](/telegraf/v1/data_formats/output).
+
+[formats]: ../../../docs/DATA_FORMATS_OUTPUT.md
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `headers` option.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+[SECRETSTORE]: ../../../docs/CONFIGURATION.md#secret-store-secrets
+
+## Configuration
+
+```toml @sample.conf
+# A plugin that can transmit metrics over WebSocket.
+[[outputs.websocket]]
+  ## URL is the address to send metrics to. Make sure ws or wss scheme is used.
+  url = "ws://127.0.0.1:3000/telegraf"
+
+  ## Timeouts (make sure read_timeout is larger than server ping interval or set to zero).
+  # connect_timeout = "30s"
+  # write_timeout = "30s"
+  # read_timeout = "30s"
+
+  ## Optionally turn on using text data frames (binary by default).
+  # use_text_frames = false
+
+  ## Optional TLS Config
+  # tls_ca = "/etc/telegraf/ca.pem"
+  # tls_cert = "/etc/telegraf/cert.pem"
+  # tls_key = "/etc/telegraf/key.pem"
+  ## Use TLS but skip chain & host verification
+  # insecure_skip_verify = false
+
+  ## Optional SOCKS5 proxy to use
+  # socks5_enabled = true
+  # socks5_address = "127.0.0.1:1080"
+  # socks5_username = "alice"
+  # socks5_password = "pass123"
+
+  ## Optional HTTP proxy to use
+  # use_system_proxy = false
+  # http_proxy_url = "http://localhost:8888"
+
+  ## Data format to output.
+  ## Each data format has it's own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
+  # data_format = "influx"
+
+  ## NOTE: Due to the way TOML is parsed, tables must be at the END of the
+  ## plugin definition, otherwise additional config options are read as part of
+  ## the table
+
+  ## Additional HTTP Upgrade headers
+  # [outputs.websocket.headers]
+  #   Authorization = "Bearer <TOKEN>"
+```
diff --git a/content/telegraf/v1/output-plugins/yandex_cloud_monitoring/_index.md b/content/telegraf/v1/output-plugins/yandex_cloud_monitoring/_index.md
new file mode 100644
index 000000000..ea79b2a73
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/yandex_cloud_monitoring/_index.md
@@ -0,0 +1,49 @@
+---
+description: "Telegraf plugin for sending metrics to Yandex Cloud Monitoring"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Yandex Cloud Monitoring
+    identifier: output-yandex_cloud_monitoring
+tags: [Yandex Cloud Monitoring, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Yandex Cloud Monitoring Output Plugin
+
+This plugin will send custom metrics to [Yandex Cloud
+Monitoring](https://cloud.yandex.com/services/monitoring).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Send aggregated metrics to Yandex.Cloud Monitoring
+[[outputs.yandex_cloud_monitoring]]
+  ## Timeout for HTTP writes.
+  # timeout = "20s"
+
+  ## Yandex.Cloud monitoring API endpoint. Normally should not be changed
+  # endpoint_url = "https://monitoring.api.cloud.yandex.net/monitoring/v2/data/write"
+
+  ## All user metrics should be sent with "custom" service specified. Normally should not be changed
+  # service = "custom"
+```
+
+### Authentication
+
+This plugin currently supports only YC.Compute metadata-based authentication.
+
+When the plugin runs inside a YC.Compute instance, it takes the IAM token and
+Folder ID from the instance metadata.
+
+Other authentication methods will be added later.
diff --git a/content/telegraf/v1/output-plugins/zabbix/_index.md b/content/telegraf/v1/output-plugins/zabbix/_index.md
new file mode 100644
index 000000000..2fe8ec802
--- /dev/null
+++ b/content/telegraf/v1/output-plugins/zabbix/_index.md
@@ -0,0 +1,425 @@
+---
+description: "Telegraf plugin for sending metrics to Zabbix"
+menu:
+  telegraf_v1_ref:
+    parent: output_plugins_reference
+    name: Zabbix
+    identifier: output-zabbix
+tags: [Zabbix, "output-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Zabbix Output Plugin
+
+This plugin sends metrics to [Zabbix](https://www.zabbix.com/) via
+[traps](https://www.zabbix.com/documentation/current/en/manual/appendix/items/trapper).
+
+It has been tested with versions
+[3.0](https://www.zabbix.com/documentation/3.0/en/manual/appendix/items/trapper),
+[4.0](https://www.zabbix.com/documentation/4.0/en/manual/appendix/items/trapper),
+and
+[6.0](https://www.zabbix.com/documentation/6.0/en/manual/appendix/items/trapper).
+
+[traps]: https://www.zabbix.com/documentation/current/en/manual/appendix/items/trapper
+
+It should work with newer versions as long as Zabbix does not change the
+protocol.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Send metrics to Zabbix
+[[outputs.zabbix]]
+  ## Address and (optional) port of the Zabbix server
+  address = "zabbix.example.com:10051"
+
+  ## Send metrics as type "Zabbix agent (active)"
+  # agent_active = false
+
+  ## Add prefix to all keys sent to Zabbix.
+  # key_prefix = "telegraf."
+
+  ## Name of the tag that contains the host name. Used to set the host in Zabbix.
+  ## If the tag is not found, use the hostname of the system running Telegraf.
+  # host_tag = "host"
+
+  ## Skip measurement prefix to all keys sent to Zabbix.
+  # skip_measurement_prefix = false
+
+  ## This field will be sent as HostMetadata to Zabbix Server to autoregister the host.
+  ## To enable this feature, this option must be set to a value other than "".
+  # autoregister = ""
+
+  ## Interval to resend auto-registration data to Zabbix.
+  ## Only applies if autoregister feature is enabled.
+  ## This value is a lower limit, the actual resend should be triggered by the next flush interval.
+  # autoregister_resend_interval = "30m"
+
+  ## Interval to send LLD data to Zabbix.
+  ## This value is a lower limit, the actual resend should be triggered by the next flush interval.
+  # lld_send_interval = "10m"
+
+  ## Interval to delete stored LLD known data and start capturing it again.
+  ## This value is a lower limit, the actual resend should be triggered by the next flush interval.
+  # lld_clear_interval = "1h"
+```
+
+### agent_active
+
+The `request` value in the package sent to Zabbix should be different if the
+items configured in Zabbix are [Zabbix trapper](https://www.zabbix.com/documentation/6.4/en/manual/config/items/itemtypes/trapper?hl=Trapper) or
+[Zabbix agent (active)](https://www.zabbix.com/documentation/6.4/en/manual/config/items/itemtypes/zabbix_agent).
+
+`agent_active = false` will send data as _sender data_, expecting trapper items.
+
+`agent_active = true` will send data as _agent data_, expecting active Zabbix
+agent items.
+
+[zabbixtrapper]: https://www.zabbix.com/documentation/6.4/en/manual/config/items/itemtypes/trapper?hl=Trapper
+[zabbixagentactive]: https://www.zabbix.com/documentation/6.4/en/manual/config/items/itemtypes/zabbix_agent
+
+### key_prefix
+
+We can set a prefix that should be added to all Zabbix keys.
+
+This is configurable with the option `key_prefix`, set by default to
+`telegraf.`.
+
+Example of how the configuration `key_prefix = "telegraf."` generates the
+Zabbix keys given a Telegraf metric:
+
+```diff
+- measurement,host=hostname valueA=0,valueB=1
++ telegraf.measurement.valueA
++ telegraf.measurement.valueB
+```
+
+### skip_measurement_prefix
+
+We can skip the measurement prefix added to all Zabbix keys.
+
+Example with `skip_measurement_prefix = true` and `key_prefix = "telegraf."`:
+
+```diff
+- measurement,host=hostname valueA=0,valueB=1
++ telegraf.valueA
++ telegraf.valueB
+```
+
+Example with `skip_measurement_prefix = true` and `key_prefix = ""`:
+
+```diff
+- measurement,host=hostname valueA=0,valueB=1
++ valueA
++ valueB
+```
+
+### autoregister
+
+If this field is active, Telegraf will send an
+[autoregister request](https://www.zabbix.com/documentation/current/en/manual/discovery/auto_registration?hl=autoregistration) to Zabbix, using the content of
+this field as the [HostMetadata](https://www.zabbix.com/documentation/current/en/manual/discovery/auto_registration?hl=autoregistration#using-host-metadata).
+
+One request is sent for each of the different values seen by Telegraf for the
+`host` tag.
+
+[autoregisterrequest]: https://www.zabbix.com/documentation/current/en/manual/discovery/auto_registration?hl=autoregistration
+[hostmetadata]: https://www.zabbix.com/documentation/current/en/manual/discovery/auto_registration?hl=autoregistration#using-host-metadata
+
+### autoregister_resend_interval
+
+If `autoregister` is defined, this field sets the interval at which
+autoregister requests are resent to Zabbix.
+
+The [telegraf interval format](/telegraf/v1/configuration/#intervals) should be used.
+
+The actual send of the autoregister request will happen in the next output flush
+after this interval has been surpassed.
+
+[intervals_format]: ../../../docs/CONFIGURATION.md#intervals
+
+### lld_send_interval
+
+To reduce the number of LLD requests sent to Zabbix (LLD processing is
+[expensive](https://www.zabbix.com/documentation/4.2/en/manual/introduction/whatsnew420#:~:text=Daemons-,Separate%20processing%20for%20low%2Dlevel%20discovery,-Processing%20low%2Dlevel)), this plugin will send only one per
+`lld_send_interval`.
+
+When Telegraf is started, this plugin starts collecting the information needed
+to generate these LLD packets (measurements, tag keys, and values).
+
+Once this interval is surpassed, the next flush of this plugin will add the
+packet with the LLD data.
+
+In the next interval, only new or modified LLDs will be sent.
+
+[lldexpensive]: https://www.zabbix.com/documentation/4.2/en/manual/introduction/whatsnew420#:~:text=Daemons-,Separate%20processing%20for%20low%2Dlevel%20discovery,-Processing%20low%2Dlevel
+
+### lld_clear_interval
+
+When this interval is surpassed, the next flush will clear all the LLD data
+collected.
+
+This allows this plugin to forget about old data and resend LLDs to Zabbix, in
+case the host has new discovery rules or the packet was lost.
+
+If we have `flush_interval = "1m"`, `lld_send_interval = "10m"` and
+`lld_clear_interval = "1h"` and Telegraf is started at 00:00, the first LLD will
+be sent at 00:10. At 01:00 the LLD data will be deleted and at 01:10 LLD data
+will be resent.
+
+## Trap format
+
+For each new metric generated by Telegraf, this output plugin will send one
+trap for each field.
+
+Given this Telegraf metric:
+
+```text
+measurement,host=hostname valueA=0,valueB=1
+```
+
+It will generate these Zabbix metrics:
+
+```json
+{"host": "hostname", "key": "telegraf.measurement.valueA", "value": "0"}
+{"host": "hostname", "key": "telegraf.measurement.valueB", "value": "1"}
+```
+
+If the metric has tags (aside from `host`), they will be added in alphabetical
+order using the format for LLD metrics:
+
+```text
+measurement,host=hostname,tagA=keyA,tagB=keyB valueA=0,valueB=1
+```
+
+Zabbix generated metrics:
+
+```json
+{"host": "hostname", "key": "telegraf.measurement.valueA[keyA,keyB]", "value": "0"}
+{"host": "hostname", "key": "telegraf.measurement.valueB[keyA,keyB]", "value": "1"}
+```
+
+This order is based on the tag keys, not the tag values, so, for example, this
+Telegraf metric:
+
+```text
+measurement,host=hostname,aaaTag=999,zzzTag=111 value=0
+```
+
+Will generate this Zabbix metric:
+
+```json
+{"host": "hostname", "key": "telegraf.measurement.value[999,111]", "value": "0"}
+```
+
+## Zabbix low-level discovery
+
+Zabbix needs an `item` created before receiving any metric. In some cases we do
+not know in advance what we are going to send; for example, the name of a
+container whose CPU and memory consumption we want to report.
+
+For this case Zabbix provides [low-level discovery](https://www.zabbix.com/documentation/current/manual/discovery/low_level_discovery), which allows creating
+new items dynamically based on the parameters sent in the trap.
+
+As explained previously, this output plugin will format the Zabbix key using
+the tags seen in the Telegraf metric following the LLD format.
+
+To create those _discovered items_ this plugin uses the same mechanism as the
+Zabbix agent, collecting information about which tags have been seen for each
+measurement and periodically sending a request to a discovery rule with the
+collected data.
+
+Keep in mind that Zabbix will discard metrics in this category until the
+low-level discovery (LLD) data has been sent. Sending LLD data to Zabbix is a
+heavyweight process and is only done at the interval set by the
+`lld_send_interval` setting.
+
+[lld]: https://www.zabbix.com/documentation/current/manual/discovery/low_level_discovery
+
+### Design
+
+To explain how everything interconnects we will use an example with the
+`net_response` input:
+
+```toml
+[[inputs.net_response]]
+  protocol = "tcp"
+  address = "example.com:80"
+```
+
+This input will generate this metric:
+
+```text
+$ telegraf -config example.conf -test
+* Plugin: inputs.net_response, Collection 1
+> net_response,server=example.com,port=80,protocol=tcp,host=myhost result_type="success",response_time=0.091026869 1522741063000000000
+```
+
+Here we have four tags: `server`, `port`, `protocol` and `host` (the latter is
+assumed to always be present and is treated differently).
+
+The values the other three parameters could take are unknown to Zabbix, so we
+cannot create trapper items in Zabbix to receive those values (at least without
+mixing that metric with another `net_response` metric with different tags).
+
+To solve this problem we use a discovery rule in Zabbix that receives the
+different groups of tag values and creates the traps to gather the metrics.
+
+This plugin knows about three tags (excluding host) for the input
+`net_response`, therefore it will generate this new Telegraf metric:
+
+```text
+lld,host=myhost net_response.port.protocol.server="{\"data\":[{\"{#PORT}\":\"80\",\"{#PROTOCOL}\":\"tcp\",\"{#SERVER}\":\"example.com\"}]}"
+```
+
+Once sent, the final package will be:
+
+```json
+{
+  "request":"sender data",
+  "data":[
+    {
+      "host":"myhost",
+      "key":"telegraf.lld.net_response.port.protocol.server",
+      "value":"{\"data\":[{\"{#PORT}\":\"80\",\"{#PROTOCOL}\":\"tcp\",\"{#SERVER}\":\"example.com\"}]}",
+      "clock":1519043805
+    }
+  ],
+  "clock":1519043805
+}
+```
+
+The Zabbix key is generated by joining `lld`, the input name and the tag keys,
+sorted alphabetically.
+Some inputs could use different groups of tags for different fields; that is
+why the tags are added to the key, to allow different discovery rules for the
+same input.
+
+The tag names used in `value` are changed to uppercase to match the Zabbix
+macro format.
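+
+A sketch of how the discovery-rule key is assembled (illustrative only;
+`lldKey` is a hypothetical helper, not the plugin's code):
+
+```go
+package main
+
+import (
+	"fmt"
+	"sort"
+	"strings"
+)
+
+// lldKey sketches the discovery-rule key: the "telegraf" prefix, "lld",
+// the input name, and the alphabetically sorted tag keys (host is
+// excluded), all joined with dots.
+func lldKey(input string, tagKeys []string) string {
+	keys := append([]string(nil), tagKeys...)
+	sort.Strings(keys)
+	return strings.Join(append([]string{"telegraf", "lld", input}, keys...), ".")
+}
+
+func main() {
+	fmt.Println(lldKey("net_response", []string{"server", "port", "protocol"}))
+	// telegraf.lld.net_response.port.protocol.server
+}
+```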
+
+In the Zabbix server we should have a discovery rule associated with that key
+(`telegraf.lld.net_response.port.protocol.server`) and one item prototype for
+each field, in this case `result_type` and `response_time`.
+
+The item prototypes will be Zabbix trappers with the following keys (the data
+type should also match, and some values are better stored as _delta_):
+
+```text
+telegraf.net_response.response_time[{#PORT},{#PROTOCOL},{#SERVER}]
+telegraf.net_response.result_type[{#PORT},{#PROTOCOL},{#SERVER}]
+```
+
+The macros in the item prototype keys should be alphabetically sorted so they
+match the keys generated by this plugin.
+
+With those keys and the example trap, the host `myhost` will have two new items:
+
+```text
+telegraf.net_response.response_time[80,tcp,example.com]
+telegraf.net_response.result_type[80,tcp,example.com]
+```
+
+For each metric, this plugin sends traps to the Zabbix server following the
+same structure (`INPUT.FIELD[tags sorted]`), filling the items created by the
+discovery rule.
+
+In summary:
+
+- We need a discovery rule with the correct key and one item prototype for
+  each field.
+- This plugin generates traps to create items based on the metrics seen in
+  Telegraf.
+- It also sends the traps that fill the newly created items.
+
+### Reducing the number of LLDs
+
+This plugin remembers which LLDs have been sent to Zabbix and avoids generating
+the same metrics again, to avoid the cost of LLD processing in Zabbix.
+
+It will only send LLD data once each `lld_send_interval`.
+
+However, a packet could be lost, or a host could get new discovery rules, so
+each `lld_clear_interval` the plugin forgets the known data and starts
+collecting again.
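+
+The bookkeeping described above can be sketched as follows (a simplified
+illustration with assumed names, not the plugin's code):
+
+```go
+package main
+
+import (
+	"fmt"
+	"time"
+)
+
+// lldCache remembers previously sent LLD payloads so identical data is
+// not resent, and drops the whole cache every clearInterval so lost
+// packets or new discovery rules eventually recover.
+type lldCache struct {
+	sent          map[string]string // discovery key -> last payload sent
+	lastClear     time.Time
+	clearInterval time.Duration
+}
+
+func (c *lldCache) shouldSend(key, payload string, now time.Time) bool {
+	if now.Sub(c.lastClear) >= c.clearInterval {
+		c.sent = map[string]string{}
+		c.lastClear = now
+	}
+	if c.sent[key] == payload {
+		return false // already known to Zabbix, skip the heavy LLD send
+	}
+	c.sent[key] = payload
+	return true
+}
+
+func main() {
+	c := &lldCache{sent: map[string]string{}, lastClear: time.Now(), clearInterval: time.Hour}
+	now := time.Now()
+	fmt.Println(c.shouldSend("telegraf.lld.net_response.port.protocol.server", "{...}", now)) // true
+	fmt.Println(c.shouldSend("telegraf.lld.net_response.port.protocol.server", "{...}", now)) // false
+}
+```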
+
+### Note on inputs configuration
+
+The tags each input exposes should be controlled, because an unexpected tag
+could modify the trap key, which then will not match the trapper defined in
+Zabbix.
+
+For example, in the docker input, each container label is a new tag.
+
+To control this we can add to the input a config like:
+
+```toml
+taginclude = ["host", "container_name"]
+```
+
+This allows only the tags `host` and `container_name` to be used to generate
+the key (losing the information provided in the other tags).
+
+## Examples of metrics converted to traps
+
+### Without tags
+
+```text
+mem,host=myHost available_percent=14.684620843239944,used=14246531072i 1522764428000000000
+```
+
+```json
+{
+  "request":"sender data",
+  "data":[
+    {
+      "host":"myHost",
+      "key":"telegraf.mem.available_percent",
+      "value":"14.684620843239944",
+      "clock":1522764428
+    },
+    {
+      "host":"myHost",
+      "key":"telegraf.mem.used",
+      "value":"14246531072",
+      "clock":1522764428
+    }
+  ]
+}
+```
+
+### With tags
+
+```text
+docker_container_net,host=myHost,container_name=laughing_babbage rx_errors=0i,tx_errors=0i 1522764038000000000
+```
+
+```json
+{
+  "request":"sender data",
+  "data": [
+    {
+      "host":"myHost",
+      "key":"telegraf.docker_container_net.rx_errors[laughing_babbage]",
+      "value":"0",
+      "clock":1522764038
+    },
+    {
+      "host":"myHost",
+      "key":"telegraf.docker_container_net.tx_errors[laughing_babbage]",
+      "value":"0",
+      "clock":1522764038
+    }
+  ]
+}
+```
diff --git a/content/telegraf/v1/processor-plugins/_index.md b/content/telegraf/v1/processor-plugins/_index.md
new file mode 100644
index 000000000..4dd2520d8
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/_index.md
@@ -0,0 +1,15 @@
+---
+title: "Telegraf Processor Plugins"
+description: "Telegraf processor plugins transform individual metrics."
+menu:
+  telegraf_v1_ref:
+    name: Processor plugins
+    identifier: processor_plugins_reference
+    weight: 10
+tags: [processor-plugins]
+---
+
+Telegraf processor plugins transform individual metrics by, for example,
+converting tags, fields, or data types.
+
+{{<children>}}
diff --git a/content/telegraf/v1/processor-plugins/aws_ec2/_index.md b/content/telegraf/v1/processor-plugins/aws_ec2/_index.md
new file mode 100644
index 000000000..4ca0055fa
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/aws_ec2/_index.md
@@ -0,0 +1,138 @@
+---
+description: "Telegraf plugin for transforming metrics using AWS EC2 Metadata"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: AWS EC2 Metadata
+    identifier: processor-aws_ec2
+tags: [AWS EC2 Metadata, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# AWS EC2 Metadata Processor Plugin
+
+The AWS EC2 Metadata processor plugin appends metadata gathered from [AWS IMDS](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html)
+to metrics associated with EC2 instances.
+
+[AWS IMDS]: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, or configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Attach AWS EC2 metadata to metrics
+[[processors.aws_ec2]]
+  ## Instance identity document tags to attach to metrics.
+  ## For more information see:
+  ## https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-identity-documents.html
+  ##
+  ## Available tags:
+  ## * accountId
+  ## * architecture
+  ## * availabilityZone
+  ## * billingProducts
+  ## * imageId
+  ## * instanceId
+  ## * instanceType
+  ## * kernelId
+  ## * pendingTime
+  ## * privateIp
+  ## * ramdiskId
+  ## * region
+  ## * version
+  # imds_tags = []
+
+  ## EC2 instance tags retrieved with DescribeTags action.
+  ## In case tag is empty upon retrieval it's omitted when tagging metrics.
+  ## Note that in order for this to work, role attached to EC2 instance or AWS
+  ## credentials available from the environment must have a policy attached, that
+  ## allows ec2:DescribeTags.
+  ##
+  ## For more information see:
+  ## https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeTags.html
+  # ec2_tags = []
+
+  ## Paths to instance metadata information to attach to the metrics.
+  ## Specify the full path without the base-path e.g. `tags/instance/Name`.
+  ##
+  ## For more information see:
+  ## https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
+  # metadata_paths = []
+
+  ## Allows converting metadata tag names to canonical names representing the
+  ## full path, with slashes ('/') replaced by underscores. By default,
+  ## only the last path element is used to name the tag.
+  # canonical_metadata_tags = false
+
+  ## Timeout for HTTP requests made against the AWS EC2 metadata endpoint.
+  # timeout = "10s"
+
+  ## ordered controls whether or not the metrics need to stay in the same order
+  ## this plugin received them in. If false, this plugin will change the order
+  ## with requests hitting cached results moving through immediately and not
+  ## waiting on slower lookups. This may cause issues for you if you are
+  ## depending on the order of metrics staying the same. If so, set this to true.
+  ## Keeping the metrics ordered may be slightly slower.
+  # ordered = false
+
+  ## max_parallel_calls is the maximum number of AWS API calls to be in flight
+  ## at the same time.
+  ## It's probably best to keep this number fairly low.
+  # max_parallel_calls = 10
+
+  ## cache_ttl determines how long each cached item will remain in the cache before
+  ## it is removed and subsequently needs to be queried for from the AWS API. By
+  ## default, no items are cached.
+  # cache_ttl = "0s"
+
+  ## tag_cache_size determines how many of the values which are found in imds_tags
+  ## or ec2_tags will be kept in memory for faster lookup on successive processing
+  ## of metrics. You may want to adjust this if you have excessively large numbers
+  ## of tags on your EC2 instances, and you are using the ec2_tags field. This
+  ## typically does not need to be changed when using the imds_tags field.
+  # tag_cache_size = 1000
+
+  ## log_cache_stats will emit a log line periodically to stdout with details of
+  ## cache entries, hits, misses, and evacuations since the last time stats were
+  ## emitted. This can be helpful in determining whether caching is being effective
+  ## in your environment. Stats are emitted every 30 seconds. By default, this
+  ## setting is disabled.
+  # log_cache_stats = false
+```
+
+## Example
+
+Append `accountId` and `instanceId` to each metric's tags:
+
+```toml
+[[processors.aws_ec2]]
+  imds_tags = ["accountId", "instanceId"]
+```
+
+```diff
+- cpu,hostname=localhost time_idle=42
++ cpu,hostname=localhost,accountId=123456789,instanceId=i-123456789123 time_idle=42
+```
+
+## Notes
+
+We use a single cache because telegraf's `AddTag` function models this.
+
+A user can specify both EC2 tags and IMDS tags. The items in these lists can,
+technically, be the same. This results in a situation where the EC2 tag's
+value overrides the IMDS tag's value.
+
+Though this is undesirable, it is unavoidable because the `AddTag` function does
+not support this case.
+
+You should avoid using IMDS tags as EC2 tags because the EC2 tags will always
+"win" due to them being processed in this plugin *after* IMDS tags.
diff --git a/content/telegraf/v1/processor-plugins/clone/_index.md b/content/telegraf/v1/processor-plugins/clone/_index.md
new file mode 100644
index 000000000..036608362
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/clone/_index.md
@@ -0,0 +1,52 @@
+---
+description: "Telegraf plugin for transforming metrics using Clone"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Clone
+    identifier: processor-clone
+tags: [Clone, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Clone Processor Plugin
+
+The clone processor plugin creates a copy of each metric passing through it,
+leaving the original metric untouched and allowing modifications to the
+copy.
+
+The modifications allowed are the ones supported by input plugins and
+aggregators:
+
+* name_override
+* name_prefix
+* name_suffix
+* tags
+
+Select the metrics to modify using the standard metric
+filtering.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, or configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Apply metric modifications using override semantics.
+[[processors.clone]]
+  ## All modifications on inputs and aggregators can be overridden:
+  # name_override = "new_name"
+  # name_prefix = "new_name_prefix"
+  # name_suffix = "new_name_suffix"
+
+  ## Tags to be added (all values must be strings)
+  # [processors.clone.tags]
+  #   additional_tag = "tag_value"
+```
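+
+## Example
+
+The following illustrative configuration emits a renamed, tagged copy in
+addition to the original metric (the copy's name and tag here are made up for
+the example):
+
+```toml
+[[processors.clone]]
+  name_suffix = "_copy"
+
+  [processors.clone.tags]
+    copied = "true"
+```
+
+```diff
+- cpu,cpu=cpu0 time_idle=42i
++ cpu,cpu=cpu0 time_idle=42i
++ cpu_copy,copied=true,cpu=cpu0 time_idle=42i
+```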
diff --git a/content/telegraf/v1/processor-plugins/converter/_index.md b/content/telegraf/v1/processor-plugins/converter/_index.md
new file mode 100644
index 000000000..b7c48a8be
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/converter/_index.md
@@ -0,0 +1,146 @@
+---
+description: "Telegraf plugin for transforming metrics using Converter"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Converter
+    identifier: processor-converter
+tags: [Converter, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Converter Processor Plugin
+
+The converter processor is used to change the type of tag or field values.  In
+addition to changing field types it can convert between fields and tags.
+
+Values that cannot be converted are dropped.
+
+**Note:** When converting tags to fields, take care to ensure the series is
+still uniquely identifiable.  Fields with the same series key (measurement +
+tags) will overwrite one another.
+
+**Note on large strings being converted to numeric types:** When converting a
+string value to a numeric type, precision may be lost if the number is too
+large. The largest numeric type this plugin supports is `float64`, and if a
+string 'number' exceeds its size limit, accuracy may be lost.
+
+**Note on multiple measurements or timestamps:** Users can provide multiple
+tags or fields to use as the measurement name or timestamp. However, note that
+the order in the array is not guaranteed!
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, or configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Convert values to another metric value type
+[[processors.converter]]
+  ## Tags to convert
+  ##
+  ## The table key determines the target type, and the array of key-values
+  ## select the keys to convert.  The array may contain globs.
+  ##   <target-type> = [<tag-key>...]
+  [processors.converter.tags]
+    measurement = []
+    string = []
+    integer = []
+    unsigned = []
+    boolean = []
+    float = []
+
+    ## Optional tag to use as metric timestamp
+    # timestamp = []
+
+    ## Format of the timestamp determined by the tag above. This can be any of
+    ## "unix", "unix_ms", "unix_us", "unix_ns", or a valid Golang time format.
+    ## It is required, when using the timestamp option.
+    # timestamp_format = ""
+
+  ## Fields to convert
+  ##
+  ## The table key determines the target type, and the array of key-values
+  ## select the keys to convert.  The array may contain globs.
+  ##   <target-type> = [<field-key>...]
+  [processors.converter.fields]
+    measurement = []
+    tag = []
+    string = []
+    integer = []
+    unsigned = []
+    boolean = []
+    float = []
+
+    ## Optional field to use as metric timestamp
+    # timestamp = []
+
+    ## Format of the timestamp determined by the field above. This can be any
+    ## of "unix", "unix_ms", "unix_us", "unix_ns", or a valid Golang time
+    ## format. It is required, when using the timestamp option.
+    # timestamp_format = ""
+```
+
+### Example
+
+Convert `port` tag to a string field:
+
+```toml
+[[processors.converter]]
+  [processors.converter.tags]
+    string = ["port"]
+```
+
+```diff
+- apache,port=80,server=debian-stretch-apache BusyWorkers=1,BytesPerReq=0
++ apache,server=debian-stretch-apache port="80",BusyWorkers=1,BytesPerReq=0
+```
+
+Convert all `scboard_*` fields to an integer:
+
+```toml
+[[processors.converter]]
+  [processors.converter.fields]
+    integer = ["scboard_*"]
+```
+
+```diff
+- apache scboard_closing=0,scboard_dnslookup=0,scboard_finishing=0,scboard_idle_cleanup=0,scboard_keepalive=0,scboard_logging=0,scboard_open=100,scboard_reading=0,scboard_sending=1,scboard_starting=0,scboard_waiting=49
++ apache scboard_closing=0i,scboard_dnslookup=0i,scboard_finishing=0i,scboard_idle_cleanup=0i,scboard_keepalive=0i,scboard_logging=0i,scboard_open=100i,scboard_reading=0i,scboard_sending=1i,scboard_starting=0i,scboard_waiting=49i
+```
+
+Rename the measurement from a tag value:
+
+```toml
+[[processors.converter]]
+  [processors.converter.tags]
+    measurement = ["topic"]
+```
+
+```diff
+- mqtt_consumer,topic=sensor temp=42
++ sensor temp=42
+```
+
+Set the metric timestamp from a tag:
+
+```toml
+[[processors.converter]]
+  [processors.converter.tags]
+    timestamp = ["time"]
+    timestamp_format = "unix"
+```
+
+```diff
+- metric,time="1677610769" temp=42
++ metric temp=42 1677610769
+```
+
+This is also possible via the fields converter.
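+
+The `unix` tag-to-timestamp conversion above can be sketched as follows (a
+simplified illustration, not the plugin's code; `parseUnixTag` is a
+hypothetical helper):
+
+```go
+package main
+
+import (
+	"fmt"
+	"strconv"
+	"time"
+)
+
+// parseUnixTag sketches the "unix" timestamp_format: the tag value is
+// parsed as seconds since the epoch and becomes the metric timestamp.
+func parseUnixTag(raw string) (time.Time, error) {
+	sec, err := strconv.ParseInt(raw, 10, 64)
+	if err != nil {
+		return time.Time{}, err
+	}
+	return time.Unix(sec, 0).UTC(), nil
+}
+
+func main() {
+	ts, err := parseUnixTag("1677610769")
+	if err != nil {
+		panic(err)
+	}
+	fmt.Println(ts.Unix()) // 1677610769
+}
+```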
diff --git a/content/telegraf/v1/processor-plugins/date/_index.md b/content/telegraf/v1/processor-plugins/date/_index.md
new file mode 100644
index 000000000..a35687fa6
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/date/_index.md
@@ -0,0 +1,80 @@
+---
+description: "Telegraf plugin for transforming metrics using Date"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Date
+    identifier: processor-date
+tags: [Date, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Date Processor Plugin
+
+Use the `date` processor to add the metric timestamp as a human readable tag.
+
+A common use is to add a tag that can be used to group by month or year.
+
+A few example use cases include:
+
+1) consumption data for utilities on per month basis
+2) bandwidth capacity per month
+3) compare energy production or sales on a yearly or monthly basis
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, or configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Dates measurements, tags, and fields that pass through this filter.
+[[processors.date]]
+  ## New tag to create
+  tag_key = "month"
+
+  ## New field to create (cannot set both field_key and tag_key)
+  # field_key = "month"
+
+  ## Date format string, must be a representation of the Go "reference time"
+  ## which is "Mon Jan 2 15:04:05 -0700 MST 2006".
+  date_format = "Jan"
+
+  ## If destination is a field, date format can also be one of
+  ## "unix", "unix_ms", "unix_us", or "unix_ns", which will insert an integer field.
+  # date_format = "unix"
+
+  ## Offset duration added to the date string when writing the new tag.
+  # date_offset = "0s"
+
+  ## Timezone to use when creating the tag or field using a reference time
+  ## string.  This can be set to one of "UTC", "Local", or to a location name
+  ## in the IANA Time Zone database.
+  ##   example: timezone = "America/Los_Angeles"
+  # timezone = "UTC"
+```
+
+### timezone
+
+On Windows, only the `Local` and `UTC` zones are available by default.  To use
+other timezones, set the `ZONEINFO` environment variable to the location of
+[`zoneinfo.zip`][zoneinfo]:
+
+```text
+set ZONEINFO=C:\zoneinfo.zip
+```
+
+## Example
+
+```diff
+- throughput lower=10i,upper=1000i,mean=500i 1560540094000000000
++ throughput,month=Jun lower=10i,upper=1000i,mean=500i 1560540094000000000
+```
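+
+The `month` tag above comes from formatting the metric time with the Go
+reference-time layout; a quick way to verify the mapping:
+
+```go
+package main
+
+import (
+	"fmt"
+	"time"
+)
+
+// The metric timestamp from the example above (nanoseconds since the
+// epoch) formatted with the layout "Jan" yields the month tag "Jun".
+func main() {
+	t := time.Unix(0, 1560540094000000000).UTC()
+	fmt.Println(t.Format("Jan"))
+	// Jun
+}
+```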
+
+[zoneinfo]: https://github.com/golang/go/raw/50bd1c4d4eb4fac8ddeb5f063c099daccfb71b26/lib/time/zoneinfo.zip
diff --git a/content/telegraf/v1/processor-plugins/dedup/_index.md b/content/telegraf/v1/processor-plugins/dedup/_index.md
new file mode 100644
index 000000000..2f6368348
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/dedup/_index.md
@@ -0,0 +1,48 @@
+---
+description: "Telegraf plugin for transforming metrics using Dedup"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Dedup
+    identifier: processor-dedup
+tags: [Dedup, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Dedup Processor Plugin
+
+Filter metrics whose field values are exact repetitions of the previous values.
+This plugin will store its state between runs if the `statefile` option in the
+agent config section is set.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, or configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Filter metrics with repeating field values
+[[processors.dedup]]
+  ## Maximum time to suppress output
+  dedup_interval = "600s"
+```
+
+## Example
+
+```diff
+- cpu,cpu=cpu0 time_idle=42i,time_guest=1i
+- cpu,cpu=cpu0 time_idle=42i,time_guest=2i
+- cpu,cpu=cpu0 time_idle=42i,time_guest=2i
+- cpu,cpu=cpu0 time_idle=44i,time_guest=2i
+- cpu,cpu=cpu0 time_idle=44i,time_guest=2i
++ cpu,cpu=cpu0 time_idle=42i,time_guest=1i
++ cpu,cpu=cpu0 time_idle=42i,time_guest=2i
++ cpu,cpu=cpu0 time_idle=44i,time_guest=2i
+```
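+
+The suppression shown above can be sketched like this (a minimal illustration;
+the real plugin also re-emits a suppressed series once `dedup_interval` has
+elapsed):
+
+```go
+package main
+
+import "fmt"
+
+// dedup reports whether a metric should pass: it is dropped when its
+// field values equal the previously seen values for the same series.
+func dedup(history map[string]string, series, fields string) bool {
+	if history[series] == fields {
+		return false // suppress exact repetition
+	}
+	history[series] = fields
+	return true // pass through and remember
+}
+
+func main() {
+	h := map[string]string{}
+	inputs := []string{
+		"time_idle=42i,time_guest=1i",
+		"time_idle=42i,time_guest=2i",
+		"time_idle=42i,time_guest=2i",
+		"time_idle=44i,time_guest=2i",
+		"time_idle=44i,time_guest=2i",
+	}
+	for _, f := range inputs {
+		fmt.Println(dedup(h, "cpu,cpu=cpu0", f))
+	}
+	// true true false true false
+}
+```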
diff --git a/content/telegraf/v1/processor-plugins/defaults/_index.md b/content/telegraf/v1/processor-plugins/defaults/_index.md
new file mode 100644
index 000000000..883602808
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/defaults/_index.md
@@ -0,0 +1,77 @@
+---
+description: "Telegraf plugin for transforming metrics using Defaults"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Defaults
+    identifier: processor-defaults
+tags: [Defaults, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Defaults Processor Plugin
+
+The _Defaults_ processor allows you to ensure certain fields will always exist
+with a specified default value on your metric(s).
+
+There are three cases where this processor will insert a configured default
+field.
+
+1. The field is nil on the incoming metric
+1. The field is not nil, but its value is an empty string.
+1. The field is not nil, but its value is a string of one or more empty spaces.
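+
+The three cases above can be sketched as one check (illustrative only;
+`withDefault` is a hypothetical helper, not the plugin's code):
+
+```go
+package main
+
+import (
+	"fmt"
+	"strings"
+)
+
+// withDefault sets the default when the field is missing, or is a
+// string that is empty or contains only whitespace.
+func withDefault(fields map[string]interface{}, key string, def interface{}) {
+	v, ok := fields[key]
+	if !ok {
+		fields[key] = def
+		return
+	}
+	if s, isStr := v.(string); isStr && strings.TrimSpace(s) == "" {
+		fields[key] = def
+	}
+}
+
+func main() {
+	fields := map[string]interface{}{"latency": 230, "status_code": "   "}
+	withDefault(fields, "status_code", "N/A")
+	fmt.Println(fields["status_code"]) // N/A
+}
+```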
+
+Telegraf minimum version: Telegraf 1.15.0
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, or configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+## Set default fields on your metric(s) when they are nil or empty
+[[processors.defaults]]
+  ## Ensures a set of fields always exists on your metric(s) with their
+  ## respective default value.
+  ## For any given field pair (key = default), if it's not set, a field
+  ## is set on the metric with the specified default.
+  ##
+  ## A field is considered not set if it is nil on the incoming metric;
+  ## or it is not nil but its value is an empty string or is a string
+  ## of one or more spaces.
+  ##   <target-field> = <value>
+  [processors.defaults.fields]
+    field_1 = "bar"
+    time_idle = 0
+    is_error = true
+```
+
+## Example
+
+Ensure a _status\_code_ field with _N/A_ is inserted in the metric when one is
+not set in the metric by default:
+
+```toml
+[[processors.defaults]]
+  [processors.defaults.fields]
+    status_code = "N/A"
+```
+
+```diff
+- lb,http_method=GET cache_status=HIT,latency=230
++ lb,http_method=GET cache_status=HIT,latency=230,status_code="N/A"
+```
+
+Ensure an empty string gets replaced by a default:
+
+```diff
+- lb,http_method=GET cache_status=HIT,latency=230,status_code=""
++ lb,http_method=GET cache_status=HIT,latency=230,status_code="N/A"
+```
diff --git a/content/telegraf/v1/processor-plugins/enum/_index.md b/content/telegraf/v1/processor-plugins/enum/_index.md
new file mode 100644
index 000000000..713001962
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/enum/_index.md
@@ -0,0 +1,73 @@
+---
+description: "Telegraf plugin for transforming metrics using Enum"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Enum
+    identifier: processor-enum
+tags: [Enum, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Enum Processor Plugin
+
+The Enum processor allows the configuration of value mappings for metric tags
+or fields. The main use case is to rewrite status codes such as _red_, _amber_
+and _green_ with numeric values such as 0, 1, 2. The plugin supports string,
+int, float64 and bool types for the field values. Multiple tags or fields can
+be configured, with separate value mappings for each. Default mapping values
+can be configured to be used for all values that are not contained in
+`value_mappings`. The processor supports explicit configuration of a
+destination tag or field. By default the source tag or field is overwritten.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, or configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Map enum values according to given table.
+[[processors.enum]]
+  [[processors.enum.mapping]]
+    ## Name of the field to map. Globs accepted.
+    field = "status"
+
+    ## Name of the tag to map. Globs accepted.
+    # tag = "status"
+
+    ## Destination tag or field to be used for the mapped value.  By default the
+    ## source tag or field is used, overwriting the original value.
+    dest = "status_code"
+
+    ## Default value to be used for all values not contained in the mapping
+    ## table.  When unset and no match is found, the original field will remain
+    ## unmodified and the destination tag or field will not be created.
+    # default = 0
+
+    ## Table of mappings
+    [processors.enum.mapping.value_mappings]
+      green = 1
+      amber = 2
+      red = 3
+```
+
+## Example
+
+```diff
+- xyzzy status="green" 1502489900000000000
++ xyzzy status="green",status_code=1i 1502489900000000000
+```
+
+With unknown value and no default set:
+
+```diff
+- xyzzy status="black" 1502489900000000000
++ xyzzy status="black" 1502489900000000000
+```
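+
+The mapping behaviour shown in both examples can be sketched as a plain map
+lookup (illustrative only; `mapStatus` is a hypothetical helper):
+
+```go
+package main
+
+import "fmt"
+
+// mapStatus returns the mapped code for a value, and whether a mapping
+// exists. With no default configured, an unknown value is left untouched.
+func mapStatus(value string, mappings map[string]int64) (int64, bool) {
+	code, ok := mappings[value]
+	return code, ok
+}
+
+func main() {
+	mappings := map[string]int64{"green": 1, "amber": 2, "red": 3}
+	if code, ok := mapStatus("green", mappings); ok {
+		fmt.Println(code) // 1
+	}
+	if _, ok := mapStatus("black", mappings); !ok {
+		fmt.Println("no mapping; field left unmodified")
+	}
+}
+```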
diff --git a/content/telegraf/v1/processor-plugins/execd/_index.md b/content/telegraf/v1/processor-plugins/execd/_index.md
new file mode 100644
index 000000000..5f5dbaaeb
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/execd/_index.md
@@ -0,0 +1,151 @@
+---
+description: "Telegraf plugin for transforming metrics using Execd"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Execd
+    identifier: processor-execd
+tags: [Execd, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Execd Processor Plugin
+
+The `execd` processor plugin runs an external program as a separate process,
+pipes metrics into the process's STDIN, and reads processed metrics from its
+STDOUT. The program must accept InfluxDB line protocol on standard input
+(STDIN) and output metrics in InfluxDB line protocol to standard output
+(STDOUT).
+
+Program output on standard error is mirrored to the telegraf log.
+
+Telegraf minimum version: Telegraf 1.15.0
+
+## Caveats
+
+- Metrics with tracking will be considered "delivered" as soon as they are passed
+  to the external process. There is currently no way to match up which metric
+  coming out of the execd process relates to which metric going in (keep in mind
+  that processors can add and drop metrics, and that this is all done
+  asynchronously).
+- It's not currently possible to use a data_format other than "influx", due to
+  the requirement that it be serialize-parse symmetrical and not lose any
+  critical type data.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, or configure ordering.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+[CONFIGURATION.md]: ../../../docs/CONFIGURATION.md#plugins
+
+## Configuration
+
+```toml @sample.conf
+# Run executable as long-running processor plugin
+[[processors.execd]]
+  ## One program to run as daemon.
+  ## NOTE: process and each argument should each be their own string
+  ## eg: command = ["/path/to/your_program", "arg1", "arg2"]
+  command = ["cat"]
+
+  ## Environment variables
+  ## Array of "key=value" pairs to pass as environment variables
+  ## e.g. "KEY=value", "USERNAME=John Doe",
+  ## "LD_LIBRARY_PATH=/opt/custom/lib64:/usr/local/libs"
+  # environment = []
+
+  ## Delay before the process is restarted after an unexpected termination
+  # restart_delay = "10s"
+
+  ## Serialization format for communicating with the executed program
+  ## Please note that the corresponding data-format must exist both in
+  ## parsers and serializers
+  # data_format = "influx"
+```
+
+## Example
+
+### Go daemon example
+
+This Go daemon reads a metric from stdin, multiplies the "count" field by 2,
+and writes the metric back out.
+
+```go
+package main
+
+import (
+    "fmt"
+    "os"
+
+    "github.com/influxdata/telegraf/plugins/parsers/influx"
+    serializers_influx "github.com/influxdata/telegraf/plugins/serializers/influx"
+)
+
+func main() {
+    parser := influx.NewStreamParser(os.Stdin)
+    serializer := serializers_influx.Serializer{}
+    if err := serializer.Init(); err != nil {
+        fmt.Fprintf(os.Stderr, "serializer init failed: %v\n", err)
+        os.Exit(1)
+    }
+
+    for {
+        metric, err := parser.Next()
+        if err != nil {
+            if err == influx.EOF {
+                return // stream ended
+            }
+            if parseErr, isParseError := err.(*influx.ParseError); isParseError {
+                fmt.Fprintf(os.Stderr, "parse ERR %v\n", parseErr)
+                os.Exit(1)
+            }
+            fmt.Fprintf(os.Stderr, "ERR %v\n", err)
+            os.Exit(1)
+        }
+
+        c, found := metric.GetField("count")
+        if !found {
+            fmt.Fprintf(os.Stderr, "metric has no count field\n")
+            os.Exit(1)
+        }
+        switch t := c.(type) {
+        case float64:
+            t *= 2
+            metric.AddField("count", t)
+        case int64:
+            t *= 2
+            metric.AddField("count", t)
+        default:
+            fmt.Fprintf(os.Stderr, "count is not a known type, it's a %T\n", c)
+            os.Exit(1)
+        }
+        b, err := serializer.Serialize(metric)
+        if err != nil {
+            fmt.Fprintf(os.Stderr, "ERR %v\n", err)
+            os.Exit(1)
+        }
+        fmt.Fprint(os.Stdout, string(b))
+    }
+}
+```
+
+To run it, build the binary with Go, e.g. `go build -o multiplier.exe main.go`,
+and reference it in the configuration:
+
+```toml
+[[processors.execd]]
+  command = ["multiplier.exe"]
+```
+
+### Ruby daemon
+
+A Ruby implementation of the same multiplier daemon is included in the
+Telegraf repository at the path referenced below:
+
+```toml
+[[processors.execd]]
+  command = ["ruby", "plugins/processors/execd/examples/multiplier_line_protocol/multiplier_line_protocol.rb"]
+```
diff --git a/content/telegraf/v1/processor-plugins/filepath/_index.md b/content/telegraf/v1/processor-plugins/filepath/_index.md
new file mode 100644
index 000000000..63c59b670
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/filepath/_index.md
@@ -0,0 +1,227 @@
+---
+description: "Telegraf plugin for transforming metrics using Filepath"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Filepath
+    identifier: processor-filepath
+tags: [Filepath, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Filepath Processor Plugin
+
+The `filepath` processor plugin maps certain Go functions from
+[path/filepath](https://golang.org/pkg/path/filepath/) onto tag and field
+values. Values can be modified in place or stored in another key.
+
+Implemented functions are:
+
+* [Base](https://golang.org/pkg/path/filepath/#Base) (accessible through `[[processors.filepath.basename]]`)
+* [Rel](https://golang.org/pkg/path/filepath/#Rel) (accessible through `[[processors.filepath.rel]]`)
+* [Dir](https://golang.org/pkg/path/filepath/#Dir) (accessible through `[[processors.filepath.dirname]]`)
+* [Clean](https://golang.org/pkg/path/filepath/#Clean) (accessible through `[[processors.filepath.clean]]`)
+* [ToSlash](https://golang.org/pkg/path/filepath/#ToSlash) (accessible through `[[processors.filepath.toslash]]`)
+
+On top of that, the plugin provides an extra function to retrieve the final path
+component without its extension. This function is accessible through the
+`[[processors.filepath.stem]]` configuration item.
+
+Please note that, in this implementation, these functions are processed in the
+order they appear above (except for `stem`, which is applied first).
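+
+The behavior of most of these functions can be reproduced directly with Go's
+`path/filepath` package; the `stem` expression below is a sketch of the extra
+function this plugin provides, not the plugin's actual code:
+
+```go
+package main
+
+import (
+	"fmt"
+	"path/filepath"
+	"strings"
+)
+
+func main() {
+	p := "/var/log/batch/ajob.log"
+
+	fmt.Println(filepath.Base(p))                                    // ajob.log
+	fmt.Println(filepath.Dir(p))                                     // /var/log/batch
+	fmt.Println(filepath.Clean("/var/log/dummy/../batch//ajob.log")) // /var/log/batch/ajob.log
+
+	rel, _ := filepath.Rel("/var/log", p)
+	fmt.Println(rel) // batch/ajob.log
+
+	// "stem": final path component without its extension
+	fmt.Println(strings.TrimSuffix(filepath.Base(p), filepath.Ext(p))) // ajob
+}
+```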
+
+Specify the `tag` and/or `field` that you want processed in each section and
+optionally a `dest` if you want the result stored in a new tag or field.
+
+If you plan to apply multiple transformations to the same `tag`/`field`, bear in
+mind the processing order stated above.
+
+Telegraf minimum version: Telegraf 1.15.0
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Performs file path manipulations on tags and fields
+[[processors.filepath]]
+  ## Treat the tag value as a path and convert it to its last element, storing the result in a new tag
+  # [[processors.filepath.basename]]
+  #   tag = "path"
+  #   dest = "basepath"
+
+  ## Treat the field value as a path and keep all but the last element of path, typically the path's directory
+  # [[processors.filepath.dirname]]
+  #   field = "path"
+
+  ## Treat the tag value as a path, converting it to its last element without its suffix
+  # [[processors.filepath.stem]]
+  #   tag = "path"
+
+  ## Treat the tag value as a path, converting it to the shortest path name equivalent
+  ## to path by purely lexical processing
+  # [[processors.filepath.clean]]
+  #   tag = "path"
+
+  ## Treat the tag value as a path, converting it to a relative path that is lexically
+  ## equivalent to the source path when joined to 'base_path'
+  # [[processors.filepath.rel]]
+  #   tag = "path"
+  #   base_path = "/var/log"
+
+  ## Treat the tag value as a path, replacing each separator character in path with a '/' character.
+  ## Only has an effect on Windows
+  # [[processors.filepath.toslash]]
+  #   tag = "path"
+```
+
+## Considerations
+
+### Clean Automatic Invocation
+
+Even though `clean` is provided as a standalone function, it is also invoked
+when using the `rel` and `dirname` functions, so there is no need to use it
+along with them.
+
+That is:
+
+```toml
+[[processors.filepath]]
+  [[processors.filepath.dirname]]
+    tag = "path"
+  [[processors.filepath.clean]]
+    tag = "path"
+```
+
+Is equivalent to:
+
+```toml
+[[processors.filepath]]
+  [[processors.filepath.dirname]]
+    tag = "path"
+```
+
+### ToSlash Platform-specific Behavior
+
+The effects of this function are only noticeable on Windows platforms, because
+of the underlying Go implementation.
+
+## Examples
+
+### Basename
+
+```toml
+[[processors.filepath]]
+  [[processors.filepath.basename]]
+    tag = "path"
+```
+
+```diff
+- my_metric,path="/var/log/batch/ajob.log" duration_seconds=134 1587920425000000000
++ my_metric,path="ajob.log" duration_seconds=134 1587920425000000000
+```
+
+### Dirname
+
+```toml
+[[processors.filepath]]
+  [[processors.filepath.dirname]]
+    field = "path"
+    dest = "folder"
+```
+
+```diff
+- my_metric path="/var/log/batch/ajob.log",duration_seconds=134 1587920425000000000
++ my_metric path="/var/log/batch/ajob.log",folder="/var/log/batch",duration_seconds=134 1587920425000000000
+```
+
+### Stem
+
+```toml
+[[processors.filepath]]
+  [[processors.filepath.stem]]
+    tag = "path"
+```
+
+```diff
+- my_metric,path="/var/log/batch/ajob.log" duration_seconds=134 1587920425000000000
++ my_metric,path="ajob" duration_seconds=134 1587920425000000000
+```
+
+### Clean
+
+```toml
+[[processors.filepath]]
+  [[processors.filepath.clean]]
+    tag = "path"
+```
+
+```diff
+- my_metric,path="/var/log/dummy/../batch//ajob.log" duration_seconds=134 1587920425000000000
++ my_metric,path="/var/log/batch/ajob.log" duration_seconds=134 1587920425000000000
+```
+
+### Rel
+
+```toml
+[[processors.filepath]]
+  [[processors.filepath.rel]]
+    tag = "path"
+    base_path = "/var/log"
+```
+
+```diff
+- my_metric,path="/var/log/batch/ajob.log" duration_seconds=134 1587920425000000000
++ my_metric,path="batch/ajob.log" duration_seconds=134 1587920425000000000
+```
+
+### ToSlash
+
+```toml
+[[processors.filepath]]
+  [[processors.filepath.toslash]]
+    tag = "path"
+```
+
+```diff
+- my_metric,path="\var\log\batch\ajob.log" duration_seconds=134 1587920425000000000
++ my_metric,path="/var/log/batch/ajob.log" duration_seconds=134 1587920425000000000
+```
+
+## Processing paths from tail plugin
+
+This plugin can be used together with the `tail` input plugin.
+
+The following example combines the `tail` input plugin, the `grok` parser and
+the `filepath` processor to derive a `stempath` tag from the log file path.
+
+```toml
+[[inputs.tail]]
+  files = ["/var/log/myjobs/**.log"]
+  data_format = "grok"
+  grok_patterns = ['%{TIMESTAMP_ISO8601:timestamp:ts-"2006-01-02 15:04:05"} total time execution: %{NUMBER:duration_seconds:int}']
+  name_override = "myjobs"
+
+[[processors.filepath]]
+   [[processors.filepath.stem]]
+     tag = "path"
+     dest = "stempath"
+```
+
+The resulting output for a job taking 70 seconds for the mentioned log file
+would look like:
+
+```text
+myjobs_duration_seconds,host="my-host",path="/var/log/myjobs/mysql_backup.log",stempath="mysql_backup" 70 1587920425000000000
+```
diff --git a/content/telegraf/v1/processor-plugins/filter/_index.md b/content/telegraf/v1/processor-plugins/filter/_index.md
new file mode 100644
index 000000000..153e848b3
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/filter/_index.md
@@ -0,0 +1,95 @@
+---
+description: "Telegraf plugin for transforming metrics using Filter"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Filter
+    identifier: processor-filter
+tags: [Filter, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Filter Processor Plugin
+
+The filter processor plugin allows you to specify a set of rules for metrics
+and to _keep_ or _drop_ them accordingly. It does _not_ change the
+metric. As such, you can use this processor to remove metrics
+from the processing/output stream.
+__NOTE:__ The filtering is _not_ output-specific; it applies to all metrics
+processed by this processor.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Filter metrics by the given criteria
+[[processors.filter]]
+    ## Default action if no rule applies
+    # default = "pass"
+
+    ## Rules to apply on the incoming metrics (multiple rules are possible)
+    ## The rules are evaluated in order and the first matching rule is applied.
+    ## In case no rule matches the "default" is applied.
+    ## All filter criteria in a rule must apply for the rule to match the metric
+    ## i.e. the criteria are combined by a logical AND. If a criterion is
+    ## omitted it is NOT applied at all and ignored.
+    [[processors.filter.rule]]
+        ## List of metric names to match including glob expressions
+        # name = []
+
+        ## List of tag key/values pairs to match including glob expressions
+        ## ALL given tags keys must exist and at least one value must match
+        ## for the metric to match the rule.
+        # tags = {}
+
+        ## List of field keys to match including glob expressions
+        ## At least one field must exist for the metric to match the rule.
+        # fields = []
+
+        ## Action to apply for this rule
+        ## "pass" will keep the metric and pass it on, while "drop" will remove
+        ## the metric
+        # action = "drop"
+```
+
+## Examples
+
+Consider a use-case where you collected a bunch of metrics
+
+```text
+machine,source="machine1",status="OK" operating_hours=37i,temperature=23.1
+machine,source="machine2",status="warning" operating_hours=1433i,temperature=48.9,message="too hot"
+machine,source="machine3",status="OK" operating_hours=811i,temperature=29.5
+machine,source="machine4",status="failure" operating_hours=1009i,temperature=67.3,message="temperature alert"
+```
+
+but only want to keep the ones indicating a `status` of `failure` or `warning`:
+
+```toml
+[[processors.filter]]
+  namepass = ["machine"]
+  default = "drop"
+
+  [[processors.filter.rule]]
+    tags = {"status" = ["warning", "failure"]}
+    action = "pass"
+```
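+
+The evaluation order described above (the first matching rule decides, and
+`default` applies when no rule matches) can be sketched as follows; the
+`rule`, `keep` and `matches` names are illustrative and glob matching is
+omitted:
+
+```go
+package main
+
+import "fmt"
+
+// rule is a reduced filter rule: all tag keys must exist and at least one
+// of the listed values must match for the rule to apply.
+type rule struct {
+	tags   map[string][]string
+	action string // "pass" or "drop"
+}
+
+// keep reports whether a metric with the given tags survives the filter.
+func keep(tags map[string]string, rules []rule, def string) bool {
+	for _, r := range rules {
+		if matches(tags, r.tags) {
+			return r.action == "pass" // first matching rule wins
+		}
+	}
+	return def == "pass" // no rule matched: apply the default
+}
+
+func matches(tags map[string]string, want map[string][]string) bool {
+	for key, values := range want {
+		got, ok := tags[key]
+		if !ok {
+			return false // required tag key missing
+		}
+		found := false
+		for _, v := range values {
+			if v == got {
+				found = true
+				break
+			}
+		}
+		if !found {
+			return false
+		}
+	}
+	return true
+}
+
+func main() {
+	rules := []rule{{tags: map[string][]string{"status": {"warning", "failure"}}, action: "pass"}}
+	fmt.Println(keep(map[string]string{"status": "warning"}, rules, "drop")) // true
+	fmt.Println(keep(map[string]string{"status": "OK"}, rules, "drop"))      // false
+}
+```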
+
+Alternatively, you can "black-list" the `OK` value via
+
+```toml
+[[processors.filter]]
+  namepass = ["machine"]
+
+  [[processors.filter.rule]]
+    tags = {"status" = "OK"}
+```
diff --git a/content/telegraf/v1/processor-plugins/ifname/_index.md b/content/telegraf/v1/processor-plugins/ifname/_index.md
new file mode 100644
index 000000000..6160b8187
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/ifname/_index.md
@@ -0,0 +1,114 @@
+---
+description: "Telegraf plugin for transforming metrics using Network Interface Name"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Network Interface Name
+    identifier: processor-ifname
+tags: [Network Interface Name, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Network Interface Name Processor Plugin
+
+The `ifname` plugin looks up network interface names using SNMP.
+
+Telegraf minimum version: Telegraf 1.15.0
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `auth_password` and
+`priv_password` options.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets) for more details on how
+to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Add a tag of the network interface name looked up over SNMP by interface number
+[[processors.ifname]]
+  ## Name of tag holding the interface number
+  # tag = "ifIndex"
+
+  ## Name of output tag where the interface name will be added
+  # dest = "ifName"
+
+  ## Name of tag of the SNMP agent to request the interface name from
+  ##   example: agent = "source"
+  # agent = "agent"
+
+  ## Timeout for each request.
+  # timeout = "5s"
+
+  ## SNMP version; can be 1, 2, or 3.
+  # version = 2
+
+  ## SNMP community string.
+  # community = "public"
+
+  ## Number of retries to attempt.
+  # retries = 3
+
+  ## The GETBULK max-repetitions parameter.
+  # max_repetitions = 10
+
+  ## SNMPv3 authentication and encryption options.
+  ##
+  ## Security Name.
+  # sec_name = "myuser"
+  ## Authentication protocol; one of "MD5", "SHA", or "".
+  # auth_protocol = "MD5"
+  ## Authentication password.
+  # auth_password = "pass"
+  ## Security Level; one of "noAuthNoPriv", "authNoPriv", or "authPriv".
+  # sec_level = "authNoPriv"
+  ## Context Name.
+  # context_name = ""
+  ## Privacy protocol used for encrypted messages; one of "DES", "AES" or "".
+  # priv_protocol = ""
+  ## Privacy password used for encrypted messages.
+  # priv_password = ""
+
+  ## max_parallel_lookups is the maximum number of SNMP requests to
+  ## make at the same time.
+  # max_parallel_lookups = 100
+
+  ## ordered controls whether or not the metrics need to stay in the
+  ## same order this plugin received them in. If false, this plugin
+  ## may change the order when data is cached.  If you need metrics to
+  ## stay in order set this to true.  keeping the metrics ordered may
+  ## be slightly slower
+  # ordered = false
+
+  ## cache_ttl is the amount of time interface names are cached for a
+  ## given agent.  After this period elapses if names are needed they
+  ## will be retrieved again.
+  # cache_ttl = "8h"
+```
+
+## Example
+
+Example config:
+
+```toml
+[[processors.ifname]]
+  tag = "ifIndex"
+  dest = "ifName"
+```
+
+```diff
+- foo,ifIndex=2,agent=127.0.0.1 field=123 1502489900000000000
++ foo,ifIndex=2,agent=127.0.0.1,ifName=eth0 field=123 1502489900000000000
+```
diff --git a/content/telegraf/v1/processor-plugins/lookup/_index.md b/content/telegraf/v1/processor-plugins/lookup/_index.md
new file mode 100644
index 000000000..2b2f0a143
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/lookup/_index.md
@@ -0,0 +1,171 @@
+---
+description: "Telegraf plugin for transforming metrics using Lookup"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Lookup
+    identifier: processor-lookup
+tags: [Lookup, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Lookup Processor Plugin
+
+The Lookup Processor allows you to use one or more files containing a
+lookup-table for annotating incoming metrics. The lookup is _static_ as the
+files are only read on startup. The main use-case for this is to annotate
+metrics with additional tags, e.g. depending on their source. Multiple tags can
+be added depending on the lookup-table _files_.
+
+The lookup key can be generated using a Golang template with the ability to
+access the metric name via `{{.Name}}`, tag values via `{{.Tag "mytag"}}`,
+with `mytag` being the tag-name, and field values via `{{.Field "myfield"}}`,
+with `myfield` being the field-name. Non-existing tags and fields will result
+in an empty string or `nil`, respectively. In case the key cannot be found, the
+metric is passed through unchanged. By default all matching tags are added and
+existing tag-values are overwritten.
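+
+How such a key template evaluates can be sketched with Go's `text/template`
+package; the `metricView` type and `lookupKey` helper below are hypothetical
+stand-ins for the accessors the plugin exposes on a metric:
+
+```go
+package main
+
+import (
+	"bytes"
+	"fmt"
+	"text/template"
+)
+
+// metricView is a minimal stand-in exposing the template accessors.
+type metricView struct {
+	Name string
+	tags map[string]string
+}
+
+func (m metricView) Tag(name string) string { return m.tags[name] }
+
+// lookupKey renders the key template against one metric.
+func lookupKey(tmplText string, m metricView) string {
+	tmpl := template.Must(template.New("key").Parse(tmplText))
+	var buf bytes.Buffer
+	if err := tmpl.Execute(&buf, m); err != nil {
+		return ""
+	}
+	return buf.String()
+}
+
+func main() {
+	m := metricView{Name: "xyzzy", tags: map[string]string{"host": "green"}}
+	fmt.Println(lookupKey(`{{.Name}}-{{.Tag "host"}}`, m)) // xyzzy-green
+}
+```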
+
+Please note: The plugin only supports the addition of tags and thus all mapped
+tag-values need to be strings!
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Lookup a key derived from metrics in a static file
+[[processors.lookup]]
+  ## List of files containing the lookup-table
+  files = ["path/to/lut.json", "path/to/another_lut.json"]
+
+  ## Format of the lookup file(s)
+  ## Available formats are:
+  ##    json               -- JSON file with 'key: {tag-key: tag-value, ...}' mapping
+  ##    csv_key_name_value -- CSV file with 'key,tag-key,tag-value,...,tag-key,tag-value' mapping
+  ##    csv_key_values     -- CSV file with a header containing tag-names and
+  ##                          rows with 'key,tag-value,...,tag-value' mappings
+  # format = "json"
+
+  ## Template for generating the lookup-key from the metric.
+  ## This is a Golang template (see https://pkg.go.dev/text/template) to
+  ## access the metric name (`{{.Name}}`), a tag value (`{{.Tag "name"}}`) or
+  ## a field value (`{{.Field "name"}}`).
+  key = '{{.Tag "host"}}'
+```
+
+## File formats
+
+The following descriptions assume `key`s to be unique identifiers used for
+matching the configured `key`. The `tag-name`/`tag-value` pairs are the tags
+added to a metric if the key matches.
+
+### `json` format
+
+In the `json` format, the input `files` must have the following format
+
+```json
+{
+  "keyA": {
+    "tag-name1": "tag-value1",
+    ...
+    "tag-nameN": "tag-valueN"
+  },
+  ...
+  "keyZ": {
+    "tag-name1": "tag-value1",
+    ...
+    "tag-nameM": "tag-valueM"
+  }
+}
+```
+
+Please note that only _strings_ are supported for all elements.
+
+### `csv_key_name_value` format
+
+The `csv_key_name_value` format specifies comma-separated-value files with
+the following format
+
+```csv
+# Optional comments
+keyA,tag-name1,tag-value1,...,tag-nameN,tag-valueN
+keyB,tag-name1,tag-value1
+...
+keyZ,tag-name1,tag-value1,...,tag-nameM,tag-valueM
+```
+
+The formatting uses commas (`,`) as separators and allows for comments defined
+as lines starting with a hash (`#`). Lines can have different numbers of
+columns but must contain at least three columns and follow the name/value pair
+format, i.e. there cannot be a name without a value.
+
+### `csv_key_values` format
+
+This setting specifies comma-separated-value files with the following format
+
+```csv
+# Optional comments
+ignored,tag-name1,...,tag-valueN
+keyA,tag-value1,...,,,,
+keyB,tag-value1,,,,...,
+...
+keyZ,tag-value1,...,tag-valueM,...,
+```
+
+The formatting uses commas (`,`) as separators and allows for comments defined
+as lines starting with a hash (`#`). All lines __must__ contain the same number
+of columns. The first non-comment line __must__ contain a header specifying the
+tag-names. As the first column contains the key to match, the first header
+value is ignored. There have to be at least two columns.
+
+Please note that empty tag-values will be ignored and the tag will not be added.
+
+## Example
+
+With a lookup table of
+
+```json
+{
+  "xyzzy-green": {
+    "location": "eu-central",
+    "rack": "C12-01"
+  },
+  "xyzzy-red": {
+    "location": "us-west",
+    "rack": "C01-42"
+  },
+}
+```
+
+in `format = "json"` and a `key` of `key = '{{.Name}}-{{.Tag "host"}}'` you get
+
+```diff
+- xyzzy,host=green value=3.14 1502489900000000000
+- xyzzy,host=red  value=2.71 1502499100000000000
++ xyzzy,host=green,location=eu-central,rack=C12-01 value=3.14 1502489900000000000
++ xyzzy,host=red,location=us-west,rack=C01-42 value=2.71 1502499100000000000
+xyzzy,host=blue  value=6.62 1502499700000000000
+```
+
+The same results can be achieved with `format = "csv_key_name_value"` and
+
+```csv
+xyzzy-green,location,eu-central,rack,C12-01
+xyzzy-red,location,us-west,rack,C01-42
+```
+
+or `format = "csv_key_values"` and
+
+```csv
+-,location,rack
+xyzzy-green,eu-central,C12-01
+xyzzy-red,us-west,C01-42
+```
diff --git a/content/telegraf/v1/processor-plugins/noise/_index.md b/content/telegraf/v1/processor-plugins/noise/_index.md
new file mode 100644
index 000000000..5601878dc
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/noise/_index.md
@@ -0,0 +1,109 @@
+---
+description: "Telegraf plugin for transforming metrics using Noise"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Noise
+    identifier: processor-noise
+tags: [Noise, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Noise Processor Plugin
+
+The _Noise_ processor is used to add noise to numerical field values. For each
+field, noise is generated using a defined probability density function and
+added to the value. The function type can be configured as _Laplace_, _Gaussian_
+or _Uniform_. Depending on the function, various parameters need to be
+configured.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Adds noise to numerical fields
+[[processors.noise]]
+  ## Specified the type of the random distribution.
+  ## Can be "laplacian", "gaussian" or "uniform".
+  # type = "laplacian"
+
+  ## Center of the distribution.
+  ## Only used for Laplacian and Gaussian distributions.
+  # mu = 0.0
+
+  ## Scale parameter for the Laplacian or Gaussian distribution
+  # scale = 1.0
+
+  ## Upper and lower bound of the Uniform distribution
+  # min = -1.0
+  # max = 1.0
+
+  ## Apply the noise only to numeric fields matching the filter criteria below.
+  ## Excludes takes precedence over includes.
+  # include_fields = []
+  # exclude_fields = []
+```
+
+Depending on the choice of the distribution function, the respective parameters
+must be set. Default settings are `noise_type = "laplacian"` with `mu = 0.0` and
+`scale = 1.0`.
+
+Using the `include_fields` and `exclude_fields` options, a filter can be
+configured to apply noise only to numeric fields matching it. The following
+distribution functions are available.
+
+### Laplacian
+
+- `noise_type = laplacian`
+- `scale`: also referred to as _diversity_ parameter, regulates the width & height of the function, a bigger `scale` value means a higher probability of larger noise, default set to 1.0
+- `mu`: location of the curve, default set to 0.0
+
+### Gaussian
+
+- `noise_type = gaussian`
+- `mu`: mean value, default set to 0.0
+- `scale`: standard deviation, default set to 1.0
+
+### Uniform
+
+- `noise_type = uniform`
+- `min`: minimal interval value, default set to -1.0
+- `max`: maximal interval value, default set to 1.0
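+
+For illustration, a Laplace draw with the `mu` and `scale` parameters above
+can be generated with an inverse-CDF transform; this is a sketch, not the
+plugin's actual sampler:
+
+```go
+package main
+
+import (
+	"fmt"
+	"math"
+	"math/rand"
+)
+
+// laplace draws one sample from a Laplace(mu, scale) distribution using
+// the inverse-CDF transform.
+func laplace(r *rand.Rand, mu, scale float64) float64 {
+	u := r.Float64() - 0.5 // uniform on [-0.5, 0.5)
+	sign := 1.0
+	if u < 0 {
+		sign = -1.0
+	}
+	return mu - scale*sign*math.Log(1-2*math.Abs(u))
+}
+
+func main() {
+	r := rand.New(rand.NewSource(42))
+	value := 94.4 // e.g. a usage_idle reading
+	fmt.Println(value + laplace(r, 0.0, 1.0)) // the noisy value that gets emitted
+}
+```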
+
+## Example
+
+Add noise to each value the _inputs.cpu_ plugin generates, except for the
+_usage\_steal_, _usage\_user_, _uptime\_format_, _usage\_idle_ field and all
+fields of the metrics _swap_, _disk_ and _net_:
+
+```toml
+[[inputs.cpu]]
+  percpu = true
+  totalcpu = true
+  collect_cpu_time = false
+  report_active = false
+
+[[processors.noise]]
+  scale = 1.0
+  mu = 0.0
+  noise_type = "laplacian"
+  include_fields = []
+  exclude_fields = ["usage_steal", "usage_user", "uptime_format", "usage_idle" ]
+  namedrop = ["swap", "disk", "net"]
+```
+
+Result of noise added to the _cpu_ metric:
+
+```diff
+- cpu map[cpu:cpu11 host:98d5b8dbad1c] map[usage_guest:0 usage_guest_nice:0 usage_idle:94.3999999994412 usage_iowait:0 usage_irq:0.1999999999998181 usage_nice:0 usage_softirq:0.20000000000209184 usage_steal:0 usage_system:1.2000000000080036 usage_user:4.000000000014552]
++ cpu map[cpu:cpu11 host:98d5b8dbad1c] map[usage_guest:1.0078071583066057 usage_guest_nice:0.523063861602435 usage_idle:95.53920223476884 usage_iowait:0.5162661526251292 usage_irq:0.7138529816101375 usage_nice:0.6119678488887954 usage_softirq:0.5573585443688622 usage_steal:0.2006120911289802 usage_system:1.2954475820198437 usage_user:6.885664792615023]
+```
diff --git a/content/telegraf/v1/processor-plugins/override/_index.md b/content/telegraf/v1/processor-plugins/override/_index.md
new file mode 100644
index 000000000..7fd76ebf6
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/override/_index.md
@@ -0,0 +1,57 @@
+---
+description: "Telegraf plugin for transforming metrics using Override"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Override
+    identifier: processor-override
+tags: [Override, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Override Processor Plugin
+
+The override processor plugin allows overriding all modifications that are
+supported by input plugins and aggregators:
+
+* name_override
+* name_prefix
+* name_suffix
+* tags
+
+All metrics passing through this processor will be modified accordingly.  Select
+the metrics to modify using the standard metric
+filtering options.
+
+Values of *name_override*, *name_prefix*, *name_suffix* and already present
+*tags* with conflicting keys will be overwritten. Absent *tags* will be
+created.
+
+Use-cases of this plugin encompass ensuring certain tags or naming conventions
+are adhered to irrespective of input plugin configurations, e.g. when used with
+`taginclude`.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Apply metric modifications using override semantics.
+[[processors.override]]
+  ## All modifications on inputs and aggregators can be overridden:
+  # name_override = "new_name"
+  # name_prefix = "new_name_prefix"
+  # name_suffix = "new_name_suffix"
+
+  ## Tags to be added (all values must be strings)
+  # [processors.override.tags]
+  #   additional_tag = "tag_value"
+```
diff --git a/content/telegraf/v1/processor-plugins/parser/_index.md b/content/telegraf/v1/processor-plugins/parser/_index.md
new file mode 100644
index 000000000..bfabe9331
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/parser/_index.md
@@ -0,0 +1,82 @@
+---
+description: "Telegraf plugin for transforming metrics using Parser"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Parser
+    identifier: processor-parser
+tags: [Parser, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Parser Processor Plugin
+
+This plugin parses defined fields or tags containing the specified data format
+and creates new metrics based on the contents of the field or tag.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, or create aliases and configure ordering.
+See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Parse a value in a specified field(s)/tag(s) and add the result in a new metric
+[[processors.parser]]
+  ## The name of the fields whose value will be parsed.
+  parse_fields = ["message"]
+
+  ## Fields to base64 decode.
+  ## These fields do not need to be specified in parse_fields.
+  ## Fields specified here will have base64 decode applied to them.
+  # parse_fields_base64 = []
+
+  ## The name of the tags whose value will be parsed.
+  # parse_tags = []
+
+  ## If true, incoming metrics are not emitted.
+  # drop_original = false
+
+  ## Merge Behavior
+  ## Only has effect when drop_original is set to false. Possible options
+  ## include:
+  ##  * override: emitted metrics are merged by overriding the original metric
+  ##    using the newly parsed metrics, but retains the original metric
+  ##    timestamp.
+  ##  * override-with-timestamp: the same as "override", but the timestamp is
+  ##    set based on the new metrics if present.
+  # merge = ""
+
+  ## The dataformat to be read from files
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  data_format = "influx"
+```
+
+## Example
+
+```toml
+[[processors.parser]]
+  parse_fields = ["message"]
+  merge = "override"
+  data_format = "logfmt"
+```
+
+### Input
+
+```text
+syslog,appname=influxd,facility=daemon,hostname=http://influxdb.example.org\ (influxdb.example.org),severity=info facility_code=3i,message=" ts=2018-08-09T21:01:48.137963Z lvl=info msg=\"Executing query\" log_id=09p7QbOG000 service=query query=\"SHOW DATABASES\"",procid="6629",severity_code=6i,timestamp=1533848508138040000i,version=1i
+```
+
+### Output
+
+```text
+syslog,appname=influxd,facility=daemon,hostname=http://influxdb.example.org\ (influxdb.example.org),severity=info facility_code=3i,log_id="09p7QbOG000",lvl="info",message=" ts=2018-08-09T21:01:48.137963Z lvl=info msg=\"Executing query\" log_id=09p7QbOG000 service=query query=\"SHOW DATABASES\"",msg="Executing query",procid="6629",query="SHOW DATABASES",service="query",severity_code=6i,timestamp=1533848508138040000i,ts="2018-08-09T21:01:48.137963Z",version=1i
+```
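+
+The logfmt splitting applied to the `message` field above can be sketched as
+follows; the hypothetical `splitLogfmt` helper ignores quoting and escaping,
+which the real parser handles:
+
+```go
+package main
+
+import (
+	"fmt"
+	"strings"
+)
+
+// splitLogfmt breaks a line of unquoted key=value pairs into a map.
+func splitLogfmt(s string) map[string]string {
+	out := make(map[string]string)
+	for _, tok := range strings.Fields(s) {
+		if k, v, ok := strings.Cut(tok, "="); ok {
+			out[k] = v
+		}
+	}
+	return out
+}
+
+func main() {
+	m := splitLogfmt("lvl=info service=query log_id=09p7QbOG000")
+	fmt.Println(m["lvl"], m["service"]) // info query
+}
+```
+
+Each resulting pair becomes a new field on the merged metric, which is how
+`lvl`, `service` and `log_id` appear in the output above.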
diff --git a/content/telegraf/v1/processor-plugins/pivot/_index.md b/content/telegraf/v1/processor-plugins/pivot/_index.md
new file mode 100644
index 000000000..70e2293bd
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/pivot/_index.md
@@ -0,0 +1,52 @@
+---
+description: "Telegraf plugin for transforming metrics using Pivot"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Pivot
+    identifier: processor-pivot
+tags: [Pivot, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Pivot Processor Plugin
+
+You can use the `pivot` processor to rotate single-valued metrics into a
+multi-field metric. The resulting data is often easier to apply mathematical
+operators and comparisons to, and it flattens into a more compact
+representation for write operations with some output data formats.
+
+To perform the reverse operation use the [unpivot] processor.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure metric
+ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more
+details.
+
+## Configuration
+
+```toml @sample.conf
+# Rotate a single valued metric into a multi field metric
+[[processors.pivot]]
+  ## Tag to use for naming the new field.
+  tag_key = "name"
+  ## Field to use as the value of the new field.
+  value_key = "value"
+```
+
+## Example
+
+```diff
+- cpu,cpu=cpu0,name=time_idle value=42i
+- cpu,cpu=cpu0,name=time_user value=43i
++ cpu,cpu=cpu0 time_idle=42i
++ cpu,cpu=cpu0 time_user=43i
+```
+
+[unpivot]: /telegraf/v1/processor-plugins/unpivot/
diff --git a/content/telegraf/v1/processor-plugins/port_name/_index.md b/content/telegraf/v1/processor-plugins/port_name/_index.md
new file mode 100644
index 000000000..4bb063b1e
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/port_name/_index.md
@@ -0,0 +1,65 @@
+---
+description: "Telegraf plugin for transforming metrics using Port Name Lookup"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Port Name Lookup
+    identifier: processor-port_name
+tags: [Port Name Lookup, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Port Name Lookup Processor Plugin
+
+Use the `port_name` processor to convert a tag or field containing a well-known
+port number to the registered service name.
+
+The tag or field can contain a plain port number ("80") or a port number and
+protocol separated by a slash ("443/tcp"). If the protocol is not provided, it
+defaults to tcp but can be changed with the `default_protocol` setting. An
+additional tag or field can be specified for the protocol.
+
+If the source was found in a tag, the service name is added as a tag. If the
+source was found in a field, the service name is added as a field.
+
+Telegraf minimum version: Telegraf 1.15.0
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure metric
+ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more
+details.
+
+## Configuration
+
+```toml @sample.conf
+# Given a tag/field of a TCP or UDP port number, add a tag/field of the service name looked up in the system services file
+[[processors.port_name]]
+  ## Name of tag holding the port number
+  # tag = "port"
+  ## Or name of the field holding the port number
+  # field = "port"
+
+  ## Name of output tag or field (depending on the source) where service name will be added
+  # dest = "service"
+
+  ## Default tcp or udp
+  # default_protocol = "tcp"
+
+  ## Tag containing the protocol (tcp or udp, case-insensitive)
+  # protocol_tag = "proto"
+
+  ## Field containing the protocol (tcp or udp, case-insensitive)
+  # protocol_field = "proto"
+```
+
+## Example
+
+```diff
+- measurement,port=80 field=123 1560540094000000000
++ measurement,port=80,service=http field=123 1560540094000000000
+```
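+
+The protocol can also come from the data itself. The following sketch (the
+metric and field names here are illustrative) reads the port number from the
+field `port` and the protocol from the field `proto`:
+
+```toml
+[[processors.port_name]]
+  field = "port"
+  protocol_field = "proto"
+```
+
+Assuming the system services file maps `53/udp` to `domain`, the service name
+is added as a field because the source was a field:
+
+```diff
+- dns_query,server=10.0.0.1 port="53",proto="udp",rtt=12i
++ dns_query,server=10.0.0.1 port="53",proto="udp",service="domain",rtt=12i
+```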
diff --git a/content/telegraf/v1/processor-plugins/printer/_index.md b/content/telegraf/v1/processor-plugins/printer/_index.md
new file mode 100644
index 000000000..6e1d031a9
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/printer/_index.md
@@ -0,0 +1,54 @@
+---
+description: "Telegraf plugin for transforming metrics using Printer"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Printer
+    identifier: processor-printer
+tags: [Printer, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Printer Processor Plugin
+
+The printer processor plugin simply prints every metric passing through it.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure metric
+ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more
+details.
+
+## Configuration
+
+```toml @sample.conf
+# Print all metrics that pass through this filter.
+[[processors.printer]]
+  ## Maximum line length in bytes.  Useful only for debugging.
+  # influx_max_line_bytes = 0
+
+  ## When true, fields will be output in ascending lexical order.  Enabling
+  ## this option will result in decreased performance and is only recommended
+  ## when you need predictable ordering while debugging.
+  # influx_sort_fields = false
+
+  ## When true, Telegraf will output unsigned integers as unsigned values,
+  ## i.e.: `42u`.  You will need a version of InfluxDB supporting unsigned
+  ## integer values.  Enabling this option will result in field type errors if
+  ## existing data has been written.
+  # influx_uint_support = false
+
+  ## When true, Telegraf will omit the timestamp on data to allow InfluxDB
+  ## to set the timestamp of the data during ingestion. This is generally NOT
+  ## what you want as it can lead to data points captured at different times
+  ## getting omitted due to similar data.
+  # influx_omit_timestamp = false
+```
+
+## Tags
+
+No tags are applied by this processor.
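+
+## Example
+
+A minimal configuration is enough to inspect metrics as they pass through the
+pipeline; each metric is printed to stdout in InfluxDB line protocol:
+
+```toml
+[[processors.printer]]
+```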
diff --git a/content/telegraf/v1/processor-plugins/regex/_index.md b/content/telegraf/v1/processor-plugins/regex/_index.md
new file mode 100644
index 000000000..e206deaca
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/regex/_index.md
@@ -0,0 +1,255 @@
+---
+description: "Telegraf plugin for transforming metrics using Regex"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Regex
+    identifier: processor-regex
+tags: [Regex, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Regex Processor Plugin
+
+This plugin transforms tag and field _values_ and renames tags, fields, and
+metrics using regex patterns. Tag and field _values_ can be transformed using
+named groups in a batch fashion.
+
+The regex processor **only operates on string fields**. It will not work on
+any other data type, such as integers or floats.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure metric
+ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more
+details.
+
+## Configuration
+
+```toml @sample.conf
+# Transforms tag and field values as well as measurement, tag and field names with regex pattern
+[[processors.regex]]
+  namepass = ["nginx_requests"]
+
+  ## Tag value conversion(s). Multiple instances are allowed.
+  [[processors.regex.tags]]
+    ## Tag(s) to process with optional glob expressions such as '*'.
+    key = "resp_code"
+    ## Regular expression to match the tag value. If the value doesn't
+    ## match the tag is ignored.
+    pattern = "^(\\d)\\d\\d$"
+    ## Replacement expression defining the value of the target tag. You can
+    ## use regexp groups or named groups e.g. ${1} references the first group.
+    replacement = "${1}xx"
+    ## Name of the target tag defaulting to 'key' if not specified.
+    ## In case of wildcards being used in `key` the currently processed
+    ## tag-name is used as target.
+    # result_key = "method"
+    ## Appends the replacement to the target tag instead of overwriting it when
+    ## set to true.
+    # append = false
+
+  ## Field value conversion(s). Multiple instances are allowed.
+  [[processors.regex.fields]]
+    ## Field(s) to process with optional glob expressions such as '*'.
+    key = "request"
+    ## Regular expression to match the field value. If the value doesn't
+    ## match or the field doesn't contain a string the field is ignored.
+    pattern = "^/api(?P<method>/[\\w/]+)\\S*"
+    ## Replacement expression defining the value of the target field. You can
+    ## use regexp groups or named groups e.g. ${method} references the group
+    ## named "method".
+    replacement = "${method}"
+    ## Name of the target field defaulting to 'key' if not specified.
+    ## In case of wildcards being used in `key` the currently processed
+    ## field-name is used as target.
+    # result_key = "method"
+
+  ## Rename metric fields
+  [[processors.regex.field_rename]]
+    ## Regular expression to match on the field name
+    pattern = "^search_(\\w+)d$"
+    ## Replacement expression defining the name of the new field
+    replacement = "${1}"
+    ## If the new field name already exists, you can either "overwrite" the
+    ## existing one with the value of the renamed field OR you can "keep"
+    ## both the existing and source field.
+    # result_key = "keep"
+
+  ## Rename metric tags
+  [[processors.regex.tag_rename]]
+    ## Regular expression to match on a tag name
+    pattern = "^search_(\\w+)d$"
+    ## Replacement expression defining the name of the new tag
+    replacement = "${1}"
+    ## If the new tag name already exists, you can either "overwrite" the
+    ## existing one with the value of the renamed tag OR you can "keep"
+    ## both the existing and source tag.
+    # result_key = "keep"
+
+  ## Rename metrics
+  [[processors.regex.metric_rename]]
+    ## Regular expression to match on a metric name
+    pattern = "^search_(\\w+)d$"
+    ## Replacement expression defining the new name of the metric
+    replacement = "${1}"
+```
+
+Note that you can use multiple `tags`, `fields`, `tag_rename`, `field_rename`,
+and `metric_rename` sections in one processor; all of them are applied.
+
+### Tag and field _value_ conversions
+
+Conversions are only applied if a tag/field _name_ matches the `key`, which
+can contain glob statements such as `*` (asterisk), _and_ the `pattern`
+matches the tag/field _value_. For fields, the field value has to be of type
+`string` for the conversion to apply. If any of the given criteria does not
+match, the conversion is not applied to the metric.
+
+The `replacement` option specifies the value of the resulting tag or field. It
+can reference capturing groups by index (e.g. `${1}` being the first group) or
+by name (e.g. `${mygroup}` being the group named `mygroup`).
+
+By default, the currently processed tag or field is overwritten by the
+`replacement`. To create a new tag or field you can additionally specify the
+`result_key` option containing the new target tag or field name. In case the
+given tag or field already exists, its value is overwritten. For `tags` you
+might use the `append` flag to append the `replacement` value to an existing
+tag.
+
+### Batch processing using named groups
+
+In `tags` and `fields` sections it is possible to use named groups to create
+multiple new tags or fields respectively. To do so, _all_ capture groups have
+to be named in the `pattern`. Additional non-capturing ones or other
+expressions are allowed. Furthermore, neither `replacement` nor `result_key`
+can be set as the resulting tag/field name is the name of the group and the
+value corresponds to the group's content.
+
+### Tag and field _name_ conversions
+
+You can batch-rename tags and fields using the `tag_rename` and `field_rename`
+sections. Contrary to the `tags` and `fields` sections, the rename operates on
+the tag or field _name_, not its _value_.
+
+A tag or field is renamed if the given `pattern` matches its name. The new
+name is specified via the `replacement` option. Optionally, `result_key` can
+be set to either `overwrite` or `keep` (default) to control the behavior in
+case the target tag/field already exists. With `overwrite`, the target
+tag/field is replaced by the source and the source tag/field is removed. With
+`keep` (default), both the existing target and the source are left unchanged
+and no renaming takes place.
+
+### Metric _name_ conversions
+
+Similar to tag and field renaming, `metric_rename` section(s) can be used to
+rename metrics matching the given `pattern`. The resulting metric name is
+given via the `replacement` option. If the `pattern` matches, the conversion
+is always applied. The `result_key` option has no effect on metric renaming
+and should not be specified.
+
+## Tags
+
+No tags are applied by this processor.
+
+## Example
+
+In the following examples we are using this metric
+
+```text
+nginx_requests,verb=GET,resp_code=200 request="/api/search/?category=plugins&q=regex&sort=asc",referrer="-",ident="-",http_version=1.1,agent="UserAgent",client_ip="127.0.0.1",auth="-",resp_bytes=270i 1519652321000000000
+```
+
+### Explicit specification
+
+```toml
+[[processors.regex]]
+  namepass = ["nginx_requests"]
+
+  [[processors.regex.tags]]
+    key = "resp_code"
+    pattern = "^(\\d)\\d\\d$"
+    replacement = "${1}xx"
+
+  [[processors.regex.fields]]
+    key = "request"
+    pattern = "^/api(?P<method>/[\\w/]+)\\S*"
+    replacement = "${method}"
+    result_key = "method"
+
+  [[processors.regex.fields]]
+    key = "request"
+    pattern = ".*category=(\\w+).*"
+    replacement = "${1}"
+    result_key = "search_category"
+
+  [[processors.regex.field_rename]]
+    pattern = "^client_(\\w+)$"
+    replacement = "${1}"
+```
+
+will result in
+
+```diff
+-nginx_requests,verb=GET,resp_code=200 request="/api/search/?category=plugins&q=regex&sort=asc",referrer="-",ident="-",http_version=1.1,agent="UserAgent",client_ip="127.0.0.1",auth="-",resp_bytes=270i 1519652321000000000
++nginx_requests,verb=GET,resp_code=2xx request="/api/search/?category=plugins&q=regex&sort=asc",method="/search/",category="plugins",referrer="-",ident="-",http_version=1.1,agent="UserAgent",ip="127.0.0.1",auth="-",resp_bytes=270i 1519652321000000000
+```
+
+### Appending
+
+```toml
+[[processors.regex]]
+  namepass = ["nginx_requests"]
+
+  [[processors.regex.tags]]
+    key = "resp_code"
+    pattern = '^2\d\d$'
+    replacement = " OK"
+    result_key = "verb"
+    append = true
+```
+
+will result in
+
+```diff
+-nginx_requests,verb=GET,resp_code=200 request="/api/search/?category=plugins&q=regex&sort=asc",referrer="-",ident="-",http_version=1.1,agent="UserAgent",client_ip="127.0.0.1",auth="-",resp_bytes=270i 1519652321000000000
++nginx_requests,verb=GET\ OK,resp_code=200 request="/api/search/?category=plugins&q=regex&sort=asc",referrer="-",ident="-",http_version=1.1,agent="UserAgent",client_ip="127.0.0.1",auth="-",resp_bytes=270i 1519652321000000000
+```
+
+### Named groups
+
+```toml
+[[processors.regex]]
+  namepass = ["nginx_requests"]
+
+  [[processors.regex.fields]]
+    key = "request"
+    pattern = '^/api/(?P<method>\w+)[/?].*category=(?P<category>\w+)&(?:.*)'
+```
+
+will result in
+
+```diff
+-nginx_requests,verb=GET,resp_code=200 request="/api/search/?category=plugins&q=regex&sort=asc",referrer="-",ident="-",http_version=1.1,agent="UserAgent",client_ip="127.0.0.1",auth="-",resp_bytes=270i 1519652321000000000
++nginx_requests,verb=GET,resp_code=200 request="/api/search/?category=plugins&q=regex&sort=asc",method="search",category="plugins",referrer="-",ident="-",http_version=1.1,agent="UserAgent",client_ip="127.0.0.1",auth="-",resp_bytes=270i 1519652321000000000
+```
+
+### Metric renaming
+
+```toml
+[[processors.regex]]
+  [[processors.regex.metric_rename]]
+    pattern = '^(\w+)_.*$'
+    replacement = "${1}"
+```
+
+will result in
+
+```diff
+-nginx_requests,verb=GET,resp_code=200 request="/api/search/?category=plugins&q=regex&sort=asc",referrer="-",ident="-",http_version=1.1,agent="UserAgent",client_ip="127.0.0.1",auth="-",resp_bytes=270i 1519652321000000000
++nginx,verb=GET,resp_code=200 request="/api/search/?category=plugins&q=regex&sort=asc",referrer="-",ident="-",http_version=1.1,agent="UserAgent",client_ip="127.0.0.1",auth="-",resp_bytes=270i 1519652321000000000
+```
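+
+### Tag renaming
+
+Tags can be renamed the same way as fields. As a sketch, this configuration
+renames the `resp_code` tag to `code` using the same input metric as above:
+
+```toml
+[[processors.regex]]
+  [[processors.regex.tag_rename]]
+    pattern = '^resp_(\w+)$'
+    replacement = "${1}"
+```
+
+will result in
+
+```diff
+-nginx_requests,verb=GET,resp_code=200 request="/api/search/?category=plugins&q=regex&sort=asc",referrer="-",ident="-",http_version=1.1,agent="UserAgent",client_ip="127.0.0.1",auth="-",resp_bytes=270i 1519652321000000000
++nginx_requests,verb=GET,code=200 request="/api/search/?category=plugins&q=regex&sort=asc",referrer="-",ident="-",http_version=1.1,agent="UserAgent",client_ip="127.0.0.1",auth="-",resp_bytes=270i 1519652321000000000
+```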
diff --git a/content/telegraf/v1/processor-plugins/rename/_index.md b/content/telegraf/v1/processor-plugins/rename/_index.md
new file mode 100644
index 000000000..b31277c4e
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/rename/_index.md
@@ -0,0 +1,58 @@
+---
+description: "Telegraf plugin for transforming metrics using Rename"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Rename
+    identifier: processor-rename
+tags: [Rename, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Rename Processor Plugin
+
+The `rename` processor renames measurements, fields, and tags.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure metric
+ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more
+details.
+
+## Configuration
+
+```toml @sample.conf
+# Rename measurements, tags, and fields that pass through this filter.
+[[processors.rename]]
+  ## Specify one sub-table per rename operation.
+  [[processors.rename.replace]]
+    measurement = "network_interface_throughput"
+    dest = "throughput"
+
+  [[processors.rename.replace]]
+    tag = "hostname"
+    dest = "host"
+
+  [[processors.rename.replace]]
+    field = "lower"
+    dest = "min"
+
+  [[processors.rename.replace]]
+    field = "upper"
+    dest = "max"
+```
+
+## Tags
+
+No tags are applied by this processor, though it can alter them by renaming.
+
+## Example
+
+```diff
+- network_interface_throughput,hostname=backend.example.com lower=10i,upper=1000i,mean=500i 1502489900000000000
++ throughput,host=backend.example.com min=10i,max=1000i,mean=500i 1502489900000000000
+```
diff --git a/content/telegraf/v1/processor-plugins/reverse_dns/_index.md b/content/telegraf/v1/processor-plugins/reverse_dns/_index.md
new file mode 100644
index 000000000..a6c057a2a
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/reverse_dns/_index.md
@@ -0,0 +1,94 @@
+---
+description: "Telegraf plugin for transforming metrics using Reverse DNS"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Reverse DNS
+    identifier: processor-reverse_dns
+tags: [Reverse DNS, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Reverse DNS Processor Plugin
+
+The `reverse_dns` processor does a reverse-dns lookup on tags (or fields) with
+IPs in them.
+
+Telegraf minimum version: Telegraf 1.15.0
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure metric
+ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more
+details.
+
+## Configuration
+
+```toml @sample.conf
+# ReverseDNS does a reverse lookup on IP addresses to retrieve the DNS name
+[[processors.reverse_dns]]
+  ## For optimal performance, you may want to limit which metrics are passed
+  ## to this processor, e.g.:
+  ## namepass = ["my_metric_*"]
+
+  ## cache_ttl is how long dns entries should stay cached. Generally longer
+  ## is better, but if you expect a large number of diverse lookups you'll
+  ## want to consider memory usage.
+  cache_ttl = "24h"
+
+  ## lookup_timeout is how long to wait for a single dns request to respond.
+  ## This is also the maximum acceptable latency for a metric travelling
+  ## through the reverse_dns processor. After lookup_timeout is exceeded, a
+  ## metric will be passed on unaltered.
+  ## Multiple simultaneous resolution requests for the same IP will only make
+  ## a single rDNS request, and they will all wait for the answer for this
+  ## long.
+  lookup_timeout = "3s"
+
+  ## max_parallel_lookups is the maximum number of dns requests to be in
+  ## flight at the same time. Requests hitting cached values do not count
+  ## against this total, and neither do multiple requests for the same IP.
+  ## It's probably best to keep this number fairly low.
+  max_parallel_lookups = 10
+
+  ## ordered controls whether or not the metrics need to stay in the same
+  ## order this plugin received them in. If false, this plugin will change
+  ## the order, with requests hitting cached results moving through
+  ## immediately instead of waiting on slower lookups. This may cause issues
+  ## if you depend on the order of metrics staying the same; if so, set this
+  ## to true. Keeping the metrics ordered may be slightly slower.
+  ordered = false
+
+  [[processors.reverse_dns.lookup]]
+    ## get the ip from the field "source_ip", and put the result in the field "source_name"
+    field = "source_ip"
+    dest = "source_name"
+
+  [[processors.reverse_dns.lookup]]
+    ## get the ip from the tag "destination_ip", and put the result in the tag
+    ## "destination_name".
+    tag = "destination_ip"
+    dest = "destination_name"
+
+    ## If you would prefer destination_name to be a field instead, you can use a
+    ## processors.converter after this one, specifying the order attribute.
+```
+
+## Example
+
+Example configuration:
+
+```toml
+[[processors.reverse_dns]]
+  [[processors.reverse_dns.lookup]]
+    tag = "ip"
+    dest = "domain"
+```
+
+```diff
+- ping,ip=8.8.8.8 elapsed=300i 1502489900000000000
++ ping,ip=8.8.8.8,domain=dns.google. elapsed=300i 1502489900000000000
+```
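+
+The same lookup works on fields. As a sketch, reading the IP from the field
+`source_ip` (as in the sample configuration above):
+
+```toml
+[[processors.reverse_dns]]
+  [[processors.reverse_dns.lookup]]
+    field = "source_ip"
+    dest = "source_name"
+```
+
+```diff
+- requests source_ip="8.8.8.8",count=10i 1502489900000000000
++ requests source_ip="8.8.8.8",source_name="dns.google.",count=10i 1502489900000000000
+```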
diff --git a/content/telegraf/v1/processor-plugins/s2geo/_index.md b/content/telegraf/v1/processor-plugins/s2geo/_index.md
new file mode 100644
index 000000000..8cfb671d0
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/s2geo/_index.md
@@ -0,0 +1,53 @@
+---
+description: "Telegraf plugin for transforming metrics using S2 Geo"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: S2 Geo
+    identifier: processor-s2geo
+tags: [S2 Geo, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# S2 Geo Processor Plugin
+
+Use the `s2geo` processor to add a tag with the S2 cell ID token of a
+specified [cell level](https://s2geometry.io/resources/s2cell_statistics.html).
+The tag is used in `experimental/geo` Flux package functions. The `lat` and
+`lon` field values should contain WGS-84 coordinates in decimal degrees.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure metric
+ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more
+details.
+
+## Configuration
+
+```toml @sample.conf
+# Add the S2 Cell ID as a tag based on latitude and longitude fields
+[[processors.s2geo]]
+  ## The name of the lat and lon fields containing WGS-84 latitude and
+  ## longitude in decimal degrees.
+  # lat_field = "lat"
+  # lon_field = "lon"
+
+  ## New tag to create
+  # tag_key = "s2_cell_id"
+
+  ## Cell level (see https://s2geometry.io/resources/s2cell_statistics.html)
+  # cell_level = 9
+```
+
+## Example
+
+```diff
+- mta,area=llir,id=GO505_20_2704,status=1 lat=40.878738,lon=-72.517572 1560540094
++ mta,area=llir,id=GO505_20_2704,status=1,s2_cell_id=89e8ed4 lat=40.878738,lon=-72.517572 1560540094
+```
diff --git a/content/telegraf/v1/processor-plugins/scale/_index.md b/content/telegraf/v1/processor-plugins/scale/_index.md
new file mode 100644
index 000000000..a4e86f71c
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/scale/_index.md
@@ -0,0 +1,95 @@
+---
+description: "Telegraf plugin for transforming metrics using Scale"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Scale
+    identifier: processor-scale
+tags: [Scale, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Scale Processor Plugin
+
+The scale processor filters for a set of fields and scales the respective
+values from an input range into the given output range according to this
+formula:
+
+```math
+\text{result}=(\text{value}-\text{input\_minimum})\cdot\frac{(\text{output\_maximum}-\text{output\_minimum})}
+{(\text{input\_maximum}-\text{input\_minimum})} +
+\text{output\_minimum}
+```
+
+Alternatively, you can apply a factor and offset to the input according to
+this formula
+
+```math
+\text{result}=\text{factor} \cdot \text{value} + \text{offset}
+```
+
+Input fields are converted to floating point values if possible. Otherwise,
+fields that cannot be converted are ignored and keep their original value.
+
+**Please note:** Neither the input nor the output values are clipped to their
+                 respective ranges!
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure metric
+ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more
+details.
+
+## Configuration
+
+```toml @sample.conf
+# Scale values with a predefined range to a different output range.
+[[processors.scale]]
+    ## It is possible to define multiple different scalings that can be
+    ## applied to different sets of fields. Each scaling expects the
+    ## following arguments:
+    ##   - input_minimum: Minimum expected input value
+    ##   - input_maximum: Maximum expected input value
+    ##   - output_minimum: Minimum desired output value
+    ##   - output_maximum: Maximum desired output value
+    ## alternatively you can specify a scaling with factor and offset
+    ##   - factor: factor to scale the input value with
+    ##   - offset: additive offset for value after scaling
+    ##   - fields: a list of field names (or filters) to apply this scaling to
+
+    ## Example: Scaling with minimum and maximum values
+    # [[processors.scale.scaling]]
+    #    input_minimum = 0.0
+    #    input_maximum = 1.0
+    #    output_minimum = 0.0
+    #    output_maximum = 100.0
+    #    fields = ["temperature1", "temperature2"]
+
+    ## Example: Scaling with factor and offset
+    # [[processors.scale.scaling]]
+    #    factor = 10.0
+    #    offset = -5.0
+    #    fields = ["voltage*"]
+```
+
+## Example
+
+The example below uses these scaling values:
+
+```toml
+[[processors.scale.scaling]]
+    input_minimum = 0.0
+    input_maximum = 50.0
+    output_minimum = 50.0
+    output_maximum = 100.0
+    fields = ["cpu"]
+```
+
+```diff
+- temperature cpu=25
++ temperature cpu=75.0
+```
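+
+A scaling with factor and offset works the same way. As an illustrative
+sketch, with these values:
+
+```toml
+[[processors.scale.scaling]]
+    factor = 10.0
+    offset = -5.0
+    fields = ["voltage"]
+```
+
+an input of `voltage=1.5` is scaled to `10.0 * 1.5 - 5.0 = 10`:
+
+```diff
+- power voltage=1.5
++ power voltage=10
+```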
diff --git a/content/telegraf/v1/processor-plugins/snmp_lookup/_index.md b/content/telegraf/v1/processor-plugins/snmp_lookup/_index.md
new file mode 100644
index 000000000..8168c5c7f
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/snmp_lookup/_index.md
@@ -0,0 +1,151 @@
+---
+description: "Telegraf plugin for transforming metrics using SNMP Lookup"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: SNMP Lookup
+    identifier: processor-snmp_lookup
+tags: [SNMP Lookup, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# SNMP Lookup Processor Plugin
+
+The `snmp_lookup` plugin looks up extra tags using SNMP and caches them.
+
+Telegraf minimum version: Telegraf 1.30.0
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure metric
+ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more
+details.
+
+## Secret-store support
+
+This plugin supports secrets from secret-stores for the `auth_password` and
+`priv_password` options.
+See the [secret-store documentation](/telegraf/v1/configuration/#secret-store-secrets)
+for more details on how to use them.
+
+## Configuration
+
+```toml @sample.conf
+# Lookup extra tags via SNMP based on the table index
+[[processors.snmp_lookup]]
+  ## Name of tag of the SNMP agent to do the lookup on
+  # agent_tag = "source"
+
+  ## Name of tag holding the table row index
+  # index_tag = "index"
+
+  ## Timeout for each request.
+  # timeout = "5s"
+
+  ## SNMP version; can be 1, 2, or 3.
+  # version = 2
+
+  ## SNMP community string.
+  # community = "public"
+
+  ## Number of retries to attempt.
+  # retries = 3
+
+  ## The GETBULK max-repetitions parameter.
+  # max_repetitions = 10
+
+  ## SNMPv3 authentication and encryption options.
+  ##
+  ## Security Name.
+  # sec_name = "myuser"
+  ## Authentication protocol; one of "MD5", "SHA", or "".
+  # auth_protocol = "MD5"
+  ## Authentication password.
+  # auth_password = "pass"
+  ## Security Level; one of "noAuthNoPriv", "authNoPriv", or "authPriv".
+  # sec_level = "authNoPriv"
+  ## Context Name.
+  # context_name = ""
+  ## Privacy protocol used for encrypted messages; one of "DES", "AES" or "".
+  # priv_protocol = ""
+  ## Privacy password used for encrypted messages.
+  # priv_password = ""
+
+  ## The maximum number of SNMP requests to make at the same time.
+  # max_parallel_lookups = 16
+
+  ## The number of agents to cache entries for. If the limit is reached,
+  ## the oldest entries will be removed first. 0 means no limit.
+  # max_cache_entries = 100
+
+  ## Control whether the metrics need to stay in the same order this plugin
+  ## received them in. If false, this plugin may change the order when data is
+  ## cached. If you need metrics to stay in order set this to true. Keeping the
+  ## metrics ordered may be slightly slower.
+  # ordered = false
+
+  ## The amount of time entries are cached for a given agent. After this period
+  ## elapses if tags are needed they will be retrieved again.
+  # cache_ttl = "8h"
+
+  ## Minimum time between requests to an agent in case an index could not be
+  ## resolved. If set to zero no request on missing indices will be triggered.
+  # min_time_between_updates = "5m"
+
+  ## List of tags to be looked up.
+  [[processors.snmp_lookup.tag]]
+    ## Object identifier of the variable as a numeric or textual OID.
+    oid = "IF-MIB::ifName"
+
+    ## Name of the tag to create.  If not specified, it defaults to the value of 'oid'.
+    ## If 'oid' is numeric, an attempt to translate the numeric OID into a textual OID
+    ## will be made.
+    # name = ""
+
+    ## Apply one of the following conversions to the variable value:
+    ##   hwaddr:  Convert the value to a MAC address.
+    ##   ipaddr:  Convert the value to an IP address.
+    ##   enum(1): Convert the value according to its syntax in the MIB (full).
+    ##   enum:    Convert the value according to its syntax in the MIB.
+    ##
+    # conversion = ""
+```
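+
+For SNMPv3 agents, the authentication and encryption options above combine as
+in the following sketch (the credentials are placeholders, not working values):
+
+```toml
+[[processors.snmp_lookup]]
+  version = 3
+  sec_name = "telegraf"          ## hypothetical SNMPv3 user
+  sec_level = "authPriv"         ## authenticate and encrypt
+  auth_protocol = "SHA"
+  auth_password = "authpass"
+  priv_protocol = "AES"
+  priv_password = "privpass"
+
+  [[processors.snmp_lookup.tag]]
+    oid = "IF-MIB::ifName"
+```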
+
+## Examples
+
+### Default configuration
+
+With the default configuration shown above, the plugin looks up the row
+identified by the `index` tag on the agent named in the `source` tag and adds
+the resolved `ifName`:
+
+```diff
+- foo,index=2,source=127.0.0.1 field=123
++ foo,ifName=eth0,index=2,source=127.0.0.1 field=123
+```
+
+### processors.ifname replacement
+
+The following config will use the same `ifDescr` fallback as `processors.ifname`
+when there is no `ifName` value on the device.
+
+```toml
+[[processors.snmp_lookup]]
+  agent_tag = "agent"
+  index_tag = "ifIndex"
+
+  [[processors.snmp_lookup.tag]]
+    oid = ".1.3.6.1.2.1.2.2.1.2"
+    name = "ifName"
+
+  [[processors.snmp_lookup.tag]]
+    oid = ".1.3.6.1.2.1.31.1.1.1.1"
+    name = "ifName"
+```
+
+```diff
+- foo,agent=127.0.0.1,ifIndex=2 field=123
++ foo,agent=127.0.0.1,ifIndex=2,ifName=eth0 field=123
+```
diff --git a/content/telegraf/v1/processor-plugins/split/_index.md b/content/telegraf/v1/processor-plugins/split/_index.md
new file mode 100644
index 000000000..8cd86107c
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/split/_index.md
@@ -0,0 +1,84 @@
+---
+description: "Telegraf plugin for transforming metrics using Split"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Split
+    identifier: processor-split
+tags: [Split, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Split Processor Plugin
+
+This plugin splits a metric into one or more metrics based on user-provided
+templates. New metrics keep the timestamp of the source metric. Templates can
+overlap: a field or tag used across multiple templates ends up in each of the
+resulting metrics.
+
+**NOTE**: If `drop_original` is set to `true`, the plugin can drop all
+metrics when no template matches! Test your templates before putting them
+into production *and* use metric filtering to avoid data loss.
+
+Some outputs are sensitive to the number of metric series that are produced.
+Multiple metrics of the same series (i.e. identical name, tag key-values and
+field name) with the same timestamp might result in squashing those points
+to the latest metric produced.
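+
+As the note above suggests, pairing `drop_original = true` with metric
+filtering limits the risk of data loss. A sketch using the standard `namepass`
+selector so that only the intended metrics ever reach the processor:
+
+```toml
+[[processors.split]]
+  ## Only metrics named "sensors" are processed; all others pass through
+  ## untouched, so dropping the original cannot affect unrelated series.
+  namepass = ["sensors"]
+  drop_original = true
+
+  [[processors.split.template]]
+    name = "sensor1"
+    tags = ["*"]
+    fields = ["sensor1*"]
+```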
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Split a metric into one or more metrics with the specified field(s)/tag(s)
+[[processors.split]]
+  ## Keeps the original metric by default
+  # drop_original = false
+
+  ## Template for an output metric
+  ## Users can define multiple templates to split the original metric into
+  ## multiple, potentially overlapping, metrics.
+  [[processors.split.template]]
+    ## New metric name
+    name = ""
+
+    ## List of tag keys for this metric template, accepts globs, e.g. "*"
+    tags = []
+
+    ## List of field keys for this metric template, accepts globs, e.g. "*"
+    fields = []
+```
+
+## Example
+
+The following takes a single metric with data from two sensors and splits out
+each sensor into its own metric. It also copies all tags from the original
+metric to the new metric.
+
+```toml
+[[processors.split]]
+  drop_original = true
+  [[processors.split.template]]
+    name = "sensor1"
+    tags = [ "*" ]
+    fields = [ "sensor1*" ]
+  [[processors.split.template]]
+    name = "sensor2"
+    tags = [ "*" ]
+    fields = [ "sensor2*" ]
+```
+
+```diff
+-metric,status=active sensor1_channel1=4i,sensor1_channel2=2i,sensor2_channel1=1i,sensor2_channel2=2i 1684784689000000000
++sensor1,status=active sensor1_channel1=4i,sensor1_channel2=2i 1684784689000000000
++sensor2,status=active sensor2_channel1=1i,sensor2_channel2=2i 1684784689000000000
+```
diff --git a/content/telegraf/v1/processor-plugins/starlark/_index.md b/content/telegraf/v1/processor-plugins/starlark/_index.md
new file mode 100644
index 000000000..32a7e4128
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/starlark/_index.md
@@ -0,0 +1,284 @@
+---
+description: "Telegraf plugin for transforming metrics using Starlark"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Starlark
+    identifier: processor-starlark
+tags: [Starlark, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Starlark Processor Plugin
+
+The `starlark` processor calls a Starlark function for each matched metric,
+allowing for custom programmatic metric processing.
+
+The Starlark language is a dialect of Python, and will be familiar to those who
+have experience with the Python language. However, there are major
+differences.  Existing Python code is unlikely to work
+unmodified.  The execution environment is sandboxed, and it is not possible to
+do I/O operations such as reading from files or sockets.
+
+The **[Starlark specification](https://github.com/google/starlark-go/blob/d1966c6b9fcd/doc/spec.md)** has details about the syntax and available
+functions.
+
+Telegraf minimum version: Telegraf 1.15.0
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Process metrics using a Starlark script
+[[processors.starlark]]
+  ## The Starlark source can be set as a string in this configuration file, or
+  ## by referencing a file containing the script.  Only one source or script
+  ## should be set at once.
+
+  ## Source of the Starlark script.
+  source = '''
+def apply(metric):
+  return metric
+'''
+
+  ## File containing a Starlark script.
+  # script = "/usr/local/bin/myscript.star"
+
+  ## The constants of the Starlark script.
+  # [processors.starlark.constants]
+  #   max_size = 10
+  #   threshold = 0.75
+  #   default_name = "Julia"
+  #   debug_mode = true
+```
+
+## Usage
+
+The Starlark code should contain a function called `apply` that takes a metric
+as its single argument.  The function will be called with each metric, and can
+return `None`, a single metric, or a list of metrics.
+
+```python
+def apply(metric):
+    return metric
+```
+
+For a list of available types and functions that can be used in the code, see
+the [Starlark specification](https://github.com/google/starlark-go/blob/d1966c6b9fcd/doc/spec.md).
+
+In addition to these, the following Telegraf-specific types and functions are
+exposed to the script.
+
+- **Metric(*name*)**:
+Create a new metric with the given measurement name.  The metric will have no
+tags or fields and defaults to the current time.
+
+- **name**:
+The name is a [string](https://github.com/google/starlark-go/blob/d1966c6b9fcd/doc/spec.md#strings) containing the metric measurement name.
+
+- **tags**:
+A [dict-like](https://github.com/google/starlark-go/blob/d1966c6b9fcd/doc/spec.md#dictionaries) object containing the metric's tags.
+
+- **fields**:
+A [dict-like](https://github.com/google/starlark-go/blob/d1966c6b9fcd/doc/spec.md#dictionaries) object containing the metric's fields.  The values may be
+of type int, float, string, or bool.
+
+- **time**:
+The timestamp of the metric as an integer in nanoseconds since the Unix
+epoch.
+
+- **deepcopy(*metric*, *track=false*)**:
+Copy an existing metric with or without tracking information. If `track` is set
+to `true`, the tracking information is copied.
+**Caution:** Make sure to always return *all* metrics with tracking information!
+Otherwise, the corresponding inputs will never receive the delivery information
+and potentially overrun!
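+
+The attributes above compose naturally inside a script. For example, a sketch
+that records the timestamp as a field in seconds and upper-cases a tag (the
+field and tag names are illustrative):
+
+```toml
+[[processors.starlark]]
+  source = '''
+def apply(metric):
+    # time is in nanoseconds since the Unix epoch
+    metric.fields["time_s"] = metric.time // 1000000000
+    # tags behaves like a dict of strings
+    if "host" in metric.tags:
+        metric.tags["host"] = metric.tags["host"].upper()
+    return metric
+'''
+```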
+
+### Python Differences
+
+While Starlark is similar to Python, there are important differences to note:
+
+- Starlark has limited support for error handling and no exceptions.  If an
+  error occurs the script will immediately end and Telegraf will drop the
+  metric.  Check the Telegraf logfile for details about the error.
+
+- It is not possible to import other packages and the Python standard library
+  is not available.
+
+- It is not possible to open files or sockets.
+
+- These common keywords are **not supported** in the Starlark grammar:
+
+  ```text
+  as             finally        nonlocal
+  assert         from           raise
+  class          global         try
+  del            import         with
+  except         is             yield
+  ```
+
+### Libraries available
+
+The ability to load external scripts other than your own is pretty limited. The
+following libraries are available for loading:
+
+- json: `load("json.star", "json")` provides the following functions: `json.encode()`, `json.decode()`, `json.indent()`. See json.star for an example. For more details about the functions, please refer to [the documentation of this library](https://pkg.go.dev/go.starlark.net/lib/json).
+- log: `load("logging.star", "log")` provides the following functions: `log.debug()`, `log.info()`, `log.warn()`, `log.error()`. See logging.star for an example.
+- math: `load("math.star", "math")` provides [the following functions and constants](https://pkg.go.dev/go.starlark.net/lib/math). See math.star for an example.
+- time: `load("time.star", "time")` provides time-related functions. See time_date.star, time_duration.star and/or time_timestamp.star for an example. For more details about the functions, please refer to [the documentation of this library](https://pkg.go.dev/go.starlark.net/lib/time).
+
+If you would like to see support for something else here, please open an issue.
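+
+Loading one of these libraries is a single `load` statement at the top of the
+script. For example, a sketch that logs each metric name as it passes through:
+
+```toml
+[[processors.starlark]]
+  source = '''
+load("logging.star", "log")
+
+def apply(metric):
+    log.debug("processing metric: {}".format(metric.name))
+    return metric
+'''
+```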
+
+### Common Questions
+
+**What's the performance cost to using Starlark?**
+
+In local tests, it takes about 1µs (1 microsecond) to run a modest script to
+process one metric. This is going to vary with the size of your script, but the
+total impact is minimal.  At this pace, it's likely not going to be the
+bottleneck in your Telegraf setup.
+
+**How can I drop/delete a metric?**
+
+If you don't return the metric it will be deleted.  Usually this means the
+function should `return None`.
+
+**How should I make a copy of a metric?**
+
+Use `deepcopy(metric)` to create a copy of the metric.
+
+**How can I return multiple metrics?**
+
+You can return a list of metrics:
+
+```python
+def apply(metric):
+    m2 = deepcopy(metric)
+    return [metric, m2]
+```
+
+**What happens to a tracking metric if an error occurs in the script?**
+
+The metric is marked as undelivered.
+
+**How do I create a new metric?**
+
+Use the `Metric(name)` function and set at least one field.
+
+**What is the fastest way to iterate over tags/fields?**
+
+The fastest way to iterate is to use a for-loop on the tags or fields attribute:
+
+```python
+def apply(metric):
+    for k in metric.tags:
+        pass
+    return metric
+```
+
+When you use this form, it is not possible to modify the tags inside the loop.
+If you need to do that, use one of the `.keys()`, `.values()`, or
+`.items()` methods:
+
+```python
+def apply(metric):
+    for k, v in metric.tags.items():
+        pass
+    return metric
+```
+
+**How can I save values across multiple calls to the script?**
+
+Telegraf freezes the global scope, which prevents it from being modified,
+except for a special shared global dictionary named `state`; this can be used
+by the `apply` function. See the "compare with previous metric" example listed
+below.
+
+**How can I catch errors that occur in the script?**
+
+In case you need to call code that may fail, you can delegate the call to the
+built-in `catch` function, which takes a callable as its argument and returns
+the error that occurred, if any, or `None` otherwise. For example:
+
+```python
+load("json.star", "json")
+
+def apply(metric):
+    # Catch the error raised when decoding invalid JSON
+    error = catch(lambda: failing(metric))
+    if error != None:
+        # Some code to execute in case of an error
+        metric.fields["error"] = error
+    return metric
+
+def failing(metric):
+    json.decode("non-json-content")
+```
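+
+A minimal sketch of the shared `state` dictionary, computing the difference
+from the previously seen value (the field names are illustrative):
+
+```toml
+[[processors.starlark]]
+  source = '''
+state = {}
+
+def apply(metric):
+    last = state.get("last")
+    value = metric.fields.get("value")
+    state["last"] = value
+    if last != None and value != None:
+        metric.fields["delta"] = value - last
+    return metric
+'''
+```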
+
+**How to reuse the same script but with different parameters?**
+
+In case you have a generic script that you would like to reuse for different
+instances of the plugin, you can use constants as input parameters of your
+script.
+
+So for example, assuming that you have the following configuration:
+
+```toml
+[[processors.starlark]]
+  script = "/usr/local/bin/myscript.star"
+
+  [processors.starlark.constants]
+    somecustomnum = 10
+    somecustomstr = "mycustomfield"
+```
+
+Your script could then use the constants defined in the configuration as
+follows:
+
+```python
+def apply(metric):
+    if metric.fields[somecustomstr] >= somecustomnum:
+        metric.fields.clear()
+    return metric
+```
+
+**What does `cannot represent integer ...` mean?**
+
+The error occurs if an integer value in Starlark exceeds the signed 64-bit
+integer limit. This can happen if you sum up large values into a Starlark
+integer or convert an unsigned 64-bit integer to Starlark and then create a
+new metric field from it.
+
+This is because integer values in Starlark are *always* signed and can grow
+beyond the 64-bit size; converting such a value back therefore fails in the
+cases mentioned above.
+
+As a workaround you can either clip the field value at the signed 64-bit limit
+or return the value as a floating-point number.
+
+### Examples
+
+- drop string fields - Drop fields containing string values.
+- drop fields with unexpected type - Drop fields containing unexpected value types.
+- iops
+- json - an example of processing JSON from a field in a metric
+- math - Use a math function to compute the value of a field. [The list of the supported math functions and constants](https://pkg.go.dev/go.starlark.net/lib/math).
+- number logic - transform a numerical value to another numerical value
+- pivot - Pivots a key's value to be the key for another key.
+- ratio - Compute the ratio of two integer fields
+- rename - Rename tags or fields using a name mapping.
+- scale - Multiply any field by a number
+- time date - Parse a date and extract the year, month and day from it.
+- time duration - Parse a duration and convert it into a total amount of seconds.
+- time timestamp - Filter metrics based on the timestamp in seconds.
+- time timestamp nanoseconds - Filter metrics based on the timestamp with nanoseconds.
+- time timestamp current - Setting the metric timestamp to the current/local time.
+- value filter - Remove a metric based on a field value.
+- logging - Log messages with the logger of Telegraf
+- multiple metrics - Return multiple metrics by using [a list](https://docs.bazel.build/versions/master/skylark/lib/list.html) of metrics.
+- multiple metrics from json array - Builds a new metric from each element of a json array then returns all the created metrics.
+- custom error - Return a custom error with [fail](https://docs.bazel.build/versions/master/skylark/lib/globals.html#fail).
+- compare with previous metric - Compare the current metric with the previous one using the shared state.
+- rename prometheus remote write - Rename prometheus remote write measurement name with fieldname and rename fieldname to value.
+
+All examples are in the testdata folder.
+
+Open a Pull Request to add any other useful Starlark examples.
diff --git a/content/telegraf/v1/processor-plugins/strings/_index.md b/content/telegraf/v1/processor-plugins/strings/_index.md
new file mode 100644
index 000000000..aaf9fedab
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/strings/_index.md
@@ -0,0 +1,198 @@
+---
+description: "Telegraf plugin for transforming metrics using Strings"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Strings
+    identifier: processor-strings
+tags: [Strings, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Strings Processor Plugin
+
+The `strings` plugin maps certain Go string functions onto measurement, tag,
+and field values.  Values can be modified in place or stored in another key.
+
+Implemented functions are:
+
+- lowercase
+- uppercase
+- titlecase
+- trim
+- trim_left
+- trim_right
+- trim_prefix
+- trim_suffix
+- replace
+- left
+- base64decode
+- valid_utf8
+
+Please note that in this implementation these are processed in the order that
+they appear above.
+
+Specify the `measurement`, `tag`, `tag_key`, `field`, or `field_key` that you
+want processed in each section and optionally a `dest` if you want the result
+stored in a new tag or field. You can specify lots of transformations on data
+with a single strings processor.
+
+If you'd like to apply the change to every `tag`, `tag_key`, `field`,
+`field_key`, or `measurement`, use the value `"*"` for each respective
+field. Note that the `dest` field will be ignored if `"*"` is used.
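+
+For example, to keep the original tag while storing an uppercased copy under a
+new key (the tag names are illustrative):
+
+```toml
+[[processors.strings]]
+  [[processors.strings.uppercase]]
+    tag = "method"
+    dest = "method_upper"
+```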
+
+If you'd like to apply multiple processings to the same `tag_key` or
+`field_key`, note the process order stated above. See the second example below
+for an example.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and field or create aliases and configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Perform string processing on tags, fields, and measurements
+[[processors.strings]]
+  ## Convert a field value to lowercase and store in a new field
+  # [[processors.strings.lowercase]]
+  #   field = "uri_stem"
+  #   dest = "uri_stem_normalised"
+
+  ## Convert a tag value to uppercase
+  # [[processors.strings.uppercase]]
+  #   tag = "method"
+
+  ## Convert a field value to titlecase
+  # [[processors.strings.titlecase]]
+  #   field = "status"
+
+  ## Trim leading and trailing whitespace using the default cutset
+  # [[processors.strings.trim]]
+  #   field = "message"
+
+  ## Trim leading characters in cutset
+  # [[processors.strings.trim_left]]
+  #   field = "message"
+  #   cutset = "\t"
+
+  ## Trim trailing characters in cutset
+  # [[processors.strings.trim_right]]
+  #   field = "message"
+  #   cutset = "\r\n"
+
+  ## Trim the given prefix from the field
+  # [[processors.strings.trim_prefix]]
+  #   field = "my_value"
+  #   prefix = "my_"
+
+  ## Trim the given suffix from the field
+  # [[processors.strings.trim_suffix]]
+  #   field = "read_count"
+  #   suffix = "_count"
+
+  ## Replace all non-overlapping instances of old with new
+  # [[processors.strings.replace]]
+  #   measurement = "*"
+  #   old = ":"
+  #   new = "_"
+
+  ## Trims strings based on width
+  # [[processors.strings.left]]
+  #   field = "message"
+  #   width = 10
+
+  ## Decode a base64 encoded utf-8 string
+  # [[processors.strings.base64decode]]
+  #   field = "message"
+
+  ## Sanitize a string to ensure it is a valid utf-8 string
+  ## Each run of invalid UTF-8 byte sequences is replaced by the replacement string, which may be empty
+  # [[processors.strings.valid_utf8]]
+  #   field = "message"
+  #   replacement = ""
+```
+
+### Trim, TrimLeft, TrimRight
+
+The `trim`, `trim_left`, and `trim_right` functions take an optional parameter:
+`cutset`.  This value is a string containing the characters to remove from the
+value.
+
+### TrimPrefix, TrimSuffix
+
+The `trim_prefix` and `trim_suffix` functions remove the given `prefix` or
+`suffix` respectively from the string.
+
+### Replace
+
+The `replace` function performs a substring replacement across the entire
+string, which helps reconcile naming conventions between various input and
+output plugins. Example usages are eliminating disallowed characters in
+field names or converting from one separator convention to another. It can
+also be used to remove unneeded characters from metrics.
+If the replacement would delete the entire name, the plugin refuses to
+perform the operation and keeps the old name.
+
+## Example
+
+A sample configuration:
+
+```toml
+[[processors.strings]]
+  [[processors.strings.lowercase]]
+    tag = "uri_stem"
+
+  [[processors.strings.trim_prefix]]
+    tag = "uri_stem"
+    prefix = "/api/"
+
+  [[processors.strings.uppercase]]
+    field = "cs-host"
+    dest = "cs-host_normalised"
+```
+
+Sample input:
+
+```text
+iis_log,method=get,uri_stem=/API/HealthCheck cs-host="MIXEDCASE_host",http_version=1.1 1519652321000000000
+```
+
+Sample output:
+
+```text
+iis_log,method=get,uri_stem=healthcheck cs-host="MIXEDCASE_host",http_version=1.1,cs-host_normalised="MIXEDCASE_HOST" 1519652321000000000
+```
+
+### Second Example
+
+A sample configuration:
+
+```toml
+[[processors.strings]]
+  [[processors.strings.lowercase]]
+    tag_key = "URI-Stem"
+
+  [[processors.strings.replace]]
+    tag_key = "uri-stem"
+    old = "-"
+    new = "_"
+```
+
+Sample input:
+
+```text
+iis_log,URI-Stem=/API/HealthCheck http_version=1.1 1519652321000000000
+```
+
+Sample output:
+
+```text
+iis_log,uri_stem=/API/HealthCheck http_version=1.1 1519652321000000000
+```
diff --git a/content/telegraf/v1/processor-plugins/tag_limit/_index.md b/content/telegraf/v1/processor-plugins/tag_limit/_index.md
new file mode 100644
index 000000000..50099c49d
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/tag_limit/_index.md
@@ -0,0 +1,49 @@
+---
+description: "Telegraf plugin for transforming metrics using Tag Limit"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Tag Limit
+    identifier: processor-tag_limit
+tags: [Tag Limit, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Tag Limit Processor Plugin
+
+Use the `tag_limit` processor to ensure that only a certain number of tags are
+preserved for any given metric, and to choose the tags to preserve when the
+number of tags appended by the data source is over the limit.
+
+This can be useful when dealing with output systems (e.g. Stackdriver) that
+impose hard limits on the number of tags/labels per metric or where high
+levels of cardinality are computationally and/or financially expensive.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Restricts the number of tags that can pass through this filter and chooses which tags to preserve when over the limit.
+[[processors.tag_limit]]
+  ## Maximum number of tags to preserve
+  limit = 3
+
+  ## List of tags to preferentially preserve
+  keep = ["environment", "region"]
+```
+
+## Example
+
+```diff
++ throughput month=Jun,environment=qa,region=us-east1,lower=10i,upper=1000i,mean=500i 1560540094000000000
++ throughput environment=qa,region=us-east1,lower=10i 1560540094000000000
+```
diff --git a/content/telegraf/v1/processor-plugins/template/_index.md b/content/telegraf/v1/processor-plugins/template/_index.md
new file mode 100644
index 000000000..ebf546cfe
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/template/_index.md
@@ -0,0 +1,145 @@
+---
+description: "Telegraf plugin for transforming metrics using Template"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Template
+    identifier: processor-template
+tags: [Template, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Template Processor Plugin
+
+The `template` processor applies a Go template to metrics to generate a new
+tag.  The primary use case of this plugin is to create a tag that can be used
+for dynamic routing to multiple output plugins or using an output specific
+routing option.
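+
+For example, the generated tag can drive per-bucket routing when paired with
+an output that supports tag-based routing; the sketch below assumes the
+InfluxDB v2 output and its `bucket_tag` option:
+
+```toml
+[[processors.template]]
+  tag = "bucket"
+  template = '{{ .Tag "environment" }}'
+
+[[outputs.influxdb_v2]]
+  urls = ["http://localhost:8086"]
+  ## Route each metric to the bucket named by the generated tag
+  bucket_tag = "bucket"
+```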
+
+The template has access to each metric's measurement name, tags, fields, and
+timestamp using the interface in `/template_metric.go`.
+
+Read the full [Go Template Documentation](https://golang.org/pkg/text/template/).
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Uses a Go template to create a new tag
+[[processors.template]]
+  ## Go template used to create the tag name of the output. In order to
+  ## ease TOML escaping requirements, you should use single quotes around
+  ## the template string.
+  tag = "topic"
+
+  ## Go template used to create the tag value of the output. In order to
+  ## ease TOML escaping requirements, you should use single quotes around
+  ## the template string.
+  template = '{{ .Tag "hostname" }}.{{ .Tag "level" }}'
+```
+
+## Examples
+
+### Combine multiple tags to create a single tag
+
+```toml
+[[processors.template]]
+  tag = "topic"
+  template = '{{ .Tag "hostname" }}.{{ .Tag "level" }}'
+```
+
+```diff
+- cpu,level=debug,hostname=localhost time_idle=42
++ cpu,level=debug,hostname=localhost,topic=localhost.debug time_idle=42
+```
+
+### Use a field value as tag name
+
+```toml
+[[processors.template]]
+  tag = '{{ .Field "type" }}'
+  template = '{{ .Name }}'
+```
+
+```diff
+- cpu,level=debug,hostname=localhost time_idle=42,type=sensor
++ cpu,level=debug,hostname=localhost,sensor=cpu time_idle=42,type=sensor
+```
+
+### Add measurement name as a tag
+
+```toml
+[[processors.template]]
+  tag = "measurement"
+  template = '{{ .Name }}'
+```
+
+```diff
+- cpu,hostname=localhost time_idle=42
++ cpu,hostname=localhost,measurement=cpu time_idle=42
+```
+
+### Add the year as a tag, similar to the date processor
+
+```toml
+[[processors.template]]
+  tag = "year"
+  template = '{{.Time.UTC.Year}}'
+```
+
+### Add all fields as a tag
+
+Sometimes it is useful to pass all fields with their values into a single
+message to send to a monitoring system (e.g. Syslog, GroundWork). In that
+case, use `.Fields` or `.Tags`:
+
+```toml
+[[processors.template]]
+  tag = "message"
+  template = 'Message about {{.Name}} fields: {{.Fields}}'
+```
+
+```diff
+- cpu,hostname=localhost time_idle=42
++ cpu,hostname=localhost,message=Message\ about\ cpu\ fields:\ map[time_idle:42] time_idle=42
+```
+
+A more advanced example, which renders each field on its own line:
+
+```toml
+[[processors.template]]
+  tag = "message"
+  template = '''Message about {{.Name}} fields:
+{{ range $field, $value := .Fields -}}
+{{$field}}:{{$value}}
+{{ end }}'''
+```
+
+```diff
+- cpu,hostname=localhost time_idle=42
++ cpu,hostname=localhost,message=Message\ about\ cpu\ fields:\ntime_idle:42\n time_idle=42
+```
+
+### Just add the current metric as a tag
+
+```toml
+[[processors.template]]
+  tag = "metric"
+  template = '{{.}}'
+```
+
+```diff
+- cpu,hostname=localhost time_idle=42
++ cpu,hostname=localhost,metric=cpu\ map[hostname:localhost]\ map[time_idle:42]\ 1257894000000000000 time_idle=42
+```
+
+[Go Template Documentation]: https://golang.org/pkg/text/template/
diff --git a/content/telegraf/v1/processor-plugins/timestamp/_index.md b/content/telegraf/v1/processor-plugins/timestamp/_index.md
new file mode 100644
index 000000000..b0d13311b
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/timestamp/_index.md
@@ -0,0 +1,110 @@
+---
+description: "Telegraf plugin for transforming metrics using Timestamp"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Timestamp
+    identifier: processor-timestamp
+tags: [Timestamp, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Timestamp Processor Plugin
+
+Use the timestamp processor to parse fields containing timestamps into
+timestamps of other formats.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, configure ordering, etc.
+See the [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Convert a timestamp field to another timestamp format
+[[processors.timestamp]]
+  ## Timestamp key to convert
+  ## Specify the field name that contains the timestamp to convert. The result
+  ## will replace the current field value.
+  field = ""
+
+  ## Timestamp Format
+  ## This defines the time layout used to interpret the source timestamp field.
+  ## The format must be `unix`, `unix_ms`, `unix_us`, `unix_ns`, or a time
+  ## layout in Go "reference time" notation. For more information on the Go
+  ## "reference time", see: https://golang.org/pkg/time/#Time.Format
+  source_timestamp_format = ""
+
+  ## Timestamp Timezone
+  ## Source timestamp timezone. If not set, assumed to be in UTC.
+  ## Options are as follows:
+  ##   1. UTC                 -- or unspecified will return timestamp in UTC
+  ##   2. Local               -- interpret based on machine localtime
+  ##   3. "America/New_York"  -- Unix TZ values like those found in
+  ##        https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
+  # source_timestamp_timezone = ""
+
+  ## Target timestamp format
+  ## This defines the destination timestamp format. It also can accept either
+  ## `unix`, `unix_ms`, `unix_us`, `unix_ns`, or a time in Go "reference time".
+  destination_timestamp_format = ""
+
+  ## Target Timestamp Timezone
+  ## Destination timestamp timezone. If not set, assumed to be in UTC.
+  ## Options are as follows:
+  ##   1. UTC                 -- or unspecified will return timestamp in UTC
+  ##   2. Local               -- interpret based on machine localtime
+  ##   3. "America/New_York"  -- Unix TZ values like those found in
+  ##        https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
+  # destination_timestamp_timezone = ""
+```
+
+## Example
+
+Convert a timestamp to a Unix timestamp:
+
+```toml
+[[processors.timestamp]]
+  field = "timestamp"
+  source_timestamp_format = "2006-01-02T15:04:05.999999999Z"
+  destination_timestamp_format = "unix"
+```
+
+```diff
+- metric value=42i,timestamp="2024-03-04T10:10:32.123456Z" 1560540094000000000
++ metric value=42i,timestamp=1709547032 1560540094000000000
+```
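+
+The conversion above can be checked outside Telegraf with a short Python
+sketch (the `to_unix` helper is illustrative, not part of the plugin):
+
+```python
+from datetime import datetime, timezone
+
+def to_unix(source: str) -> int:
+    """Parse an RFC3339-style UTC timestamp and return whole Unix seconds."""
+    # Strip the trailing 'Z' and attach an explicit UTC offset before converting.
+    dt = datetime.fromisoformat(source.rstrip("Z")).replace(tzinfo=timezone.utc)
+    return int(dt.timestamp())
+
+print(to_unix("2024-03-04T10:10:32.123456Z"))  # 1709547032
+```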
+
+Convert the same timestamp to a nanosecond unix timestamp:
+
+```toml
+[[processors.timestamp]]
+  field = "timestamp"
+  source_timestamp_format = "2006-01-02T15:04:05.999999999Z"
+  destination_timestamp_format = "unix_ns"
+```
+
+```diff
+- metric value=42i,timestamp="2024-03-04T10:10:32.123456789Z" 1560540094000000000
++ metric value=42i,timestamp=1709547032123456789 1560540094000000000
+```
+
+Convert the timestamp to another timestamp format:
+
+```toml
+[[processors.timestamp]]
+  field = "timestamp"
+  source_timestamp_format = "2006-01-02T15:04:05.999999999Z"
+  destination_timestamp_format = "2006-01-02T15:04"
+```
+
+```diff
+- metric value=42i,timestamp="2024-03-04T10:10:32.123456Z" 1560540094000000000
++ metric value=42i,timestamp="2024-03-04T10:10" 1560540094000000000
+```
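+
+Assuming the timezone options behave as described in the sample configuration
+above, a source timestamp recorded in local New York time could be converted to
+a Unix timestamp like this (the field name and format values are illustrative):
+
+```toml
+[[processors.timestamp]]
+  field = "timestamp"
+  source_timestamp_format = "2006-01-02 15:04:05"
+  source_timestamp_timezone = "America/New_York"
+  destination_timestamp_format = "unix"
+```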
diff --git a/content/telegraf/v1/processor-plugins/topk/_index.md b/content/telegraf/v1/processor-plugins/topk/_index.md
new file mode 100644
index 000000000..34412df48
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/topk/_index.md
@@ -0,0 +1,145 @@
+---
+description: "Telegraf plugin for transforming metrics using TopK"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: TopK
+    identifier: processor-topk
+tags: [TopK, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# TopK Processor Plugin
+
+The TopK processor plugin is a filter designed to get the top series over a
+period of time. It can be tweaked to calculate the top metrics via different
+aggregation functions.
+
+This processor goes through these steps when processing a batch of metrics:
+
+1. Groups measurements into buckets based on their tags and name
+2. Every N seconds, for each bucket and each selected field, aggregates all the measurements using a given aggregation function (min, sum, mean, etc.)
+3. For each computed aggregation, orders the buckets by the aggregated value and returns all measurements in the top `K` buckets
+
+Notes:
+
+* The plugin deduplicates metrics
+* The name of the measurement is always used when grouping it
+* Depending on the amount of metrics in each bucket, more than `K` series may be returned
+* If a measurement does not have one of the selected fields, it is dropped from the aggregation
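+
+The steps above can be sketched in Python (a simplified model, not the
+plugin's Go implementation; the dict-based metric layout is illustrative):
+
+```python
+from collections import defaultdict
+
+def topk(metrics, k, group_by, field, aggregation=sum):
+    # Step 1: group metrics into buckets keyed by measurement name plus tags.
+    buckets = defaultdict(list)
+    for m in metrics:
+        key = (m["name"],) + tuple(m["tags"].get(t, "") for t in group_by)
+        # Metrics without the selected field are dropped from the aggregation.
+        if field in m["fields"]:
+            buckets[key].append(m)
+    # Step 2: aggregate the selected field for each bucket.
+    aggregated = {key: aggregation(m["fields"][field] for m in ms)
+                  for key, ms in buckets.items()}
+    # Step 3: return every metric belonging to the top k buckets.
+    top = sorted(aggregated, key=aggregated.get, reverse=True)[:k]
+    return [m for key in top for m in buckets[key]]
+```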
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure plugin
+ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Keep only the metrics in the top k buckets of an aggregation.
+[[processors.topk]]
+  ## How many seconds between aggregations
+  # period = 10
+
+  ## How many top buckets to return per field
+  ## Every field specified to aggregate over will return k number of results.
+  ## For example, 1 field with k of 10 will return 10 buckets, while 2 fields
+  ## with k of 3 will return 6 buckets.
+  # k = 10
+
+  ## Over which tags should the aggregation be done. Globs can be specified, in
+  ## which case any tag matching the glob will be aggregated over. If set to an
+  ## empty list, no aggregation over tags is done.
+  # group_by = ['*']
+
+  ## The field(s) to aggregate
+  ## Each field defined is used to create an independent aggregation. Each
+  ## aggregation will return k buckets. If a metric does not have a defined
+  ## field the metric will be dropped from the aggregation. Consider using
+  ## the defaults processor plugin to ensure fields are set if required.
+  # fields = ["value"]
+
+  ## What aggregation function to use. Options: sum, mean, min, max
+  # aggregation = "mean"
+
+  ## Instead of the top k largest metrics, return the bottom k lowest metrics
+  # bottomk = false
+
+  ## The plugin assigns each metric a GroupBy tag generated from its name and
+  ## tags. If this setting is set to a non-empty string, the plugin will add a
+  ## tag (named with the value of this setting) to each metric containing
+  ## the value of the calculated GroupBy tag. Useful for debugging.
+  # add_groupby_tag = ""
+
+  ## These settings provide a way to know the position of each metric in
+  ## the top k. The 'add_rank_fields' setting lets you specify for which
+  ## fields the position is required. If the list is non-empty, then a field
+  ## will be added to every metric for each string present in this
+  ## setting. This field will contain the ranking of the group that
+  ## the metric belonged to when aggregated over that field.
+  ## The name of the field will be set to the name of the aggregation field,
+  ## suffixed with the string '_topk_rank'
+  # add_rank_fields = []
+
+  ## These settings provide a way to know what values the plugin is generating
+  ## when aggregating metrics. The 'add_aggregate_fields' setting lets you
+  ## specify for which fields the final aggregation value is required. If the
+  ## list is non-empty, then a field will be added to every metric for
+  ## each field present in this setting. This field will contain
+  ## the computed aggregation for the group that the metric belonged to when
+  ## aggregated over that field.
+  ## The name of the field will be set to the name of the aggregation field,
+  ## suffixed with the string '_topk_aggregate'
+  # add_aggregate_fields = []
+```
+
+### Tags
+
+This processor does not add tags by default, but the `add_groupby_tag`
+setting will add a tag if set to anything other than "".
+
+### Fields
+
+This processor does not add fields by default, but the `add_rank_fields` and
+`add_aggregate_fields` settings will add one or more fields if set to a
+non-empty list.
+
+### Example
+
+Below is an example configuration:
+
+```toml
+[[processors.topk]]
+  period = 20
+  k = 3
+  group_by = ["pid"]
+  fields = ["cpu_usage"]
+```
+
+Output difference with topk:
+
+```diff
+< procstat,pid=2088,process_name=Xorg cpu_usage=7.296576662282613 1546473820000000000
+< procstat,pid=2780,process_name=ibus-engine-simple cpu_usage=0 1546473820000000000
+< procstat,pid=2554,process_name=gsd-sound cpu_usage=0 1546473820000000000
+< procstat,pid=3484,process_name=chrome cpu_usage=4.274300361942799 1546473820000000000
+< procstat,pid=2467,process_name=gnome-shell-calendar-server cpu_usage=0 1546473820000000000
+< procstat,pid=2525,process_name=gvfs-goa-volume-monitor cpu_usage=0 1546473820000000000
+< procstat,pid=2888,process_name=gnome-terminal-server cpu_usage=1.0224991500287577 1546473820000000000
+< procstat,pid=2454,process_name=ibus-x11 cpu_usage=0 1546473820000000000
+< procstat,pid=2564,process_name=gsd-xsettings cpu_usage=0 1546473820000000000
+< procstat,pid=12184,process_name=docker cpu_usage=0 1546473820000000000
+< procstat,pid=2432,process_name=pulseaudio cpu_usage=9.892858669796528 1546473820000000000
+---
+> procstat,pid=2432,process_name=pulseaudio cpu_usage=11.486933087507786 1546474120000000000
+> procstat,pid=2432,process_name=pulseaudio cpu_usage=10.056503212060552 1546474130000000000
+> procstat,pid=23620,process_name=chrome cpu_usage=2.098690278123081 1546474120000000000
+> procstat,pid=23620,process_name=chrome cpu_usage=17.52514619948493 1546474130000000000
+> procstat,pid=2088,process_name=Xorg cpu_usage=1.6016732172309973 1546474120000000000
+> procstat,pid=2088,process_name=Xorg cpu_usage=8.481040931533833 1546474130000000000
+```
diff --git a/content/telegraf/v1/processor-plugins/unpivot/_index.md b/content/telegraf/v1/processor-plugins/unpivot/_index.md
new file mode 100644
index 000000000..7988b75c6
--- /dev/null
+++ b/content/telegraf/v1/processor-plugins/unpivot/_index.md
@@ -0,0 +1,68 @@
+---
+description: "Telegraf plugin for transforming metrics using Unpivot"
+menu:
+  telegraf_v1_ref:
+    parent: processor_plugins_reference
+    name: Unpivot
+    identifier: processor-unpivot
+tags: [Unpivot, "processor-plugins", "configuration"]
+related:
+  - /telegraf/v1/configure_plugins/
+---
+
+# Unpivot Processor Plugin
+
+You can use the `unpivot` processor to rotate a multi-field series into
+single-valued metrics. This transformation often results in data that is
+easier to aggregate across fields.
+
+To perform the reverse operation use the [pivot] processor.
+
+## Global configuration options <!-- @/docs/includes/plugin_config.md -->
+
+In addition to the plugin-specific configuration settings, plugins support
+additional global and plugin configuration settings. These settings are used to
+modify metrics, tags, and fields, create aliases, and configure plugin
+ordering. See [CONFIGURATION.md](/telegraf/v1/configuration/#plugins) for more details.
+
+## Configuration
+
+```toml @sample.conf
+# Rotate multi field metric into several single field metrics
+[[processors.unpivot]]
+  ## Metric mode to pivot to
+  ## When set to "tag", fields are pivoted into a tag and the metric keeps
+  ## the original measurement name. The tag key name is set by the tag_key
+  ## value. When set to "metric", a new metric is created, named after the
+  ## field. With this option the tag_key is ignored. Be aware that this
+  ## could lead to metric name conflicts!
+  # use_fieldname_as = "tag"
+
+  ## Tag to use for the name.
+  # tag_key = "name"
+
+  ## Field to use for the name of the value.
+  # value_key = "value"
+```
+
+## Example
+
+Metric mode `tag`:
+
+```diff
+- cpu,cpu=cpu0 time_idle=42i,time_user=43i
++ cpu,cpu=cpu0,name=time_idle value=42i
++ cpu,cpu=cpu0,name=time_user value=43i
+```
+
+Metric mode `metric`:
+
+```diff
+- cpu,cpu=cpu0 time_idle=42i,time_user=43i
++ time_idle,cpu=cpu0 value=42i
++ time_user,cpu=cpu0 value=43i
+```
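+
+Both modes can be modeled with a few lines of Python (a sketch of the
+transformation only; the dict-based metric layout is illustrative, not the
+plugin's internal representation):
+
+```python
+def unpivot(metric, use_fieldname_as="tag", tag_key="name", value_key="value"):
+    out = []
+    for field, value in metric["fields"].items():
+        if use_fieldname_as == "tag":
+            # Keep the measurement name; move the field name into a tag.
+            out.append({"name": metric["name"],
+                        "tags": {**metric["tags"], tag_key: field},
+                        "fields": {value_key: value}})
+        else:
+            # "metric" mode: the field name becomes the measurement name.
+            out.append({"name": field,
+                        "tags": dict(metric["tags"]),
+                        "fields": {value_key: value}})
+    return out
+```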
+
+[pivot]: /telegraf/v1/processor-plugins/pivot/