Work with Prometheus metrics in Flux (#3232)

* initial changes for flux restructure

* added all aliases

* added introduced date to all flux functions

* marked linearBins and logarithmicBins as draft

* migrated flux stdlib to new flux section, added version range to article template

* fixed list-all-functions shortcode

* duplicated and reordered flux spec, added page-nav shortcode, closes #1870

* added filtering functionality to list-all-functions shortcode

* added function tags

* Stdlib reorg (#2130)

* consolidated influxdb packages

* stdlib rename and reorg

* reorg existing contrib docs

* added keep-url to http.get example

* reorg built-in directory, add function types docs

* updated links

* updated all related links

* fixed reference links in influxdb docs

* updated all internal flux links

* updated flux links in influxdb

* one last link update

* restyle product dropdown

* update flux links in influxdb 1.7 and 1.8

* fixed shortcode call

* updated task options in flux options doc

* Flux 'interpolate' package (#2148)

* add interpolate package, closes #1649

* added missing page description to interpolate package doc

* removed unnecessary space from interpolate description

* updated interpolate package description

* ported from() note to new flux section

* New list filter javascript (#2185)

* generalized list filtering for telegraf plugins and flux functions

* added flux tags, updated filter list functionality

* added more flux tags

* added new experimental functions

* updated derivative params

* ported over new experimental functions

* fixed bad copy-pasta

* ported new notification endpoints into new flux docs

* updated flux function categories

* ported flux changes from master

* fixed product dropdown

* fixed regexp.findString example

* ported flux 0.109 changes

* updated array package aliases and supported version

* ported new functions into flux dir

* added aliases to interpolate package

* ported flux v0.114 packages

* added enterprise logic to url selector modal

* fix minor typo

* Update Flux param type convention (#2515)

* fix minor typo

* WIP new flux data type convention

* wip more param type updates

* cleaned up function type specs

* ported flux 0.115.0 packages and functions

* ported tickscript package

* ported today function

* added aliases to tickscript pkg

* updated timedMovingAverage params example

* updated to function with remote creds

* port flux 0.118 changes over

* port flux changes into flux-restructure

* ported changes from flux 0.123.0 and updated flux function docs

* updated contrib package summary

* updated function definition of schema.tagValues

* ported recent flux changes to the restructure branch

* port changes from master

* Flux get started (#3036)

* Flux group keys demo (#2553)

* interactive group key example

* added js and shortcode for group key demo

* updated group key demo to address PR feedback

* shortened sample data set

* Flux get started intro and data model (#2619)

* starting flux intro content, resolved merge conflicts

* WIP flux get started docs

* WIP flux get started

* flux get started intro and data model

* Apply suggestions from code review

Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>

* Update content/flux/v0.x/get-started/data-model.md

Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* addressed PR feedback in flux get started

* updated flux docs landing page

* more updates to flux landing page

Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>
Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* Flux query basics (#2887)

* WIP flux query basics

* WIP flux query basics

* WIP flux query basics

* WIP flux query basics

* wrap up content for flux query basics

* Apply suggestions from code review

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* properly close code block on flux query basics

* Flux – query data (#2891)

* added query data sources with flux and query influxdb

* Apply suggestions from code review

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* Update content/flux/v0.x/query-data/influxdb.md

* Query Prometheus with Flux (#2893)

* query prometheus with flux

* Apply suggestions from code review

Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>

Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>

* Query CSV data with Flux (#2895)

* query csv data with flux

* address PR feedback

* Update content/flux/v0.x/query-data/csv.md

* update flux query data sources landing page

* updated flux query data doc formats and links

* Query SQL databases (#2922)

* WIP query sql guides

* query SQL data sources, closes #1738

* updated related link on sql.from

* added link to stream of tables and updated text

* updated connection string text

* updated query sql landing page and children hr styles

* updated sql query docs to address PR feedback

* added missing colon

* Query Google Cloud Bigtable with Flux (#2928)

* Query Google Cloud Bigtable with Flux

* updated doc structure of query bigtable doc

* fixed typo in bigquery doc

* Apply suggestions from code review

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>
Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>

* Work with Flux data types (#2967)

* scaffolding for flux types, work with strings

* Apply suggestions from code review

Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>

* removed note about interpolation vs concatenation

* updated wording of variable type association

* generalized type inference

* Apply suggestions from code review

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* WIP work with ints

* reverted int content

* updated strings doc to address PR feedback

* added description to data types landing page

* Apply suggestions from code review

* Update content/flux/v0.x/data-types/basic/string.md

* updated composite front-matter

* Work with time types in Flux  (#2974)

* work with time types in flux, closes #2260

* updated time type doc

* fixed typo in time type description

* fixed typo

* updated work with time doc

* fixed typos

* updated verbiage

* added related links

* Apply suggestions from code review

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* updated time type doc to address PR feedback

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* Work with booleans (#2975)

* work with boolean types

* updated working with booleans

* updated verbiage

* added related links

* Update content/flux/v0.x/data-types/basic/boolean.md

* Work with bytes types (#2976)

* work with bytes types

* added toc to bytes type doc

* Apply suggestions from code review

Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>

* updated work with bytes doc

* fixed typo

* added related links

Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>

* Work with durations (#2977)

* work with durations in flux

* added keywords to duration doc to improve searchability

* minor updates to duration type doc

* updated verbiage

* added related links and removed toDuration from menu

* Update content/flux/v0.x/data-types/basic/duration.md

* Work with null types (#2978)

* WIP null types

* work with null types in flux

* updated null types doc

* Update content/flux/v0.x/data-types/basic/null.md

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* Work with floats (#2979)

* work with floats in flux

* Apply suggestions from code review

Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>

* Apply suggestions from code review

* updated floats type doc

* Update content/flux/v0.x/data-types/basic/float.md

* updated verbiage

* added related links

* Apply suggestions from code review

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>
Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* Work with integers (#2980)

* WIP work with ints

* work with integers

* work with integers

* updated float to int behavior, added related links, closes #2973

* added toc to ints doc

* Apply suggestions from code review

Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* Update content/flux/v0.x/data-types/basic/integer.md

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>
Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* Work with uintegers (#2981)

* WIP work with uints

* work with uints

* Apply suggestions from code review

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* fixed minor typo

* Work with records (#2982)

* work with records in flux

* updated record type doc

* Apply suggestions from code review

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* Work with arrays (#2983)

* work with arrays

* added array.from example, added related links

* Work with dictionaries (#2984)

* WIP work with dicts

* work with dictionaries

* added related links to dict package

* added introduced version to dict package

* Apply suggestions from code review

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* added sample dict output

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* Work with functions (#2985)

* work with functions

* Apply suggestions from code review

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* revamped type landing pages

* marked union types as draft

* miscellaneous updates

* Work with regular expression (#3024)

* work with regular expression types, closes #2573, closes influxdata/flux#3741

* add context for quoteMeta function

* Apply suggestions from code review

Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>

Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>

Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>
Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>
Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>

* fix latest links in page descriptions

* updated influxdb links

* Flux syntax basics (#3033)

* flux syntax basics

* Apply suggestions from code review

Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>

* updated function description

* Apply suggestions from code review

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* Apply suggestions from code review

* Update content/flux/v0.x/get-started/syntax-basics.md

Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>
Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>
Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>
Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>

* added table param to transformations, closes #2392 (#3039)

* updated flux function links

* update algolia configuration to fix search, closes #1902 (#3042)

* ported notes in the from function doc

* Flux package options (#3083)

* add now option to universe package

* added missing package options, closes #2464

* addressed PR feedback

* Flux transformation input/output examples (#3103)

* added flux/sample shortcode

* standardize flux package titles and list titles

* added start and stop columns as an option with flux/sample shortcode

* minor updates to stdlib

* WIP add input and output examples to flux transformations

* WIP removed sample data demo from universe index page

* WIP function input and output examples

* WIP flux input output examples

* WIP flux input output examples

* flux transformation input and output examples

* Add Flux 'sampledata' package (#3088)

* add flux sampledata package

* updated sampledata example titles

* Write data with Flux (#3084)

* WIP write to sql data sources

* write to sql data sources

* added write data to influxdb with flux doc

* Apply suggestions from code review

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* made sql headings specific to each db

* updated write to influxdb

* added tag to influxdb to example

* restructured influxdb write examples as code tabs

Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* fixed list on influxdb write with flux page

* Flux move changelog (#3117)

* updated flux redirects in edge.js

* move flux changelog into Flux restructure

* add flux redirects to edge.js

* removed extra parentheses from monitor.notify examples, closes #2505

* updated flux release notes with flux 0.129.0

* moved from and to into the influxdata/influxdb package

* updated notes on to and from docs

* added flux card to homepage

* added flux-0.130.0 to flux release notes

* flux link cleanup

* updated experimental message, closes #3097 (#3128)

* Remove Flux stdlib and language from InfluxDB (#3133)

* remove flux stdlib and lang from influxdb, update flux get-started, closes #2132

* flux link cleanup

* cleaned up prometheus verbiage, updated flux data type links

* function cleanup

* fixed sidenav toggle button

* updated group key links, added aliases for flux landing page

* WIP prometheus rework

* fixed broken links, commented out prometheus content, updated flux types names

* added flux links to the left nav

* fixed flux links in kapacitor docs

* WIP flux prometheus docs

* WIP prometheus metrics

* added note about prom metrics data structures

* resolved merge conflicts

* added prometheus doc in write data, added prometheus metric versions ref doc

* added prometheus counter content

* resolved merge conflict

* WIP flux prometheus histograms

* added content for prometheus summaries

* added histogram content

* removed commented notes

* Apply suggestions from code review

Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>
Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>

* updates to address pr feedback

* updated prometheus-metrics.md

Co-authored-by: pierwill <19642016+pierwill@users.noreply.github.com>
Co-authored-by: kelseiv <47797004+kelseiv@users.noreply.github.com>
Co-authored-by: Jason Stirnaman <jstirnaman@influxdata.com>
Scott Anderson 2021-10-18 16:25:20 -06:00 committed by GitHub
parent a53304ef79
commit 1b582a7cdb
47 changed files with 1865 additions and 150 deletions


@ -0,0 +1,4 @@
// Display the current UTC time (without fractional seconds) in .current-time elements
var date = new Date()
var timestamp = date.toISOString().replace(/^(.*)(\.\d+)(Z)/, '$1$3')
$('span.current-time').text(timestamp)


@ -184,6 +184,8 @@
}
}
.nowrap { white-space: nowrap }
/////////////////////////// Getting Started Buttons //////////////////////////
.get-started-btns {


@ -23,6 +23,13 @@ blockquote {
color: rgba($article-text, .5);
}
*:last-child {margin-bottom: 0;}
.cite {
display: block;
margin-top: -1rem;
font-style: italic;
font-size: .85rem;
opacity: .8;
}
}
////////////////////////////////////////////////////////////////////////////////


@ -28,4 +28,4 @@ h2,h3,h4,h5,h6 {
margin: 0;
opacity: 1;
}
}
}


@ -16,6 +16,8 @@
&.half, &.third, &.quarter {
table:not(:last-child) {margin-right: 1rem;}
}
img { margin-bottom: 0;}
}
////////////////////////////////////////////////////////////////////////////////


@ -52,6 +52,13 @@
}
}
}
&.small {
p { justify-content: flex-start; }
a {
flex-grow: unset;
padding: 0rem .5rem;
}
}
}
.code-tabs {


@ -0,0 +1,17 @@
---
title: Work with Prometheus
description: >
Flux provides tools for scraping and processing raw [Prometheus-formatted metrics](https://prometheus.io/docs/concepts/data_model/)
from an HTTP-accessible endpoint.
menu: flux_0_x
weight: 8
flux/v0.x/tags: [prometheus]
---
[Prometheus](https://prometheus.io/) is an open-source toolkit designed
to build simple and robust monitoring and alerting systems.
Flux provides tools for scraping raw [Prometheus-formatted metrics](https://prometheus.io/docs/concepts/data_model/)
from an HTTP-accessible endpoint, writing them to InfluxDB, then processing those
raw metrics for visualization in InfluxDB dashboards.
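For example, a minimal sketch of this workflow scrapes an endpoint with `prometheus.scrape()`
and writes the results to InfluxDB with `to()`. The endpoint URL and `example-bucket` below
are placeholders—replace them with your own values.

```js
import "experimental/prometheus"

// Scrape Prometheus-formatted metrics from an HTTP-accessible endpoint
// and write them to an InfluxDB bucket.
prometheus.scrape(url: "http://localhost:8086/metrics")
  |> to(bucket: "example-bucket")
```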
{{< children >}}


@ -0,0 +1,22 @@
---
title: Work with Prometheus metric types
description: >
Learn how to use Flux to work with Prometheus' four main metric types
(counter, gauge, histogram, and summary) and process them for visualizations
in InfluxDB dashboards.
menu:
flux_0_x:
name: Prometheus metric types
parent: Work with Prometheus
weight: 102
cascade:
related:
- https://prometheus.io/docs/concepts/metric_types/, Prometheus metric types
flux/v0.x/tags: [prometheus]
---
Learn how to use Flux to work with the four core
[Prometheus metric types](https://prometheus.io/docs/concepts/metric_types/) and
process them for visualizations in InfluxDB dashboards:
{{< children >}}


@ -0,0 +1,516 @@
---
title: Work with Prometheus counters
list_title: Counter
description: >
Use Flux to query and transform Prometheus **counter** metrics stored in InfluxDB.
A counter is a cumulative metric that represents a single
[monotonically increasing counter](https://en.wikipedia.org/wiki/Monotonic_function)
whose value can only increase or be reset to zero on restart.
menu:
flux_0_x:
name: Counter
parent: Prometheus metric types
weight: 101
related:
- https://prometheus.io/docs/concepts/metric_types/, Prometheus metric types
- /{{< latest "influxdb" >}}/reference/prometheus-metrics/
flux/v0.x/tags: [prometheus]
---
Use Flux to query and transform Prometheus **counter** metrics stored in InfluxDB.
> A _counter_ is a cumulative metric that represents a single
> [monotonically increasing counter](https://en.wikipedia.org/wiki/Monotonic_function)
> whose value can only increase or be reset to zero on restart.
>
> {{% cite %}}[Prometheus metric types](https://prometheus.io/docs/concepts/metric_types/#counter){{% /cite %}}
##### Example counter metric in Prometheus format
```sh
# HELP example_counter_total Total representing an example counter metric
# TYPE example_counter_total counter
example_counter_total 282327
```
Because counters can periodically reset to 0, **any query involving counter
metrics should [normalize the data](#normalize-counter-resets) to account for
counter resets** before further processing.
The examples below include example data collected from the **InfluxDB OSS 2.x `/metrics` endpoint**
using `prometheus.scrape()` and stored in InfluxDB.
{{% note %}}
#### Prometheus metric parsing formats
Query structure depends on the [Prometheus metric parsing format](/{{< latest "influxdb" >}}/reference/prometheus-metrics/)
used to scrape the Prometheus metrics.
Select the appropriate metric format version below.
{{% /note %}}
- [Normalize counter resets](#normalize-counter-resets)
- [Calculate changes between normalized counter values](#calculate-changes-between-normalized-counter-values)
- [Calculate the rate of change in normalized counter values](#calculate-the-rate-of-change-in-normalized-counter-values)
## Normalize counter resets
{{< tabs-wrapper >}}
{{% tabs "small" %}}
[Metric version 2](#)
[Metric version 1](#)
{{% /tabs %}}
{{% tab-content %}}
1. Filter results by the `prometheus` measurement and **counter metric name** field.
2. Use [`increase()`](/flux/v0.x/stdlib/universe/increase/) to normalize counter resets.
`increase()` returns the cumulative sum of positive changes in column values.
{{% note %}}
`increase()` accounts for counter resets, but may lose some precision on reset
depending on your scrape interval.
On counter reset, `increase()` assumes no increase.
{{% /note %}}
```js
from(bucket: "example-bucket")
|> range(start: -1m)
|> filter(fn: (r) =>
r._measurement == "prometheus" and
r._field == "http_query_request_bytes"
)
|> increase()
```
{{< flex >}}
{{< flex-content >}}
{{< img-hd src="/img/flux/0-x-prometheus-counter-increase-input.png" alt="Raw Prometheus counter metric in InfluxDB" />}}
{{< /flex-content >}}
{{< flex-content >}}
{{< img-hd src="/img/flux/0-x-prometheus-counter-increase-output.png" alt="Increase on Prometheus counter metric in InfluxDB" />}}
{{< /flex-content >}}
{{< /flex >}}
{{< expand-wrapper >}}
{{% expand "View example input and output data" %}}
#### Example input {id="example-input-2"}
| _time | _measurement | _field | _value |
| :------------------- | :----------- | :----------------------- | -----: |
| 2021-01-01T00:00:00Z | prometheus | http_query_request_bytes | 4302 |
| 2021-01-01T00:00:10Z | prometheus | http_query_request_bytes | 4844 |
| 2021-01-01T00:00:20Z | prometheus | http_query_request_bytes | 5091 |
| 2021-01-01T00:00:30Z | prometheus | http_query_request_bytes | 13 |
| 2021-01-01T00:00:40Z | prometheus | http_query_request_bytes | 215 |
| 2021-01-01T00:00:50Z | prometheus | http_query_request_bytes | 762 |
| 2021-01-01T00:01:00Z | prometheus | http_query_request_bytes | 1108 |
#### Example output {id="example-output-2"}
| _time | _measurement | _field | _value |
| :------------------- | :----------- | :----------------------- | -----: |
| 2021-01-01T00:00:10Z | prometheus | http_query_request_bytes | 542 |
| 2021-01-01T00:00:20Z | prometheus | http_query_request_bytes | 789 |
| 2021-01-01T00:00:30Z | prometheus | http_query_request_bytes | 789 |
| 2021-01-01T00:00:40Z | prometheus | http_query_request_bytes | 991 |
| 2021-01-01T00:00:50Z | prometheus | http_query_request_bytes | 1538 |
| 2021-01-01T00:01:00Z | prometheus | http_query_request_bytes | 1884 |
{{% /expand %}}
{{< /expand-wrapper >}}
{{% /tab-content %}}
{{% tab-content %}}
1. Filter results by the **counter metric name** measurement and `counter` field.
2. Use [`increase()`](/flux/v0.x/stdlib/universe/increase/) to normalize counter resets.
`increase()` returns the cumulative sum of positive changes in column values.
{{% note %}}
`increase()` accounts for counter resets, but may lose some precision on reset
depending on your scrape interval.
On counter reset, `increase()` assumes no increase.
{{% /note %}}
```js
from(bucket: "example-bucket")
|> range(start: -1m)
|> filter(fn: (r) =>
r._measurement == "http_query_request_bytes" and
r._field == "counter"
)
|> increase()
```
{{< flex >}}
{{< flex-content >}}
{{< img-hd src="/img/flux/0-x-prometheus-counter-increase-input.png" alt="Raw Prometheus counter metric in InfluxDB" />}}
{{< /flex-content >}}
{{< flex-content >}}
{{< img-hd src="/img/flux/0-x-prometheus-counter-increase-output.png" alt="Increase on Prometheus counter metric in InfluxDB" />}}
{{< /flex-content >}}
{{< /flex >}}
{{< expand-wrapper >}}
{{% expand "View example input and output data" %}}
#### Example input {id="example-input-1"}
| _time | _measurement | _field | _value |
| :------------------- | :----------------------- | :------ | -----: |
| 2021-01-01T00:00:00Z | http_query_request_bytes | counter | 4302 |
| 2021-01-01T00:00:10Z | http_query_request_bytes | counter | 4844 |
| 2021-01-01T00:00:20Z | http_query_request_bytes | counter | 5091 |
| 2021-01-01T00:00:30Z | http_query_request_bytes | counter | 13 |
| 2021-01-01T00:00:40Z | http_query_request_bytes | counter | 215 |
| 2021-01-01T00:00:50Z | http_query_request_bytes | counter | 762 |
| 2021-01-01T00:01:00Z | http_query_request_bytes | counter | 1108 |
#### Example output {id="example-output-1"}
| _time | _measurement | _field | _value |
| :------------------- | :----------------------- | :------ | -----: |
| 2021-01-01T00:00:10Z | http_query_request_bytes | counter | 542 |
| 2021-01-01T00:00:20Z | http_query_request_bytes | counter | 789 |
| 2021-01-01T00:00:30Z | http_query_request_bytes | counter | 789 |
| 2021-01-01T00:00:40Z | http_query_request_bytes | counter | 991 |
| 2021-01-01T00:00:50Z | http_query_request_bytes | counter | 1538 |
| 2021-01-01T00:01:00Z | http_query_request_bytes | counter | 1884 |
{{% /expand %}}
{{< /expand-wrapper >}}
{{% /tab-content %}}
{{< /tabs-wrapper >}}
## Calculate changes between normalized counter values
Use [`difference()`](/flux/v0.x/stdlib/universe/difference/) with
[normalized counter data](#normalize-counter-resets) to return the difference
between subsequent values.
{{< tabs-wrapper >}}
{{% tabs "small" %}}
[Metric version 2](#)
[Metric version 1](#)
{{% /tabs %}}
{{% tab-content %}}
```js
from(bucket: "example-bucket")
|> range(start: -1m)
|> filter(fn: (r) =>
r._measurement == "prometheus" and
r._field == "http_query_request_bytes"
)
|> increase()
|> difference()
```
{{< flex >}}
{{< flex-content >}}
{{< img-hd src="/img/flux/0-x-prometheus-counter-normalized-input.png" alt="Raw Prometheus counter metric in InfluxDB" />}}
{{< /flex-content >}}
{{< flex-content >}}
{{< img-hd src="/img/flux/0-x-prometheus-counter-difference-output.png" alt="Normalize Prometheus counter metric to account for counter resets" />}}
{{< /flex-content >}}
{{< /flex >}}
{{< expand-wrapper >}}
{{% expand "View example input and output data" %}}
#### Example normalized counter data {id="example-normalized-counter-data-2"}
| _time | _measurement | _field | _value |
| :------------------- | :----------- | :----------------------- | -----: |
| 2021-01-01T00:00:10Z | prometheus | http_query_request_bytes | 542 |
| 2021-01-01T00:00:20Z | prometheus | http_query_request_bytes | 789 |
| 2021-01-01T00:00:30Z | prometheus | http_query_request_bytes | 789 |
| 2021-01-01T00:00:40Z | prometheus | http_query_request_bytes | 991 |
| 2021-01-01T00:00:50Z | prometheus | http_query_request_bytes | 1538 |
| 2021-01-01T00:01:00Z | prometheus | http_query_request_bytes | 1884 |
#### Example difference output {id="example-difference-output-2"}
| _time | _measurement | _field | _value |
| :------------------- | :----------- | :----------------------- | -----: |
| 2021-01-01T00:00:20Z | prometheus | http_query_request_bytes | 247 |
| 2021-01-01T00:00:30Z | prometheus | http_query_request_bytes | 0 |
| 2021-01-01T00:00:40Z | prometheus | http_query_request_bytes | 202 |
| 2021-01-01T00:00:50Z | prometheus | http_query_request_bytes | 547 |
| 2021-01-01T00:01:00Z | prometheus | http_query_request_bytes | 346 |
{{% /expand %}}
{{< /expand-wrapper >}}
{{% /tab-content %}}
{{% tab-content %}}
```js
from(bucket: "example-bucket")
|> range(start: -1m)
|> filter(fn: (r) =>
r._measurement == "http_query_request_bytes" and
r._field == "counter"
)
|> increase()
|> difference()
```
{{< flex >}}
{{< flex-content >}}
{{< img-hd src="/img/flux/0-x-prometheus-counter-normalized-input.png" alt="Raw Prometheus counter metric in InfluxDB" />}}
{{< /flex-content >}}
{{< flex-content >}}
{{< img-hd src="/img/flux/0-x-prometheus-counter-difference-output.png" alt="Normalize Prometheus counter metric to account for counter resets" />}}
{{< /flex-content >}}
{{< /flex >}}
{{< expand-wrapper >}}
{{% expand "View example input and output data" %}}
#### Example normalized counter data {id="example-normalized-counter-data-1"}
| _time | _measurement | _field | _value |
| :------------------- | :----------------------- | :------ | -----: |
| 2021-01-01T00:00:10Z | http_query_request_bytes | counter | 542 |
| 2021-01-01T00:00:20Z | http_query_request_bytes | counter | 789 |
| 2021-01-01T00:00:30Z | http_query_request_bytes | counter | 789 |
| 2021-01-01T00:00:40Z | http_query_request_bytes | counter | 991 |
| 2021-01-01T00:00:50Z | http_query_request_bytes | counter | 1538 |
| 2021-01-01T00:01:00Z | http_query_request_bytes | counter | 1884 |
#### Example difference output {id="example-difference-output-1"}
| _time | _measurement | _field | _value |
| :------------------- | :----------------------- | :------ | -----: |
| 2021-01-01T00:00:20Z | http_query_request_bytes | counter | 247 |
| 2021-01-01T00:00:30Z | http_query_request_bytes | counter | 0 |
| 2021-01-01T00:00:40Z | http_query_request_bytes | counter | 202 |
| 2021-01-01T00:00:50Z | http_query_request_bytes | counter | 547 |
| 2021-01-01T00:01:00Z | http_query_request_bytes | counter | 346 |
{{% /expand %}}
{{< /expand-wrapper >}}
{{% /tab-content %}}
{{< /tabs-wrapper >}}
## Calculate the rate of change in normalized counter values
Use [`derivative()`](/flux/v0.x/stdlib/universe/derivative/) to calculate the rate
of change between [normalized counter values](#normalize-counter-resets).
By default, `derivative()` returns the rate of change per second.
Use the [`unit` parameter](/flux/v0.x/stdlib/universe/derivative/#unit) to
customize the rate unit.
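For example, the following sketch (using the metric version 2 schema and the same example
field as the examples below) returns the per-minute rate of change instead of the default
per-second rate:

```js
from(bucket: "example-bucket")
  |> range(start: -1m)
  |> filter(fn: (r) =>
    r._measurement == "prometheus" and
    r._field == "http_query_request_bytes"
  )
  |> increase()
  // Return the rate of change per minute
  |> derivative(unit: 1m)
```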
{{< tabs-wrapper >}}
{{% tabs "small" %}}
[Metric version 2](#)
[Metric version 1](#)
{{% /tabs %}}
{{% tab-content %}}
```js
from(bucket: "example-bucket")
|> range(start: -1m)
|> filter(fn: (r) =>
r._measurement == "prometheus" and
r._field == "http_query_request_bytes"
)
|> increase()
|> derivative()
```
{{< flex >}}
{{< flex-content >}}
{{< img-hd src="/img/flux/0-x-prometheus-counter-normalized-input.png" alt="Normalized Prometheus counter metric in InfluxDB" />}}
{{< /flex-content >}}
{{< flex-content >}}
{{< img-hd src="/img/flux/0-x-prometheus-counter-derivative-output.png" alt="Calculate the rate of change in Prometheus counter metric with Flux" />}}
{{< /flex-content >}}
{{< /flex >}}
{{< expand-wrapper >}}
{{% expand "View example input and output data" %}}
#### Example normalized counter data {id="example-normalized-counter-data-2-1"}
| _time | _measurement | _field | _value |
| :------------------- | :----------- | :----------------------- | -----: |
| 2021-01-01T00:00:10Z | prometheus | http_query_request_bytes | 542 |
| 2021-01-01T00:00:20Z | prometheus | http_query_request_bytes | 789 |
| 2021-01-01T00:00:30Z | prometheus | http_query_request_bytes | 789 |
| 2021-01-01T00:00:40Z | prometheus | http_query_request_bytes | 991 |
| 2021-01-01T00:00:50Z | prometheus | http_query_request_bytes | 1538 |
| 2021-01-01T00:01:00Z | prometheus | http_query_request_bytes | 1884 |
#### Example derivative output {id="example-derivative-output-2"}
| _time | _measurement | _field | _value |
| :------------------- | :----------- | :----------------------- | -----: |
| 2021-01-01T00:00:20Z | prometheus | http_query_request_bytes | 24.7 |
| 2021-01-01T00:00:30Z | prometheus | http_query_request_bytes | 0.0 |
| 2021-01-01T00:00:40Z | prometheus | http_query_request_bytes | 20.2 |
| 2021-01-01T00:00:50Z | prometheus | http_query_request_bytes | 54.7 |
| 2021-01-01T00:01:00Z | prometheus | http_query_request_bytes | 34.6 |
{{% /expand %}}
{{< /expand-wrapper >}}
{{% /tab-content %}}
{{% tab-content %}}
```js
from(bucket: "example-bucket")
|> range(start: -1m)
|> filter(fn: (r) =>
r._measurement == "http_query_request_bytes" and
r._field == "counter"
)
|> increase()
|> derivative()
```
{{< flex >}}
{{< flex-content >}}
{{< img-hd src="/img/flux/0-x-prometheus-counter-normalized-input.png" alt="Normalized Prometheus counter metric in InfluxDB" />}}
{{< /flex-content >}}
{{< flex-content >}}
{{< img-hd src="/img/flux/0-x-prometheus-counter-derivative-output.png" alt="Calculate the rate of change in Prometheus counter metric with Flux" />}}
{{< /flex-content >}}
{{< /flex >}}
{{< expand-wrapper >}}
{{% expand "View example input and output data" %}}
#### Example normalized counter data {id="example-normalized-counter-data-1-1"}
| _time | _measurement | _field | _value |
| :------------------- | :----------------------- | :------ | -----: |
| 2021-01-01T00:00:10Z | http_query_request_bytes | counter | 542 |
| 2021-01-01T00:00:20Z | http_query_request_bytes | counter | 789 |
| 2021-01-01T00:00:30Z | http_query_request_bytes | counter | 789 |
| 2021-01-01T00:00:40Z | http_query_request_bytes | counter | 991 |
| 2021-01-01T00:00:50Z | http_query_request_bytes | counter | 1538 |
| 2021-01-01T00:01:00Z | http_query_request_bytes | counter | 1884 |
#### Example derivative output {id="example-derivative-output-1"}
| _time | _measurement | _field | _value |
| :------------------- | :----------------------- | :------ | -----: |
| 2021-01-01T00:00:20Z | http_query_request_bytes | counter | 24.7 |
| 2021-01-01T00:00:30Z | http_query_request_bytes | counter | 0.0 |
| 2021-01-01T00:00:40Z | http_query_request_bytes | counter | 20.2 |
| 2021-01-01T00:00:50Z | http_query_request_bytes | counter | 54.7 |
| 2021-01-01T00:01:00Z | http_query_request_bytes | counter | 34.6 |
{{% /expand %}}
{{< /expand-wrapper >}}
{{% /tab-content %}}
{{< /tabs-wrapper >}}
## Calculate the average rate of change in specified time windows
To calculate the average rate of change in [normalized counter values](#normalize-counter-resets)
in specified time windows:
1. Import the [`experimental/aggregate` package](/flux/v0.x/stdlib/experimental/aggregate/).
2. [Normalize counter values](#normalize-counter-resets).
3. Use [`aggregate.rate()`](/flux/v0.x/stdlib/experimental/aggregate/rate/)
to calculate the average rate of change per time window.
- Use the [`every` parameter](/flux/v0.x/stdlib/experimental/aggregate/rate/#every)
to define the time window interval.
- Use the [`unit` parameter](/flux/v0.x/stdlib/experimental/aggregate/rate/#unit)
to customize the rate unit. By default, `aggregate.rate()` returns the per second
(`1s`) rate of change.
- Use the [`groupColumns` parameter](/flux/v0.x/stdlib/experimental/aggregate/rate/#groupcolumns)
to specify columns to group by when performing the aggregation.
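The examples below use the default grouping. As a sketch of the `groupColumns` parameter,
the following groups by the `url` tag (assuming the tag added by `prometheus.scrape()` is
present) to return a separate rate series per scraped endpoint, using the metric version 2 schema:

```js
import "experimental/aggregate"

from(bucket: "example-bucket")
  |> range(start: -1m)
  |> filter(fn: (r) =>
    r._measurement == "prometheus" and
    r._field == "http_query_request_bytes"
  )
  |> increase()
  // Calculate the average per-second rate in 15 second windows, grouped by scraped URL
  |> aggregate.rate(every: 15s, unit: 1s, groupColumns: ["url"])
```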
{{< tabs-wrapper >}}
{{% tabs "small" %}}
[Metric version 2](#)
[Metric version 1](#)
{{% /tabs %}}
{{% tab-content %}}
```js
import "experimental/aggregate"
from(bucket: "example-bucket")
|> range(start: -1m)
|> filter(fn: (r) =>
r._measurement == "prometheus" and
r._field == "http_query_request_bytes"
)
|> increase()
|> aggregate.rate(every: 15s, unit: 1s)
```
{{< flex >}}
{{< flex-content >}}
{{< img-hd src="/img/flux/0-x-prometheus-counter-normalized-input.png" alt="Normalized Prometheus counter metric in InfluxDB" />}}
{{< /flex-content >}}
{{< flex-content >}}
{{< img-hd src="/img/flux/0-x-prometheus-counter-aggregate-rate-output.png" alt="Calculate the rate of change in Prometheus counter metrics per time window with Flux" />}}
{{< /flex-content >}}
{{< /flex >}}
{{< expand-wrapper >}}
{{% expand "View example input and output data" %}}
{{% note %}}
`_start` and `_stop` columns have been omitted.
{{% /note %}}
#### Example normalized counter data {id="example-normalized-counter-data-2-2"}
| _time | _measurement | _field | _value |
| :------------------- | :----------- | :----------------------- | -----: |
| 2021-01-01T00:00:10Z | prometheus | http_query_request_bytes | 542 |
| 2021-01-01T00:00:20Z | prometheus | http_query_request_bytes | 789 |
| 2021-01-01T00:00:30Z | prometheus | http_query_request_bytes | 789 |
| 2021-01-01T00:00:40Z | prometheus | http_query_request_bytes | 991 |
| 2021-01-01T00:00:50Z | prometheus | http_query_request_bytes | 1538 |
| 2021-01-01T00:01:00Z | prometheus | http_query_request_bytes | 1884 |
#### Example aggregate.rate output {id="example-aggregaterate-output-2"}
| _time | _value |
| :------------------- | -----: |
| 2021-01-01T00:00:15Z | |
| 2021-01-01T00:00:30Z |   24.7 |
| 2021-01-01T00:00:45Z |   10.1 |
| 2021-01-01T00:01:00Z |   54.7 |
{{% /expand %}}
{{< /expand-wrapper >}}
{{% /tab-content %}}
{{% tab-content %}}
```js
import "experimental/aggregate"
from(bucket: "example-bucket")
|> range(start: -1m)
|> filter(fn: (r) =>
r._measurement == "http_query_request_bytes" and
r._field == "counter"
)
|> increase()
|> aggregate.rate(every: 15s, unit: 1s)
```
{{< flex >}}
{{< flex-content >}}
{{< img-hd src="/img/flux/0-x-prometheus-counter-normalized-input.png" alt="Normalized Prometheus counter metric in InfluxDB" />}}
{{< /flex-content >}}
{{< flex-content >}}
{{< img-hd src="/img/flux/0-x-prometheus-counter-aggregate-rate-output.png" alt="Calculate the rate of change in Prometheus counter metrics per time window with Flux" />}}
{{< /flex-content >}}
{{< /flex >}}
{{< expand-wrapper >}}
{{% expand "View example input and output data" %}}
{{% note %}}
`_start` and `_stop` columns have been omitted.
{{% /note %}}
#### Example normalized counter data {id="example-normalized-counter-data-1-2"}
| _time | _measurement | _field | _value |
| :------------------- | :----------------------- | :------ | -----: |
| 2021-01-01T00:00:10Z | http_query_request_bytes | counter | 542 |
| 2021-01-01T00:00:20Z | http_query_request_bytes | counter | 789 |
| 2021-01-01T00:00:30Z | http_query_request_bytes | counter | 789 |
| 2021-01-01T00:00:40Z | http_query_request_bytes | counter | 991 |
| 2021-01-01T00:00:50Z | http_query_request_bytes | counter | 1538 |
| 2021-01-01T00:01:00Z | http_query_request_bytes | counter | 1884 |
#### Example aggregate.rate output {id="example-aggregaterate-output-1"}
| _time | _value |
| :------------------- | -----: |
| 2021-01-01T00:00:15Z | |
| 2021-01-01T00:00:30Z |   24.7 |
| 2021-01-01T00:00:45Z |   10.1 |
| 2021-01-01T00:01:00Z |   54.7 |
{{% /expand %}}
{{< /expand-wrapper >}}
{{% /tab-content %}}
{{< /tabs-wrapper >}}


@ -0,0 +1,315 @@
---
title: Work with Prometheus gauges
list_title: Gauge
description: >
Use Flux to query and transform Prometheus **gauge** metrics stored in InfluxDB.
A gauge is a metric that represents a single numerical value that can
arbitrarily go up and down.
menu:
flux_0_x:
name: Gauge
parent: Prometheus metric types
weight: 101
related:
- https://prometheus.io/docs/concepts/metric_types/, Prometheus metric types
- /{{< latest "influxdb" >}}/reference/prometheus-metrics/
flux/v0.x/tags: [prometheus]
---
Use Flux to query and transform Prometheus **gauge** metrics stored in InfluxDB.
> A _gauge_ is a metric that represents a single numerical value that can arbitrarily go up and down.
>
> {{% cite %}}[Prometheus metric types](https://prometheus.io/docs/concepts/metric_types/#gauge){{% /cite %}}
##### Example gauge metric in Prometheus format
```sh
# HELP example_gauge_current Current number of items as example gauge metric
# TYPE example_gauge_current gauge
example_gauge_current 128
```
Generally, gauge metrics can be used as they are reported and don't require any
additional processing.
The examples below include example data collected from the **InfluxDB OSS 2.x `/metrics` endpoint**
using `prometheus.scrape()` and stored in InfluxDB.
{{% note %}}
#### Prometheus metric parsing formats
Query structure depends on the [Prometheus metric parsing format](/{{< latest "influxdb" >}}/reference/prometheus-metrics/)
used to scrape the Prometheus metrics.
Select the appropriate metric format version below.
{{% /note %}}
- [Calculate the rate of change in gauge values](#calculate-the-rate-of-change-in-gauge-values)
- [Calculate the average rate of change in specified time windows](#calculate-the-average-rate-of-change-in-specified-time-windows)
## Calculate the rate of change in gauge values
{{< tabs-wrapper >}}
{{% tabs "small" %}}
[Metric version 2](#)
[Metric version 1](#)
{{% /tabs %}}
{{% tab-content %}}
1. Filter results by the `prometheus` measurement and **gauge metric name** field.
2. Use [`derivative()`](/flux/v0.x/stdlib/universe/derivative/) to calculate the rate
of change between gauge values.
By default, `derivative()` returns the rate of change per second.
Use the [`unit` parameter](/flux/v0.x/stdlib/universe/derivative/#unit) to
customize the rate unit.
To replace negative derivatives with null values, set the
[`nonNegative` parameter](/flux/v0.x/stdlib/universe/derivative/#nonnegative) to `true`.
```js
from(bucket: "example-bucket")
|> range(start: -1m)
|> filter(fn: (r) =>
r._measurement == "prometheus" and
r._field == "go_goroutines"
)
|> derivative(nonNegative: true)
```
{{< flex >}}
{{< flex-content >}}
{{< img-hd src="/img/flux/0-x-prometheus-gauge-input.png" alt="Raw Prometheus gauge metric in InfluxDB" />}}
{{< /flex-content >}}
{{< flex-content >}}
{{< img-hd src="/img/flux/0-x-prometheus-gauge-derivative-output.png" alt="Derivative of Prometheus gauge metrics in InfluxDB" />}}
{{< /flex-content >}}
{{< /flex >}}
{{< expand-wrapper >}}
{{% expand "View example input and output data" %}}
#### Example gauge data
| _time | _measurement | _field | _value |
| :------------------- | :----------- | :------------ | ------: |
| 2021-01-01T00:00:00Z | prometheus | go_goroutines | 1571.97 |
| 2021-01-01T00:00:10Z | prometheus | go_goroutines | 1577.35 |
| 2021-01-01T00:00:20Z | prometheus | go_goroutines | 1591.67 |
| 2021-01-01T00:00:30Z | prometheus | go_goroutines | 1598.85 |
| 2021-01-01T00:00:40Z | prometheus | go_goroutines | 1600.0 |
| 2021-01-01T00:00:50Z | prometheus | go_goroutines | 1598.04 |
| 2021-01-01T00:01:00Z | prometheus | go_goroutines | 1602.93 |
#### Example derivative output
| _time | _measurement | _field | _value |
| :------------------- | :----------- | :------------ | -----------------: |
| 2021-01-01T00:00:10Z | prometheus | go_goroutines | 0.5379999999999882 |
| 2021-01-01T00:00:20Z | prometheus | go_goroutines | 1.4320000000000164 |
| 2021-01-01T00:00:30Z | prometheus | go_goroutines | 0.7179999999999837 |
| 2021-01-01T00:00:40Z | prometheus | go_goroutines | 0.1150000000000091 |
| 2021-01-01T00:00:50Z | prometheus | go_goroutines | |
| 2021-01-01T00:01:00Z | prometheus | go_goroutines | 0.48900000000001 |
{{% /expand %}}
{{< /expand-wrapper >}}
{{% /tab-content %}}
{{% tab-content %}}
1. Filter results by the **gauge metric name** measurement and `gauge` field.
2. Use [`derivative()`](/flux/v0.x/stdlib/universe/derivative/) to calculate the rate
of change between gauge values.
By default, `derivative()` returns the rate of change per second.
Use the [`unit` parameter](/flux/v0.x/stdlib/universe/derivative/#unit) to
customize the rate unit.
To replace negative derivatives with null values, set the
[`nonNegative` parameter](/flux/v0.x/stdlib/universe/derivative/#nonnegative) to `true`.
```js
from(bucket: "example-bucket")
|> range(start: -1m)
|> filter(fn: (r) =>
r._measurement == "go_goroutines" and
r._field == "gauge"
)
|> derivative(nonNegative: true)
```
{{< flex >}}
{{< flex-content >}}
{{< img-hd src="/img/flux/0-x-prometheus-gauge-input.png" alt="Raw Prometheus gauge metric in InfluxDB" />}}
{{< /flex-content >}}
{{< flex-content >}}
{{< img-hd src="/img/flux/0-x-prometheus-gauge-derivative-output.png" alt="Derivative of Prometheus gauge metrics in InfluxDB" />}}
{{< /flex-content >}}
{{< /flex >}}
{{< expand-wrapper >}}
{{% expand "View example input and output data" %}}
#### Example gauge data
| _time | _measurement | _field | _value |
| :------------------- | :------------ | :----- | ------: |
| 2021-01-01T00:00:00Z | go_goroutines | gauge | 1571.97 |
| 2021-01-01T00:00:10Z | go_goroutines | gauge | 1577.35 |
| 2021-01-01T00:00:20Z | go_goroutines | gauge | 1591.67 |
| 2021-01-01T00:00:30Z | go_goroutines | gauge | 1598.85 |
| 2021-01-01T00:00:40Z | go_goroutines | gauge | 1600.0 |
| 2021-01-01T00:00:50Z | go_goroutines | gauge | 1598.04 |
| 2021-01-01T00:01:00Z | go_goroutines | gauge | 1602.93 |
#### Example derivative output
| _time | _measurement | _field | _value |
| :------------------- | :------------ | :----- | -----------------: |
| 2021-01-01T00:00:10Z | go_goroutines | gauge | 0.5379999999999882 |
| 2021-01-01T00:00:20Z | go_goroutines | gauge | 1.4320000000000164 |
| 2021-01-01T00:00:30Z | go_goroutines | gauge | 0.7179999999999837 |
| 2021-01-01T00:00:40Z | go_goroutines | gauge | 0.1150000000000091 |
| 2021-01-01T00:00:50Z | go_goroutines | gauge | |
| 2021-01-01T00:01:00Z | go_goroutines | gauge | 0.48900000000001 |
{{% /expand %}}
{{< /expand-wrapper >}}
{{% /tab-content %}}
{{< /tabs-wrapper >}}
## Calculate the average rate of change in specified time windows
{{< tabs-wrapper >}}
{{% tabs "small" %}}
[Metric version 2](#)
[Metric version 1](#)
{{% /tabs %}}
{{% tab-content %}}
1. Import the [`experimental/aggregate` package](/flux/v0.x/stdlib/experimental/aggregate/).
2. Filter results by the `prometheus` measurement and **gauge metric name** field.
3. Use [`aggregate.rate()`](/flux/v0.x/stdlib/experimental/aggregate/rate/)
to calculate the average rate of change per time window.
- Use the [`every` parameter](/flux/v0.x/stdlib/experimental/aggregate/rate/#every)
to define the time window interval.
- Use the [`unit` parameter](/flux/v0.x/stdlib/experimental/aggregate/rate/#unit)
to customize the rate unit. By default, `aggregate.rate()` returns the per second
(`1s`) rate of change.
- Use the [`groupColumns` parameter](/flux/v0.x/stdlib/experimental/aggregate/rate/#groupcolumns)
to specify columns to group by when performing the aggregation.
```js
import "experimental/aggregate"
from(bucket: "example-bucket")
|> range(start: -1m)
|> filter(fn: (r) =>
r._measurement == "prometheus" and
r._field == "go_goroutines"
)
|> aggregate.rate(every: 10s, unit: 1s)
```
{{< flex >}}
{{< flex-content >}}
{{< img-hd src="/img/flux/0-x-prometheus-gauge-input.png" alt="Raw Prometheus gauge metric in InfluxDB" />}}
{{< /flex-content >}}
{{< flex-content >}}
{{< img-hd src="/img/flux/0-x-prometheus-gauge-aggregate-rate-output.png" alt="Calculate the average rate of change of Prometheus gauge metrics per time window with Flux" />}}
{{< /flex-content >}}
{{< /flex >}}
{{< expand-wrapper >}}
{{% expand "View example input and output data" %}}
{{% note %}}
`_start` and `_stop` columns have been omitted.
{{% /note %}}
#### Example gauge data
| _time | _measurement | _field | _value |
| :------------------- | :----------- | :------------ | ------: |
| 2021-01-01T00:00:00Z | prometheus | go_goroutines | 1571.97 |
| 2021-01-01T00:00:10Z | prometheus | go_goroutines | 1577.35 |
| 2021-01-01T00:00:20Z | prometheus | go_goroutines | 1591.67 |
| 2021-01-01T00:00:30Z | prometheus | go_goroutines | 1598.85 |
| 2021-01-01T00:00:40Z | prometheus | go_goroutines | 1600.0 |
| 2021-01-01T00:00:50Z | prometheus | go_goroutines | 1598.04 |
| 2021-01-01T00:01:00Z | prometheus | go_goroutines | 1602.93 |
#### Example aggregate.rate output
| _time | _value |
| :------------------- | -----------------: |
| 2021-01-01T00:00:10Z | |
| 2021-01-01T00:00:20Z | 0.5379999999999882 |
| 2021-01-01T00:00:30Z | 1.4320000000000164 |
| 2021-01-01T00:00:40Z | 0.7179999999999837 |
| 2021-01-01T00:00:50Z | 0.1150000000000091 |
| 2021-01-01T00:01:00Z | |
{{% /expand %}}
{{< /expand-wrapper >}}
{{% /tab-content %}}
{{% tab-content %}}
1. Import the [`experimental/aggregate` package](/flux/v0.x/stdlib/experimental/aggregate/).
2. Filter results by the **gauge metric name** measurement and `gauge` field.
3. Use [`aggregate.rate()`](/flux/v0.x/stdlib/experimental/aggregate/rate/)
to calculate the average rate of change per time window.
- Use the [`every` parameter](/flux/v0.x/stdlib/experimental/aggregate/rate/#every)
to define the time window interval.
- Use the [`unit` parameter](/flux/v0.x/stdlib/experimental/aggregate/rate/#unit)
to customize the rate unit. By default, `aggregate.rate()` returns the per second
(`1s`) rate of change.
- Use the [`groupColumns` parameter](/flux/v0.x/stdlib/experimental/aggregate/rate/#groupcolumns)
to specify columns to group by when performing the aggregation.
```js
import "experimental/aggregate"
from(bucket: "example-bucket")
|> range(start: -1m)
|> filter(fn: (r) =>
r._measurement == "go_goroutines" and
r._field == "gauge"
)
|> aggregate.rate(every: 10s, unit: 1s)
```
{{< flex >}}
{{< flex-content >}}
{{< img-hd src="/img/flux/0-x-prometheus-gauge-input.png" alt="Raw Prometheus gauge metric in InfluxDB" />}}
{{< /flex-content >}}
{{< flex-content >}}
{{< img-hd src="/img/flux/0-x-prometheus-gauge-aggregate-rate-output.png" alt="Calculate the average rate of change of Prometheus gauge metrics per time window with Flux" />}}
{{< /flex-content >}}
{{< /flex >}}
{{< expand-wrapper >}}
{{% expand "View example input and output data" %}}
{{% note %}}
`_start` and `_stop` columns have been omitted.
{{% /note %}}
#### Example gauge data
| _time | _measurement | _field | _value |
| :------------------- | :------------ | :----- | ------: |
| 2021-01-01T00:00:00Z | go_goroutines | gauge | 1571.97 |
| 2021-01-01T00:00:10Z | go_goroutines | gauge | 1577.35 |
| 2021-01-01T00:00:20Z | go_goroutines | gauge | 1591.67 |
| 2021-01-01T00:00:30Z | go_goroutines | gauge | 1598.85 |
| 2021-01-01T00:00:40Z | go_goroutines | gauge | 1600.0 |
| 2021-01-01T00:00:50Z | go_goroutines | gauge | 1598.04 |
| 2021-01-01T00:01:00Z | go_goroutines | gauge | 1602.93 |
#### Example aggregate.rate output
| _time | _value |
| :------------------- | -----------------: |
| 2021-01-01T00:00:10Z | |
| 2021-01-01T00:00:20Z | 0.5379999999999882 |
| 2021-01-01T00:00:30Z | 1.4320000000000164 |
| 2021-01-01T00:00:40Z | 0.7179999999999837 |
| 2021-01-01T00:00:50Z | 0.1150000000000091 |
| 2021-01-01T00:01:00Z | |
{{% /expand %}}
{{< /expand-wrapper >}}
{{% /tab-content %}}
{{< /tabs-wrapper >}}


@ -0,0 +1,171 @@
---
title: Work with Prometheus histograms
list_title: Histogram
description: >
Use Flux to query and transform Prometheus **histogram** metrics stored in InfluxDB.
A histogram samples observations (usually things like request durations or
response sizes) and counts them in configurable buckets.
It also provides a sum of all observed values.
menu:
flux_0_x:
name: Histogram
parent: Prometheus metric types
weight: 101
flux/v0.x/tags: [prometheus]
related:
- https://prometheus.io/docs/concepts/metric_types/, Prometheus metric types
- /{{< latest "influxdb" >}}/reference/prometheus-metrics/
- /flux/v0.x/stdlib/experimental/prometheus/histogramQuantile/
---
Use Flux to query and transform Prometheus **histogram** metrics stored in InfluxDB.
> A _histogram_ samples observations (usually things like request durations or
> response sizes) and counts them in configurable buckets.
> It also provides a sum of all observed values.
>
> {{% cite %}}[Prometheus metric types](https://prometheus.io/docs/concepts/metric_types/#histogram){{% /cite %}}
##### Example histogram metric in Prometheus format
```sh
# HELP example_histogram_duration Duration of given tasks as example histogram metric
# TYPE example_histogram_duration histogram
example_histogram_duration_bucket{le="0.1"} 80
example_histogram_duration_bucket{le="0.25"} 85
example_histogram_duration_bucket{le="0.5"} 85
example_histogram_duration_bucket{le="1"} 87
example_histogram_duration_bucket{le="2.5"} 87
example_histogram_duration_bucket{le="5"} 88
example_histogram_duration_bucket{le="+Inf"} 88
example_histogram_duration_sum 6.833441910000001
example_histogram_duration_count 88
```
The examples below include example data collected from the **InfluxDB OSS 2.x `/metrics` endpoint**
and stored in InfluxDB.
{{% note %}}
#### Prometheus metric parsing formats
Query structure depends on the [Prometheus metric parsing format](/{{< latest "influxdb" >}}/reference/prometheus-metrics/)
used to scrape the Prometheus metrics.
Select the appropriate metric format version below.
{{% /note %}}
- [Calculate quantile values from Prometheus histograms](#calculate-quantile-values-from-prometheus-histograms)
- [Calculate multiple quantiles from Prometheus histograms](#calculate-multiple-quantiles-from-prometheus-histograms)
## Visualize Prometheus histograms in InfluxDB
_InfluxDB does not currently support visualizing Prometheus histogram metrics
as a traditional histogram. The existing [InfluxDB histogram visualization](/influxdb/cloud/visualize-data/visualization-types/histogram/)
is **not compatible** with the format of Prometheus histogram data stored in InfluxDB._
## Calculate quantile values from Prometheus histograms
{{< tabs-wrapper >}}
{{% tabs "small" %}}
[Metric version 2](#)
[Metric version 1](#)
{{% /tabs %}}
{{% tab-content %}}
1. Import the [`experimental/prometheus` package](/flux/v0.x/stdlib/experimental/prometheus/).
2. Filter results by the `prometheus` measurement and **histogram metric name** field.
3. _(Recommended)_ Use [`aggregateWindow()`](/flux/v0.x/stdlib/universe/aggregatewindow/)
to downsample data and optimize the query.
**Set the `createEmpty` parameter to `false`.**
4. Use [`prometheus.histogramQuantile()`](/flux/v0.x/stdlib/experimental/prometheus/histogramQuantile/)
to calculate a specific quantile.
```js
import "experimental/prometheus"
from(bucket: "example-bucket")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "prometheus")
|> filter(fn: (r) => r._field == "qc_all_duration_seconds")
|> aggregateWindow(every: 1m, fn: mean, createEmpty: false)
|> prometheus.histogramQuantile(quantile: 0.99)
```
{{% /tab-content %}}
{{% tab-content %}}
1. Import the [`experimental/prometheus` package](/flux/v0.x/stdlib/experimental/prometheus/).
2. Filter results by the **histogram metric name** measurement.
3. _(Recommended)_ Use [`aggregateWindow()`](/flux/v0.x/stdlib/universe/aggregatewindow/)
to downsample data and optimize the query.
**Set the `createEmpty` parameter to `false`.**
4. Use [`prometheus.histogramQuantile()`](/flux/v0.x/stdlib/experimental/prometheus/histogramQuantile)
to calculate a specific quantile. Specify the `metricVersion` as `1`.
```js
import "experimental/prometheus"
from(bucket: "example-bucket")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "qc_all_duration_seconds")
|> aggregateWindow(every: 1m, fn: mean, createEmpty: false)
|> prometheus.histogramQuantile(quantile: 0.99, metricVersion: 1)
```
{{% /tab-content %}}
{{< /tabs-wrapper >}}
{{< img-hd src="/img/flux/0-x-prometheus-histogram-quantile.png" alt="Calculate a quantile from Prometheus histogram metrics" />}}
{{% note %}}
#### Set createEmpty to false
When using `aggregateWindow()` to downsample data for `prometheus.histogramQuantile()`,
**set the `createEmpty` parameter to `false`**.
Empty tables produced by `aggregateWindow()` result in the following error.
```
histogramQuantile: unexpected null in the countColumn
```
{{% /note %}}
## Calculate multiple quantiles from Prometheus histograms
1. Query histogram data using [steps 1-2 (optionally 3) from above](#calculate-quantile-values-from-prometheus-histograms).
2. Use [`union()`](/flux/v0.x/stdlib/universe/union/) to union multiple
streams of tables that calculate unique quantiles.
{{< code-tabs-wrapper >}}
{{% code-tabs "small" %}}
[Metric version 2](#)
[Metric version 1](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```js
import "experimental/prometheus"
data = from(bucket: "example-bucket")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "prometheus")
|> filter(fn: (r) => r._field == "qc_all_duration_seconds")
|> aggregateWindow(every: 1m, fn: mean, createEmpty: false)
union(tables: [
data |> prometheus.histogramQuantile(quantile: 0.99),
data |> prometheus.histogramQuantile(quantile: 0.5),
data |> prometheus.histogramQuantile(quantile: 0.25)
])
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
import "experimental/prometheus"
data = from(bucket: "example-bucket")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "qc_all_duration_seconds")
|> aggregateWindow(every: 1m, fn: mean, createEmpty: false)
union(tables: [
data |> prometheus.histogramQuantile(quantile: 0.99, metricVersion: 1),
data |> prometheus.histogramQuantile(quantile: 0.5, metricVersion: 1),
data |> prometheus.histogramQuantile(quantile: 0.25, metricVersion: 1)
])
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
{{< img-hd src="/img/flux/0-x-prometheus-histogram-multiple-quantiles.png" alt="Calculate multiple quantiles from Prometheus histogram metrics" />}}


@ -0,0 +1,138 @@
---
title: Work with Prometheus summaries
list_title: Summary
description: >
Use Flux to query and transform Prometheus **summary** metrics stored in InfluxDB.
A summary samples observations, e.g. request durations and response sizes.
While it also provides a total count of observations and a sum of all observed
values, it calculates configurable quantiles over a sliding time window.
menu:
flux_0_x:
name: Summary
parent: Prometheus metric types
weight: 101
flux/v0.x/tags: [prometheus]
related:
- https://prometheus.io/docs/concepts/metric_types/, Prometheus metric types
- /{{< latest "influxdb" >}}/reference/prometheus-metrics/
---
Use Flux to query and transform Prometheus **summary** metrics stored in InfluxDB.
> A _summary_ samples observations (usually things like request durations and response sizes).
> While it also provides a total count of observations and a sum of all observed
> values, it calculates configurable quantiles over a sliding time window.
>
> {{% cite %}}[Prometheus metric types](https://prometheus.io/docs/concepts/metric_types/#summary){{% /cite %}}
##### Example summary metric in Prometheus data
```sh
# HELP example_summary_duration The duration in seconds between a run starting and finishing.
# TYPE example_summary_duration summary
example_summary_duration{label="foo",quantile="0.5"} 4.147907251
example_summary_duration{label="foo",quantile="0.9"} 4.147907251
example_summary_duration{label="foo",quantile="0.99"} 4.147907251
example_summary_duration_sum{label="foo"} 2701.367126714001
example_summary_duration_count{label="foo"} 539
```
The examples below use data collected from the **InfluxDB OSS 2.x `/metrics` endpoint**
and stored in InfluxDB.
{{% note %}}
#### Prometheus metric parsing formats
Query structure depends on the [Prometheus metric parsing format](/{{< latest "influxdb" >}}/reference/prometheus-metrics/)
used to scrape the Prometheus metrics.
Select the appropriate metric format version below.
{{% /note %}}
- [Visualize summary metric quantile values](#visualize-summary-metric-quantile-values)
- [Derive average values from a summary metric](#derive-average-values-from-a-summary-metric)
## Visualize summary metric quantile values
Prometheus summary metrics provide quantile values that can be visualized without modification.
{{< tabs-wrapper >}}
{{% tabs "small" %}}
[Metric version 2](#)
[Metric version 1](#)
{{% /tabs %}}
{{% tab-content %}}
1. Filter by the `prometheus` measurement.
2. Filter by your **Prometheus metric name** field.
```js
from(bucket: "example-bucket")
    |> range(start: -1m)
    |> filter(fn: (r) => r._measurement == "prometheus")
    |> filter(fn: (r) => r._field == "go_gc_duration_seconds")
```
{{% /tab-content %}}
{{% tab-content %}}
1. Filter by your **Prometheus metric name** measurement.
2. Filter out the `sum` and `count` fields.
```js
from(bucket: "example-bucket")
    |> range(start: -1m)
    |> filter(fn: (r) => r._measurement == "go_gc_duration_seconds")
    |> filter(fn: (r) => r._field != "count" and r._field != "sum")
```
{{% /tab-content %}}
{{< /tabs-wrapper >}}
{{< img-hd src="/img/flux/0-x-prometheus-summary-quantiles.png" alt="Visualize Prometheus summary quantiles" />}}
## Derive average values from a summary metric
Use the **sum** and **count** values provided in Prometheus summary metrics to
derive an average summary value.
{{< tabs-wrapper >}}
{{% tabs "small" %}}
[Metric version 2](#)
[Metric version 1](#)
{{% /tabs %}}
{{% tab-content %}}
1. Filter by the `prometheus` measurement.
2. Filter by the `<metric_name>_count` and `<metric_name>_sum` fields.
3. Use [`pivot()`](/flux/v0.x/stdlib/universe/pivot/) to pivot fields into
columns based on time. Each row then contains a `<metric_name>_count` and
`<metric_name>_sum` column.
4. Divide the `<metric_name>_sum` column by the `<metric_name>_count` column to
produce a new `_value`.
```js
from(bucket: "example-bucket")
    |> range(start: -1m)
    |> filter(fn: (r) => r._measurement == "prometheus")
    |> filter(fn: (r) =>
        r._field == "go_gc_duration_seconds_count" or
        r._field == "go_gc_duration_seconds_sum"
    )
    |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
    |> map(fn: (r) => ({ r with
        _value: r.go_gc_duration_seconds_sum / r.go_gc_duration_seconds_count
    }))
```
{{% /tab-content %}}
{{% tab-content %}}
1. Filter by your **Prometheus metric name** measurement.
2. Filter by the `count` and `sum` fields.
3. Use [`pivot()`](/flux/v0.x/stdlib/universe/pivot/) to pivot fields into columns.
Each row then contains a `count` and `sum` column.
4. Divide the `sum` column by the `count` column to produce a new `_value`.
```js
from(bucket: "example-bucket")
    |> range(start: -1m)
    |> filter(fn: (r) => r._measurement == "go_gc_duration_seconds")
    |> filter(fn: (r) => r._field == "count" or r._field == "sum")
    |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
    |> map(fn: (r) => ({ r with _value: r.sum / r.count }))
```
{{% /tab-content %}}
{{< /tabs-wrapper >}}

View File

@ -0,0 +1,193 @@
---
title: Scrape Prometheus metrics
description: >
Use the Flux [`prometheus.scrape`](/flux/v0.x/stdlib/experimental/prometheus/scrape/) function to
scrape Prometheus-formatted metrics from an HTTP-accessible endpoint.
menu:
flux_0_x:
parent: Work with Prometheus
weight: 101
flux/v0.x/tags: [prometheus]
related:
- https://prometheus.io/docs/concepts/data_model/, Prometheus data model
- /flux/v0.x/stdlib/experimental/prometheus/scrape/
- /influxdb/cloud/process-data/manage-tasks/create-task/, Create an InfluxDB task
- /{{< latest "influxdb" >}}/reference/prometheus-metrics/, InfluxDB Prometheus metric parsing formats
- /influxdb/cloud/write-data/developer-tools/scrape-prometheus-metrics, Scrape Prometheus metrics with InfluxDB Cloud
- /{{< latest "influxdb" >}}/write-data/developer-tools/scrape-prometheus-metrics, Scrape Prometheus metrics with InfluxDB OSS
---
To use Flux to scrape [Prometheus-formatted metrics](https://prometheus.io/docs/concepts/data_model/)
from an HTTP-accessible endpoint:
1. Import the [`experimental/prometheus` package](/flux/v0.x/stdlib/experimental/prometheus/).
2. Use [`prometheus.scrape()`](/flux/v0.x/stdlib/experimental/prometheus/scrape/) and
specify the **url** to scrape metrics from.
{{< keep-url >}}
```js
import "experimental/prometheus"
prometheus.scrape(url: "http://localhost:8086/metrics")
```
## Output structure
`prometheus.scrape()` returns a [stream of tables](/flux/v0.x/get-started/data-model/#stream-of-tables)
with the following columns:
- **_time**: Data timestamp
- **_measurement**: `prometheus`
- **_field**: [Prometheus metric name](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels)
_(`_bucket` is trimmed from histogram metric names)_
- **_value**: [Prometheus metric value](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels)
- **url**: URL metrics were scraped from
- **Label columns**: A column for each [Prometheus label](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels).
The column label is the label name and the column value is the label value.
Tables are grouped by **_measurement**, **_field**, and **Label columns**.
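For example, the following sketch (the URL and field name are placeholders) keeps a single
field from the scraped output and regroups so that all of its label values land in one table:

```js
import "experimental/prometheus"

prometheus.scrape(url: "http://localhost:8086/metrics")
    // _field holds the Prometheus metric name
    |> filter(fn: (r) => r._field == "go_goroutines")
    // collapse the per-label grouping into a single table
    |> group(columns: ["_field"])
```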
{{% note %}}
#### Columns with the underscore prefix
Columns with the underscore (`_`) prefix are considered "system" columns.
Some Flux functions require these columns to function properly.
{{% /note %}}
### Example Prometheus query results
The following are example Prometheus metrics scraped from the **InfluxDB OSS 2.x `/metrics`** endpoint:
```sh
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 1.42276424e+09
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 5.259247e+06
# HELP task_executor_run_latency_seconds Records the latency between the time the run was due to run and the time the task started execution, by task type
# TYPE task_executor_run_latency_seconds histogram
task_executor_run_latency_seconds_bucket{task_type="system",le="0.25"} 4413
task_executor_run_latency_seconds_bucket{task_type="system",le="0.5"} 11901
task_executor_run_latency_seconds_bucket{task_type="system",le="1"} 12565
task_executor_run_latency_seconds_bucket{task_type="system",le="2.5"} 12823
task_executor_run_latency_seconds_bucket{task_type="system",le="5"} 12844
task_executor_run_latency_seconds_bucket{task_type="system",le="10"} 12864
task_executor_run_latency_seconds_bucket{task_type="system",le="+Inf"} 74429
task_executor_run_latency_seconds_sum{task_type="system"} 4.256783538679698e+11
task_executor_run_latency_seconds_count{task_type="system"} 74429
# HELP task_executor_run_duration The duration in seconds between a run starting and finishing.
# TYPE task_executor_run_duration summary
task_executor_run_duration{taskID="00xx0Xx0xx00XX0x0",task_type="threshold",quantile="0.5"} 5.178160855
task_executor_run_duration{taskID="00xx0Xx0xx00XX0x0",task_type="threshold",quantile="0.9"} 5.178160855
task_executor_run_duration{taskID="00xx0Xx0xx00XX0x0",task_type="threshold",quantile="0.99"} 5.178160855
task_executor_run_duration_sum{taskID="00xx0Xx0xx00XX0x0",task_type="threshold"} 2121.9758301650004
task_executor_run_duration_count{taskID="00xx0Xx0xx00XX0x0",task_type="threshold"} 570
```
When scraped by Flux, these metrics return the following stream of tables:
| _time | _measurement | url | _field | _value |
| :------------------------ | :----------- | :---------------------------- | :---------------------------- | -----------: |
| {{< flux/current-time >}} | prometheus | http://localhost:8086/metrics | go_memstats_alloc_bytes_total | 1422764240.0 |
| _time | _measurement | url | _field | _value |
| :------------------------ | :----------- | :---------------------------- | :------------------------------ | --------: |
| {{< flux/current-time >}} | prometheus | http://localhost:8086/metrics | go_memstats_buck_hash_sys_bytes | 5259247.0 |
| _time | _measurement | task_type | url | le | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------------------- | :--- | :-------------------------------- | -----: |
| {{< flux/current-time >}} | prometheus | system | http://localhost:8086/metrics | 0.25 | task_executor_run_latency_seconds | 4413 |
| _time | _measurement | task_type | url | le | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------------------- | :-- | :-------------------------------- | -----: |
| {{< flux/current-time >}} | prometheus | system | http://localhost:8086/metrics | 0.5 | task_executor_run_latency_seconds | 11901 |
| _time | _measurement | task_type | url | le | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------------------- | :-- | :-------------------------------- | -----: |
| {{< flux/current-time >}} | prometheus | system | http://localhost:8086/metrics | 1 | task_executor_run_latency_seconds | 12565 |
| _time | _measurement | task_type | url | le | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------------------- | :-- | :-------------------------------- | -----: |
| {{< flux/current-time >}} | prometheus | system | http://localhost:8086/metrics | 2.5 | task_executor_run_latency_seconds | 12823 |
| _time | _measurement | task_type | url | le | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------------------- | :-- | :-------------------------------- | -----: |
| {{< flux/current-time >}} | prometheus | system | http://localhost:8086/metrics | 5 | task_executor_run_latency_seconds | 12844 |
| _time | _measurement | task_type | url | le | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------------------- | :--- | :-------------------------------- | -----: |
| {{< flux/current-time >}} | prometheus | system | http://localhost:8086/metrics | +Inf | task_executor_run_latency_seconds | 74429 |
| _time | _measurement | task_type | url | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------------------- | :------------------------------------ | ----------------: |
| {{< flux/current-time >}} | prometheus | system | http://localhost:8086/metrics | task_executor_run_latency_seconds_sum | 425678353867.9698 |
| _time | _measurement | task_type | url | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------------------- | :-------------------------------------- | -----: |
| {{< flux/current-time >}} | prometheus | system | http://localhost:8086/metrics | task_executor_run_latency_seconds_count | 74429 |
| _time | _measurement | task_type | taskID | url | quantile | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------- | :---------------------------- | :------- | :------------------------- | ----------: |
| {{< flux/current-time >}} | prometheus | threshold | 00xx0Xx0xx00XX0x0 | http://localhost:8086/metrics | 0.5 | task_executor_run_duration | 5.178160855 |
| _time | _measurement | task_type | taskID | url | quantile | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------- | :---------------------------- | :------- | :------------------------- | ----------: |
| {{< flux/current-time >}} | prometheus | threshold | 00xx0Xx0xx00XX0x0 | http://localhost:8086/metrics | 0.9 | task_executor_run_duration | 5.178160855 |
| _time | _measurement | task_type | taskID | url | quantile | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------- | :---------------------------- | :------- | :------------------------- | ----------: |
| {{< flux/current-time >}} | prometheus | threshold | 00xx0Xx0xx00XX0x0 | http://localhost:8086/metrics | 0.99 | task_executor_run_duration | 5.178160855 |
| _time | _measurement | task_type | taskID | url | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------- | :---------------------------- | :----------------------------- | -----------------: |
| {{< flux/current-time >}} | prometheus | threshold | 00xx0Xx0xx00XX0x0 | http://localhost:8086/metrics | task_executor_run_duration_sum | 2121.9758301650004 |
| _time | _measurement | task_type | taskID | url | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------- | :---------------------------- | :------------------------------- | -----: |
| {{< flux/current-time >}} | prometheus | threshold | 00xx0Xx0xx00XX0x0 | http://localhost:8086/metrics | task_executor_run_duration_count | 570 |
{{% note %}}
#### Different data structures for scraped Prometheus metrics
[Telegraf](/{{< latest "telegraf" >}}/) and [InfluxDB](/{{< latest "influxdb" >}}/)
provide tools that scrape Prometheus metrics and store them in InfluxDB.
Depending on the tool and configuration you use to scrape metrics,
the resulting data structure may differ from the structure returned by `prometheus.scrape()`
described [above](#output-structure).
For information about the different data structures of scraped Prometheus metrics
stored in InfluxDB, see [InfluxDB Prometheus metric parsing formats](/{{< latest "influxdb" >}}/reference/prometheus-metrics/).
{{% /note %}}
## Write Prometheus metrics to InfluxDB
To write scraped Prometheus metrics to InfluxDB:
1. Use [`prometheus.scrape()`](/flux/v0.x/stdlib/experimental/prometheus/scrape/)
to scrape Prometheus metrics.
2. Use [`to()`](/flux/v0.x/stdlib/influxdata/influxdb/to/) to write the scraped
metrics to InfluxDB.
```js
import "experimental/prometheus"
prometheus.scrape(url: "http://example.com/metrics")
    |> to(
        bucket: "example-bucket",
        host: "http://localhost:8086",
        org: "example-org",
        token: "mYsuP3R5eCR37t0K3n"
    )
```
### Write Prometheus metrics to InfluxDB at regular intervals
To scrape Prometheus metrics and write them to InfluxDB at regular intervals,
scrape Prometheus metrics in an [InfluxDB task](/influxdb/cloud/process-data/get-started/).
```js
import "experimental/prometheus"
option task = {
    name: "Scrape Prometheus metrics",
    every: 10s
}

prometheus.scrape(url: "http://example.com/metrics")
    |> to(bucket: "example-bucket")
```

View File

@ -21,6 +21,7 @@ The `prometheus.scrape()` function retrieves [Prometheus-formatted metrics](http
from a specified URL.
The function groups metrics (including histogram and summary values) into individual tables.
{{< keep-url >}}
```js
import "experimental/prometheus"

View File

@ -12,6 +12,7 @@ menu:
weight: 210
related:
- /{{< latest "flux" >}}/stdlib/universe/histogram
- /{{< latest "flux" >}}/prometheus/metric-types/histogram/, Work with Prometheus histograms in Flux
list_query_example: histogram
---

View File

@ -0,0 +1,19 @@
---
title: Prometheus metric parsing formats
description: >
When scraping [Prometheus-formatted metrics](https://prometheus.io/docs/concepts/data_model/)
and writing them to InfluxDB Cloud, metrics are parsed and stored in InfluxDB in different formats.
menu:
influxdb_cloud_ref:
name: Prometheus metrics
weight: 8
influxdb/v2.0/tags: [prometheus]
related:
- https://prometheus.io/docs/concepts/data_model/, Prometheus data model
- /influxdb/cloud/write-data/developer-tools/scrape-prometheus-metrics/
- /{{< latest "flux" >}}/prometheus/, Work with Prometheus in Flux
- /{{< latest "telegraf" >}}/plugins/#prometheus, Telegraf Prometheus input plugin
- /{{< latest "flux" >}}/stdlib/experimental/prometheus/scrape/
---
{{< duplicate-oss >}}

View File

@ -0,0 +1,20 @@
---
title: Scrape Prometheus metrics
seotitle: Scrape Prometheus metrics into InfluxDB
weight: 205
description: >
Use Telegraf or Flux to scrape Prometheus-formatted metrics
from an HTTP-accessible endpoint and store them in InfluxDB.
menu:
influxdb_cloud:
name: Scrape Prometheus metrics
parent: Developer tools
related:
- /{{< latest "telegraf" >}}/plugins/#prometheus, Telegraf Prometheus input plugin
- /{{< latest "flux" >}}/prometheus/scrape-prometheus/
- /{{< latest "flux" >}}/stdlib/experimental/prometheus/scrape/
- /{{< latest "flux" >}}/prometheus/metric-types/
influxdb/v2.0/tags: [prometheus]
---
{{< duplicate-oss >}}

View File

@ -13,6 +13,7 @@ aliases:
- /influxdb/v2.0/query-data/guides/histograms/
related:
- /{{< latest "flux" >}}/stdlib/universe/histogram
- /{{< latest "flux" >}}/prometheus/metric-types/histogram/, Work with Prometheus histograms in Flux
list_query_example: histogram
---
@ -192,124 +193,5 @@ and `severity` as the **Group By** option:
### Use Prometheus histograms in Flux
Use InfluxDB and Telegraf to monitor a service instrumented with an endpoint that outputs [Prometheus-formatted metrics](https://prometheus.io/docs/concepts/data_model/). This example demonstrates how to use Telegraf to scrape metrics from the InfluxDB OSS 2.0 `/metrics` endpoint at regular intervals (10s by default) and then store those metrics in InfluxDB.
Use Prometheus histograms to measure the distribution of a variable, for example, the time it takes a server to respond to a request. Prometheus represents histograms as many sets of buckets (notably, different from an InfluxDB bucket).
Each unique set of labels corresponds to one set of buckets; within that set, each bucket is labeled with an upper bound.
In this example, the upper bound label is `le`, which stands for *less than or equal to*.
In the example `/metrics` endpoint output below, there is a bucket for requests that take less-than-or-equal-to 0.005s, 0.01s, and so on, up to 10s and then +Inf. Note that the buckets are cumulative, so if a request takes 7.5s, Prometheus increments the counters in the buckets for 10s as well as +Inf.
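For example, with hypothetical metric names and counts, a single 7.5s request changes the
cumulative bucket counts like this:

```sh
# Before the 7.5s request
example_request_duration_seconds_bucket{le="5"} 20
example_request_duration_seconds_bucket{le="10"} 23
example_request_duration_seconds_bucket{le="+Inf"} 25

# After the 7.5s request (only buckets with an upper bound >= 7.5s increment)
example_request_duration_seconds_bucket{le="5"} 20
example_request_duration_seconds_bucket{le="10"} 24
example_request_duration_seconds_bucket{le="+Inf"} 26
```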
{{< expand-wrapper >}}
{{% expand "View sample histogram data from the /metrics endpoint" %}}
Sample histogram metrics from the `/metrics` endpoint on an instance of InfluxDB OSS 2.0, including two histograms for requests served by the `/api/v2/write` and `/api/v2/query` endpoints.
```sh
http_api_request_duration_seconds_bucket{handler="platform",method="POST",path="/api/v2/write",response_code="204",status="2XX",user_agent="Telegraf",le="0.005"} 0
http_api_request_duration_seconds_bucket{handler="platform",method="POST",path="/api/v2/write",response_code="204",status="2XX",user_agent="Telegraf",le="0.01"} 1
http_api_request_duration_seconds_bucket{handler="platform",method="POST",path="/api/v2/write",response_code="204",status="2XX",user_agent="Telegraf",le="0.025"} 13
http_api_request_duration_seconds_bucket{handler="platform",method="POST",path="/api/v2/write",response_code="204",status="2XX",user_agent="Telegraf",le="0.05"} 14
http_api_request_duration_seconds_bucket{handler="platform",method="POST",path="/api/v2/write",response_code="204",status="2XX",user_agent="Telegraf",le="0.1"} 16
http_api_request_duration_seconds_bucket{handler="platform",method="POST",path="/api/v2/write",response_code="204",status="2XX",user_agent="Telegraf",le="0.25"} 16
http_api_request_duration_seconds_bucket{handler="platform",method="POST",path="/api/v2/write",response_code="204",status="2XX",user_agent="Telegraf",le="0.5"} 16
http_api_request_duration_seconds_bucket{handler="platform",method="POST",path="/api/v2/write",response_code="204",status="2XX",user_agent="Telegraf",le="1"} 16
http_api_request_duration_seconds_bucket{handler="platform",method="POST",path="/api/v2/write",response_code="204",status="2XX",user_agent="Telegraf",le="2.5"} 16
http_api_request_duration_seconds_bucket{handler="platform",method="POST",path="/api/v2/write",response_code="204",status="2XX",user_agent="Telegraf",le="5"} 16
http_api_request_duration_seconds_bucket{handler="platform",method="POST",path="/api/v2/write",response_code="204",status="2XX",user_agent="Telegraf",le="10"} 16
http_api_request_duration_seconds_bucket{handler="platform",method="POST",path="/api/v2/write",response_code="204",status="2XX",user_agent="Telegraf",le="+Inf"} 16
http_api_request_duration_seconds_sum{handler="platform",method="POST",path="/api/v2/write",response_code="204",status="2XX",user_agent="Telegraf"} 0.354163124
http_api_request_duration_seconds_count{handler="platform",method="POST",path="/api/v2/write",response_code="204",status="2XX",user_agent="Telegraf"} 16
http_api_request_duration_seconds_bucket{handler="platform",method="POST",path="/api/v2/query",response_code="200",status="2XX",user_agent="Chrome",le="0.005"} 0
http_api_request_duration_seconds_bucket{handler="platform",method="POST",path="/api/v2/query",response_code="200",status="2XX",user_agent="Chrome",le="0.01"} 16
http_api_request_duration_seconds_bucket{handler="platform",method="POST",path="/api/v2/query",response_code="200",status="2XX",user_agent="Chrome",le="0.025"} 68
http_api_request_duration_seconds_bucket{handler="platform",method="POST",path="/api/v2/query",response_code="200",status="2XX",user_agent="Chrome",le="0.05"} 70
http_api_request_duration_seconds_bucket{handler="platform",method="POST",path="/api/v2/query",response_code="200",status="2XX",user_agent="Chrome",le="0.1"} 70
http_api_request_duration_seconds_bucket{handler="platform",method="POST",path="/api/v2/query",response_code="200",status="2XX",user_agent="Chrome",le="0.25"} 70
http_api_request_duration_seconds_bucket{handler="platform",method="POST",path="/api/v2/query",response_code="200",status="2XX",user_agent="Chrome",le="0.5"} 70
http_api_request_duration_seconds_bucket{handler="platform",method="POST",path="/api/v2/query",response_code="200",status="2XX",user_agent="Chrome",le="1"} 71
http_api_request_duration_seconds_bucket{handler="platform",method="POST",path="/api/v2/query",response_code="200",status="2XX",user_agent="Chrome",le="2.5"} 71
http_api_request_duration_seconds_bucket{handler="platform",method="POST",path="/api/v2/query",response_code="200",status="2XX",user_agent="Chrome",le="5"} 71
http_api_request_duration_seconds_bucket{handler="platform",method="POST",path="/api/v2/query",response_code="200",status="2XX",user_agent="Chrome",le="10"} 71
http_api_request_duration_seconds_bucket{handler="platform",method="POST",path="/api/v2/query",response_code="200",status="2XX",user_agent="Chrome",le="+Inf"} 71
http_api_request_duration_seconds_sum{handler="platform",method="POST",path="/api/v2/query",response_code="200",status="2XX",user_agent="Chrome"} 1.4840353630000003
http_api_request_duration_seconds_count{handler="platform",method="POST",path="/api/v2/query",response_code="200",status="2XX",user_agent="Chrome"} 71
```
{{% /expand %}}
{{< /expand-wrapper >}}
Use the [histogramQuantile()](/{{< latest "flux" >}}/stdlib/universe/histogramquantile/) function to convert a Prometheus histogram to a specified quantile.
This function expects a stream of input tables where each table has the following form
(a minimal constructed example follows this list):
- Each row represents one bucket of a histogram, where the upper bound of the bucket is
  defined by the `upperBoundColumn` argument (by default, `le`).
- A value column contains the number of items (requests, events, etc.) in the bucket (by default, `_value`).
- Buckets are strictly cumulative, so if, for example, the `+Inf` bucket had a count of `9`
  but the `10s` bucket had a count of `10`, the following error would occur: "histogram records counts are not monotonic".
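As a minimal constructed example (the bucket bounds and counts are made up), `array.from()`
can build a table in this shape and pipe it into `histogramQuantile()`:

```js
import "array"
import "math"

// One row per bucket: "le" holds the upper bound, "_value" holds the cumulative count.
array.from(rows: [
    {le: 0.1, _value: 20.0},
    {le: 0.5, _value: 55.0},
    {le: 1.0, _value: 70.0},
    {le: math.mInf(sign: 1), _value: 80.0}
])
    |> histogramQuantile(quantile: 0.5, upperBoundColumn: "le")
```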
Because Prometheus increments the counts in each bucket continually as the process runs, whatever is most recently scraped from `/metrics` is a histogram of all the requests (events, etc.) since the process started (possibly days, weeks, or longer). To make this more useful, Telegraf scrapes at regular intervals, and we can subtract adjacent samples from the same bucket to discover the number of new items in that bucket for a given interval.
To transform a set of cumulative histograms collected over time and visualize that as some quantile (such as the 50th percentile or 99th percentile) and show change over time, complete the following high-level transformations:
1. Use `aggregateWindow()` to downsample the data to a specified time resolution to improve query performance. For example, to see how the 50th percentile changes over a month, downsample to a resolution of `1h`.
```js
// ...
    |> aggregateWindow(every: 1h, fn: last)
```
2. Use `difference()` to subtract adjacent samples so that buckets contain only the new counts for each period.
3. Sum data across the dimensions that we aren't interested in. For example, in the Prometheus data from above, there is a label for path, but we may not care to break out http requests by path. If this is the case, we would ungroup the path dimension, and then add corresponding buckets together.
4. Reshape the data so all duration buckets for the same period are in their own tables, with an upper bound column that describes the bucket represented by each row.
5. Transform each table from a histogram to a quantile with the `histogramQuantile()` function.
The following query performs the above steps.
```js
import "experimental"
// The "_field" is necessary. Any columns following "_field" will be used
// to create quantiles for each unique value in that column.
// E.g., put "path" in here to see quantiles for each unique value of "path".
groupCols = ["_field"]
// This is a helper function that takes a stream of tables,
// each containing "le" buckets for one window period.
// It uses histogramQuantile to transform the bin counts
// to a quantile defined in the "q" parameter.
// The windows are then reassembled to produce a single table for each
// unique group key.
doQuantile = (tables=<-, q) => tables
    |> histogramQuantile(quantile: q)
    |> duplicate(as: "_time", column: "_stop")
    |> window(every: inf)
    |> map(fn: (r) => ({r with _measurement: "quantile", _field: string(v: q)}))
    |> experimental.group(mode: "extend", columns: ["_measurement", "_field"])

histograms = from(bucket: "telegraf")
    |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
    |> filter(fn: (r) =>
        r._measurement == "http_api_request_duration_seconds"
        and r._field != "count" and r._field != "sum"
    )
    // Downsample the data. This helps a lot with performance!
    |> aggregateWindow(fn: last, every: v.windowPeriod)
    // Calling difference() transforms the cumulative count of requests
    // to the number of new requests per window period.
    |> difference(nonNegative: true)
    // Counters may be reset when a server restarts.
    // When this happens there will be null values produced by difference().
    |> filter(fn: (r) => exists r._value)
    // Group data on the requested dimensions, window it, and sum within those dimensions, for each window.
    |> group(columns: groupCols)
    |> window(every: v.windowPeriod)
    |> sum()
    // Fields will have names like "0.001", etc. Change _field to a float column called "le".
    // This also has the effect of ungrouping by _field, so bucket counts for each period
    // will be within the same table.
    |> map(fn: (r) => ({r with le: float(v: r._field)}))
    |> drop(columns: ["_field"])

// Compute the 50th and 95th percentile for duration of http requests.
union(tables: [
    histograms |> doQuantile(q: 0.95),
    histograms |> doQuantile(q: 0.5)
])
```
The tables piped-forward into `histogramQuantile()` should look similar to those returned by the `histograms` variable in the example above. Note that rows are sorted by `le` to make it clear that the counts increase for larger upper bounds.
_For information about working with Prometheus histograms in Flux, see
[Work with Prometheus histograms](/{{< latest "flux" >}}/prometheus/metric-types/histogram/)._

View File

@ -2,7 +2,7 @@
title: Glossary
description: >
Terms related to InfluxData products and platforms.
weight: 8
weight: 9
menu:
influxdb_2_0_ref:
name: Glossary

View File

@ -0,0 +1,286 @@
---
title: Prometheus metric parsing formats
description: >
When scraping [Prometheus-formatted metrics](https://prometheus.io/docs/concepts/data_model/)
and writing them to InfluxDB, metrics are parsed and stored in InfluxDB in different formats.
menu:
influxdb_2_0_ref:
name: Prometheus metrics
weight: 8
influxdb/v2.0/tags: [prometheus]
related:
- https://prometheus.io/docs/concepts/data_model/, Prometheus data model
- /influxdb/v2.0/write-data/developer-tools/scrape-prometheus-metrics/
- /{{< latest "flux" >}}/prometheus/, Work with Prometheus in Flux
- /{{< latest "telegraf" >}}/plugins/#prometheus, Telegraf Prometheus input plugin
- /influxdb/v2.0/write-data/no-code/scrape-data/
- /{{< latest "flux" >}}/stdlib/experimental/prometheus/scrape/
---
[Prometheus-formatted metrics](https://prometheus.io/docs/concepts/data_model/)
are parsed and written to InfluxDB in one of two formats, depending on the scraping tool used:
- [Metric version 1](#metric-version-1)
- [Metric version 2](#metric-version-2)
#### Scraping tools and parsing format
{{% oss-only %}}
| Scraping tool | InfluxDB Metric version |
| :----------------------------------------------------------------------------------------- | ----------------------------------------------------: |
| [Telegraf Prometheus plugin](/{{< latest "telegraf" >}}/plugins/#prometheus) | _Determined by `metric_version` configuration option_ |
| [InfluxDB scraper](/influxdb/v2.0/write-data/no-code/scrape-data/) | 1 |
| Flux [`prometheus.scrape()`](/{{< latest "flux" >}}/stdlib/experimental/prometheus/scrape/) | 2 |
{{% /oss-only %}}
{{% cloud-only %}}
| Scraping tool | InfluxDB Metric version |
| :----------------------------------------------------------------------------------------- | ----------------------------------------------------: |
| [Telegraf Prometheus plugin](/{{< latest "telegraf" >}}/plugins/#prometheus) | _Determined by `metric_version` configuration option_ |
| Flux [`prometheus.scrape()`](/{{< latest "flux" >}}/stdlib/experimental/prometheus/scrape/) | 2 |
{{% /cloud-only %}}
## Metric version 1
- **_time**: timestamp
- **_measurement**: [Prometheus metric name](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels)
_(`_bucket`, `_sum`, and `_count` are trimmed from histogram and summary metric names)_
- **_field**: _depends on the [Prometheus metric type](https://prometheus.io/docs/concepts/metric_types/)_
- Counter: `counter`
- Gauge: `gauge`
- Histogram: _histogram bucket upper limits_, `count`, `sum`
- Summary: _summary quantiles_, `count`, `sum`
- **_value**: [Prometheus metric value](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels)
- **tags**: A tag for each [Prometheus label](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels)
_(except for histogram bucket upper limits (`le`) or summary quantiles (`quantile`))_.
The label name is the tag key and the label value is the tag value.
### Example Prometheus query results
The following are example Prometheus metrics scraped from the **InfluxDB OSS 2.x `/metrics`** endpoint:
```sh
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 1.42276424e+09
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 5.259247e+06
# HELP task_executor_run_latency_seconds Records the latency between the time the run was due to run and the time the task started execution, by task type
# TYPE task_executor_run_latency_seconds histogram
task_executor_run_latency_seconds_bucket{task_type="system",le="0.25"} 4413
task_executor_run_latency_seconds_bucket{task_type="system",le="0.5"} 11901
task_executor_run_latency_seconds_bucket{task_type="system",le="1"} 12565
task_executor_run_latency_seconds_bucket{task_type="system",le="2.5"} 12823
task_executor_run_latency_seconds_bucket{task_type="system",le="5"} 12844
task_executor_run_latency_seconds_bucket{task_type="system",le="10"} 12864
task_executor_run_latency_seconds_bucket{task_type="system",le="+Inf"} 74429
task_executor_run_latency_seconds_sum{task_type="system"} 4.256783538679698e+11
task_executor_run_latency_seconds_count{task_type="system"} 74429
# HELP task_executor_run_duration The duration in seconds between a run starting and finishing.
# TYPE task_executor_run_duration summary
task_executor_run_duration{taskID="00xx0Xx0xx00XX0x0",task_type="threshold",quantile="0.5"} 5.178160855
task_executor_run_duration{taskID="00xx0Xx0xx00XX0x0",task_type="threshold",quantile="0.9"} 5.178160855
task_executor_run_duration{taskID="00xx0Xx0xx00XX0x0",task_type="threshold",quantile="0.99"} 5.178160855
task_executor_run_duration_sum{taskID="00xx0Xx0xx00XX0x0",task_type="threshold"} 2121.9758301650004
task_executor_run_duration_count{taskID="00xx0Xx0xx00XX0x0",task_type="threshold"} 570
```
#### Resulting line protocol
```
go_memstats_alloc_bytes_total counter=1.42276424e+09
go_memstats_buck_hash_sys_bytes gauge=5.259247e+06
task_executor_run_latency_seconds,task_type=system 0.25=4413,0.5=11901,1=12565,2.5=12823,5=12844,10=12864,+Inf=74429,sum=4.256783538679698e+11,count=74429
task_executor_run_duration,taskID=00xx0Xx0xx00XX0x0,task_type=threshold 0.5=5.178160855,0.9=5.178160855,0.99=5.178160855,sum=2121.9758301650004,count=570
```
{{< expand-wrapper >}}
{{% expand "View version 1 tables when queried from InfluxDB" %}}
| _time | _measurement | _field | _value |
| :------------------------ | :---------------------------- | :------ | -----------: |
| {{< flux/current-time >}} | go_memstats_alloc_bytes_total | counter | 1422764240.0 |
| _time | _measurement | _field | _value |
| :------------------------ | :------------------------------ | :----- | --------: |
| {{< flux/current-time >}} | go_memstats_buck_hash_sys_bytes | gauge | 5259247.0 |
| _time | _measurement | task_type | _field | _value |
| :------------------------ | :-------------------------------- | :-------- | :----- | -----: |
| {{< flux/current-time >}} | task_executor_run_latency_seconds | system | 0.25 | 4413.0 |
| _time | _measurement | task_type | _field | _value |
| :------------------------ | :-------------------------------- | :-------- | :----- | ------: |
| {{< flux/current-time >}} | task_executor_run_latency_seconds | system | 0.5 | 11901.0 |
| _time | _measurement | task_type | _field | _value |
| :------------------------ | :-------------------------------- | :-------- | :----- | ------: |
| {{< flux/current-time >}} | task_executor_run_latency_seconds | system | 1 | 12565.0 |
| _time | _measurement | task_type | _field | _value |
| :------------------------ | :-------------------------------- | :-------- | :----- | ------: |
| {{< flux/current-time >}} | task_executor_run_latency_seconds | system | 2.5 | 12823.0 |
| _time | _measurement | task_type | _field | _value |
| :------------------------ | :-------------------------------- | :-------- | :----- | ------: |
| {{< flux/current-time >}} | task_executor_run_latency_seconds | system | 5 | 12844.0 |
| _time | _measurement | task_type | _field | _value |
| :------------------------ | :-------------------------------- | :-------- | :----- | ------: |
| {{< flux/current-time >}} | task_executor_run_latency_seconds | system | +Inf | 74429.0 |
| _time | _measurement | task_type | _field | _value |
| :------------------------ | :-------------------------------- | :-------- | :----- | ----------------: |
| {{< flux/current-time >}} | task_executor_run_latency_seconds | system | sum | 425678353867.9698 |
| _time | _measurement | task_type | _field | _value |
| :------------------------ | :-------------------------------- | :-------- | :----- | -----: |
| {{< flux/current-time >}} | task_executor_run_latency_seconds | system | count | 74429.0 |
| _time | _measurement | task_type | _field | _value |
| :------------------------ | :------------------------- | :-------- | :----- | ----------: |
| {{< flux/current-time >}} | task_executor_run_duration | threshold | 0.5 | 5.178160855 |
| _time | _measurement | task_type | _field | _value |
| :------------------------ | :------------------------- | :-------- | :----- | ----------: |
| {{< flux/current-time >}} | task_executor_run_duration | threshold | 0.9 | 5.178160855 |
| _time | _measurement | task_type | _field | _value |
| :------------------------ | :------------------------- | :-------- | :----- | ----------: |
| {{< flux/current-time >}} | task_executor_run_duration | threshold | 0.99 | 5.178160855 |
| _time | _measurement | task_type | _field | _value |
| :------------------------ | :------------------------- | :-------- | :----- | -----------------: |
| {{< flux/current-time >}} | task_executor_run_duration | threshold | sum | 2121.9758301650004 |
| _time | _measurement | task_type | _field | _value |
| :------------------------ | :------------------------- | :-------- | :----- | -----: |
| {{< flux/current-time >}} | task_executor_run_duration | threshold | count | 570.0 |
{{% /expand %}}
{{< /expand-wrapper >}}
## Metric version 2
- **_time**: timestamp
- **_measurement**: `prometheus`
- **_field**: [Prometheus metric name](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels)
_(`_bucket` is trimmed from histogram metric names)_
- **_value**: [Prometheus metric value](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels)
- **url**: URL metrics were scraped from
- **tags**: A tag for each [Prometheus label](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels).
The label name is the tag key and the label value is the tag value.
### Example Prometheus query results
The following are example Prometheus metrics scraped from the **InfluxDB OSS 2.x `/metrics`** endpoint:
```sh
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 1.42276424e+09
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 5.259247e+06
# HELP task_executor_run_latency_seconds Records the latency between the time the run was due to run and the time the task started execution, by task type
# TYPE task_executor_run_latency_seconds histogram
task_executor_run_latency_seconds_bucket{task_type="system",le="0.25"} 4413
task_executor_run_latency_seconds_bucket{task_type="system",le="0.5"} 11901
task_executor_run_latency_seconds_bucket{task_type="system",le="1"} 12565
task_executor_run_latency_seconds_bucket{task_type="system",le="2.5"} 12823
task_executor_run_latency_seconds_bucket{task_type="system",le="5"} 12844
task_executor_run_latency_seconds_bucket{task_type="system",le="10"} 12864
task_executor_run_latency_seconds_bucket{task_type="system",le="+Inf"} 74429
task_executor_run_latency_seconds_sum{task_type="system"} 4.256783538679698e+11
task_executor_run_latency_seconds_count{task_type="system"} 74429
# HELP task_executor_run_duration The duration in seconds between a run starting and finishing.
# TYPE task_executor_run_duration summary
task_executor_run_duration{taskID="00xx0Xx0xx00XX0x0",task_type="threshold",quantile="0.5"} 5.178160855
task_executor_run_duration{taskID="00xx0Xx0xx00XX0x0",task_type="threshold",quantile="0.9"} 5.178160855
task_executor_run_duration{taskID="00xx0Xx0xx00XX0x0",task_type="threshold",quantile="0.99"} 5.178160855
task_executor_run_duration_sum{taskID="00xx0Xx0xx00XX0x0",task_type="threshold"} 2121.9758301650004
task_executor_run_duration_count{taskID="00xx0Xx0xx00XX0x0",task_type="threshold"} 570
```
#### Resulting line protocol
{{< keep-url >}}
```
prometheus,url=http://localhost:8086/metrics go_memstats_alloc_bytes_total=1.42276424e+09
prometheus,url=http://localhost:8086/metrics go_memstats_buck_hash_sys_bytes=5.259247e+06
prometheus,url=http://localhost:8086/metrics,task_type=system,le=0.25 task_executor_run_latency_seconds=4413
prometheus,url=http://localhost:8086/metrics,task_type=system,le=0.5 task_executor_run_latency_seconds=11901
prometheus,url=http://localhost:8086/metrics,task_type=system,le=1 task_executor_run_latency_seconds=12565
prometheus,url=http://localhost:8086/metrics,task_type=system,le=2.5 task_executor_run_latency_seconds=12823
prometheus,url=http://localhost:8086/metrics,task_type=system,le=5 task_executor_run_latency_seconds=12844
prometheus,url=http://localhost:8086/metrics,task_type=system,le=10 task_executor_run_latency_seconds=12864
prometheus,url=http://localhost:8086/metrics,task_type=system,le=+Inf task_executor_run_latency_seconds=74429
prometheus,url=http://localhost:8086/metrics,task_type=system task_executor_run_latency_seconds_sum=4.256783538679698e+11
prometheus,url=http://localhost:8086/metrics,task_type=system task_executor_run_latency_seconds_count=74429
prometheus,url=http://localhost:8086/metrics,taskID=00xx0Xx0xx00XX0x0,task_type=threshold,quantile=0.5 task_executor_run_duration=5.178160855
prometheus,url=http://localhost:8086/metrics,taskID=00xx0Xx0xx00XX0x0,task_type=threshold,quantile=0.9 task_executor_run_duration=5.178160855
prometheus,url=http://localhost:8086/metrics,taskID=00xx0Xx0xx00XX0x0,task_type=threshold,quantile=0.99 task_executor_run_duration=5.178160855
prometheus,url=http://localhost:8086/metrics,taskID=00xx0Xx0xx00XX0x0,task_type=threshold task_executor_run_duration_sum=2121.9758301650004
prometheus,url=http://localhost:8086/metrics,taskID=00xx0Xx0xx00XX0x0,task_type=threshold task_executor_run_duration_count=570
```
{{< expand-wrapper >}}
{{% expand "View version 2 tables when queried from InfluxDB" %}}
| _time | _measurement | url | _field | _value |
| :------------------------ | :----------- | :---------------------------- | :---------------------------- | -----------: |
| {{< flux/current-time >}} | prometheus | http://localhost:8086/metrics | go_memstats_alloc_bytes_total | 1422764240.0 |
| _time | _measurement | url | _field | _value |
| :------------------------ | :----------- | :---------------------------- | :------------------------------ | --------: |
| {{< flux/current-time >}} | prometheus | http://localhost:8086/metrics | go_memstats_buck_hash_sys_bytes | 5259247.0 |
| _time | _measurement | task_type | url | le | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------------------- | :--- | :-------------------------------- | -----: |
| {{< flux/current-time >}} | prometheus | system | http://localhost:8086/metrics | 0.25 | task_executor_run_latency_seconds | 4413 |
| _time | _measurement | task_type | url | le | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------------------- | :-- | :-------------------------------- | -----: |
| {{< flux/current-time >}} | prometheus | system | http://localhost:8086/metrics | 0.5 | task_executor_run_latency_seconds | 11901 |
| _time | _measurement | task_type | url | le | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------------------- | :-- | :-------------------------------- | -----: |
| {{< flux/current-time >}} | prometheus | system | http://localhost:8086/metrics | 1 | task_executor_run_latency_seconds | 12565 |
| _time | _measurement | task_type | url | le | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------------------- | :-- | :-------------------------------- | -----: |
| {{< flux/current-time >}} | prometheus | system | http://localhost:8086/metrics | 2.5 | task_executor_run_latency_seconds | 12823 |
| _time | _measurement | task_type | url | le | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------------------- | :-- | :-------------------------------- | -----: |
| {{< flux/current-time >}} | prometheus | system | http://localhost:8086/metrics | 5 | task_executor_run_latency_seconds | 12844 |
| _time | _measurement | task_type | url | le | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------------------- | :--- | :-------------------------------- | -----: |
| {{< flux/current-time >}} | prometheus | system | http://localhost:8086/metrics | +Inf | task_executor_run_latency_seconds | 74429 |
| _time | _measurement | task_type | url | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------------------- | :------------------------------------ | ----------------: |
| {{< flux/current-time >}} | prometheus | system | http://localhost:8086/metrics | task_executor_run_latency_seconds_sum | 425678353867.9698 |
| _time | _measurement | task_type | url | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------------------- | :-------------------------------------- | -----: |
| {{< flux/current-time >}} | prometheus | system | http://localhost:8086/metrics | task_executor_run_latency_seconds_count | 74429 |
| _time | _measurement | task_type | taskID | url | quantile | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------- | :---------------------------- | :------- | :------------------------- | ----------: |
| {{< flux/current-time >}} | prometheus | threshold | 00xx0Xx0xx00XX0x0 | http://localhost:8086/metrics | 0.5 | task_executor_run_duration | 5.178160855 |
| _time | _measurement | task_type | taskID | url | quantile | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------- | :---------------------------- | :------- | :------------------------- | ----------: |
| {{< flux/current-time >}} | prometheus | threshold | 00xx0Xx0xx00XX0x0 | http://localhost:8086/metrics | 0.9 | task_executor_run_duration | 5.178160855 |
| _time | _measurement | task_type | taskID | url | quantile | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------- | :---------------------------- | :------- | :------------------------- | ----------: |
| {{< flux/current-time >}} | prometheus | threshold | 00xx0Xx0xx00XX0x0 | http://localhost:8086/metrics | 0.99 | task_executor_run_duration | 5.178160855 |
| _time | _measurement | task_type | taskID | url | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------- | :---------------------------- | :----------------------------- | -----------------: |
| {{< flux/current-time >}} | prometheus | threshold | 00xx0Xx0xx00XX0x0 | http://localhost:8086/metrics | task_executor_run_duration_sum | 2121.9758301650004 |
| _time | _measurement | task_type | taskID | url | _field | _value |
| :------------------------ | :----------- | :-------- | :---------------- | :---------------------------- | :------------------------------- | -----: |
| {{< flux/current-time >}} | prometheus | threshold | 00xx0Xx0xx00XX0x0 | http://localhost:8086/metrics | task_executor_run_duration_count | 570 |
{{% /expand %}}
{{< /expand-wrapper >}}

View File

@ -0,0 +1,105 @@
---
title: Scrape Prometheus metrics
seotitle: Scrape Prometheus metrics into InfluxDB
weight: 205
description: >
Use Telegraf, InfluxDB scrapers, or the `prometheus.scrape` Flux function to
scrape Prometheus-formatted metrics from an HTTP-accessible endpoint and store
them in InfluxDB.
menu:
influxdb_2_0:
name: Scrape Prometheus metrics
parent: Developer tools
related:
- /{{< latest "telegraf" >}}/plugins/#prometheus, Telegraf Prometheus input plugin
- /{{< latest "flux" >}}/prometheus/scrape-prometheus/, Scrape Prometheus metrics with Flux
- /{{< latest "flux" >}}/stdlib/experimental/prometheus/scrape/
- /{{< latest "flux" >}}/prometheus/metric-types/
- /influxdb/v2.0/reference/prometheus-metrics/
- /influxdb/v2.0/write-data/no-code/scrape-data/
influxdb/v2.0/tags: [prometheus, scraper]
---
Use [Telegraf](/{{< latest "telegraf" >}}/){{% oss-only %}}, [InfluxDB scrapers](/influxdb/v2.0/write-data/no-code/scrape-data/),{{% /oss-only %}}
or the [`prometheus.scrape` Flux function](/{{< latest "flux" >}}/stdlib/experimental/prometheus/scrape/)
to scrape Prometheus-formatted metrics from an HTTP-accessible endpoint and store them in InfluxDB.
{{% oss-only %}}
- [Use Telegraf](#use-telegraf)
- [Use an InfluxDB scraper](#use-an-influxdb-scraper)
- [Use prometheus.scrape()](#use-prometheusscrape)
{{% /oss-only %}}
{{% cloud-only %}}
- [Use Telegraf](#use-telegraf)
- [Use prometheus.scrape()](#use-prometheusscrape)
{{% /cloud-only %}}
## Use Telegraf
To use Telegraf to scrape Prometheus-formatted metrics from an HTTP-accessible
endpoint and write them to InfluxDB{{% cloud-only %}} Cloud{{% /cloud-only %}}, follow these steps:
1. Add the [Prometheus input plugin](/{{< latest "telegraf" >}}/plugins/#prometheus) to your Telegraf configuration file.
    1. Set the `urls` to scrape metrics from.
    2. Set the `metric_version` configuration option to specify which
       [metric parsing version](/influxdb/v2.0/reference/prometheus-metrics/) to use
       _(version `2` is recommended)_.
2. Add the [InfluxDB v2 output plugin](/{{< latest "telegraf" >}}/plugins/#influxdb_v2)
   to your Telegraf configuration file and configure it to write to
   InfluxDB{{% cloud-only %}} Cloud{{% /cloud-only %}}.
##### Example telegraf.conf
```toml
# ...

## Collect Prometheus formatted metrics
[[inputs.prometheus]]
  urls = ["http://example.com/metrics"]
  metric_version = 2

## Write Prometheus formatted metrics to InfluxDB
[[outputs.influxdb_v2]]
  urls = ["http://localhost:8086"]
  token = "$INFLUX_TOKEN"
  organization = "example-org"
  bucket = "example-bucket"

# ...
```
{{% oss-only %}}
## Use an InfluxDB scraper
InfluxDB scrapers automatically scrape Prometheus-formatted metrics from an
HTTP-accessible endpoint at a regular interval.
For information about setting up an InfluxDB scraper, see
[Scrape data using InfluxDB scrapers](/influxdb/v2.0/write-data/no-code/scrape-data/).
{{% /oss-only %}}
## Use prometheus.scrape()
To use the [`prometheus.scrape()` Flux function](/{{< latest "flux" >}}/stdlib/experimental/prometheus/scrape/)
to scrape Prometheus-formatted metrics from an HTTP-accessible endpoint and write
them to InfluxDB{{% cloud-only %}} Cloud{{% /cloud-only %}}, do the following in your Flux script:
1. Import the [`experimental/prometheus` package](/{{< latest "flux" >}}/stdlib/experimental/prometheus/).
2. Use `prometheus.scrape()` and provide the URL to scrape metrics from.
3. Use [`to()`](/{{< latest "flux" >}}/stdlib/influxdata/influxdb/to/) and specify the InfluxDB{{% cloud-only %}} Cloud{{% /cloud-only %}} bucket to write
the scraped metrics to.
##### Example Flux script
```js
import "experimental/prometheus"
prometheus.scrape(url: "http://example.com/metrics")
    |> to(bucket: "example-bucket")
```
4. (Optional) To scrape Prometheus metrics at regular intervals using Flux, add your Flux
scraping script as an [InfluxDB task](/{{< latest "influxdb" >}}/process-data/).
_For information about scraping Prometheus-formatted metrics with `prometheus.scrape()`,
see [Scrape Prometheus metrics with Flux](/{{< latest "flux" >}}/prometheus/scrape-prometheus/)._

View File

@ -1,6 +1,5 @@
---
title: Scrape data
seotitle: Scrape data using InfluxDB scrapers
title: Scrape data using InfluxDB scrapers
weight: 103
description: >
Scrape data from InfluxDB instances or remote endpoints using InfluxDB scrapers.
@ -15,7 +14,6 @@ menu:
influxdb_2_0:
name: Scrape data
parent: No-code solutions
products: [oss]
---
InfluxDB scrapers collect data from specified targets at regular intervals,

View File

@ -1,6 +1,5 @@
---
title: Manage scrapers
seotitle: Manage InfluxDB scrapers
title: Manage InfluxDB scrapers
description: Create, update, and delete InfluxDB data scrapers in the InfluxDB user interface.
aliases:
- /influxdb/v2.0/collect-data/scrape-data/manage-scrapers
@ -10,8 +9,7 @@ menu:
name: Manage scrapers
parent: Scrape data
weight: 201
influxdb/v2.0/tags: [scraper]
products: [oss]
influxdb/v2.0/tags: [scraper, prometheus]
---
The following articles walk through managing InfluxDB scrapers:

View File

@ -1,15 +1,15 @@
---
title: Create a scraper
seotitle: Create an InfluxDB scraper
title: Create an InfluxDB scraper
list_title: Create a scraper
description: Create an InfluxDB scraper that collects data from InfluxDB or a remote endpoint.
aliases:
- /influxdb/v2.0/collect-data/scrape-data/manage-scrapers/create-a-scraper
- /influxdb/v2.0/write-data/scrape-data/manage-scrapers/create-a-scraper
menu:
influxdb_2_0:
name: Create a scraper
parent: Manage scrapers
weight: 301
products: [oss]
---
Create a new scraper in the InfluxDB user interface (UI).

View File

@ -1,15 +1,15 @@
---
title: Delete a scraper
seotitle: Delete an InfluxDB scraper
title: Delete an InfluxDB scraper
list_title: Delete a scraper
description: Delete an InfluxDB scraper in the InfluxDB user interface.
aliases:
- /influxdb/v2.0/collect-data/scrape-data/manage-scrapers/delete-a-scraper
- /influxdb/v2.0/write-data/scrape-data/manage-scrapers/delete-a-scraper
menu:
influxdb_2_0:
name: Delete a scraper
parent: Manage scrapers
weight: 303
products: [oss]
---
Delete a scraper from the InfluxDB user interface (UI).

View File

@ -1,15 +1,15 @@
---
title: Update a scraper
seotitle: Update an InfluxDB scraper
title: Update an InfluxDB scraper
list_title: Update a scraper
description: Update an InfluxDB scraper that collects data from InfluxDB or a remote endpoint.
aliases:
- /influxdb/v2.0/collect-data/scrape-data/manage-scrapers/update-a-scraper
- /influxdb/v2.0/write-data/scrape-data/manage-scrapers/update-a-scraper
menu:
influxdb_2_0:
name: Update a scraper
parent: Manage scrapers
weight: 302
products: [oss]
---
Update a scraper in the InfluxDB user interface (UI).

View File

@ -12,7 +12,7 @@ menu:
influxdb_2_0:
parent: Scrape data
weight: 202
influxdb/v2.0/tags: [scraper]
influxdb/v2.0/tags: [scraper, prometheus]
---
InfluxDB scrapers can collect data from any HTTP(S)-accessible endpoint that returns data

View File

@ -7,8 +7,10 @@
{{ $notifications := resources.Get "js/notifications.js" }}
{{ $keybindings := resources.Get "js/keybindings.js" }}
{{ $fluxGroupKeys := resources.Get "js/flux-group-keys.js" }}
{{ $fluxCurrentTime := resources.Get "js/flux-current-time.js" }}
{{ $footerjs := slice $versionSelector $contentInteractions $searchInteractions $listFilters $influxdbURLs $featureCallouts $notifications $keybindings | resources.Concat "js/footer.bundle.js" | resources.Fingerprint }}
{{ $fluxGroupKeyjs := slice $fluxGroupKeys | resources.Concat "js/flux-group-keys.js" | resources.Fingerprint }}
{{ $fluxCurrentTimejs := slice $fluxCurrentTime | resources.Concat "js/flux-current-time.js" | resources.Fingerprint }}
<!-- Load cloudUrls array -->
<script type="text/javascript">
@ -44,4 +46,9 @@
<!-- Load group key demo JS if when the group key demo shortcode is present -->
{{ if .Page.HasShortcode "flux/group-key-demo" }}
<script type="text/javascript" src="{{ $fluxGroupKeyjs.RelPermalink }}"></script>
{{ end }}
<!-- Load Flux current time js if when the flux/current-time shortcode is present -->
{{ if .Page.HasShortcode "flux/current-time" }}
<script type="text/javascript" src="{{ $fluxCurrentTime.RelPermalink }}"></script>
{{ end }}

View File

@ -0,0 +1 @@
<span class="cite">{{ .Inner }}</span>

View File

@ -1,5 +1,5 @@
{{ $width := .Get 0 | default "half" }}
{{ $_hugo_config := `{ "version": 1 }` }}
<div class="flex-container {{ $width }}">
{{ .Inner }}
{{ .Inner }}
</div>

View File

@ -1,4 +1,4 @@
{{ $_hugo_config := `{ "version": 1 }` }}
<div class="flex-wrapper">
{{ .Inner }}
{{ .Inner }}
</div>

View File

@ -0,0 +1 @@
<span class="current-time nowrap">2021-01-01T00:00:00Z</span>

View File

@ -3,10 +3,10 @@
{{ $alt := .Get "alt" }}
{{ if (fileExists ( print "/static" $src )) }}
{{ with (imageConfig ( print "/static" $src )) }}
{{ $imageWidth := div .Width 2 }}
<img src='{{ $src }}' alt='{{ $alt }}' width='{{ $imageWidth }}' />
{{ end }}
{{ else }}
<img src='{{ $src }}' alt='{{ $alt }}'/>
{{ with (imageConfig ( print "/static" $src )) }}
{{ $imageWidth := div .Width 2 }}
<img src='{{ $src }}' alt='{{ $alt }}' width='{{ $imageWidth }}' />
{{ end }}
{{ else }}
<img src='{{ $src }}' alt='{{ $alt }}'/>
{{ end }}

View File

@ -1 +1 @@
<span style="white-space:nowrap">{{ .Inner }}</span>
<span class="nowrap">{{ .Inner }}</span>

View File

@ -1,4 +1,6 @@
{{ $_hugo_config := `{ "version": 1 }` }}
<div class="tabs">
{{ $styleParsed := .Get 0 | default "" }}
{{ $style := .Get "style" | default $styleParsed }}
<div class="tabs {{ $style }}">
{{ .Inner }}
</div>
