Merge branch 'master' into alpha-14

pull/291/head
Scott Anderson 2019-06-28 11:15:02 -06:00
commit e92421e9a4
23 changed files with 1612 additions and 85 deletions


@ -10,8 +10,8 @@ menu:
#### Welcome
Welcome to the InfluxDB v2.0 documentation!
InfluxDB is an open source time series database designed to handle high write and query workloads.
This documentation is meant to help you learn how to use and leverage InfluxDB to meet your needs.
Common use cases include infrastructure monitoring, IoT data collection, events handling, and more.
If your use case involves time series data, InfluxDB is purpose-built to handle it.


@ -0,0 +1,246 @@
---
title: Annotated CSV syntax
description: >
Annotated CSV format is used to encode HTTP responses and results returned to the Flux `csv.from()` function.
menu:
v2_0_ref:
name: Annotated CSV
weight: 2
---
Annotated CSV (comma-separated values) format is used to encode HTTP responses and results returned to the Flux [`csv.from()` function](https://v2.docs.influxdata.com/v2.0/reference/flux/functions/csv/from/).
CSV tables must be encoded in UTF-8 and Unicode Normalization Form C as defined in [UAX15](http://www.unicode.org/reports/tr15/). Line endings must be CRLF (Carriage Return Line Feed) as defined by the `text/csv` MIME type in [RFC 4180](https://tools.ietf.org/html/rfc4180).
## Examples
In this topic, you'll find examples of valid CSV syntax for responses to the following query:
```js
from(bucket:"mydb/autogen")
|> range(start:2018-05-08T20:50:00Z, stop:2018-05-08T20:51:00Z)
|> group(columns:["_start","_stop", "region", "host"])
|> yield(name:"my-result")
```
## CSV response format
Flux supports the encodings listed below.
### Tables
A table may have the following rows and columns.
#### Rows
- **Annotation rows**: describe column properties.
- **Header row**: defines column labels (one header row per table).
- **Record row**: describes data in the table (one record per row).
##### Example
Encoding of a table with and without a header row.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Header row](#)
[Without header row](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```js
result,table,_start,_stop,_time,region,host,_value
my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,east,A,15.43
my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,east,B,59.25
my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,east,C,52.62
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,east,A,15.43
my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,east,B,59.25
my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,east,C,52.62
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
#### Columns
In addition to the data columns, a table may include the following columns:
- **Annotation column**: Used only in annotation rows. Always the first column. Displays the name of an annotation. The value can be empty or a supported [annotation](#annotations). This column is reserved for the entire length of the table, so non-annotation rows appear to start with `,`.
- **Result column**: Contains the name of the result specified by the query.
- **Table column**: Contains a unique ID for each table in a result.
### Multiple tables and results
If a file or data stream contains multiple tables or results, the following requirements must be met:
- A table column indicates which table a row belongs to.
- All rows in a table are contiguous.
- An empty row delimits a new table boundary in the following cases:
- Between tables in the same result that do not share a common table schema.
- Between concatenated CSV files.
- Each new table boundary starts with new annotation and header rows.
##### Example
Encoding of two tables in the same result, first sharing the same schema (header row), then with different schemas.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Same schema](#)
[Different schema](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```js
result,table,_start,_stop,_time,region,host,_value
my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,east,A,15.43
my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,east,B,59.25
my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,east,C,52.62
my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,west,A,62.73
my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,west,B,12.83
my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,west,C,51.62
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
,result,table,_start,_stop,_time,region,host,_value
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,east,A,15.43
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,east,B,59.25
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,east,C,52.62
,result,table,_start,_stop,_time,location,device,min,max
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,USA,5825,62.73,68.42
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,USA,2175,12.83,56.12
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,USA,6913,51.62,54.25
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
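The boundary rules above can be sketched in Python (an illustration only, not part of Flux or any InfluxDB client): split an annotated CSV stream into tables wherever an empty row appears.

```python
import csv
import io

def split_tables(annotated_csv):
    """Split an annotated CSV stream into tables at empty rows.

    Returns a list of tables, each a list of CSV rows (lists of strings).
    Annotation and header rows stay with the table they precede.
    """
    tables, current = [], []
    for row in csv.reader(io.StringIO(annotated_csv)):
        if not row:  # an empty row marks a new table boundary
            if current:
                tables.append(current)
                current = []
        else:
            current.append(row)
    if current:
        tables.append(current)
    return tables
```

Tables in the same result that share a schema (as in the first tab above) arrive without an empty row between them, so they parse as a single run of rows distinguished by the table column.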
### Dialect options
Flux supports the following dialect options for `text/csv` format.
| Option | Description| Default |
| :-------- | :--------- | :-------|
| **header** | If true, the header row is included.| `true`|
| **delimiter** | Character used to delimit columns. | `,`|
| **quoteChar** | Character used to quote values containing the delimiter. |`"`|
| **annotations** | List of annotations to encode (datatype, group, or default). |`empty`|
| **commentPrefix** | String prefix to identify a comment. Always added to annotations. |`#`|
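As an illustration only (not an official client API), these dialect options map naturally onto Python's `csv` module, with `commentPrefix` handled by inspecting the first cell of each row:

```python
import csv
import io

# Dialect defaults from the table above; `DIALECT` itself is an illustrative
# structure, not an InfluxDB client API.
DIALECT = {"delimiter": ",", "quoteChar": '"', "commentPrefix": "#"}

def read_rows(text, dialect=DIALECT):
    """Yield (is_annotation, row) pairs from annotated CSV text."""
    reader = csv.reader(io.StringIO(text),
                        delimiter=dialect["delimiter"],
                        quotechar=dialect["quoteChar"])
    for row in reader:
        if not row:
            continue  # empty rows delimit tables; skipped here
        yield (row[0].startswith(dialect["commentPrefix"]), row)
```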
### Annotations
Annotation rows are optional, describe column properties, and start with `#` (or the configured `commentPrefix` value). The first column in an annotation row always contains the annotation name. Subsequent columns contain annotation values as shown in the table below.
|Annotation name | Values| Description |
| :-------- | :--------- | :-------|
| **datatype** | a [valid data type](#Valid-data-types) | Describes the type of data. |
| **group** | boolean flag `true` or `false` | Indicates the column is part of the group key.|
| **default** | a [valid data type](#Valid-data-types) |Value to use for rows with an empty string value.|
##### Example
Encoding of datatype and group annotations for two tables.
{{< code-tabs-wrapper >}}
{{% code-tabs %}}
[Datatype annotation](#)
[Group annotation](#)
[Datatype and group annotations](#)
{{% /code-tabs %}}
{{% code-tab-content %}}
```js
#datatype,string,long,dateTime:RFC3339,dateTime:RFC3339,dateTime:RFC3339,string,string,double
,result,table,_start,_stop,_time,region,host,_value
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,east,A,15.43
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,east,B,59.25
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,east,C,52.62
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,west,A,62.73
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,west,B,12.83
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,west,C,51.62
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
#group,false,false,true,true,false,true,false,false
,result,table,_start,_stop,_time,region,host,_value
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,east,A,15.43
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,east,B,59.25
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,east,C,52.62
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,west,A,62.73
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,west,B,12.83
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,west,C,51.62
```
{{% /code-tab-content %}}
{{% code-tab-content %}}
```js
#datatype,string,long,dateTime:RFC3339,dateTime:RFC3339,dateTime:RFC3339,string,string,double
,result,table,_start,_stop,_time,region,host,_value
#group,false,false,true,true,false,true,false,false
,result,table,_start,_stop,_time,region,host,_value
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,east,A,15.43
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,east,B,59.25
,my-result,0,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,east,C,52.62
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,west,A,62.73
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,west,B,12.83
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,west,C,51.62
```
{{% /code-tab-content %}}
{{< /code-tabs-wrapper >}}
{{% note %}}
To encode a table with its group key, the `datatype`, `group`, and `default` annotations must be included.
If a table has no rows, the `default` annotation provides the group key values.
{{% /note %}}
### Valid data types
| Datatype | Flux type | Description |
| :-------- | :--------- | :-----------------------------------------------------------------------------|
| boolean | bool | a truth value, one of "true" or "false" |
| unsignedLong | uint | an unsigned 64-bit integer |
| long | int | a signed 64-bit integer |
| double | float | an IEEE-754 64-bit floating-point number |
| string | string | a UTF-8 encoded string |
| base64Binary | bytes | a base64 encoded sequence of bytes as defined in RFC 4648 |
| dateTime | time | an instant in time; may be followed by a colon (`:`) and a description of the format |
| duration | duration | a length of time represented as an unsigned 64-bit integer number of nanoseconds |
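A short Python sketch (assumptions: `dateTime:RFC3339` values use the `Z` suffix as in the examples above, and `duration` is a plain nanosecond count) showing how the `#datatype` annotation can drive value conversion:

```python
import base64
from datetime import datetime, timezone

# Parsers for each annotated CSV datatype (illustrative, not a client API).
PARSERS = {
    "boolean": lambda v: v == "true",
    "unsignedLong": int,
    "long": int,
    "double": float,
    "string": str,
    "base64Binary": base64.b64decode,
    "dateTime:RFC3339": lambda v: datetime.strptime(
        v, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc),
    "duration": int,  # nanoseconds
}

def parse_record(datatypes, record):
    """Convert one record row using the #datatype annotation values."""
    return [PARSERS[t](v) for t, v in zip(datatypes, record)]
```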
## Errors
If an error occurs during execution, a table is returned with:
- An error column that contains an error message.
- A reference column with a unique reference code that identifies more information about the error.
- A second row with error properties.
If an error occurs:
- Before results materialize, the HTTP status code indicates an error. Error details are encoded in the CSV table.
- After partial results are sent to the client, the error is encoded as the next table and remaining results are discarded. In this case, the HTTP status code remains 200 OK.
##### Example
Encoding for an error with the datatype annotation:
```js
#datatype,string,long
,error,reference
,Failed to parse query,897
```
Encoding for an error that occurs after a valid table has been encoded:
```js
#datatype,string,long,dateTime:RFC3339,dateTime:RFC3339,dateTime:RFC3339,string,string,double
,result,table,_start,_stop,_time,region,host,_value
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:00Z,west,A,62.73
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:20Z,west,B,12.83
,my-result,1,2018-05-08T20:50:00Z,2018-05-08T20:51:00Z,2018-05-08T20:50:40Z,west,C,51.62
```
```js
#datatype,string,long
,error,reference
,query terminated: reached maximum allowed memory limits,576
```


@ -48,6 +48,10 @@ VariableAssignment = identifier "=" Expression
##### Examples of variable assignment
{{% note %}}
In this code snippet, `n` and `m` are defined in an outer block as integers. Within the anonymous function, `n` and `m` are defined as strings, but only within that scope. So while the function will return `"ab"`, `n` and `m` in the outer scope are unchanged, remaining `n = 1` and `m = 2`.
{{% /note %}}
```js
n = 1
m = 2
@ -55,7 +59,7 @@ x = 5.4
f = () => {
n = "a"
m = "b"
return n + m
}
```


@ -0,0 +1,799 @@
---
title: Glossary
description: >
Terms related to InfluxData products and platforms.
weight: 6
menu:
v2_0_ref:
name: Glossary
v2.0/tags: [glossary]
---
[A](#a) | [B](#b) | [C](#c) | [D](#d) | [E](#e) | [F](#f) |[G](#g) | [H](#h) | [I](#i) | [J](#j) | [K](#k) | [L](#l) | [M](#m) | [N](#n) | [O](#o) | [P](#p) | [Q](#q) | [R](#r) | [S](#s) | [T](#t) | [U](#u) | [V](#v) | [W](#w) | [X](#x) | [Y](#y) | [Z](#z)
## A
### agent
A background process that is started by or on behalf of a user and typically requires user input. Telegraf is an example of an agent that requires user input (a configuration file) to gather metrics from declared input plugins and send metrics to declared output plugins, based on the plugins enabled for a configuration.
Related entries: [input plugin](#input-plugin), [output plugin](#output-plugin), [daemon](#daemon)
### aggregator plugin
Receives metrics from input plugins, creates aggregate metrics, and then passes aggregate metrics to configured output plugins.
Related entries: [input plugin](#input-plugin), [output plugin](#output-plugin), [processor plugin](#processor-plugin)
### aggregate
A function that returns an aggregated value across a set of points.
For a list of available aggregation functions, see [Flux built-in aggregate functions](/v2.0/reference/flux/functions/built-in/transformations/aggregates/).
Related entries: [function](#function), [selector](#selector), [transformation](#transformation)
## B
<!-- bar graph -->
### batch
A collection of points in line protocol format, separated by newlines (`0x0A`).
Submitting a batch of points using a single HTTP request to the write endpoints drastically increases performance by reducing the HTTP overhead.
InfluxData typically recommends batch sizes of 5,000-10,000 points. In some use cases, performance may improve with significantly smaller or larger batches.
Related entries: [line protocol](/v2.0/reference/line-protocol/), [point](#point)
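As a sketch (the measurement and field names here are illustrative, not from the docs), a batch is just line protocol points joined by newlines before a single write:

```python
def make_batch(points):
    """Join line protocol points into one newline-separated batch payload."""
    return "\n".join(points)

# One HTTP write of this batch replaces two separate writes.
batch = make_batch([
    "cpu,host=A usage=15.43 1525812600000000000",
    "cpu,host=B usage=59.25 1525812620000000000",
])
```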
### batch size
The number of lines or individual data points in a line protocol batch. The Telegraf agent sends metrics to output plugins in batches rather than individually.
Batch size controls the size of each write batch that Telegraf sends to the output plugins.
Related entries: [output plugin](#output-plugin)
<!-- ### block-->
### boolean
A data type with two possible values: true or false.
By convention, you can express `true` as the integer `1` and `false` as the integer `0` (zero).
### bucket
A bucket is a named location where time series data is stored. All buckets have a retention policy, a duration of time that each data point persists. A bucket belongs to an organization.
<!-- ### bytes -->
## C
<!-- ### CSV -->
### cardinality
Cardinality is the number of unique series in a bucket or database as a whole.
<!--
### cluster
### co-monitoring dashboard
### collect
-->
### collection interval
The default global interval for collecting data from each Telegraf input plugin.
The collection interval can be overridden by each individual input plugin's configuration.
Related entries: [input plugin](#input-plugin)
<!--Likely configurable for scrapers in the future.-->
### collection jitter
Collection jitter prevents every input plugin from collecting metrics simultaneously, which can have a measurable effect on the system. For each collection interval, every Telegraf input plugin will sleep for a random time between zero and the collection jitter before collecting the metrics.
Related entries: [collection interval](#collection-interval), [input plugin](#input-plugin)
<!-- ### column
### comment
### common log format (CLF)
-->
### continuous query (CQ)
Continuous queries are the predecessor to tasks in InfluxDB 2.0. Continuous queries run automatically and periodically on a database.
Related entries: [function](#function)
## D
### daemon
A background process that runs without user input.
<!--
### dashboard
### dashboard variable
### Data Explorer
### data model
-->
<!-- ### data node
A node that runs the InfluxDB? data service.
For high availability, installations must have at least two data nodes.
The number of data nodes in your cluster must be the same as your highest replication factor.
Any replication factor greater than two gives you additional fault tolerance and
query capacity in the cluster.
Data node sizes will depend on your needs. The Amazon EC2 m4.large or m4.xlarge are good starting points.
Related entries: [data service](#data-service), [replication factor](#replication-factor)
-->
### data service
Stores time series data and handles writes and queries.
Related entries: [data node](#data-node)
<!--### data type -->
### database
In InfluxDB 2.0, a database represents the InfluxDB instance as a whole.
Related entries: [continuous query](#continuous-query-cq), [retention policy](#retention-policy-rp), [user](#user)
<!-- ### date-time-->
### downsample
Aggregating high resolution data into lower resolution data to preserve disk space.
### duration
A data type that represents a duration of time (1s, 1m, 1h, 1d). Retention policies are set using durations. Data older than the duration is automatically dropped from the database.
<!-- See [Database Management](/influxdb/v1.7/query_language/database_management/#create-retention-policies-with-create-retention-policy) for how to set duration.
-->
Related entries: [retention policy](#retention-policy-rp)
<!-- ### duration (data type)
-->
## E
### event
Metrics gathered at irregular time intervals.
<!-- ### explicit block
-->
### expression
A combination of one or more constants, variables, operators, and functions.
## F
### field
The key-value pair in InfluxDB's data structure that records metadata and the actual data value.
Fields are required in InfluxDB's data structure and they are not indexed: queries on field values scan all points that match the specified time range and, as a result, are not performant relative to tags.
*Query tip:* Compare fields to tags; tags are indexed.
Related entries: [field key](#field-key), [field set](#field-set), [field value](#field-value), [tag](#tag)
### field key
The key of the key-value pair.
Field keys are strings and they store metadata.
Related entries: [field](#field), [field set](#field-set), [field value](#field-value), [tag key](#tag-key)
### field set
The collection of field keys and field values on a point.
Related entries: [field](#field), [field key](#field-key), [field value](#field-value), [point](#point)
### field value
The value of a key-value pair.
Field values are the actual data; they can be strings, floats, integers, or booleans.
A field value is always associated with a timestamp.
Field values are not indexed: queries on field values scan all points that match the specified time range and, as a result, are not performant.
*Query tip:* Compare field values to tag values; tag values are indexed.
Related entries: [field](#field), [field key](#field-key), [field set](#field-set), [tag value](#tag-value), [timestamp](#timestamp)
<!-- ### file block
-->
### float
A float represents real numbers and is written with a decimal point dividing the integer and fractional parts. For example, 1.0, 3.14.
### flush interval
The global interval for flushing data from each Telegraf output plugin to its destination.
This value should not be set lower than the collection interval.
Related entries: [collection interval](#collection-interval), [flush jitter](#flush-jitter), [output plugin](#output-plugin)
### flush jitter
Flush jitter prevents every Telegraf output plugin from sending writes simultaneously, which can overwhelm some data sinks.
Each flush interval, every Telegraf output plugin will sleep for a random time between zero and the flush jitter before emitting metrics.
Flush jitter smooths out write spikes when running a large number of Telegraf instances.
Related entries: [flush interval](#flush-interval), [output plugin](#output-plugin)
### Flux
A lightweight scripting language for querying databases (like InfluxDB) and working with data.
### function
Flux functions aggregate, select, and transform time series data. For a complete list of Flux functions, see [Flux functions](/v2.0/reference/flux/functions/all-functions/).
<!--Or opt to use Flux functions' predecessor, InfluxQL functions. See [InfluxQL functions](/influxdb/v1.7/query_language/functions/) for a complete list. -->
Related entries: [aggregation](#aggregation), [selector](#selector), [transformation](#transformation)
<!--### function block
## G
### gauge
### graph
### gzip
- compression
- file (`.gz`)
## H
### Hinted Handoff (HH)
-->
### histogram
A visual representation of statistical information that uses rectangles to show the frequency of data items in successive, equal intervals or bins.
## I
### identifier
Identifiers are tokens that refer to task names, bucket names, field keys,
measurement names, subscription names, tag keys, and
user names.
For examples and rules, see [Flux language lexical elements](/v2.0/reference/flux/language/lexical-elements/#identifiers).
Related entries:
[bucket](#bucket)
[field key](#field-key),
[measurement](#measurement),
[retention policy](#retention-policy-rp),
[tag key](#tag-key),
[user](#user)
<!--### implicit block -->
### influx
A command line interface (CLI) that interacts with the InfluxDB daemon (`influxd`).
### influxd
The InfluxDB daemon that runs the InfluxDB server and other required processes.
<!--### InfluxDB -->
### InfluxDB UI
The graphical web interface provided by InfluxDB for visualizing data and managing InfluxDB functionality.
### InfluxQL
The SQL-like query language used to query data in InfluxDB 1.x.
### input plugin
Telegraf input plugins actively gather metrics and deliver them to the core agent, where aggregator, processor, and output plugins can operate on the metrics.
To activate an input plugin, enable and configure it in Telegraf's configuration file.
Related entries: [aggregator plugin](/telegraf/v1.10/concepts/glossary/#aggregator-plugin), [collection interval](/telegraf/v1.10/concepts/glossary/#collection-interval), [output plugin](/telegraf/v1.10/concepts/glossary/#output-plugin), [processor plugin](/telegraf/v1.10/concepts/glossary/#processor-plugin)
<!-- ### instance
### int (data type)
## J
### JWT
### Jaeger
### Java Web Tokens
### join
## K
### keyword
-->
## L
<!-- ### literal
### load balancing
### Log Viewer
### logging
-->
### Line Protocol (LP)
The text-based format for writing points to InfluxDB. See [Line Protocol](/v2.0/reference/line-protocol/).
## M
### measurement
The part of InfluxDB's structure that describes the data stored in the associated fields.
Measurements are strings.
Related entries: [field](#field), [series](#series)
### member
A user in an organization. <!--or a node in a cluster. -->
<!--### meta node
A node that runs the meta service.
For high availability, installations must have three meta nodes.
Meta nodes can be very modestly sized instances like an EC2 t2.micro or even a
nano.
For additional fault tolerance, installations may use five meta nodes. The
number of meta nodes must be an odd number.
Related entries: [meta service](#meta-service)
### meta service
The consistent data store that keeps state about the cluster, including which
servers, buckets, users, tasks, subscriptions, and blocks of time exist.
Related entries: [meta node](#meta-node)
-->
### metastore
Contains internal information about the status of the system.
The metastore contains the user information, buckets, shard metadata, tasks, and subscriptions.
Related entries: [bucket](#bucket), [retention policy](#retention-policy-rp), [user](#user)
### metric
Data tracked over time.
### metric buffer
The metric buffer caches individual metrics when writes are failing for a Telegraf output plugin.
Telegraf will attempt to flush the buffer upon a successful write to the output.
The oldest metrics are dropped first when this buffer fills.
Related entries: [output plugin](#output-plugin)
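The drop-oldest behavior can be sketched with a bounded deque (a simplification of Telegraf's actual buffer; the limit value is illustrative):

```python
from collections import deque

class MetricBuffer:
    """Caches metrics while writes fail; oldest entries drop first when full."""

    def __init__(self, limit):
        self.buf = deque(maxlen=limit)  # deque discards from the left when full

    def add(self, metric):
        self.buf.append(metric)

    def flush(self):
        """Drain the buffer, as on a successful write to the output."""
        drained = list(self.buf)
        self.buf.clear()
        return drained
```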
<!-- ### missing value
-->
## N
### node
An independent `influxd` process.
Related entries: [server](#server)
### now()
The local server's nanosecond timestamp.
### null
A data type that represents a missing or unknown value. Denoted by the null value.
## O
### operator
A symbol that usually represents an action or process. For example: `+`, `-`, `>`.
### operand
The object or value on either side of an operator.
<!-- ### option
### option assignment
-->
### organization
A workspace for a group of users. All dashboards, tasks, buckets, members, and so on, belong to an organization.
### output plugin
Telegraf output plugins deliver metrics to their configured destination. To activate an output plugin, enable and configure it in Telegraf's configuration file.
Related entries: [aggregator plugin](/telegraf/v1.10/concepts/glossary/#aggregator-plugin), [flush interval](/telegraf/v1.10/concepts/glossary/#flush-interval), [input plugin](/telegraf/v1.10/concepts/glossary/#input-plugin), [processor plugin](/telegraf/v1.10/concepts/glossary/#processor-plugin)
## P
### parameter
A key-value pair used to pass information to functions.
<!--
### pipe
-->
### pipe-forward operator
An operator (`|>`) used in Flux to chain operations together. It specifies that the output of one function is the input to the next function.
### point
A point in the InfluxDB data structure that consists of a single collection of fields in a series. Each point is uniquely identified by its series and timestamp. In a series, you cannot store more than one point with the same timestamp.
When you write a new point to a series with a timestamp that matches an existing point, the field set becomes a union of the old and new field set, where any ties go to the new field set.
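The merge rule can be sketched as a dictionary union where the new field set wins ties (the field names here are illustrative):

```python
# Existing point and a new point written to the same series and timestamp.
old_fields = {"usage": 15.43, "status": "ok"}
new_fields = {"usage": 16.01, "load": 0.7}

# Union of the two field sets; ties go to the new field set.
merged = {**old_fields, **new_fields}
```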
<!-- For an example, see [Frequently Asked Questions](/influxdb/v1.7/troubleshooting/frequently-asked-questions/#how-does-influxdb-handle-duplicate-points).
Related entries: [field set](/influxdb/v1.7/concepts/glossary/#field-set), [series](/influxdb/v1.7/concepts/glossary/#series), [timestamp](/influxdb/v1.7/concepts/glossary/#timestamp)
## points per second - in 1.x - obsolete?
A deprecated measurement of the rate at which data are persisted to InfluxDB.
The schema allows and even encourages the recording of multiple metric values per point, rendering points per second ambiguous.
Write speeds are generally quoted in values per second, a more precise metric.
Related entries: [point](/influxdb/v1.7/concepts/glossary/#point), [schema](/influxdb/v1.7/concepts/glossary/#schema), [values per second](/influxdb/v1.7/concepts/glossary/#values-per-second)
-->
### precision
The precision configuration setting determines the timestamp precision retained for input data points. All incoming timestamps are truncated to the specified precision. Valid precisions are `ns`, `us` or `µs`, `ms`, and `s`.
In Telegraf, truncated timestamps are padded with zeros to create a nanosecond timestamp. Telegraf output plugins emit timestamps in nanoseconds. For example, if the precision is set to `ms`, the nanosecond epoch timestamp `1480000000123456789` is truncated to `1480000000123` in millisecond precision and padded with zeroes to make a new, less precise nanosecond timestamp of `1480000000123000000`. Telegraf output plugins do not alter the timestamp further. The precision setting is ignored for service input plugins.
Related entries: [aggregator plugin](#aggregator-plugin), [input plugin](#input-plugin), [output plugin](#output-plugin), [processor plugin](#processor-plugin), [service input plugin](#service-input-plugin)
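The truncate-then-pad arithmetic above works out as in this sketch:

```python
# Nanoseconds per unit for each valid precision.
NS_PER = {"ns": 1, "us": 10**3, "ms": 10**6, "s": 10**9}

def truncate_to_precision(ns_timestamp, precision):
    """Truncate a nanosecond timestamp to the given precision, then pad
    back to nanoseconds with zeros (integer division drops the remainder)."""
    unit = NS_PER[precision]
    return (ns_timestamp // unit) * unit
```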
<!-- ### process -->
### processor plugin
Telegraf processor plugins transform, decorate, and filter metrics collected by input plugins, passing the transformed metrics to the output plugins.
Related entries: [aggregator plugin](#aggregator-plugin), [input plugin](#input-plugin), [output plugin](#output-plugin)
<!-- ### Prometheus format -->
## Q
### query
An operation that retrieves data from InfluxDB.
See [Query data in InfluxDB](/v2.0/query-data/).
## R
### REPL
A read-eval-print-loop is an interactive programming environment where you type a command and immediately see the result.
See [Use the influx CLI's REPL](/v2.0/query-data/get-started/syntax-basics/#use-the-influx-cli-s-repl).
### record
A tuple of named values represented using an object type.
### regular expressions
Regular expressions (regex or regexp) are patterns used to match character combinations in strings.
### retention policy (RP)
A retention policy is the duration of time that each data point persists. Retention policies are specified per bucket.
<!--Retention polices describe how many copies of the data is stored in the cluster (replication factor), and the time range covered by shard groups (shard group duration). Retention policies are unique per bucket.
-->
Related entries: [duration](#duration), [measurement](#measurement), [replication factor](#replication-factor), [series](#series), [shard duration](#shard-duration), [tag set](#tag-set)
## S
### schema
How data is organized in InfluxDB. The fundamentals of the InfluxDB schema are buckets (which include retention policies), series, measurements, tag keys, tag values, and field keys.
<!-- See [Schema Design](/influxdb/v1.7/concepts/schema_and_data_layout/) for more information.
should we replace this with influxd generate help-schema link? -->
Related entries: [bucket](#bucket), [field key](#field-key), [measurement](#measurement), [retention policy](#retention-policy-rp), [series](#series), [tag key](#tag-key), [tag value](#tag-value)
<!-- ### scrape -->
### selector
A Flux function that returns a single point from the range of specified points.
See [Flux built-in selector functions](/v2.0/reference/flux/functions/built-in/transformations/selectors/) for a complete list of available built-in selector functions.
Related entries: [aggregation](#aggregation), [function](#function), [transformation](#transformation)
### series
A collection of data in the InfluxDB data structure that shares a measurement, tag set, and bucket.
Related entries: [field set](#field-set), [measurement](#measurement), [retention policy](#retention-policy-rp), [tag set](#tag-set)
### series cardinality
The number of unique bucket, measurement, tag set, and field key combinations in an InfluxDB instance.
For example, assume that an InfluxDB instance has a single bucket and one measurement.
The single measurement has two tag keys: `email` and `status`.
If there are three different `email`s, and each email address is associated with two
different `status`es, the series cardinality for the measurement is 6
(3 * 2 = 6):
| email | status |
| :-------------------- | :----- |
| lorr@influxdata.com | start |
| lorr@influxdata.com | finish |
| marv@influxdata.com | start |
| marv@influxdata.com | finish |
| cliff@influxdata.com | start |
| cliff@influxdata.com | finish |
In some cases, performing this multiplication may overestimate series cardinality because of the presence of dependent tags. Dependent tags are scoped by another tag and do not increase series
cardinality.
If we add the tag `firstname` to the example above, the series cardinality
would not be 18 (3 * 2 * 3 = 18).
The series cardinality would remain unchanged at 6, as `firstname` is already scoped by the `email` tag:
| email | status | firstname |
| :-------------------- | :----- | :-------- |
| lorr@influxdata.com | start | lorraine |
| lorr@influxdata.com | finish | lorraine |
| marv@influxdata.com | start | marvin |
| marv@influxdata.com | finish | marvin |
| cliff@influxdata.com | start | clifford |
| cliff@influxdata.com | finish | clifford |
<!--See [SHOW CARDINALITY](/influxdb/v1.7/query_language/spec/#show-cardinality) to learn about the InfluxQL commands for series cardinality. -->
Related entries: [field key](#field-key), [measurement](#measurement), [tag key](#tag-key), [tag set](#tag-set)
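The multiplication above is an upper bound on the count of distinct tag-set combinations, which a short sketch makes concrete:

```python
# The (email, status) rows from the first table above.
rows = [
    ("lorr@influxdata.com", "start"), ("lorr@influxdata.com", "finish"),
    ("marv@influxdata.com", "start"), ("marv@influxdata.com", "finish"),
    ("cliff@influxdata.com", "start"), ("cliff@influxdata.com", "finish"),
]
cardinality = len(set(rows))  # 6 distinct (email, status) pairs

# Adding the dependent tag `firstname` leaves the count unchanged, because
# each firstname is determined by its email.
firstname = {"lorr@influxdata.com": "lorraine",
             "marv@influxdata.com": "marvin",
             "cliff@influxdata.com": "clifford"}
with_firstname = {(e, s, firstname[e]) for e, s in rows}
```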
### server
A computer, virtual or physical, running InfluxDB. <!--still valid There should only be one InfluxDB process per server. -->
Related entries: [node](#node)
<!-- ### service -->
### service input plugin
Telegraf input plugins that run in a passive collection mode while the Telegraf agent is running.
Service input plugins listen on a socket for known protocol inputs, or apply their own logic to ingested metrics before delivering metrics to the Telegraf agent.
Related entries: [aggregator plugin](#aggregator-plugin), [input plugin](#input-plugin), [output plugin](#output-plugin), [processor plugin](#processor-plugin)
<!--### shard
A shard contains encoded and compressed data. Shards are represented by a TSM file on disk.
Every shard belongs to one and only one shard group.
Multiple shards may exist in a single shard group.
Each shard contains a specific set of series.
All points falling on a given series in a given shard group will be stored in the same shard (TSM file) on disk.
Related entries: [series](#series), [shard duration](#shard-duration), [shard group](#shard-group), [tsm](#tsm-time-structured-merge-tree)
### shard duration
The shard duration determines how much time each shard group spans.
The specific interval is determined by the `SHARD DURATION` of the retention policy.
<!-- See [Retention Policy management](/influxdb/v1.7/query_language/database_management/#retention-policy-management) for more information.
For example, given a retention policy with `SHARD DURATION` set to `1w`, each shard group will span a single week and contain all points with timestamps in that week.
Related entries: [database](#database), [retention policy](#retention-policy-rp), [series](/#series), [shard](#shard), [shard group](#shard-group)
### shard group
Shard groups are logical containers for shards.
Shard groups are organized by time and retention policy.
Every retention policy that contains data has at least one associated shard group.
A given shard group contains all shards with data for the interval covered by the shard group.
The interval spanned by each shard group is the shard duration.
Related entries: [database](#database), [retention policy](#retention-policy-rp), [series](/#series), [shard](#shard), [shard duration](#shard-duration)
-->
<!--### Single Stat
### Snappy compression
### source
### stacked graph
### statement
### step-plot
### stream
"stream of tables"
-->
### string
A data type used to represent text.
<!-- how does this work in 2.0? ### subscription
Subscriptions allow [Kapacitor](/kapacitor/latest/) to receive data from InfluxDB in a push model rather than the pull model based on querying data.
When Kapacitor is configured to work with InfluxDB, the subscription will automatically push every write for the subscribed database from InfluxDB to Kapacitor.
Subscriptions can use TCP or UDP for transmitting the writes.
-->
## T
<!--### TCP
### TSL
### TSM (Time-structured merge tree)
### TSM file
### table
-->
### tag
The key-value pair in InfluxDB's data structure that records metadata.
Tags are an optional part of InfluxDB's data structure but they are useful for storing commonly-queried metadata; tags are indexed so queries on tags are performant.
*Query tip:* Compare tags to fields; fields are not indexed.
Related entries: [field](#field), [tag key](#tag-key), [tag set](#tag-set), [tag value](#tag-value)
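For example, in line protocol, tags are the comma-delimited key-value pairs that follow the measurement name, while fields appear after the first unescaped space (the measurement, tag, and field names below are illustrative):

```
weather,location=us-midwest temperature=82 1556813561098000000
```

Here `location=us-midwest` is a tag, `temperature=82` is a field, and the trailing integer is a nanosecond-precision timestamp.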
### tag key
The key of a tag key-value pair. Tag keys are strings and store metadata.
Tag keys are indexed so queries on tag keys are processed quickly.
*Query tip:* Compare tag keys to field keys. Field keys are not indexed.
Related entries: [field key](#field-key), [tag](#tag), [tag set](#tag-set), [tag value](#tag-value)
### tag set
The collection of tag keys and tag values on a point.
Related entries: [point](#point), [series](#series), [tag](#tag), [tag key](#tag-key), [tag value](#tag-value)
### tag value
The value of a tag key-value pair.
Tag values are strings and they store metadata.
Tag values are indexed so queries on tag values are processed quickly.
Related entries: [tag](#tag), [tag key](#tag-key), [tag set](#tag-set)
<!--### task
### Telegraf
-->
### time (data type)
A data type that represents a single point in time with nanosecond precision.
### time series data
Sequence of data points typically consisting of successive measurements made from the same source over a time interval. Time series data shows how data evolves over
time. On a time series data graph, one of the axes is always time. Time series data may be regular or irregular. Regular time series data changes at constant intervals; irregular time series data changes at non-constant intervals.
### timestamp
The date and time associated with a point. Time in InfluxDB is in UTC.
To specify time when writing data, see [Elements of line protocol](/v2.0/reference/line-protocol/#elements-of-line-protocol).
To specify time when querying data, see [Query InfluxDB with Flux](/v2.0/query-data/get-started/query-influxdb/#2-specify-a-time-range).
Related entries: [point](#point)
<!--### token
### tracing
### transformation
An InfluxQL function that returns a value or a set of values calculated from specified points, but does not return an aggregated value across those points.
See [InfluxQL Functions](/influxdb/v1.7/query_language/functions/#transformations) for a complete list of the available and upcoming aggregations.
Related entries: [aggregation](/influxdb/v1.7/concepts/glossary/#aggregation), [function](/influxdb/v1.7/concepts/glossary/#function), [selector](/influxdb/v1.7/concepts/glossary/#selector)
## tsm (Time Structured Merge tree) - in 1.x - obsolete?
The purpose-built data storage format for InfluxDB. TSM allows for greater compaction and higher write and read throughput than existing B+ or LSM tree implementations. See [Storage Engine](http://docs.influxdata.com/influxdb/v1.7/concepts/storage_engine/) for more.
## U
### UDP
### universe block
### user
There are two kinds of users in InfluxDB:
* *Admin users* have `READ` and `WRITE` access to all databases and full access to administrative queries and user management commands.
* *Non-admin users* have `READ`, `WRITE`, or `ALL` (both `READ` and `WRITE`) access per database.
When authentication is enabled, InfluxDB only executes HTTP requests that are sent with a valid username and password.
See [Authentication and Authorization](/influxdb/v1.7/administration/authentication_and_authorization/).
-->
## V
### values per second
The preferred measurement of the rate at which data are persisted to InfluxDB. Write speeds are generally quoted in values per second.
To calculate the values per second rate, multiply the number of points written per second by the number of values stored per point. For example, if the points have four fields each, and a batch of 5000 points is written 10 times per second, the values per second rate is `4 field values per point * 5000 points per batch * 10 batches per second = 200,000 values per second`.
Related entries: [batch](#batch), [field](#field), [point](#point), [points per second](#points-per-second)
<!-- ### variable
### variable assignment
-->
## W
### WAL (Write Ahead Log)
The temporary cache for recently written points. To reduce the frequency that permanent storage files are accessed, InfluxDB caches new points in the WAL until their total size or age triggers a flush to more permanent storage. This allows for efficient batching of the writes into the TSM.
Points in the WAL can be queried and persist through a system reboot. On process start, all points in the WAL must be flushed before the system accepts new writes.
Related entries: [tsm](#tsm-time-structured-merge-tree)
<!-- ## web console - e - obsolete?
Legacy user interface for the InfluxDB Enterprise.
This has been deprecated and the suggestion is to use [Chronograf](/chronograf/latest/introduction/).
If you are transitioning from the Enterprise Web Console to Chronograf and helpful [transition guide](/chronograf/latest/guides/transition-web-admin-interface/) is available.
-->
### windowing
The process of partitioning data based on equal windows of time.
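For example, the following Flux sketch (the bucket name is hypothetical) partitions the last hour of data into five-minute windows; each window becomes its own output table that can then be aggregated:

```js
from(bucket: "example-bucket")
  |> range(start: -1h)
  |> window(every: 5m)
```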
<!--
## X
## Y
## Z
-->
View File
@ -55,15 +55,7 @@ weight: 201
2. Select the **Templates** tab.
- In the **Static Templates** tab, a list of pre-created templates appears. The following pre-created templates are available:
- **Docker**
- **Getting Started with Flux**
- **Kubernetes**
- **Local Metrics**
- **Nginx**
- **Redis**
- **System**
- In the **Static Templates** tab, a list of pre-created templates appears.
- In the **User Templates** tab, a list of custom user-created templates appears.
3. Hover over the name of the template you want to create a dashboard from, then click **Create**.
View File
@ -18,7 +18,188 @@ To view templates in the InfluxDB UI:
2. Select the **Templates** tab.
- In the **Static Templates** tab, a list of pre-created templates appears.
- In the **Static Templates** tab, a list of pre-created templates appears. For a list of static templates, see [Static templates](#static-templates) below.
- In the **User Templates** tab, a list of custom user-created templates appears.
3. Click on the name of a template to view its JSON.
## Static templates
The following Telegraf-related dashboard templates are available:
- [Docker](#docker)
- [Getting Started with Flux](#getting-started-with-flux)
- [Kubernetes](#kubernetes)
- [Local Metrics](#local-metrics)
- [Nginx](#nginx)
- [Redis](#redis)
- [System](#system)
### Docker
The Docker dashboard template contains an overview of Docker metrics. It displays the following information:
- System Uptime
- nCPUs
- System Load
- Total Memory
- Memory Usage
- Disk Usage
- CPU Usage
- System Load
- Swap
- Number of Docker containers
- CPU usage per container
- Memory usage % per container
- Memory usage per container
- Network TX traffic per container/sec
- Network RX traffic per container/sec
- Disk I/O read per container/sec
- Disk I/O write per container/sec
#### Plugins
- [`cpu` plugin](/v2.0/reference/telegraf-plugins/#cpu)
- [`disk` plugin](/v2.0/reference/telegraf-plugins/#disk)
- [`diskio` plugin](/v2.0/reference/telegraf-plugins/#diskio)
- [`docker` plugin](/v2.0/reference/telegraf-plugins/#docker)
- [`mem` plugin](/v2.0/reference/telegraf-plugins/#mem)
- [`swap` plugin](/v2.0/reference/telegraf-plugins/#swap)
- [`system` plugin](/v2.0/reference/telegraf-plugins/#system)
### Getting Started with Flux
This dashboard is designed to get you started with the Flux language. It contains explanations and visualizations for a series of increasingly complex example Flux queries.
- Creating your first basic Flux query
- Filtering data using the `filter` function
- Windowing data with the `window` function
- Aggregating data with the `aggregateWindow` function
- Multiple aggregates using Flux variables and the `yield` function
- Joins and maps with the `join`, `map`, `group`, and `drop` functions
#### Plugins
- [`cpu` plugin](/v2.0/reference/telegraf-plugins/#cpu)
- [`disk` plugin](/v2.0/reference/telegraf-plugins/#disk)
### Kubernetes
The Kubernetes dashboard gives a visual overview of Kubernetes metrics. It displays the following information:
- Allocatable Memory
- Running Pods
- Running Containers
- K8s Node Capacity CPUs
- K8s Node Allocatable CPUs
- DaemonSet
- Capacity Pods
- Allocatable Pods
- Resource Requests CPU
- Resource Limit milliscpu
- Resource Memory
- Node Memory
- Replicas Available
- Persistent Volumes Status
- Running Containers
#### Plugins
- [`kubernetes` plugin](/v2.0/reference/telegraf-plugins/)
### Local Metrics
The Local Metrics dashboard shows a visual overview of some of the metrics available from the Local Metrics endpoint located at `/metrics`. It displays the following information:
- Uptime
- Instance Info
- # of Orgs
- # of Users
- # of Buckets
- # of Tokens
- # of Telegraf configurations
- # of Dashboards
- # of Scrapers
- # of Tasks
- Local Object Store IO
- Memory Allocations (Bytes)
- Memory Usage (%)
- Memory Allocs & Frees (Bytes)
### Nginx
The Nginx dashboard gives a visual overview of Nginx metrics. It displays the following information:
- System Uptime
- nCPUs
- System Load
- Total Memory
- Memory Usage
- Disk Usage
- CPU Usage
- System Load
- Swap
- Nginx active connections
- Nginx reading: writing/waiting
- Nginx requests & connections/min
- Network
#### Plugins
- [`cpu` plugin](/v2.0/reference/telegraf-plugins/#cpu)
- [`disk` plugin](/v2.0/reference/telegraf-plugins/#disk)
- [`diskio` plugin](/v2.0/reference/telegraf-plugins/#diskio)
- [`mem` plugin](/v2.0/reference/telegraf-plugins/#mem)
- [`nginx` plugin](/v2.0/reference/telegraf-plugins/#nginx)
- [`swap` plugin](/v2.0/reference/telegraf-plugins/#swap)
- [`system` plugin](/v2.0/reference/telegraf-plugins/#system)
### Redis
The Redis dashboard gives a visual overview of Redis metrics. It displays the following information:
- System Uptime
- nCPUs
- System Load
- Total Memory
- Memory Usage
- Disk Usage
- CPU Usage
- System Load
- Swap
- Redis used memory
- Redis CPU
- Redis # commands processed per sec
- Redis evicted/expired keys
- Redis connected slaves
- Keyspace hitrate
- Redis - Network Input/Output
- Redis connections
- Redis uptime
#### Plugins
- [`cpu` plugin](/v2.0/reference/telegraf-plugins/#cpu)
- [`disk` plugin](/v2.0/reference/telegraf-plugins/#disk)
- [`mem` plugin](/v2.0/reference/telegraf-plugins/#mem)
- [`redis` plugin](/v2.0/reference/telegraf-plugins/#redis)
- [`swap` plugin](/v2.0/reference/telegraf-plugins/#swap)
- [`system` plugin](/v2.0/reference/telegraf-plugins/#system)
### System
The System dashboard gives a visual overview of system metrics. It displays the following information:
- System Uptime
- nCPUs
- System Load
- Total Memory
- Memory Usage
- Disk Usage
- CPU Usage
- System Load
- Disk IO
- Network
- Processes
- Swap
#### Plugins
- [`disk` plugin](/v2.0/reference/telegraf-plugins/#disk)
- [`diskio` plugin](/v2.0/reference/telegraf-plugins/#diskio)
- [`mem` plugin](/v2.0/reference/telegraf-plugins/#mem)
- [`net` plugin](/v2.0/reference/telegraf-plugins/#net)
- [`swap` plugin](/v2.0/reference/telegraf-plugins/#swap)
- [`system` plugin](/v2.0/reference/telegraf-plugins/#system)
View File
@ -11,14 +11,25 @@ menu:
parent: Visualization types
---
The **Gauge** view displays the single value most recent value for a time series in a gauge view.
The **Gauge** visualization displays the most recent value for a time series in a gauge.
{{< img-hd src="/img/2-0-visualizations-gauge-example.png" alt="Gauge example" />}}
To select this view, select the **Gauge** option from the visualization dropdown in the upper right.
Select the **Gauge** option from the visualization dropdown in the upper right.
#### Gauge Controls
## Gauge behavior
The gauge visualization displays a single numeric data point within a defined spectrum (_default is 0-100_).
It uses the latest point in the first table (or series) returned by the query.
{{% note %}}
#### Queries should return one table
Flux does not guarantee the order in which tables are returned.
If a query returns multiple tables (or series), the table order can change between query executions
and result in the Gauge displaying inconsistent data.
For consistent results, the Gauge query should return a single table.
{{% /note %}}
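One way to guarantee a single table is to filter to a single series and keep only its most recent point, as in this sketch (the bucket, measurement, and field names are illustrative, and the filter is assumed to match only one series):

```js
from(bucket: "example-bucket")
  |> range(start: -5m)
  |> filter(fn: (r) =>
      r._measurement == "mem" and
      r._field == "used_percent"
  )
  |> last()
```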
## Gauge Controls
To view **Gauge** controls, click the settings icon ({{< icon "gear" >}}) next to
the visualization dropdown in the upper right.
@ -32,3 +43,25 @@ the visualization dropdown in the upper right.
- **Add a Threshold**: Change the color of the gauge based on the current value.
- **Value is**: Enter the value at which the gauge should appear in the selected color.
Choose a color from the dropdown menu next to the value.
## Gauge examples
Gauge visualizations are useful for showing the current value of a metric and displaying
where it falls within a spectrum.
### Steam pressure gauge
The following example queries sensor data that tracks the pressure of steam pipes
in a facility and displays it as a gauge.
###### Query pressure data from a specific sensor
```js
from(bucket: "example-bucket")
|> range(start: -1m)
|> filter(fn: (r) =>
r._measurement == "steam-sensors" and
r._field == "psi" and
r.sensorID == "a211i"
)
```
###### Visualization options for pressure gauge
{{< img-hd src="/img/2-0-visualizations-guage-pressure.png" alt="Pressure guage example" />}}
View File
@ -10,6 +10,9 @@ menu:
v2_0:
name: Graph + Single Stat
parent: Visualization types
related:
- /v2.0/visualize-data/visualization-types/graph
- /v2.0/visualize-data/visualization-types/single-stat
---
The **Graph + Single Stat** view displays the specified time series in a line graph
@ -17,11 +20,24 @@ and overlays the single most recent value as a large numeric value.
{{< img-hd src="/img/2-0-visualizations-line-graph-single-stat-example.png" alt="Line Graph + Single Stat example" />}}
To select this view, select the **Graph + Single Stat** option from the visualization
dropdown in the upper right.
Select the **Graph + Single Stat** option from the visualization dropdown in the upper right.
#### Graph + Single Stat Controls
## Graph + Single Stat behavior
The Graph visualization color codes each table (or series) in the queried data set.
When multiple series are present, it automatically assigns colors based on the selected [Line Colors option](#options).
The Single Stat visualization displays a single numeric data point.
It uses the latest point in the first table (or series) returned by the query.
{{% note %}}
#### Queries should return one table
Flux does not guarantee the order in which tables are returned.
If a query returns multiple tables (or series), the table order can change between query executions
and result in the Single Stat visualization displaying inconsistent data.
For consistent Single Stat results, the query should return a single table.
{{% /note %}}
## Graph + Single Stat Controls
To view **Graph + Single Stat** controls, click the settings icon ({{< icon "gear" >}})
next to the visualization dropdown in the upper right.
@ -56,3 +72,22 @@ next to the visualization dropdown in the upper right.
Choose a color from the dropdown menu next to the value.
- **Colorization**: Choose **Text** for the single stat to change color based on the configured thresholds.
Choose **Background** for the background of the graph to change color based on the configured thresholds.
## Graph + Single Stat examples
The primary use case for the Graph + Single Stat visualization is to show the current or latest
value as well as historical values.
### Show current value and historical values
The following example shows the current percentage of memory used as well as memory usage over time:
###### Query memory usage percentage
```js
from(bucket: "example-bucket")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) =>
r._measurement == "mem" and
r._field == "used_percent"
)
```
###### Memory usage visualization
{{< img-hd src="/img/2-0-visualizations-graph-single-stat-mem.png" alt="Graph + Single Stat Memory Usage Example" />}}
View File
@ -11,15 +11,20 @@ menu:
parent: Visualization types
---
There are several types of graphs you can create.
To select this view, select the **Graph** option from the visualization dropdown
in the upper right.
The Graph visualization provides several types of graphs, each configured through
the [Graph options](#graph-options).
{{< img-hd src="/img/2-0-visualizations-line-graph-example.png" alt="Line Graph example" />}}
#### Graph controls
Select the **Graph** option from the visualization dropdown in the upper right.
## Graph behavior
The Graph visualization color codes each table (or series) in the queried data set.
When multiple series are present, it automatically assigns colors based on the selected [Line Colors option](#options).
When using a line graph, all points within a single table are connected. When multiple series are present, it automatically assigns colors based on the selected [Line Colors option](#options).
## Graph controls
To view **Graph** controls, click the settings icon ({{< icon "gear" >}}) next
to the visualization dropdown in the upper right.
@ -47,22 +52,19 @@ to the visualization dropdown in the upper right.
- **Min**: Minimum y-axis value.
- **Max**: Maximum y-axis value.
##### Graph with linear interpolation
## Graph Examples
##### Graph with linear interpolation
{{< img-hd src="/img/2-0-visualizations-line-graph-example.png" alt="Line Graph example" />}}
##### Graph with smooth interpolation
{{< img-hd src="/img/2-0-visualizations-line-graph-smooth-example.png" alt="Step-Plot Graph example" />}}
##### Graph with step interpolation
{{< img-hd src="/img/2-0-visualizations-line-graph-step-example.png" alt="Step-Plot Graph example" />}}
<!-- ##### Stacked Graph example
{{< img-hd src="/img/2-0-visualizations-stacked-graph-example.png" alt="Stacked Graph example" />}} -->
<!-- ##### Bar Graph example
{{< img-hd src="/img/2-0-visualizations-bar-graph-example.png" alt="Bar Graph example" />}} -->
View File
@ -10,6 +10,8 @@ menu:
v2_0:
name: Heatmap
parent: Visualization types
related:
- /v2.0/visualize-data/visualization-types/scatter
---
A **Heatmap** displays the distribution of data on an x and y axes where color
@ -17,10 +19,16 @@ represents different concentrations of data points.
{{< img-hd src="/img/2-0-visualizations-heatmap-example.png" alt="Heatmap example" />}}
To select this view, select the **Heatmap** option from the visualization dropdown in the upper right.
Select the **Heatmap** option from the visualization dropdown in the upper right.
#### Heatmap Controls
## Heatmap behavior
Heatmaps divide data points into "bins", segments of the visualization with upper
and lower bounds for both [X and Y axes](#data).
The [Bin Size option](#options) determines the bounds for each bin.
The total number of points that fall within a bin determines its value and color.
Warmer or brighter colors represent higher bin values or density of points within the bin.
## Heatmap Controls
To view **Heatmap** controls, click the settings icon ({{< icon "gear" >}})
next to the visualization dropdown in the upper right.
@ -51,3 +59,57 @@ next to the visualization dropdown in the upper right.
- **Custom**: Manually specify the value range of the y-axis.
- **Min**: Minimum y-axis value.
- **Max**: Maximum y-axis value.
## Heatmap examples
### Cross-measurement correlation
The following example explores possible correlation between CPU and Memory usage.
It uses data collected with the Telegraf [Mem](/v2.0/reference/telegraf-plugins/#mem)
and [CPU](/v2.0/reference/telegraf-plugins/#cpu) input plugins.
###### Join CPU and memory usage
The following query joins CPU and memory usage on `_time`.
Each row in the output table contains `_value_cpu` and `_value_mem` columns.
```js
cpu = from(bucket: "example-bucket")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) =>
r._measurement == "cpu" and
r._field == "usage_system" and
r.cpu == "cpu-total"
)
mem = from(bucket: "example-bucket")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) =>
r._measurement == "mem" and
r._field == "used_percent"
)
join(tables: {cpu: cpu, mem: mem}, on: ["_time"], method: "inner")
```
###### Use a heatmap to visualize correlation
In the Heatmap visualization controls, `_value_cpu` is selected as the [X Column](#data)
and `_value_mem` is selected as the [Y Column](#data).
The domain for each axis is also customized to account for the scale difference
between column values.
{{< img-hd src="/img/2-0-visualizations-heatmap-correlation.png" alt="Heatmap correlation example" />}}
## Important notes
### Differences between a heatmap and a scatter plot
Heatmaps and [Scatter plots](/v2.0/visualize-data/visualization-types/scatter/)
both visualize the distribution of data points on X and Y axes.
However, in certain cases, heatmaps provide better visibility into point density.
For example, the dashboard cells below visualize the same query results:
{{< img-hd src="/img/2-0-visualizations-heatmap-vs-scatter.png" alt="Heatmap vs Scatter plot" />}}
The heatmap indicates isolated high point density, which isn't visible in the scatter plot.
In the scatter plot visualization, points that share the same X and Y coordinates
appear as a single point.
View File
@ -3,7 +3,7 @@ title: Histogram visualization
list_title: Histogram
list_image: /img/2-0-visualizations-histogram-example.png
description: >
A histogram is a way to view the distribution of data. Unlike column charts, histograms have no time axis.
A histogram is a way to view the distribution of data.
The y-axis is dedicated to count, and the x-axis is divided into bins.
weight: 204
menu:
@ -12,20 +12,32 @@ menu:
parent: Visualization types
---
A histogram is a way to view the distribution of data. Unlike column charts, histograms have no time axis.
The y-axis is dedicated to count, and the x-axis is divided into bins.
A histogram is a way to view the distribution of data.
The y-axis is dedicated to count, and the X-axis is divided into bins.
{{< img-hd src="/img/2-0-visualizations-histogram-example.png" alt="Histogram example" />}}
To select this view, select the **Histogram** option from the visualization dropdown in the upper right.
Select the **Histogram** option from the visualization dropdown in the upper right.
#### Histogram Controls
## Histogram behavior
The Histogram visualization is a bar graph that displays the number of data points
that fall within "bins", segments of the X axis with upper and lower bounds.
Bin thresholds are determined by dividing the width of the X axis by the number
of bins set using the [Bins option](#options).
Data within bins can be further grouped or segmented by selecting columns in the
[Group By option](#options).
{{% note %}}
The Histogram visualization automatically bins, segments, and counts data.
To work properly, query results **should not** be structured as histogram data.
{{% /note %}}
## Histogram Controls
To view **Histogram** controls, click the settings icon ({{< icon "gear" >}}) next
to the visualization dropdown in the upper right.
###### Data
- **Column**: The column to select data from.
- **X Column**: The column to select data from.
- **Group By**: The column to group by.
###### Options
@ -43,3 +55,26 @@ to the visualization dropdown in the upper right.
- **Custom**: Manually specify the value range of the x-axis.
- **Min**: Minimum x-axis value.
- **Max**: Maximum x-axis value.
## Histogram examples
### View error counts by severity over time
The following example uses the Histogram visualization to show the number of errors
"binned" by time and segmented by severity.
_It utilizes data from the [Telegraf Syslog plugin](/v2.0/reference/telegraf-plugins/#syslog)._
##### Query for errors by severity code
```js
from(bucket: "example-bucket")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) =>
r._measurement == "syslog" and
r._field == "severity_code"
)
```
##### Histogram settings
In the Histogram visualization options, select `_time` as the [X Column](#data)
and `severity` as the [Group By](#data) option:
{{< img-hd src="/img/2-0-visualizations-histogram-errors.png" alt="Errors histogram" />}}
View File
@ -9,21 +9,30 @@ menu:
v2_0:
name: Scatter
parent: Visualization types
related:
- /v2.0/visualize-data/visualization-types/heatmap
---
The **Scatter** view uses a scatter plot to display time series data.
{{< img-hd src="/img/2-0-visualizations-scatter-example.png" alt="Scatter plot example" />}}
To select this view, select the **Scatter** option from the visualization dropdown in the upper right.
Select the **Scatter** option from the visualization dropdown in the upper right.
#### Scatter controls
## Scatter behavior
The scatter visualization maps each data point to X and Y coordinates.
X and Y axes are specified with the [X Column](#data) and [Y Column](#data) visualization options.
Each unique series is differentiated using fill colors and symbols.
Use the [Symbol Column](#data) and [Fill Column](#data) options to select columns
used to differentiate points in the visualization.
## Scatter controls
To view **Scatter** controls, click the settings icon ({{< icon "gear" >}}) next
to the visualization dropdown in the upper right.
###### Data
- **Symbol column**: Define a column containing values that should be differentiated with symbols.
- **Fill column**: Define a column containing values that should be differentiated with fill color.
- **Symbol Column**: Define a column containing values that should be differentiated with symbols.
- **Fill Column**: Define a column containing values that should be differentiated with fill color.
- **X Column**: Select a column to display on the x-axis.
- **Y Column**: Select a column to display on the y-axis.
@ -42,3 +51,59 @@ to the visualization dropdown in the upper right.
- **Custom**: Manually specify the value range of the y-axis.
- **Min**: Minimum y-axis value.
- **Max**: Maximum y-axis value.
## Scatter examples
### Cross-measurement correlation
The following example explores possible correlation between CPU and Memory usage.
It uses data collected with the Telegraf [Mem](/v2.0/reference/telegraf-plugins/#mem)
and [CPU](/v2.0/reference/telegraf-plugins/#cpu) input plugins.
###### Query CPU and memory usage
The following query creates a union of CPU and memory usage.
It scales the CPU usage metric to better align with baseline memory usage.
```js
cpu = from(bucket: "example-bucket")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) =>
r._measurement == "cpu" and
r._field == "usage_system" and
r.cpu == "cpu-total"
)
// Scale CPU usage
|> map(fn: (r) => ({
_value: r._value + 60.0,
_time: r._time
})
)
mem = from(bucket: "example-bucket")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) =>
r._measurement == "mem" and
r._field == "used_percent"
)
union(tables: [cpu, mem])
```
###### Use a scatter plot to visualize correlation
In the Scatter visualization controls, points are differentiated based on their group keys.
{{< img-hd src="/img/2-0-visualizations-scatter-correlation.png" alt="Heatmap correlation example" />}}
## Important notes
### Differences between a scatter plot and a heatmap
Scatter plots and [Heatmaps](/v2.0/visualize-data/visualization-types/heatmap/)
both visualize the distribution of data points on X and Y axes.
However, in certain cases, scatter plots can "hide" points if they share the same X and Y coordinates.
For example, the dashboard cells below visualize the same query results:
{{< img-hd src="/img/2-0-visualizations-heatmap-vs-scatter.png" alt="Heatmap vs Scatter plot" />}}
The heatmap indicates isolated high point density, which isn't visible in the scatter plot.
In the scatter plot visualization, points that share the same X and Y coordinates
appear as a single point.
View File
@ -15,10 +15,21 @@ The **Single Stat** view displays the most recent value of the specified time se
{{< img-hd src="/img/2-0-visualizations-single-stat-example.png" alt="Single stat example" />}}
To select this view, select the **Single Stat** option from the visualization dropdown in the upper right.
Select the **Single Stat** option from the visualization dropdown in the upper right.
#### Single Stat Controls
## Single Stat behavior
The Single Stat visualization displays a single numeric data point.
It uses the latest point in the first table (or series) returned by the query.
{{% note %}}
#### Queries should return one table
Flux does not guarantee the order in which tables are returned.
If a query returns multiple tables (or series), the table order can change between query executions
and result in the Single Stat visualization displaying inconsistent data.
For consistent results, the Single Stat query should return a single table.
{{% /note %}}
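A common way to guarantee a single table is to narrow the query to one series, for example by filtering on a specific tag value (the bucket and `host` value below are hypothetical):

```js
from(bucket: "example-bucket")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) =>
      r._measurement == "mem" and
      r._field == "used_percent" and
      r.host == "host1"
  )
```

Because only one series matches the filter, the query returns exactly one table, and the Single Stat visualization always displays the same series.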
## Single Stat Controls
To view **Single Stat** controls, click the settings icon ({{< icon "gear" >}})
next to the visualization dropdown in the upper right.
@@ -34,3 +45,21 @@ next to the visualization dropdown in the upper right.
Choose a color from the dropdown menu next to the value.
- **Colorization**: Choose **Text** for the single stat to change color based on the configured thresholds.
Choose **Background** for the background of the graph to change color based on the configured thresholds.
## Single Stat examples
### Show human-readable current value
The following example shows the current memory usage displayed as a human-readable percentage:
###### Query memory usage percentage
```js
from(bucket: "example-bucket")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) =>
r._measurement == "mem" and
r._field == "used_percent"
)
```
###### Memory usage as a single stat
{{< img-hd src="/img/2-0-visualizations-single-stat-memor.png" alt="Graph + Single Stat Memory Usage Example" />}}

View File

@@ -1,6 +1,6 @@
---
title: Table visualization
list_title: Single stat
list_title: Table
list_image: /img/2-0-visualizations-table-example.png
description: >
The Table option displays the results of queries in a tabular view, which is
@@ -17,10 +17,16 @@ sometimes easier to analyze than graph views of data.
{{< img-hd src="/img/2-0-visualizations-table-example.png" alt="Table example" />}}
To select this view, select the **Table** option from the visualization dropdown in the upper right.
Select the **Table** option from the visualization dropdown in the upper right.
#### Table Controls
## Table behavior
The table visualization renders queried data in structured, easy-to-read tables.
Columns and rows match those in the query output.
If query results contain multiple tables, only one table is shown at a time.
Select other output tables in the far left column of the table visualization.
Tables are identified by their [group key](/v2.0/query-data/get-started/#group-keys).
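For example, a query like the following (bucket and measurement names assumed for illustration) returns one table per `host`, and each table is selectable in the far left column:

```js
from(bucket: "example-bucket")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) =>
      r._measurement == "cpu" and
      r._field == "usage_user"
  )
  |> group(columns: ["host"])
```

Here `host` is part of each output table's group key, so the table visualization labels each selectable table by its `host` value.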
## Table Controls
To view **Table** controls, click the settings icon ({{< icon "gear" >}}) next to
the visualization dropdown in the upper right.
@@ -51,3 +57,27 @@ the visualization dropdown in the upper right.
- **Add a Threshold**: Change the color of the table based on the current value.
- **Value is**: Enter the value at which the table should appear in the selected color.
Choose a color from the dropdown menu next to the value.
## Table examples
Tables are helpful when displaying many human-readable metrics in a dashboard,
such as cluster statistics or log messages.
### Human-readable cluster metrics
The following example queries the latest reported memory usage from a cluster of servers.
###### Query the latest memory usage from each host
```js
from(bucket: "example-bucket")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) =>
r._measurement == "mem" and
r._field == "used_percent"
)
|> group(columns: ["host"])
|> last()
|> group()
|> keep(columns: ["_value", "host"])
```
###### Cluster metrics in a table
{{< img-hd src="/img/2-0-visualizations-table-human-readable.png" alt="Human readable metrics in a table" />}}

View File

@@ -156,55 +156,69 @@ Resources:
Description: Lambda function performing request URI rewriting.
Code:
ZipFile: |
const config = {
suffix: '.html',
appendToDirs: 'index.html',
removeTrailingSlash: false,
};
'use strict';
const regexSuffixless = /\/[a-z0-9]+([0-9\.]+)?$/; // e.g. "/some/page" but not "/", "/some/" or "/some.jpg"
const regexTrailingSlash = /.+\/$/; // e.g. "/some/" or "/some/page/" but not root "/"
exports.handler = function handler(event, context, callback) {
exports.handler = (event, context, callback) => {
const { request } = event.Records[0].cf;
const { uri } = request;
const { suffix, appendToDirs, removeTrailingSlash } = config;
const { uri, headers, origin } = request;
const extension = uri.substr(uri.lastIndexOf('.') + 1);
// Append ".html" to origin request
if (suffix && uri.match(regexSuffixless)) {
request.uri = uri + suffix;
callback(null, request);
return;
const validExtensions = ['.html', '.css', '.js', '.xml', '.png', '.jpg', '.svg', '.otf', '.eot', '.ttf', '.woff'];
const indexPath = 'index.html';
const defaultPath = '/v2.0/';
// If path ends with '/', then append 'index.html', otherwise redirect to a
// path with '/' or ignore if the path ends with a valid file extension.
if ((uri == '/') || (uri.length < defaultPath.length)) {
callback(null, {
status: '302',
statusDescription: 'Found',
headers: {
location: [{
key: 'Location',
value: defaultPath,
}],
}
});
} else if (uri.endsWith('/')) {
request.uri = uri + indexPath;
} else if (uri.endsWith('/index.html')) {
callback(null, {
status: '302',
statusDescription: 'Found',
headers: {
location: [{
key: 'Location',
value: uri.substr(0, uri.length - indexPath.length),
}],
}
});
} else if (validExtensions.filter((ext) => uri.endsWith(ext)).length == 0) {
callback(null, {
status: '302',
statusDescription: 'Found',
headers: {
location: [{
key: 'Location',
value: uri + '/',
}],
}
});
}
// Append "index.html" to origin request
if (appendToDirs && uri.match(regexTrailingSlash)) {
request.uri = uri + appendToDirs;
callback(null, request);
return;
}
const pathsV1 = ['/influxdb', '/telegraf', '/chronograf', '/kapacitor', '/enterprise_influxdb', '/enterprise_kapacitor'];
const originV1 = process.env.ORIGIN_V1;
// Redirect (301) non-root requests ending in "/" to URI without trailing slash
if (removeTrailingSlash && uri.match(/.+\/$/)) {
const response = {
// body: '',
// bodyEncoding: 'text',
headers: {
'location': [{
key: 'Location',
value: uri.slice(0, -1)
}]
},
status: '301',
statusDescription: 'Moved Permanently'
};
callback(null, response);
return;
// Send to v1 origin if start of path matches
if (pathsV1.filter((path) => uri.startsWith(path)).length > 0) {
headers['host'] = [{key: 'host', value: originV1}];
origin.s3.domainName = originV1;
}
// If nothing matches, return request unchanged
callback(null, request);
};
Handler: index.handler
MemorySize: 128
Role: !Sub ${DocsOriginRequestRewriteLambdaRole.Arn}
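Outside the Lambda, the routing rules added above can be sketched as a plain function. The `validExtensions`, `indexPath`, and `defaultPath` values come from the template; the function itself is illustrative, not part of the deployed code, and it returns a plain object instead of calling the CloudFront callback:

```javascript
// Sketch of the origin-request rewrite rules:
// returns { uri } for a rewritten request, or { redirect } for a 302 Location.
function rewriteUri(uri) {
  const validExtensions = ['.html', '.css', '.js', '.xml', '.png', '.jpg', '.svg', '.otf', '.eot', '.ttf', '.woff'];
  const indexPath = 'index.html';
  const defaultPath = '/v2.0/';

  // Root or too-short paths redirect to the default version path.
  if (uri === '/' || uri.length < defaultPath.length) {
    return { redirect: defaultPath };
  }
  // Directory paths serve their index.html from the origin.
  if (uri.endsWith('/')) {
    return { uri: uri + indexPath };
  }
  // Explicit /index.html requests redirect to the clean directory path.
  if (uri.endsWith('/' + indexPath)) {
    return { redirect: uri.substr(0, uri.length - indexPath.length) };
  }
  // Paths without a known file extension redirect to the trailing-slash form.
  if (!validExtensions.some((ext) => uri.endsWith(ext))) {
    return { redirect: uri + '/' };
  }
  // Static assets pass through unchanged.
  return { uri };
}
```

For example, `/v2.0/get-started` redirects to `/v2.0/get-started/`, which is then rewritten to `/v2.0/get-started/index.html` on the follow-up request, so clean URLs resolve to the static files S3 actually stores.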
